1991
A parallel distributed computational model for reasoning and learning, based on a belief network paradigm, is presented. Reasoning and learning issues for the proposed model are discussed, and comparisons between our method and other methods are given.
International Journal of Approximate Reasoning, 2001
Previous algorithms for the construction of belief network structures from data are mainly based either on independence criteria or on scoring metrics. The aim of this paper is to present a hybrid methodology that combines these two approaches, benefiting from the characteristics of each one, and to develop two operative algorithms based on this methodology. Results of the evaluation of the algorithms on the well-known Alarm network are presented, as well as performance issues of the algorithms and some open problems.
Uncertainty in Artificial Intelligence, 1990
We describe a method for incrementally constructing belief networks. We have developed a networkconstruction language similar to a forward-chaining language using data dependencies, but with additional features for specifying distributions. Using this language, we can define parameterized classes of probabilistic models. These parameterized models make it possible to apply probabilistic reasoning to problems for which it is impractical to have a single large, static model.
Lecture Notes in Computer Science, 2000
Previous algorithms for the construction of belief network structures from data are mainly based either on independence criteria or on scoring metrics. The aim of this paper is to present a hybrid methodology that combines these two approaches, benefiting from the characteristics of each one, and to introduce an operative algorithm based on this methodology. We devote special attention to the problem of getting the 'right' size of the belief network induced from data, i.e. finding a trade-off between network complexity and accuracy, and we propose several approaches to tackle this matter. Results of the evaluation of the algorithm on the well-known Alarm network are also presented. This work has been supported by the Spanish Comisión Interministerial de Ciencia y Tecnología (CICYT) under Project n. TIC96-0781.
Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence, 1994
The relationship between belief networks and relational databases is examined. Based on this analysis, a method to construct belief networks automatically from statistical relational data is proposed. A comparison between our method and other methods shows that our method has several advantages when generalization or prediction is needed. A join dependency (JD) over R1, ..., Rn, written ⋈(R1, ..., Rn), is satisfied by a relation r over R1 ∪ ... ∪ Rn if and only if π_R1(r) ⋈ ... ⋈ π_Rn(r) = r.
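The join-dependency condition above can be made concrete with a small sketch (not the paper's code; the dict-based relation representation is an assumption for illustration): JD(R1, ..., Rn) holds when the natural join of the projections of r onto R1..Rn reconstructs r exactly.

```python
from itertools import product

def project(rows, attrs):
    """Project a relation (list of dict-rows) onto a subset of attributes."""
    seen = {frozenset((a, row[a]) for a in attrs) for row in rows}
    return [dict(s) for s in seen]

def natural_join(r, s):
    """Natural join of two relations on their shared attributes."""
    joined = []
    for a, b in product(r, s):
        shared = a.keys() & b.keys()
        if all(a[k] == b[k] for k in shared):
            joined.append({**a, **b})
    return [dict(t) for t in {frozenset(d.items()) for d in joined}]

def satisfies_jd(rows, schemas):
    """True iff joining the projections onto each schema reconstructs rows."""
    result = project(rows, schemas[0])
    for schema in schemas[1:]:
        result = natural_join(result, project(rows, schema))
    return {frozenset(d.items()) for d in result} == \
           {frozenset(d.items()) for d in rows}
```

For example, a relation over {A, B, C} in which B determines C satisfies the JD ⋈({A, B}, {B, C}), while one in which it does not generally produces spurious tuples under the join, so the check fails.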
Scientific Reports, 2016
Belief networks represent a powerful approach to problems involving probabilistic inference, but much of the work in this area is software-based, utilizing standard deterministic hardware built on the transistor, which provides the gain and directionality needed to interconnect billions of devices into useful networks. This paper proposes a transistor-like device that could provide an analogous building block for probabilistic networks. We present two proof-of-concept examples of belief networks, one reciprocal and one non-reciprocal, implemented using the proposed device, which is simulated using experimentally benchmarked models.
Belief networks (or probabilistic networks) and neural networks are two forms of network representations that have been used in the development of intelligent systems in the field of artificial intelligence. Belief networks provide a concise representation of general probability distributions over a set of random variables, and facilitate exact calculation of the impact of evidence on propositions of interest. Neural networks, which represent parameterized algebraic combinations of nonlinear activation functions, have found widespread use as models of real neural systems and as function approximators because of their amenability to simple training algorithms. Furthermore, the simple, local nature of most neural network training algorithms provides a certain biological plausibility and allows for a massively parallel implementation. In this paper, we show that similar local learning algorithms can be derived for belief networks, and that these learning algorithms can operate using only information that is directly available from the normal, inferential processes of the networks. This removes the main obstacle preventing belief networks from competing with neural networks on the above-mentioned tasks. The precise, local, probabilistic interpretation of belief networks also allows them to be partially or wholly constructed by humans; allows the results of learning to be easily understood; and allows them to contribute to rational decision-making in a well-defined way.
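The locality property discussed above can be illustrated with a minimal sketch for the fully observed case (the paper addresses the harder, partially observed setting; the data layout here is an assumption): each node's conditional probability table is estimated from counts involving only the node and its parents, with no global coordination.

```python
from collections import Counter

def learn_cpt(data, node, parents):
    """Maximum-likelihood CPT from fully observed data.

    data: list of dicts mapping variable name -> value.
    Returns {(parent_values, node_value): P(node = value | parents)}.
    """
    joint = Counter((tuple(row[p] for p in parents), row[node]) for row in data)
    marginal = Counter(tuple(row[p] for p in parents) for row in data)
    return {(pa, x): count / marginal[pa] for (pa, x), count in joint.items()}
```

Because each table depends only on local counts, all CPTs in a network can be learned independently and in parallel, which is the sense in which such updates are "local".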
Decision Support Systems, 1996
This paper addresses the problem of constructing belief network based expert systems. We discuss a design tool that assists in the development of such expert systems by comparing alternative representations. The design tool uses information theoretic measures to compare alternative structures. Three important capabilities of the design tool are discussed: (i) evaluating alternative structures based on sample data; (ii) finding optimal networks with specified connectivity conditions; and (iii) eliminating weak dependencies from derived network structures. We have examined the performance of the design tool on many sets of simulated data, and show that the design tool can accurately recover the important dependencies across variables in a problem domain. We illustrate how this program can be used to design a belief network for evaluating the financial distress situation for banks.
1996
In recent years belief networks have become a popular representation for reasoning under uncertainty and are used in a wide variety of applications. There are a number of exact and approximate inference algorithms available for performing belief updating; however, in general the task is NP-hard. To overcome the problems of computational complexity that occur when modelling larger, real-world problems, researchers have developed variants of stochastic simulation approximation algorithms, and a number of other approaches involve approximating the model or limiting belief updating to nodes of interest. Typically comparisons are made of only a few algorithms, and on a particular example network. We survey the belief network algorithms and propose a system for domain characterisation as a basis for algorithm comparison. We present performance results using this framework from three sets of experiments: (1) on the Likelihood Weighting (LW) and Logic Sampling (LS) stochastic simulation algorithms; (2) on the performance of LW and Jensen's algorithms on state-space abstracted networks; and (3) some comparisons of the time performance of LW, LS and the Jensen algorithm. Our results indicate that domain characterisation may be useful for predicting inference algorithm performance on a belief network for a new application domain.
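The Likelihood Weighting algorithm surveyed above can be sketched on an invented two-node network A → B (all probabilities and names here are illustrative, not from the paper): evidence nodes are not sampled; instead, each sample is weighted by the likelihood of the evidence given its sampled parents.

```python
import random

P_A = 0.3                        # P(A = 1)
P_B_GIVEN_A = {1: 0.9, 0: 0.2}   # P(B = 1 | A)

def likelihood_weighting(evidence_b, n=200_000, seed=0):
    """Estimate P(A = 1 | B = evidence_b) by likelihood weighting."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        a = 1 if rng.random() < P_A else 0          # sample non-evidence node
        # evidence node B is clamped; weight by its likelihood instead
        w = P_B_GIVEN_A[a] if evidence_b == 1 else 1.0 - P_B_GIVEN_A[a]
        den += w
        num += w * a
    return num / den
```

For this toy network the exact posterior is P(A=1 | B=1) = 0.27/0.41 ≈ 0.659, so the estimate can be checked directly; Logic Sampling differs in that it samples every node and discards samples inconsistent with the evidence.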
… of Fourth Workshop on Uncertainty in …, 1988
This paper focuses on probability updates in multiply-connected belief networks. Pearl has designed the method of conditioning, which enables us to apply his algorithm for belief updates in singly-connected networks to multiply-connected belief networks by selecting a loop-cutset for the network and instantiating these loop-cutset nodes. We discuss conditions that need to be satisfied by the selected nodes. We present a heuristic algorithm for finding a loop-cutset that satisfies these conditions.
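A much simpler greedy heuristic than the paper's (shown purely to make the loop-cutset idea concrete; the paper's conditions on cutset nodes are stricter) is to repeatedly delete the highest-degree node from the network's undirected skeleton until no undirected cycle remains:

```python
def has_cycle(adj):
    """Cycle test for an undirected graph given as {node: set(neighbours)}."""
    seen = set()
    for start in adj:
        if start in seen:
            continue
        stack = [(start, None)]
        while stack:
            node, parent = stack.pop()
            if node in seen:
                return True        # reached via two distinct paths: a cycle
            seen.add(node)
            stack.extend((nb, node) for nb in adj[node] if nb != parent)
    return False

def greedy_loop_cutset(adj):
    """Greedily remove highest-degree nodes until the graph is acyclic."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    cutset = []
    while has_cycle(adj):
        v = max(adj, key=lambda n: len(adj[n]))
        cutset.append(v)
        for nb in adj.pop(v):
            adj[nb].discard(v)
    return cutset
```

On the diamond network A-B, A-C, B-D, C-D, removing any single node breaks the loop, so the heuristic returns a cutset of size one; on a tree it returns an empty cutset.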
International Journal of Approximate Reasoning, 1992
Belief networks are important objects for research study and for actual use, as the experience of the MUNIN project demonstrates. There is evidence that humans are quite good at guessing network structure but poor at settling values for the numerical parameters. Determining these parameters by standard statistical techniques often requires too many sample points ('test cases') for larger systems, so knowledge engineers have sought direct algorithms to define or adjust the parameters by appeal to selected test cases. It is shown for both Dempster-Shafer networks and Bayesian networks that for very simple networks (trees), defining parameter values (synthesis) or refining expert-estimated values (refinement) can be computationally intractable. These unpleasant results hold even when we settle for approximate values or demand agreement on only a certain percentage of cases.
1996
We develop a system that, given a database containing instances of the variables in a domain of knowledge, captures many of the dependence relationships constrained by those data, and represents them as a belief network. To obtain the network structure, we have designed a new learning algorithm, called BENEDICT, which has been implemented and incorporated as a module within the system. The numerical component, i.e., the conditional probability tables, are estimated directly from the database. We have tested the system on databases generated from simulated networks by using probabilistic sampling, including an extensive database, corresponding to the well-known Alarm Monitoring System. These databases were used as inputs for the learning module, and the networks obtained, compared with the originals, were consistently similar.
This paper describes a process for constructing situation-specific belief networks from a knowledge base of network fragments. A situation-specific network is a minimal querycomplete network constructed from a knowledge base in response to a query for the probability distribution on a set of target variables given evidence and context variables. We present definitions of query completeness and situation-specific networks. We describe conditions on the knowledge base that guarantee query completeness. The relationship of our work to earlier work on KBMC is also discussed.
1991
Belief networks have become an increasingly popular mechanism for dealing with uncertainty in systems. Unfortunately, it is known that finding the probability values of belief network nodes given a set of evidence is not tractable in general. Many different simulation algorithms for approximating solutions to this problem have been proposed and implemented. In this report, we describe the implementation of a collection of such algorithms.
Uncertainty in Artificial Intelligence, 1993
One topic that is likely to attract an increasing amount of attention within the knowledge-based systems research community is the coordination of information provided by multiple experts. We envision a situation in which several experts independently encode information as belief networks. A potential user must then coordinate the conclusions and recommendations of these networks to derive some sort of consensus. One approach to such a consensus is the fusion of the contributed networks into a single, consensus model prior to the consideration of any case-specific data (specific observations, test results). This approach requires two types of combination procedures, one for probabilities and one for graphs. Since the combination of probabilities is relatively well understood, the key barriers to this approach lie in the realm of graph theory. This paper provides formal definitions of some of the operations necessary to effect the necessary graphical combinations, and provides complexity analyses of these procedures. The paper's key result is that most of these operations are NP-hard, and its primary message is that the derivation of "good" consensus networks must be done heuristically.
International Journal of Approximate Reasoning, 2000
In this paper we describe a new independence-based approach for learning belief networks. The proposed algorithm avoids some of the drawbacks of this approach by making intensive use of low-order conditional independence tests. In particular, the set of zero- and first-order independence statements is used to obtain a prior skeleton of the network, and also to fix and remove arrows from this skeleton. Then a refinement procedure, based on minimum-cardinality d-separating sets, which uses a small number of higher-order conditional independence tests, is carried out to produce the final graph. Our algorithm requires an ordering of the variables in the model as input; an algorithm that partially overcomes this limitation is also presented.
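A zero-order (unconditional) independence test of the kind used to build the initial skeleton can be sketched via empirical mutual information with a fixed threshold (the paper's actual test statistics, orders and thresholds differ; this is only an illustration):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two paired samples."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def looks_independent(xs, ys, eps=0.01):
    """Declare independence when the empirical MI falls below a threshold."""
    return mutual_information(xs, ys) < eps
```

Variable pairs that pass the test have no edge in the prior skeleton; higher-order tests then condition on candidate d-separating sets in the same spirit.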
Uncertainty in Artificial Intelligence, 1993
Given a belief network with evidence, the task of finding the l most probable explanations (MPE) in the belief network is that of identifying and ordering the l most probable instantiations of the non-evidence nodes of the belief network. Although many approaches have been proposed for solving this problem, most work only for restricted topologies (i.e., singly connected belief networks). In this paper, we present a new approach for finding l MPEs in an arbitrary belief network. First, we present an algorithm for finding the MPE in a belief network. Then, we present a linear-time algorithm for finding the next MPE after finding the first MPE. Finally, we discuss the problem of finding the MPE for a subset of variables of a belief network, and show that this problem can be efficiently solved by our approach.
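The l-MPE problem defined above can be made concrete by brute force on an invented chain A → B with evidence B = 1 (the numbers and names are illustrative; the paper's algorithms avoid this exponential enumeration):

```python
P_A = {0: 0.7, 1: 0.3}
P_B_GIVEN_A = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.1, 1: 0.9}}  # P_B_GIVEN_A[a][b]

def top_l_explanations(l, evidence_b=1):
    """Rank all instantiations of the non-evidence node A by joint probability."""
    scored = [(P_A[a] * P_B_GIVEN_A[a][evidence_b], {"A": a}) for a in (0, 1)]
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:l]
```

Here the best explanation is A = 1 with joint probability 0.3 × 0.9 = 0.27, and the second is A = 0 with 0.7 × 0.2 = 0.14; with many non-evidence nodes the instantiation space grows exponentially, which is what makes efficient next-MPE algorithms valuable.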
1998
This paper describes an example problem of knowledge-based belief network construction (Goldman, Breese and Wellman, 1992; Goldman and Charniak, 1993; Laskey, 1990). Knowledge-based model construction requires declarative representations for encoding modular, abstract, repeatable domain relationships, and procedures for instantiating and combining these knowledge elements to form models for particular problem instances (Egar and Musen, 1993; Regan and Holzman, 1992). Recent work in knowledge representation (Laskey and Mahoney, 1997; Mahoney and Laskey, 1996; Koller and Pfeffer, 1997) represents domain relationships as fragments of belief networks. The object-oriented framework is natural for this purpose, with its ability to represent abstract types with associated structure and methods, inheritance, and encapsulation. The example problem is drawn from the domain of military situation assessment. Although highly simplified, the example problem illustrates many of the issues that must...
In this abstract we give an overview of the work described in [15]. Belief networks provide a graphical representation of causal relationships together with a mechanism for probabilistic inference, allowing belief updating based on incomplete and dynamic information. We present a survey of belief network belief-updating algorithms and propose a domain characterisation system which is used as a basis for algorithm comparison. We give experimental comparative results of algorithm performance using the proposed framework. We show how domain characterisation may be used to predict algorithm performance.
Uncertainty Proceedings 1994, 1994
In this paper we propose a new approach to probabilistic inference on belief networks, global conditioning, which is a simple generalization of Pearl's (1986b) method of loop-cutset conditioning. We show that global conditioning, as well as loop-cutset conditioning, can be thought of as a special case of the method of Lauritzen and Spiegelhalter (1988) as refined by Jensen et al. (1990a; 1990b). Nonetheless, this approach provides new opportunities for parallel processing and, in the case of sequential processing, a tradeoff of time for memory. We also show how a hybrid method (Suermondt and others 1990) combining loop-cutset conditioning with Jensen's method can be viewed within our framework. By exploring the relationships between these methods, we develop a unifying framework in which the advantages of each approach can be combined successfully.