1990, National Conference on Artificial Intelligence
The Symbolic Probabilistic Inference (SPI) Algorithm (D'Ambrosio, 1989) provides an efficient framework for resolving general queries on a belief network. It applies the concept of dependency-directed backward search to probabilistic inference, and is incremental with respect to both queries and observations. Unlike most belief network algorithms, SPI is goal directed, performing only those calculations that are required to respond to
Uncertainty Proceedings 1994, 1994
In this paper we propose a new approach to probabilistic inference on belief networks, global conditioning, which is a simple generalization of Pearl's (1986b) method of loop cutset conditioning. We show that global conditioning, as well as loop-cutset conditioning, can be thought of as a special case of the method of Lauritzen and Spiegelhalter (1988) as refined by Jensen et al (1990a; 1990b). Nonetheless, this approach provides new opportunities for parallel processing and, in the case of sequential processing, a tradeoff of time for memory. We also show how a hybrid method (Suermondt and others 1990) combining loop-cutset conditioning with Jensen's method can be viewed within our framework. By exploring the relationships between these methods, we develop a unifying framework in which the advantages of each approach can be combined successfully.
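The core identity behind cutset conditioning can be sketched on a toy network. The following is a minimal illustration (the network and all probabilities are invented for this example, not taken from the paper): in a loopy network A→B, A→C, B→D, C→D, fixing the cutset {A} renders the remainder singly connected, so the marginal of D is recovered by summing the polytree results over the cutset instantiations.

```python
from itertools import product

# Toy loopy network A -> B, A -> C, B -> D, C -> D; all variables binary.
pA = [0.7, 0.3]                      # P(A=a)
pB = [[0.9, 0.1], [0.4, 0.6]]        # pB[a][b] = P(B=b | A=a)
pC = [[0.8, 0.2], [0.3, 0.7]]        # pC[a][c] = P(C=c | A=a)
pD1 = {(0, 0): 0.05, (0, 1): 0.5, (1, 0): 0.6, (1, 1): 0.95}  # P(D=1 | b, c)

def p_d1_given_a(a):
    """P(D=1 | A=a): with the cutset {A} fixed, B and C are independent."""
    return sum(pB[a][b] * pC[a][c] * pD1[(b, c)]
               for b, c in product((0, 1), repeat=2))

# Conditioning: marginalize the cutset instantiations.
p_d1 = sum(pA[a] * p_d1_given_a(a) for a in (0, 1))

# Sanity check against brute-force enumeration of the full joint.
brute = sum(pA[a] * pB[a][b] * pC[a][c] * pD1[(b, c)]
            for a, b, c in product((0, 1), repeat=3))
assert abs(p_d1 - brute) < 1e-12
print(round(p_d1, 6))
```

The time/memory tradeoff the abstract mentions shows up here as the outer sum over cutset values: each term is a cheap polytree computation, but the number of terms grows exponentially in the cutset size.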
1991
Belief networks have become an increasingly popular mechanism for dealing with uncertainty in systems. Unfortunately, it is known that finding the probability values of belief network nodes given a set of evidence is not tractable in general. Many different simulation algorithms for approximating solutions to this problem have been proposed and implemented. In this report, we describe the implementation of a collection of such
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993
A belief network comprises a graphical representation of dependencies between variables of a domain and a set of conditional probabilities associated with each dependency. Unless P=NP, an efficient, exact algorithm does not exist to compute probabilistic inference in belief networks. Stochastic simulation methods, which often improve run times, provide an alternative to exact inference algorithms. We present such a stochastic simulation algorithm, BN-RAS, that is a randomized approximation scheme. To analyze the run time, we parameterize belief networks by the dependence value f_E, which is a measure of the cumulative strengths of the belief network dependencies given background evidence E. This parameterization defines the class of f-dependence networks. The run time of BN-RAS is polynomial when f is a polynomial function. Thus, the results of this paper prove the existence of a class of belief networks for which inference approximation is polynomial and, hence, provably faster than any exact algorithm.
Machine Intelligence and Pattern Recognition, 1990
Although a number of algorithms have been developed to solve probabilistic inference problems on belief networks, they can be divided into two main groups: exact techniques which exploit the conditional independence revealed when the graph structure is relatively sparse, and probabilistic sampling techniques which exploit the "conductance" of an embedded Markov chain when the conditional probabilities have non-extreme values. In this paper, we investigate a family of Monte Carlo sampling techniques similar to Logic Sampling [Henrion, 1988] which appear to perform well even in some multiply-connected networks with extreme conditional probabilities, and thus would be generally applicable. We consider several enhancements which reduce the posterior variance using this approach and propose a framework and criteria for choosing when to use those enhancements.
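A representative member of this sampling family is likelihood weighting, sketched below on the standard toy sprinkler network (the specific probabilities are invented for illustration and are not from the paper): non-evidence nodes are sampled forward in topological order, and each sample is weighted by the likelihood of the clamped evidence given its sampled parents.

```python
import random

# Toy sprinkler network: Cloudy -> {Sprinkler, Rain} -> WetGrass (all binary).
pC = 0.5
pS = {0: 0.5, 1: 0.1}                       # P(S=1 | C=c)
pR = {0: 0.2, 1: 0.8}                       # P(R=1 | C=c)
pW = {(0, 0): 0.0, (0, 1): 0.9, (1, 0): 0.9, (1, 1): 0.99}  # P(W=1 | s, r)

def likelihood_weighting(n, seed=0):
    """Estimate P(Rain=1 | WetGrass=1): sample C, S, R forward and weight
    each sample by P(W=1 | s, r), since the evidence W=1 is clamped."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        c = rng.random() < pC
        s = rng.random() < pS[c]
        r = rng.random() < pR[c]
        w = pW[(s, r)]          # evidence likelihood, the sample's weight
        num += w * r
        den += w
    return num / den

est = likelihood_weighting(100_000)
print(round(est, 3))   # exact posterior for these numbers is about 0.708
```

The abstract's caveat about extreme conditional probabilities is visible in the weights: when evidence likelihoods are near zero for most sampled configurations, almost all weight concentrates on rare samples and the estimator's variance blows up.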
1996
In recent years belief networks have become a popular representation for reasoning under uncertainty and are used in a wide variety of applications. There are a number of exact and approximate inference algorithms available for performing belief updating; however, in general the task is NP-hard. To overcome the problems of computational complexity that occur when modelling larger, real-world problems, researchers have developed variants of stochastic simulation approximation algorithms, and a number of other approaches involve approximating the model or limiting belief updating to nodes of interest. Typically comparisons are made of only a few algorithms, and on a particular example network. We survey the belief network algorithms and propose a system for domain characterisation as a basis for algorithm comparison. We present performance results using this framework from three sets of experiments: (1) on the Likelihood Weighting (LW) and Logic Sampling (LS) stochastic simulation algorithms; (2) on the performance of LW and Jensen's algorithms on state-space abstracted networks; and (3) some comparisons of the time performance of LW, LS and the Jensen algorithm. Our results indicate that domain characterisation may be useful for predicting inference algorithm performance on a belief network for a new application domain.
1993
Dynamic Belief Networks (DBNs) have been of interest recently as modelling tools for environments that change over time. In such networks, sets of nodes may be added automatically over time to represent current and future states. DBNs are typically multiply connected, and inference complexity is the biggest barrier to successful use. In this paper, we present both exact and heuristic techniques for state space reduction, thresholding and backtracking that can dramatically reduce inference costs without significantly reducing accuracy. We illustrate this in the domain of continuous monitoring of mobile robots. In order to control the costs and benefits of our heuristic methods, the system requires a good model of inference cost for any given network structure. We therefore also present results showing that Kanazawa's "join-tree cost" model gives a very good fit to actual inference costs in large networks.
Networks, 1996
The paper presents a new efficient method for uncertainty propagation in discrete Bayesian networks in symbolic, as opposed to numeric, form, when considering some of the probabilities of the Bayesian network as parameters. The algebraic structure of the conditional probabilities of any set of nodes, given some evidence, is characterized as ratios of linear polynomials in the parameters. We use this result to compute these symbolic expressions efficiently by calculating the coefficients of the polynomials involved, using standard numerical algorithms. The numeric canonical components method is proposed as an alternative to symbolic computations, gaining in speed and simplicity. It is also shown how to avoid redundancy when calculating the numeric canonical components probabilities using standard message-passing methods. The canonical components can also be used to obtain lower and upper bounds for the symbolic expression associated with the probabilities. Finally, we analyze the problem of symbolic evidence, which allows answering multiple queries regarding a given set of evidential nodes. In this case, the algebraic structure of the symbolic expressions obtained for the probabilities is shown to be ratios of non-linear polynomial expressions. Then we can perform symbolic inference with only a small set of symbolic evidential nodes. The methodology is illustrated by examples.
International Journal of Approximate Reasoning, 2009
Qualitative probabilistic networks (QPNs) are basically qualitative derivations of Bayesian belief networks. Originally, QPNs were designed to improve the speed of the construction and calculation of these networks, at the cost of specificity of the result. The formalism can also be used to facilitate cognitive mapping by means of inference in sign-based causal diagrams. Whatever the type of application, any computer-based use of QPNs requires an algorithm capable of propagating information throughout the networks. Such an algorithm was developed in the 1990s. This polynomial time sign-propagation algorithm is explicitly or implicitly used in most existing QPN studies.
Computing Research Repository, 2010
A Bayesian network is a complete model of the variables and their relationships; it can be used to answer probabilistic queries about them. A Bayesian network can thus be considered a mechanism for automatically applying Bayes' theorem to complex problems. In the application of Bayesian networks, most of the work is related to probabilistic inference. Any variable updating in any node of a Bayesian network might result in evidence propagation across the network. This paper sums up various inference techniques in Bayesian networks and provides guidance for the algorithm calculation in probabilistic inference in Bayesian networks.
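The workhorse among the exact techniques such surveys cover is variable elimination. The sketch below is a deliberately minimal version on a three-node chain with invented probabilities (factors as plain dicts, binary variables only); a real implementation would handle arbitrary arities and elimination orderings.

```python
from itertools import product

# Minimal variable elimination on a chain A -> B -> C (binary variables).
# A factor is a dict mapping assignment tuples to values.
def multiply(f1, v1, f2, v2):
    """Pointwise product of two factors; v1/v2 list each factor's variables."""
    vs = v1 + [v for v in v2 if v not in v1]
    out = {}
    for asg in product((0, 1), repeat=len(vs)):
        a = dict(zip(vs, asg))
        out[asg] = f1[tuple(a[v] for v in v1)] * f2[tuple(a[v] for v in v2)]
    return out, vs

def sum_out(f, vs, var):
    """Marginalize var out of factor f over variables vs."""
    i = vs.index(var)
    out = {}
    for asg, val in f.items():
        key = asg[:i] + asg[i+1:]
        out[key] = out.get(key, 0.0) + val
    return out, vs[:i] + vs[i+1:]

fA = {(0,): 0.6, (1,): 0.4}                                # P(A)
fB = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.8}  # P(B | A), key (a, b)
fC = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.5, (1, 1): 0.5}  # P(C | B), key (b, c)

f, vs = multiply(fA, ['A'], fB, ['A', 'B'])
f, vs = sum_out(f, vs, 'A')              # message over B
f, vs = multiply(f, vs, fC, ['B', 'C'])
f, vs = sum_out(f, vs, 'B')              # marginal over C
print({c[0]: round(p, 4) for c, p in f.items()})
```

Evidence propagation fits the same loop: clamping an observed node just means restricting its factor to the observed value before eliminating, then renormalizing the final factor.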
… of the Sixth Annual Conference on …, 1990
We present a new algorithm for finding maximum a posteriori (MAP) assignments of values to belief networks. The belief network is compiled into a network consisting only of nodes with Boolean (i.e., only 0 or 1) conditional probabilities. The MAP assignment is then found using a best-first search on the resulting network. We argue that, as one would anticipate, the algorithm is exponential for the general case, but only linear in the size of the network for polytrees.
Uncertainty in Artificial Intelligence, 1993
Given a belief network with evidence, the task of finding the l most probable explanations (MPE) in the belief network is that of identifying and ordering the l most probable instantiations of the non-evidence nodes of the belief network. Although many approaches have been proposed for solving this problem, most work only for restricted topologies (i.e., singly connected belief networks). In this paper, we will present a new approach for finding l MPEs in an arbitrary belief network. First, we will present an algorithm for finding the MPE in a belief network. Then, we will present a linear time algorithm for finding the next MPE after finding the first MPE. And finally, we will discuss the problem of finding the MPE for a subset of variables of a belief network, and show that the problem can be efficiently solved by this approach.
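To make the l-MPE task concrete, here is a brute-force reference on a tiny invented chain network: score every instantiation of the non-evidence nodes by its joint probability with the evidence, then keep the top l. (This enumeration is exactly what the paper's algorithm avoids; it is shown only to pin down what is being computed.)

```python
from itertools import product

# Toy chain A -> B -> C with invented CPTs; evidence on C.
pA = [0.6, 0.4]
pB = [[0.7, 0.3], [0.2, 0.8]]   # pB[a][b] = P(B=b | A=a)
pC = [[0.9, 0.1], [0.5, 0.5]]   # pC[b][c] = P(C=c | B=b)

def joint(a, b, c):
    """Joint probability of a full instantiation under the chain factorization."""
    return pA[a] * pB[a][b] * pC[b][c]

def top_l_mpe(l, evidence_c=1):
    """Rank instantiations of the non-evidence nodes A, B given C=evidence_c."""
    scored = [((a, b), joint(a, b, evidence_c))
              for a, b in product((0, 1), repeat=2)]
    return sorted(scored, key=lambda t: -t[1])[:l]

print(top_l_mpe(2))
```

With these numbers the best explanation of C=1 is (A=1, B=1), followed by (A=0, B=1); the paper's contribution is obtaining this ranking, and each subsequent MPE in linear time, without enumerating the exponentially many instantiations.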
1994
Most algorithms for propagating evidence through belief networks have been exact and exhaustive: they produce an exact (point-valued) marginal probability for every node in the network. Often, however, an application will not need information about every node in the network nor will it need exact probabilities. We present the localized partial evaluation (LPE) propagation algorithm, which computes interval bounds on the marginal probability of a specified query node by examining a subset of the nodes in the entire network. Conceptually, LPE ignores parts of the network that are "too far away" from the queried node to have much impact on its value. LPE has the "anytime" property of being able to produce better solutions (tighter intervals) given more time to consider more of the network.
1990
We describe how to combine probabilistic logic and Bayesian networks to obtain a new framework ("Bayesian logic") for dealing with uncertainty and causal relationships in an expert system. Probabilistic logic, invented by Boole, is a technique for drawing inferences from uncertain propositions for which there are no independence assumptions. A Bayesian network is a "belief net" that can represent complex conditional independence assumptions. We show how to solve inference problems in Bayesian logic by applying Benders decomposition to a nonlinear programming formulation. We also show that the number of constraints grows only linearly with the problem size for a large class of networks.
1993
In recent years the belief network has been used increasingly to model systems in AI that must perform uncertain inference. The development of efficient algorithms for probabilistic inference in belief networks has been a focus of much research in AI. Efficient algorithms for certain classes of belief networks have been developed, but the problem of reporting the uncertainty in inferred probabilities has received little attention. A system should not only be capable of reporting the values of inferred probabilities and/or the favorable choices of a decision; it should report the range of possible error in the inferred probabilities and/or choices. Two methods have been developed and implemented for determining the variance in inferred probabilities in belief networks. These methods, the Approximate Propagation Method and the Monte Carlo Integration Method, are discussed and compared in this paper.
International Journal of Approximate Reasoning, 1994
A number of exact algorithms have been developed to perform probabilistic inference in Bayesian belief networks in recent years. The techniques used in these algorithms are closely related to network structures and some of them are not easy to understand and implement. In this paper, we consider the problem from the combinatorial optimization point of view and state that efficient probabilistic inference in a belief network is a problem of finding an optimal factoring given a set of probability distributions. From this viewpoint, previously developed algorithms can be seen as alternate factoring strategies. In this paper, we define a combinatorial optimization problem, the optimal factoring problem, and discuss application of this problem in belief networks. We show that optimal factoring provides insight into the key elements of efficient probabilistic inference, and demonstrate simple, easily implemented algorithms with excellent performance.
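Why factoring order matters can be seen from a back-of-the-envelope operation count (the cost formulas below are simplified estimates for a binary chain, not the paper's cost model): multiplying all CPTs into the full joint before summing costs exponentially many scalar multiplications, while pushing each sum inward keeps every intermediate factor tiny.

```python
# Cost of computing the marginal of the last node in a binary chain
# X1 -> X2 -> ... -> Xn, under two factoring orders.

def naive_mults(n):
    """Build the full joint first: n-1 pairwise products, each touching
    all 2**n joint entries (a rough upper-bound style estimate)."""
    return (n - 1) * 2 ** n

def factored_mults(n):
    """Sum inward: eliminating X1, X2, ... in turn multiplies a 2-entry
    message into a 2x2 CPT at each of the n-1 steps, i.e. 4 mults each."""
    return 4 * (n - 1)

for n in (5, 10, 20):
    print(n, naive_mults(n), factored_mults(n))
```

Even at n=20 the factored order needs 76 multiplications against roughly twenty million for the naive order, which is the gap the optimal factoring problem formalizes and that exact algorithms exploit through their elimination orderings.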
Digital Signal Processing, 1998
This paper reviews and formalizes algorithms for probabilistic inference upon causal probabilistic networks (CPN), also known as Bayesian networks, and introduces Probanet, a development environment for CPNs. Information fusion in CPNs is realized through updating joint probabilities of the variables upon the arrival of new evidence or new hypotheses. Kernel algorithms for some dominant methods of inference are formalized from discontiguous, mathematics-oriented literature, with gaps filled in with regard to computability and completeness. Probanet has been designed and developed as a generic shell, a development environment for CPN construction and application. The design aspects and current status of Probanet are described.
International Journal of Approximate Reasoning, 1996
This paper presents a new inference algorithm for belief networks that combines a search-based algorithm with a simulation-based algorithm. The former is an extension of the recursive decomposition (RD) algorithm proposed by Cooper, which is here modified to compute interval bounds on marginal probabilities. We call the algorithm bounded-RD. The latter is a stochastic simulation method known as Pearl's Markov blanket algorithm. Markov simulation is used to generate highly probable instantiations of the network nodes to be used by bounded-RD in the computation of probability bounds. Bounded-RD has the anytime property, and produces successively narrower interval bounds, which converge in the limit to the exact value.
1996
We develop a system that, given a database containing instances of the variables in a domain of knowledge, captures many of the dependence relationships constrained by those data, and represents them as a belief network. To obtain the network structure, we have designed a new learning algorithm, called BENEDICT, which has been implemented and incorporated as a module within the system. The numerical component, i.e., the conditional probability tables, are estimated directly from the database. We have tested the system on databases generated from simulated networks by using probabilistic sampling, including an extensive database, corresponding to the well-known Alarm Monitoring System. These databases were used as inputs for the learning module, and the networks obtained, compared with the originals, were consistently similar.
Uncertainty in Artificial Intelligence, 1990
We describe a method for incrementally constructing belief networks. We have developed a network-construction language similar to a forward-chaining language using data dependencies, but with additional features for specifying distributions. Using this language, we can define parameterized classes of probabilistic models. These parameterized models make it possible to apply probabilistic reasoning to problems for which it is impractical to have a single large, static model.
Proceedings of the Sixth UAI Bayesian Modelling Applications Workshop, Helsinki, Finland, July 9, 2008
In this paper we present a new method (EBBN) that aims at reducing the need to elicit formidable amounts of probabilities for Bayesian belief networks, by reducing the number of probabilities that need to be specified in the quantification phase. This method enables the derivation of a variable's conditional probability table (CPT) in the general case that the states of the variable are ordered and the states of each of its parent nodes can be ordered with respect to the influence they exercise. EBBN requires only a limited amount of probability assessments from experts to determine a variable's full CPT and uses piecewise linear interpolation. The number of probabilities to be assessed in this method is linear in the number of conditioning variables. EBBN's performance was compared with the results achieved by applying both the normal copula vine approach from Hanea & Kurowicka (2007) and a simple uniform distribution.
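The interpolation idea can be illustrated with a deliberately simplified sketch. This is not the EBBN method itself: the weighting scheme below (equal parent weights, interpolating only between the two extreme assessments) is an invented stand-in that just shows how ordered parent states let a handful of elicited numbers generate a full CPT row by row.

```python
from itertools import product

def interpolate_cpt(p_low, p_high, n_parents, n_states=3):
    """Build P(child = high | parent configuration) for every configuration.

    p_low / p_high are the expert's assessments at the all-lowest and
    all-highest parent configurations; intermediate rows are linear in the
    mean normalized parent state level (a simplifying assumption)."""
    cpt = {}
    for cfg in product(range(n_states), repeat=n_parents):
        t = sum(cfg) / ((n_states - 1) * n_parents)   # 0.0 .. 1.0
        cpt[cfg] = p_low + t * (p_high - p_low)
    return cpt

# Two ternary-state parents, two elicited probabilities -> all 9 CPT rows.
cpt = interpolate_cpt(p_low=0.1, p_high=0.9, n_parents=2)
print(round(cpt[(0, 0)], 3), round(cpt[(1, 2)], 3), round(cpt[(2, 2)], 3))
```

The payoff mirrors the abstract's claim: the number of elicited probabilities grows linearly with the number of conditioning variables, while the CPT itself grows exponentially.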