1992, Uncertainty in Artificial Intelligence
We report on an experimental investigation into opportunities for parallelism in belief net inference. Specifically, we report on a study of the available parallelism, on hypercube-style machines, of a set of randomly generated belief nets, using factoring (SPI) style inference algorithms. Our results indicate that substantial speedup is available, but that it is available only through parallelization of individual conformal product operations, and depends critically on finding an appropriate factoring. We find negligible opportunity for parallelism at the topological, or clustering tree, level.
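To make the parallelism concrete, below is a minimal sketch (not the paper's SPI implementation) of a conformal product: multiplying two factors defined over overlapping variable sets. Every cell of the result depends only on one cell of each input, so the output table can be partitioned across processors, which is the per-product parallelism the abstract refers to. The variable names and factor shapes are illustrative.

```python
import numpy as np

def conformal_product(f, f_vars, g, g_vars):
    """Pointwise product of two factors over the union of their variables."""
    all_vars = list(dict.fromkeys(list(f_vars) + list(g_vars)))
    letter = {v: chr(ord('a') + i) for i, v in enumerate(all_vars)}
    spec = '{},{}->{}'.format(''.join(letter[v] for v in f_vars),
                              ''.join(letter[v] for v in g_vars),
                              ''.join(letter[v] for v in all_vars))
    return np.einsum(spec, f, g), all_vars

# P(A,B) * P(B,C): the (a,b,c) cell is f[a,b] * g[b,c], independent of
# every other cell, so the table can be split across processors.
f = np.random.rand(2, 3)          # factor over (A, B)
g = np.random.rand(3, 4)          # factor over (B, C)
h, h_vars = conformal_product(f, ['A', 'B'], g, ['B', 'C'])
print(h.shape, h_vars)            # (2, 3, 4) ['A', 'B', 'C']
```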
National Conference on Artificial Intelligence, 1990
The Symbolic Probabilistic Inference (SPI) Algorithm (D'Ambrosio, 1989) provides an efficient framework for resolving general queries on a belief network. It applies the concept of dependency-directed backward search to probabilistic inference, and is incremental with respect to both queries and observations. Unlike most belief network algorithms, SPI is goal directed, performing only those calculations that are required to respond to queries.
1991
We discuss a parallel distributed computational model for reasoning and learning based on the belief network paradigm. Issues of reasoning and learning for the proposed model are addressed, and comparisons between our method and other methods are given.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993
A belief network comprises a graphical representation of dependencies between variables of a domain and a set of conditional probabilities associated with each dependency. Unless P=NP, an efficient, exact algorithm does not exist to compute probabilistic inference in belief networks. Stochastic simulation methods, which often improve run times, provide an alternative to exact inference algorithms. We present such a stochastic simulation algorithm, D-BNRAS, which is a randomized approximation scheme. To analyze the run time, we parameterize belief networks by the dependence value D_E, which is a measure of the cumulative strengths of the belief network dependencies given background evidence E. This parameterization defines the class of f-dependence networks. The run time of D-BNRAS is polynomial when f is a polynomial function. Thus, the results of this paper prove the existence of a class of belief networks for which inference approximation is polynomial and, hence, provably faster than any exact algorithm.
We present an efficient procedure for factorising probabilistic potentials represented as probability trees. This new procedure is able to detect some regularities that cannot be captured by existing methods. In cases where an exact decomposition is not achievable, we propose a heuristic way to carry out approximate factorisations guided by a parameter called factorisation degree, which is fast to compute. We show how this parameter can be used to control the tradeoff between complexity and accuracy in approximate inference algorithms for Bayesian networks.
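A matrix analogue (not the probability-tree procedure itself) of the exact-versus-approximate distinction: a two-variable potential phi(x, y) factorises exactly as f(x) * g(y) iff its table has rank 1, and a truncated SVD gives an approximate factorisation whose rank plays a role loosely analogous to the factorisation degree above, trading complexity against accuracy. All values here are made up for illustration.

```python
import numpy as np

phi = np.outer([0.2, 0.5, 0.3], [0.6, 0.4])      # exactly decomposable potential
u, s, vt = np.linalg.svd(phi)
print(np.sum(s > 1e-12))                          # rank 1 -> exact split exists
f, g = u[:, 0] * s[0], vt[0]                      # phi == outer(f, g)
assert np.allclose(phi, np.outer(f, g))

noisy = phi + 0.01 * np.random.rand(3, 2)         # no exact decomposition
u, s, vt = np.linalg.svd(noisy)
approx = s[0] * np.outer(u[:, 0], vt[0])          # best rank-1 approximation
print(np.abs(noisy - approx).max())               # bounded approximation error
```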
Uncertainty Proceedings 1994, 1994
In this paper we propose a new approach to probabilistic inference on belief networks, global conditioning, which is a simple generalization of Pearl's (1986b) method of loop-cutset conditioning. We show that global conditioning, as well as loop-cutset conditioning, can be thought of as a special case of the method of Lauritzen and Spiegelhalter (1988) as refined by Jensen et al. (1990a; 1990b). Nonetheless, this approach provides new opportunities for parallel processing and, in the case of sequential processing, a tradeoff of time for memory. We also show how a hybrid method (Suermondt and others 1990) combining loop-cutset conditioning with Jensen's method can be viewed within our framework. By exploring the relationships between these methods, we develop a unifying framework in which the advantages of each approach can be combined successfully.
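A minimal sketch of the conditioning pattern described above, assuming a hypothetical helper `polytree_posterior` that runs exact inference on the singly connected network obtained after instantiating the cutset. Each cutset instantiation is handled independently, which is the parallel-processing opportunity; running the loop sequentially is the time-for-memory tradeoff.

```python
from itertools import product

def global_conditioning(cutset_domains, evidence, query, polytree_posterior):
    """Combine singly connected solutions over all cutset instantiations.

    cutset_domains: {var: [values]} for the loop cutset
    polytree_posterior(query, evidence) -> (posterior dict, P(evidence))
    """
    names = list(cutset_domains)
    combined, total = {}, 0.0
    # Each iteration is independent and could run on its own processor.
    for values in product(*(cutset_domains[v] for v in names)):
        ev = dict(evidence, **dict(zip(names, values)))
        posterior, weight = polytree_posterior(query, ev)  # weight = P(ev)
        total += weight
        for x, p in posterior.items():
            combined[x] = combined.get(x, 0.0) + weight * p
    # P(x | e) = sum_c P(x | e, c) P(e, c) / P(e)
    return {x: p / total for x, p in combined.items()}
```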
Bayesian network inference can be formulated as a combinatorial optimization problem, concerning the computation of an optimal factoring for the distribution represented in the net. Since the determination of an optimal factoring is a computationally hard problem, heuristic greedy strategies able to find approximations of the optimal factoring are usually adopted. In the present paper we investigate an alternative approach based on a combination of genetic algorithms (GA) and case-based reasoning (CBR). We show how the use of genetic algorithms can improve the quality of the computed factoring in case a static strategy is used (as for the MPE computation), while the combination of GA and CBR can still provide advantages in the case of dynamic strategies. Some preliminary results on different kinds of nets are then reported.
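A minimal sketch of the GA side of such an approach (a toy fitness function and mutation-only evolution, not the paper's GA+CBR system): individuals are permutations of the distributions, and fitness is the total size of the intermediate tables produced by combining them left to right. Cardinalities and factor scopes are invented for illustration.

```python
import random

CARD = {'A': 2, 'B': 3, 'C': 2, 'D': 4, 'E': 2}
FACTORS = [frozenset(s) for s in ('AB', 'BC', 'CD', 'DE', 'AE')]

def cost(order):
    """Total size of the running intermediate tables for this combination order."""
    acc, total = set(), 0
    for idx in order:
        acc |= FACTORS[idx]
        size = 1
        for v in acc:
            size *= CARD[v]
        total += size
    return total

def mutate(order):
    a, b = random.sample(range(len(order)), 2)   # swap two positions
    child = list(order)
    child[a], child[b] = child[b], child[a]
    return child

def ga(pop_size=30, generations=100):
    pop = [random.sample(range(len(FACTORS)), len(FACTORS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]          # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=cost)

best = ga()
print(best, cost(best))
```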
Uncertainty in Artificial Intelligence, 1990
This paper analyzes the circumstances under which Bayesian networks can be pruned in order to reduce computational complexity without altering the computation for variables of interest. Given a problem instance which consists of a query and evidence for a set of nodes in the network, it is possible to delete portions of the network which do not participate in the computation for the query. Savings in computational complexity can be large when the original network is not singly connected. Results analogous to those described in this paper have been derived before [Geiger, Verma, and Pearl 89; Shachter 88], but the implications for reducing complexity of the computations in Bayesian networks have not been stated explicitly. We show how a preprocessing step can be used to prune a Bayesian network prior to using standard algorithms to solve a given problem instance. We also show how our results can be used in a parallel distributed implementation in order to achieve greater savings. We define a minimal computationally equivalent subgraph of a Bayesian network. The algorithm developed in [Geiger, Verma, and Pearl 89] is modified to construct the subgraphs described in this paper with O(e) complexity, where e is the number of edges in the Bayesian network. Finally, we prove that the subgraphs described are minimal.
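A minimal sketch (assumed dictionary representation of the DAG, not the paper's algorithm) of one standard pruning rule: barren nodes, i.e. leaves that are neither query nor evidence, cannot affect the query and can be removed iteratively in time proportional to the number of edges.

```python
def prune_barren(parents, query, evidence):
    """parents: {node: set of parent nodes}; returns the retained nodes."""
    children = {v: set() for v in parents}
    for v, ps in parents.items():
        for p in ps:
            children[p].add(v)
    keep = set(parents)
    frontier = [v for v in keep if not children[v] and v not in query | evidence]
    while frontier:
        v = frontier.pop()
        if v not in keep:
            continue
        keep.discard(v)                       # v is barren: delete it
        for p in parents[v]:
            children[p].discard(v)
            # a parent may become a barren leaf once its children are gone
            if p in keep and not children[p] and p not in query | evidence:
                frontier.append(p)
    return keep

# Diamond network A->B, A->C, B->D, C->D with query {B}, no evidence:
# D is barren and is removed; C then becomes barren and is removed too.
net = {'A': set(), 'B': {'A'}, 'C': {'A'}, 'D': {'B', 'C'}}
print(prune_barren(net, query={'B'}, evidence=set()))  # {'A', 'B'}
```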
The research conducted by ICOT is firmly based on the paradigm of parallel logic programming. We have developed a fifth generation computer system (FGCS) prototype and evaluated its performance and appropriateness with applications from various domains. Our experience in this area so far indicates that the functions of the FGCS can benefit from the use of massive parallelism for computationally intensive tasks such as pattern matching and brute-force searching. The logical inference, however, should retain its control over the entire problem solving process. As an example, in this extended abstract, we provide a brief overview of two parallel inference applications; one in the domain of legal reasoning, and the other a Go game-playing program. We then describe how massive parallelism can play a role in enhancing the performance of these applications.
Scalable probabilistic reasoning is the key to unlocking the full potential of the age of big data. From untangling the biological processes that govern cancer to effectively targeting products and advertisements, probabilistic reasoning is how we make sense of noisy data and turn information into understanding and action. Unfortunately, the algorithms and tools for sophisticated structured probabilistic reasoning were developed for the sequential Von Neumann architecture and have therefore been unable to scale with big data. In this thesis we propose a simple set of design principles to guide the development of new parallel and distributed algorithms and systems for scalable probabilistic reasoning. We then apply these design principles to develop a series of new algorithms for inference in probabilistic graphical models and derive theoretical tools to characterize the parallel properties of statistical inference. We implement and assess the efficiency and scalability of the new inference algorithms in the multicore and distributed settings, demonstrating the substantial gains from applying the thesis methodology to real-world probabilistic reasoning. Based on the lessons learned in statistical inference we introduce the GraphLab parallel abstraction, which generalizes the thesis methodology and enables the rapid development of new efficient and scalable parallel and distributed algorithms for probabilistic reasoning. We demonstrate how the GraphLab abstraction can be used to rapidly develop new scalable algorithms for probabilistic reasoning and assess their performance on real-world problems in both the multicore and distributed settings. Finally, we identify a unique challenge associated with the underlying graphical structure in a wide range of probabilistic reasoning tasks. To address this challenge we introduce PowerGraph, which refines the GraphLab abstraction and achieves orders of magnitude improvements in performance relative to existing systems.

Research is a team effort and I was fortunate enough to be a part of an amazing team. I would like to thank my advisor Carlos Guestrin, who helped me focus on the important problems, guided me through the challenges of research, and taught me how to more effectively teach and communicate ideas both in writing and in presentations. In addition, Carlos gave me the opportunity to work with, learn from, and lead an exceptional team. Much of the work in this thesis was done with Yucheng Low, who taught me a lot about systems, software engineering, and how to persevere through challenging bugs and complicated and even impossible proofs. Our many long discussions shaped both the key principles in this thesis as well as their execution. In addition, Yucheng was instrumental in developing many of the systems and theoretical techniques used to evaluate the ideas in this thesis. Finally, Yucheng's exceptional skills as a world class barista made possible many late nights of successful research. Early in my graduate work at CMU I had the opportunity to work with Andreas Krause on Gaussian process models for signal quality estimation in wireless sensor networks. Andreas showed me how to apply the scientific method to design effective experiments, isolate bugs, and understand complex processes. Around the same time I also started to work with David O'Hallaron. As I transitioned my focus to the work in this thesis, David provided early guidance on scalable algorithm and system design and research focus.
In addition, David introduced me to standard techniques in scientific computing and helped me build collaborations with Intel research.
Machine Intelligence and Pattern Recognition, 1990
Although a number of algorithms have been developed to solve probabilistic inference problems on belief networks, they can be divided into two main groups: exact techniques which exploit the conditional independence revealed when the graph structure is relatively sparse, and probabilistic sampling techniques which exploit the "conductance" of an embedded Markov chain when the conditional probabilities have non-extreme values. In this paper, we investigate a family of Monte Carlo sampling techniques similar to Logic Sampling [Henrion, 1988] which appear to perform well even in some multiply-connected networks with extreme conditional probabilities, and thus would be generally applicable. We consider several enhancements which reduce the posterior variance using this approach and propose a framework and criteria for choosing when to use those enhancements.
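A minimal sketch of plain logic sampling in the spirit of [Henrion, 1988], on a made-up rain/sprinkler/wet-grass network (the CPTs and names are illustrative, not from the paper): draw forward samples from the network, reject those that contradict the evidence, and estimate the posterior from the survivors. The rejection step is exactly what degrades with extreme conditional probabilities, motivating the variance-reducing enhancements discussed above.

```python
import random

def sample_once():
    """Forward-sample (Rain, Sprinkler, WetGrass) from a toy network."""
    rain = random.random() < 0.2
    sprinkler = random.random() < 0.1
    p_wet = {(True, True): 0.99, (True, False): 0.9,
             (False, True): 0.85, (False, False): 0.01}[(rain, sprinkler)]
    wet = random.random() < p_wet
    return rain, sprinkler, wet

def estimate_p_rain_given_wet(n=100_000):
    kept = rainy = 0
    for _ in range(n):
        rain, _, wet = sample_once()
        if wet:                  # rejection: keep only evidence-consistent samples
            kept += 1
            rainy += rain
    return rainy / kept

print(estimate_p_rain_given_wet())   # approx P(Rain | WetGrass=true) ~ 0.71
```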
International Journal of Approximate Reasoning, 1994
A number of exact algorithms have been developed to perform probabilistic inference in Bayesian belief networks in recent years. The techniques used in these algorithms are closely related to network structures and some of them are not easy to understand and implement. In this paper, we consider the problem from the combinatorial optimization point of view and state that efficient probabilistic inference in a belief network is a problem of finding an optimal factoring given a set of probability distributions. From this viewpoint, previously developed algorithms can be seen as alternate factoring strategies. In this paper, we define a combinatorial optimization problem, the optimal factoring problem, and discuss application of this problem in belief networks. We show that optimal factoring provides insight into the key elements of efficient probabilistic inference, and demonstrate simple, easily implemented algorithms with excellent performance.
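A minimal sketch of the optimal-factoring view (simplified: it scores only the sizes of intermediate tables and omits marginalization, and is not the paper's algorithm): a factoring is an order in which distributions are pairwise combined, its cost is the total size of the intermediate tables, and a greedy strategy picks the cheapest pair at each step. Cardinalities and factor scopes are invented.

```python
from itertools import combinations

def table_size(vars_, card):
    size = 1
    for v in vars_:
        size *= card[v]
    return size

def greedy_factoring(factors, card):
    """factors: list of frozensets of variable names. Returns (order, cost)."""
    factors = list(factors)
    order, cost = [], 0
    while len(factors) > 1:
        # pick the pair whose combination yields the smallest table
        i, j = min(combinations(range(len(factors)), 2),
                   key=lambda ij: table_size(factors[ij[0]] | factors[ij[1]], card))
        merged = factors[i] | factors[j]
        cost += table_size(merged, card)
        order.append((factors[i], factors[j]))
        factors = [f for k, f in enumerate(factors) if k not in (i, j)] + [merged]
    return order, cost

card = {'A': 2, 'B': 2, 'C': 2, 'D': 2}
dists = [frozenset(s) for s in ('A', 'AB', 'BC', 'CD')]
print(greedy_factoring(dists, card))
```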
Uncertainty in Artificial Intelligence, 1993
Given a belief network with evidence, the task of finding the l most probable explanations (MPE) in the belief network is that of identifying and ordering the l most probable instantiations of the non-evidence nodes of the belief network. Although many approaches have been proposed for solving this problem, most work only for restricted topologies (i.e., singly connected belief networks). In this paper, we will present a new approach for finding l MPEs in an arbitrary belief network. First, we will present an algorithm for finding the MPE in a belief network. Then, we will present a linear time algorithm for finding the next MPE after finding the first MPE. And finally, we will discuss the problem of finding the MPE for a subset of variables of a belief network, and show that the problem can be efficiently solved by this approach.
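A brute-force sketch of the task definition itself (not the paper's algorithm), to make l-MPE concrete: enumerate instantiations of the non-evidence nodes of a toy chain network with made-up CPTs and keep the l most probable.

```python
from itertools import product
import heapq

def joint(a, b, c):
    """Joint probability of a toy chain A -> B -> C with invented CPTs."""
    pa = {0: 0.6, 1: 0.4}[a]
    pb = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.8}[(a, b)]
    pc = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.4, (1, 1): 0.6}[(b, c)]
    return pa * pb * pc

def l_mpe(l, evidence):
    cands = []
    for a, b, c in product([0, 1], repeat=3):
        x = {'A': a, 'B': b, 'C': c}
        if all(x[k] == v for k, v in evidence.items()):
            cands.append((joint(a, b, c), x))
    return heapq.nlargest(l, cands, key=lambda t: t[0])

# The 2 most probable explanations given evidence C = 1:
for p, x in l_mpe(2, {'C': 1}):
    print(p, x)
```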
AIP Conference Proceedings, 2008
2002
Bayesian networks can be seen as a factorisation of a joint probability distribution over a set of variables, based on the conditional independence relations amongst the variables. In this paper we show how it is possible to achieve a finer factorisation by decomposing the original factors when certain conditions hold. The new ideas can be applied to algorithms able to deal with factorised probabilistic potentials, such as Lazy Propagation and Lazy-Penniless, as well as Monte Carlo methods based on Importance Sampling.
1991
Belief networks have become an increasingly popular mechanism for dealing with uncertainty in systems. Unfortunately, it is known that finding the probability values of belief network nodes given a set of evidence is not tractable in general. Many different simulation algorithms for approximating solutions to this problem have been proposed and implemented. In this report, we describe the implementation of a collection of such simulation algorithms.
Compiling Bayesian networks (BNs) to junction trees and performing belief propagation over them is among the most prominent approaches to computing posteriors in BNs. However, belief propagation over a junction tree is known to be computationally intensive in the general case. Its complexity may increase dramatically with the connectivity and state space cardinality of Bayesian network nodes. In this paper, we address this computational challenge using GPU parallelization. We develop data structures and algorithms that extend existing junction tree techniques, and specifically develop a novel approach to computing each belief propagation message in parallel. We implement our approach on an NVIDIA GPU and test it using BNs from several applications. Experimentally, we study how junction tree parameters affect parallelization opportunities and hence the performance of our algorithm. We achieve speedups ranging from 0.68 to 9.18 for the BNs studied. (* A junction tree is a tree of cliques computed from the moralized graph of the original BN.)
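A minimal numpy sketch of the structure being parallelized (shapes and variable names are illustrative; a real implementation would be a CUDA kernel, as in the paper's setting): a junction tree message marginalizes a clique potential down to the separator, and each separator cell is an independent sum-reduction, so cells map naturally onto GPU threads.

```python
import numpy as np

clique = np.random.rand(16, 8, 4)     # potential over variables (X, Y, Z)

# Message to a neighbor whose separator is (Y, Z): sum out X.
message = clique.sum(axis=0)          # all 8*4 cells are independent

# Equivalently, cell by cell -- the loop body is what one GPU thread computes:
manual = np.empty((8, 4))
for y in range(8):
    for z in range(4):
        manual[y, z] = clique[:, y, z].sum()
assert np.allclose(message, manual)
```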
IEEE Transactions on Knowledge and Data …, 2002
Conference: Proceedings of the Sixth UAI Bayesian Modelling Applications Workshop, Helsinki, Finland, July 9, 2008
In this paper we present a new method (EBBN) that aims at reducing the need to elicit formidable amounts of probabilities for Bayesian belief networks, by reducing the number of probabilities that need to be specified in the quantification phase. This method enables the derivation of a variable's conditional probability table (CPT) in the general case that the states of the variable are ordered and the states of each of its parent nodes can be ordered with respect to the influence they exercise. EBBN requires only a limited amount of probability assessments from experts to determine a variable's full CPT and uses piecewise linear interpolation. The number of probabilities to be assessed in this method is linear in the number of conditioning variables. EBBN's performance was compared with the results achieved by applying the normal copula vine approach from Hanea & Kurowicka (2007) and by using a simple uniform distribution.
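A minimal sketch of the interpolation idea (illustrative of the general approach, not the EBBN method itself; all probabilities are made up): elicit the child distribution only at extreme configurations of an ordered parent and fill the intermediate rows of the CPT by piecewise linear interpolation.

```python
import numpy as np

# One parent with 5 ordered states; an expert assesses P(child=high | parent)
# only at the extreme states 0 and 4.
assessed_states = [0, 4]
assessed_probs = [0.05, 0.90]

parent_states = np.arange(5)
p_high = np.interp(parent_states, assessed_states, assessed_probs)
cpt = np.column_stack([1 - p_high, p_high])   # columns: P(low), P(high)
print(cpt)   # 5 rows derived from just 2 elicited probabilities
```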
Neural Information Processing Systems, 1999
Local "belief propagation" rules of the sort proposed by Pearl [15] are guaranteed to converge to the correct posterior probabilities in singly connected graphical models. Recently, a number of researchers have empirically demonstrated good performance of "loopy belief propagation"using these same rules on graphs with loops. Perhaps the most dramatic instance is the near Shannon-limit performance of "Turbo codes", whose decoding algorithm is equivalent to loopy belief propagation. Except for the case of graphs with a single loop, there has been little theoretical understanding of the performance of loopy propagation. Here we analyze belief propagation in networks with arbitrary topologies when the nodes in the graph describe jointly Gaussian random variables. We give an analytical formula relating the true posterior probabilities with those calculated using loopy propagation. We give sufficient conditions for convergence and show that when belief propagation converges it gives the correct posterior means for all graph topologies, not just networks with a single loop. The related "max-product" belief propagation algorithm finds the maximum posterior probability estimate for singly connected networks. We show that, even for non-Gaussian probability distributions, the convergence points of the max-product algorithm in loopy networks are maxima over a particular large local neighborhood of the posterior probability. These results help clarify the empirical performance results and motivate using the powerful belief propagation algorithm in a broader class of networks. Problems involving probabilistic belief propagation arise in a wide variety of applications, including error correcting codes, speech recognition and medical diagnosis. If the graph is singly connected, there exist local message-passing schemes to calculate the posterior probability of an unobserved variable given the observed variables. Pearl [15] derived such a scheme for singly connected Bayesian networks and showed that this "belief propagation" algorithm is guaranteed to converge to the correct posterior probabilities (or "beliefs"). Several groups have recently reported excellent experimental results by running algorithms