Bayesian network inference can be formulated as a combinatorial optimization problem, concerning the computation of an optimal factoring for the distribution represented in the net. Since the determination of an optimal factoring is a computationally hard problem, heuristic greedy strategies able to find approximations of the optimal factoring are usually adopted. In the present paper we investigate an alternative approach based on a combination of genetic algorithms (GA) and case-based reasoning (CBR). We show how the use of genetic algorithms can improve the quality of the computed factoring when a static strategy is used (as for MPE computation), while the combination of GA and CBR can still provide advantages in the case of dynamic strategies. Some preliminary results on different kinds of nets are then reported.
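The static-factoring setting this abstract sketches can be illustrated with a toy genetic algorithm: individuals are elimination orderings of a small moralized graph, and fitness is the total size of the intermediate tables an ordering produces. Everything below (the graph, the cost model, the GA parameters) is invented for illustration and is not the paper's algorithm:

```python
import itertools
import random

# Toy moralized graph: adjacency sets over 5 binary variables (illustrative).
GRAPH = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4}, 3: {1}, 4: {2}}

def factoring_cost(order):
    """Sum of intermediate-table sizes when eliminating variables in `order`
    (variables are binary, so a table over k variables costs 2**k entries)."""
    adj = {v: set(n) for v, n in GRAPH.items()}
    cost = 0
    for v in order:
        nbrs = adj[v]
        cost += 2 ** (len(nbrs) + 1)          # table over v and its neighbours
        for a in nbrs:                        # add fill-in edges among neighbours
            adj[a] |= nbrs - {a}
        for a in adj:                         # remove the eliminated variable
            adj[a].discard(v)
    return cost

def ga_ordering(pop_size=20, generations=40, seed=0):
    """Elitist GA over permutations with a simple swap mutation."""
    rng = random.Random(seed)
    pop = [rng.sample(list(GRAPH), len(GRAPH)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=factoring_cost)
        survivors = pop[: pop_size // 2]      # keep the cheapest orderings
        children = []
        for p in survivors:
            child = p[:]
            i, j = rng.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=factoring_cost)

best = ga_ordering()
optimum = min(itertools.permutations(GRAPH), key=factoring_cost)
```

On a five-node graph the optimum can still be checked by brute force, which is what makes such toy comparisons useful when experimenting with GA parameters.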
Pattern Recognition Letters, 1999
Abductive inference in Bayesian belief networks is the process of generating the K most probable configurations given the observed evidence. When we are interested only in a subset of the network's variables, this problem is called partial abductive inference. Both problems are NP-hard, and so exact computation is not always possible. This paper describes an approximate method based on genetic algorithms to perform partial abductive inference. We have tested the algorithm using the ALARM network, and from the experimental results we conclude that the algorithm presented here is a good tool for performing this kind of probabilistic reasoning.
International Journal of Approximate Reasoning, 1994
A number of exact algorithms have been developed to perform probabilistic inference in Bayesian belief networks in recent years. The techniques used in these algorithms are closely related to network structures, and some of them are not easy to understand and implement. In this paper, we consider the problem from the combinatorial optimization point of view and state that efficient probabilistic inference in a belief network is a problem of finding an optimal factoring given a set of probability distributions. From this viewpoint, previously developed algorithms can be seen as alternate factoring strategies. We define a combinatorial optimization problem, the optimal factoring problem, and discuss its application in belief networks. We show that optimal factoring provides insight into the key elements of efficient probabilistic inference, and demonstrate simple, easily implemented algorithms with excellent performance.
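The "inference as factoring" viewpoint can be made concrete on a tiny chain network: the same marginal is obtained under any factoring, but the order in which variables are summed out determines the size of the intermediate tables. The network and numbers below are illustrative only:

```python
import itertools

# Tiny chain A -> B -> C with binary variables; inference as factoring:
# P(C) = sum_A sum_B P(A) P(B|A) P(C|B).
P_A = {0: 0.6, 1: 0.4}
P_B_given_A = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}  # key (b, a)
P_C_given_B = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.4, (1, 1): 0.6}  # key (c, b)

def marginal_c_bruteforce():
    """Worst factoring: enumerate the full joint, then sum out A and B."""
    p = {c: 0.0 for c in (0, 1)}
    for a, b, c in itertools.product((0, 1), repeat=3):
        p[c] += P_A[a] * P_B_given_A[(b, a)] * P_C_given_B[(c, b)]
    return p

def marginal_c_factored():
    """Good factoring: sum A out early, so no table ever covers 3 variables."""
    p_b = {b: sum(P_A[a] * P_B_given_A[(b, a)] for a in (0, 1)) for b in (0, 1)}
    return {c: sum(p_b[b] * P_C_given_B[(c, b)] for b in (0, 1)) for c in (0, 1)}
```

Both routes return the same distribution over C; on longer chains the first route grows exponentially while the second stays linear, which is the gap an optimal factoring strategy exploits.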
IEEE Transactions on Evolutionary Computation, 2002
Abductive inference in Bayesian belief networks (BBNs) is intended as the process of generating the most probable configurations given observed evidence. When we are interested only in a subset of the network's variables, this problem is called partial abductive inference. Both problems are NP-hard and so exact computation is not always possible. In this paper, a genetic algorithm is used to perform partial abductive inference in BBNs. The main contribution is the introduction of new genetic operators designed specifically for this problem. By using these genetic operators, we try to take advantage of the calculations previously carried out, when a new individual is evaluated. The algorithm is tested using a widely used Bayesian network and a randomly generated one and then compared with a previous genetic algorithm based on classical genetic operators. From the experimental results, we conclude that the new genetic operators preserve the accuracy of the previous algorithm, and also reduce the number of operations performed during the evaluation of individuals. The performance of the genetic algorithm is, thus, improved.
2005
This paper addresses exact learning of Bayesian network structure from data and expert's knowledge based on score functions that are decomposable. First, it describes useful properties that strongly reduce the time and memory costs of many known methods, such as hill-climbing, dynamic programming and sampling methods. Secondly, a branch and bound algorithm is presented that integrates parameter and structural constraints with data in a way to guarantee global optimality with respect to the score function. It is an any-time procedure because, if stopped, it provides the best current solution and an estimation about how far it is from the global solution. We show empirically the advantages of the properties and the constraints, and the applicability of the algorithm to large data sets (up to one hundred variables) that cannot be handled by other current techniques (limited to around 30 variables).
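The decomposability this abstract relies on can be sketched with a toy BIC score: the score of a DAG is a sum of independent per-family terms, which is what lets branch-and-bound search (and caching) prune and reuse work. The dataset, structure encoding, and helper names below are illustrative assumptions, not the paper's code:

```python
import math
from collections import Counter

# Toy dataset over three binary variables X0, X1, X2 (invented values).
DATA = [(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1), (1, 0, 0), (1, 1, 0)]

def family_bic(child, parents, data):
    """BIC contribution of one node given its parent set. Decomposability
    means the whole-network score is just the sum of these family terms."""
    n = len(data)
    counts = Counter((row[child], tuple(row[p] for p in parents)) for row in data)
    parent_counts = Counter(tuple(row[p] for p in parents) for row in data)
    loglik = sum(c * math.log(c / parent_counts[pa]) for (x, pa), c in counts.items())
    free_params = (2 - 1) * (2 ** len(parents))   # binary child, binary parents
    return loglik - 0.5 * math.log(n) * free_params

def bic_score(structure, data):
    """Score a DAG given as {node: parent tuple}. Because the sum splits per
    family, a branch-and-bound search can cache each family term once."""
    return sum(family_bic(v, ps, data) for v, ps in structure.items())
```

Changing one node's parent set only invalidates that node's family term, so the other cached terms survive across the whole search tree.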
IEEE Transactions on Knowledge and Data Engineering, 2012
Reducing the computational complexity of inference in Bayesian Networks (BNs) is a key challenge. Current algorithms for inference convert a BN to a junction tree structure made up of clusters of the BN nodes and the resulting complexity is time exponential in the size of a cluster. The need to reduce the complexity is especially acute where the BN contains continuous nodes. We propose a new method for optimizing the calculation of Conditional Probability Tables (CPTs) involving continuous nodes, approximated in Hybrid Bayesian Networks (HBNs), using an approximation algorithm called dynamic discretization. We present an optimized solution to this problem involving binary factorization of the arithmetical expressions declared to generate the CPTs for continuous nodes for deterministic functions and statistical distributions. The proposed algorithm is implemented and tested in a commercial Hybrid Bayesian Network software package and the results of the empirical evaluation show significant performance improvement over unfactorized models.
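The gain from binary factorization can be sketched for a deterministic sum node: a single table conditioned on six binary parents needs 64 rows, while a chain of two-argument tables needs 40 rows in total. This is a hedged illustration of the idea only, not the paper's dynamic-discretization implementation:

```python
import itertools

PARENTS = 6  # binary parents X1..X6 feeding a deterministic sum node

# Unfactorized: one table conditioned on every parent combination at once.
full_table = {xs: sum(xs) for xs in itertools.product((0, 1), repeat=PARENTS)}

def pairwise_sum_tables():
    """Binary factorization: rewrite X1+...+X6 as (((X1+X2)+X3)+...)+X6 so
    each intermediate table conditions on only two inputs."""
    tables, sizes = [], []
    domain = (0, 1)  # running domain of the accumulated partial sum
    for _ in range(PARENTS - 1):
        t = {(a, b): a + b for a in domain for b in (0, 1)}
        tables.append(t)
        sizes.append(len(t))
        domain = tuple(sorted(set(t.values())))
    return tables, sizes

def eval_factorized(xs, tables):
    """Chain the pairwise tables to recover the full deterministic function."""
    acc = xs[0]
    for t, x in zip(tables, xs[1:]):
        acc = t[(acc, x)]
    return acc

tables, sizes = pairwise_sum_tables()
```

The factorized chain reproduces the unfactorized table exactly; the row-count gap widens rapidly as the number of parents grows, which is where the reported speed-ups come from.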
2002
Bayesian networks can be seen as a factorisation of a joint probability distribution over a set of variables, based on the conditional independence relations amongst the variables. In this paper we show how it is possible to achieve a finer factorisation by decomposing the original factors when certain conditions hold. The new ideas can be applied to algorithms able to deal with factorised probabilistic potentials, such as Lazy Propagation and Lazy-Penniless, as well as Monte Carlo methods based on Importance Sampling.
Information Sciences, 2013
Thanks to their inherent properties, probabilistic graphical models are one of the prime candidates for machine learning and decision making tasks especially in uncertain domains. Their capabilities, like representation, inference and learning, if used effectively, can greatly help to build intelligent systems that are able to act accordingly in different problem domains. Bayesian networks are one of the most widely used class of these models. Some of the inference and learning tasks in Bayesian networks involve complex optimization problems that require the use of meta-heuristic algorithms. Evolutionary algorithms, as successful problem solvers, are promising candidates for this purpose. This paper reviews the application of evolutionary algorithms for solving some NP-hard optimization tasks in Bayesian network inference and learning.
Mathematical and Computer Modelling, 2008
Bayesian networks model conditional dependencies among the domain variables, and provide a way to deduce their interrelationships as well as a method for the classification of new instances. One of the most challenging problems in using Bayesian networks, in the absence of a domain expert who can dictate the model, is inducing the structure of the network from a large, multivariate data set. We propose a new methodology for the design of the structure of a Bayesian network based on concepts of graph theory and nonlinear integer optimization techniques.
IEICE Transactions on Information and Systems, 2008
Conference: Proceedings of the Sixth UAI Bayesian Modelling Applications WorkshopAt: Helsinki, Finland, July 9, 2008
In this paper we present a new method (EBBN) that aims at reducing the need to elicit formidable amounts of probabilities for Bayesian belief networks, by reducing the number of probabilities that need to be specified in the quantification phase. This method enables the derivation of a variable's conditional probability table (CPT) in the general case that the states of the variable are ordered and the states of each of its parent nodes can be ordered with respect to the influence they exercise. EBBN requires only a limited number of probability assessments from experts to determine a variable's full CPT and uses piecewise linear interpolation. The number of probabilities to be assessed in this method is linear in the number of conditioning variables. EBBN's performance was compared with the results achieved by applying the normal copula vine approach from Hanea & Kurowicka (2007) and by using a simple uniform distribution.
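The interpolation step can be sketched as follows: given expert anchors at the extreme states of an ordered parent, the intermediate CPT rows are filled in linearly. The anchor values and helper below are illustrative assumptions, not EBBN's exact scheme:

```python
# One binary child with one ordered parent taking states 0..4. The expert
# assesses P(child=1 | parent) only at the extreme parent states; the
# intermediate rows are filled in by linear interpolation, so the number of
# assessments stays small.
ASSESSED = {0: 0.05, 4: 0.90}   # hypothetical expert anchors at the extremes

def interpolate_cpt(assessed, n_states):
    """Build a full CPT by linear interpolation between the two anchors."""
    lo, hi = min(assessed), max(assessed)
    cpt = {}
    for s in range(n_states):
        w = (s - lo) / (hi - lo)                      # position between anchors
        p1 = (1 - w) * assessed[lo] + w * assessed[hi]
        cpt[s] = {1: p1, 0: 1 - p1}                   # each row sums to 1
    return cpt

CPT = interpolate_cpt(ASSESSED, 5)
```

With several ordered parents the same idea is applied per parent and the partial results combined, which is why the elicitation burden grows linearly rather than exponentially.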
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1996
We present a new approach to structure learning in the field of Bayesian networks: We tackle the problem of the search for the best Bayesian network structure, given a database of cases, using the genetic algorithm philosophy for searching among alternative structures. We start by assuming an ordering between the nodes of the network structures. This assumption is necessary to guarantee that the networks that are created by the genetic algorithms are legal Bayesian network structures. Next, we release the ordering assumption by using a "repair operator" which converts illegal structures into legal ones. We present empirical results and analyze them statistically. The best results are obtained with an elitist genetic algorithm that contains a local optimizer.
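One simple form such a repair operator could take (a sketch under invented conventions, not necessarily the paper's operator) is to scan a candidate edge list and drop any edge that would close a directed cycle:

```python
def has_path(edges, src, dst):
    """Depth-first reachability over a directed edge list."""
    stack, seen = [src], set()
    while stack:
        v = stack.pop()
        if v == dst:
            return True
        if v in seen:
            continue
        seen.add(v)
        stack.extend(b for a, b in edges if a == v)
    return False

def repair(edges):
    """Repair-operator sketch: keep edges in order, skipping any edge whose
    addition would create a directed cycle, so the result is a legal DAG."""
    legal = []
    for a, b in edges:
        if not has_path(legal, b, a):   # a->b is safe iff there is no b~>a path
            legal.append((a, b))
    return legal

ILLEGAL = [(0, 1), (1, 2), (2, 0), (2, 3)]   # contains the cycle 0->1->2->0
DAG = repair(ILLEGAL)
```

Such a repair step is what allows the node-ordering assumption to be released: crossover and mutation may produce cyclic offspring, which are then mapped back into the space of legal network structures.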
1998
A method for improving search-based inference techniques in Bayesian networks by obtaining a prior estimation of the error is presented. The method is based on a recently introduced algorithm for calculating the contribution of a given set of instantiations to the total probability mass. If a certain accuracy for the solution is desired, the method provides us with the number of replications (i.e., the sample size) needed for obtaining the approximated values with the desired accuracy. In addition to providing a prior stopping rule, the method substantially reduces the structure of the search tree and, hence, the computer time required for the process. Important savings are obtained in the case of Bayesian networks with extreme probabilities, as shown by the examples reported in the paper. As an example of several possible applications of the method, the problem of finding a maximum a posteriori (MAP) instantiation of the Bayesian network variables, given a partial value assignment as an initial constraint, is presented.
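As a generic illustration of what a prior stopping rule looks like (using a standard Hoeffding bound, not the paper's own error estimator), the number of replications needed for a given accuracy can be computed before any sampling starts:

```python
import math

def hoeffding_sample_size(epsilon, delta):
    """Generic prior stopping rule via the Hoeffding bound: replications
    needed so an estimated probability lies within `epsilon` of its true
    value with confidence 1 - `delta`. Illustrative, not the paper's rule."""
    return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

# Tighter accuracy demands quadratically more replications.
n_tight = hoeffding_sample_size(0.01, 0.05)
n_loose = hoeffding_sample_size(0.05, 0.05)
```

The appeal of any such prior rule is the same as in the abstract: the sample size is known before the search starts, instead of being decided by a posterior convergence check.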
International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'06), 2005
The search for an optimal Bayesian network given a database of observations is NP-hard. Nevertheless, several heuristic search strategies have been found to be effective. We present a new population-based algorithm to learn the structure of Bayesian networks without assuming any ordering of the nodes and allowing for the presence of both discrete and continuous random variables. The numerical performance of our Mixed Genetic Algorithm (M-GA) is investigated on a case study taken from the literature and compared with greedy search.
Uncertainty in Artificial Intelligence, 1993
Bayesian belief networks can be used to represent and to reason about complex systems with uncertain, incomplete and conflicting information. Belief networks are graphs encoding and quantifying probabilistic dependence and conditional independence among variables. One type of reasoning of interest in diagnosis is called abductive inference (determination of the globally most probable system description given the values of any partial subset of variables). In some cases, abductive inference can be performed with exact algorithms using distributed network computations, but it is an NP-hard problem and complexity increases drastically with the presence of undirected cycles, the number of discrete states per variable, and the number of variables in the network. This paper describes an approximate method based on genetic algorithms to perform abductive inference in large, multiply connected networks for which complexity is a concern when using most exact methods and for which systematic search methods are not feasible. The theoretical adequacy of the method is discussed and preliminary experimental results are presented.
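A minimal sketch of this kind of GA (toy network, invented parameters, not the paper's setup): individuals are full configurations, configurations violating the evidence get zero fitness, and elitist selection plus bit-flip mutation drives the population toward the most probable explanation:

```python
import itertools
import random

# Toy network A -> B, A -> C (all binary); abductive inference searches for
# the most probable full configuration (a, b, c) given the evidence C = 1.
P_A = {0: 0.3, 1: 0.7}
P_B_A = {(0, 0): 0.8, (1, 0): 0.2, (0, 1): 0.4, (1, 1): 0.6}    # key (b, a)
P_C_A = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.35, (1, 1): 0.65}  # key (c, a)
EVIDENCE = {2: 1}   # variable index 2 (C) observed to be 1

def fitness(cfg):
    """Joint probability of a full configuration, zero if it violates evidence."""
    a, b, c = cfg
    if any(cfg[i] != v for i, v in EVIDENCE.items()):
        return 0.0
    return P_A[a] * P_B_A[(b, a)] * P_C_A[(c, a)]

def ga_mpe(pop_size=12, generations=25, seed=1):
    """Elitist GA over full configurations with bit-flip mutation."""
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        keep = pop[: pop_size // 2]
        children = []
        for p in keep:
            i = rng.randrange(3)
            child = list(p)
            child[i] ^= 1
            children.append(tuple(child))
        pop = keep + children
    return max(pop, key=fitness)

best = ga_mpe()
exact = max(itertools.product((0, 1), repeat=3), key=fitness)
```

On three binary variables the exact answer is available by enumeration, so the GA is pure overhead here; the approach only pays off on the large, multiply connected networks the abstract targets, where enumeration is infeasible.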
Advances in Evolutionary Algorithms, 2008
2004
A method to induce Bayesian networks from data that overcomes some limitations of other learning algorithms is proposed. One of the main features of this method is a metric that evaluates Bayesian networks by combining different quality criteria. A fuzzy system is proposed to enable the combination of different quality metrics. In this fuzzy system a metric of classification is also proposed, a criterion that is not often used to guide the search while learning Bayesian networks. Finally, the fuzzy system is integrated into a genetic algorithm, used as a search method to explore the space of possible Bayesian networks, resulting in a robust and flexible learning method with performance in the range of the best Bayesian network learning algorithms developed so far.