Uncertainty Proceedings 1991, 1991
This paper discusses multiple Bayesian network representation paradigms for encoding asymmetric independence assertions. We offer three contributions: (1) an inference mechanism that makes explicit use of asymmetric independence to speed up computations, (2) a simplified definition of similarity networks and extensions of their theory, and (3) a generalized representation scheme that encodes more types of asymmetric independence assertions than do similarity networks.
Proceedings of the Twelfth …, 1996
Bayesian networks provide a language for qualitatively representing the conditional independence properties of a distribution. This allows a natural and compact representation of the distribution, eases knowledge acquisition, and supports effective inference algorithms. It is well-known, however, that there are certain independencies that we cannot capture qualitatively within the Bayesian network structure: independencies that hold only in certain contexts, i.e., given a specific assignment of values to certain variables. In this paper, we propose a formal notion of context-specific independence (CSI), based on regularities in the conditional probability tables (CPTs) at a node. We present a technique, analogous to (and based on) d-separation, for determining when such independence holds in a given network. We then focus on a particular qualitative representation scheme---tree-structured CPTs---for capturing CSI. We suggest ways in which this representation can be used to support effective inference algorithms. In particular, we present a structural decomposition of the resulting network which can improve the performance of clustering algorithms, and an alternative algorithm based on cutset conditioning.
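As a minimal sketch of what a tree-structured CPT buys (the variables A, B, Y and the numbers are invented, not drawn from the paper): internal nodes test a parent, leaves store a distribution, and shared leaves make context-specific independence explicit.

# Minimal sketch of a tree-structured CPT (hypothetical variables A, B, Y).
# Internal nodes test a parent; leaves store P(Y=1). When A=0 the tree never
# consults B, so Y is contextually independent of B in the context A=0.

def make_tree_cpt():
    # (test_var, {value: subtree}) for internal nodes; float for leaves.
    return ("A", {
        0: 0.2,                      # P(Y=1 | A=0), regardless of B
        1: ("B", {0: 0.5, 1: 0.9}),  # P(Y=1 | A=1, B=b)
    })

def lookup(tree, assignment):
    """Walk the tree with a parent assignment down to the leaf distribution."""
    while not isinstance(tree, float):
        var, children = tree
        tree = children[assignment[var]]
    return tree

cpt = make_tree_cpt()
print(lookup(cpt, {"A": 0, "B": 0}))  # 0.2
print(lookup(cpt, {"A": 0, "B": 1}))  # 0.2  (same leaf: CSI in context A=0)
print(lookup(cpt, {"A": 1, "B": 1}))  # 0.9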
We characterize probabilities in Bayesian networks in terms of algebraic expressions called quasi-probabilities. These are arrived at by casting Bayesian networks as noisy AND-OR-NOT networks, and viewing the subnetworks that lead to a node as arguments for or against that node. Quasi-probabilities are in a sense the "natural" algebra of Bayesian networks: we can easily compute the marginal quasi-probability of any node recursively, in compact form; and we can obtain the joint quasi-probability of any set of nodes by multiplying their marginals (using an idempotent product operator). Quasi-probabilities are easily manipulated to improve the efficiency of probabilistic inference. They also turn out to be representable as square-wave pulse trains, and joint and marginal distributions can be computed by multiplication and complementation of pulse trains.
IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 1996
Artificial Intelligence, 1996
We examine two representation schemes for uncertain knowledge: the similarity network (Heckerman, 1991) and the Bayesian multinet. These schemes are extensions of the Bayesian network model in that they represent asymmetric independence assertions. We explicate the notion of relevance upon which similarity networks are based and present an efficient inference algorithm that works under the assumption that every event has a nonzero probability. Another inference algorithm is developed that works under no such restriction, albeit less efficiently. We show that similarity networks are not inferentially complete; that is, not every query can be answered. Nonetheless, we show that a similarity network can always answer any query of the form: "What is the posterior probability of a hypothesis given evidence?" We call this property diagnostic completeness. Finally, we describe a generalization of similarity networks that can encode more types of asymmetric conditional independence assertions than can ordinary similarity networks.
1990
We describe how to combine probabilistic logic and Bayesian networks to obtain a new framework ("Bayesian logic") for dealing with uncertainty and causal relationships in an expert system. Probabilistic logic, invented by Boole, is a technique for drawing inferences from uncertain propositions for which there are no independence assumptions. A Bayesian network is a "belief net" that can represent complex conditional independence assumptions. We show how to solve inference problems in Bayesian logic by applying Benders decomposition to a nonlinear programming formulation. We also show that the number of constraints grows only linearly with the problem size for a large class of networks.
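To make Boole's "no independence assumptions" point concrete, here is a small sketch (illustrative numbers, not the paper's Benders formulation) that bounds P(A and B) given only P(A) = 0.7 and P(B) = 0.6, by linear programming over the four truth assignments:

# Sketch: Boole-style probabilistic logic as a linear program (illustrative
# numbers, not the paper's Benders decomposition). The unknowns are the
# probabilities of the four truth assignments (A,B) in {00, 01, 10, 11}.
from scipy.optimize import linprog

# Columns: p00, p01, p10, p11
A_eq = [
    [1, 1, 1, 1],  # probabilities sum to 1
    [0, 0, 1, 1],  # P(A) = p10 + p11
    [0, 1, 0, 1],  # P(B) = p01 + p11
]
b_eq = [1.0, 0.7, 0.6]
objective = [0, 0, 0, 1]  # P(A and B) = p11

lo = linprog(objective, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)
hi = linprog([-c for c in objective], A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)
print(lo.fun, -hi.fun)  # 0.3 <= P(A and B) <= 0.6, with no independence assumed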
International Journal of Approximate Reasoning, 2018
Directed separation (d-separation) played a fundamental role in the founding of Bayesian networks (BNs) and continues to be useful today in a wide range of applications. Given an independence to be tested, current implementations of d-separation explore the active part of a BN. On the other hand, an overlooked property of d-separation implies that d-separation need only consider the relevant part of a BN. We propose a new method for testing independencies in BNs, called relevant path separation (rp-separation), which explores the intersection between the active and relevant parts of a BN. Favourable experimental results are reported.
Uncertainty Proceedings 1991, 1991
The graphoid axioms for conditional independence, originally described by Dawid [1979], are fundamental to probabilistic reasoning [Pearl, 1988]. Such axioms provide a mechanism for manipulating conditional independence assertions without resorting to their numerical definition. This paper explores a representation for independence statements using multiple undirected graphs and some simple graphical transformations. The independence statements derivable in this system are equivalent to those obtainable by the graphoid axioms. Therefore, this is a purely graphical proof technique for conditional independence.
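For reference, the graphoid axioms as standardly stated (e.g. in Pearl, 1988), writing $I(X;Z;Y)$ for "X is independent of Y given Z"; intersection holds only for strictly positive distributions:

Symmetry: $I(X;Z;Y) \Rightarrow I(Y;Z;X)$
Decomposition: $I(X;Z;Y \cup W) \Rightarrow I(X;Z;Y)$
Weak union: $I(X;Z;Y \cup W) \Rightarrow I(X;Z \cup W;Y)$
Contraction: $I(X;Z;Y)$ and $I(X;Z \cup Y;W)$ together imply $I(X;Z;Y \cup W)$
Intersection: $I(X;Z \cup W;Y)$ and $I(X;Z \cup Y;W)$ together imply $I(X;Z;Y \cup W)$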
Research and Development in Intelligent Systems XXIII, 2007
Nowadays, Bayesian networks are seen by many researchers as standard tools for reasoning with uncertainty. Despite the fact that Bayesian networks are graphical representations, representing dependence and independence information, normally the emphasis of the visualisation of the reasoning process is on showing changes in the associated marginal probability distributions due to entering observations, rather than on changes in the associated graph structure. In this paper, we argue that it is possible and relevant to look at Bayesian network reasoning as reasoning with a graph structure, depicting changes in the dependence and independence information. We propose a new method that is able to modify the graphical part of a Bayesian network to bring it in accordance with available observations. In this way, Bayesian network reasoning is seen as reasoning about changing dependences and independences as reflected by changes in the graph structure.
The Symbolic Probabilistic Inference (SPI) Algorithm [D'Ambrosio, 1989] provides an efficient framework for resolving general queries on a belief network. It applies the concept of dependency-directed backward search to probabilistic inference, and is incremental with respect to both queries and observations.
Applied Logic Series, 2001
See the introduction to this volume for more on the distinction between logical and empirical Bayesianism. Such forms of Bayesianism are often referred to as 'objective' Bayesian positions, and confusion can arise because physical or empirical probability (frequency, propensity or chance) is often called 'objective' probability in order to distinguish it from Bayesian 'subjective' probability. In this chapter I will draw the latter distinction, using 'objective' to refer to empirical interpretations of causality and probability that are to do with objects external to an agent, and using 'subjective' to refer to interpretations of causality and probability that depend on the perspective of an agent subject.
2. If $v_i$ has no parents, $p(v_i \mid \pi_i)$ is just $p(v_i)$.
3. The Bayesian network independence assumption is often called the Markov or causal Markov condition.
4. The joint distribution $p$ can be determined by the direct method: $p(v_1, \ldots, v_N) = \prod_{i=1}^{N} p(v_i \mid \pi_i)$, where $\pi_i$ is the state of the direct causes of $v_i$ which is consistent with $v_1, \ldots, v_N$. Alternatively $p$ may be determined by potentially more efficient propagation algorithms. See [Pearl 1988] or [Neapolitan 1990] here and for more on the formal properties of Bayesian networks.
5. See [Williamson 2000] for more on the probabilistic approach to diagnosis.
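As a minimal sketch of footnote 4's direct method, consider a hypothetical three-variable chain C -> E1 -> E2 (the structure and numbers are invented for illustration):

# Sketch of the "direct method": the joint is the product of each node's
# probability given its parents. Hypothetical chain C -> E1 -> E2.
from itertools import product

p_C = {1: 0.3, 0: 0.7}
p_E1 = {(1, 1): 0.8, (1, 0): 0.2, (0, 1): 0.1, (0, 0): 0.9}  # (c, e1): P(E1=e1 | C=c)
p_E2 = {(1, 1): 0.6, (1, 0): 0.4, (0, 1): 0.1, (0, 0): 0.9}  # (e1, e2): P(E2=e2 | E1=e1)

def joint(c, e1, e2):
    return p_C[c] * p_E1[(c, e1)] * p_E2[(e1, e2)]

# The joint sums to 1 over all states, as it must.
print(sum(joint(c, e1, e2) for c, e1, e2 in product([0, 1], repeat=3)))  # 1.0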
Digital Signal Processing, 1998
This paper reviews and formalizes algorithms for probabilistic inference in causal probabilistic networks (CPNs), also known as Bayesian networks, and introduces Probanet, a development environment for CPNs. Information fusion in CPNs is realized by updating the joint probabilities of the variables upon the arrival of new evidence or new hypotheses. Kernel algorithms for some dominant inference methods are formalized from scattered, mathematics-oriented literature, with gaps filled in with regard to computability and completeness. Probanet has been designed and developed as a generic shell, a development environment for CPN construction and application. The design aspects and current status of Probanet are described.
Inductive Logic …, 2005
Computational Intelligence, 2017
Testing independencies is a fundamental task in reasoning with Bayesian networks (BNs). In practice, d-separation is often used for this task, since it has linear-time complexity. However, many have had difficulties understanding d-separation in BNs. An equivalent method that is easier to understand, called m-separation, transforms the problem from directed separation in BNs into classical separation in undirected graphs. Two main steps of this transformation are pruning the BN and adding undirected edges. In this paper, we propose u-separation as an even simpler method for testing independencies in a BN. Our approach also converts the problem into classical separation in an undirected graph. However, our method is based upon the novel concepts of inaugural variables and rationalization. Thereby, the primary advantage of u-separation over m-separation is that m-separation can prune unnecessarily and add superfluous edges. Our experimental results show that u-separation performs 73% fewer modifications on average than m-separation.
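For context, here is a minimal sketch of the classical moralization-based test that m-separation builds on (not the paper's u-separation); it uses networkx and an invented toy DAG:

# Sketch of the classical moralization-based independence test underlying
# m-separation (not the paper's u-separation). The toy DAG is invented.
from itertools import combinations
import networkx as nx

def moralize(dag):
    """Drop edge directions and 'marry' every pair of co-parents."""
    g = nx.Graph(dag.edges())
    g.add_nodes_from(dag.nodes())
    for v in dag:
        for a, b in combinations(dag.predecessors(v), 2):
            g.add_edge(a, b)
    return g

def independent(dag, xs, ys, zs):
    """Test whether xs is independent of ys given zs."""
    keep = set(xs) | set(ys) | set(zs)   # restrict to the ancestral part
    for v in list(keep):
        keep |= nx.ancestors(dag, v)
    moral = moralize(dag.subgraph(keep))
    moral.remove_nodes_from(zs)          # condition by deleting observed nodes
    return not any(nx.has_path(moral, x, y) for x in xs for y in ys)

dag = nx.DiGraph([("A", "C"), ("B", "C"), ("C", "D")])
print(independent(dag, {"A"}, {"B"}, set()))  # True: A, B marginally independent
print(independent(dag, {"A"}, {"B"}, {"D"}))  # False: observing D activates collider C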
arXiv preprint cs/9612101, 1996
A new method is proposed for exploiting causal independencies in exact Bayesian network inference. A Bayesian network can be viewed as representing a factorization of a joint probability into the multiplication of a set of conditional probabilities. We present a notion of causal independence that enables one to further factorize the conditional probabilities into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability. The new formulation of causal independence lets us specify the conditional probability of a variable given its parents in terms of an associative and commutative operator, such as "or", "sum" or "max", on the contribution of each parent. We start with a simple algorithm VE for Bayesian network inference that, given evidence and a query variable, uses the factorization to find the posterior distribution of the query. We show how this algorithm can be extended to exploit causal independence. Empirical studies, based on the CPCS networks for medical diagnosis, show that this method is more efficient than previous methods and allows for inference in larger networks than previous algorithms.
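To illustrate the kind of finer-grain factorization meant here, consider the familiar noisy-OR model, a special case with "or" as the combining operator (a sketch, not the paper's general formulation): each parent contributes an independent factor, so the conditional probability never has to be tabulated over all 2^n parent configurations.

# Sketch: causal independence with "or" as the combination operator
# (the classic noisy-OR). Each parent contributes an independent factor,
# so the conditional probability factorizes instead of needing a 2^n table.

def noisy_or(link_probs, parent_states):
    """P(effect=1 | parents), with link_probs[i] = P(parent i alone causes effect)."""
    p_not = 1.0
    for p, on in zip(link_probs, parent_states):
        if on:
            p_not *= (1.0 - p)  # parent i independently fails to cause the effect
    return 1.0 - p_not

print(noisy_or([0.9, 0.7], [1, 0]))  # 0.9
print(noisy_or([0.9, 0.7], [1, 1]))  # 1 - 0.1*0.3 = 0.97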
2011
While in principle probabilistic logics might be applied to solve a range of problems, in practice they are rarely applied at present. This is perhaps because they seem disparate, complicated, and computationally intractable. However, we shall argue in this programmatic paper that several approaches to probabilistic logic fit into a simple unifying framework: logically complex evidence can be used to associate probability intervals or probabilities with sentences.
Having presented both theoretical and practical reasons for artificial intelligence to use probabilistic reasoning, we now introduce the key computer technology for dealing with probabilities in AI, namely Bayesian networks. Bayesian networks (BNs) are graphical models for reasoning under uncertainty, where the nodes represent variables (discrete or continuous) and arcs represent direct connections between them. These direct connections are often causal connections. In addition, BNs model the quantitative strength of the connections between variables, allowing probabilistic beliefs about them to be updated automatically as new information becomes available. In this chapter we will describe how Bayesian networks are put together (the syntax) and how to interpret the information encoded in a network (the semantics). We will look at how to model a problem with a Bayesian network and the types of reasoning that can be performed.
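As a minimal illustration of this syntax and semantics (the network and numbers are invented): two nodes joined by a causal arc, CPTs as tables, and a diagnostic query answered by enumeration.

# Sketch of BN syntax/semantics on an invented two-node network
# Rain -> WetGrass, with a diagnostic query computed by enumeration.

p_rain = {1: 0.2, 0: 0.8}                                      # prior on the cause
p_wet = {(1, 1): 0.9, (1, 0): 0.1, (0, 1): 0.3, (0, 0): 0.7}   # P(wet | rain)

def joint(r, w):
    # Semantics: the joint is the product of each node's CPT entry.
    return p_rain[r] * p_wet[(r, w)]

# Diagnostic reasoning: P(Rain=1 | WetGrass=1) by Bayes' rule / enumeration.
posterior = joint(1, 1) / (joint(1, 1) + joint(0, 1))
print(round(posterior, 3))  # 0.18 / (0.18 + 0.24) = 0.429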
Computing Research Repository, 2010
A Bayesian network is a complete model of a set of variables and their relationships; it can be used to answer probabilistic queries about them. A Bayesian network can thus be considered a mechanism for automatically applying Bayes' theorem to complex problems. In applications of Bayesian networks, most of the work is related to probabilistic inference. Updating any variable at any node of a Bayesian network may result in evidence propagating across the network. This paper surveys various inference techniques in Bayesian networks and provides guidance for algorithm selection in probabilistic inference.
Proceedings of the Sixth UAI Bayesian Modelling Applications Workshop, Helsinki, Finland, July 9, 2008
In this paper we present a new method (EBBN) that aims at reducing the need to elicit formidable amounts of probabilities for Bayesian belief networks, by reducing the number of probabilities that need to be specified in the quantification phase. This method enables the derivation of a variable's conditional probability table (CPT) in the general case that the states of the variable are ordered and the states of each of its parent nodes can be ordered with respect to the influence they exercise. EBBN requires only a limited number of probability assessments from experts to determine a variable's full CPT, using piecewise linear interpolation. The number of probabilities to be assessed by this method is linear in the number of conditioning variables. EBBN's performance was compared with the results achieved by applying both the normal copula vine approach from Hanea & Kurowicka (2007) and a simple uniform distribution.
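As a minimal sketch of the interpolation idea (not the EBBN algorithm itself; the anchor assessments and names are invented): an expert assesses P(Y = high) at a few extreme parent configurations, and intermediate CPT entries are filled in by piecewise linear interpolation.

# Sketch of filling a CPT by piecewise linear interpolation between a few
# expert-assessed anchor points (illustrative, not the EBBN algorithm itself).
import numpy as np

# Expert assesses P(Y=high) at three combined-influence levels only:
# all parents at their weakest (0.0), a midpoint (0.5), all strongest (1.0).
anchor_x = [0.0, 0.5, 1.0]
anchor_p = [0.05, 0.40, 0.95]

def p_high(influence):
    """P(Y=high) for a parent configuration summarized by influence in [0, 1]."""
    return float(np.interp(influence, anchor_x, anchor_p))

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(x, round(p_high(x), 3))  # 0.05, 0.225, 0.4, 0.675, 0.95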
We investigate probabilistic propositional logic as a way of expressing and reasoning about uncertainty. In contrast to Bayesian networks, a logical approach can easily cope with incomplete information like probabilities that are missing or only known to lie in some interval. However, probabilistic propositional logic as described e.g. by Halpern [9], has no way of expressing conditional independence, which is important for compact specification in many cases. We define a logic with conditional independence formulae. We give an axiomatization which we show to be complete for the kind of inferences allowed by Bayesian networks, while still being suitable for reasoning under incomplete information.