Intelligent systems in an open world must reason about many interacting entities related to each other in diverse ways and having uncertain features and relationships. Traditional probabilistic languages lack the expressive power to handle relational domains. Classical first-order logic is sufficiently expressive, but lacks a coherent plausible reasoning capability. Recent years have seen the emergence of a variety of approaches to integrating first-order logic, probability, and machine learning. This paper presents Multi-entity Bayesian networks (MEBN), a formal system that integrates First Order Logic (FOL) with Bayesian probability theory. MEBN extends ordinary Bayesian networks to allow representation of graphical models with repeated sub-structures, and can express a probability distribution over models of any consistent, finitely axiomatizable first-order theory. We present the logic using an example inspired by the Paramount Series Star Trek.
Uncertainty in Artificial …, 2005
Although classical first-order logic is the de facto standard logical foundation for artificial intelligence, the lack of a built-in, semantically grounded capability for reasoning under uncertainty renders it inadequate for many important classes of problems. Probability is the best-understood and most widely applied formalism for computational scientific reasoning under uncertainty. Increasingly expressive languages are emerging for which the fundamental logical basis is probability. This paper presents Multi-Entity Bayesian Networks (MEBN), a first-order language for specifying probabilistic knowledge bases as parameterized fragments of Bayesian networks. MEBN fragments (MFrags) can be instantiated and combined to form arbitrarily complex graphical probability models. An MFrag represents probabilistic relationships among a conceptually meaningful group of uncertain hypotheses. Thus, MEBN facilitates representation of knowledge at a natural level of granularity. The semantics of MEBN assigns a probability distribution over interpretations of an associated classical first-order theory on a finite or countably infinite domain. Bayesian inference provides both a proof theory for combining prior knowledge with observations, and a learning theory for refining a representation as evidence accrues. A proof is given that MEBN can represent a probability distribution on interpretations of any finitely axiomatizable first-order theory.
Draft Version, 2005
An introduction is provided to Multi-Entity Bayesian Networks (MEBN), a logic system that integrates First Order Logic (FOL) with Bayesian probability theory. MEBN extends ordinary Bayesian networks to allow representation of graphical models with repeated sub-structures. Knowledge is encoded as a collection of Bayesian network fragments (MFrags) that can be instantiated and combined to form highly complex situation-specific Bayesian networks. A MEBN theory (MTheory) implicitly represents a joint probability distribution over possibly unbounded numbers of hypotheses, and uses Bayesian learning to refine a knowledge base as observations accrue. MEBN provides a logical foundation for the emerging collection of highly expressive probability-based languages. A running example illustrates the representation and reasoning power of the MEBN formalism.
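The core move described above, instantiating fragment templates for concrete entities to assemble a situation-specific network, can be illustrated with a minimal sketch. Everything here (the `MFrag` class, the random-variable names, the starship entities echoing the paper's Star Trek example) is invented for illustration and is not part of any MEBN implementation:

```python
# Hedged sketch of the MFrag idea: a parameterized network fragment whose
# random variables are templates over entity placeholders. Grounding the
# placeholders with concrete entities yields the nodes of a
# situation-specific Bayesian network. All names are illustrative.

class MFrag:
    """A fragment template: resident random variables over typed placeholders."""
    def __init__(self, name, placeholders, resident_rvs):
        self.name = name
        self.placeholders = placeholders   # e.g. ["s"] for a starship variable
        self.resident_rvs = resident_rvs   # RV patterns, e.g. "HarmPotential({s})"

    def instantiate(self, bindings):
        """Ground every resident RV for one tuple of entities."""
        return [rv.format(**bindings) for rv in self.resident_rvs]

danger = MFrag("DangerToSelf", ["s"],
               ["HarmPotential({s})", "OpSpec({s})"])

# Instantiating the fragment once per entity produces the grounded nodes.
nodes = []
for ship in ["Enterprise", "Unknown1"]:
    nodes.extend(danger.instantiate({"s": ship}))
print(nodes)
```

A real MEBN engine would also merge the instantiated fragments' local distributions and context constraints; this sketch covers only the grounding step.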
2005
Uncertainty is a fundamental and irreducible aspect of our knowledge about the world. Until recently, classical first-order logic has reigned as the de facto standard logical foundation for artificial intelligence. The lack of a built-in, semantically grounded capability for reasoning under uncertainty renders classical first-order logic inadequate for many important classes of problems. General-purpose languages are beginning to emerge for which the fundamental logical basis is probability. Increasingly expressive probabilistic languages demand a theoretical foundation that fully integrates classical first-order logic and probability. In first-order Bayesian logic (FOBL), probability distributions are defined over interpretations of classical first-order axiom systems. Predicates and functions of a classical first-order theory correspond to random variables in the corresponding first-order Bayesian theory. This is a natural correspondence, given that random variables are formalized in mathematical statistics as measurable functions on a probability space. A formal system called Multi-Entity Bayesian Networks (MEBN) is presented for composing distributions on interpretations by instantiating and combining parameterized fragments of directed graphical models. A construction is given of a MEBN theory that assigns a non-zero probability to any satisfiable sentence in classical first-order logic. By conditioning this distribution on consistent sets of sentences, FOBL can represent a probability distribution over interpretations of any finitely axiomatizable first-order theory, as well as over interpretations of infinite axiom sets when a limiting distribution exists. FOBL is inherently open, having the ability to incorporate new axioms into existing theories, and to modify probabilities in the light of evidence.
Bayesian inference provides both a proof theory for combining prior knowledge with observations, and a learning theory for refining a representation as evidence accrues. The results of this paper provide a logical foundation for the rapidly evolving literature on first-order Bayesian knowledge representation, and point the way toward Bayesian languages suitable for general-purpose knowledge representation and computing. Because FOBL contains classical first-order logic as a deterministic subset, it is a natural candidate as a universal representation for integrating domain ontologies expressed in languages based on classical first-order logic or subsets thereof.
1995
We present a probabilistic logic programming framework that allows the representation of conditional probabilities. While conditional probabilities are the most commonly used method for representing uncertainty in probabilistic expert systems, they have been largely neglected by work in quantitative logic programming. We define a fixpoint theory, declarative semantics, and proof procedure for the new class of probabilistic logic programs. Compared to other approaches to quantitative logic programming, we provide a true probabilistic framework with potential applications in probabilistic expert systems and decision support systems. We also discuss the relationship between such programs and Bayesian networks, thus moving toward a unification of two major approaches to automated reasoning.
Fundamenta Informaticae
Although probabilistic knowledge representations and probabilistic reasoning have by now secured their position in artificial intelligence, it is not uncommon to encounter misunderstanding of their foundations and lack of appreciation for their strengths. This paper describes five properties of probabilistic knowledge representations that are particularly useful in intelligent systems research. (1) Directed probabilistic graphs capture essential qualitative properties of a domain, along with its causal structure. (2) Concepts such as relevance and conflicting evidence have a natural, formally sound meaning in probabilistic models. (3) Probabilistic schemes support sound reasoning at a variety of levels ranging from purely quantitative to purely qualitative levels. (4) The role of probability theory in reasoning under uncertainty can be compared to the role of first-order logic in reasoning under certainty. Probabilistic knowledge representations provide insight into the foundations of logic-based schemes, showing their difficulties in highly uncertain domains. Finally, (5) probabilistic knowledge representations support automatic generation of understandable explanations of inference for the sake of user interfaces to intelligent systems.
Although probabilistic knowledge representations and probabilistic reasoning have by now secured their position in intelligent systems research, it is not uncommon to encounter misunderstanding of their foundations and lack of appreciation for their strengths. This paper discusses five issues related to intelligent systems research and shows how they are addressed by probabilistic knowledge representations. Directed probabilistic graphs capture essential qualitative properties of a domain, along with its causal structure. Concepts such as relevance and conflicting evidence have a natural, formally sound meaning in probabilistic models. Probabilistic schemes support sound reasoning at a variety of levels ranging from purely quantitative to purely qualitative levels. Probabilistic knowledge representations provide insight into the foundations of logic-based schemes for reasoning under uncertainty, showing their difficulties in highly uncertain domains. Finally, probabilistic knowledge representations support automatic generation of understandable explanations of inference for the sake of user interfaces to intelligent systems.
2006
In this paper, we describe the syntax and semantics for a probabilistic relational language (PRL). PRL is a recasting of recent work in Probabilistic Relational Models (PRMs) into a logic programming framework. We show how to represent varying degrees of complexity in the semantics including attribute uncertainty, structural uncertainty and identity uncertainty. Our approach is similar in spirit to the work in Bayesian Logic Programs (BLPs), and Logical Bayesian Networks (LBNs).
Having presented both theoretical and practical reasons for artificial intelligence to use probabilistic reasoning, we now introduce the key computer technology for dealing with probabilities in AI, namely Bayesian networks. Bayesian networks (BNs) are graphical models for reasoning under uncertainty, where the nodes represent variables (discrete or continuous) and arcs represent direct connections between them. These direct connections are often causal connections. In addition, BNs model the quantitative strength of the connections between variables, allowing probabilistic beliefs about them to be updated automatically as new information becomes available. In this chapter we will describe how Bayesian networks are put together (the syntax) and how to interpret the information encoded in a network (the semantics). We will look at how to model a problem with a Bayesian network and the types of reasoning that can be performed.
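The belief-updating mechanism sketched above can be made concrete with a two-node network. The structure (Rain causing WetGrass) and all probability numbers below are invented for illustration; inference is done by brute-force enumeration of the joint distribution rather than any particular propagation algorithm:

```python
# Minimal sketch of Bayesian-network belief updating on a hypothetical
# two-node network Rain -> WetGrass, with made-up probability tables.

# Prior P(Rain) and conditional table P(WetGrass | Rain)
P_rain = {True: 0.2, False: 0.8}
P_wet_given_rain = {True:  {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def joint(rain, wet):
    """P(Rain=rain, WetGrass=wet) via the chain rule along the arc."""
    return P_rain[rain] * P_wet_given_rain[rain][wet]

# Observing WetGrass=True: condition and renormalize to get the posterior.
evidence_prob = joint(True, True) + joint(False, True)
posterior_rain = joint(True, True) / evidence_prob

print(round(posterior_rain, 4))   # belief in Rain rises from 0.2 to ~0.53
```

The same condition-and-renormalize step is what BN inference algorithms perform efficiently on much larger graphs, exploiting the independence structure encoded by the arcs.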
Inductive Logic …, 2005
2011
While in principle probabilistic logics might be applied to solve a range of problems, in practice they are rarely applied at present. This is perhaps because they seem disparate, complicated, and computationally intractable. However, we shall argue in this programmatic paper that several approaches to probabilistic logic fit into a simple unifying framework: logically complex evidence can be used to associate probability intervals or probabilities with sentences.
1990
We describe how to combine probabilistic logic and Bayesian networks to obtain a new framework ("Bayesian logic") for dealing with uncertainty and causal relationships in an expert system. Probabilistic logic, invented by Boole, is a technique for drawing inferences from uncertain propositions for which there are no independence assumptions. A Bayesian network is a "belief net" that can represent complex conditional independence assumptions. We show how to solve inference problems in Bayesian logic by applying Benders decomposition to a nonlinear programming formulation. We also show that the number of constraints grows only linearly with the problem size for a large class of networks.
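The "no independence assumptions" character of Boole's probabilistic logic means a conjunction's probability is only bounded by its marginals, not determined. A minimal sketch of this, using the classical Fréchet bounds with illustrative numbers (the abstract's Benders-decomposition machinery solves the general, much harder version of such problems):

```python
# Without an independence assumption, P(A and B) is constrained only to an
# interval by the marginals P(A) and P(B): the classical Frechet bounds.

def conjunction_bounds(p_a, p_b):
    """Tight bounds on P(A and B) given only the marginals."""
    lower = max(0.0, p_a + p_b - 1.0)
    upper = min(p_a, p_b)
    return lower, upper

# Illustrative marginals: P(A) = 0.7, P(B) = 0.6.
lo, hi = conjunction_bounds(0.7, 0.6)
print(round(lo, 10), hi)   # any value in [0.3, 0.6] is consistent
```

Under independence the answer would collapse to the single point 0.42; probabilistic logic instead propagates the whole interval, which is why its general inference problems become (non)linear programs.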
2006
Ontologies have become ubiquitous in current-generation information systems. An ontology is an explicit, formal representation of the entities and relationships that can exist in a domain of application. Following a well-trodden path, initial research in computational ontology has neglected uncertainty, developing almost exclusively within the framework of classical logic. As appreciation grows of the limitations of ontology formalisms that cannot represent uncertainty, the demand from user communities increases for ontology formalisms with the power to express uncertainty. Support for uncertainty is essential for interoperability, knowledge sharing, and knowledge reuse. Bayesian ontologies are used to describe knowledge about a domain with its associated uncertainty in a principled, structured, sharable, and machine-understandable way. This paper considers Multi-Entity Bayesian Networks (MEBN) as a logical basis for Bayesian ontologies, and describes PR-OWL, a MEBN-based probabilistic extension to the ontology language OWL. To illustrate the potentialities of Bayesian probabilistic ontologies in the development of AI systems, we present a case study in information security, in which ontology development played a key role.
2005
Possibilistic logic and Bayesian networks have provided advantageous methodologies and techniques for computer-based knowledge representation. This paper proposes a framework that combines these two disciplines to exploit their respective advantages in uncertain and imprecise knowledge representation problems. The proposed framework is based on possibilistic logic, in which Bayesian nodes and their properties are represented by local necessity-valued knowledge bases. Data in properties are interpreted as sets of valuated formulas. In our contribution, possibilistic Bayesian networks have a qualitative part and a quantitative part, represented by local knowledge bases. The general idea is to study how a fusion of these two formalisms would permit a compact representation and efficient solution of knowledge representation problems. We show how to apply possibility and necessity measures to the problem of knowledge representation with large-scale data.
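The possibility and necessity measures invoked above have a compact definition over a finite set of worlds. A minimal sketch, with an invented possibility distribution purely for illustration:

```python
# Possibility and necessity measures over a finite set of worlds, as used
# in possibilistic logic. The distribution pi below is illustrative only;
# by convention at least one world is fully possible (max pi = 1).

pi = {"w1": 1.0, "w2": 0.7, "w3": 0.3}

def possibility(event):
    """Pi(A) = max of pi over the worlds in A."""
    return max(pi[w] for w in event)

def necessity(event):
    """N(A) = 1 - Pi(not A): A is certain to the degree its complement is impossible."""
    complement = set(pi) - set(event)
    return 1.0 - (possibility(complement) if complement else 0.0)

A = {"w1", "w2"}
print(possibility(A), necessity(A))   # 1.0 0.7
```

Unlike a probability measure, Pi is maxitive rather than additive, which is what lets possibilistic frameworks encode imprecise (interval-like) knowledge alongside the quantitative part carried by the Bayesian structure.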
An important research enterprise for the Artificial Intelligence community since the 1970s has been the design of expert or "knowledge-based" systems. These programs used explicitly encoded human knowledge, often in the form of a production rule system, to solve problems in the areas of diagnostics and prognostics. The earliest research/development program in expert systems was created by Professor Edward Feigenbaum at Stanford University. Because the expert system often addresses problems that are imprecise and not fully posed, with data sets that are often inexact and unclear, the role of various forms of probabilistic support for reasoning is important.
Applied Sciences
Multi-Entity Bayesian Network (MEBN) is a knowledge representation formalism combining Bayesian Networks (BNs) with First-Order Logic (FOL). MEBN has sufficient expressive power for general-purpose knowledge representation and reasoning, and is the logical basis of Probabilistic Web Ontology Language (PR-OWL), a representation language for probabilistic ontologies. Developing an MEBN model to support a given application is a challenge, requiring definition of entities, relationships, random variables, conditional dependence relationships, and probability distributions. When available, data can be invaluable both to improve performance and to streamline development. By far the most common format for available data is the relational database (RDB). Relational databases describe and organize data according to the Relational Model (RM). Developing an MEBN model from data stored in an RDB therefore requires mapping between the two formalisms. This paper presents MEBN-RM, a set of mapping...
1989
Causal probabilistic networks have proved to be a useful knowledge representation tool for modelling domains where causal relations in a broad sense are a natural way of relating domain objects and where uncertainty is inherent in these relations. This paper outlines an implementation, the HUGIN shell, for handling a domain model expressed by a causal probabilistic network. The only topological restriction imposed on the network is that it must not contain any directed loops. The approach is illustrated step by step by solving a genetic breeding problem. A graph representation of the domain model is interactively created by using instances of the basic network components, nodes and arcs, as building blocks. This structure, together with the quantitative relations between nodes and their immediate causes expressed as conditional probabilities, are automatically transformed into a tree structure, a junction tree. Here a computationally efficient and conceptually simple algebra of Bayesian belief universes supports incorporation of new evidence, propagation of information, and calculation of revised beliefs in the states of the nodes in the network. Finally, as an example of a real world application, MUNIN, an expert system for electromyography, is discussed.