2005
Uncertainty is a fundamental and irreducible aspect of our knowledge about the world. Until recently, classical first-order logic has reigned as the de facto standard logical foundation for artificial intelligence. The lack of a built-in, semantically grounded capability for reasoning under uncertainty renders classical first-order logic inadequate for many important classes of problems. General-purpose languages are beginning to emerge for which the fundamental logical basis is probability. Increasingly expressive probabilistic languages demand a theoretical foundation that fully integrates classical first-order logic and probability. In first-order Bayesian logic (FOBL), probability distributions are defined over interpretations of classical first-order axiom systems. Predicates and functions of a classical first-order theory correspond to random variables in the corresponding first-order Bayesian theory. This is a natural correspondence, given that random variables are formalized in mathematical statistics as measurable functions on a probability space. A formal system called Multi-Entity Bayesian Networks (MEBN) is presented for composing distributions on interpretations by instantiating and combining parameterized fragments of directed graphical models. A construction is given of a MEBN theory that assigns a non-zero probability to any satisfiable sentence in classical first-order logic. By conditioning this distribution on consistent sets of sentences, FOBL can represent a probability distribution over interpretations of any finitely axiomatizable first-order theory, as well as over interpretations of infinite axiom sets when a limiting distribution exists. FOBL is inherently open, having the ability to incorporate new axioms into existing theories, and to modify probabilities in the light of evidence. Bayesian inference provides both a proof theory for combining prior knowledge with observations, and a learning theory for refining a representation as evidence accrues. The results of this paper provide a logical foundation for the rapidly evolving literature on first-order Bayesian knowledge representation, and point the way toward Bayesian languages suitable for general-purpose knowledge representation and computing. Because FOBL contains classical first-order logic as a deterministic subset, it is a natural candidate as a universal representation for integrating domain ontologies expressed in languages based on classical first-order logic or subsets thereof.
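The correspondence between predicates and random variables described above can be made concrete with a small worked formulation; the notation below (a measure μ over a set Ω of interpretations) is an illustrative reconstruction, not the paper's own.

```latex
% Sketch: a probability measure over interpretations of a first-order language.
\[
  P(\varphi) \;=\; \mu\bigl(\{\omega \in \Omega : \omega \models \varphi\}\bigr),
  \qquad
  V_{R(a)}(\omega) \;=\;
  \begin{cases}
    1 & \text{if } \omega \models R(a),\\
    0 & \text{otherwise,}
  \end{cases}
\]
% so each ground atom R(a) induces a (measurable) random variable, and
% conditioning on a consistent axiom set A is ordinary Bayesian conditioning:
\[
  P(\varphi \mid A) \;=\;
  \frac{\mu\bigl(\mathrm{Mod}(\varphi) \cap \mathrm{Mod}(A)\bigr)}
       {\mu\bigl(\mathrm{Mod}(A)\bigr)},
  \qquad \mu\bigl(\mathrm{Mod}(A)\bigr) > 0.
\]
```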
Although classical first-order logic is the de facto standard logical foundation for artificial intelligence, the lack of a built-in, semantically grounded capability for reasoning under uncertainty renders it inadequate for many important classes of problems. Probability is the best-understood and most widely applied formalism for computational scientific reasoning under uncertainty. Increasingly expressive languages are emerging for which the fundamental logical basis is probability. This paper presents Multi-Entity Bayesian Networks (MEBN), a first-order language for specifying probabilistic knowledge bases as parameterized fragments of Bayesian networks. MEBN fragments (MFrags) can be instantiated and combined to form arbitrarily complex graphical probability models. An MFrag represents probabilistic relationships among a conceptually meaningful group of uncertain hypotheses. Thus, MEBN facilitates representation of knowledge at a natural level of granularity. The semantics of MEBN assigns a probability distribution over interpretations of an associated classical first-order theory on a finite or countably infinite domain. Bayesian inference provides both a proof theory for combining prior knowledge with observations, and a learning theory for refining a representation as evidence accrues. A proof is given that MEBN can represent a probability distribution on interpretations of any finitely axiomatizable first-order theory.
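To make the MFrag idea tangible, here is a minimal sketch in Python, assuming a toy representation in which node templates are strings with entity placeholders. The class names, templates, and Star Trek entities are hypothetical; a real MEBN implementation would also carry local distributions and context constraints.

```python
# A minimal sketch (all names hypothetical) of the MFrag idea: a parameterized
# Bayesian-network fragment that is instantiated per entity and stitched into
# one ground graphical model.
from itertools import product

class MFrag:
    """A BN fragment with resident nodes parameterized by entity variables."""
    def __init__(self, name, entity_vars, resident, parents, cpt=None):
        self.name = name
        self.entity_vars = entity_vars   # e.g. ["s"] for a starship variable
        self.resident = resident         # node template, e.g. "HarmPotential(s)"
        self.parents = parents           # parent node templates
        self.cpt = cpt                   # local distribution (omitted in this toy)

    def instantiate(self, binding):
        """Ground the templates by substituting entities for variables."""
        def ground(template):
            for var, ent in binding.items():
                template = template.replace(f"({var})", f"({ent})")
            return template
        return ground(self.resident), [ground(p) for p in self.parents]

def build_ground_network(mfrags, entities):
    """Combine instances of several MFrags into one ground edge list."""
    edges = []
    for frag in mfrags:
        for combo in product(entities, repeat=len(frag.entity_vars)):
            binding = dict(zip(frag.entity_vars, combo))
            child, parents = frag.instantiate(binding)
            edges += [(p, child) for p in parents]
    return edges

frag = MFrag("Danger", ["s"], "HarmPotential(s)", ["HostileIntent(s)"])
print(build_ground_network([frag], ["Enterprise", "Romulan1"]))
# [('HostileIntent(Enterprise)', 'HarmPotential(Enterprise)'),
#  ('HostileIntent(Romulan1)', 'HarmPotential(Romulan1)')]
```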
Lecture Notes in Computer Science, 2007
This paper surveys first-order probabilistic languages (FOPLs), which combine the expressive power of first-order logic with a probabilistic treatment of uncertainty. We provide a taxonomy that helps make sense of the profusion of FOPLs that have been proposed over the past fifteen years. We also emphasize the importance of representing uncertainty not just about the attributes and relations of a fixed set of objects, but also about what objects exist. This leads us to Bayesian logic, or BLOG, a language for defining probabilistic models with unknown objects. We give a brief overview of BLOG syntax and semantics, and emphasize some of the design decisions that distinguish it from other languages. Finally, we consider the challenge of constructing FOPL models automatically from data.
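The "unknown objects" point is easiest to see as a generative process. The sketch below is plain Python rather than actual BLOG syntax: the number of objects is itself sampled, so hypotheses can differ on how many aircraft produced the observed blips. All names and parameters are invented for illustration.

```python
import math
import random

def poisson(lam):
    """Knuth's sampling algorithm, used here to avoid external dependencies."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def sample_world(rate=3.0, obs_noise=0.5):
    # Number-uncertainty: the domain size itself is a random variable.
    n_aircraft = poisson(rate)
    positions = [random.gauss(0.0, 10.0) for _ in range(n_aircraft)]
    # Each blip is a noisy observation generated by one existing aircraft.
    blips = [p + random.gauss(0.0, obs_noise) for p in positions]
    return n_aircraft, positions, blips

print(sample_world())
```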
Journal of Algorithms, 2008
This paper develops connections between objective Bayesian epistemology (which holds that the strengths of an agent's beliefs should be representable by probabilities, should be calibrated with evidence of empirical probability, and should otherwise be equivocal) and probabilistic logic. After introducing objective Bayesian epistemology over propositional languages, the formalism is extended to handle predicate languages. A rather general probabilistic logic is formulated and then given a natural semantics in terms of objective Bayesian epistemology. The machinery of objective Bayesian nets and objective credal nets is introduced and this machinery is applied to provide a calculus for probabilistic logic that meshes with the objective Bayesian semantics.
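A minimal numerical sketch of the "otherwise equivocal" requirement, assuming a toy domain of two propositions and an invented evidential constraint P(A) = 0.7: among all distributions meeting the constraint, pick the maximum-entropy one.

```python
# Objective Bayesian recipe, toy version: maximize entropy subject to
# evidential constraints. The constraint value 0.7 is assumed for illustration.
import numpy as np
from scipy.optimize import minimize

# Four atomic states of two propositions A, B: (A&B, A&~B, ~A&B, ~A&~B).
def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(np.sum(p * np.log(p)))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},    # normalization
    {"type": "eq", "fun": lambda p: p[0] + p[1] - 0.7},  # evidence: P(A) = 0.7
]
res = minimize(neg_entropy, x0=np.full(4, 0.25),
               bounds=[(0.0, 1.0)] * 4, constraints=constraints)
print(res.x)  # ~ [0.35, 0.35, 0.15, 0.15]: maximally equivocal given P(A)=0.7
```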
Synthese, 2008
Objective Bayesian probability is often defined over rather simple domains, e.g., finite event spaces or propositional languages. This paper investigates the extension of objective Bayesianism to first-order logical languages. It is argued that the objective Bayesian should choose a probability function, from all those that satisfy constraints imposed by background knowledge, that is closest to a particular frequency-induced probability function which generalises the λ = 0 function of Carnap's continuum of inductive methods.
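For readers without Carnap's continuum at hand, the family referred to above is, for k outcome categories (a standard statement of the continuum, not quoted from this paper):

```latex
\[
  P\bigl(a_{n+1} \text{ is of type } Q \,\big|\, n_Q \text{ of the first } n \text{ were}\bigr)
  \;=\; \frac{n_Q + \lambda/k}{n + \lambda},
\]
% so \lambda = 0 gives the "straight rule" n_Q / n: the frequency-induced
% function that the proposal above takes as the target of calibration.
```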
2006
Ontologies have become ubiquitous in current-generation information systems. An ontology is an explicit, formal representation of the entities and relationships that can exist in a domain of application. Following a well-trodden path, initial research in computational ontology has neglected uncertainty, developing almost exclusively within the framework of classical logic. As appreciation grows of the limitations of ontology formalisms that cannot represent uncertainty, the demand from user communities increases for ontology formalisms with the power to express uncertainty. Support for uncertainty is essential for interoperability, knowledge sharing, and knowledge reuse. Bayesian ontologies are used to describe knowledge about a domain with its associated uncertainty in a principled, structured, sharable, and machine-understandable way. This paper considers Multi-Entity Bayesian Networks (MEBN) as a logical basis for Bayesian ontologies, and describes PR-OWL, a MEBN-based probabilistic extension to the ontology language OWL. To illustrate the potential of Bayesian probabilistic ontologies in the development of AI systems, we present a case study in information security, in which ontology development played a key role.
2008
The DL-Lite family of tractable description logics lies between the semantic web languages RDFS and OWL Lite. In this paper, we present a probabilistic generalization of the DL-Lite description logics, which is based on Bayesian networks. As an important feature, the new probabilistic description logics allow for flexibly combining terminological and assertional pieces of probabilistic knowledge.
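A toy sketch of the combination being described, with invented medical names: each axiom is annotated with a context over Bayesian-network variables, and the axiom's probability is the BN probability of that context.

```python
# Toy model (hypothetical names): a two-variable BN with P(Flu) and
# P(Fever | Flu); an annotated axiom holds in exactly the BN worlds
# satisfying its context.
from itertools import product

p_flu = {True: 0.1, False: 0.9}
p_fever_given_flu = {True:  {True: 0.8, False: 0.2},
                     False: {True: 0.1, False: 0.9}}

def world_prob(flu, fever):
    return p_flu[flu] * p_fever_given_flu[flu][fever]

def context_prob(context):
    """P(context) = sum of probabilities of the BN worlds satisfying it."""
    return sum(world_prob(flu, fever)
               for flu, fever in product([True, False], repeat=2)
               if all({"Flu": flu, "Fever": fever}[v] == val
                      for v, val in context.items()))

# A terminological axiom annotated with context {Flu: true} holds exactly
# in worlds where Flu is true; its probability is that of the context.
print(context_prob({"Flu": True}))                  # 0.1
print(context_prob({"Flu": True, "Fever": True}))   # ~ 0.08
```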
Discrete Applied Mathematics, 1992
PAGODA (Probabilistic Autonomous Goal-Directed Agent) is a model for autonomous learning in probabilistic domains [desJardins, 1992].
2005
Possibilistic logic and Bayesian networks have provided advantageous methodologies and techniques for computer-based knowledge representation. This paper proposes a framework that combines these two disciplines to exploit their respective advantages in uncertain and imprecise knowledge representation problems. The proposed framework is a possibilistic logic based one in which Bayesian nodes and their properties are represented by local necessity-valued knowledge bases. Data in properties are interpreted as sets of valuated formulas. In our contribution, possibilistic Bayesian networks have a qualitative part and a quantitative part, represented by local knowledge bases. The general idea is to study how fusing these two formalisms permits a compact representation that solves knowledge representation problems efficiently. We show how to apply possibility and necessity measures to the problem of knowledge representation with large-scale data.
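The necessity-valued knowledge bases mentioned above can be illustrated with standard possibilistic logic machinery. The formulas and degrees below are invented; the min rule is the usual property N(φ ∧ ψ) = min(N(φ), N(ψ)) of necessity measures, not a construction specific to this paper.

```python
# Sketch: a necessity-valued knowledge base. Each formula carries a lower
# bound on its necessity; a chain of inferences is only as certain as its
# least certain premise.
kb = {
    "bird(tweety)": 1.0,              # fully certain
    "bird(x) -> flies(x)": 0.8,       # fairly certain default rule
    "penguin(x) -> ~flies(x)": 0.95,  # stronger exception rule
}

def necessity_of_chain(*formulas):
    """N(f1 & ... & fn) = min_i N(fi): the weakest link bounds the chain."""
    return min(kb[f] for f in formulas)

# Inferring flies(tweety) through the default rule inherits the weakest bound.
print(necessity_of_chain("bird(tweety)", "bird(x) -> flies(x)"))  # 0.8
```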
2015
Bayesian ontology languages are a family of probabilistic ontology languages that allow encoding probabilistic information over the axioms of an ontology with the help of a Bayesian network. The Bayesian ontology language BEL is an extension of the lightweight Description Logic (DL) EL within the above-mentioned framework. We present the system BORN, which implements probabilistic subsumption reasoning for BEL.
2012
We present DISPONTE, a semantics for probabilistic ontologies that is based on the distribution semantics for probabilistic logic programs. In DISPONTE the axioms of a probabilistic ontology can be annotated with an epistemic or a statistical probability. The epistemic probability represents a degree of confidence in the axiom, while the statistical probability considers the populations to which the axiom is applied.
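Since DISPONTE inherits the distribution semantics, a query's probability is a sum over "worlds" obtained by independently including or excluding each annotated axiom. The sketch below uses a toy two-axiom knowledge base and a stand-in entailment check; real systems call a DL reasoner here.

```python
# Distribution-semantics sketch (hypothetical axioms and probabilities):
# each annotated axiom is in or out independently; the query's probability
# is the total probability of the axiom subsets that entail it.
from itertools import product

axioms = [("bird(tweety)", 0.9), ("bird ⊑ flies", 0.7)]

def entails_query(included):
    # Toy stand-in for a DL reasoner: the query "flies(tweety)" follows
    # exactly when both axioms are present.
    return all(included)

def query_prob():
    total = 0.0
    for included in product([True, False], repeat=len(axioms)):
        world_p = 1.0
        for (_, p), inc in zip(axioms, included):
            world_p *= p if inc else (1.0 - p)
        if entails_query(included):
            total += world_p
    return total

print(query_prob())  # 0.9 * 0.7 = 0.63
```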
The interest in combining probability with logics for modeling the world has rapidly increased in the last few years. One of the most effective approaches is the Distribution Semantics, which has been adopted by many logic programming languages and by Description Logics. In this paper, we illustrate the work we have done in this research field by presenting a probabilistic semantics for description logics together with reasoning and learning algorithms. In particular, we present in detail the system TRILL^P, implemented in Prolog, which computes the probability of queries with respect to probabilistic knowledge bases. Note: An extended abstract / full version of a paper accepted to be presented at the Doctoral
PAGODA (Probabilistic Autonomous Goal-Directed Agent) is a model for autonomous learning in probabilistic domains [desJardins, 1992] that incorporates innovative techniques for using the agent's existing knowledge to guide and constrain the learning process and for representing, reasoning with, and learning probabilistic knowledge. This paper describes the probabilistic representation and inference mechanism used in PAGODA. PAGODA forms theories about the effects of its actions and the world state on the environment over time. These theories are represented as conditional probability distributions. A restriction is imposed on the structure of the theories that allows the inference mechanism to find a unique predicted distribution for any action and world state description. These restricted theories are called uniquely predictive theories. The inference mechanism, Probability Combination using Independence (PCI), uses minimal independence assumptions to combine the probabilities ...
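The abstract does not spell out PCI's combination rule, so the sketch below shows a generic way independence assumptions let separately elicited conditionals be combined: a naive-Bayes-style odds product. This illustrates the general technique only; it is not claimed to be PCI's exact formula.

```python
# Combining conditional probabilities under a conditional-independence
# assumption: each observation contributes a likelihood-ratio factor to
# the posterior odds.
def combine(prior, conditionals):
    """P(s | e1..en) via odds, assuming the e_i are independent given s."""
    prior_odds = prior / (1.0 - prior)
    odds = prior_odds
    for p in conditionals:                        # each p is P(s | e_i)
        odds *= (p / (1.0 - p)) / prior_odds      # likelihood-ratio update
    return odds / (1.0 + odds)

# Two moderately informative observations reinforce each other.
print(combine(0.5, [0.7, 0.8]))  # ~ 0.903
```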
1995
We present a probabilistic logic programming framework that allows the representation of conditional probabilities. While conditional probabilities are the most commonly used method for representing uncertainty in probabilistic expert systems, they have been largely neglected by work in quantitative logic programming. We define a fixpoint theory, declarative semantics, and proof procedure for the new class of probabilistic logic programs. Compared to other approaches to quantitative logic programming, we provide a true probabilistic framework with potential applications in probabilistic expert systems and decision support systems. We also discuss the relationship between such programs and Bayesian networks, thus moving toward a unification of two major approaches to automated reasoning.
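A toy sketch of the fixpoint flavor of semantics: iterate the rules, propagating probabilities until nothing changes. The rules, the product combination, and the independence assumption are all invented for illustration; the paper's actual fixpoint theory is more general.

```python
# Tiny probabilistic logic program: rules annotated with the conditional
# probability of the head given the body, evaluated to a fixpoint.
rules = [
    # (head, body_atoms, conditional probability of head given body)
    ("wet", ["rain"], 0.9),
    ("slippery", ["wet"], 0.8),
]
facts = {"rain": 0.7}

def fixpoint(rules, facts, eps=1e-9):
    probs = dict(facts)
    while True:
        changed = False
        for head, body, cp in rules:
            if all(b in probs for b in body):
                body_p = 1.0
                for b in body:
                    body_p *= probs[b]   # independence assumed for the sketch
                new_p = cp * body_p
                if abs(probs.get(head, 0.0) - new_p) > eps:
                    probs[head] = new_p
                    changed = True
        if not changed:
            return probs

print(fixpoint(rules, facts))
# {'rain': 0.7, 'wet': 0.63, 'slippery': 0.504}
```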
2011
While in principle probabilistic logics might be applied to solve a range of problems, in practice they are rarely applied at present. This is perhaps because they seem disparate, complicated, and computationally intractable. However, we shall argue in this programmatic paper that several approaches to probabilistic logic fit into a simple unifying framework: logically complex evidence can be used to associate probability intervals or probabilities with sentences.
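A minimal worked instance of the interval idea, with premise probabilities assumed for illustration: given P(A) = 0.8 and P(A → B) = 0.9, probability alone pins P(B) down only to an interval.

```latex
\[
  0.7 \;=\; P(A) + P(A \to B) - 1
  \;\le\; P(B) \;\le\; P(A \to B) \;=\; 0.9 .
\]
% Lower bound: B follows from A \wedge (A \to B), whose probability obeys the
% Fréchet bound; upper bound: B entails A \to B. Both bounds are attainable,
% so [0.7, 0.9] is exactly the interval attached to the conclusion.
```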
One of the major weaknesses of current research on the Semantic Web (SW) is the lack of proper means to represent and reason with uncertainty. A number of efforts from the SW community, the W3C, and others have recently emerged to address this gap. Such efforts have the positive side effect of bringing together two fields of research that have been apart for historical reasons: the artificial intelligence and SW communities. One example of the potential research gains of this convergence is the current development of Probabilistic OWL (PR-OWL), an extension of the OWL Web Ontology Language that provides a framework to build probabilistic ontologies, thus enabling proper representation of and reasoning with uncertainty within the SW context. PR-OWL is based on Multi-Entity Bayesian Networks (MEBN), a first-order probabilistic logic that combines the representational power of first-order logic (FOL) and Bayesian networks (BN). However, PR-OWL and MEBN are still in development, lacking a software tool that implements their underlying concepts. The development of UnBBayes-MEBN, an open source, Java-based application that is currently in alpha phase (public release March 08), addresses this gap by providing both a GUI for building probabilistic ontologies and a reasoner based on the PR-OWL/MEBN framework. This work focuses on the major challenges of the UnBBayes-MEBN implementation, describes the features already implemented, and provides an overview of the major algorithms, mainly the one used for building a Situation-Specific Bayesian Network (SSBN) from a MEBN Theory.
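The SSBN construction can be sketched as a backward walk from the query: only fragment instances that are ancestors of the query get grounded. The templates and entities below are hypothetical, and a real SSBN algorithm also handles evidence, context constraints, and recursion bounds.

```python
# Sketch of situation-specific network construction: ground only the nodes
# actually needed to answer the query, walking parent templates recursively.
parents_of = {
    # node template -> parent templates, with {e} an entity placeholder
    "HarmPotential({e})": ["HostileIntent({e})", "Capability({e})"],
    "HostileIntent({e})": ["CloakMode({e})"],
}

def build_ssbn(query_template, entity, edges=None, seen=None):
    edges = edges if edges is not None else []
    seen = seen if seen is not None else set()
    node = query_template.format(e=entity)
    if node in seen:
        return edges
    seen.add(node)
    for parent_t in parents_of.get(query_template, []):
        edges.append((parent_t.format(e=entity), node))
        build_ssbn(parent_t, entity, edges, seen)
    return edges

print(build_ssbn("HarmPotential({e})", "Romulan1"))
# Only the ancestors of the query are grounded, not the whole theory.
```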
Fundamenta Informaticae
Although probabilistic knowledge representations and probabilistic reasoning have by now secured their position in artificial intelligence, it is not uncommon to encounter misunderstanding of their foundations and lack of appreciation for their strengths. This paper describes five properties of probabilistic knowledge representations that are particularly useful in intelligent systems research. (1) Directed probabilistic graphs capture essential qualitative properties of a domain, along with its causal structure. (2) Concepts such as relevance and conflicting evidence have a natural, formally sound meaning in probabilistic models. (3) Probabilistic schemes support sound reasoning at a variety of levels ranging from purely quantitative to purely qualitative levels. (4) The role of probability theory in reasoning under uncertainty can be compared to the role of first-order logic in reasoning under certainty. Probabilistic knowledge representations provide insight into the foundations of logic-based schemes, showing their difficulties in highly uncertain domains. Finally, (5) probabilistic knowledge representations support automatic generation of understandable explanations of inference for the sake of user interfaces to intelligent systems.
Although probabilistic knowledge representations and probabilistic reasoning have by now secured their position in intelligent systems research, it is not uncommon to encounter misunderstanding of their foundations and lack of appreciation for their strengths. This paper discusses five issues related to intelligent systems research and shows how they are addressed by probabilistic knowledge representations. Directed probabilistic graphs capture essential qualitative properties of a domain, along with its causal structure. Concepts such as relevance and conflicting evidence have a natural, formally sound meaning in probabilistic models. Probabilistic schemes support sound reasoning at a variety of levels ranging from purely quantitative to purely qualitative levels. Probabilistic knowledge representations provide insight into the foundations of logic-based schemes for reasoning under uncertainty, showing their difficulties in highly uncertain domains. Finally, probabilistic knowledge representations support automatic generation of understandable explanations of inference for the sake of user interfaces to intelligent systems.
Advances in Soft Computing, 2008
This paper proposes a common framework for various probabilistic logics. It consists of a set of uncertain premises with probabilities attached to them. This raises the question of the strength of a conclusion, but without imposing a particular semantics, no general solution is possible. The paper discusses several possible semantics by examining the question from the perspective of probabilistic argumentation.
Uncertainty in Artificial Intelligence, 2005
Intelligent systems in an open world must reason about many interacting entities related to each other in diverse ways and having uncertain features and relationships. Traditional probabilistic languages lack the expressive power to handle relational domains. Classical first-order logic is sufficiently expressive, but lacks a coherent plausible reasoning capability. Recent years have seen the emergence of a variety of approaches to integrating first-order logic, probability, and machine learning. This paper presents Multi-entity Bayesian networks (MEBN), a formal system that integrates First Order Logic (FOL) with Bayesian probability theory. MEBN extends ordinary Bayesian networks to allow representation of graphical models with repeated sub-structures, and can express a probability distribution over models of any consistent, finitely axiomatizable first-order theory. We present the logic using an example inspired by the Paramount Series Star Trek.
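One consequence of the repeated sub-structures mentioned above is that a ground node can acquire a situation-dependent number of parents, so some aggregation rule is needed. The sketch below uses noisy-OR as an illustrative choice (MEBN itself lets the modeler pick the combining rule), with invented numbers.

```python
# Aggregating a variable number of instantiated parents with noisy-OR:
# each active parent independently suffices to produce the effect.
def noisy_or(parent_probs, leak=0.01):
    """P(effect) given each parent's activation probability."""
    p_none = (1.0 - leak)
    for p in parent_probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# Danger to own ship given each nearby starship's hostility probability;
# the parent set's size depends on how many starships the situation contains.
print(noisy_or([0.2, 0.5]))        # two ships:   ~ 0.604
print(noisy_or([0.2, 0.5, 0.7]))   # three ships: ~ 0.881, more parents, more danger
```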