2011
While in principle probabilistic logics might be applied to solve a range of problems, in practice they are rarely applied at present. This is perhaps because they seem disparate, complicated, and computationally intractable. However, we shall argue in this programmatic paper that several approaches to probabilistic logic fit into a simple unifying framework: logically complex evidence can be used to associate probability intervals or probabilities with sentences.
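The idea of associating probability intervals with logically complex sentences can be made concrete with the standard Fréchet bounds, which give the tightest intervals obtainable when nothing is assumed about dependence between the sentences (an illustrative sketch, not code from any of the papers listed here):

```python
def conjunction_bounds(p_a, p_b):
    """Tightest interval for P(A and B) given only P(A) and P(B),
    with no independence assumptions (Frechet bounds)."""
    return max(0.0, p_a + p_b - 1.0), min(p_a, p_b)

def disjunction_bounds(p_a, p_b):
    """Tightest interval for P(A or B) under the same assumptions."""
    return max(p_a, p_b), min(1.0, p_a + p_b)

lo, hi = conjunction_bounds(0.9, 0.8)
print(round(lo, 4), hi)  # 0.7 0.8
```

Even with both premises highly probable, the conjunction is only guaranteed probability 0.7; interval-valued conclusions of this kind are the common currency of the approaches surveyed here.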
Interest in combining probability with logics for modeling the world has rapidly increased in the last few years. One of the most effective approaches is the Distribution Semantics, which was adopted by many logic programming languages and in Description Logics. In this paper, we illustrate the work we have done in this research field by presenting a probabilistic semantics for description logics together with reasoning and learning algorithms. In particular, we present in detail the system TRILL^P, implemented in Prolog, which computes the probability of queries with respect to probabilistic knowledge bases. Note: An extended abstract / full version of a paper accepted to be presented at the Doctoral
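The distribution semantics mentioned above can be sketched in a few lines: probabilistic facts induce a distribution over "possible worlds", and the probability of a query is the total weight of the worlds in which it holds. The facts and probabilities below are invented for illustration, and real systems (TRILL^P among them) avoid this exponential enumeration of worlds:

```python
from itertools import product

# Illustrative probabilistic facts (names and values are made up).
facts = {"sunny": 0.7, "weekend": 0.5}

def query_prob(holds):
    """P(query) under the distribution semantics: sum the probabilities
    of the possible worlds (truth assignments to the probabilistic
    facts) in which the query holds; `holds` maps a world to a bool."""
    names = list(facts)
    total = 0.0
    for choices in product([True, False], repeat=len(names)):
        world = dict(zip(names, choices))
        weight = 1.0
        for name in names:
            weight *= facts[name] if world[name] else 1.0 - facts[name]
        if holds(world):
            total += weight
    return total

# Query corresponding to the clause: picnic :- sunny, weekend.
p = query_prob(lambda w: w["sunny"] and w["weekend"])
print(round(p, 4))  # 0.35
```

Enumerating all 2^n worlds is intractable in general, which is why practical implementations compile queries to compact representations instead.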
1995
We present a probabilistic logic programming framework that allows the representation of conditional probabilities. While conditional probabilities are the most commonly used method for representing uncertainty in probabilistic expert systems, they have been largely neglected by work in quantitative logic programming. We define a fixpoint theory, declarative semantics, and proof procedure for the new class of probabilistic logic programs. Compared to other approaches to quantitative logic programming, we provide a true probabilistic framework with potential applications in probabilistic expert systems and decision support systems. We also discuss the relationship between such programs and Bayesian networks, thus moving toward a unification of two major approaches to automated reasoning.
Journal of Algorithms, 2008
This paper develops connections between objective Bayesian epistemology-which holds that the strengths of an agent's beliefs should be representable by probabilities, should be calibrated with evidence of empirical probability, and should otherwise be equivocal-and probabilistic logic. After introducing objective Bayesian epistemology over propositional languages, the formalism is extended to handle predicate languages. A rather general probabilistic logic is formulated and then given a natural semantics in terms of objective Bayesian epistemology. The machinery of objective Bayesian nets and objective credal nets is introduced and this machinery is applied to provide a calculus for probabilistic logic that meshes with the objective Bayesian semantics.
2007
Formal logical tools are able to provide some amount of reasoning support for information analysis, but are unable to represent uncertainty. Bayesian network tools represent probabilistic and causal information, but in the worst case scale as poorly as some formal logical systems and require specialized expertise to use effectively. We describe a framework for systems that incorporate the advantages of both Bayesian and logical systems. We define a formalism for the conversion of automatically generated natural deduction proof trees into Bayesian networks. We then demonstrate that the merging of such networks with domain-specific causal models forms a consistent Bayesian network with correct values for the formulas derived in the proof. In particular, we show that hard evidential updates in which the premises of a proof are found to be true force the conclusions of the proof to be true with probability one, regardless of any dependencies and prior probability values assumed for the causal model. We provide several examples that demonstrate the generality of the natural deduction system by using inference schemas not supportable in Prolog.
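The hard-update property claimed above can be checked on a minimal example (a sketch of the idea, not the paper's construction): take the proof to be modus ponens, with premises A and A → B and conclusion B, put a hypothetical, arbitrary prior over the atoms, and condition on the premises being true.

```python
from itertools import product
import random

random.seed(1)  # reproducible illustrative prior

def posterior_b(joint):
    """joint: probability of each (a, b) truth assignment. Condition on
    the proof's premises, A and A -> B, both holding; return the
    posterior probability of the conclusion B."""
    num = den = 0.0
    for (a, b), w in joint.items():
        if a and (not a or b):  # premises: A, A -> B (material implication)
            den += w
            if b:               # conclusion: B
                num += w
    return num / den

# An arbitrary prior over the two atoms, with no independence assumed.
weights = {w: random.random() for w in product([True, False], repeat=2)}
z = sum(weights.values())
joint = {w: v / z for w, v in weights.items()}
print(posterior_b(joint))  # 1.0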
Information and Computation, 1990
Inductive Logic …, 2005
Decision Support Systems, 1994
We combine probabilistic logic and Bayesian networks to obtain the advantages of each in what we call Bayesian logic. Like probabilistic logic, it is a theoretically grounded way of representing and reasoning with uncertainty that uses only as much probabilistic information as one has, since it permits one to specify probabilities as intervals rather than precise values. Like Bayesian networks, it can capture conditional independence relations, which are probably our richest source of probabilistic knowledge. The inference problem in Bayesian logic can be solved as a nonlinear program (which becomes a linear program in ordinary probabilistic logic). We show that Benders decomposition, applied to the nonlinear program, allows one to use the same column generation methods in Bayesian logic that are now being used to solve inference problems in probabilistic logic. We also show that if the independence conditions are properly represented, the number of nonlinear constraints grows only linearly with the number of nodes in a large class of networks (rather than exponentially, as in the general case).

…by calculation. But it is not so well known that he understood the importance of capturing uncertainty in logic. To do so he invented probabilistic logic, in which he assigned formulas continuous probability values rather than simply one or zero to indicate true or false [3,4]. Boole's probabilistic logic is of the highest relevance today, since it provides a basis for dealing with uncertainty in knowledge-based systems that is not only well grounded theoretically but has some practical advantages as well. The fundamental problem of probabilistic inference is to determine the probability of a conclusion that is inferred from uncertain premises. In his careful study of Boole's work [16,18], T. Hailperin pointed out that this problem can be naturally captured in a linear programming model, which Boole himself all but formulated.
Latest Advances in Inductive Logic Programming, 2014
Forthcoming in The Oxford Handbook of Probability and Philosophy, edited by Alan Hájek and Christopher Hitchcock.
This chapter is about probabilistic logics: systems of logic in which logical consequence is defined in probabilistic terms. We will classify such systems and state some key references, and we will present one class of probabilistic logics in more detail: those that derive from Ernest Adams' work.
1990
We describe how to combine probabilistic logic and Bayesian networks to obtain a new framework (\Bayesian logic") for dealing with uncertainty and causal relationships in an expert system. Probabilistic logic, invented by Boole, is a technique for drawing inferences from uncertain propositions for which there are no independence assumptions. A Bayesian network is a \belief net" that can represent complex conditional independence assumptions. We show how to solve inference problems in Bayesian logic by applying Benders decomposition to a nonlinear programming formulation. We also show that the number of constraints grows only linearly with the problem size for a large class of networks.
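The linear-programming case mentioned parenthetically above (ordinary probabilistic logic) can be illustrated with the textbook bounds for probabilistic modus ponens: given P(A) and P(A → B) with no independence assumptions, the attainable values of P(B) form an interval, and its endpoints are the optima of Boole's linear program. A minimal sketch, not code from the paper:

```python
def modus_ponens_bounds(p_a, p_impl):
    """Bounds on P(B) given P(A) = p_a and P(A -> B) = p_impl
    (material conditional), with no independence assumptions.
    Feasibility requires p_a + p_impl >= 1; these endpoints are
    the solutions of the underlying linear program."""
    lo = max(0.0, p_a + p_impl - 1.0)
    hi = p_impl
    return lo, hi

lo, hi = modus_ponens_bounds(0.9, 0.95)
print(round(lo, 4), hi)  # 0.85 0.95
```

In Bayesian logic, added conditional-independence constraints make the analogous program nonlinear, which is what motivates the Benders decomposition approach described in the abstract.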
2013
A framework is presented for a computational theory of probabilistic argument. The Probabilistic Reasoning Environment encodes knowledge at three levels. At the deepest level are a set of schemata encoding the system's domain knowledge. This knowledge is used to build a set of second-level arguments, which are structured for efficient recapture of the knowledge used to construct them. Finally, at the top level is a Bayesian network constructed from the arguments. The system is designed to facilitate not just propagation of beliefs and assimilation of evidence, but also the dynamic process of constructing a belief network, evaluating its adequacy, and revising it when necessary.
Uncertainty Proceedings 1991, 1991
This paper discusses multiple Bayesian network representation paradigms for encoding asymmetric independence assertions. We offer three contributions: (1) an inference mechanism that makes explicit use of asymmetric independence to speed up computations, (2) a simplified definition of similarity networks and extensions of their theory, and (3) a generalized representation scheme that encodes more types of asymmetric independence assertions than do similarity networks.
2002
Many problems in artificial intelligence can be naturally approached by generating and manipulating probability distributions over structured objects. In this paper we represent structured objects by first-order logic terms (lists, trees, tuples, and nestings thereof) and higher-order terms (sets, multisets), and we study the question of how to define probability distributions over such terms. We present two Bayesian approaches that employ such probability distributions over structured objects: the first is an upgrade of the well-known naive Bayesian classifier to deal with first-order and higher-order terms, and the second is an upgrade of propositional Bayesian networks to deal with nested tuples.
Discrete Applied Mathematics, 1992
Directed Agent) is a model for autonomous learning in probabilistic domains [desJardins,
2006
Logical argument forms are investigated by second order probability density functions. When the premises are expressed by beta distributions, the conclusions usually are mixtures of beta distributions. If the shape parameters of the distributions are assumed to be additive (natural sampling), then the lower and upper bounds of the mixing distributions (Pólya-Eggenberger distributions) are parallel to the corresponding lower and upper probabilities in conditional probability logic.
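The second-order idea can be sketched numerically: treat the premise probabilities themselves as beta-distributed, and push each draw through the bounds for the conclusion. This is a Monte Carlo illustration only, not the paper's analytic Pólya-Eggenberger construction; the particular beta parameters and the independence of the premises are assumptions:

```python
import random

random.seed(0)  # reproducible sketch

def conjunction_interval_samples(a1, b1, a2, b2, n=10_000):
    """Second-order uncertainty sketch: draw P(A) ~ Beta(a1, b1) and
    P(B) ~ Beta(a2, b2) independently (an illustrative assumption),
    then push each draw through the Frechet bounds for P(A and B).
    Returns samples from the induced distributions of the lower and
    upper probabilities of the conclusion."""
    lows, highs = [], []
    for _ in range(n):
        p_a = random.betavariate(a1, b1)
        p_b = random.betavariate(a2, b2)
        lows.append(max(0.0, p_a + p_b - 1.0))
        highs.append(min(p_a, p_b))
    return lows, highs

lows, highs = conjunction_interval_samples(2, 1, 3, 2)
print(sum(lows) / len(lows), sum(highs) / len(highs))
```

The two sample means estimate the expected lower and upper probabilities of the conclusion, mirroring the interval structure described in the abstract.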
Advances in Soft Computing, 2008
This paper proposes a common framework for various probabilistic logics. It consists of a set of uncertain premises with probabilities attached to them. This raises the question of the strength of a conclusion, but without imposing a particular semantics, no general solution is possible. The paper discusses several possible semantics by looking at it from the perspective of probabilistic argumentation.
Lecture Notes in Computer Science, 2011
We begin with a brief overview of Probabilistic Logic Networks, distinguish PLN from other approaches to reasoning under uncertainty, and describe some of the main conceptual foundations and goals of PLN. We summarize how knowledge is represented within PLN and describe the four basic truth-value types. We describe a few basic first-order inference rules and formulas, outline PLN's approach to handling higher-order inference via reduction to first-order rules, and follow this by a brief summary of PLN's handling of quantifiers. Since PLN was and continues to be developed as one of several major components of a broader and more general artificial intelligence project, we next describe the OpenCog project and PLN's roles within the project.