2013
A framework is presented for a computational theory of probabilistic argument. The Probabilistic Reasoning Environment encodes knowledge at three levels. At the deepest level are a set of schemata encoding the system's domain knowledge. This knowledge is used to build a set of second-level arguments, which are structured for efficient recapture of the knowledge used to construct them. Finally, at the top level is a Bayesian network constructed from the arguments. The system is designed to facilitate not just propagation of beliefs and assimilation of evidence, but also the dynamic process of constructing a belief network, evaluating its adequacy, and revising it when necessary.
International Journal of Approximate Reasoning, 2017
Errors in reasoning about probabilistic evidence can have severe consequences. In the legal domain, a number of recent miscarriages of justice emphasise how severe these consequences can be. These cases, in which forensic evidence was misinterpreted, have ignited a scientific debate on how and when probabilistic reasoning can be incorporated in (legal) argumentation. One promising approach is to use Bayesian networks (BNs), which are well-known scientific models for probabilistic reasoning. For non-statistical experts, however, Bayesian networks may be hard to interpret. In particular, since the inner workings of Bayesian networks are complicated, they may appear as black box models. Argumentation models, on the contrary, can be used to show how certain results are derived in a way that naturally corresponds to everyday reasoning. In this paper we propose to explain the inner workings of a BN in terms of arguments. We formalise a two-phase method for extracting probabilistically supported arguments from a Bayesian network. First, from a Bayesian network we construct a support graph, and, second, given a set of observations we build arguments from that support graph. Such arguments can facilitate the correct interpretation and explanation of the relation between hypotheses and evidence that is modelled in the Bayesian network.
Informatica (Lithuanian Academy of Sciences), 2005
The general concept of probabilistic argumentation systems (PAS) is restricted to two types of variables: assumptions, which model the uncertain part of the knowledge, and propositions, which model the rest of the information. Here, we introduce a third kind into PAS: so-called decision variables. This new kind makes it possible to describe the decisions a user can make to react to some state of the system; such a decision may then allow a certain goal state of the system to be reached. Further, we present an algorithm which exploits the special structure of PAS with decision variables.
2010
This paper presents a technique with which instances of argument structures in the Carneades model can be given a probabilistic semantics by translating them into Bayesian networks. The propagation of argument applicability and statement acceptability can be expressed through conditional probability tables. This translation suggests a way to extend Carneades to improve its utility for decision support in the presence of uncertainty.
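The translation the abstract above describes can be illustrated with a minimal sketch. The function names and the deterministic-CPT choice are assumptions for illustration, not the paper's actual construction: an argument node is taken to be applicable iff all its premises hold (an AND-style CPT), and a statement acceptable if at least one applicable argument supports it (an OR-style CPT).

```python
import itertools

# Hypothetical sketch: deterministic conditional probability tables (CPTs)
# expressing argument applicability and statement acceptability.

def and_cpt(n_premises):
    """CPT rows: premise truth assignment -> P(argument applicable)."""
    return {vals: 1.0 if all(vals) else 0.0
            for vals in itertools.product([True, False], repeat=n_premises)}

def or_cpt(n_args):
    """CPT rows: applicability assignment -> P(statement acceptable)."""
    return {vals: 1.0 if any(vals) else 0.0
            for vals in itertools.product([True, False], repeat=n_args)}

cpt = and_cpt(2)
print(cpt[(True, True)])   # 1.0 -- applicable only when both premises hold
print(cpt[(True, False)])  # 0.0
```

A richer translation could soften these tables (e.g. values strictly between 0 and 1) to model uncertain applicability, which is where the Bayesian-network semantics adds value over plain argument structures.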
National Conference on Artificial Intelligence, 1998
Our argumentation system, NAG, uses Bayesian networks in a user model and in a normative model to assemble and assess arguments which balance persuasiveness with normative correctness. Attentional focus is simulated in both models to select relevant subnetworks for Bayesian propagation. The subnetworks are expanded in an iterative abductive process until argumentative goals are achieved in both models, when the argument is presented to the user.
Annals of Mathematics and Artificial Intelligence, 2015
Many real-world knowledge-based systems must deal with information coming from different sources that invariably leads to incompleteness, overspecification, or inherently uncertain content. The presence of these varying levels of uncertainty doesn't mean that the information is worthless; rather, these are hurdles that the knowledge engineer must learn to work with. In this paper, we continue work on an argumentation-based framework that extends the well-known Defeasible Logic Programming (DeLP) language with probabilistic uncertainty, giving rise to the Defeasible Logic Programming with Presumptions and Probabilistic Environments (DeLP3E) model. Our prior work focused on the problem of belief revision in DeLP3E, where we proposed a non-prioritized class of revision operators called AFO (Annotation Function-based Operators) to solve this problem. In this paper, we further study this class and argue that in some cases it may be desirable to define revision operators that take quantitative aspects into account, such as how the probabilities of certain literals or formulas of interest change after the revision takes place. To the best of our knowledge, this problem has not been addressed in the argumentation literature to date. We propose the QAFO (Quantitative Annotation Function-based Operators) class of operators, a subclass of AFO, and then go on to study the complexity of several problems related to their specification and application in revising knowledge bases. Finally, we present an algorithm for computing the probability that a literal is warranted in a DeLP3E knowledge base, and discuss how it could be applied towards implementing QAFO-style operators that compute approximations rather than exact operations.
2012 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support, 2012
One of the most substantial advantages that human analysts have over machine algorithms is the ability to seamlessly integrate sensed data into a situation-based internal narrative. Replicating an analogous internal representation algorithmically has proved to be a challenging problem that is the focus of much current research. For a machine to more accurately make complex decisions over a stable, consistent and useful representation, situations must be inferred from prior experience and corroborated by incoming data. We believe that a common mathematical framework for situations that addresses varying levels of complexity and uncertainty is essential to meeting this goal. In this paper, we present work in progress on developing the mathematics for probabilistic situations.
Proceedings of the first …, 2000
We describe a mechanism which generates rebuttals to a user's rejoinders in the context of arguments generated from Bayesian networks. This mechanism is implemented in an interactive argumentation system. Given an argument generated by the system and an interpretation of a user's rejoinder, the generation of the rebuttal takes into account the intended effect of the user's rejoinder, determined on a model of the user's beliefs, and its actual effect, determined on a model of the system's beliefs. We consider three main rebuttal strategies: refute the user's rejoinder, strengthen the argument goal, and dismiss the user's line of reasoning.
Evidence-based reasoning is at the core of many problem-solving and decision-making tasks in a wide variety of domains. Generalizing from the research and development of cognitive agents in several such domains, this paper presents progress toward a computational theory for the development of instructable cognitive agents for evidence-based reasoning tasks. The paper also illustrates the application of this theory to the development of four prototype cognitive agents in domains that are critical to the government and the public sector. Two agents function as cognitive assistants, one in intelligence analysis, and the other in science education. The other two agents operate autonomously, one in cybersecurity and the other in intelligence, surveillance, and reconnaissance. The paper concludes with the directions of future research on the proposed computational theory. Developed in the framework of the scientific method, the computational approach to evidence-based reasoning views this process as ceaseless discovery of evidence, hypotheses, and arguments in a non-stationary world, involving collaborative computational processes of evidence in search of hypotheses, hypotheses in search of evidence, and evidentiary testing of hypotheses (see Figure ). First, through abductive (imaginative) reasoning that shows that something is possibly true, one generates alternative hypotheses that may explain an observation of interest or answer an important question. Next, through deductive reasoning that shows that something is
IEEE Transactions on Systems, Man, and Cybernetics, 1989
Attention is being devoted to formalisms for representing and processing uncertainty in automated reasoning systems. These formal algorithms rely on obtaining judgments from experts about degrees of uncertainty, or the strength of evidential relationships. A reasoning system and associated assessment methodology, built upon a natural schema for an evidential argument, are discussed. This argument schema is based on the underlying causal chains linking conclusions and evidence. The framework couples a probabilistic calculus with qualitative approaches to evidential reasoning. The resulting knowledge structure leads to a natural assessment methodology in which the expert first specifies a qualitative argument from evidence to conclusion. Next the expert specifies a series of premises on which the argument is based. Invalidating any of these premises would disrupt the causal link between evidence and conclusion. The final step is the assessment of the strength of the argument, in the form of degrees of belief for the premises underlying the argument. The expert may also explicitly adopt assumptions affecting the strength of evidential arguments. A higher-level "metareasoning" process is described, in which assumptions underlying the strength and direction of evidential arguments may be revised in response to conflict.
2012
In this paper, we extend Dung's seminal argument framework to form a probabilistic argument framework by associating probabilities with arguments and defeats. We then compute the likelihood of some set of arguments appearing within an arbitrary argument framework induced from this probabilistic framework.
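The core computation the abstract above describes can be sketched in a few lines. The variable names and numbers are made up, and defeat probabilities are omitted for brevity: each argument is assumed to appear in an induced framework independently with its own probability, and the likelihood of a set of arguments appearing is summed over all induced frameworks containing it.

```python
from itertools import combinations

# Hypothetical probabilistic argument framework: independent inclusion
# probabilities per argument (defeat probabilities omitted for brevity).
arg_probs = {"A": 0.9, "B": 0.5, "C": 0.3}

def prob_of_subset(included, probs):
    """Probability that exactly this set of arguments is induced."""
    p = 1.0
    for a, pa in probs.items():
        p *= pa if a in included else (1.0 - pa)
    return p

def likelihood_contains(target, probs):
    """Likelihood that an induced framework contains every argument in target."""
    args = list(probs)
    total = 0.0
    for r in range(len(args) + 1):
        for subset in combinations(args, r):
            if target <= set(subset):
                total += prob_of_subset(set(subset), probs)
    return total

print(likelihood_contains({"A", "B"}, arg_probs))  # ≈ 0.45 (= 0.9 * 0.5)
```

Under independence the sum collapses to the product of the target arguments' probabilities; the enumeration form generalises to the dependent case where a joint distribution over induced frameworks is given instead.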
International Joint Conference on Artificial Intelligence, 1999
We describe an interactive system which supports the exploration of arguments generated from Bayesian networks. In particular, we consider key features which support interactive behaviour: (1) an attentional mechanism which updates the activation of concepts as the interaction progresses; (2) a set of exploratory responses; and (3) a set of probabilistic patterns and an Argument Grammar which support the generation
2007
Formal logical tools are able to provide some amount of reasoning support for information analysis, but are unable to represent uncertainty. Bayesian network tools represent probabilistic and causal information, but in the worst case scale as poorly as some formal logical systems and require specialized expertise to use effectively. We describe a framework for systems that incorporate the advantages of both Bayesian and logical systems. We define a formalism for the conversion of automatically generated natural deduction proof trees into Bayesian networks. We then demonstrate that the merging of such networks with domain-specific causal models forms a consistent Bayesian network with correct values for the formulas derived in the proof. In particular, we show that hard evidential updates in which the premises of a proof are found to be true force the conclusions of the proof to be true with probability one, regardless of any dependencies and prior probability values assumed for the causal model. We provide several examples that demonstrate the generality of the natural deduction system by using inference schemas not supportable in Prolog.
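The hard-evidence property claimed above can be checked on a toy model. The network below is an illustrative assumption, not the paper's construction: a premise P, a proof-step node R encoding "P entails Q", and a conclusion Q whose CPT is deterministic when both P and R hold and otherwise falls back to an assumed causal-model prior.

```python
from itertools import product

# Made-up priors for the illustration.
p_P, p_R = 0.4, 0.7

def p_Q_given(P, R):
    # Deterministic when the proof applies; 0.2 is an assumed causal prior.
    return 1.0 if (P and R) else 0.2

def posterior_Q(evidence):
    """Exact inference by enumeration: P(Q=True | evidence)."""
    num = den = 0.0
    for P, R, Q in product([True, False], repeat=3):
        if any(evidence.get(v) is not None and evidence[v] != val
               for v, val in (("P", P), ("R", R), ("Q", Q))):
            continue  # skip assignments inconsistent with the evidence
        joint = (p_P if P else 1 - p_P) * (p_R if R else 1 - p_R) \
                * (p_Q_given(P, R) if Q else 1 - p_Q_given(P, R))
        num += joint if Q else 0.0
        den += joint
    return num / den

print(posterior_Q({"P": True, "R": True}))  # 1.0 -- the conclusion is forced
```

Observing both the premise and the proof step forces the conclusion to probability one regardless of the assumed prior, which is the consistency property the paper proves for merged proof-and-causal networks.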
1990
Comprehensible explanations of probabilistic reasoning are a prerequisite for wider acceptance of Bayesian methods in expert systems and decision support systems. A study of human reasoning under uncertainty suggests two different strategies for explaining probabilistic reasoning specially attuned to human thinking: The first, qualitative belief propagation, traces the qualitative effect of evidence through a belief network from one variable to the next. This propagation algorithm is an alternative to the graph reduction algorithms of Wellman (1988) for inference in qualitative probabilistic networks. It is based on a qualitative analysis of intercausal reasoning, which is a generalization of Pearl's "explaining away", and an alternative to Wellman's definition of qualitative synergy. The other, Scenario-based reasoning, involves the generation of alternative causal "stories" accounting for the evidence. Comparing a few of the most probable scenarios provides an approximate way to explain the results of probabilistic reasoning. Both schemes employ causal as well as probabilistic knowledge. Probabilities may be presented as phrases and/or numbers. Users can control the style, abstraction and completeness of explanations.
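The first strategy above, qualitative belief propagation, can be sketched as sign propagation along a chain. This is a simplified illustration of the idea, not the authors' full algorithm (which also handles intercausal interactions); the node names and edge signs are made up.

```python
# Qualitative influences: '+' (increases), '-' (decreases), '0' (none),
# '?' (ambiguous). Signs compose along a path by sign multiplication.
SIGN_PRODUCT = {
    ('+', '+'): '+', ('+', '-'): '-', ('-', '+'): '-', ('-', '-'): '+',
}

def combine(a, b):
    if '0' in (a, b):
        return '0'   # no influence annihilates the effect
    if '?' in (a, b):
        return '?'   # ambiguity is absorbing
    return SIGN_PRODUCT[(a, b)]

# Hypothetical chain: evidence raises belief in Flu; Flu raises Fever;
# Fever raises Sweating.
edges = {("Flu", "Fever"): '+', ("Fever", "Sweating"): '+'}

def propagate(start_sign, path):
    """Trace the qualitative effect of evidence along a path of variables."""
    sign = start_sign
    for u, v in zip(path, path[1:]):
        sign = combine(sign, edges[(u, v)])
    return sign

print(propagate('+', ["Flu", "Fever", "Sweating"]))  # '+'
```

Because the output is a sign rather than a number, each step of the trace can be verbalised ("evidence for flu makes fever more likely, which makes sweating more likely"), which is what makes the scheme suitable for explanation.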
1998
Our argumentation system NAG uses Bayesian networks in a user model and in a normative model to assemble and assess nice arguments, that is arguments which balance persuasiveness with normative correctness. Attentional focus is simulated in both models to select relevant subnetworks for Bayesian propagation. Bayesian propagation in the user model is modified to represent some human cognitive weaknesses. The subnetworks are expanded in an iterative abductive process until argumentative goals are achieved in both models, when the argument is presented to the user.
2008
There is an escalating perception in some quarters that the conclusions drawn from digital evidence are the subjective views of individuals and have limited scientific justification. This paper attempts to address this problem by presenting a formal model for reasoning about digital evidence. A Bayesian network is used to quantify the evidential strengths of hypotheses and, thus, enhance the reliability and traceability of the results produced by digital forensic investigations. The validity of the model is tested using a real court case. The test uses objective probability assignments obtained by aggregating the responses of experienced law enforcement agents and analysts. The results confirmed the guilty verdict in the court case with a probability value of 92.7%.
1989
Causal probabilistic networks have proved to be a useful knowledge representation tool for modelling domains where causal relations in a broad sense are a natural way of relating domain objects and where uncertainty is inherent in these relations. This paper outlines an implementation, the HUGIN shell, for handling a domain model expressed by a causal probabilistic network. The only topological restriction imposed on the network is that it must not contain any directed loops. The approach is illustrated step by step by solving a genetic breeding problem. A graph representation of the domain model is interactively created by using instances of the basic network components, nodes and arcs, as building blocks. This structure, together with the quantitative relations between nodes and their immediate causes expressed as conditional probabilities, is automatically transformed into a tree structure, a junction tree. Here a computationally efficient and conceptually simple algebra of Bayesian belief universes supports incorporation of new evidence, propagation of information, and calculation of revised beliefs in the states of the nodes in the network. Finally, as an example of a real-world application, MUNIN, an expert system for electromyography, is discussed.
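The junction-tree machinery itself is beyond a short sketch, but the belief update it implements can be shown by brute-force enumeration on a two-node network loosely echoing the genetic-breeding setting. All numbers and names here are made up for illustration: a parent's carrier status influences the offspring's, and observing the offspring revises belief about the parent.

```python
# Toy causal network: Carrier(parent) -> Carrier(offspring).
p_parent = 0.1                      # assumed prior that the parent is a carrier
p_off = {True: 0.5, False: 0.01}    # assumed P(offspring carrier | parent)

def posterior_parent_given_offspring(offspring_carrier=True):
    """Revised belief in the parent's state after observing the offspring."""
    num = den = 0.0
    for parent in (True, False):
        prior = p_parent if parent else 1 - p_parent
        lik = p_off[parent] if offspring_carrier else 1 - p_off[parent]
        joint = prior * lik
        if parent:
            num = joint
        den += joint
    return num / den

print(round(posterior_parent_given_offspring(), 4))  # 0.8475
```

Enumeration is exponential in the number of variables; the junction-tree algebra of belief universes described above computes the same revised beliefs while exploiting the network's structure for efficiency.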
2011
While in principle probabilistic logics might be applied to solve a range of problems, in practice they are rarely applied at present. This is perhaps because they seem disparate, complicated, and computationally intractable. However, we shall argue in this programmatic paper that several approaches to probabilistic logic fit into a simple unifying framework: logically complex evidence can be used to associate probability intervals or probabilities with sentences.
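The idea of attaching probability intervals to logically complex sentences can be illustrated with the classical Fréchet bounds, one instance of the unifying picture sketched above (the function names are ours): knowing only the probabilities of two sentences constrains, but does not determine, the probability of their conjunction and disjunction.

```python
# Given only P(A) = pa and P(B) = pb, with no independence assumption,
# the Frechet bounds give the tightest possible intervals.
def conjunction_interval(pa, pb):
    """Interval for P(A and B)."""
    return (max(0.0, pa + pb - 1.0), min(pa, pb))

def disjunction_interval(pa, pb):
    """Interval for P(A or B)."""
    return (max(pa, pb), min(1.0, pa + pb))

print(conjunction_interval(0.8, 0.7))  # (0.5, 0.7)
print(disjunction_interval(0.8, 0.7))  # (0.8, 1.0)
```

An interval collapses to a point only under extra assumptions (e.g. independence gives P(A and B) = pa * pb), which is exactly the kind of evidence a probabilistic logic can use to tighten the bounds.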