2014
Computer vision is an ever-growing discipline whose ambitious goal is to endow machines with the intelligent visual skills that Nature has given humans and animals, allowing them to interact effortlessly with complex, dynamic environments. Designing automated visual recognition and sensing systems typically involves tackling a number of challenging tasks, and requires an impressive variety of sophisticated mathematical tools. In most cases, the knowledge a machine has of its surroundings is at best incomplete – missing data is a common problem, and visual cues are affected by imprecision. The need for a coherent mathematical ‘language’ for the description of uncertain models and measurements thus arises naturally from the solution of computer vision problems. The theory of evidence (sometimes referred to as ‘evidential reasoning’, ‘belief theory’ or ‘Dempster-Shafer theory’) is perhaps one of the most successful approaches to uncertainty modelling, being arguably the most straightforward and intuitive approach to a generalized probability theory. Emerging in the late Sixties from a profound criticism of the more classical Bayesian theory of inference and modelling of uncertainty, it has stimulated in recent decades an extensive discussion of the epistemic nature of both subjective ‘degrees of belief’ and frequentist ‘chances’ or relative frequencies. More recently, a renewed interest in belief functions, the mathematical generalization of probabilities which is the object of study of the theory of evidence, has led to a blossoming of applications to a variety of fields of applied science. In this Book we show how the fruitful interaction of computer vision and evidential reasoning is able to stimulate a number of advances in both fields. From a methodological point of view, novel theoretical advances concerning the geometric and algebraic properties of belief functions as mathematical objects are illustrated in some detail in Part II, with a focus on a prospective ‘geometric approach’ to uncertainty and an algebraic solution to the issue of conflicting evidence. In Part III we illustrate how these new perspectives on the theory of belief functions arise from important computer vision problems, such as articulated object tracking, data association and object pose estimation, to which in turn the evidential formalism can give interesting new solutions. Finally, some initial steps towards a generalization of the notion of total probability to belief functions are taken, with a view to endowing the theory of evidence with a complete battery of estimation and inference tools for the benefit of scientists and practitioners.
arXiv preprint arXiv:1810.10341, 2018
In this Book we argue that the fruitful interaction of computer vision and belief calculus is capable of stimulating significant advances in both fields. From a methodological point of view, novel theoretical results concerning the geometric and algebraic properties of belief functions as mathematical objects are illustrated and discussed in Part II, with a focus on both a prospective 'geometric approach' to uncertainty and an algebraic solution to the issue of conflicting evidence. In Part III we show how these theoretical developments arise from important computer vision problems (such as articulated object tracking, data association and object pose estimation) to which, in turn, the evidential formalism is able to provide interesting new solutions. Finally, some initial steps towards a generalization of the notion of total probability to belief functions are taken, with a view to endowing the theory of evidence with a complete battery of estimation and inference tools to the benefit of all scientists and practitioners.
IEEE Transactions on Systems, Man, and Cybernetics, Part C, 2008
In this paper we propose a geometric approach to the theory of evidence based on convex geometric interpretations of its two key notions: the belief function and Dempster's sum. On the one hand, we analyze the geometry of belief functions as points of a polytope in a Cartesian space, called the belief space, and discuss the intimate relationship between basic probability assignment and convex combination. On the other hand, we study the global geometry of Dempster's rule by describing its action on those convex combinations. Having proved that Dempster's sum and convex closure commute, we are able to depict the geometric structure of conditional subspaces, i.e., sets of belief functions conditioned by a given belief function b. Natural applications of these geometric methods to classical problems such as probabilistic approximation and canonical decomposition are outlined.
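To make the objects in play concrete, here is a minimal Python sketch (my own, not code from the paper) that represents mass functions as dictionaries keyed by focal elements, combines two of them with Dempster's rule, and reads off the resulting belief function as a point of the belief space, i.e. the vector of its belief values on all non-empty events:

```python
from itertools import chain, combinations

def powerset(frame):
    """All non-empty subsets of the frame, as frozensets."""
    return [frozenset(s) for s in chain.from_iterable(
        combinations(sorted(frame), r) for r in range(1, len(frame) + 1))]

def belief(mass, event):
    """bel(A) = total mass of focal elements contained in A."""
    return sum(m for B, m in mass.items() if B <= event)

def dempster(m1, m2):
    """Dempster's (normalised) sum of two mass functions."""
    raw = {}
    for A, mA in m1.items():
        for B, mB in m2.items():
            C = A & B
            if C:                              # discard empty intersections
                raw[C] = raw.get(C, 0.0) + mA * mB
    k = sum(raw.values())                      # 1 - conflict
    return {C: v / k for C, v in raw.items()}

frame = {'a', 'b'}
m1 = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}
m2 = {frozenset({'b'}): 0.5, frozenset({'a', 'b'}): 0.5}
m12 = dempster(m1, m2)

# The belief function as a point: its value on every non-empty event.
print([belief(m12, A) for A in powerset(frame)])
```

On a binary frame the last coordinate, bel(Θ), is always 1, so belief functions are effectively parameterised by the pair (bel({a}), bel({b})), which is why the binary belief space can be drawn as a triangle.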
Artificial Intelligence: Foundations, Theory, and Algorithms, Springer Nature, 2021
The principal aim of this book is to introduce to the widest possible audience an original view of belief calculus and uncertainty theory. In this geometric approach to uncertainty, uncertainty measures can be seen as points of a suitably complex geometric space, and manipulated in that space, for example combined or conditioned.

In the chapters in Part I, Theories of Uncertainty, the author offers an extensive recapitulation of the state of the art in the mathematics of uncertainty. This part of the book contains the most comprehensive summary to date of the whole of belief theory, with Chap. 4 outlining for the first time, and in a logical order, all the steps of the reasoning chain associated with modelling uncertainty using belief functions, in an attempt to provide a self-contained manual for the working scientist. In addition, Chap. 5 proposes what is possibly the most detailed compendium available of all theories of uncertainty.

Part II, The Geometry of Uncertainty, is the core of the book, as it introduces the author's own geometric approach to uncertainty theory, starting with the geometry of belief functions: Chap. 7 studies the geometry of the space of belief functions, or belief space, both in terms of a simplex and in terms of its recursive bundle structure; Chap. 8 extends the analysis to Dempster's rule of combination, introducing the notion of a conditional subspace and outlining a simple geometric construction for Dempster's sum; Chap. 9 delves into the combinatorial properties of plausibility and commonality functions, as equivalent representations of the evidence carried by a belief function; Chap. 10 then begins to extend the applicability of the geometric approach to other uncertainty measures, focusing in particular on possibility measures (consonant belief functions) and the related notion of a consistent belief function.

The chapters in Part III, Geometric Interplays, are concerned with the interplay of uncertainty measures of different kinds, and the geometry of their relationships, with a particular focus on the approximation problem. Part IV, Geometric Reasoning, examines the application of the geometric approach to the various elements of the reasoning chain illustrated in Chap. 4, in particular conditioning and decision making. Part V concludes the book by outlining a future, complete statistical theory of random sets and further extensions of the geometric approach, and by identifying high-impact applications to climate change, machine learning and artificial intelligence.

The book is suitable for researchers in artificial intelligence, statistics, and applied science engaged with theories of uncertainty, and is supported by the most comprehensive bibliography on belief and uncertainty theory to date.
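As a toy illustration of the simplex picture studied in Chap. 7 (my sketch, not the book's code): the vertices of the belief space are the 'categorical' belief functions, each assigning all mass to a single event A, so that bel_A(B) = 1 exactly when A ⊆ B. Assuming the convention that a belief function is listed by its values on all non-empty events:

```python
from itertools import chain, combinations

frame = ('a', 'b', 'c')
events = [frozenset(s) for s in chain.from_iterable(
    combinations(frame, r) for r in range(1, len(frame) + 1))]

# Each categorical belief function bel_A is a vertex of the belief-space
# simplex: bel_A(B) = 1 iff A is contained in B, and 0 otherwise.
for A in events:
    vertex = [1.0 if B >= A else 0.0 for B in events]
    print(sorted(A), vertex)
```

Every belief function is then a convex combination of these vertices, with the masses m(A) as its simplicial coordinates.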
2021
Conditioning is crucial in applied science whenever inference involves time series. Belief calculus is an effective way of handling such inference in the presence of epistemic uncertainty – unfortunately, several different approaches to conditioning in the belief function framework have been proposed in the past, leaving the matter somewhat unsettled. Inspired by the geometric approach to uncertainty, in this paper we propose an approach to the conditioning of belief functions based on geometrically projecting them onto the simplex associated with the conditioning event in the space of all belief functions. We show here that such a geometric approach to conditioning often produces simple results with straightforward interpretations in terms of degrees of belief. This raises the question of whether classical approaches, such as for instance Dempster's conditioning, can also be reduced to some form of distance minimisation in a suitable space. The study of families of combination rul...
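For comparison with the geometric proposal, here is a minimal sketch (my own, under the mass-dictionary convention used above) of the classical Dempster conditioning the abstract alludes to: each focal element is intersected with the conditioning event A and the surviving mass is renormalised, which is equivalent to combining with the categorical belief function on A.

```python
def dempster_conditioning(mass, A):
    """Dempster's conditioning bel(.|A): intersect every focal element
    with A, discard empty intersections, renormalise what is left."""
    raw = {}
    for B, mB in mass.items():
        C = B & A
        if C:
            raw[C] = raw.get(C, 0.0) + mB
    k = sum(raw.values())              # mass not in conflict with A
    return {C: v / k for C, v in raw.items()}

m = {frozenset({'a'}): 0.3, frozenset({'b', 'c'}): 0.5,
     frozenset({'a', 'b', 'c'}): 0.2}
print(dempster_conditioning(m, frozenset({'b', 'c'})))
# {frozenset({'b', 'c'}): 1.0}, since (0.5 + 0.2) / 0.7 = 1.0
```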
arXiv (Cornell University), 2018
The Mathematical Theory of Evidence (MTE) is known as a foundation for reasoning when knowledge is expressed at various levels of detail. Though much research effort has been committed to this theory since its foundation, many questions remain open. One of the most important open questions seems to be the relationship between frequencies and the Mathematical Theory of Evidence. The theory is blamed for leaving frequencies outside (or aside of) its framework. The seriousness of this accusation is obvious: no experiment may be run to compare the performance of MTE-based models of real-world processes against real-world data. In this paper we develop a frequentist model of the MTE, bringing down the above argument against it. We describe how to interpret data in terms of MTE belief functions, how to reason from data about conditional belief functions, how to generate a random sample out of an MTE model, how to derive an MTE model from data, and how to compare the results of reasoning in an MTE model with reasoning from data. It is claimed in this paper that the MTE is suitable for modelling some types of destructive processes.
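One way to read 'generating a random sample out of an MTE model' in random-set terms (my sketch, not the paper's own construction) is to draw set-valued observations with probability equal to the mass of each focal element, so that empirical frequencies converge to the mass assignment:

```python
import random

# Mass assignment on the frame {'a', 'b', 'c'}, keyed by focal elements.
mass = {
    frozenset({'a'}): 0.5,
    frozenset({'a', 'b'}): 0.3,
    frozenset({'a', 'b', 'c'}): 0.2,
}

def sample_focal_elements(mass, n, rng=None):
    """Draw n set-valued observations: focal element A with probability m(A)."""
    rng = rng or random.Random(0)
    events, weights = zip(*mass.items())
    return rng.choices(events, weights=weights, k=n)

sample = sample_focal_elements(mass, 10000)
# The empirical frequency of each focal element approaches its mass.
for A in mass:
    print(sorted(A), sample.count(A) / len(sample))
```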
IEEE Transactions on Systems, Man, and Cybernetics, Part B, 2007
In this paper, we analyze, from a geometric perspective, the relations between belief and probability functions in the framework of the geometric approach to the theory of evidence. Starting from the case of binary domains, we identify and study three major geometric entities relating a generic belief function (b.f.) to the set of probabilities P: 1) the dual line connecting belief and plausibility functions; 2) the orthogonal complement of P; and 3) the simplex of consistent probabilities. Each of them is in turn associated with a different probability measure that depends on the original b.f. We focus in particular on the geometry and properties of the orthogonal projection of a b.f. onto P and of its intersection probability, provide their interpretations in terms of degrees of belief, and discuss their behavior with respect to affine combination.
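As a concrete companion to the abstract (a sketch under my own reading of the geometric-approach literature, not code from the paper): the intersection probability assigns each outcome the same fraction β of its belief-plausibility interval, giving one way of mapping a belief function to a point of P:

```python
def plausibility(mass, event):
    """pl(A): total mass of focal elements intersecting A."""
    return sum(m for B, m in mass.items() if B & event)

def intersection_probability(mass, frame):
    """Each outcome x gets bel({x}) plus the same fraction beta of its
    probability interval [bel({x}), pl({x})].  (Undefined for Bayesian
    mass functions, where every interval has zero width.)"""
    bel = {x: mass.get(frozenset({x}), 0.0) for x in frame}
    pl = {x: plausibility(mass, frozenset({x})) for x in frame}
    beta = (1 - sum(bel.values())) / sum(pl[x] - bel[x] for x in frame)
    return {x: bel[x] + beta * (pl[x] - bel[x]) for x in frame}

m = {frozenset({'a'}): 0.2, frozenset({'a', 'b'}): 0.8}
print(intersection_probability(m, {'a', 'b'}))  # {'a': 0.6, 'b': 0.4}
```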
The theory of belief functions, sometimes referred to as evidence theory or Dempster-Shafer theory, was first introduced by Arthur P. Dempster in the context of statistical inference, and was later developed by Glenn Shafer into a general framework for modelling epistemic uncertainty. Belief theory and the closely related random set theory form a natural framework for modelling situations in which data are missing or scarce: think of extremely rare events such as volcanic eruptions or power plant meltdowns, of problems subject to huge uncertainty due to the number and complexity of the factors involved (e.g. climate change), but also of the all-important issue of generalisation from small training sets in machine learning. This short talk, abstracted from an upcoming half-day tutorial at IJCAI 2016, is designed to introduce to non-experts the principles and rationale of random sets and belief function theory, review its rationale in the context of the frequentist and Bayesian interpretations of probability as well as in relation to the other main approaches to non-additive probability, survey the key elements of the methodology and its most recent developments, and discuss current trends in both its theory and applications. Finally, a research programme for the future is outlined, which includes a robustification of Vapnik's statistical learning theory for an Artificial Intelligence 'in the wild'.
International Journal of Finance, Entrepreneurship & Sustainability
The main purpose of this article is to introduce the Dempster-Shafer (DS) theory of belief functions. The DS theory is founded on the mathematical theory of probability and provides a broader framework, reducing to probability theory under a special condition. In addition, the article illustrates the problems that arise in representing pure positive evidence, pure negative evidence, and ambiguity under probability theory, and shows how these problems are resolved under DS theory. Next, the article describes and illustrates Dempster's rule of combination for combining two or more items of evidence. The article also introduces the Evidential Reasoning approach and its applications to various disciplines such as accounting, auditing, information systems, and information quality. Examples are provided where DS theory is being used to develop AI and expert systems both within and outside the business disciplines.
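A worked toy example of the points above (my own, using a hypothetical audit scenario): mass committed to the whole frame represents ambiguity rather than negative evidence, something a single additive probability cannot express, and Dempster's rule renormalises away the conflict between opposing items of evidence.

```python
# Hypothetical audit frame: E = material error, nE = no material error.
m1 = {('E',): 0.7, ('E', 'nE'): 0.3}    # affirmative evidence, 0.3 uncommitted
m2 = {('nE',): 0.4, ('E', 'nE'): 0.6}   # negative evidence, 0.6 uncommitted

# Dempster's rule by hand: intersect focal elements, renormalise conflict.
conflict = 0.7 * 0.4                    # {E} meets {nE} in the empty set
k = 1 - conflict                        # 0.72
m12 = {
    ('E',): (0.7 * 0.6) / k,            # 0.42 / 0.72 ~ 0.583
    ('nE',): (0.3 * 0.4) / k,           # 0.12 / 0.72 ~ 0.167
    ('E', 'nE'): (0.3 * 0.6) / k,       # 0.18 / 0.72 = 0.25
}
print(m12)
```

Note how the combined mass still keeps 0.25 on the whole frame: the remaining ambiguity is tracked explicitly instead of being forced into the two singletons.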
Intelligent Data Analysis, 2010
In this paper we introduce three alternative combinatorial formulations of the theory of evidence (ToE), by proving that both plausibility and commonality functions share with belief functions the structure of 'sum functions'. We compute their Moebius inverses, which we call basic plausibility and commonality assignments. In the framework of the geometric approach to uncertainty measures, the equivalence of the associated formulations of the ToE is mirrored by the geometric congruence of the related simplices. We can then describe the point-wise geometry of these sum functions in terms of rigid transformations mapping them onto each other. Combination rules can be applied to plausibility and commonality functions through their Moebius inverses, leading to interesting applications of such inverses to the probabilistic transformation problem.
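For instance (a sketch of the standard definitions, not the paper's code): the commonality function q(A) = Σ_{B ⊇ A} m(B) is a sum function whose Moebius inverse recovers the mass assignment via m(A) = Σ_{B ⊇ A} (−1)^{|B∖A|} q(B):

```python
from itertools import chain, combinations

def subsets_of(frame):
    """All subsets of the frame (including the empty set), as frozensets."""
    return [frozenset(s) for s in chain.from_iterable(
        combinations(sorted(frame), r) for r in range(len(frame) + 1))]

frame = frozenset({'a', 'b'})
mass = {frozenset({'a'}): 0.5, frame: 0.5}

# Commonality: q(A) = total mass of supersets of A.
q = {A: sum(m for B, m in mass.items() if B >= A) for A in subsets_of(frame)}

# Moebius inversion recovers the mass assignment from q.
m_back = {A: sum((-1) ** len(B - A) * q[B]
                 for B in subsets_of(frame) if B >= A)
          for A in subsets_of(frame)}
print(m_back)   # matches the original mass assignment, zero elsewhere
```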
Canadian Journal of Statistics, 1990
The Dempster-Shafer theory of belief functions is a method of quantifying uncertainty that generalizes probability theory. We review the theory of belief functions in the context of statistical inference. We mainly focus on a particular belief function based on the likelihood function and its application to problems with partial prior information. We also consider connections to upper and lower probabilities and to Bayesian robustness.
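The likelihood-based belief function mentioned here is consonant, with the plausibility of a hypothesis set given by its best-case relative likelihood; a minimal sketch under that reading (my own), using a hypothetical binomial experiment with 7 successes in 10 trials:

```python
# Contour function = relative likelihood L(theta) / max L over a grid.
thetas = [i / 100 for i in range(1, 100)]
L = {t: t ** 7 * (1 - t) ** 3 for t in thetas}
Lmax = max(L.values())

def pl(A):
    """Plausibility of a set A of parameter values: best relative likelihood in A."""
    return max(L[t] for t in A) / Lmax

print(pl([t for t in thetas if t >= 0.5]))  # 1.0: the MLE 0.7 lies in A
print(pl([t for t in thetas if t <= 0.3]))  # small: data speak against theta <= 0.3
```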