1990, Journal of Statistical Planning and Inference
We investigate conditions under which conditional probability distributions approach each other and approach certainty as available data increase. Our purpose is to enhance Savage's (1954) results, in defense of 'personalism', about the degree to which consensus and certainty follow from shared evidence. For problems of consensus, we apply a theorem of Blackwell and Dubins (1962), regarding pairs of distributions, to compact sets of distributions and to cases of static coherence without dynamic coherence. We indicate how the topology under which the set of distributions is compact plays an important part in determining the extent to which consensus can be achieved. In our discussion of the approach to certainty, we give an elementary proof of the Lebesgue density theorem using a result of Halmos (1950). AMS Subject Classifications: Primary 60B10; secondary 62M20. Key words: Merging of opinions.
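As a toy illustration of the consensus phenomenon at issue (our own sketch, not a construction from the paper): two agents with different Beta priors, fed the same Bernoulli data stream, find their posterior means drawing together. The Blackwell–Dubins theorem concerns the far stronger almost-sure merging of whole predictive distributions.

```python
import random

# Toy illustration of merging: two agents with different Beta(a, b)
# priors observe the same Bernoulli(0.7) data stream; their posterior
# means converge to each other (and to the truth).
random.seed(0)

priors = {"agent A": (1.0, 1.0), "agent B": (8.0, 2.0)}  # Beta(a, b)
true_p = 0.7
successes = 0

for n in range(1, 1001):
    successes += 1 if random.random() < true_p else 0
    if n in (1, 10, 100, 1000):
        means = {name: (a + successes) / (a + b + n)
                 for name, (a, b) in priors.items()}
        gap = abs(means["agent A"] - means["agent B"])
        print(f"n={n:4d}  posterior means: "
              f"{means['agent A']:.4f} vs {means['agent B']:.4f}  gap={gap:.4f}")
```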
Philosophy of Science
Merging of opinions results underwrite Bayesian rejoinders to complaints about the subjective nature of personal probability. Such results establish that sufficiently similar priors achieve consensus in the long run when fed the same increasing stream of evidence. Initial subjectivity, the line goes, is of mere transient significance, giving way to intersubjective agreement eventually. Here, we establish a merging result for sets of probability measures that are updated by Jeffrey conditioning. This generalizes a number of different merging results in the literature. We also show that such sets converge to a shared, maximally informed opinion. Convergence to a maximally informed opinion is a (weak) Jeffrey conditioning analogue of Bayesian "convergence to the truth" for conditional probabilities. Finally, we demonstrate the philosophical significance of our study by detailing applications to the topics of dynamic coherence, imprecise probabilities, and probabilistic opinion pooling.
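For readers unfamiliar with the update rule at issue: Jeffrey conditioning shifts a prior P across a partition {E1, E2, …} to exogenously given new weights qi via P′(A) = Σi qi·P(A | Ei). A minimal discrete sketch (all names and numbers hypothetical; the paper's result concerns sets of measures so updated):

```python
# Jeffrey conditioning on a finite space: P'(A) = sum_i q_i * P(A | E_i),
# where {E_i} partitions the space and q_i are the new partition weights.
# All names and numbers here are illustrative.

prior = {"w1": 0.3, "w2": 0.2, "w3": 0.4, "w4": 0.1}
partition = {"E1": {"w1", "w2"}, "E2": {"w3", "w4"}}
new_weights = {"E1": 0.8, "E2": 0.2}  # exogenous shift; prior gives E1 only 0.5

def jeffrey_update(prior, partition, new_weights):
    posterior = {}
    for cell, members in partition.items():
        cell_prob = sum(prior[w] for w in members)
        for w in members:
            # renormalize within each cell, then rescale to the new weight
            posterior[w] = new_weights[cell] * prior[w] / cell_prob
    return posterior

print(jeffrey_update(prior, partition, new_weights))
# ≈ {'w1': 0.48, 'w2': 0.32, 'w3': 0.16, 'w4': 0.04}
```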
IEEE Transactions on Systems, Man, and Cybernetics, 1990
Questions of consensus among Bayesian investigators are discussed from two perspectives: 1) Inference: What are the agreements in posterior probabilities that result from increasing shared data? 2) Decisions: …
2000
We discuss the justifications of Bayesianism by Cox and Jaynes, and relate them to a recent critique by Halpern (JAIR, vol. 10 (1999), pp. 67–85). We show that a problem with Halpern's example is that a finite and natural refinement of the model leads to inconsistencies, and that the same is the case with every model in which rescalability to probability cannot be achieved. We also discuss other problems with the justifications and with the assumptions usually made on the function F describing the plausibility of a conjunction. We note that the commonly postulated monotonicity condition should be strengthened to strict monotonicity before Cox's justification becomes convincing. On the other hand, we note that the commonly assumed regularity requirements on F (like continuity) or on its domain (like denseness) are unnecessary.
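To make the object of the analysis concrete: F describes how the plausibility of a conjunction is composed from the plausibilities of its parts, and rescalability means some monotone transform carries F to the product rule. The following sketch (purely illustrative, assuming nothing beyond the product rule F(x, y) = xy) numerically checks the two properties discussed above, associativity and strict monotonicity:

```python
# Illustrative check, for the product rule F(x, y) = x * y, of two
# properties the Cox-style argument turns on: associativity and strict
# monotonicity in each argument (on (0, 1]).

import itertools

def F(x, y):
    return x * y

grid = [0.1, 0.3, 0.5, 0.7, 0.9, 1.0]

# Associativity: F(F(x, y), z) == F(x, F(y, z))
assert all(
    abs(F(F(x, y), z) - F(x, F(y, z))) < 1e-12
    for x, y, z in itertools.product(grid, repeat=3)
)

# Strict monotonicity in the first argument for fixed y > 0
assert all(
    F(x1, y) < F(x2, y)
    for y in grid
    for x1, x2 in itertools.combinations(grid, 2)  # pairs with x1 < x2
)
print("product rule: associative and strictly monotone on the grid")
```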
Stochastics An International Journal of Probability and Stochastic Processes, 2013
Let (Ω, B) be a measurable space, An ⊂ B a sub-σ-field and µn a random probability measure on (Ω, B), n ≥ 1. In various frameworks, one looks for a probability P on B such that µn is a regular conditional distribution for P given An for all n. Conditions for such a P to exist are given. The conditions are quite simple when (Ω, B) is a compact Hausdorff space equipped with the Borel or the Baire σ-field (as well as under similar assumptions). Applications to Gibbs measures and Bayesian statistics are given as well.

1. The problem. Let (Ω, B) be a measurable space and P the collection of all probability measures on B. For B ∈ B and any map µ : Ω → P, we let µ(B) denote the function on Ω given by ω → µ(ω)(B). We also let σ(µ) = σ(µ(B) : B ∈ B). Let P ∈ P and A ⊂ B a sub-σ-field. A regular conditional distribution (r.c.d.), for P given A, is a map µ : Ω → P such that µ(B) is a version of E_P(I_B | A) for all B ∈ B. For an r.c.d. to exist, it suffices that P is perfect and B countably generated. This note originates from the following question. Given a sub-σ-field A ⊂ B and a map µ : Ω → P such that σ(µ) ⊂ A, under what conditions is there P ∈ P such that µ is an r.c.d. for P given A? Such a question is easily answered. Once stated, however, it grows quickly into the following new question. Suppose we are given {An, µn : n ∈ I}, where An ⊂ B is a sub-σ-field, µn : Ω → P is a map such that σ(µn) ⊂ An, and I = {1, 2, …} or I = {1, …, k} for some k ≥ 1. Under what conditions is there P ∈ P such that µn is an r.c.d. for P given An for all n ∈ I? If such a P exists, the µn are said to be consistent. We aim to give conditions for the µn to be consistent; see Theorems 6 and 7. Throughout, M denotes the (possibly empty) set M = {P ∈ P : µn is an r.c.d. for P given An for all n ∈ I}.

2. Motivations. Reasonable conditions for consistency, if available, are of potential interest. As a first (heuristic) example suppose that, for each n ∈ I, expert n declares his/her opinions on a certain phenomenon conditionally on his/her information An. This produces a collection of random probability measures µn : Ω → P such that σ(µn) ⊂ An. In this framework, most of the literature focuses on how to summarize the …
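On a finite space the object in question is easy to exhibit: if A is generated by a finite partition, the map sending ω to P conditioned on the partition cell containing ω is an r.c.d. for P given A. A minimal sketch (names and numbers are ours, for illustration only):

```python
# Regular conditional distribution on a finite space, where the
# sub-sigma-field is generated by a partition: mu(omega) is P
# conditioned on the partition cell containing omega.

P = {"w1": 0.1, "w2": 0.2, "w3": 0.3, "w4": 0.4}
cells = [{"w1", "w2"}, {"w3", "w4"}]  # partition generating the sub-sigma-field

def rcd(omega):
    cell = next(c for c in cells if omega in c)
    mass = sum(P[w] for w in cell)
    return {w: (P[w] / mass if w in cell else 0.0) for w in P}

# mu(w1) concentrates on {w1, w2} in proportion to P:
print(rcd("w1"))  # ≈ {'w1': 0.333, 'w2': 0.667, 'w3': 0.0, 'w4': 0.0}

# Consistency check: averaging mu(omega)(B) over omega with weights P
# recovers P(B), i.e. mu(B) behaves as a version of E_P(I_B | A).
B = {"w2", "w3"}
avg = sum(P[w] * sum(rcd(w)[b] for b in B) for w in P)
print(round(avg, 6), sum(P[b] for b in B))  # both 0.5
```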
Theory and …, 2000
Given random quantities X and Y with continuous distribution functions F and G, respectively, it is always possible to construct a real function φ in such a way that φ(X) and φ(Y), also random quantities, both have the same distribution function, say H. This result of De Finetti introduces an alternative way to describe the 'opinion' of a group of experts about a continuous random quantity, via the construction of fields of coincidence of opinions (FCO). A field of coincidence of opinions is a finite union of intervals on which the opinions of the experts coincide with respect to the quantity of interest. We speculate on the (dis)advantages of fields of coincidence of opinions compared to the usual 'probability' measures of a group, and on their relation to a continuous version of the well-known Allais paradox.
Journal of The Royal Statistical Society Series B-statistical Methodology, 2001
We consider a sequence of posterior distributions based on a data-dependent prior (which we shall refer to as a pseudo-posterior distribution) and establish simple conditions under which the sequence is Hellinger consistent. It is shown how investigations into these pseudo-posteriors assist with the understanding of some true posterior distributions, including Pólya trees, the infinite-dimensional exponential family and mixture models.
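For orientation, Hellinger consistency is convergence of the (pseudo-)posteriors in the Hellinger metric; on a discrete support one common normalization is H(f, g)² = 1 − Σi √(fi·gi). A minimal sketch of the metric itself (our illustration, not the paper's consistency argument):

```python
from math import sqrt

def hellinger(f, g):
    """Hellinger distance between two discrete densities on a shared
    support, using the normalization H^2 = 1 - sum(sqrt(f_i * g_i))."""
    assert abs(sum(f) - 1) < 1e-9 and abs(sum(g) - 1) < 1e-9
    return sqrt(max(0.0, 1.0 - sum(sqrt(fi * gi) for fi, gi in zip(f, g))))

p = [0.25, 0.25, 0.25, 0.25]
q = [0.40, 0.30, 0.20, 0.10]
print(hellinger(p, p))             # 0.0: identical densities
print(round(hellinger(p, q), 4))   # strictly positive
```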
Springer eBooks, 2021
Information content is classically measured by entropy measures in probability theory, which can be interpreted as measures of the internal inconsistency of a probability distribution. While extensions of Shannon entropy have been proposed for quantifying the information content of a belief function, other lines of work have instead focused on the notion of consistency between sets. Relying on previously proposed general entropy measures of probability, we establish in this paper some links between the different measures of internal inconsistency of a belief function. We propose a general formulation which encompasses inconsistency measures derived from Shannon entropy as well as those derived from the N-consistency family of measures.
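As a baseline for the measures being extended here, the Shannon entropy of a discrete probability distribution is H(p) = −Σi pi·log₂ pi. A minimal sketch (probability case only; the belief-function measures above are outside this toy):

```python
from math import log2

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete distribution p."""
    assert abs(sum(p) - 1) < 1e-9
    return sum(-pi * log2(pi) for pi in p if pi > 0)  # 0*log 0 taken as 0

print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: maximal for two outcomes
print(shannon_entropy([1.0, 0.0]))   # 0.0: a point mass carries no surprise
print(round(shannon_entropy([0.7, 0.2, 0.1]), 4))
```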
Springer Series in Statistics, 2010
Until fairly recently, conjugate prior distributions served as essential tools in the implementation of Bayesian inference. Today, they occupy a much less prominent place in Bayesian theory and practice. We discuss the reasons for this, and argue that the devaluation of the role of conjugacy may be somewhat premature. To assist in this argument, we introduce a Bayesian version of the notion of "self-consistency" and discuss its relevance to conjugacy. The self-consistency of inferential procedures has arisen in a number of statistical contexts, most prominently in the area of survival analysis. In the present paper, self-consistency is defined in the context of Bayes estimation, relative to squared error loss, of a parameter ϑ of an exponential family of distributions. In this setting, a prior distribution π with mean ϑ* (or the corresponding Bayes estimator ϑ̂_π) is said to be self-consistent (SC) if the equation E(ϑ | ϑ̂ = ϑ*) = ϑ* is satisfied, where ϑ̂ is assumed to be a sufficient and unbiased estimator of ϑ. The SC condition simply states that if your experimental outcome agrees with your prior opinion about the parameter, then the experiment should not change your opinion about it. Surprisingly, many prior distributions, including both "objective" and proper priors, do not enjoy this property. We will study self-consistency and its extended form (the estimator T(ϑ̂) of ϑ is generalized SC relative to a prior π with mean ϑ* if T(ϑ*) = ϑ*). Instances where the broader notion of self-consistency is relevant include the study of linear Bayes estimators of ϑ and the study of Bayes estimators relative to loss functions other than squared error. The problem of estimating ϑ based on the prior opinion received from k experts will be examined, and the properties of a particular class of "consensus estimators" will be studied. Conditions ensuring a generalized form of self-consistency and an important convexity property of these estimators are identified. The paper concludes by applying Samaniego and Reneau's (1994) results to the class of consensus estimators considered here, leading to a characterization of the circumstances in which a consensus estimator will outperform classical procedures.
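The SC condition can be checked directly in a conjugate toy case (a worked example of ours, not taken from the paper): for a binomial proportion ϑ under a Beta(a, b) prior, the posterior mean after x successes in n trials is (a + x)/(a + b + n), and substituting x = nϑ* with ϑ* = a/(a + b) returns exactly ϑ*, so the conjugate Beta prior is self-consistent here:

```python
from fractions import Fraction

# SC check for a Beta(a, b) prior on a binomial proportion: if the
# observed proportion x/n equals the prior mean theta*, the posterior
# mean should again be theta*. Exact arithmetic via Fraction.

a, b, n = Fraction(2), Fraction(3), Fraction(10)
theta_star = a / (a + b)          # prior mean = 2/5
x = n * theta_star                # outcome agreeing with the prior: 4 successes

posterior_mean = (a + x) / (a + b + n)
print(theta_star, posterior_mean)       # 2/5 2/5
assert posterior_mean == theta_star     # the experiment leaves the opinion fixed
```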
Journal for General Philosophy of Science, 2013
There are different Bayesian measures for calculating the degree of confirmation of a hypothesis H by a particular piece of evidence E. Zalabardo (Analysis 69:630–635, 2009) is a recent attempt to defend the likelihood-ratio measure (LR) against the probability-ratio measure (PR). The main disagreement between LR and PR concerns their sensitivity to prior probabilities. Zalabardo invokes intuitive plausibility as the appropriate criterion for choosing between them, and claims that it favours the ordering of evidence/hypothesis pairs generated by LR. We argue, however, that the intuitive non-numerical example provided by Zalabardo does not show that prior probabilities do not affect the degree of confirmation. On this account, we conclude that there is no compelling reason to endorse LR qua measure of degree of confirmation. On the other hand, certain technical considerations still favour PR.
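The prior-sensitivity disagreement is easy to exhibit numerically: holding the likelihoods fixed, LR = P(E|H)/P(E|¬H) does not move with the prior P(H), whereas PR = P(H|E)/P(H) = P(E|H)/P(E) does. A small sketch (illustrative numbers of ours):

```python
# LR vs PR confirmation measures with fixed likelihoods and varying priors.
# LR = P(E|H) / P(E|~H) ignores the prior; PR = P(E|H) / P(E) does not.

p_e_given_h, p_e_given_not_h = 0.9, 0.3

for p_h in (0.5, 0.1, 0.01):
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability
    lr = p_e_given_h / p_e_given_not_h
    pr = p_e_given_h / p_e          # equals P(H|E)/P(H) by Bayes' theorem
    print(f"P(H)={p_h:4.2f}  LR={lr:.2f}  PR={pr:.3f}")

# LR stays 3.00 throughout; PR climbs toward 3.00 as the prior shrinks.
```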