This paper critiques David Christensen's proposed measure S for confirmation in light of his concerns regarding the limitations of standard measures, specifically the difference measure. It argues that standard measures can successfully navigate the so-called "problem of old evidence" and reveals shortcomings in Christensen's measure S. It then analyses how both the standard measures and Christensen's measure are affected by the probability of the evidence, and calls into question the feasibility of an adequate probabilistic measure of confirmation.
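For orientation, the difference measure and Christensen's measure S are standardly written as follows (a reconstruction in common notation, not a quotation from the paper):

    d(H, E) = P(H \mid E) - P(H)              % the standard difference measure
    S(H, E) = P(H \mid E) - P(H \mid \neg E)  % Christensen's measure S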
Philosophy of Science, 1999
Contemporary Bayesian confirmation theorists measure degree of (incremental) confirmation using a variety of non-equivalent relevance measures. As a result, a great many of the arguments surrounding quantitative Bayesian confirmation theory are implicitly sensitive to choice of measure of confirmation. Such arguments are enthymematic, since they tacitly presuppose that certain relevance measures should be used (for various purposes) rather than other relevance measures that have been proposed and defended in the philosophical literature. I present a survey of this pervasive class of Bayesian confirmation-theoretic enthymemes, and a brief analysis of some recent attempts to resolve the problem of measure sensitivity.
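As an illustration of the measure-sensitivity problem described here (our sketch, not an example from the paper; all probability assignments below are hypothetical), the following Python snippet computes the difference measure d and the log-likelihood-ratio measure l for two evidence/hypothesis pairs and shows that the two measures rank them in opposite orders.

    # Illustrative sketch only: two standard relevance measures can disagree
    # about which of two (hypothesis, evidence) pairs is better confirmed.
    import math

    def confirmation(p_h, p_e_given_h, p_e_given_not_h):
        """Return the difference measure d and the log-likelihood-ratio measure l."""
        p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h   # law of total probability
        p_h_given_e = p_h * p_e_given_h / p_e                   # Bayes' theorem
        d = p_h_given_e - p_h
        l = math.log(p_e_given_h / p_e_given_not_h)
        return d, l

    # Hypothetical pair A: probable hypothesis, weakly diagnostic evidence.
    d_a, l_a = confirmation(p_h=0.5, p_e_given_h=0.9, p_e_given_not_h=0.5)
    # Hypothetical pair B: improbable hypothesis, strongly diagnostic evidence.
    d_b, l_b = confirmation(p_h=0.01, p_e_given_h=0.9, p_e_given_not_h=0.1)

    print(d_a > d_b)  # True: d says A is confirmed more strongly than B
    print(l_a > l_b)  # False: l says B is confirmed more strongly than A

Any argument that tacitly relies on one of these rankings is, in the paper's terms, an enthymeme whose force depends on the choice of measure.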
Journal for General Philosophy of Science, 2013
There are different Bayesian measures of the degree of confirmation of a hypothesis H with respect to a particular piece of evidence E. Zalabardo (Analysis 69:630-635, 2009) is a recent attempt to defend the likelihood-ratio measure (LR) against the probability-ratio measure (PR). The main disagreement between LR and PR concerns their sensitivity to prior probabilities. Zalabardo invokes intuitive plausibility as the appropriate criterion for choosing between them. Furthermore, he claims that it favours the ordering of evidence/hypothesis pairs generated by LR. We will argue, however, that the intuitive non-numerical example provided by Zalabardo does not show that prior probabilities do not affect the degree of confirmation. On account of this, we conclude that there is no compelling reason to endorse LR qua measure of degree of confirmation. On the other hand, certain technical considerations still favour PR.
In the recent literature on confirmation, there are two leading approaches to the provision of a probabilistic measure of the degree to which a hypothesis is confirmed by evidence. The first is to construe the degree to which evidence E confirms hypothesis H as a function that is directly proportional to p(H | E) and inversely proportional to p(H). I shall refer to this as the probability approach. The second approach construes the notion as a function that is directly proportional to the true-positive rate, p(E | H) (the probability of the evidence if the hypothesis is true), and inversely proportional to the false-positive rate, p(E | ~H) (the probability of the evidence if the hypothesis is false). These reverse conditional probabilities, of the evidence given the truth or falsehood of the hypothesis, are sometimes known as likelihoods. I shall refer to the approach to confirmation that uses them as the likelihood approach.
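Written out explicitly (our notation, not the authors'), the two approaches discussed in this abstract and the previous one correspond to the probability-ratio and likelihood-ratio measures:

    PR(H, E) = \frac{P(H \mid E)}{P(H)} = \frac{P(E \mid H)}{P(E)}
    LR(H, E) = \frac{P(E \mid H)}{P(E \mid \neg H)}

Since P(E) = P(H)P(E | H) + P(~H)P(E | ~H), PR depends on the prior P(H) while LR depends only on the two likelihoods. For instance, with P(E | H) = 0.9 and P(E | ~H) = 0.1, LR = 9 whatever the prior, whereas PR is 1.8 when P(H) = 0.5 and about 8.3 when P(H) = 0.01; this is the sensitivity to prior probabilities on which the Zalabardo debate turns.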
SSRN Electronic Journal, 2000
This note argues that a representation of the epistemic state of the individual through a non-additive measure provides a novel account of Keynes's view of probability theory proposed in his Treatise on Probability. The paper shows, first, that Keynes's "non-numerical probabilities" can be interpreted in terms of decision weights and distortions of the prior probabilities. Second, that the degree of non-additivity of the probability measure can account for the confidence in the assessment without any reference to a second-order probability. And, third, that the criterion for decision making under uncertainty derived in the non-additive literature incorporates a measure of the degree of confidence in the probability assessment. The paper emphasises the Keynesian derivation of Ellsberg's analysis: the parallel between Keynes and Ellsberg is deemed significant since Ellsberg's insights represent the main starting point of the modern developments of decision theory under uncertainty and ambiguity.
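A minimal numerical illustration of the kind of non-additive representation at issue (ours, not the paper's): let A be an event and \nu a capacity with

    \nu(\emptyset) = 0, \quad \nu(\Omega) = 1, \quad \nu(A) = \nu(\neg A) = 0.3

Since \nu(A) + \nu(\neg A) = 0.6 < 1, \nu is not an additive probability, and the gap 1 - \nu(A) - \nu(\neg A) = 0.4 can be read as an index of the agent's limited confidence in the assessment, with no appeal to a second-order probability.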
Philosophy and Phenomenological Research, 2016
Jim Joyce argues for two amendments to probabilism. The first is the doctrine that credences are rational, or not, in virtue of their accuracy or "closeness to the truth" (1998). The second is a shift from a numerically precise model of belief to an imprecise model represented by a set of probability functions (2010). We argue that both amendments cannot be satisfied simultaneously. To do so, we employ a (slightly generalized) impossibility theorem of Seidenfeld, Schervish, and Kadane (2012), who show that there is no strictly proper scoring rule for imprecise probabilities. The question then is what should give way. Joyce, who is well aware of this no-go result, thinks that a quantifiability constraint on epistemic accuracy should be relaxed to accommodate imprecision. We argue instead that another Joycean assumption, called strict immodesty, should be rejected, and we prove a representation theorem that characterizes all "mildly" immodest measures of inaccuracy.
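For comparison, the precise-probability case works as follows (our illustration, using the Brier score; the paper's argument concerns imprecise credal sets): if X is the indicator of a proposition, your credence in it is p, and you report q, the expected Brier score is

    E_p[(q - X)^2] = p(1 - q)^2 + (1 - p)q^2

which is uniquely minimized at q = p, so a precise credence expects its own report to be the most accurate. The Seidenfeld-Schervish-Kadane result cited above says that no strictly proper analogue of this exists for imprecise probabilities, which is what forces the choice between relaxing quantifiability and rejecting strict immodesty.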
2005
Hamminga exploits the structuralist terminology adopted in ICR in defining the relations between confirmation, empirical progress and truth approximation. In his paper, the fundamental problem of Lakatos' classical concept of scientific progress is clarified, and its way of evaluating theories is compared to the real problems of scientists who face the far from perfect theories they wish to improve and defend against competitors. Among other things, Hamminga presents a provocative diagnosis of Lakatos' notion of "novel facts", arguing that it is not so much related to Popper's notion of the "empirical content" of a theory as to its allowed possibilities. Miller examines the view, advanced by McAllister (1996) and endorsed, with new arguments, by Kuipers (2002), that aesthetic criteria may reasonably play a role in the selection of scientific theories. After evaluating the adequacy of Kuipers' approach to truth approximation, Miller discusses Kuipers' account of the nature and role of empirical and aesthetic criteria in the evaluation of scientific theories and, in particular, the thesis that "beauty can be a road to truth". Finally, he examines McAllister's doctrine that scientific revolutions are characterized above all by novelty of aesthetic judgments.
Ly and Wagenmakers (in press b) critiqued the Full Bayesian Significance Test (FBST) and the associated statistic FBST ev: similar to the frequentist p-value, FBST ev cannot quantify evidence for the null hypothesis, allows sampling to a foregone conclusion, and suffers from the Jeffreys-Lindley paradox. In response, Kelter (in press) suggested that the critique is based on a measure-theoretic premise that is often inappropriate in practice, namely the assignment of non-zero prior mass to a point null hypothesis. Here we argue that the key aspects of our initial critique remain intact when the point-null hypothesis is replaced either by a peri-null hypothesis or by an interval-null hypothesis; hence, the discussion on the validity of a point-null hypothesis is a red herring. We suggest that it is tempting yet fallacious to test a hypothesis by estimating a parameter that is part of a different model. By rejecting any null hypothesis before it is tested, FBST is begging the question. ...
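The Jeffreys-Lindley paradox mentioned above can be seen in a few lines of code (a generic normal-mean illustration of ours, not the FBST computation itself; the prior scale tau is an arbitrary choice): holding the p-value fixed at about 0.05 while the sample size grows, the Bayes factor in favour of the point null grows without bound.

    # Illustrative sketch of the Jeffreys-Lindley paradox (not the FBST itself).
    # Model: x_i ~ N(mu, 1); H0: mu = 0 vs H1: mu ~ N(0, tau^2).
    # Keep the z-statistic fixed at 1.96 (two-sided p ~ 0.05) and let n grow.
    from math import sqrt
    from scipy.stats import norm

    tau = 1.0
    for n in (10, 100, 10_000, 1_000_000):
        xbar = 1.96 / sqrt(n)                                    # sample mean giving z = 1.96
        m0 = norm.pdf(xbar, loc=0, scale=sqrt(1 / n))            # marginal likelihood under H0
        m1 = norm.pdf(xbar, loc=0, scale=sqrt(tau**2 + 1 / n))   # marginal likelihood under H1
        print(n, m0 / m1)   # Bayes factor BF01; increases with n despite p ~ 0.05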
Information Processing and Management of Uncertainty in Knowledge-Based Systems, 2020
We consider a set of comparative probability judgements over a finite possibility space and study the structure of the set of probability measures that are compatible with them. We relate the existence of some compatible probability measure to Walley's behavioural theory of imprecise probabilities, and introduce a graphical representation that allows us to bound, and in some cases determine, the extreme points of the set of compatible measures. In doing this, we generalise some earlier work by Miranda and Destercke on elementary comparisons.
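For the simplest case of weak comparative judgements, the existence of a compatible probability measure can be checked by linear programming. The sketch below is our illustration (not the construction in the paper, which goes further and bounds the extreme points of the compatible set); the possibility space and the judgements are made up.

    # Sketch: feasibility check for weak comparative judgements P(A) >= P(B)
    # on a finite possibility space, via linear programming.
    import numpy as np
    from scipy.optimize import linprog

    omega = [0, 1, 2]                       # possibility space {w0, w1, w2}
    # Judgements as pairs (A, B) meaning "A is at least as probable as B".
    judgements = [({0}, {1}), ({1}, {2}), ({1, 2}, {0})]

    def indicator(event):
        return np.array([1.0 if w in event else 0.0 for w in omega])

    # Each judgement P(A) >= P(B) becomes (1_B - 1_A) . p <= 0.
    A_ub = np.array([indicator(b) - indicator(a) for a, b in judgements])
    b_ub = np.zeros(len(judgements))
    A_eq = np.ones((1, len(omega)))          # probabilities sum to one
    b_eq = np.array([1.0])

    res = linprog(c=np.zeros(len(omega)), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(omega))
    print(res.success, res.x)   # a compatible measure, if one exists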
Acta Analytica, 2014
This paper presents a new argument for the likelihood ratio measure of confirmation by showing that one of the adequacy criteria used in another argument (Zalabardo 2009) can be replaced by a more plausible and better supported criterion which is a special case of the weak likelihood principle. This new argument is also used to show that the likelihood ratio measure is to be preferred to a measure that has recently received support in the literature.
Gordon Belot argues that Bayesian theory is epistemologically immodest. In response, we show that the topological conditions that underpin his criticisms of asymptotic Bayesian conditioning are self-defeating. They require extreme a priori credences regarding, for example, the limiting behavior of observed relative frequencies. We offer a different explication of Bayesian modesty using a goal of consensus: rival scientific opinions should be responsive to new facts as a way to resolve their disputes. Also we address Adam Elga's rebuttal to Belot's analysis, which focuses attention on the role that the assumption of countable additivity plays in Belot's criticisms. 1. Introduction. Consider the following compound result about asymptotic statistical inference. A community of Bayesian investigators who begin an investigation with conflicting opinions about a common family of statistical hypotheses use shared evidence to achieve a consensus about which hypothesis is the true one. Specifically, suppose the investigators agree on a partition of statistical hypotheses and share observations of an increasing sequence of random samples with respect to whichever is the true statistical hypothesis from this partition. Then, under various combinations of formal conditions that we review in this essay, ex ante (i.e., before accepting the new evidence) it is practically certain that each of the investigators' conditional probabilities approach 1 for the one true hypothesis in the partition. The result is compound: individual investigators achieve asymptotic certainty about the unknown, true statistical hypothesis. Second, the shared ev…
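The compound consensus result sketched above is easy to observe numerically (a toy simulation of ours, not the authors' formal setting; the hypotheses, priors, and sample size are arbitrary choices): two investigators with conflicting priors over a two-element partition of Bernoulli hypotheses update on a shared sample and both end up nearly certain of the true hypothesis.

    # Toy illustration of asymptotic consensus: two Bayesian investigators,
    # different priors over the partition {theta = 0.4, theta = 0.6},
    # shared Bernoulli(0.6) data.
    import random

    random.seed(0)
    thetas = (0.4, 0.6)                 # the partition of statistical hypotheses
    true_theta = 0.6
    priors = [(0.9, 0.1), (0.1, 0.9)]   # conflicting prior opinions

    data = [1 if random.random() < true_theta else 0 for _ in range(2000)]

    for prior in priors:
        post = list(prior)
        for x in data:
            likes = [t if x == 1 else 1 - t for t in thetas]
            post = [p * l for p, l in zip(post, likes)]
            total = sum(post)
            post = [p / total for p in post]
        print(prior, "->", [round(p, 4) for p in post])  # both end near (0, 1): consensus on theta = 0.6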