2015, The Encyclopedia of Clinical Psychology
Confirmation and falsification are different strategies for testing theories and for characterizing the outcomes of those tests. Roughly speaking, confirmation is the act of using evidence or reason to verify or certify that a statement is true, or at least approximately true, whereas falsification is the act of classifying a statement as false in the light of observation reports. After tracing the intellectual history of confirmation and falsificationism back to Plato and Aristotle, I survey some of the main controversial issues and arguments that bear on the choice between these strategies: the Raven Paradox, the Duhem/Quine problem, and the Grue Paradox. Finally, I outline an evolutionary criticism of inductive Bayesian approaches based on my assumption of doxastic involuntarism.
arXiv: History and Philosophy of Physics, 2015
Corroboration versus confirmation was a prominent philosophical debate of the 20th century. Many philosophers were involved in this debate, most notably the proponents of confirmation, led by Hempel, and their most powerful critics, the falsificationists, led by Popper. In both cases, however, the debates were grounded primarily in arguments from logic. In this paper we review these debates and suggest that a different perspective on falsification versus confirmation can be gained by grounding the arguments in cognitive psychology.
Boston Studies in the Philosophy of Science, 2010
The Paradox of the Ravens (a.k.a. The Paradox of Confirmation) is indeed an old chestnut. A great many things have been written and said about this paradox and its implications for the logic of evidential support. The first part of this paper will provide a brief survey of the early history of the paradox. This will include the original formulation of the paradox and the early responses of Hempel, Goodman, and Quine. The second part of the paper will describe attempts to resolve the paradox within a Bayesian framework, and show how to improve upon them. This part begins with a discussion of how probabilistic methods can help to clarify the statement of the paradox itself. And it describes some of the early responses to probabilistic explications. We then inspect the assumptions employed by traditional (canonical) Bayesian approaches to the paradox. These assumptions may appear to be overly strong. So, drawing on weaker assumptions, we formulate a new-and-improved Bayesian confirmation-theoretic resolution of the Paradox of the Ravens.
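The qualitative shape of the standard Bayesian resolution can be sketched in a toy model. All numbers below (population size, raven count, counts of non-black objects, the rival hypothesis "exactly one raven is non-black") are illustrative assumptions, not the paper's own model:

```python
# Toy Bayesian model of the Ravens Paradox (illustrative numbers only).
# A world of N objects containing R ravens; NB non-black non-ravens.
N, R = 1_000_000, 100          # ravens are rare
NB = 200_000                   # assumed count of non-black non-ravens

# H: all ravens are black.  H': exactly one raven is non-black.
# Compare likelihood ratios P(observation | H) / P(observation | H'):

# 1) Sample a raven at random and observe that it is black.
#    P(black | raven, H) = 1;  P(black | raven, H') = (R-1)/R.
lr_raven = 1.0 / ((R - 1) / R)            # = R/(R-1), noticeably > 1

# 2) Sample a non-black object at random and observe that it is not a raven.
#    P(non-raven | non-black, H) = 1; under H' there is one extra non-black
#    object (the non-black raven), so P(non-raven | non-black, H') = NB/(NB+1).
lr_nonblack = 1.0 / (NB / (NB + 1))       # = (NB+1)/NB, barely > 1

# Both observations favour H, but the non-black non-raven only negligibly.
assert lr_raven > lr_nonblack > 1.0
```

This is the familiar quantitative moral: a white shoe does confirm "all ravens are black", but to a vanishingly small degree compared with a black raven, because ravens are rare among objects and non-black objects are plentiful.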
Philosophies, 2024
When an empirical prediction E of hypothesis H is observed to be true, such observation is said to confirm, i.e., support (although not prove) the truth of the hypothesis. But why? What justifies the claim that such evidence supports the hypothesis? The widely accepted answer is that it is justified by induction. More specifically, it is commonly held that the following argument: (1) If H then E; (2) E; (3) Therefore, (probably) H (here referred to as the 'hypothetico-deductive confirmation argument') is inductively strong. Yet this argument looks nothing like an inductive generalisation, i.e., it doesn't seem inductive in the term's traditional, enumerative sense. If anything, it has the form of the fallacy of affirming the consequent. This paper aims to solve this puzzle. True, in recent decades, 'induction' has sometimes been used more broadly to encompass any non-deductive, i.e., ampliative, argument. Applying Bayesian confirmation theory has famously demonstrated that hypothetico-deductive confirmation is indeed inductive in this broader, ampliative sense. Nonetheless, it will be argued here that, despite appearance, hypothetico-deductive confirmation can also be recast as a strong enumerative induction. Hence, by being enumeratively inductive, the scientific method of hypothetico-deductive confirmation is justified through this traditional, more restrictive type of induction rather than merely by ampliative induction.
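The Bayesian half of this point admits a minimal numeric sketch. When H entails E (so P(E|H) = 1) and E is not otherwise guaranteed, observing E must raise the probability of H; the prior and P(E|not-H) below are illustrative numbers, not drawn from the paper:

```python
# Sketch: if H entails E, i.e. P(E|H) = 1, and P(E|not-H) < 1,
# then conditioning on E raises the probability of H.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H) P(H) / P(E), with P(E) by total probability."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

prior = 0.2                            # illustrative prior for H
post = posterior(prior, 1.0, 0.5)      # H entails E; E only 50% likely otherwise
assert post > prior                    # E confirms H in the incremental sense
```

So "affirming the consequent" is deductively invalid but probabilistically ampliative: the posterior here is 1/3, up from a prior of 0.2.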
Bayesian confirmation theories (BCTs) might be the best standing theories of confirmation to date, but they are certainly not paradox-free. Here I recognize that BCTs' appeal mainly comes from the fact that they capture some of our intuitions about confirmation better than those theories that came before them and that the superiority of BCTs is sufficiently justified by those advantages. Instead, I will focus on Sylvan and Nola's claim that it is desirable that our best theory of confirmation be as paradox-free as possible. For this reason, I will show that, as they respond to different interests, the project of the BCTs is not incompatible with Sylvan and Nola's project of a paradox-free confirmation logic. In fact, it will turn out that, provided we are ready to embrace some degree of non-classicality, both projects complement each other nicely.
International Studies in the Philosophy of Science, 2020
Philosophers such as Goodman (1954), Scheffler (1963) and Glymour (1983) aim to answer the Paradox of the Ravens by distinguishing between confirmation simpliciter and selective confirmation. The latter evidential relation occurs when data not only confirms a hypothesis, but also disconfirms one of its 'rival' hypotheses. The appearance of paradox is allegedly due to a conflation of valid intuitions about selective confirmation with our intuitions about confirmation simpliciter. Theories of evidence, like the standard Bayesian analysis, should only be understood as explications of confirmation simpliciter; when we disambiguate between selective confirmation and confirmation simpliciter, there is no longer a paradox for these theories. Bandyopadhyay and Brittan (2006) have revived this answer within a sophisticated Bayesian analysis of confirmation and severe testing. I argue that, despite the attractive features of the Selective Confirmation Answer, there is no analysis of this evidential relation that satisfactorily answers the Paradox of the Ravens, and the prospects for any answer along these lines are bleak. We must look elsewhere.
Minnesota studies in the philosophy of science, 1983
The Bayesian framework is intended, at least in part, as a formalization and systematization of the sorts of reasoning that we all carry on at an intuitive level. One of the most attractive features of the Bayesian approach is the apparent ease and elegance with which it can deal with typical strategies for the confirmation of hypotheses in science. Using the apparatus of the mathematical theory of probability, the Bayesian can show how the acquisition of evidence can result in increased confidence in hypotheses, in accord with our best intuitions. Despite the obvious attractiveness of the Bayesian account of confirmation, though, some philosophers of science have resisted its manifest charms and raised serious objections to the Bayesian framework. Most of the objections have centered on the unrealistic nature of the assumptions required to establish the appropriateness of modeling an individual's beliefs by way of a point-valued, additive function. But one recent attack is of a different sort. In a recent book on confirmation theory, Clark Glymour has presented an argument intended to show that the Bayesian account of confirmation fails at what it was thought to do best. Glymour claims that there is an important class of scientific arguments, cases in which we are dealing with the apparent confirmation of new hypotheses by old evidence, for which the Bayesian account of confirmation seems hopelessly inadequate. In this essay I shall examine this difficulty, what I call the problem of old evidence.
2008
Whether scientists test their hypotheses as they ought to has interested both cognitive psychologists and philosophers of science. Classic analyses of hypothesis testing assume that people should pick the test with the largest probability of falsifying their current hypothesis, while experiments have shown that people tend to select tests consistent with that hypothesis. Using two different normative standards, we prove that seeking evidence predicted by your current hypothesis is optimal when the hypotheses in question are deterministic and other reasonable assumptions hold. We test this account with two experiments using a sequential prediction task, in which people guess the next number in a sequence. Experiment 1 shows that people's predictions can be captured by a simple Bayesian model. Experiment 2 manipulates people's beliefs about the probabilities of different hypotheses, and shows that they confirm whichever hypothesis they are led to believe is most likely.
Journal for General Philosophy of Science, 2005
In spite of several attempts to explicate the relationship between a scientific hypothesis and evidence, the issue still cries out for a satisfactory solution. Logical approaches to confirmation, such as the hypothetico-deductive method and the positive instance account of confirmation, are problematic because they neglect the semantic dimension of hypothesis confirmation. Probabilistic accounts of confirmation are no better than logical approaches in this regard. An outstanding probabilistic account of confirmation, the Bayesian approach, for instance, is found to be defective in that it treats evidence as a formal entity; this creates the problem of the relevance of evidence to the hypothesis at issue, in addition to the difficulties arising from the subjective interpretation of probabilities. This essay aims to satisfy the need for a successful account of hypothesis confirmation by offering an original formulation based on the notion of instantiation of the relation posited by a hypothesis.
New Ideas in Psychology, 1991
Mathematical and Computational Forestry Natural Resource Sciences, 2010
The reason that Professor Zeide (Zeide, 2010) objects so strongly to Popper's falsification view of scientific theory is that Professor Popper and Professor Zeide are defining, interpreting and using the terms "induction", "verification" and "falsification" in different ways: they have different ontologies and metadata. There are many possible types of "induction" (logical, empirical-scientific, statistical, mathematical (which is deductive, not inductive), etc.), but Professor Zeide seems to be defining (by usage) an "induction" I do not recognise, and which seems not to be Popper's. Consider the certain demonstration that a particular swan is black (let us assume the bird is a bird and not a fish, that it is a swan and not a goose or an ugly duckling) by examining every feather (and assume a black swan is defined in terms of its feather colours). This is NOT "verification" in the inductive sense, as used by Popper (1968), Hume (1748) or the ancient Greeks (e.g. Sextus Empiricus, c. 200), even though the word may be used in this way in colloquial English (validate: "to prove that something is true", OED, 2010). "Inductive verification" is only meaningful in relation to the general assertion "all swans are white", which is posited as a result of induction from a number of particular cases of swans being white. Also, Popper (1968) is referring to general empirical-scientific theories when he says we can never be certain of the truth of scientific theories, the same point made about any logical induction by Hume (1748). Examples of such theories are Newton's and Einstein's theories of inertial motion and gravitation; the steady-state and big-bang models of the universe; quantum mechanics; dark matter and dark energy. History confirms that we cannot be certain of the absolute truth of these grand theories.
However, this does not mean we cannot be (reasonably) certain of simple empirical laws, like Boyle's law for gases, or Reineke's rule or the 3/2 self-thinning law for unthinned forests, since these are simple descriptive relationships of empirical data under specific conditions. Even so, these empirical descriptions should not be claimed to be "true", since the models are only descriptive and may not hold under extreme conditions. We may note that Aristotle (350 BC) believed induction could prove a "truth", but he also considered empirical evidence unnecessary for proof of truth. Many empirical descriptive relations are fitted from sample data using statistical methods, and of course we cannot be sure of the absolute truth of statistical estimates.
2011
A long-standing tradition in epistemology and the philosophy of science sees the notion of confirmation as a fundamental relationship between a piece of evidence E and a hypothesis H. A number of philosophical accounts of confirmation, moreover, have been cast, or at least could be cast, in terms of a formally defined model c(H, E) involving evidence and hypothesis. Ideally, a full-fledged and satisfactory confirmation model c(H, E) would meet a series of desiderata, including the following: (1) c(H, E) should be grounded on some simple and intuitively appealing "core intuition"; (2) c(H, E) should exhibit a set of properties which formally express sound intuitions; (3) it should be possible to specify the role and relevance of c(H, E) in science as well as in other forms of inquiry. In what follows we will focus on accounts of confirmation arising from the Bayesian framework and we will mainly address issues (1) and (2). Bayesianism arguably is a major theoretical perspective in confirmation...
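Two of the best-known Bayesian candidates for c(H, E) can be written down directly; the implementations below are illustrative sketches of the standard difference and log-ratio measures, not definitions taken from this paper:

```python
import math

# Two standard Bayesian confirmation measures, as functions of
# P(H|E) and P(H):
#   difference measure  d(H, E) = P(H|E) - P(H)
#   log-ratio measure   r(H, E) = log( P(H|E) / P(H) )

def d(p_h_given_e, p_h):
    return p_h_given_e - p_h

def r(p_h_given_e, p_h):
    return math.log(p_h_given_e / p_h)

# Both encode the same "core intuition" (desideratum 1): E confirms H
# exactly when it raises H's probability.
assert d(0.6, 0.4) > 0 and r(0.6, 0.4) > 0      # confirmation
assert d(0.2, 0.4) < 0 and r(0.2, 0.4) < 0      # disconfirmation
assert d(0.4, 0.4) == 0 and r(0.4, 0.4) == 0.0  # neutrality
```

The two measures agree on the qualitative verdict (confirm, disconfirm, neutral) but can disagree on orderings of degree, which is one reason the formal properties in desideratum (2) do real work in adjudicating between them.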
Analysis, 2013
In the world of philosophy of science, the dominant theory of confirmation is Bayesian. In the wider philosophical world, the idea of inference to the best explanation exerts a considerable influence. Here we place the two worlds in collision, using Bayesian confirmation theory to argue that explanatoriness is evidentially irrelevant.
Psychological Review, 1987
Strategies for hypothesis testing in scientific investigation and everyday reasoning have interested both psychologists and philosophers. A number of these scholars stress the importance of disconfirmation in reasoning and suggest that people are instead prone to a general deleterious "confirmation bias". In particular, it is suggested that people tend to test those cases that have the best chance of verifying current beliefs rather than those that have the best chance of falsifying them. We show, however, that many phenomena labeled "confirmation bias" are better understood in terms of a general positive test strategy. With this strategy, there is a tendency to test cases that are expected (or known) to have the property of interest rather than those expected (or known) to lack that property. This strategy is not equivalent to confirmation bias in the first sense; we show that the positive test strategy can be a very good heuristic for determining the truth or falsity of a hypothesis under realistic conditions. It can, however, lead to systematic errors or inefficiencies. The appropriateness of human hypothesis-testing strategies and prescriptions about optimal strategies must be understood in terms of the interaction between the strategy and the task at hand.
The British Journal for the Philosophy of Science, 2021
When a proposition is established, it can be taken as evidence for other propositions. Can the Bayesian theory of rational belief and action provide an account of establishing? I argue that it can, but only if the Bayesian is willing to endorse objective constraints on both probabilities and utilities, and willing to deny that it is rationally permissible to defer wholesale to expert opinion. I develop a new account of deference that accommodates this latter requirement.
Australasian Journal of Philosophy
According to the Dogmatism Puzzle presented by Gilbert Harman, knowledge induces dogmatism because, if one knows a proposition that p, one knows that any evidence against p is misleading, and therefore one can ignore such evidence when one gains it in the future. I try to offer a new solution to the puzzle by explaining why the principle is false that evidence known to be misleading can be ignored. I argue that knowing that some evidence is misleading doesn't always damage the credentials of the evidence, and therefore it doesn't always entitle one to ignore it. I also explain in what kinds of cases and to what degree such knowledge allows one to ignore evidence. Hopefully, through the discussion, we can not only better understand where the dogmatism puzzle goes wrong, but also better understand in what sense rational believers should rely on their evidence and when they can ignore it.
Gürol Irzık and Güven Güzeldere (eds.), Turkish Studies in the History and Philosophy of Science (Boston Studies in the Philosophy of Science, Vol. 244) (Dordrecht: Springer), pp.103-112, 2005
The British Journal for the Philosophy of Science, 2021
According to the Variety of Evidence Thesis (VET), items of evidence from independent lines of investigation are more confirmatory, ceteris paribus, than, e.g., replications of analogous studies. This thesis is known to fail (Bovens and Hartmann 2003; Claveau 2013). However, the results obtained by the former only concern instruments whose evidence is either fully random or perfectly reliable; in Claveau (2013), by contrast, unreliability is modelled as deterministic bias. In both cases, the unreliable instrument delivers totally irrelevant information. We present a model which formalises both reliability and unreliability differently. Our instruments are either reliable but affected by random error, or biased but not deterministically so. Bovens and Hartmann's results are counter-intuitive in that, in their model, a long series of consistent reports from the same instrument does not raise suspicion of "too-good-to-be-true" evidence. This happens precisely because they contemplate neither the role of systematic bias nor the unavoidable random error of reliable instruments. In our model the Variety of Evidence Thesis fails as well, but the area of failure is considerably smaller than in Bovens and Hartmann (2003) and Claveau (2013), and the thesis holds for (the majority of) realistic cases (that is, where biased instruments are very biased). The essential mechanism which triggers VET failure is the rate of false to true positives for the two kinds of instruments. Our emphasis is on modelling beliefs about sources of knowledge and their role in hypothesis confirmation, in interaction with dimensions of evidence such as variety and consistency.
Information, 2011
This article explores some open questions related to the problem of verification of theories in the context of empirical sciences by contrasting three epistemological frameworks. Each of these epistemological frameworks is based on a corresponding central metaphor, namely: (a) Neo-empiricism and the gambling metaphor; (b) Popperian falsificationism and the scientific tribunal metaphor; (c) Cognitive constructivism and the object as eigen-solution metaphor. Each of these epistemological frameworks has also historically co-evolved with a certain statistical theory and method for testing scientific hypotheses, respectively: (a) Decision-theoretic Bayesian statistics and Bayes factors; (b) Frequentist statistics and p-values; (c) Constructive Bayesian statistics and e-values. This article examines with special care the Zero Probability Paradox (ZPP), related to the verification of sharp or precise hypotheses. Finally, this article makes some remarks on Lakatos' view of mathematics as a quasi-empirical science.
SSRN Electronic Journal, 2000
Karl Popper rightly contests the possibility of a verification of basic statements. At the same time he firmly believes in the possibility of the growth of empirical knowledge. Knowledge growth, however, is only possible if empirical theories can be falsified. This raises the question of how theories can be falsified if a verification of those statements that falsify theories (i.e. basic statements) is not possible. This problem is often referred to as the "basic problem" or "problem of the empirical basis". In this paper I show that, from a logical point of view, a falsification of theories is possible without a verification of basic statements. Furthermore, I show that knowledge growth in the empirical sciences will be possible if two assumptions are valid. These assumptions can neither be proven nor falsified. However, they have to be postulated by everybody in everyday life. This paper is a summary of appendix 3 of Rainer Maurer (2004), Zwischen Erkenntnisinteresse und Handlungsbedarf - eine Einführung in die methodologischen Probleme der Wirtschaftswissenschaft, Metropolis-Verlag, Marburg.
2008
Evidence and Evolution has four chapters: (1) Evidence, (2) Intelligent Design, (3) Natural Selection, and (4) Common Ancestry. The first chapter develops tools that are used in the rest of the book, though more ideas about evidence are added along the way. It gives a brief introduction to Bayesianism and Likelihoodism. Bayesianism is based on using Bayes' Theorem, either to compute the probability that hypotheses have in the light of given evidence, or, more modestly, to say which of the competing hypotheses is most probable. The central concept of Likelihoodism is the Law of Likelihood: (LoL) Evidence E favors hypothesis H1 over hypothesis H2 if and only if Pr(E|H1) > Pr(E|H2). Note that the right-hand side of the (LoL) describes the probability of the evidence according to the different competing hypotheses (the symbol "|" means given), not how probable the hypotheses are, given that evidence. The (LoL) isn't a theorem; rather, it is a proposed explication. It is something I repeatedly put to work in Evidence and Evolution: comparing evolutionary theory and creationism (Chapter 2), comparing natural selection and drift (Chapter 3), and comparing common and separate ancestry (Chapter 4). After the introduction to Bayesianism and Likelihoodism, Chapter 1 continues with a discussion of Frequentist approaches to theory evaluation. I am critical of two prominent Frequentist tools: significance tests and Neyman-Pearson hypothesis testing. I locate the former procedure under the heading of probabilistic modus tollens; this is the idea that we should reject a hypothesis because it says that what we observe is improbable.
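The (LoL) comparison is easy to make concrete. The example below is an illustrative sketch, not from the book: ten coin flips yielding 8 heads, with a "biased towards heads" hypothesis (p = 0.8) compared against a "fair coin" hypothesis (p = 0.5):

```python
from math import comb

# Law of Likelihood: E favours H1 over H2 iff Pr(E|H1) > Pr(E|H2).
def favors(p_e_given_h1, p_e_given_h2):
    return p_e_given_h1 > p_e_given_h2

# Illustrative evidence E: 8 heads in 10 independent flips.
# Likelihood of E under a hypothesis that fixes the heads probability p:
def binom_lik(p, k=8, n=10):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Pr(E | p=0.8) ≈ 0.302 versus Pr(E | p=0.5) ≈ 0.044,
# so by the (LoL) the data favour the bias hypothesis.
assert favors(binom_lik(0.8), binom_lik(0.5))
```

Note that neither likelihood says how probable either hypothesis is; as the (LoL) states, it only compares how well each hypothesis predicts the evidence.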