1996, Theory and Decision
When is it possible to decide that a theory is confirmed by the available evidence? Probability theory seems at first sight to be the right framework for addressing this question, yet philosophers of science have not succeeded in building any probabilistic criterion of confirmation that is beyond dispute. We examine two of the main reasons for this failure. First, the principles of adequacy used by philosophers are often logically inconsistent with each other. We show in the paper how to build consistent subsets of these principles, and we identify three main subsets which embody the principles of adequacy for two main kinds of confirmation, namely relative confirmation and absolute confirmation. Second, we prove the impossibility of building any probabilistic criterion for absolute confirmation.
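As a gloss on the two kinds of confirmation (these are the standard probabilistic formulations; the paper's own principles of adequacy may differ in detail):

```latex
% Relative (incremental) confirmation: E confirms H iff E raises H's probability.
P(H \mid E) > P(H)

% Absolute confirmation: E confirms H iff it makes H sufficiently probable,
% for some fixed threshold k (e.g. k = 1/2).
P(H \mid E) > k
```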
Poznan Studies in the Philosophy of the Sciences and the Humanities, 2016
According to a recent proposal (Shogenji 2012, Atkinson 2012), the degree of justification for accepting a hypothesis should be given as a function of the conditional probability and the prior probability of the hypothesis. Shogenji and Atkinson impose a certain condition on measures of justification, and prove that there is only one function (up to monotonic transformations) that satisfies it. In this paper we show that Shogenji's function J is not suitable for expressing the degree of justification of hypotheses by evidence. We identify a particular property that the function J shares with most of the known measures of confirmation, and whose possession implies that, given some evidence, the function will always include a pair of mutually inconsistent hypotheses in the set of justifiably accepted hypotheses. We formulate a general criterion that predicts which functions are prone to this problem of mutually exclusive hypotheses, and we discuss which functions can escape it.
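A minimal numeric sketch of the difficulty, assuming Shogenji's measure in its usual form, J(H, E) = log(P(H|E)/P(H)) / log(1/P(H)), and purely hypothetical probability assignments:

```python
import math

def shogenji_j(posterior, prior):
    """Shogenji's justification measure: J = log(P(H|E)/P(H)) / log(1/P(H))."""
    return math.log(posterior / prior) / math.log(1.0 / prior)

# Hypothetical numbers: two mutually exclusive hypotheses with equal priors.
p_h1 = p_h2 = 0.10       # priors of H1 and H2 (they cannot both be true)
p_e_given_h1 = 0.90      # likelihood of the evidence E under H1
p_e_given_h2 = 0.90      # likelihood of E under H2
p_e_given_rest = 0.05    # likelihood of E under the catch-all alternative

# Total probability of the evidence: 0.22.
p_e = (p_h1 * p_e_given_h1 + p_h2 * p_e_given_h2
       + (1 - p_h1 - p_h2) * p_e_given_rest)

post1 = p_h1 * p_e_given_h1 / p_e   # P(H1|E) ~ 0.41
post2 = p_h2 * p_e_given_h2 / p_e   # P(H2|E) ~ 0.41

print(shogenji_j(post1, p_h1))  # ~ 0.61
print(shogenji_j(post2, p_h2))  # ~ 0.61
# With an acceptance threshold of, say, 0.5, both H1 and H2 count as
# justified, even though they are mutually exclusive and each posterior
# is below 0.5.
```

Any acceptance threshold below roughly 0.61 admits both hypotheses here, which is the kind of pattern the paper's general criterion is meant to predict.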
Metaphysics and Certainty: Beyond Justification, 2003
In an earlier paper I argued that for much of philosophy's history, the model of certainty provided by mathematics has offered a seductive but ultimately baleful paradigm of what philosophy should strive for. The employment of the mathematical model of certainty leads inevitably to failure and, consequent upon that, to scepticism, whether in the classical mode or in the post-modern style of Rorty and others. Where, then, does that leave us? If, as I argued, certainty of a mathematical kind is unattainable, and the scepticism that results from despair at its unattainability is ultimately self-stultifying, from what starting point, if any, can we begin our philosophical inquiries? My conclusion was that the ineliminable starting point of every inquiry could not but be the very act of inquiry itself. Furthermore, the conditions of the act of inquiry, and whatever follows ineluctably from it, cannot be gainsaid without performative contradiction.
Gürol Irzık and Güven Güzeldere (eds.), Turkish Studies in the History and Philosophy of Science (Boston Studies in the Philosophy of Science, Vol. 244) (Dordrecht: Springer), pp. 103-112, 2005
Journal for General Philosophy of Science, 2005
In spite of several attempts to explicate the relationship between a scientific hypothesis and evidence, the issue still cries out for a satisfactory solution. Logical approaches to confirmation, such as the hypothetico-deductive method and the positive-instance account of confirmation, are problematic because they neglect the semantic dimension of hypothesis confirmation. Probabilistic accounts of confirmation are no better in this regard. An outstanding probabilistic account of confirmation, the Bayesian approach, for instance, is found to be defective in that it treats evidence as a purely formal entity; this creates the problem of the relevance of evidence to the hypothesis at issue, in addition to the difficulties arising from the subjective interpretation of probabilities. This essay purports to satisfy the need for a successful account of hypothesis confirmation by offering an original formulation based on the notion of instantiation of the relation urged by a hypothesis.
2005
Hamminga exploits the structuralist terminology adopted in ICR to define the relations between confirmation, empirical progress and truth approximation. His paper clarifies the fundamental problem of Lakatos' classical concept of scientific progress, and compares its way of evaluating theories with the real problems of scientists who face the far-from-perfect theories they wish to improve and defend against competitors. Among other things, Hamminga presents a provocative diagnosis of Lakatos' notion of "novel facts", arguing that it is related not so much to Popper's notion of the "empirical content" of a theory as to its allowed possibilities. Miller examines the view, advanced by McAllister (1996) and endorsed with new arguments by Kuipers (2002), that aesthetic criteria may reasonably play a role in the selection of scientific theories. After evaluating the adequacy of Kuipers' approach to truth approximation, Miller discusses Kuipers' account of the nature and role of empirical and aesthetic criteria in the evaluation of scientific theories and, in particular, the thesis that "beauty can be a road to truth". Finally, he examines McAllister's doctrine that scientific revolutions are characterized above all by novelty of aesthetic judgments.
Synthese
After Karl Popper's original work, several approaches were developed to provide a sound explication of the notion of verisimilitude. With few exceptions, these contributions have assumed that the truth to be approximated is deterministic. This collection of ten papers addresses the more general problem of approaching probabilistic truths. The papers include attempts to find appropriate measures of closeness to probabilistic truth and to evaluate claims about such distances on the basis of empirical evidence; they employ multiple analytical approaches and connect the research to related issues in the philosophy of science. The idea that science, and human knowledge more generally, aims at approaching the truth about the world is quite widespread among scientists, philosophers, and laypeople. Indeed, many more-or-less realist philosophers of science hold that scientific progress consists in approaching the truth, that is, in increasing verisimilitude (or truthlikeness). A typical example of such a position is the fallibilist programme of Karl Popper (1963), who emphasized that scientific theories are always conjectural and fallible.
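One standard candidate for such a closeness measure, used here purely as an illustration (the papers in the collection develop and compare their own measures), is the Kullback-Leibler divergence between the true distribution and the distribution a theory predicts:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

true_dist = [0.5, 0.3, 0.2]     # the (unknown) probabilistic truth
theory_a = [0.45, 0.35, 0.20]   # distribution predicted by theory A
theory_b = [0.70, 0.20, 0.10]   # distribution predicted by theory B

# A smaller divergence from the truth means the theory is closer to it.
print(kl_divergence(true_dist, theory_a))  # ~ 0.006
print(kl_divergence(true_dist, theory_b))  # ~ 0.092
```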
Epistēmēs Metron Logos
In this work, we undertake the task of laying out some basic considerations toward straightening out the foundations of an abstract logical system. We venture to explain what theory is and what it is not, to discriminate between the roles of truth in theory and in reality, and to open the road toward clarifying the relationship between theory and the real world. Etymological, cultural and conceptual analyses of truth are brought forth to reveal problems in modern approaches and to set the stage for more consistent solutions. One such problem concerns negation itself: its asymmetry with respect to affirmative statements, and the ramifications of this duality for the perceptual and linguistic aspects of the words that express truth-like concepts in various languages, for the attitudes reflected and perpetuated in those words, and for their consequent use in attempted informal or formal logic and its understanding. Finally, a...
Minnesota studies in the philosophy of science, 1983
The Bayesian framework is intended, at least in part, as a formalization and systematization of the sorts of reasoning that we all carry on at an intuitive level. One of the most attractive features of the Bayesian approach is the apparent ease and elegance with which it can deal with typical strategies for the confirmation of hypotheses in science. Using the apparatus of the mathematical theory of probability, the Bayesian can show how the acquisition of evidence can result in increased confidence in hypotheses, in accord with our best intuitions. Despite the obvious attractiveness of the Bayesian account of confirmation, though, some philosophers of science have resisted its manifest charms and raised serious objections to the Bayesian framework. Most of the objections have centered on the unrealistic nature of the assumptions required to establish the appropriateness of modeling an individual's beliefs by way of a point-valued, additive function. But one recent attack is of a different sort. In a recent book on confirmation theory, Clark Glymour has presented an argument intended to show that the Bayesian account of confirmation fails at what it was thought to do best. Glymour claims that there is an important class of scientific arguments, cases in which we are dealing with the apparent confirmation of new hypotheses by old evidence, for which the Bayesian account of confirmation seems hopelessly inadequate. In this essay I shall examine this difficulty, what I call the problem of old evidence. I shall argue that the problem of old evidence is generated by the
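In schematic form (a standard reconstruction of the difficulty, not a quotation of Glymour's own argument):

```latex
% Bayes' theorem:
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)}

% If E is old evidence, it is already fully believed, so P(E) = 1
% and hence P(E \mid H) = 1 as well. Conditionalizing then gives
P(H \mid E) \;=\; \frac{1 \cdot P(H)}{1} \;=\; P(H),

% i.e. the old evidence leaves the probability of H unchanged and
% so seems unable to confirm it.
```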