1985, Israel Journal of Mathematics
Any sequence of events can be "explained" by any of an infinite number of hypotheses. Popper describes the "logic of discovery" as a process of choosing from a hierarchy of hypotheses the first hypothesis which is not at variance with the observed facts. Blum and Blum formalized these hierarchies of hypotheses as hierarchies of infinite binary sequences and imposed on them certain decidability conditions. In this paper we also consider hierarchies of infinite binary sequences but we impose only the most elementary Bayesian considerations. We use the structure of such hierarchies to define "confirmation". We then suggest a definition of probability based on the amount of confirmation a particular hypothesis (i.e. pattern) has received. We show that hypothesis confirmation alone is a sound basis for determining probabilities and in particular that Carnap's logical and empirical criteria for determining probabilities are consequences of the confirmation criterion in appropriate limiting cases.
2019
Science aims neither at a monopoly over the truth about the world nor at establishing dogmatic knowledge. Empiricists hold the natural light of experience to be the reliable source of human knowledge, and inductive logic has been a leading tool of empirical inquiry in justifying and confirming scientific theories with evidence. Science could not have reached where it has without inductive logic, which has therefore played an important role in making science what it is today: it lets science justify its theories not from the personal convictions of scientists but from factual propositions. However, inductive logic has been problematic in that its logic of justification led philosophers of science to the demarcation problem, the distinction of episteme from doxa. At present, some philosophers of science and scientists attempt to explain why science yields reliable knowledge. Some have argued for the structuralism and realism of scientific theories rather than an appeal to miracles, and others for their historicity. Both views are explanations of how science works and progresses. This essay recalls the arguments for the structures of scientific theories and for their historicity. First, the essay analyses the controversy between Rudolf Carnap and Karl Popper over how the problem of inductive logic in confirming scientific theories can be solved, drawing on empirical probabilities as well as the calculus of limits. Second, the essay merges frequentist and Bayesian approaches to determine how scientific theories are to be confirmed or refuted. Third, a new form of Bayes's Theorem is used to show how mathematical and logical structures respond to some of the important questions that arise from the historical and realistic views of scientific theories.
The essay argues for the epistemic objectivity behind inductive probability, the key issue of the controversy in question, and proves that the truth about the world is symmetric. Keywords: Science; Induction; Probability; Demarcation; Deduction; Frequentism; Bayesianism.
Philosophical Perspectives, 2005
Many philosophers think of Bayesianism as a theory of practical rationality. This is not at all surprising given that the view's most striking successes have come in decision theory. Ramsey (1931), Savage (1972), and de Finetti (1964) showed how to interpret subjective degrees of belief in terms of betting behavior, and how to derive the central probabilistic requirement of coherence from reflections on the nature of rational choice. This focus on decision-making can obscure the fact that Bayesianism is also an epistemology. Indeed, the great statistician Harold Jeffreys (1939), who did more than anyone else to further Bayesian methods, paid rather little heed to the work of Ramsey, de Finetti, and Savage. Jeffreys, and those who followed him, saw Bayesianism as a theory of inductive evidence, whose primary role was not to help people make wise choices, but to facilitate sound scientific reasoning. This paper seeks to promote a broadly Bayesian approach to epistemology by showing how certain central questions about the nature of evidence can be addressed using the apparatus of subjective probability theory. Epistemic Bayesianism, as understood here, is the view that evidential relationships are best represented probabilistically. It has three central components. Evidential Probability: at any time t, a rational believer's opinions can be faithfully modeled by a family of probability functions C_t, hereafter called her credal state, the members of which accurately reflect her total evidence at t. Learning as Bayesian Updating: learning experiences can be modeled as shifts from one credal state to another that proceed in accordance with Bayes's Rule. Confirmational Relativity: a wide range of questions about evidential relationships can be answered on the basis of information about structural features of credal states. The first of these three theses is the most fundamental.
Much of what Bayesians say about learning and confirmation only makes sense if probabilities in credal…
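The "Learning as Bayesian Updating" component described in this abstract can be sketched numerically. A minimal sketch, assuming a credal state simplified to a single probability function; the hypothesis names and the prior and likelihood values below are illustrative assumptions, not taken from the paper:

```python
# Sketch of Bayesian updating (conditionalization): learning evidence E
# shifts the credal state from the prior P(h) to the posterior P(h | E).
# Hypothesis names, priors, and likelihoods are illustrative assumptions.

def update(prior, likelihood):
    """Return posterior P(h | E) from prior P(h) and likelihood P(E | h)."""
    joint = {h: prior[h] * likelihood[h] for h in prior}
    p_e = sum(joint.values())            # total probability of the evidence
    return {h: joint[h] / p_e for h in joint}

prior = {"h1": 0.5, "h2": 0.5}           # credal state before learning
likelihood = {"h1": 0.8, "h2": 0.2}      # P(E | h) for each hypothesis
posterior = update(prior, likelihood)    # credal state after learning E
print(posterior)
```

The posterior is again a probability function, so the same rule can be applied to each new piece of evidence in turn.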
Rudolf Carnap argued for an out-of-hand acceptance of inductive probability as a means of achieving scientific certainty. Others, such as Karl Popper, challenged Carnap's view in particular, and inductive probability in general, at times raising questions about the viability of any certainty whatsoever. In this paper (written in 1999 for an undergraduate philosophy of science class), I briefly engage the major arguments of both Carnap and Popper on this topic, searching for a reason for confidence in contemporary scientific methods.
Erkenntnis, 2001
The logical interpretation of probability, or “objective Bayesianism” – the theory that (some) probabilities are strictly logical degrees of partial implication – is defended. The main argument against it is that it requires the assignment of prior probabilities, and that any attempt to determine them by symmetry via a “principle of insufficient reason” inevitably leads to paradox. Three replies are advanced: that priors are imprecise or of little weight, so that disagreement about them does not matter, within limits; that it is possible to distinguish reasonable from unreasonable priors on logical grounds; and that in real cases disagreement about priors can usually be explained by differences in the background information. It is argued also that proponents of alternative conceptions of probability, such as frequentists, Bayesians and Popperians, are unable to avoid committing themselves to the basic principles of logical probability.
Information and Computation, 1996
The Bayesian program in statistics starts from the assumption that an individual can always ascribe a definite probability to any event. It will be demonstrated that this assumption is incompatible with the natural requirement that the individual's subjective probability distribution should be computable. We construct a probabilistic algorithm that, with probability extremely close to 1, produces an infinite binary sequence which is not random with respect to any computable probability distribution (we use Dawid's notion of randomness, computable calibration, but the results hold for other widely known notions of randomness as well). Since the Bayesian knows the algorithm, he must believe that this sequence will be noncalibrable. On the other hand, it seems that the Bayesian must believe that the sequence is random with respect to his own probability distribution. We hope that the discussion of this apparent paradox will clarify the foundations of Bayesian statistics. We also analyse the computation time and the point at which randomness is "lost." We show that only polynomial time and space are needed to demonstrate non-calibration effects on finite sequences.
2008
Whether scientists test their hypotheses as they ought to has interested both cognitive psychologists and philosophers of science. Classic analyses of hypothesis testing assume that people should pick the test with the largest probability of falsifying their current hypothesis, while experiments have shown that people tend to select tests consistent with that hypothesis. Using two different normative standards, we prove that seeking evidence predicted by your current hypothesis is optimal when the hypotheses in question are deterministic and other reasonable assumptions hold. We test this account with two experiments using a sequential prediction task, in which people guess the next number in a sequence. Experiment 1 shows that people's predictions can be captured by a simple Bayesian model. Experiment 2 manipulates people's beliefs about the probabilities of different hypotheses, and shows that they confirm whichever hypothesis they are led to believe is most likely.
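The kind of simple Bayesian model invoked for the sequential prediction task can be sketched as a posterior over deterministic sequence hypotheses: a deterministic rule assigns likelihood 1 to data it predicts and 0 otherwise, so the posterior is just the prior renormalized over the surviving hypotheses. The particular hypothesis set and the uniform prior below are illustrative assumptions, not the experiment's materials:

```python
# Sketch: Bayesian inference over deterministic number-sequence hypotheses.
# Hypothesis set and uniform prior are illustrative assumptions.

hypotheses = {
    "evens":         lambda i: 2 * (i + 1),   # 2, 4, 6, 8, ...
    "powers_of_two": lambda i: 2 ** (i + 1),  # 2, 4, 8, 16, ...
    "plus_three":    lambda i: 2 + 3 * i,     # 2, 5, 8, 11, ...
}

def posterior(observed):
    prior = 1.0 / len(hypotheses)             # uniform prior over rules
    weights = {
        name: prior if all(rule(i) == x for i, x in enumerate(observed)) else 0.0
        for name, rule in hypotheses.items()
    }
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

print(posterior([2, 4]))     # evens and powers_of_two each get 0.5
print(posterior([2, 4, 6]))  # only evens survives
```

This also illustrates why testing a predicted value can be optimal under determinism: any observation inconsistent with a rule eliminates that rule outright.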
Andrés Rivadulla: Éxito, razón y cambio en física, Madrid: Ed. Trotta, 2004
In these pages I offer my solution to the problem of the inductive probability of theories. Against prevailing expectations in certain areas of current philosophy of science, I argue that Bayes's Theorem does not constitute an appropriate tool for assessing the probability of theories, and that we would do well to banish the question of how likely a certain scientific theory is to be true, or to what extent one theory is more likely true than another. Although I agree with Popper that inductive probability is impossible, I disagree with the way Sir Karl presents his argument, as I have shown elsewhere, so my proof is completely different. The argument I present in this paper rests on applying Bayes's Theorem to specific situations that show its inefficiency, both on the question of whether a hypothesis becomes all the more likely true the greater the empirical evidence that supports it, and on whether the probability calculus allows one to identify, from a set of mutually incompatible hypotheses, the one most likely to be true.
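The first question at issue, whether accumulating confirming evidence drives a hypothesis's probability toward 1, can be sketched with iterated conditionalization. A minimal sketch with two rival hypotheses and fixed likelihoods; all numbers are illustrative assumptions, not the paper's cases:

```python
# Does P(h | e_1..e_n) approach 1 as confirming evidence accumulates?
# Two rival hypotheses with fixed likelihoods (illustrative assumptions);
# each confirming observation multiplies the odds for h by lik_h / lik_alt.

p_h = 0.5                    # prior probability of hypothesis h
lik_h, lik_alt = 0.9, 0.6    # P(e_i | h) and P(e_i | not-h), assumed fixed

for n in range(10):          # ten confirming observations in a row
    p_h = p_h * lik_h / (p_h * lik_h + (1 - p_h) * lik_alt)

print(round(p_h, 4))         # posterior after ten confirmations
```

In this toy setting the posterior does climb toward 1; the abstract's claim is that in the situations it constructs, this apparatus fails to deliver what confirmation theory needs.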
Psychological Bulletin, 2009
The authors review research on judgments of random and nonrandom sequences involving binary events with a focus on studies documenting gambler's fallacy and hot hand beliefs. The domains of judgment include random devices, births, lotteries, sports performances, stock prices, and others. After discussing existing theories of sequence judgments, the authors conclude that in many everyday settings people have naive complex models of the mechanisms they believe generate observed events, and they rely on these models for explanations, predictions, and other inferences about event sequences. The authors next introduce an explanation-based, mental models framework for describing people's beliefs about binary sequences, based on 4 perceived characteristics of the sequence generator: randomness, intentionality, control, and goal complexity. Furthermore, they propose a Markov process framework as a useful theoretical notation for the description of mental models and for the analysis of actual event sequences.
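The Markov-process notation proposed above can be sketched with a two-state chain in which a single parameter gives the probability that the next outcome repeats the last one. A minimal sketch, with parameter values chosen for illustration rather than taken from the article:

```python
# Two-state Markov model of binary sequences: p_repeat = 0.5 gives a
# genuinely random (Bernoulli) generator, p_repeat > 0.5 gives streaky
# ("hot hand") sequences, p_repeat < 0.5 alternating ones.
# Parameter values are illustrative assumptions.

import random

def generate(p_repeat, length, seed=0):
    rng = random.Random(seed)
    seq = [rng.choice([0, 1])]
    for _ in range(length - 1):
        last = seq[-1]
        seq.append(last if rng.random() < p_repeat else 1 - last)
    return seq

def repeat_rate(seq):
    """Observed fraction of transitions that repeat the previous outcome."""
    repeats = sum(a == b for a, b in zip(seq, seq[1:]))
    return repeats / (len(seq) - 1)

streaky = generate(p_repeat=0.8, length=10_000)
print(repeat_rate(streaky))   # close to 0.8
```

Estimating `p_repeat` from an observed sequence gives a simple test of whether a generator is streaky, alternating, or independent, which is how such a notation can be applied to actual event sequences.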
2000
We suggest an axiomatic approach to the way in which past cases, or observations, are or should be used for making predictions and for learning. In our model, a predictor is asked to rank eventualities based on possible memories. A "memory" consists of repetitions of past cases, and can be identified with a vector attaching a nonnegative integer (number of occurrences) to each case. Mild consistency requirements on these rankings imply that they have a numerical representation that is linear in the number of case repetitions. That is, there exists a matrix assigning numbers to eventuality-case pairs, such that, for every memory vector, multiplication of the matrix by the vector yields a numerical representation of the ordinal plausibility ranking given that memory. Interpreting this result for the ranking of theories or hypotheses, rather than of specific eventualities, it is shown that one may ascribe to the predictor subjective conditional probabilities of cases given theories…
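The matrix representation described above can be sketched directly: multiply a matrix of eventuality-case weights by a memory vector of case counts, and read the ranking off the resulting scores. The cases, eventualities, and weight values below are illustrative assumptions:

```python
# Sketch of the representation result: plausibility scores are linear in
# the case counts, so a matrix-times-memory-vector product yields numbers
# whose order gives the ranking.  All names and weights are illustrative.

cases = ["c1", "c2", "c3"]
eventualities = ["e1", "e2"]

# weights[e][c]: support that one occurrence of case c lends eventuality e
weights = {
    "e1": {"c1": 2.0, "c2": 0.0, "c3": 1.0},
    "e2": {"c1": 0.0, "c2": 3.0, "c3": 1.0},
}

def plausibility(memory):
    """memory: case -> occurrence count.  Scores are linear in the counts."""
    return {e: sum(weights[e][c] * memory[c] for c in cases)
            for e in eventualities}

memory = {"c1": 3, "c2": 1, "c3": 2}   # a memory vector of case counts
scores = plausibility(memory)
ranking = sorted(eventualities, key=scores.get, reverse=True)
print(scores, ranking)
```

Only the order of the scores matters for the result: the representation is of the ordinal plausibility ranking, not of the magnitudes themselves.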
Journal of Applied Logic, 2013
Philosophy and Phenomenological Research, 1962
Journal of Statistical Physics, 1972
Machine Intelligence and Pattern Recognition, 1990
Memory & Cognition, 2007
Philosophy Compass, 2011
Stanford Dissertation, 2011
Journal of the American Statistical Association, 1994
Minds and Machines, 2004
Journal for General Philosophy of Science