The aim of this paper is to apply the accuracy-based approach to epistemology to the case of higher order evidence: evidence that bears on the rationality of one’s beliefs. I proceed in two stages. First, I show that the accuracy-based framework that is standardly used to motivate rational requirements supports steadfastness – a position according to which higher order evidence should have no impact on one’s doxastic attitudes towards first order propositions. The argument for this will require a generalization of an important result by Greaves and Wallace for the claim that conditionalization maximizes expected accuracy. The generalization I provide will, among other things, allow us to apply the result to cases of self-locating evidence. In the second stage I develop an alternative framework. Very roughly, what distinguishes the traditional approach from the alternative one is that, on the traditional picture, we’re interested in evaluating the expected accuracy of conforming to an update-procedure. On the alternative picture that I develop, instead of considering how good an update procedure is as a plan to conform to, we consider how good it is as a plan to make. I show how, given the use of strictly proper scoring rules, the alternative picture vindicates calibrationism: a view according to which higher order evidence should have a significant impact on our beliefs. I conclude with some thoughts about why higher order evidence poses a serious challenge for standard ways of thinking about rationality.
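For reference, the Greaves and Wallace result invoked above can be stated schematically as follows (the notation here is illustrative rather than the paper's own): given a probabilistic prior p, a partition E of possible evidence propositions, and a strictly proper measure of accuracy U, an update plan R assigns a posterior R_E to each cell E of the partition, and the plan that maximizes prior expected accuracy,

\[ \mathbb{E}_p\big[U(R)\big] \;=\; \sum_{E \in \mathcal{E}} \; \sum_{w \in E} p(w)\, U(R_E, w), \]

is the conditionalization plan, i.e. R_E(\cdot) = p(\cdot \mid E) for every E with p(E) > 0.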
In this paper, I explore how we can bridge rationality and accuracy. In part I, I describe a prima facie plausible answer to the “bridging question” and show that it can do important explanatory work. In part II, I argue that higher order evidence considerations pose a serious problem for the claims and arguments I developed in part I. I then describe an alternative answer to the bridging question that accommodates and explains conciliatory views about higher order evidence. The alternative answer appeals to the idea that the principles of rationality aren’t the ones that are the best to follow – they are the ones that are the best to try to follow. I conclude by showing that many of the difficulties that arise in trying to explain how rationality and accuracy are connected can be resolved by distinguishing two notions of epistemic rationality, each corresponding to a different way in which we care about accuracy.
Cambridge University Press, 2023
The higher-order evidence debate concerns how higher-order evidence affects the rationality of our first-order beliefs. This Element has two parts. The first part (Sections 1 and 2) provides a critical overview of the literature, aiming to explain why the higher-order evidence debate is interesting and important. The second part (Sections 3 to 6) defends calibrationism, the view that we should respond to higher-order evidence by aligning our credences with our degree of reliability. The author first discusses the traditional version of calibrationism and explains its main difficulties, before proposing a new version of calibrationism called ‘Evidence-Discounting Calibrationism.’ The Element argues that this new version is independently plausible and that it can avoid the difficulties faced by the traditional version.
Mind
According to accuracy-first epistemology, accuracy is the fundamental epistemic good. Epistemic norms (Probabilism, Conditionalization, the Principal Principle, and so on) have their binding force in virtue of helping to secure this good. To make this idea precise, accuracy-firsters invoke Epistemic Decision Theory (EPDT) to determine which epistemic policies are the best means toward the end of accuracy. Hilary Greaves and others have recently challenged the tenability of this programme. Their arguments purport to show that EPDT encourages obviously epistemically irrational behaviour. We develop firmer conceptual foundations for EPDT. First, we detail a theory of praxic and epistemic good. Then we show that, in light of their very different good-making features, EPDT will evaluate epistemic states and epistemic acts according to different criteria. So, in general, rational preference over states and acts won't agree. Finally, we argue that, based on direction-of-fit considerations, it is preferences over the former that matter for normative epistemology, and that EPDT, properly spelt out, arrives at the correct verdicts in a range of putative problem cases.
Philosophy Compass, 2013
Beliefs come in different strengths. An agent's credence in a proposition is a measure of the strength of her belief in that proposition. Various norms for credences have been proposed. Traditionally, philosophers have tried to argue for these norms by showing that any agent who violates them will be led by her credences to make bad decisions. In this article, we survey a new strategy for justifying these norms. The strategy begins by identifying an epistemic utility function and a decision-theoretic norm; we then show that the decision-theoretic norm applied to the epistemic utility function yields the norm for credences that we wish to justify. We survey results already obtained using this strategy, and we suggest directions for future research. Like the rest of us, Paul's beliefs come in degrees. Some are stronger than others. In particular, Paul believes that Linda is a bank teller and a political activist more strongly than he believes that she is a bank teller. That is, his credence in the former proposition is greater than his credence in the latter. Surely, Paul is irrational. But why? In this survey, I describe a new strategy for answering such questions. It is a strategy that was first introduced by Jim Joyce (1998). The traditional strategy (the strategy that Joyce sought to replace or, at least, supplement) is to show that such credences will lead the agent who has them to make decisions that are guaranteed to have a bad outcome. These are the well-known Dutch Book arguments. For instance, Paul's credences will lead him to buy a book of bets on the two propositions concerning Linda that is guaranteed to lose him money. This, it is claimed, makes him irrational. Now, the validity of this argument has been the subject of much debate. However, even if it works, it only identifies one way in which Paul's credences are irrational: they are poor guides to action; from a pragmatic point of view, they are irrational. But, intuitively, there is something irrational about these credences from a purely epistemic point of view; they seem to exhibit a purely epistemic flaw. Even for an agent incapable of acting on her credences, and therefore incapable of making the bets that lead to the guaranteed loss, Paul's credences would be irrational. We will be concerned with identifying why that is so. That is, Joyce's strategy, which we describe here, provides a purely epistemic route to the norms that govern credences; this route does not rely on any connection between credence and action. We will begin by showing how the strategy works in the case of Paul. Then, we will show how to extend it to establish probabilism, which is the norm that Joyce considers in his original paper. Probabilism is one of the core tenets of so-called Bayesian epistemology. Our next target is the other core tenet of that view, namely, conditionalization. We will give an argument for that norm that uses Joyce's strategy as well: it is due to Hilary Greaves and David Wallace (2006). After considering how we might strengthen these arguments by weakening the assumptions they make, we conclude by describing possible avenues for future research.
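To make the accuracy-based diagnosis of Paul concrete, here is a worked illustration; the particular numbers are chosen only for the example and do not come from the survey. Suppose Paul's credence in 'Linda is a bank teller and a political activist' (B ∧ A) is 0.8 and his credence in 'Linda is a bank teller' (B) is 0.6, and measure inaccuracy at a world by the Brier score: the sum of the squared differences between each credence and the proposition's truth value (1 or 0) at that world. There are three logically possible assignments of truth values to the pair, and the coherent alternative credences (0.7, 0.7) are strictly less inaccurate than Paul's at every one of them:

\[
\begin{array}{lccc}
 & \text{neither true} & B \text{ only} & \text{both true} \\
\text{Paul's } (0.8,\ 0.6): & 1.00 & 0.80 & 0.20 \\
\text{Alternative } (0.7,\ 0.7): & 0.98 & 0.58 & 0.18
\end{array}
\]

Whatever the truth about Linda turns out to be, the alternative credences are guaranteed to be more accurate; this is the sense in which Paul's credences are accuracy-dominated.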
Dialectica, 2014
In ‘A Nonpragmatic Vindication of Probabilism’, Jim Joyce argues that our credences should obey the axioms of the probability calculus by showing that, if they don’t, there will be alternative credences that are guaranteed to be more accurate than ours. But it seems that accuracy is not the only goal of credences: there is also the goal of matching one’s credences to one’s evidence. I will consider four ways in which we might make this latter goal precise: on the first, the norms to which this goal gives rise act as ‘side constraints’ on our choice of credences; on the second, matching credences to evidence is a goal that is weighed against accuracy to give the overall cognitive value of credences; on the third, as on the second, proximity to the evidential goal and proximity to the goal of accuracy are both sources of value, but this time they are incomparable; on the fourth, the evidential goal is not an independent goal at all, but rather a byproduct of the goal of accuracy. All but the fourth way of making the evidential goal precise are pluralist about credal virtue: there is the virtue of being accurate and there is the virtue of matching the evidence and neither reduces to the other. The fourth way is monist about credal virtue: there is just the virtue of being accurate. The pluralist positions lead to problems for Joyce’s argument; the monist position avoids them. I endorse the latter.
Philosophy and Phenomenological Research, 2018
For many epistemologists, and for many philosophers more broadly, it is axiomatic that rationality requires you to take the doxastic attitudes that your evidence supports. Yet there is also another current in our talk about rationality. On this usage, rationality is a matter of the right kind of coherence between one’s mental attitudes. Surprisingly little work in epistemology is explicitly devoted to answering the question of how these two currents of talk are related. But many implicitly assume that evidence-responsiveness guarantees coherence, so that the rational impermissibility of incoherence will just fall out of the putative requirement to take the attitudes that one’s evidence supports, and so that coherence requirements do not need to be theorized in their own right, apart from evidential reasons. In this paper, I argue that this is a mistake, since coherence and evidence-responsiveness can in fact come into conflict. More specifically, I argue that in cases of misleading higher-order evidence, there can be a conflict between believing what one’s evidence supports and satisfying a requirement that I call “inter-level coherence”. This illustrates why coherence requirements and evidential reasons must be separated and theorized separately.
It has been claimed that, in response to certain kinds of evidence (“incomplete” or “non-specific” evidence), agents ought to adopt imprecise credences: doxastic states that are represented by sets of credence functions rather than single ones. In this paper I argue that, given some plausible constraints on accuracy measures, accuracy-centered epistemologists must reject the requirement to adopt imprecise credences. I then show that even the claim that imprecise credences are permitted is problematic for accuracy-centered epistemology. It follows that if imprecise credal states are permitted or required in the cases that their defenders appeal to, then the requirements of rationality can outstrip what would be warranted by an interest in accuracy.
2005
Hamminga exploits the structuralist terminology adopted in ICR in defining the relations between confirmation, empirical progress and truth approximation. In his paper, the fundamental problem of Lakatos' classical concept of scientific progress is clarified, and its way of evaluating theories is compared to the real problems of scientists who face the far from perfect theories they wish to improve and defend against competitors. Among other things, Hamminga presents a provocative diagnosis of Lakatos' notion of "novel facts", by arguing that it is not so much related to Popper's notion of "empirical content" of a theory, but rather to its allowed possibilities. Miller examines the view, advanced by McAllister (1996) and endorsed, with new arguments, by Kuipers (2002), that aesthetic criteria may reasonably play a role in the selection of scientific theories. After evaluating the adequacy of Kuipers' approach to truth approximation, Miller discusses Kuipers' account of the nature and role of empirical and aesthetic criteria in the evaluation of scientific theories and, in particular, the thesis that "beauty can be a road to truth". Finally, he examines McAllister's doctrine that scientific revolutions are characterized above all by novelty of aesthetic judgments.
In this paper, I am interested in knowing how evidence one should have had (on the one hand) and one’s higher-order evidence (on the other) interact in determinations of the justification of belief. In doing so I aim to address two types of scenario that previous discussions have left open. In one type of scenario, there is a clash between a subject’s higher-order evidence and the evidence she should have had: S’s higher-order evidence is misleading as to the existence or likely epistemic bearing of further evidence she should have. In the other, while there is further evidence S should have had, this evidence would only have offered additional support for S’s belief that p. The picture I offer derives from two “epistemic ceiling” principles linking evidence to justification: one’s justification for the belief that p can be no higher than it is on one’s total evidence, nor can it be higher than what it would have been had one had all of the evidence one should have had. Together, these two principles entail what I call the doctrine of Epistemic Strict Liability: insofar as one fails to have evidence one should have had, one is epistemically answerable to that evidence whatever reasons one happened to have regarding the likely epistemic bearing of that evidence. I suggest that such a position can account for the battery of intuitions elicited in the full range of cases I will be considering.
Philosophy of Science, 2010
In this article and its sequel, we derive Bayesianism from the following norm: Accuracy—an agent ought to minimize the inaccuracy of her partial beliefs. In this article, we make this norm mathematically precise. We describe epistemic dilemmas an agent might face if she attempts to follow Accuracy and show that the only measures of inaccuracy that do not create these dilemmas are the quadratic inaccuracy measures. In the sequel, we derive Bayesianism from Accuracy and show that Jeffrey Conditionalization violates Accuracy unless Rigidity is assumed. We describe the alternative updating rule that Accuracy mandates in the absence of Rigidity.
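For concreteness, the quadratic inaccuracy measures singled out here have, schematically, the following form (this is a generic statement rather than the authors' exact definition):

\[ I(b, w) \;=\; \sum_{X} \lambda_X \big(v_w(X) - b(X)\big)^2, \qquad \lambda_X > 0, \]

where v_w(X) is 1 if X is true at world w and 0 otherwise; taking every weight \lambda_X = 1 gives the familiar Brier score.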
Higher-order evidence is, roughly, evidence of evidence. The idea is that evidence comes in levels. At the first, or lowest, evidential level is evidence of the familiar type—evidence concerning some proposition that is not itself about evidence. At a higher evidential level the evidence concerns some proposition about the evidence at a lower level. Only in relatively recent years has this less familiar type of evidence been explicitly identified as a subject of epistemological focus, and the work on it remains relegated to a small circle of authors and a short stack of published articles—far disproportionate to the attention it deserves. It deserves to occupy center stage for several reasons. First, higher-order evidence frequently arises in a strikingly diverse range of epistemic contexts, including testimony, disagreement, empirical observation, introspection, and memory, among others. Second, in many of the contexts in which it arises, such evidence plays a crucial epistemic role. Third, the precise role it plays is complex, gives rise to a number of interesting epistemological puzzles, and for these reasons remains controversial and is not yet fully understood. As such, higher-order evidence merits systematic investigation. This thesis undertakes such an investigation. It aims to produce a thorough account of higher-order evidence—what it is, how it works, and its epistemic consequences. Chapter 1 serves as a general introduction to the topic and an overview of the existing literature, but primarily aims to further elucidate the concept of higher-order evidence and build a theoretical framework for later chapters. Chapter 2 develops an account of what I call “higher-order support”: the bearing higher-order evidence has, not on corresponding “lower-order evidence” (roughly, the evidence the higher-order evidence is about), but on corresponding “object-level propositions” (roughly, the propositions the higher-order evidence alleges the lower-order evidence to be about). Chapter 3 develops an account of “levels interaction”: the effect on overall support when the different evidential levels combine. Chapter 4 identifies important consequences of the theoretical results of the previous two chapters and applies the theory to four select cases of current epistemological controversy—testimony, memory, the closure of inquiry, and disagreement.
We will not argue for the datum here. We think Foley, Christensen [9], Kolodny [25], and others have made a compelling case for it. Today, it is our point of departure. [But, we do have a new, especially pernicious, Preface case (22).] Some philosophers construe the datum as reason to believe that ( ) there are no coherence requirements for full belief. Christensen [9] thinks (a) credences do have coherence requirements (probabilism); ( ) full beliefs do not; (b) what seem to be CRs for full belief can be explained via (a). Kolodny [25] agrees with ( ), but he disagrees with (a) and (b). He thinks (c) full belief is explanatorily indispensable; (d) there are no coherence requirements for any judgments; (e) what seem to be CRs for full belief can be explained via (EB).
Recent authors have drawn attention to a new kind of defeating evidence commonly referred to as higher-order evidence. Such evidence works by inducing doubts that one’s doxastic state is the result of a flawed process – for instance, a process brought about by a reason-distorting drug. I argue that accommodating defeat by higher-order evidence requires a two-tiered theory of justification, and that the phenomenon gives rise to a puzzle. The puzzle is that at least in some situations involving higher-order defeaters the correct epistemic rules issue conflicting recommendations. For instance, a subject ought to believe p, but she ought also to suspend judgment in p. I discuss three responses. The first resists the puzzle by arguing that there is only one correct epistemic rule, an Über-rule. The second accepts that there are genuine epistemic dilemmas. The third appeals to a hierarchy or ordering of correct epistemic rules. I spell out problems for all of these responses. I conclude that the right lesson to draw from the puzzle is that a state can be epistemically rational or justified even if one has what looks to be strong evidence to think that it is not. As such, the considerations put forth constitute a non question-begging argument for a kind of externalism.
Analysis - Forthcoming
Epistemic utility theory (EUT) is generally coupled with veritism. Veritism is the view that truth is the sole fundamental epistemic value. Veritism, when paired with EUT, entails a methodological commitment: norms of epistemic rationality are justified only if they can be derived from considerations of accuracy alone. According to EUT, then, believing truly has epistemic value, while believing falsely has epistemic disvalue. This raises the question as to how the rational believer should balance the prospect of true belief against the risk of error. A strong intuitive case can be made for a kind of epistemic conservatism, namely that we should disvalue error more than we value true belief. I argue that none of the ways in which advocates of veritist EUT have sought to motivate conservatism can be squared with their methodological commitments. Short of any such justification, they must therefore either abandon their most central methodological principle or else adopt a permissive line with respect to epistemic risk.
Electronic Proceedings in Theoretical Computer Science
In an earlier paper [Rational choice and AGM belief revision, Artificial Intelligence, 2009] a correspondence was established between the choice structures of revealed-preference theory (developed in economics) and the syntactic belief revision functions of the AGM theory (developed in philosophy and computer science). In this paper we extend the re-interpretation of (a generalized notion of) choice structure in terms of belief revision by adding: (1) the possibility that an item of "information" might be discarded as not credible (thus dropping the AGM success axiom) and (2) the possibility that an item of information, while not accepted as fully credible, may still be "taken seriously" (we call such items of information "allowable"). We establish a correspondence between generalized choice structures (GCS) and AGM belief revision; furthermore, we provide a syntactic analysis of the proposed notion of belief revision, which we call filtered belief revision.
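For readers unfamiliar with the AGM framework, the success axiom mentioned in (1) is standardly written as

\[ \phi \in K * \phi, \]

where K * \phi is the belief set that results from revising K by the input \phi; filtered belief revision, as described above, relaxes this requirement, so that an input judged not credible need not end up among the agent's beliefs.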
Philosophical Perspectives, 2010
Philosophers' Imprint, 2015
Probabilism is the thesis that an agent is rational only if her credences are probabilistic. This paper will be concerned with what we might call the Accuracy Dominance Argument for Probabilism (Rosenkrantz, 1981; Joyce, 1998, 2009). In this paper, I wish to identify and explore a lacuna in this argument that arises for those who take there to be (at least) two sorts of doxastic states: beliefs and credences.
Reliabilists hold that a belief is doxastically justified if and only if it is caused by a reliable process. But since such a process is one that tends to produce a high ratio of true to false beliefs, reliabilism is on the face of it applicable to binary beliefs, but not to degrees of confidence or credences. For while (binary) beliefs admit of truth or falsity, the same cannot be said of credences in general. A natural question now arises: can reliability theories of justified belief be extended or modified to account for justified credence? In this paper, I address the preceding question. I begin by showing that, as it stands, reliabilism cannot account for justified credence. I then consider three ways in which the reliabilist may try to do so by extending or modifying her theory, but I argue that such attempts face certain problems. After that, I turn to a version of reliabilism that incorporates evidentialist elements and argue that it allows us to avoid the problems that the other theories face. If I am right, this gives reliabilists a reason, aside from those given recently by Comesaña and Goldman, to move towards such a kind of hybrid theory.
Degrees of Belief, 2009
Traditional epistemology is both dogmatic and alethic. It is dogmatic in the sense that it takes the fundamental doxastic attitude to be full belief, the state in which a person categorically accepts some proposition as true. It is alethic in the sense that it evaluates such categorical beliefs on the basis of what William James calls the 'two great commandments' of epistemology: Believe the truth! Avoid error! Other central concepts of dogmatic epistemology (knowledge, justification, reliability, sensitivity, and so on) are understood in terms of their relationships to this ultimate standard of truth or accuracy. Some epistemologists, inspired by Bayesian approaches in decision theory and statistics, have sought to replace the dogmatic model with a probabilistic one in which partial beliefs, or credences, play the leading role. A person's credence in a proposition X is her level of confidence in its truth. This corresponds, roughly, to the degree to which she is disposed to presuppose X in her theoretical and practical reasoning. Credences are inherently gradational: the strength of a partial belief in X can range from certainty of truth, through maximal uncertainty (in which X and its negation ∼X are believed equally strongly), to complete certainty of falsehood. These variations in confidence are warranted by differing states of evidence, and they rationalize different choices among options whose outcomes depend on X. It is a central normative doctrine of probabilistic epistemology that rational credences should obey the laws of probability. In the idealized case where a believer has a numerically precise credence b(X) for every proposition X in some Boolean algebra of propositions, these laws are as follows:
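The excerpt breaks off at this point; the laws it refers to are the standard finitely additive probability axioms, which in one common formulation read:

\[
\begin{aligned}
&b(X) \geq 0 \quad \text{for every proposition } X \text{ in the algebra (Non-negativity);}\\
&b(\top) = 1 \quad \text{for any tautology } \top \text{ (Normalization);}\\
&b(X \vee Y) = b(X) + b(Y) \quad \text{whenever } X \text{ and } Y \text{ are mutually exclusive (Additivity).}
\end{aligned}
\]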
When a first order belief accurately reflects the evidence, how should this affect the epistemic justification of a higher order belief that this is the case? In an influential paper, Kelly argues that first order evidential accuracy tends to generate more justified higher order beliefs (Kelly 2010). Call this Bottom Up. I argue that neither general views about what justifies our higher order beliefs nor the specific arguments that Kelly offers support Bottom Up. Second, I suggest that while we can reject Bottom Up, we can still accept that justified higher order beliefs significantly affect the justification of first order beliefs. Third, I argue that the epistemic justification of higher order belief is fragile in the sense that it tends to dissipate when a subject is confronted with certain defeaters, including notably the sort of defeaters arising from disagreement, precisely when higher order justification depends on first order success in the ways that one may think support Bottom Up.