2011
I discuss Prawitz's claim that a non-reliabilist answer to the question "What is a proof?" compels us to reject the standard Bolzano-Tarski account of validity, and to account for the meaning of a sentence in broadly verificationist terms. I sketch what I take to be a possible way of resisting Prawitz's claim, one that concedes the anti-reliabilist assumption from which Prawitz's argument proceeds. Keywords: Knowledge • validity • proofs • inferentialism • harmony
What is a proof? Professor Prawitz's paper has two aims: to show that this is a hard and important question, and to suggest a possible answer, one that, he argues, requires us to reject the standard Bolzano-Tarski account of validity and to account for the meaning of a sentence in terms of the grounds for asserting it. Sections 1-3 attempt to resist Prawitz's attack on the standard conceptions of meaning and validity. Section 4 briefly raises three potential worries about Prawitz's preferred answer to our initial question. Section 5 offers some concluding remarks. * Many thanks to Dag Prawitz and Tim Williamson for helpful conversations on some of the topics discussed herein.
Topoi-an International Review of Philosophy, 2017
Abstract: We may try to explain proofs as chains of valid inference, but the concept of validity needed in such an explanation cannot be the traditional one. For an inference to be legitimate in a proof it must have sufficient epistemic power, so that the proof really justifies its final conclusion. However, the epistemic concepts used to account for this power are in their turn usually explained in terms of the concept of proof. To get out of this circle we may consider an idea within intuitionism about what it is to justify the assertion of a proposition. It depends on Heyting's view of the meaning of a proposition, but does not presuppose the concept of inference or of proof as chains of inferences. I discuss this idea and what is required in order to use it for an adequate notion of valid inference.
From the text: … A that occurs in the final conclusion of a proof is true, the truth of A is guaranteed by the proof; that is to say, the proof supplies a ground for asserting A and thereby justifies its final conclusion, the assertion of A. It is quite different with the concept of inference. There are valid and invalid inferences. An attempted proof that contains an invalid inference fails to justify its final conclusion and is then not a proof. Consequently, the concept of proof cannot be explained by saying that a proof is a chain of inferences; we have to say that it is a chain of valid inferences. We could of course change usage, which is not very fixed anyway, and take validity to be included among the properties defining the concept of inference. But that would not change anything essential: questions about the validity of inferences would only have to be considered at an earlier stage, when explaining what it is to be an inference. I find it convenient to stick to common usage and not count validity among the defining properties of the concept of inference; it allows us, among other things, to speak of the inferences occurring in a line of thought without committing ourselves to their validity. But what is a valid inference? The traditional definition of valid inference in terms of necessary preservation of truth from premisses to conclusion is obviously inadequate here, and so is the common definition of validity within model theory, first proposed by Bolzano and then by Tarski, in which the necessity is replaced by the requirement that truth be preserved under all interpretations of the non-logical terms occurring in the premisses and the conclusion. If that were all that was required of the inferences of a proof, every logical consequence could be established by a proof with just one inference step, regardless of how difficult it was to see that the logical consequence actually obtains; as we all know, …
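For reference, the Bolzano-Tarski definition of validity discussed above can be stated as follows; the notation is mine, not Prawitz's, and is the standard model-theoretic formulation:

```latex
% Bolzano-Tarski validity (notation mine, not Prawitz's):
% an inference from premisses \Gamma to conclusion A is valid iff
% truth is preserved under every interpretation of the non-logical terms.
\[
\Gamma \models A
\quad\Longleftrightarrow\quad
\text{for every interpretation } \mathcal{I}:\;
\bigl(\forall \gamma \in \Gamma,\ \mathcal{I} \models \gamma\bigr)
\;\Rightarrow\; \mathcal{I} \models A .
\]
```

On this definition validity is a purely semantic relation between premisses and conclusion; nothing in it requires that the step be recognizable as truth-preserving, which is exactly the epistemic shortfall Prawitz presses.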
Dag Prawitz's theory of grounds proposes a fresh approach to valid inferences. Its main aim is to clarify the nature and sources of their epistemic power. The notion of ground is taken to denote what one is in possession of when in a state of evidence, and valid inferences are described in terms of operations that make us pass from grounds we already have to new grounds. Thanks to a rigorously developed proofs-as-chains conception, the ground-theoretic framework permits Prawitz to overcome some conceptual difficulties of his earlier proof-theoretic explanation. From different points of view, however, the two accounts share an issue concerning the recognizability of the relevant operational properties.
In a paper from 1998, Göran Sundholm tried to convince Dag Prawitz that a semantic theory of deduction had better employ three notions of proof: proof-object, proof-act and proof-trace. In Prawitz's semantics of valid arguments, however, the three notions can be said to collapse into each other. In this paper I first argue that this collapse results in a number of circularity and decidability problems. I also argue that it may be in order to get rid of these problems that Prawitz's later theory of grounds seems to allow for an objects-acts-traces distinction partly reminiscent of Sundholm's. Prawitz's ground-theoretic picture nonetheless retains many significant peculiarities, mainly concerning the way objects, acts and traces relate to each other, and the epistemic status assigned to proof-objects. The main aim of this paper is thus to provide an overview and comparison of Prawitz's and Sundholm's semantics, and to argue that the divergences between the two stem from a difference in how Prawitz and Sundholm respectively conceive of the notion of assertion. To conclude, I discuss a problem of vacuous validity recently raised by Prawitz, and investigate to some extent the possibility of reading it via Sundholm's (and Martin-Löf's) approach(es).
1971. Discourse Grammars and the Structure of Mathematical Reasoning III: Two Theories of Proof, Journal of Structural Learning 3, #3, 1–24. 1976. Reprint of revised version in Structural Learning II Issues and Approaches, ed. J. Scandura, Gordon & Breach Science Publishers, New York, MR56#15265. PDF
EPIGRAPH
There was, until very lately, a special difficulty in the principles of mathematics. It seemed plain that mathematics consists of deductions, and yet the orthodox accounts of deduction were largely or wholly inapplicable to existing mathematics. Not only the Aristotelian syllogistic theory, but also the modern doctrines of Symbolic Logic, were either theoretically inadequate to mathematical reasoning, or at any rate required such artificial forms of statement that they could not be practically applied. --Russell
ABSTRACT
This part of the series has a dual purpose. In the first place we will discuss two kinds of theories of proof. The first kind will be called a theory of linear proof. The second has been called a theory of suppositional proof. The term "natural deduction" has often and correctly been used to refer to the second kind of theory, but I shall not do so here because many of the theories so called are not of the second kind; they must be thought of either as disguised linear theories or as theories of a third kind (see postscript below). The second purpose of this part is to develop some of the main ideas needed in constructing a comprehensive theory of proof. The reason for choosing the linear and suppositional theories for this purpose is that the linear theory includes only rules of a very simple nature, while the suppositional theory can be seen as the result of making the linear theory more comprehensive.
CORRECTION: At the time these articles were written the word 'proof', especially in the phrase 'proof from hypotheses', was widely used to refer to what were earlier and are now called deductions. I ask your forgiveness.
I have forgiven Church and Henkin who misled me.
Like every particular -ism that has become sufficiently established, Brandom's inferentialism is difficult to circumscribe by mere verbal definition. This is because, besides the list of its essential features, it depends on a rather contingent and often heterogeneous list of particular philosophical problems, the philosophers who addressed them, and the types of solutions they offered. As for such a delimitation, Peregrin's book Inferentialism, in its first part, combines both these approaches – i.e. an intensional and an extensional one – and adjusts them, in the second part, by some technical results concerning the inferential foundations of logic. Throughout the book, many historical references are made tracing the origins of the inferentialist doctrine to doctrines of the (relatively recent) past, such as Carnap's logical syntax or Lorenzen's dialogical semantics, all of them in some way connected to the development of modern logic and the analytical movement in philosophy. In my contribution, I would like to deepen these historical remarks by pointing out that, within this tradition, there is an evident precedent to inferentialism: namely, so-called axiomatism, particularly in the form advocated by Hilbert. Per se, this will probably come as no surprise to anybody and might rather be seen as a contingent fact; as I will claim, however, this is not the case if you take into account the role that axiomatism has played in the development of mathematics. In this case, its kinship with Brandom's inferentialism turns out to be rather substantial, and it will hopefully win the sympathetic eye of those who find inferentialism hard to swallow while at the same time taking the idea of axiomatization to be a rather natural and unproblematic one.
Ratio, 2003
I aim to stand the received view about verificationism on its head. It is commonly thought that verificationism is a powerful philosophical tool, which we could deploy very effectively if only it weren't so hopelessly implausible. On the contrary, I will argue: verificationism, if properly construed, may well be true. But its philosophical applications are chimerical.
The study of the foundations of mathematics is in a crisis: the failure of logicism has invited a satisfactory alternative. This should be a better proof theory; Gentzen's natural deduction came to provide this but had only limited success, and the discussion of it has practically faded out. Solomon Feferman and his coworkers tentatively propose a synthesis that relies on achievements from diverse directions. Presenting sympathetically ideas of Hilbert, Brouwer, Gentzen, Weyl, and Bishop, they hope to change the standard textbooks of logic. This requires a more problem-oriented presentation of the background material that should make the situation more accessible to the ordinary philosophy faculty. This invites supplementary discussions of achievements that are thus far sadly overlooked, especially those of Popper, Robinson, and Lakatos.
I take issue with Robert Brandom's claim that on an analysis of knowledge based on objective probabilities it is not possible to provide a stable answer to the question whether a belief has the status of knowledge. I argue that the version of the problem of generality developed by Brandom doesn't undermine a truth-tracking account of noninferential knowledge that construes truth-tracking in terms of conditional probabilities. I then consider Sherrilyn Roush's claim that an account of knowledge based on probabilistic tracking faces a version of the problem of generality. I argue that the problems she raises are specific to her account, and do not affect the version of the view that I have advanced. I then consider Brandom's argument that the cases that motivate reliabilist epistemologies are in principle exceptional. I argue that he has failed to make a cogent case for this claim. I close with the suggestion that the representationalist approach to knowledge that I endorse and Brandom rejects is in principle compatible with the kind of pragmatist approach to belief and truth that both Brandom and I endorse.
Oxford University Press eBooks, 2020
Inferentialism is explained as an attempt to provide an account of meaning that is more sensitive (than the tradition of truth-conditional theorizing deriving from Tarski and Davidson) to what is learned when one masters meanings. The logically reformist inferentialism of Dummett and Prawitz is contrasted with the more recent quietist inferentialism of Brandom. Various other issues are highlighted for inferentialism in general, by reference to which different kinds of inferentialism can be characterized. Inferentialism for the logical operators is explained, with special reference to the Principle of Harmony. The statement of that principle in the author's book Natural Logic is fine-tuned here in the way obviously required in order to bar an interesting would-be counterexample furnished by Crispin Wright, and to stave off any more of the same. * To appear in ed. Alex Miller, Essays for Crispin Wright: Logic, Language and Mathematics (in preparation for Oxford University Press: Volume 2 of a two-volume Festschrift for Crispin Wright, co-edited with Annalisa Coliva). Discussions with Tadeusz Szubka prompted a more detailed examination of Brandom's inferentialism, and its points of contrast with Dummett's. Thanks are owed to Salvatore Florio for an extremely careful reading of an earlier draft, which resulted in significant improvements. Robert Kraut was generous with his time and expertise on a later draft. Two referees for Oxford University Press provided helpful comments, which led to considerable expansion. The author is fully responsible for any defects that remain.
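Prior's connective "tonk" is the stock illustration of why something like the Principle of Harmony is needed; the sketch below is the standard textbook example, not Tennant's own formulation of the principle:

```latex
% Prior's "tonk": an introduction rule like disjunction's paired with
% an elimination rule like conjunction's. The two rules are not in harmony.
\[
\frac{A}{A \ \mathrm{tonk}\ B}\ (\mathrm{tonk}\text{-I})
\qquad\qquad
\frac{A \ \mathrm{tonk}\ B}{B}\ (\mathrm{tonk}\text{-E})
\]
% Chaining tonk-I and tonk-E licenses the inference of an arbitrary B
% from an arbitrary A, trivializing the consequence relation.
```

Harmony, roughly, demands that an elimination rule extract no more from a compound sentence than its introduction rule requires for asserting it; tonk fails this test, which is why harmony serves as a constraint on which inferential roles confer genuine meanings.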
Analysis, 2014
Tarski's Undefinability of Truth Theorem comes in two versions: that no consistent theory which interprets Robinson's Arithmetic (Q) can prove all instances of the T-Scheme and hence define truth; and that no such theory, if sound, can even express truth. In this note, I prove corresponding limitative results for validity. While Peano Arithmetic already has the resources to define a predicate expressing logical validity, as Jeff Ketland (2012) has recently pointed out, no theory which interprets Q and is closed under the standard structural rules can either define or express validity, on pain of triviality. The results put pressure on the widespread view that there is an asymmetry between truth and validity, viz. that while the former cannot be defined within the language, the latter can. I argue that Vann McGee's and Hartry Field's arguments for the asymmetry view are problematic.
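The triviality behind such results is usually brought out via the validity Curry paradox (v-Curry); the following reconstruction is my gloss on the standard argument, not a quotation from the paper. Assume a predicate Val expressing validity, governed by VP (if B is derivable from A, then Val(⟨A⟩,⟨B⟩) is derivable) and VD (from A and Val(⟨A⟩,⟨B⟩), infer B), and use diagonalization to obtain a sentence π interderivable with Val(⟨π⟩,⟨⊥⟩):

```latex
% v-Curry: given diagonalization, VP, VD and structural contraction,
% a validity predicate yields absurdity. Notation mine.
\begin{align*}
&\pi \dashv\vdash \mathrm{Val}(\langle\pi\rangle,\langle\perp\rangle)
  && \text{(diagonal lemma)}\\
&\pi,\ \pi \vdash \perp
  && \text{(one copy of } \pi \text{ yields } \mathrm{Val}(\langle\pi\rangle,\langle\perp\rangle)\text{; apply VD to the other)}\\
&\pi \vdash \perp
  && \text{(structural contraction)}\\
&\vdash \mathrm{Val}(\langle\pi\rangle,\langle\perp\rangle)
  && \text{(VP on the previous line)}\\
&\vdash \pi
  && \text{(diagonal equivalence)}\\
&\vdash \perp
  && \text{(VD on the last two lines)}
\end{align*}
```

Note that the derivation uses only the two validity rules, diagonalization, and structural contraction, which is why the limitative result is stated for theories closed under the standard structural rules.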