LANGUAGE,

PROOF AND
LOGIC
JON BARWISE & JOHN ETCHEMENDY

In collaboration with
Gerard Allwein
Dave Barker-Plummer
Albert Liu

SEVEN BRIDGES PRESS



NEW YORK • LONDON


Copyright © 1999
CSLI Publications
Center for the Study of Language and Information
Leland Stanford Junior University
03 02 01 00 99 54321

Library of Congress Cataloging-in-Publication Data

Barwise, Jon.

Language, proof and logic / Jon Barwise and John Etchemendy ;
in collaboration with Gerard Allwein, Dave Barker-Plummer, and
Albert Liu.

p. cm.

ISBN 1-889119-08-3 (pbk. : alk. paper)


I. Etchemendy, John, 1952- II. Allwein, Gerard, 1956-
III. Barker-Plummer, Dave. IV. Liu, Albert, 1966- V. Title.
IN PROCESS

99-41113
CIP
Acknowledgements

Our primary debt of gratitude goes to our three main collaborators on this
project: Gerry Allwein, Dave Barker-Plummer, and Albert Liu. They have
worked with us in designing the entire package, developing and implementing
the software, and teaching from and refining the text. Without their intelli-
gence, dedication, and hard work, LPL would neither exist nor have most of
its other good properties.
In addition to the five of us, many people have contributed directly and in-
directly to the creation of the package. First, over two dozen programmers have
worked on predecessors of the software included with the package, both earlier
versions of Tarski’s World and the program Hyperproof, some of whose code
has been incorporated into Fitch. We want especially to mention Christopher
Fuselier, Mark Greaves, Mike Lenz, Eric Ly, and Rick Wong, whose outstand-
ing contributions to the earlier programs provided the foundation of the new
software. Second, we thank several people who have helped with the develop-
ment of the new software in essential ways: Rick Sanders, Rachel Farber, Jon
Russell Barwise, Alex Lau, Brad Dolin, Thomas Robertson, Larry Lemmon,
and Daniel Chai. Their contributions have improved the package in a host of
ways.
Prerelease versions of LPL have been tested at several colleges and uni-
versities. In addition, other colleagues have provided excellent advice that we
have tried to incorporate into the final package. We thank Selmer Bringsjord,
Rensselaer Polytechnic Institute; Tom Burke, University of South Carolina;
Robin Cooper, Gothenburg University; James Derden, Humboldt State Uni-
versity; Josh Dever, SUNY Albany; Avrom Faderman, University of Rochester;
James Garson, University of Houston; Ted Hodgson, Montana State Univer-
sity; John Justice, Randolph-Macon Women’s College; Ralph Kennedy, Wake
Forest University; Michael O’Rourke, University of Idaho; Greg Ray, Univer-
sity of Florida; Cindy Stern, California State University, Northridge; Richard
Tieszen, San Jose State University; Saul Traiger, Occidental College; and Lyle
Zynda, Indiana University at South Bend. We are particularly grateful to John
Justice, Ralph Kennedy, and their students (as well as the students at Stan-
ford and Indiana University), for their patience with early versions of the
software and for their extensive comments and suggestions.
We would also like to thank Stanford’s Center for the Study of Language
and Information and Indiana University’s College of Arts and Sciences for
their financial support of the project. Finally, we are grateful to our two
publishers, Dikran Karagueuzian of CSLI Publications and Clay Glad of Seven
Bridges Press, for their skill and enthusiasm about LPL.

Contents

Acknowledgements iii

Introduction 1
The special role of logic in rational inquiry . . . . . . . . . . . . . . 1
Why learn an artificial language? . . . . . . . . . . . . . . . . . . . . 2
Consequence and proof . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Instructions about homework exercises (essential! ) . . . . . . . . . . 5
To the instructor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Web address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

I Propositional Logic 17
1 Atomic Sentences 19
1.1 Individual constants . . . . . . . . . . . . . . . . . . . . . . . . 19
1.2 Predicate symbols . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.3 Atomic sentences . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.4 General first-order languages . . . . . . . . . . . . . . . . . . . 28
1.5 Function symbols (optional ) . . . . . . . . . . . . . . . . . . . . 31
1.6 The first-order language of set theory (optional ) . . . . . . . . 37
1.7 The first-order language of arithmetic (optional) . . . . . . . . 38
1.8 Alternative notation (optional ) . . . . . . . . . . . . . . . . . . 40

2 The Logic of Atomic Sentences 41


2.1 Valid and sound arguments . . . . . . . . . . . . . . . . . . . . 41
2.2 Methods of proof . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.3 Formal proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
2.4 Constructing proofs in Fitch . . . . . . . . . . . . . . . . . . . . 58
2.5 Demonstrating nonconsequence . . . . . . . . . . . . . . . . . . 63
2.6 Alternative notation (optional ) . . . . . . . . . . . . . . . . . . 66

3 The Boolean Connectives 67


3.1 Negation symbol: ¬ . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.2 Conjunction symbol: ∧ . . . . . . . . . . . . . . . . . . . . . . . 71
3.3 Disjunction symbol: ∨ . . . . . . . . . . . . . . . . . . . . . . . 74
3.4 Remarks about the game . . . . . . . . . . . . . . . . . . . . . 77


3.5 Ambiguity and parentheses . . . . . . . . . . . . . . . . . . . . 79


3.6 Equivalent ways of saying things . . . . . . . . . . . . . . . . . 82
3.7 Translation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.8 Alternative notation (optional ) . . . . . . . . . . . . . . . . . . 89

4 The Logic of Boolean Connectives 93


4.1 Tautologies and logical truth . . . . . . . . . . . . . . . . . . . 94
4.2 Logical and tautological equivalence . . . . . . . . . . . . . . . 106
4.3 Logical and tautological consequence . . . . . . . . . . . . . . . 110
4.4 Tautological consequence in Fitch . . . . . . . . . . . . . . . . . 114
4.5 Pushing negation around (optional) . . . . . . . . . . . . . . . 117
4.6 Conjunctive and disjunctive normal forms (optional ) . . . . . . 121

5 Methods of Proof for Boolean Logic 127


5.1 Valid inference steps . . . . . . . . . . . . . . . . . . . . . . . . 128
5.2 Proof by cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
5.3 Indirect proof: proof by contradiction . . . . . . . . . . . . . . . 136
5.4 Arguments with inconsistent premises (optional ) . . . . . . . . 140

6 Formal Proofs and Boolean Logic 142


6.1 Conjunction rules . . . . . . . . . . . . . . . . . . . . . . . . . . 143
6.2 Disjunction rules . . . . . . . . . . . . . . . . . . . . . . . . . . 148
6.3 Negation rules . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
6.4 The proper use of subproofs . . . . . . . . . . . . . . . . . . . . 163
6.5 Strategy and tactics . . . . . . . . . . . . . . . . . . . . . . . . 167
6.6 Proofs without premises (optional) . . . . . . . . . . . . . . . . 173

7 Conditionals 176
7.1 Material conditional symbol: → . . . . . . . . . . . . . . . . . . 178
7.2 Biconditional symbol: ↔ . . . . . . . . . . . . . . . . . . . . . . 181
7.3 Conversational implicature . . . . . . . . . . . . . . . . . . . . 187
7.4 Truth-functional completeness (optional ) . . . . . . . . . . . . . 190
7.5 Alternative notation (optional ) . . . . . . . . . . . . . . . . . . 196

8 The Logic of Conditionals 198


8.1 Informal methods of proof . . . . . . . . . . . . . . . . . . . . . 198
8.2 Formal rules of proof for → and ↔ . . . . . . . . . . . . . . . . 206
8.3 Soundness and completeness (optional) . . . . . . . . . . . . . . 214
8.4 Valid arguments: some review exercises . . . . . . . . . . . . . . 222


II Quantifiers 225
9 Introduction to Quantification 227
9.1 Variables and atomic wffs . . . . . . . . . . . . . . . . . . . . . 228
9.2 The quantifier symbols: ∀, ∃ . . . . . . . . . . . . . . . . . . . . 230
9.3 Wffs and sentences . . . . . . . . . . . . . . . . . . . . . . . . . 231
9.4 Semantics for the quantifiers . . . . . . . . . . . . . . . . . . . . 234
9.5 The four Aristotelian forms . . . . . . . . . . . . . . . . . . . . 239
9.6 Translating complex noun phrases . . . . . . . . . . . . . . . . 243
9.7 Quantifiers and function symbols (optional ) . . . . . . . . . . . 251
9.8 Alternative notation (optional ) . . . . . . . . . . . . . . . . . . 255

10 The Logic of Quantifiers 257


10.1 Tautologies and quantification . . . . . . . . . . . . . . . . . . . 257
10.2 First-order validity and consequence . . . . . . . . . . . . . . . 266
10.3 First-order equivalence and DeMorgan’s laws . . . . . . . . . . 275
10.4 Other quantifier equivalences (optional ) . . . . . . . . . . . . . 280
10.5 The axiomatic method (optional) . . . . . . . . . . . . . . . . . 283

11 Multiple Quantifiers 289


11.1 Multiple uses of a single quantifier . . . . . . . . . . . . . . . . 289
11.2 Mixed quantifiers . . . . . . . . . . . . . . . . . . . . . . . . . . 293
11.3 The step-by-step method of translation . . . . . . . . . . . . . . 298
11.4 Paraphrasing English . . . . . . . . . . . . . . . . . . . . . . . . 300
11.5 Ambiguity and context sensitivity . . . . . . . . . . . . . . . . 304
11.6 Translations using function symbols (optional ) . . . . . . . . . 308
11.7 Prenex form (optional ) . . . . . . . . . . . . . . . . . . . . . . . 311
11.8 Some extra translation problems . . . . . . . . . . . . . . . . . 315

12 Methods of Proof for Quantifiers 319


12.1 Valid quantifier steps . . . . . . . . . . . . . . . . . . . . . . . . 319
12.2 The method of existential instantiation . . . . . . . . . . . . . . 322
12.3 The method of general conditional proof . . . . . . . . . . . . . 323
12.4 Proofs involving mixed quantifiers . . . . . . . . . . . . . . . . 329
12.5 Axiomatizing shape (optional ) . . . . . . . . . . . . . . . . . . 338

13 Formal Proofs and Quantifiers 342


13.1 Universal quantifier rules . . . . . . . . . . . . . . . . . . . . . 342
13.2 Existential quantifier rules . . . . . . . . . . . . . . . . . . . . . 347
13.3 Strategy and tactics . . . . . . . . . . . . . . . . . . . . . . . . 352
13.4 Soundness and completeness (optional ) . . . . . . . . . . . . . . 361


13.5 Some review exercises (optional ) . . . . . . . . . . . . . . . . . 361

14 More about Quantification (optional ) 364


14.1 Numerical quantification . . . . . . . . . . . . . . . . . . . . . . 366
14.2 Proving numerical claims . . . . . . . . . . . . . . . . . . . . . 374
14.3 The, both, and neither . . . . . . . . . . . . . . . . . . . . . . . 379
14.4 Adding other determiners to fol . . . . . . . . . . . . . . . . . 383
14.5 The logic of generalized quantification . . . . . . . . . . . . . . 389
14.6 Other expressive limitations of first-order logic . . . . . . . . . 397

III Applications and Metatheory 403


15 First-order Set Theory 405
15.1 Naive set theory . . . . . . . . . . . . . . . . . . . . . . . . . . 406
15.2 Singletons, the empty set, subsets . . . . . . . . . . . . . . . . . 412
15.3 Intersection and union . . . . . . . . . . . . . . . . . . . . . . . 415
15.4 Sets of sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
15.5 Modeling relations in set theory . . . . . . . . . . . . . . . . . . 422
15.6 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
15.7 The powerset of a set (optional ) . . . . . . . . . . . . . . . . . 429
15.8 Russell’s Paradox (optional ) . . . . . . . . . . . . . . . . . . . . 432
15.9 Zermelo Frankel set theory zfc (optional ) . . . . . . . . . . . . 433

16 Mathematical Induction 442


16.1 Inductive definitions and inductive proofs . . . . . . . . . . . . 443
16.2 Inductive definitions in set theory . . . . . . . . . . . . . . . . . 451
16.3 Induction on the natural numbers . . . . . . . . . . . . . . . . . 453
16.4 Axiomatizing the natural numbers (optional ) . . . . . . . . . . 456
16.5 Proving programs correct (optional ) . . . . . . . . . . . . . . . 458

17 Advanced Topics in Propositional Logic 468


17.1 Truth assignments and truth tables . . . . . . . . . . . . . . . . 468
17.2 Completeness for propositional logic . . . . . . . . . . . . . . . 470
17.3 Horn sentences (optional) . . . . . . . . . . . . . . . . . . . . . 479
17.4 Resolution (optional ) . . . . . . . . . . . . . . . . . . . . . . . . 488

18 Advanced Topics in FOL 495


18.1 First-order structures . . . . . . . . . . . . . . . . . . . . . . . . 495
18.2 Truth and satisfaction, revisited . . . . . . . . . . . . . . . . . . 500
18.3 Soundness for fol . . . . . . . . . . . . . . . . . . . . . . . . . 509


18.4 The completeness of the shape axioms (optional ) . . . . . . . . 512


18.5 Skolemization (optional) . . . . . . . . . . . . . . . . . . . . . . 514
18.6 Unification of terms (optional ) . . . . . . . . . . . . . . . . . . 516
18.7 Resolution, revisited (optional ) . . . . . . . . . . . . . . . . . . 519

19 Completeness and Incompleteness 526


19.1 The Completeness Theorem for fol . . . . . . . . . . . . . . . 527
19.2 Adding witnessing constants . . . . . . . . . . . . . . . . . . . . 529
19.3 The Henkin theory . . . . . . . . . . . . . . . . . . . . . . . . . 531
19.4 The Elimination Theorem . . . . . . . . . . . . . . . . . . . . . 534
19.5 The Henkin Construction . . . . . . . . . . . . . . . . . . . . . 540
19.6 The Löwenheim-Skolem Theorem . . . . . . . . . . . . . . . . . 546
19.7 The Compactness Theorem . . . . . . . . . . . . . . . . . . . . 548
19.8 The Gödel Incompleteness Theorem . . . . . . . . . . . . . . . 552

Summary of Formal Proof Rules 557


Propositional rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
First-order rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
Inference Procedures (Con Rules) . . . . . . . . . . . . . . . . . . . 561

Glossary 562

General Index 573

Exercise Files Index 585

Introduction

The special role of logic in rational inquiry

What do the fields of astronomy, economics, finance, law, mathematics, med-
icine, physics, and sociology have in common? Not much in the way of sub-
ject matter, that’s for sure. And not all that much in the way of methodology.
What they do have in common, with each other and with many other fields, is
their dependence on a certain standard of rationality. In each of these fields,
it is assumed that the participants can differentiate between rational argu-
mentation based on assumed principles or evidence, and wild speculation or
nonsequiturs, claims that in no way follow from the assumptions. In other
words, these fields all presuppose an underlying acceptance of basic principles
of logic.
For that matter, all rational inquiry depends on logic, on the ability of
people to reason correctly most of the time, and, when they fail to reason
correctly, on the ability of others to point out the gaps in their reasoning.
While people may not all agree on a whole lot, they do seem to be able to agree
on what can legitimately be concluded from given information. Acceptance of
these commonly held principles of rationality is what differentiates rational
inquiry from other forms of human activity.
Just what are the principles of rationality presupposed by these disciplines?
And what are the techniques by which we can distinguish correct or “valid”
reasoning from incorrect or “invalid” reasoning? More basically, what is it
that makes one claim “follow logically” from some given information, while
some other claim does not?
Many answers to these questions have been explored. Some people have
claimed that the laws of logic are simply a matter of convention. If this is so,
we could presumably decide to change the conventions, and so adopt different
principles of logic, the way we can decide which side of the road we drive
on. But there is an overwhelming intuition that the laws of logic are somehow
more fundamental, less subject to repeal, than the laws of the land, or even the
laws of physics. We can imagine a country in which a red traffic light means
go, and a world on which water flows up hill. But we can’t even imagine a
world in which there both are and are not nine planets.
The importance of logic has been recognized since antiquity. After all, no
science can be any more certain than its weakest link. If there is something
arbitrary about logic, then the same must hold of all rational inquiry. Thus
it becomes crucial to understand just what the laws of logic are, and even
more important, why they are laws of logic. These are the questions that one
takes up when one studies logic itself. To study logic is to use the methods of
rational inquiry on rationality itself.
Over the past century the study of logic has undergone rapid and im-
portant advances. Spurred on by logical problems in that most deductive of
disciplines, mathematics, it developed into a discipline in its own right, with its
own concepts, methods, techniques, and language. The Encyclopedia Britan-
nica lists logic as one of the seven main branches of knowledge. More recently,
the study of logic has played a major role in the development of modern day
computers and programming languages. Logic continues to play an important
part in computer science; indeed, it has been said that computer science is
just logic implemented in electrical engineering.
This book is intended to introduce you to some of the most important
concepts and tools of logic. Our goal is to provide detailed and systematic
answers to the questions raised above. We want you to understand just how
the laws of logic follow inevitably from the meanings of the expressions we
use to make claims. Convention is crucial in giving meaning to a language,
but once the meaning is established, the laws of logic follow inevitably.
More particularly, we have two main aims. The first is to help you learn
a new language, the language of first-order logic. The second is to help you
learn about the notion of logical consequence, and about how one goes about
establishing whether some claim is or is not a logical consequence of other
accepted claims. While there is much more to logic than we can even hint at
in this book, or than any one person could learn in a lifetime, we can at least
cover these most basic of issues.

Why learn an artificial language?

This language of first-order logic is very important. Like Latin, the language is
not spoken, but unlike Latin, it is used every day by mathematicians, philoso-
phers, computer scientists, linguists, and practitioners of artificial intelligence.
Indeed, in some ways it is the universal language, the lingua franca, of the sym-
bolic sciences. Although it is not so frequently used in other forms of rational
inquiry, like medicine and finance, it is also a valuable tool for understanding
the principles of rationality underlying these disciplines as well.
The language goes by various names: the lower predicate calculus, the
functional calculus, the language of first-order logic, and fol. The last of
these is pronounced ef–oh–el, not fall, and is the name we will use.
Certain elements of fol go back to Aristotle, but the language as we know
it today has emerged over the past hundred years. The names chiefly associ-
ated with its development are those of Gottlob Frege, Giuseppe Peano, and
Charles Sanders Peirce. In the late nineteenth century, these three logicians
independently came up with the most important elements of the language,
known as the quantifiers. Since then, there has been a process of standard-
ization and simplification, resulting in the language in its present form. Even
so, there remain certain dialects of fol, differing mainly in the choice of the
particular symbols used to express the basic notions of the language. We will
use the dialect most common in mathematics, though we will also tell you
about several other dialects along the way. Fol is used in different ways in
different fields. In mathematics, it is used in an informal way quite exten-
sively. The various connectives and quantifiers find their way into a great deal
of mathematical discourse, both formal and informal, as in a classroom set-
ting. Here you will often find elements of fol interspersed with English or
the mathematician’s native language. If you’ve ever taken calculus you have
probably seen such formulas as:

∀ε > 0 ∃δ > 0 . . .

Here, the unusual, rotated letters are taken directly from the language fol.
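For instance, one formula of this shape, written out in full, is the definition of
what it means for a function f to be continuous at a point a. In LaTeX notation
(given here only as an illustration, not taken from the text), it reads:

\forall \varepsilon > 0 \; \exists \delta > 0 \; \forall x \;
  ( |x - a| < \delta \rightarrow |f(x) - f(a)| < \varepsilon )

It says that however small a tolerance ε we are given, some δ can be found that
keeps f(x) within ε of f(a) whenever x is within δ of a.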
In philosophy, fol and enrichments of it are used in two different ways. As
in mathematics, the notation of fol is used when absolute clarity, rigor, and
lack of ambiguity are essential. But it is also used as a case study of making
informal notions (like grammaticality, meaning, truth, and proof) precise and
rigorous. The applications in linguistics stem from this use, since linguistics
is concerned, in large part, with understanding some of these same informal
notions.
In artificial intelligence, fol is also used in two ways. Some researchers
take advantage of the simple structure of fol sentences to use it as a way to
encode knowledge to be stored and used by a computer. Thinking is modeled
by manipulations involving sentences of fol. The other use is as a precise
specification language for stating axioms and proving results about artificial
agents.
In computer science, fol has had an even more profound influence. The
very idea of an artificial language that is precise yet rich enough to program
computers was inspired by this language. In addition, all extant programming
languages borrow some notions from one or another dialect of fol. Finally,
there are so-called logic programming languages, like Prolog, whose programs
are sequences of sentences in a certain dialect of fol. We will discuss the
logical basis of Prolog a bit in Part III of this book.


Fol serves as the prototypical example of what is known as an artificial
language. These are languages that were designed for special purposes, and
are contrasted with so-called natural languages, languages like English and
Greek that people actually speak. The design of artificial languages within the
symbolic sciences is an important activity, one that is based on the success of
fol and its descendants.
Even if you are not going to pursue logic or any of the symbolic sciences,
the study of fol can be of real benefit. That is why it is so widely taught. For
one thing, learning fol is an easy way to demystify a lot of formal work. It will
also teach you a great deal about your own language, and the laws of logic it
supports. First, fol, while very simple, incorporates in a clean way some of the
important features of human languages. This helps make these features much
more transparent. Chief among these is the relationship between language
and the world. But, second, as you learn to translate English sentences into
fol you will also gain an appreciation of the great subtlety that resides in
English, subtlety that cannot be captured in fol or similar languages, at least
not yet. Finally, you will gain an awareness of the enormous ambiguity present
in almost every English sentence, ambiguity which somehow does not prevent
us from understanding each other in most situations.

Consequence and proof

Earlier, we asked what makes one claim follow from others: convention, or
something else? Giving an answer to this question for fol takes up a signif-
icant part of this book. But a short answer can be given here. Modern logic
teaches us that one claim is a logical consequence of another if there is no way
the latter could be true without the former also being true.
This is the notion of logical consequence implicit in all rational inquiry.
All the rational disciplines presuppose that this notion makes sense, and that
we can use it to extract consequences of what we know to be so, or what we
think might be so. It is also used in disconfirming a theory. For if a particular
claim is a logical consequence of a theory, and we discover that the claim is
false, then we know the theory itself must be incorrect in some way or other.
If our physical theory has as a consequence that the planetary orbits are
circular when in fact they are elliptical, then there is something wrong with our
physics. If our economic theory says that inflation is a necessary consequence
of low unemployment, but today’s low unemployment has not caused inflation,
then our economic theory needs reassessment.
Rational inquiry, in our sense, is not limited to academic disciplines, and so
neither are the principles of logic. If your beliefs about a close friend logically
imply that he would never spread rumors behind your back, but you find that
he has, then your beliefs need revision. Logical consequence is central, not
only to the sciences, but to virtually every aspect of everyday life.
One of our major concerns in this book is to examine this notion of logical
consequence as it applies specifically to the language fol. But in so doing, we
will also learn a great deal about the relation of logical consequence in natural
languages. Our main concern will be to learn how to recognize when a specific
claim follows logically from others, and conversely, when it does not. This is
an extremely valuable skill, even if you never have occasion to use fol again
after taking this course. Much of our lives are spent trying to convince other
people of things, or being convinced of things by other people, whether the
issue is inflation and unemployment, the kind of car to buy, or how to spend
the evening. The ability to distinguish good reasoning from bad will help you
recognize when your own reasoning could be strengthened, or when that of
others should be rejected, despite superficial plausibility.
It is not always obvious when one claim is a logical consequence of oth-
ers, but powerful methods have been developed to address this problem, at
least for fol. In this book, we will explore methods of proof—how we can
prove that one claim is a logical consequence of another—and also methods
for showing that a claim is not a consequence of others. In addition to the
language fol itself, these two methods, the method of proof and the method
of counterexample, form the principal subject matter of this book.
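In the simplest, purely propositional case, both ideas can be made concrete in a
few lines of code. The sketch below is only our illustration—it is written in
Python, is not part of the software that accompanies the book, and its names are
invented for the example—but it shows how the definition of consequence given
above amounts to a search for a counterexample: an assignment of truth values
making the premises true and the conclusion false.

from itertools import product

def is_consequence(premises, conclusion, atoms):
    # The conclusion is a consequence of the premises if no assignment
    # of truth values makes every premise true and the conclusion false.
    for values in product([True, False], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if all(p(assignment) for p in premises) and not conclusion(assignment):
            return False  # this assignment is a counterexample
    return True

# Q follows from P together with "if P then Q", but P does not follow from Q.
P = lambda v: v["P"]
Q = lambda v: v["Q"]
if_P_then_Q = lambda v: (not v["P"]) or v["Q"]

print(is_consequence([P, if_P_then_Q], Q, ["P", "Q"]))  # True
print(is_consequence([Q], P, ["P", "Q"]))               # False

When is_consequence returns False, the assignment it has found plays the role of
a counterexample; when it returns True, every way of making the premises true
also makes the conclusion true.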

Essential instructions about homework exercises

This book came packaged with software that you must have to use the book.
In the software package, you will find a CD-ROM containing four computer
applications—Tarski’s World, Fitch, Boole and Submit—and a manual that
explains how to use them. If you do not have the complete package, you will
not be able to do many of the exercises or follow many of the examples used in
the book. The CD-ROM also contains an electronic copy of the book, in case
you prefer reading it on your computer. When you buy the package, you also
get access to the Grade Grinder, an Internet grading service that can check
whether your homework is correct.
About half of the exercises in the first two parts of the book will be com-
pleted using the software on the CD-ROM. These exercises typically require
that you create a file or files using Tarski’s World, Fitch or Boole, and then
submit these solution files using the program Submit. When you do this, your
solutions are not submitted directly to your instructor, but rather to our grad-
ing server, the Grade Grinder, which assesses your files and sends a report to
both you and your instructor. (If you are not using this book as a part of a
formal class, you can have the reports sent just to you.)
Exercises in the book are numbered n.m, where n is the number of the
chapter and m is the number of the exercise in that chapter. Exercises whose
solutions consist of one or more files that you are to submit to the Grade
Grinder are indicated with an arrow (➶), so that you know the solutions are
to be sent off into the Internet ether. Exercises whose solutions are to be
turned in (on paper) to your instructor are indicated with a pencil (✎). For
example, Exercises 36 and 37 in Chapter 6 might look like this:
6.36 (➶) Use Tarski’s World to build a world in which the following sentences
are all true. . . .
6.37 (✎) Turn in an informal proof that the following argument is logically
valid. . . .
The arrow on Exercise 6.36 tells you that the world you create using
Tarski’s World is to be submitted electronically, and that there is nothing
else to turn in. The pencil on Exercise 6.37 tells you that your solution should
be turned in directly to your instructor, on paper.
Some exercises ask you to turn in something to your instructor in addition
to submitting a file electronically. These are indicated with both an arrow and
a pencil (➶|✎). This is also used when the exercise may require a file to be
submitted, but may not, depending on the solution. For example, the next
problem in Chapter 6 might ask:
6.38 (➶|✎) Is the following argument valid? If so, use Fitch to construct a formal
proof of its validity. If not, explain why it is invalid and turn in your
explanation to your instructor.

Here, we can’t tell you definitely whether you’ll be submitting a file or
turning something in without giving away an important part of the exercise,
so we mark the exercise with both symbols.
By the way, in giving instructions in the exercises, we will reserve the word
“submit” for electronic submission, using the Submit program. We use “turn
in” when you are to turn in the solution to your instructor.
When you create files to be submitted to the Grade Grinder, it is important
that you name them correctly. Sometimes we will tell you what to name the
files, but more often we expect you to follow a few standard conventions. Our
naming conventions are simple. If you are creating a proof using Fitch, then
you should name the file Proof n.m, where n.m is the number of the exercise. If
you are creating a world or sentence file in Tarski’s World, then you should call
it either World n.m or Sentences n.m, where n.m is the number of the exercise.
Finally, if you are creating a truth table using Boole, you should name it
Table n.m. The key thing is to get the right exercise number in the name,
since otherwise your solution will be graded incorrectly. We’ll remind you of
these naming conventions a few times, but after that you’re on your own.
When an exercise asks you to construct a formal proof using Fitch, you
will find a file on your disk called Exercise n.m. This file contains the proof set
up, so you should open it and construct your solution in this file. This is a lot
easier for you and also guarantees that the Grade Grinder will know which
exercise you are solving. So make sure you always start with the packaged
Exercise file when you create your solution.
Exercises may also have from one to three stars (⋆, ⋆⋆, ⋆⋆⋆), as a rough
indication of the difficulty of the problem. For example, this would be an
exercise that is a little more difficult than average (and whose solution you
turn in to your instructor):
6.39 (✎⋆) Design a first-order language that allows you to express the following
English sentences. . . .

Remember

1. The arrow (➶) means that you submit your solution electronically.

2. The pencil (✎) means that you turn in your solution to your instruc-
tor.

3. The combination (➶|✎) means that your solution may be either a
submitted file or something to turn in, or possibly both.

4. Stars (⋆, ⋆⋆, ⋆⋆⋆) indicate exercises that are more difficult than average.

5. Unless otherwise instructed, name your files Proof n.m, World n.m,
Sentences n.m, or Table n.m, where n.m is the number of the exercise.

6. When using Fitch to construct Proof n.m, start with the exercise file
Exercise n.m, which contains the problem setup.

Throughout the book, you will find a special kind of exercise that we
call You try it exercises. These appear as part of the text rather than in
the exercise sections because they are particularly important. They either
illustrate important points about logic that you will need to understand later
or teach you some basic operations involving one of the computer programs
that came with your book. Because of this, you shouldn’t skip any of the You
try it sections. Do these exercises as soon as you come to them, if you are in
the vicinity of a computer. If you aren’t in the vicinity of a computer, come
back and do them as soon as you are.
Here’s your first You try it exercise. Make sure you actually do it, right
now if possible. It will teach you how to use Submit to send files to the Grade
Grinder, a skill you definitely want to learn. You will need to know your email
address, your instructor’s name and email address, and your Book ID number
before you can do the exercise. If you don’t know any of these, talk to your
instructor first. Your computer must be connected to the internet to submit
files. If it’s not, use a public computer at your school or at a public library.

You try it
................................................................
1. We’re going to step you through the process of submitting a file to the
Grade Grinder. The file is called World Submit Me 1. It is a Tarski’s World
file, but you won’t have to open it using Tarski’s World in order to sub-
mit it. We’ll pretend that it is an exercise file that you’ve created while
doing your homework, and now you’re ready to submit it. More complete
instructions on running Submit are contained in the instruction manual
that came with the software.
2. Find the program Submit on the CD-ROM that came with your book.
Submit has a blue and yellow icon and appears inside a folder called Sub-
mit Folder. Once you’ve found it, double-click on the icon to launch the
program.
3. After a moment, you will see the main Submit window, which has a rotat-
ing cube in the upper-left corner. The first thing you should do is fill in the
requested information in the five fields. Enter your Book ID first, then your
name and email address. You have to use your complete email address—
for example, [email protected], not just claire or claire@cs—since
the Grade Grinder will need the full address to send its response back to
you. Also, if you have more than one email address, you have to use the
same one every time you submit files, since your email address and Book ID
together are how Grade Grinder will know that it is really you submitting
files. Finally, fill in your instructor’s name and complete email address. Be
very careful to enter the correct and complete email addresses!


4. If you are working on your own computer, you might want to save the
information you’ve just entered on your hard disk so that you won’t have
to enter it by hand each time. You can do this by choosing Save As. . .
from the File menu. This will save all the information except the Book ID
in a file called Submit User Data. Later, you can launch Submit by double-
clicking on this file, and the information will already be entered when the
program starts up.

5. We’re now ready to specify the file to submit. Click on the button Choose
Files To Submit in the lower-left corner. This opens a window showing
two file lists. The list on the left shows files on your computer—currently,
the ones inside the Submit Folder—while the one on the right (which is
currently empty) will list files you want to submit. We need to locate the
file World Submit Me 1 on the left and copy it over to the right.
The file World Submit Me 1 is located in the Tarski’s World exercise files
folder. To find this folder you will have to navigate among folders until it
appears in the file list on the left. Start by clicking once on the Submit
Folder button above the left-hand list. A menu will appear and you can
then move up to higher folders by choosing their names (the higher folders
appear lower on this menu). Move to the next folder up from the Submit
Folder, which should be called LPL Software. When you choose this folder,
the list of files will change. On the new list, find the folder Tarski’s World
Folder and double-click on its name to see the contents of the folder. The
list will again change and you should now be able to see the folder TW Exer-
cise Files. Double-click on this folder and the file list will show the contents
of this folder. Toward the bottom of the list (you will have to scroll down
the list by clicking on the scroll buttons), you will find World Submit Me
1. Double-click on this file and its name will move to the list on the right.

6. When you have successfully gotten the file World Submit Me 1 on the right-
hand list, click the Done button underneath the list. This should bring you
back to the original Submit window, only now the file you want to submit
appears in the list of files. (Macintosh users can get to this point quickly by
dragging the files they want to submit onto the Submit icon in the Finder.
This will launch Submit and put those files in the submission list. If you
drag a folder of files, it will put all the files in the folder onto the list.)

7. When you have the correct file on the submission list, click on the Sub-
mit Files button under this list. Submit will ask you to confirm that you
want to submit World Submit Me 1, and whether you want to send the
results just to you or also to your instructor. In this case, select Just Me.
When you are submitting finished homework exercises, you should select
Instructor Too. Once you’ve chosen who the results should go to, click
the Proceed button and your submission will be sent. (With real home-
work, you can always do a trial submission to see if you got the answers
right, asking that the results be sent just to you. When you are satisfied
with your solutions, submit the files again, asking that the results be sent
to the instructor too. But don’t forget the second submission!)
8. In a moment, you will get a dialog box that will tell you if your submission
has been successful. If so, it will give you a “receipt” message that you can
save, if you like. If you do not get this receipt, then your submission has
not gone through and you will have to try again.
9. A few minutes after the Grade Grinder receives your file, you should get
an email message saying that it has been received. If this were a real home-
work exercise, it would also tell you if the Grade Grinder found any errors
in your homework solutions. You won’t get an email report if you put in
the wrong, or a misspelled, email address. If you don’t get a report, try
submitting again with the right address.
10. When you are done, choose Quit from the File menu. Congratulations on
submitting your first file.
................................................................
Here’s an important thing for you to know: when you submit files to the
Grade Grinder, Submit sends a copy of the files. The original files are still
on the disk where you originally saved them. If you saved them on a public
computer, it is best not to leave them lying around. Put them on a floppy disk
that you can take with you, and delete any copies from the public computer’s
hard disk.

To the instructor

Students, you may skip this section. It is a personal note from us, the authors,
to instructors planning to use this package in their logic courses.

Practical matters
We use the Language, Proof and Logic package (LPL) in two very different
sorts of courses. One is a first course in logic for undergraduates with no
previous background in logic, philosophy, mathematics, or computer science.

This important course, sometimes disparagingly referred to as “baby logic,”
is often an undergraduate’s first and only exposure to the rigorous study of
reasoning. When we teach this course, we cover much of the first two parts
of the book, leaving out many of the sections indicated as optional in the
table of contents. Although some of the material in these two parts may seem
more advanced than is usually covered in a traditional introductory course,
we find that the software makes it completely accessible to even the relatively
unprepared student.
At the other end of the spectrum, we use LPL in an introductory graduate-
level course in metatheory, designed for students who have already had some
exposure to logic. In this course, we quickly move through the first two parts,
thereby giving the students both a review and a common framework for use
in the discussions of soundness and completeness. Using the Grade Grinder,
students can progress through much of the early material at their own pace,
doing only as many exercises as is needed to demonstrate competence.
There are no doubt many other courses for which the package would be
suitable. Though we have not had the opportunity to use it this way, it would
be ideally suited for a two-term course in logic and its metatheory.
Our courses are typically listed as philosophy courses, though many of the
students come from other majors. Since LPL is designed to satisfy the logical
needs of students from a wide variety of disciplines, it fits naturally into logic
courses taught in other departments, most typically mathematics and com-
puter science. Instructors in different departments may select different parts
of the optional material. For example, computer science instructors may want
to cover the sections on resolution in Part III, though philosophy instructors
generally do not cover this material.
If you have not used software in your teaching before, you may be con-
cerned about how to incorporate it into your class. Again, there is a spectrum
of possibilities. At one end is to conduct your class exactly the way you always
do, letting the students use the software on their own to complete homework
assignments. This is a perfectly fine way to use the package, and the students
will still benefit significantly from the suite of software tools. We find that
most students now have easy access to computers and the Internet, and so
no special provisions are necessary to allow them to complete and submit the
homework.
At the other end are courses given in computer labs or classrooms, where
the instructor is more a mentor offering help to students as they proceed at
their own pace, a pace you can keep in step with periodic quizzes and exams.
Here the student becomes a more active participant in the learning, but such
a class requires a high computer:student ratio, at least one:three. For a class
of 30 or fewer students, this can be a very effective way to teach a beginning
logic course.
In between, and the style we typically use, is to give reasonably traditional
presentations, but to bring a laptop to class from time to time to illustrate
important material using the programs. This requires some sort of projection
system, but also allows you to ask the students to do some of the computer
problems in class. We encourage you to get students to operate the computer
themselves in front of the class, since they thereby learn from one another,
both about strategies for solving problems and constructing proofs, and about
different ways to use the software. A variant of this is to schedule a weekly
lab session as part of the course.
The book contains an extremely wide variety of exercises, ranging from
solving puzzles expressed in fol to conducting Boolean searches on the World
Wide Web. There are far more exercises than you can expect your students
to do in a single quarter or semester. Beware that many exercises, especially
those using Tarski’s World, should be thought of as exercise sets. They may, for
example, involve translating ten or twenty sentences, or transforming several
sentences into conjunctive normal form. Students can find hints and solutions
to selected exercises on our web site. You can download a list of these exercises
from the same site.
Although there are more exercises than you can reasonably assign in a
semester, and so you will have to select those that best suit your course, we
do urge you to assign all of the You try it exercises. These are not difficult
and do not test students’ knowledge. Instead, they are designed to illustrate
important logical concepts, to introduce students to important features of the
programs, or both. The Grade Grinder will check any files that the students
create in these sections.
We should say a few words about the Grade Grinder, since it is a truly
innovative feature of this package. Most important, the Grade Grinder will
free you from the most tedious aspect of teaching logic, namely, grading those
kinds of problems whose assessment can be mechanized. These include formal
proofs, translation into fol, truth tables, and various other kinds of exercises.
This will allow you to spend more time on the more rewarding parts of teaching
the material.
That said, it is important to emphasize two points. The first is that the
Grade Grinder is not limited in the way that most computerized grading
programs are. It uses sophisticated techniques, including a powerful first-order
theorem prover, in assessing student answers and providing intelligent reports
on those answers. Second, in designing this package, we have not fallen into
the trap of tailoring the material to what can be mechanically assessed. We
firmly believe that computer-assisted learning has an important but limited
role to play in logic instruction. Much of what we teach goes beyond what
can be assessed automatically. This is why about half of the exercises in the
book still require human attention.
It is a bit misleading to say that the Grade Grinder “grades” the home-
work. The Grade Grinder simply reports to you any errors in the students’
solutions, leaving the decision to you what weight to give to individual prob-
lems and whether partial credit is appropriate for certain mistakes. A more
detailed explanation of what the Grade Grinder does and what grade reports
look like can be found at the web address given on page 15.
Before your students can request that their Grade Grinder results be sent
to you, you will have to register with the Grade Grinder as an instructor. This
can be done by going to the LPL web site and following the Instructor links.

Philosophical remarks
This book, and the supporting software that comes with it, grew out of our
own dissatisfaction with beginning logic courses. It seems to us that students
all too often come away from these courses with neither of the things we
want them to have. They do not understand the first-order language or the
rationale for it, and they are unable to explain why or even whether one claim
follows logically from another. Worse, they often come away with a complete
misconception about logic. They leave their first (and only) course in logic
having learned what seem like a bunch of useless formal rules. They gain little
if any understanding about why those rules, rather than some others, were
chosen, and they are unable to take any of what they have learned and apply
it in other fields of rational inquiry or in their daily lives. Indeed, many come
away convinced that logic is both arbitrary and irrelevant. Nothing could be
further from the truth.
The real problem, as we see it, is a failure on the part of logicians to find a
simple way to explain the relationship between meaning and the laws of logic.
In particular, we do not succeed in conveying to students what sentences
in fol mean, or in conveying how the meanings of sentences govern which
methods of inference are valid and which are not. It is this problem we set
out to solve with LPL.
There are two ways to learn a second language. One is to learn how to
translate sentences of the language to and from sentences of your native lan-
guage. The other is to learn by using the language directly. In teaching fol,
the first way has always been the prevailing method of instruction. There are
serious problems with this approach. Some of the problems, oddly enough,
stem from the simplicity, precision, and elegance of fol. This results in a dis-
tracting mismatch between the student’s native language and fol. It forces
students trying to learn fol to be sensitive to subtleties of their native lan-
guage that normally go unnoticed. While this is useful, it often interferes with
the learning of fol. Students mistake complexities of their native tongue for
complexities of the new language they are learning.
In LPL, we adopt the second method for learning fol. Students are given
many tasks involving the language, tasks that help them understand the mean-
ings of sentences in fol. Only then, after learning the basics of the symbolic
language, are they asked to translate between English and fol. Correct trans-
lation involves finding a sentence in the target language whose meaning ap-
proximates, as closely as possible, the meaning of the sentence being trans-
lated. To do this well, a translator must already be fluent in both languages.
We have been using this approach for several years. What allows it to
work is Tarski’s World, one of the computer programs in this package. Tarski’s
World provides a simple environment in which fol can be used in many of
the ways that we use our native language. We provide a large number of
problems and exercises that walk students through the use of the language in
this setting. We build on this in other problems where they learn how to put
the language to more sophisticated uses.
As we said earlier, besides teaching the language fol, we also discuss basic
methods of proof and how to use them. In this regard, too, our approach
is somewhat unusual. We emphasize both informal and formal methods of
proof. We first discuss and analyze informal reasoning methods, the kind
used in everyday life, and then formalize these using a Fitch-style natural
deduction system. The second piece of software that comes with the book,
which we call Fitch, makes it easy for students to learn this formal system
and to understand its relation to the crucial informal methods that will assist
them in other disciplines and in any walk of life.
A word is in order about why we chose a Fitch-style system of deduction,
rather than a more semantically based method like truth trees or semantic
tableau. In our experience, these semantic methods are easy to teach, but
are only really applicable to arguments in formal languages. In contrast, the
important rules in the Fitch system, those involving subproofs, correspond
closely to essential methods of reasoning and proof, methods that can be used
in virtually any context: formal or informal, deductive or inductive, practical
or theoretical. The point of teaching a formal system of deduction is not
so students will use the specific system later in life, but rather to foster an
understanding of the most basic methods of reasoning—methods that they
will use—and to provide a precise model of reasoning for use in discussions of
soundness and completeness.


Tarski’s World also plays a significant role in our discussion of proof, along
with Fitch, by providing an environment for showing that one claim does
not follow from another. With LPL, students learn not just how to prove
consequences of premises, but also the equally important technique of showing
that a given claim does not follow logically from its premises. To do this, they
learn how to give counterexamples, which are really proofs of nonconsequence.
These will often be given using Tarski’s World.
The approach we take in LPL is also unusual in two other respects. One
is our emphasis on languages in which all the basic symbols are assumed to
be meaningful. This is in contrast to the so-called “uninterpreted languages”
(surely an oxymoron) so often found in logic textbooks. Another is the inclu-
sion of various topics not usually covered in introductory logic books. These
include the theory of conversational implicature, material on generalized quan-
tifiers, and most of the material in Part III. We believe that even if these topics
are not covered, their presence in the book illustrates to the student the rich-
ness and open-endedness of the discipline of logic.

Web address

In addition to the book, software, and grading service, additional material can
be found on the Web at the following address:

http://www-csli.stanford.edu/LPL/

Note the dash (-) rather than the more common period (.) after “www” in
this address.

Part I
Propositional Logic

Chapter 1

Atomic Sentences
In the Introduction, we talked about fol as though it were a single language.
Actually, it is more like a family of languages, all having a similar grammar
and sharing certain important vocabulary items, known as the connectives
and quantifiers. Languages in this family can differ, however, in the specific
vocabulary used to form their most basic sentences, the so-called atomic sen-
tences.
Atomic sentences correspond to the most simple sentences of English, sen-
tences consisting of some names connected by a predicate. Examples are Max
ran, Max saw Claire, and Claire gave Scruffy to Max. Similarly, in fol atomic
sentences are formed by combining names (or individual constants, as they
are often called) and predicates, though the way they are combined is a bit
different from English, as you will see.
Different versions of fol have available different names and predicates. We
will frequently use a first-order language designed to describe blocks arranged
on a chessboard, arrangements that you will be able to create in the program
Tarski’s World. This language has names like b, e, and n2 , and predicates
like Cube, Larger, and Between. Some examples of atomic sentences in this
language are Cube(b), Larger(c, f), and Between(b, c, d). These sentences say,
respectively, that b is a cube, that c is larger than f , and that b is between c
and d.
Later in this chapter, we will look at the atomic sentences used in two
other versions of fol, the first-order languages of set theory and arithmetic.
In the next chapter, we begin our discussion of the connectives and quantifiers
common to all first-order languages.

Section 1.1
Individual constants

Individual constants are simply symbols that are used to refer to some fixed
individual object. They are the fol analogue of names, though in fol we
generally don’t capitalize them. For example, we might use max as an individ-
ual constant to denote a particular person, named Max, or 1 as an individual
constant to denote a particular number, the number one. In either case, they
would basically work exactly the way names work in English. Our blocks
language takes the letters a through f plus n1, n2, . . . as its names.


The main difference between names in English and the individual constants
of fol is that we require the latter to refer to exactly one object. Obviously,
the name Max in English can be used to refer to many different people, and
might even be used twice in a single sentence to refer to two different people.
Such wayward behavior is frowned upon in fol.
There are also names in English that do not refer to any actually existing
object. For example Pegasus, Zeus, and Santa Claus are perfectly fine names
in English; they just fail to refer to anything or anybody. We don’t allow such
names in fol.1 What we do allow, though, is for one object to have more than
one name; thus the individual constants matthew and max might both refer
to the same individual. We also allow for nameless objects, objects that have
no name at all.

Remember

In fol,

◦ Every individual constant must name an (actually existing) object.

◦ No individual constant can name more than one object.

◦ An object can have more than one name, or no name at all.
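As an informal aside, these three conditions are easy to picture as a simple mapping from names to objects. Here is a minimal Python sketch (our own illustration, not part of Tarski's World or of fol itself); the constants max, matthew, and claire and the objects they denote are made up for the example.

    # A hypothetical reference map for individual constants: each name
    # denotes exactly one object, two names may denote the same object,
    # and some objects may have no name at all.
    domain = {"Max", "Claire", "Scruffy"}      # Scruffy never gets a name
    referents = {
        "max": "Max",          # every constant names exactly one object
        "matthew": "Max",      # one object can have more than one name
        "claire": "Claire",
    }

    # Every name must refer to an object in the domain; using a dictionary
    # already guarantees that no name refers to more than one object.
    assert all(obj in domain for obj in referents.values())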

Section 1.2
Predicate symbols

Predicate symbols are symbols used to express some property of objects or
some relation between objects. Because of this, they are also sometimes called
relation symbols. As in English, predicates are expressions that, when com-
bined with names, form atomic sentences. But they don’t correspond exactly
to the predicates of English grammar.
Consider the English sentence Max likes Claire. In English grammar, this
is analyzed as a subject-predicate sentence. It consists of the subject Max
followed by the predicate likes Claire. In fol, by contrast, we view this as
a claim involving two “logical subjects,” the names Max and Claire, and a
1 There is, however, a variant of first-order logic called free logic in which this assumption
is relaxed. In free logic, there can be individual constants without referents. This yields a
language more appropriate for mythology and fiction.


predicate, likes, that expresses a relation between the referents of the names.
Thus, atomic sentences of fol often have two or more logical subjects, and the
predicate is, so to speak, whatever is left. The logical subjects are called the
“arguments” of the predicate. In this case, the predicate is said to be binary,
since it takes two arguments.
In English, some predicates have optional arguments. Thus you can say
Claire gave, Claire gave Scruffy, or Claire gave Scruffy to Max. Here the
predicate gave is taking one, two, and three arguments, respectively. But in
fol, each predicate has a fixed number of arguments, a fixed arity as it is
called. This is a number that tells you how many individual constants the
predicate symbol needs in order to form a sentence. The term “arity” comes
from the fact that predicates taking one argument are called unary, those
taking two are binary, those taking three are ternary, and so forth.
If the arity of a predicate symbol Pred is 1, then Pred will be used to
express some property of objects, and so will require exactly one argument (a
name) to make a claim. For example, we might use the unary predicate symbol
Home to express the property of being at home. We could then combine this
with the name max to get the expression Home(max), which expresses the
claim that Max is at home.
If the arity of Pred is 2, then Pred will be used to represent a relation
between two objects. Thus, we might use the expression Taller(claire, max) to
express a claim about Max and Claire, the claim that Claire is taller than
Max. In fol, we can have predicate symbols of any arity. However, in the
blocks language used in Tarski’s World we restrict ourselves to predicates
with arities 1, 2, and 3. Here we list the predicates of that language, this time
with their arity.

Arity 1: Cube, Tet, Dodec, Small, Medium, Large

Arity 2: Smaller, Larger, LeftOf, RightOf, BackOf, FrontOf, SameSize, SameShape, SameRow, SameCol, Adjoins, =

Arity 3: Between
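
As an informal aside, the notion of arity can be made concrete in a few lines of Python. The sketch below (our own illustration, not part of Tarski's World) records the arity of each predicate just listed and accepts an atomic sentence only when it is given exactly that many names.

    # Arities of the blocks language predicates.
    ARITY = {"Cube": 1, "Tet": 1, "Dodec": 1, "Small": 1, "Medium": 1,
             "Large": 1, "Smaller": 2, "Larger": 2, "LeftOf": 2, "RightOf": 2,
             "BackOf": 2, "FrontOf": 2, "SameSize": 2, "SameShape": 2,
             "SameRow": 2, "SameCol": 2, "Adjoins": 2, "=": 2, "Between": 3}

    def well_formed(predicate, names):
        """An atomic sentence needs exactly arity-many arguments."""
        return ARITY.get(predicate) == len(names)

    print(well_formed("Between", ["b", "c", "d"]))   # True
    print(well_formed("Cube", ["b", "c"]))           # False: Cube is unary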

Tarski’s World assigns each of these predicates a fixed interpretation, one
reasonably consistent with the corresponding English verb phrase. For exam-
ple, Cube corresponds to is a cube, BackOf corresponds to is in back of, and
so forth. You can get the hang of them by working through the first set of
exercises given below. To help you learn exactly what the predicates mean,
Table 1.1 lists atomic sentences that use these predicates, together with their
interpretations.
Table 1.1: Blocks language predicates.

Atomic Sentence        Interpretation
Tet(a)                 a is a tetrahedron
Cube(a)                a is a cube
Dodec(a)               a is a dodecahedron
Small(a)               a is small
Medium(a)              a is medium
Large(a)               a is large
SameSize(a, b)         a is the same size as b
SameShape(a, b)        a is the same shape as b
Larger(a, b)           a is larger than b
Smaller(a, b)          a is smaller than b
SameCol(a, b)          a is in the same column as b
SameRow(a, b)          a is in the same row as b
Adjoins(a, b)          a and b are located on adjacent (but not diagonally) squares
LeftOf(a, b)           a is located nearer to the left edge of the grid than b
RightOf(a, b)          a is located nearer to the right edge of the grid than b
FrontOf(a, b)          a is located nearer to the front of the grid than b
BackOf(a, b)           a is located nearer to the back of the grid than b
Between(a, b, c)       a, b and c are in the same row, column, or diagonal, and a is between b and c

In English, predicates are sometimes vague. It is often unclear whether
an individual has the property in question or not. For example, Claire, who
is sixteen, is young. She will not be young when she is 96. But there is no
determinate age at which a person stops being young: it is a gradual sort of
thing. Fol, however, assumes that every predicate is interpreted by a deter-
minate property or relation. By a determinate property, we mean a property
for which, given any object, there is a definite fact of the matter whether or
not the object has the property.
This is one of the reasons we say that the blocks language predicates are
somewhat consistent with the corresponding English predicates. Unlike the
English predicates, they are given very precise interpretations, interpretations
that are suggested by, but not necessarily identical with, the meanings of the
corresponding English phrases. The case where the discrepancy is probably
the greatest is between Between and is between.

Remember

In fol,

◦ Every predicate symbol comes with a single, fixed “arity,” a number
  that tells you how many names it needs to form an atomic sentence.

◦ Every predicate is interpreted by a determinate property or relation
  of the same arity as the predicate.

Section 1.3
Atomic sentences

In fol, the simplest kinds of claims are those made with a single predicate
and the appropriate number of individual constants. A sentence formed by a
predicate followed by the right number of names is called an atomic sentence.
For example Taller(claire, max) and Cube(a) are atomic sentences, provided
the names and predicate symbols in question are part of the vocabulary of
our language. In the case of the identity symbol, we put the two required
names on either side of the predicate, as in a = b. This is called “infix” no-
tation, since the predicate symbol = appears in between its two arguments.
With the other predicates we use “prefix” notation: the predicate precedes
the arguments.
The order of the names in an atomic sentence is quite important. Just
as Claire is taller than Max means something different from Max is taller
than Claire, so too Taller(claire, max) means something completely different
than Taller(max, claire). We have set things up in our blocks language so that
the order of the arguments of the predicates is like that in English. Thus
LeftOf(b, c) means more or less the same thing as the English sentence b is
left of c, and Between(b, c, d) means roughly the same as the English b is
between c and d.
Predicates and names designate properties and objects, respectively. What


makes sentences special is that they make claims (or express propositions).
A claim is something that is either true or false; which of these it is we call
its truth value. Thus Taller(claire, max) expresses a claim whose truth value is
true, while Taller(max, claire) expresses a claim whose truth value is false.
(You probably didn’t know that, but now you do.) Given our assumption
that predicates express determinate properties and that names denote definite
individuals, it follows that each atomic sentence of fol must express a claim
that is either true or false.
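
Here is a minimal sketch, in Python, of how such truth values get determined (our own illustration, not the official machinery of fol or Tarski's World): a "world" fixes what each name denotes and which tuples of objects each predicate is true of, and an atomic sentence is true exactly when the tuple of referents of its names belongs to the predicate's extension.

    # Hypothetical world: referents of names and extensions of predicates.
    referents = {"claire": "Claire", "max": "Max"}
    extensions = {"Taller": {("Claire", "Max")}}     # Claire is taller than Max

    def truth_value(predicate, names):
        args = tuple(referents[n] for n in names)    # look up the referents
        return args in extensions[predicate]         # determinate: in or out

    print(truth_value("Taller", ["claire", "max"]))  # True
    print(truth_value("Taller", ["max", "claire"]))  # False: order matters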

You try it . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1. It is time to try your hand at using Tarski’s World. In this exercise, you
will use Tarski’s World to become familiar with the interpretations of the
atomic sentences of the blocks language. Before starting, though, you need
to learn how to launch Tarski’s World and perform some basic operations.
Read the appropriate sections of the user’s manual describing Tarski’s
World before going on.
2. Launch Tarski’s World and open the files called Wittgenstein’s World and
Wittgenstein’s Sentences. You will find these in the folder TW Exercises. In
these files, you will see a blocks world and a list of atomic sentences. (We
have added comments to some of the sentences. Comments are prefaced
by a semicolon (“;”), which tells Tarski’s World to ignore the rest of the
line.)
3. Move through the sentences using the arrow keys on your keyboard, men-
tally assessing the truth value of each sentence in the given world. Use
the Verify button to check your assessments. (Since the sentences are all
atomic sentences the Game button will not be helpful.) If you are sur-
prised by any of the evaluations, try to figure out how your interpretation
of the predicate differs from the correct interpretation.

4. Next change Wittgenstein’s World in many different ways, seeing what hap-
pens to the truth of the various sentences. The main point of this is to
help you figure out how Tarski’s World interprets the various predicates.
For example, what does BackOf(d, c) mean? Do two things have to be in
the same column for one to be in back of the other?

5. Play around as much as you need until you are sure you understand the
meanings of the atomic sentences in this file. For example, in the original
world none of the sentences using Adjoins comes out true. You should try


to modify the world to make some of them true. As you do this, you will
notice that large blocks cannot adjoin other blocks.
6. In doing this exercise, you will no doubt notice that Between does not mean
exactly what the English between means. This is due to the necessity of
interpreting Between as a determinate predicate. For simplicity, we insist
that in order for b to be between c and d, all three must be in the same
row, column, or diagonal.
7. When you are finished, close the files, but do not save the changes you
have made to them.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations

Remember

In fol,

◦ Atomic sentences are formed by putting a predicate of arity n in front
  of n names (enclosed in parentheses and separated by commas).

◦ Atomic sentences are built from the identity predicate, =, using infix
notation: the arguments are placed on either side of the predicate.

◦ The order of the names is crucial in forming atomic sentences.

Exercises

You will eventually want to read the entire chapter of the user’s manual on how to use Tarski’s World. To
do the following problems, you will need to read at least the first four sections. Also, if you don’t remember
how to name and submit your solution files, you should review the section on essential instructions in
the Introduction, starting on page 5.

1.1 If you skipped the You try it section, go back and do it now. This is an easy but crucial
exercise that will familiarize you with the atomic sentences of the blocks language. There is
nothing you need to turn in or submit, but don’t skip the exercise!

1.2 (Copying some atomic sentences) This exercise will give you some practice with the Tarski’s
➶ World keyboard window, as well as with the syntax of atomic sentences. The following are all
atomic sentences of our language. Start a new sentence file and copy them into it. Have Tarski’s
World check each formula after you write it to see that it is a sentence. If you make a mistake,
edit it before going on. Make sure you use the Add Sentence command between sentences,


not the return key. If you’ve done this correctly, the sentences in your list will be numbered
and separated by horizontal lines.
1. Tet(a)
2. Medium(a)
3. Dodec(b)
4. Cube(c)
5. FrontOf(a, b)
6. Between(a, b, c)
7. a = d
8. Larger(a, b)
9. Smaller(a, c)
10. LeftOf(b, c)

Remember, you should save these sentences in a file named Sentences 1.2. When you’ve finished
your first assignment, submit all of your solution files using the Submit program.

1.3 (Building a world) Build a world in which all the sentences in Exercise 1.2 are simultaneously
➶ true. Remember to name and submit your world file as World 1.3.

1.4 (Translating atomic sentences) Here are some simple sentences of English. Start a new sentence
➶ file and translate them into fol.
1. a is a cube.
2. b is smaller than a.
3. c is between a and d.
4. d is large.
5. e is larger than a.
6. b is a tetrahedron.
7. e is a dodecahedron.
8. e is right of b.
9. a is smaller than e.
10. d is in back of a.
11. b is in the same row as d.
12. b is the same size as c.

After you’ve translated the sentences, build a world in which all of your translations are true.
Submit your sentence and world files as Sentences 1.4 and World 1.4.

1.5 (Naming objects) Open Lestrade’s Sentences and Lestrade’s World. You will notice that none of
➶ the objects in this world has a name. Your task is to assign the objects names in such a way
that all the sentences in the list come out true. Remember to save your solution in a file named
World 1.5. Be sure to use Save World As. . . , not Save World.


1.6 (Naming objects, continued) Not all of the choices in Exercise 1.5 were forced on you. That
➶? is, you could have assigned the names differently and still had the sentences come out true.
Change the assignment of as many names as possible while still making all the sentences true,
and submit the changed world as World 1.6. In order for us to compare your files, you must
submit both World 1.5 and World 1.6 at the same time.
1.7 (Context sensitivity of predicates) We have stressed the fact that fol assumes that every
➶|✎ predicate is interpreted by a determinate relation, whereas this is not the case in natural
languages like English. Indeed, even when things seem quite determinate, there is often some
form of context sensitivity. In fact, we have built some of this into Tarski’s World. Consider,
for example, the difference between the predicates Larger and BackOf. Whether or not cube a is
larger than cube b is a determinate matter, and also one that does not vary depending on your
perspective on the world. Whether or not a is back of b is also determinate, but in this case it
does depend on your perspective. If you rotate the world by 90◦ , the answer might change.
Open Austin’s Sentences and Wittgenstein’s World. Evaluate the sentences in this file and
tabulate the resulting truth values in a table like the one below. We’ve already filled in the first
column, showing the values in the original world. Rotate the world 90◦ clockwise and evaluate
the sentences again, adding the results to the table. Repeat until the world has come full circle.
Original Rotated 90◦ Rotated 180◦ Rotated 270◦
1. false
2. false
3. true
4. false
5. true
6. false

You should be able to think of an atomic sentence in the blocks language that would produce
the following pattern:

true false true false

Add a seventh sentence to Austin’s Sentences that would display the above pattern.
Are there any atomic sentences in the blocks language that would produce this pattern?

false true false false

If so, add such a sentence as sentence eight in Austin’s Sentences. If not, leave sentence eight
blank.
Are there any atomic sentences that would produce a row in the table containing exactly
three true’s? If so, add such a sentence as number nine. If not, leave sentence nine blank.
Submit your modified sentence file as Sentences 1.7. Turn in your completed table to your
instructor.


Section 1.4
General first-order languages

First-order languages differ in the names and predicates they contain, and so in
the atomic sentences that can be formed. What they share are the connectives
and quantifiers that enable us to build more complex sentences from these
simpler parts. We will get to those common elements in later chapters.
When you translate a sentence of English into fol, you will sometimes
have a “predefined” first-order language that you want to use, like the blocks
language of Tarski’s World, or the language of set theory or arithmetic de-
scribed later in this chapter. If so, your goal is to come up with a translation
that captures the meaning of the original English sentence as nearly as pos-
sible, given the names and predicates available in your predefined first-order
language.
Other times, though, you will not have a predefined language to use for
your translation. If not, the first thing you have to do is decide what names and
predicates you need for your translation. In effect, you are designing, on the fly,
a new first-order language capable of expressing the English sentence you want
to translate. We’ve been doing this all along, for example when we introduced
Home(max) as the translation of Max is at home and Taller(claire, max) as the
translation of Claire is taller than Max.
When you make these decisions, there are often alternative ways to go.
For example, suppose you were asked to translate the sentence Claire gave
Scruffy to Max. You might introduce a binary predicate GaveScruffy(x, y),
meaning x gave Scruffy to y, and then translate the original sentence as
GaveScruffy(claire, max). Alternatively, you might introduce a three-place pred-
icate Gave(x, y, z), meaning x gave y to z, and then translate the sentence as
Gave(claire, scruffy, max).
There is nothing wrong with either of these predicates, or their resulting
translations, so long as you have clearly specified what the predicates mean.
Of course, they may not be equally useful when you go on to translate other
sentences. The first predicate will allow you to translate sentences like Max
gave Scruffy to Evan and Evan gave Scruffy to Miles. But if you then run into
the sentence Max gave Carl to Claire, you would be stuck, and would have
to introduce an entirely new predicate, say, GaveCarl(x, y). The three-place
predicate is thus more flexible. A first-order language that contained it (plus
the relevant names) would be able to translate any of these sentences.
In general, when designing a first-order language we try to economize on
the predicates by introducing more flexible ones, like Gave(x, y, z), rather than


less flexible ones, like GaveScruffy(x, y) and GaveCarl(x, y). This produces a
more expressive language, and one that makes the logical relations between
various claims more perspicuous.
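
One informal way to see this extra flexibility is the following Python sketch (our own illustration, with a made-up extension for Gave): anything expressible with the two-place predicate is definable from the three-place one, but not conversely.

    # Hypothetical extension: the only giving on record is Claire giving
    # Scruffy to Max.
    def Gave(giver, gift, recipient):
        return (giver, gift, recipient) in {("claire", "scruffy", "max")}

    def GaveScruffy(giver, recipient):
        # The two-place predicate is definable from the three-place one.
        return Gave(giver, "scruffy", recipient)

    print(GaveScruffy("claire", "max"))    # True
    print(Gave("max", "carl", "claire"))   # a claim GaveScruffy cannot express
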
Names can be introduced into a first-order language to refer to anything
that can be considered an object. But we construe the notion of an “object”
pretty flexibly—to cover anything that we can make claims about. We’ve al-
ready seen languages with names for people and the blocks of Tarski’s World.
Later in the chapter, we’ll introduce languages with names for sets and num-
bers. Sometimes we will want to have names for still other kinds of “objects,”
like days or times. Suppose, for example, that we want to translate the sen-
tences:

Claire gave Scruffy to Max on Saturday.


Sunday, Max gave Scruffy to Evan.

Here, we might introduce a four-place predicate Gave(w, x, y, z), meaning w
gave x to y on day z, plus names for particular days, like last Saturday and
last Sunday. The resulting translations would look something like this:

Gave(claire, scruffy, max, saturday)


Gave(max, scruffy, evan, sunday)

Designing a first-order language with just the right names and predicates
requires some skill. Usually, the overall goal is to come up with a language
that can say everything you want, but that uses the smallest “vocabulary”
possible. Picking the right names and predicates is the key to doing this.

Exercises

1.8 Suppose we have two first-order languages: the first contains the binary predicates
✎ GaveScruffy(x, y) and GaveCarl(x, y), and the names max and claire; the second contains the
ternary predicate Gave(x, y, z) and the names max, claire, scruffy, and carl.
1. List all of the atomic sentences that can be expressed in the first language. (Some of
these may say weird things like GaveScruffy(claire, claire), but don’t worry about that.)
2. How many atomic sentences can be expressed in the second language? (Count all of
them, including odd ones like Gave(scruffy, scruffy, scruffy).)
3. How many names and binary predicates would a language like the first need in order
to say everything you can say in the second?


Table 1.2: Names and predicates for a language.

English                      fol              Comment

Names:
Max                          max
Claire                       claire
Folly                        folly            The name of a certain dog.
Carl                         carl             The name of another dog.
Scruffy                      scruffy          The name of a certain cat.
Pris                         pris             The name of another cat.
2 pm, Jan 2, 2001            2:00             The name of a time.
2:01 pm, Jan 2, 2001         2:01             One minute later.
...                          ...              Similarly for other times.

Predicates:
x is a pet                   Pet(x)
x is a person                Person(x)
x is a student               Student(x)
t is earlier than t′         t < t′           Earlier-than for times.
x was hungry at time t       Hungry(x, t)
x was angry at time t        Angry(x, t)
x owned y at time t          Owned(x, y, t)
x gave y to z at t           Gave(x, y, z, t)
x fed y at time t            Fed(x, y, t)

1.9 We will be giving a number of problems that use the symbols explained in Table 1.2. Start a
➶ new sentence file in Tarski’s World and translate the following into fol, using the names and
predicates listed in the table. (You will have to type the names and predicates in by hand.
Make sure you type them exactly as they appear in the table; for example, use 2:00, not 2:00
pm or 2 pm.) All references to times are assumed to be to times on January 2, 2001.
1. Claire owned Folly at 2 pm.
2. Claire gave Pris to Max at 2:05 pm.
3. Max is a student.
4. Claire fed Carl at 2 pm.
5. Folly belonged to Max at 3:05 pm.
6. 2:00 pm is earlier than 2:05 pm.

Name and submit your file in the usual way.


1.10 Translate the following into natural sounding, colloquial English, consulting Table 1.2.
✎ 1. Owned(max, scruffy, 2:00)
2. Fed(max, scruffy, 2:30)
3. Gave(max, scruffy, claire, 3:00)
4. 2:00 < 2:00
1.11 For each sentence in the following list, suggest a translation into an atomic sentence of fol. In
✎? addition to giving the translation, explain what kinds of objects your names refer to and the
intended meaning of the predicate you use.
1. Max shook hands with Claire.
2. Max shook hands with Claire yesterday.
3. AIDS is less contagious than influenza.
4. Spain is between France and Portugal in size.
5. Misery loves company.

Section 1.5
Function symbols
Some first-order languages have, in addition to names and predicates, other
expressions that can appear in atomic sentences. These expressions are called
function symbols. Function symbols allow us to form name-like terms from
names and other name-like terms. They allow us to express, using atomic
sentences, complex claims that could not be perspicuously expressed using
just names and predicates. Some English examples will help clarify this.
English has many sorts of noun phrases, expressions that can be combined
with a verb phrase to get a sentence. Besides names like Max and Claire,
other noun phrases include expressions like Max’s father, Claire’s mother,
Every girl who knows Max, No boy who knows Claire, Someone and so forth.
Each of these combines with a singular verb phrase such as likes unbuttered
popcorn to make a sentence. But notice that the sentences that result have
very different logical properties. For example,
Claire’s mother likes unbuttered popcorn
implies that someone likes unbuttered popcorn, while
No boy who knows Claire likes unbuttered popcorn
does not.
Since these noun phrases have such different logical properties, they are
treated differently in fol. Those that intuitively refer to an individual are


called “terms,” and behave like the individual constants we have already dis-
cussed. In fact, individual constants are the simplest terms, and more complex
terms are built from them using function symbols. Noun phrases like No boy
who knows Claire are handled with very different devices, known as quanti-
fiers, which we will discuss later.
The fol analog of the noun phrase Max’s father is the term father(max).
It is formed by putting a function symbol, father, in front of the individual
constant max. The result is a complex term that we use to refer to the father
of the person referred to by the name max. Similarly, we can put the function
symbol mother together with the name claire and get the term mother(claire),
which functions pretty much like the English term Claire’s mother.
We can repeat this construction as many times as we like, forming more
and more complex terms:
father(father(max))
mother(father(claire))
mother(mother(mother(claire)))
The first of these refers to Max’s paternal grandfather, the second to Claire’s
paternal grandmother, and so forth.
These function symbols are called unary function symbols, because, like
unary predicates, they take one argument. The resulting terms function just
like names, and can be used in forming atomic sentences. For instance, the
fol sentence
Taller(father(max), max)
says that Max’s father is taller than Max. Thus, in a language containing
function symbols, the definition of atomic sentence needs to be modified to
allow complex terms to appear in the argument positions in addition to names.
Students often confuse function symbols with predicates, because both
take terms as arguments. But there is a big difference. When you combine a
unary function symbol with a term you do not get a sentence, but another
term: something that refers (or should refer) to an object of some sort. This is
why function symbols can be reapplied over and over again. As we have seen,
the following makes perfectly good sense:
father(father(max))
This, on the other hand, is total nonsense:
Dodec(Dodec(a))
To help prevent this confusion, we will always capitalize predicates of fol and
leave function symbols and names in lower case.


Besides unary function symbols, fol allows function symbols of any ar-
ity. Thus, for example, we can have binary function symbols. Simple English
counterparts of binary function symbols are hard to come up with, but they
are quite common in mathematics. For instance, we might have a function
symbol sum that combines with two terms, t1 and t2 , to give a new term,
sum(t1 , t2 ), which refers to the sum of the numbers referred to by t1 and t2 .
Then the complex term sum(3, 5) would give us another way of referring to
8. In a later section, we will introduce a function symbol to denote addition,
but we will use infix notation, rather than prefix notation. Thus 3 + 5 will be
used instead of sum(3, 5).
In fol, just as we assume that every name refers to an actual object,
we also assume that every complex term refers to exactly one object. This
is a somewhat artificial assumption, since many function-like expressions in
English don’t always work this way. Though we may assume that

mother(father(father(max)))

refers to an actual (deceased) individual—one of Max’s great-grandmothers—


there may be other uses of these function symbols that don’t seem to give
us genuinely referring expressions. For example, perhaps the complex terms
mother(adam) and mother(eve) fail to refer to any individuals, if Adam and Eve
were in fact the first people. And certainly the complex term mother(3) doesn’t
refer to anything, since the number three has no mother. When designing a
first-order language with function symbols, you should try to ensure that your
complex terms always refer to unique, existing individuals.
The blocks world language as it is implemented in Tarski’s World does not
contain function symbols, but we could easily extend the language to include
some. Suppose for example we introduced the function expressions fm, bm, lm
and rm, that allowed us to form complex terms like:

fm(a)
lm(bm(c))
rm(rm(fm(d)))

We could interpret these function symbols so that, for example, fm(a)
refers to the frontmost block in the same column as a. Thus, if there are
several blocks in the column with a, then fm(a) refers to the one closest to
you. If a is the only block in the column, or is the frontmost in its column,
then fm(a) would refer to a. Analogously, bm, lm and rm could be interpreted
to mean backmost, leftmost and rightmost, respectively.
With this interpretation, the term lm(bm(c)) would refer to the leftmost
block in the same row as the backmost block in the same column as c. The


atomic sentence Larger(lm(bm(c)), c) would then be true if and only if this
block is larger than c.
Notice that in this expanded language, the sentence lm(bm(c)) = bm(lm(c))
is not always true. (Can you think of an example where it is false?) On the
other hand, fm(fm(a)) = fm(a) is always true. Can you think of any other
atomic sentences using these function symbols that are always true? How
about sentences that are always false?
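
Since Tarski's World itself has no function symbols, the following Python sketch is purely hypothetical (our own illustration, with made-up block positions): it shows one way fm could be interpreted, and why fm(fm(a)) = fm(a) always comes out true.

    # Block positions as (column, row) pairs; smaller row = nearer the front.
    position = {"a": (2, 5), "b": (2, 1), "c": (4, 3)}

    def fm(block):
        """The frontmost block in the same column as the given block."""
        col = position[block][0]
        same_col = [x for x in position if position[x][0] == col]
        return min(same_col, key=lambda x: position[x][1])

    print(fm("a"))                   # 'b': the frontmost block in a's column
    print(fm(fm("a")) == fm("a"))    # True: applying fm twice changes nothing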

Remember

In a language with function symbols,

◦ Complex terms are typically formed by putting a function symbol of
  arity n in front of n terms (simple or complex).

◦ Complex terms are used just like names (simple terms) in forming
atomic sentences.

◦ In fol, complex terms are assumed to refer to one and only one object.

Exercises

1.12 Express in English the claims made by the following sentences of fol as clearly as you can.
✎ You should try to make your English sentences as natural as possible. All the sentences are,
by the way, true.
1. Taller(father(claire), father(max))
2. john = father(max)
3. Taller(claire, mother(mother(claire)))
4. Taller(mother(mother(max)), mother(father(max)))
5. mother(melanie) = mother(claire)
1.13 Assume that we have expanded the blocks language to include the function symbols fm, bm, lm
✎ and rm described earlier. Then the following formulas would all be sentences of the language:
1. Tet(lm(e))
2. fm(c) = c
3. bm(b) = bm(e)
4. FrontOf(fm(e), e)
5. LeftOf(fm(b), b)


6. SameRow(rm(c), c)
7. bm(lm(c)) = lm(bm(c))
8. SameShape(lm(b), bm(rm(e)))
9. d = lm(fm(rm(bm(d))))
10. Between(b, lm(b), rm(b))

Fill in the following table with true’s and false’s according to whether the indicated sentence
is true or false in the indicated world. Since Tarski’s World does not understand the function
symbols, you will not be able to check your answers. We have filled in a few of the entries for
you. Turn in the completed table to your instructor.

Leibniz’s Bolzano’s Boole’s Wittgenstein’s


1. true
2.
3.
4.
5. false
6. true
7.
8. false
9.
10.

1.14 As you probably noticed in doing Exercise 1.13, three of the sentences came out true in all
➶ four worlds. It turns out that one of these three cannot be falsified in any world, because of
the meanings of the predicates and function symbols it contains. Your goal in this problem is
to build a world in which all of the other sentences in Exercise 1.13 come out false. When you
have found such a world, submit it as World 1.14.

1.15 Suppose we have two first-order languages for talking about fathers. The first, which we’ll
✎ call the functional language, contains the names claire, melanie, and jon, the function symbol
father, and the predicates = and Taller. The second language, which we will call the relational
language, has the same names, no function symbols, and the binary predicates =, Taller, and
FatherOf, where FatherOf(c, b) means that c is the father of b. Translate the following atomic
sentences from the relational language into the functional language. Be careful. Some atomic
sentences, such as claire = claire, are in both languages! Such a sentence counts as a translation
of itself.
1. FatherOf(jon, claire)
2. FatherOf(jon, melanie)


3. Taller(claire, melanie)

Which of the following atomic sentences of the functional language can be translated into atomic
sentences of the relational language? Translate those that can be and explain the problem with
those that can’t.

4. father(melanie) = jon
5. father(melanie) = father(claire)
6. Taller(father(claire), father(jon))

When we add connectives and quantifiers to the language, we will be able to translate freely
back and forth between the functional and relational languages.

1.16 Let’s suppose that everyone has a favorite movie star. Given this assumption, make up a first-
✎ order language for talking about people and their favorite movie stars. Use a function symbol
that allows you to refer to an individual’s favorite actor, plus a relation symbol that allows
you to say that one person is a better actor than another. Explain the interpretation of your
function and relation symbols, and then use your language to express the following claims:
1. Harrison is Nancy’s favorite actor.
2. Nancy’s favorite actor is better than Sean.
3. Nancy’s favorite actor is better than Max’s.
4. Claire’s favorite actor’s favorite actor is Brad.
5. Sean is his own favorite actor.

1.17 Make up a first-order language for talking about people and their relative heights. Instead of
✎? using relation symbols like Taller, however, use a function symbol that allows you to refer to
people’s heights, plus the relation symbols = and <. Explain the interpretation of your function
symbol, and then use your language to express the following two claims:
1. George is taller than Sam.
2. Sam and Mary are the same height.

Do you see any problem with this function symbol? If so, explain the problem. [Hint: What
happens if you apply the function symbol twice?]

1.18 For each sentence in the following list, suggest a translation into an atomic sentence of fol. In
✎? addition to giving the translation, explain what kinds of objects your names refer to and the
intended meaning of the predicates and function symbols you use.
1. Indiana’s capital is larger than California’s.
2. Hitler’s mistress died in 1945.
3. Max shook Claire’s father’s hand.
4. Max is his father’s son.
5. John and Nancy’s eldest child is younger than Jon and Mary Ellen’s.


Section 1.6
The first-order language of set theory

Fol was initially developed for use in mathematics, and consequently the
most familiar first-order languages are those associated with various branches
of mathematics. One of the most common of these is the language of set
theory. This language has only two predicates, both binary. The first is the
identity symbol, =, which we have already encountered, and the second is the
symbol ∈, for set membership.
It is standard to use infix notation for both of these predicates. Thus, in
set theory, atomic sentences are always formed by placing individual constants
on either side of one of the two predicates. This allows us to make identity
claims, of the form a = b, and membership claims, of the form a ∈ b (where
a and b are individual constants).
A sentence of the form a ∈ b is true if and only if the thing named by b is
a set, and the thing named by a is a member of that set. For example, suppose
a names the number 2 and b names the set {2, 4, 6}. Then the following table
tells us which membership claims made up using these names are true and
which are false.2
a ∈ a false
a ∈ b true
b ∈ a false
b ∈ b false
Notice that there is one striking difference between the atomic sentences
of set theory and the atomic sentences of the blocks language. In the blocks
language, you can have a sentence, like LeftOf(a, b), that is true in a world,
but which can be made false simply by moving one of the objects. Moving
an object does not change the way the name works, but it can turn a true
sentence into a false one, just as the sentence Claire is sitting down can go
from true to false in virtue of Claire’s standing up.
In set theory, we won’t find this sort of thing happening. Here, the analog
of a world is just a domain of objects and sets. For example, our domain
might consist of all natural numbers, sets of natural numbers, sets of sets of
natural numbers, and so forth. The difference between these “worlds” and
those of Tarski’s World is that the truth or falsity of the atomic sentences is
determined entirely once the reference of the names is fixed. There is nothing
that corresponds to moving the blocks around. Thus if the universe contains
2 For the purposes of this discussion we are assuming that numbers are not sets, and that
sets can contain either numbers or other sets as members.


the objects 2 and {2, 4, 6}, and if the names a and b are assigned to them,
then the atomic sentences must get the values indicated in the previous table.
The only way those values can change is if the names name different things.
Identity claims also work this way, both in set theory and in Tarski’s World.
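
For readers who like to experiment, here is a small Python sketch of the membership table above (our own illustration), using frozensets to stand in for sets and integers for numbers.

    a = 2                      # the name a denotes the number 2
    b = frozenset({2, 4, 6})   # the name b denotes the set {2, 4, 6}

    def member(x, y):
        """x ∈ y is true iff y is a set and x is one of its members."""
        return isinstance(y, frozenset) and x in y

    print(member(a, a), member(a, b), member(b, a), member(b, b))
    # False True False False -- exactly the values in the table above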

Exercises

1.19 Which of the following atomic sentences in the first-order language of set theory are true
➶ and which are false? We use, in addition to a and b as above, the name c for 6 and d for
{2, 7, {2, 4, 6}}.
1. a ∈ c
2. a ∈ d
3. b ∈ c
4. b ∈ d
5. c ∈ d
6. c ∈ b

To answer this exercise, submit a Tarski’s World sentence file with an uppercase T or F in each
sentence slot to indicate your assessment.

Section 1.7
The first-order language of arithmetic

While neither the blocks language as implemented in Tarski’s World nor the
language of set theory has function symbols, there are languages that use
them extensively. One such first-order language is the language of arithmetic.
This language allows us to express statements about the natural numbers
0, 1, 2, 3, . . . , and the usual operations of addition and multiplication.
There are several more or less equivalent ways of setting up this language.
The one we will use has two names, 0 and 1, two binary relation symbols, =
and <, and two binary function symbols, + and ×. The atomic sentences are
those that can be built up out of these symbols. We will use infix notation
both for the relation symbols and the function symbols.
Notice that there are infinitely many different terms in this language (for
example, 0, 1, (1 + 1), ((1 + 1) + 1), (((1 + 1) + 1) + 1), . . . ), and so an infinite
number of atomic sentences. Our list also shows that every natural number is
named by some term of the language. This raises the question of how we can
specify the set of terms in a precise way. We can’t list them all explicitly, since


there are too many. The way we get around this is by using what is known as
an inductive definition.
Definition The terms of first-order arithmetic are formed in the following
way:
1. The names 0, 1 are terms.

2. If t1 , t2 are terms, then the expressions (t1 + t2 ) and (t1 × t2 ) are also
terms.
3. Nothing is a term unless it can be obtained by repeated application of
(1) and (2).

We should point out that this definition does indeed allow the function
symbols to be applied over and over. Thus, (1 + 1) is a term by clause 2 and
the fact that 1 is a term. In that case, ((1 + 1) × (1 + 1)) is also a term, again
by clause 2. And so forth.
The third clause in the above definition is not as straightforward as one
might want, since the phrase “can be obtained by repeated application of” is
a bit vague. In Chapter 16, we will see how to give definitions like the above
in a more satisfactory way, one that avoids this vague clause.
The atomic sentences in the language of first-order arithmetic are those
that can be formed from the terms and the two binary predicate symbols, =
and <. So, for example, the fol version of 1 times 1 is less than 1 plus 1 is
the following:
(1 × 1) < (1 + 1)
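
To make the inductive definition of terms concrete, here is a small Python sketch (our own illustration, representing a term as the string "0" or "1" or as a nested tuple (op, t1, t2), with "*" standing for ×): a recursive check of clauses (1) and (2), together with a function computing the number a term refers to.

    def is_term(t):
        if t in ("0", "1"):                               # clause 1
            return True
        return (isinstance(t, tuple) and len(t) == 3 and t[0] in ("+", "*")
                and is_term(t[1]) and is_term(t[2]))      # clause 2

    def value(t):
        """The natural number a term refers to."""
        if t == "0": return 0
        if t == "1": return 1
        op, t1, t2 = t
        return value(t1) + value(t2) if op == "+" else value(t1) * value(t2)

    one_plus_one = ("+", "1", "1")
    print(is_term(one_plus_one), value(one_plus_one))      # True 2
    print(value(("*", "1", "1")) < value(one_plus_one))    # (1 × 1) < (1 + 1): True
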

Exercises

1.20 Show that the following expressions are terms in the first-order language of arithmetic. Do this
✎ by explaining which clauses of the definition are applied and in what order. What numbers do
they refer to?
1. (0 + 0)
2. (0 + (1 × 0))
3. ((1 + 1) + ((1 + 1) × (1 + 1)))
4. (((1 × 1) × 1) × 1)

1.21 Find a way to express the fact that three is less than four using the first-order language of
✎    arithmetic.

1.22 Show that there are infinitely many terms in the first-order language of arithmetic referring
✎?   to the number one.


Section 1.8
Alternative notation

As we said before, fol is like a family of languages. But, as if that were not
enough diversity, even the very same first-order language comes in a variety
of dialects. Indeed, almost no two logic books use exactly the same notational
conventions in writing first-order sentences. For this reason, it is important
to have some familiarity with the different dialects—the different notational
conventions—and to be able to translate smoothly between them. At the end
of most chapters, we discuss common notational differences that you are likely
to encounter.
Some notational differences, though not many, occur even at the level of
atomic sentences. For example, some authors insist on putting parentheses
around atomic sentences whose binary predicates are in infix position. So
(a = b) is used rather than a = b. By contrast, some authors omit parentheses
surrounding the argument positions (and the commas between them) when
the predicate is in prefix position. These authors use Rab instead of R(a, b).
We have opted for the latter simply because we use predicates made up of
several letters, and the parentheses make it clear where the predicate ends
and the arguments begin: Cubed is not nearly as perspicuous as Cube(d).
What is important in these choices is that sentences should be unambigu-
ous and easy to read. Typically, the first aim requires parentheses to be used in
one way or another, while the second suggests using no more than is necessary.

Chapter 2

The Logic of Atomic Sentences


A major concern in logic is the concept of logical consequence: When does
one sentence, statement, or claim follow logically from others? In fact, one of
the main motivations in the design of fol was to make logical consequence
as perspicuous as possible. It was thought that by avoiding the ambiguities
and complexities of ordinary language, we would be able to recognize the
consequences of our claims more easily. This is, to a certain extent, true; but
it is also true that we should be able to recognize the consequences of our
claims whether or not they are expressed in fol.
In this chapter, we will explain what we mean by “logical consequence,” or
equivalently, what we mean when we say that an argument is “logically valid.”
This is a fairly simple concept to understand, but it can also be devilishly dif-
ficult to apply in specific cases. Indeed, in mathematics there are many, many
examples where we do not know whether a given claim is a consequence of
other known truths. Mathematicians may work for years or decades trying
to answer such questions. After explaining the notion of consequence, we will
describe the principal techniques for showing that a claim is or is not a con-
sequence of other claims, and begin presenting what is known as a formal
system of deduction, a system that allows us to show that a sentence of fol is
a consequence of others. We will continue developing this system as we learn
more about fol in later chapters.

Section 2.1
Valid and sound arguments
Just what do we mean by logical consequence? Or rather, since this phrase
is sometimes used in quite different contexts, what does a logician mean by
logical consequence?
A few examples will help. First, let’s say that an argument is any series
of statements in which one (called the conclusion) is meant to follow from, or
be supported by, the others (called the premises). Don’t think of two people
arguing back and forth, but of one person trying to convince another of some
conclusion on the basis of mutually accepted premises. Arguments in our
sense may appear as part of the more disagreeable sort of “arguments”—
the kind parents have with their children—but our arguments also appear


in newspaper editorials, in scholarly books, and in all forms of scientific and
rational discourse. Name calling doesn’t count.
There are many devices in ordinary language for indicating premises and
conclusions of arguments. Words like hence, thus, so, and consequently are
used to indicate that what follows is the conclusion of an argument. The words
because, since, after all, and the like are generally used to indicate premises.
Here are a couple of examples of arguments:

All men are mortal. Socrates is a man. So, Socrates is mortal.

Lucretius is a man. After all, Lucretius is mortal and all men are
mortal.

One difference between these two arguments is the placement of the con-
clusion. In the first argument, the conclusion comes at the end, while in the
second, it comes at the start. This is indicated by the words so and after all,
respectively. A more important difference is that the first argument is good,
while the second is bad. We will say that the first argument is logically valid,
or that its conclusion is a logical consequence of its premises. The reason we
say this is that it is impossible for this conclusion to be false if the premises are
true. In contrast, our second conclusion might be false (suppose Lucretius is
my pet goldfish), even though the premises are true (goldfish are notoriously
mortal). The second conclusion is not a logical consequence of its premises.
Roughly speaking, an argument is logically valid if and only if the conclu-
sion must be true on the assumption that the premises are true. Notice that
this does not mean that an argument’s premises have to be true in order for it
to be valid. When we give arguments, we naturally intend the premises to be
true, but sometimes we’re wrong about that. We’ll say more about this possi-
bility in a minute. In the meantime, note that our first example above would
be a valid argument even if it turned out that we were mistaken about one
of the premises, say if Socrates turned out to be a robot rather than a man.
It would still be impossible for the premises to be true and the conclusion
false. In that eventuality, we would still say that the argument was logically
valid, but since it had a false premise, we would not be guaranteed that the
conclusion was true. It would be a valid argument with a false premise.
Here is another example of a valid argument, this time one expressed in
the blocks language. Suppose we are told that Cube(c) and that c = b. Then it
certainly follows that Cube(b). Why? Because there is no possible way for the
premises to be true—for c to be a cube and for c to be the very same object
as b—without the conclusion being true as well. Note that we can recognize
that the last statement is a consequence of the first two without knowing that


the premises are actually, as a matter of fact, true. For the crucial observation
is that if the premises are true, then the conclusion must also be true.
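
The idea behind this kind of check can be illustrated with a small Python sketch (our own, and only an illustration: it searches worlds with at most two objects, whereas genuine validity concerns every possible circumstance). It looks for a world in which Cube(c) and c = b are true while Cube(b) is false, and finds none.

    from itertools import product

    objects = [0, 1]
    found_counterexample = False
    for ref_b, ref_c in product(objects, repeat=2):       # what b and c name
        for cube0, cube1 in product([False, True], repeat=2):
            cube = {0: cube0, 1: cube1}                   # which objects are cubes
            premises = cube[ref_c] and (ref_c == ref_b)   # Cube(c) and c = b
            conclusion = cube[ref_b]                      # Cube(b)
            if premises and not conclusion:
                found_counterexample = True
    print(found_counterexample)   # False: premises true always forces Cube(b)
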
A valid argument is one that guarantees the truth of its conclusion on
the assumption that the premises are true. Now, as we said before, when we
actually present arguments, we want them to be more than just valid: we also
want the premises to be true. If an argument is valid and the premises are also
true, then the argument is said to be sound. Thus a sound argument ensures
the truth of its conclusion. The argument about Socrates given above was not
only valid, it was sound, since its premises were true. (He was not, contrary
to rumors, a robot.) But here is an example of a valid argument that is not
sound:
All rich actors are good actors. Brad Pitt is a rich actor. So he must
be a good actor.
The reason this argument is unsound is that its first premise is false.
Because of this, although the argument is indeed valid, we are not assured
that the conclusion is true. It may be, but then again it may not. We in fact
think that Brad Pitt is a good actor, but the present argument does not show
this.
Logic focuses, for the most part, on the validity of arguments, rather than
their soundness. There is a simple reason for this. The truth of an argument’s
premises is generally an issue that is none of the logician’s business: the truth
of “Socrates is a man” is something historians had to ascertain; the falsity of
“All rich actors are good actors” is something a movie critic might weigh in
about. What logicians can tell you is how to reason correctly, given what you
know or believe to be true. Making sure that the premises of your arguments
are true is something that, by and large, we leave up to you.
In this book, we often use a special format to display arguments, which we
call “Fitch format” after the logician Frederic Fitch. The format makes clear
which sentences are premises and which is the conclusion. In Fitch format, we
would display the above, unsound argument like this:

All rich actors are good actors.
Brad Pitt is a rich actor.
--------------------------------
Brad Pitt is a good actor.

Here, the sentences above the short, horizontal line are the premises, and
the sentence below the line is the conclusion. We call the horizontal line the
Fitch bar. Notice that we have omitted the words "So . . . must be . . ." in the
conclusion, because they were in the original only to make clear which sen-
tence was supposed to be the conclusion of the argument. In our conventional


format, the Fitch bar gives us this information, and so these words are no
longer needed.

Remember

1. An argument is a series of statements in which one, called the conclu-
   sion, is meant to be a consequence of the others, called the premises.

2. An argument is valid if the conclusion must be true in any circum-
   stance in which the premises are true. We say that the conclusion of
a logically valid argument is a logical consequence of its premises.

3. An argument is sound if it is valid and the premises are all true.

Exercises

2.1 (Classifying arguments) Open the file Socrates’ Sentences. This file contains eight arguments
➶|✎ separated by dashed lines, with the premises and conclusion of each labeled.
1. In the first column of the following table, classify each of these arguments as valid or
invalid. In making these assessments, you may presuppose any general features of the
worlds that can be built in Tarski’s World (for example, that two blocks cannot occupy
the same square on the grid).

Argument    Valid?    Sound in Socrates’ World?    Sound in Wittgenstein’s World?
1.
2.
3.
4.
5.
6.
7.
8.

2. Now open Socrates’ World and evaluate each sentence. Use the results of your evaluation
to enter sound or unsound in each row of the second column in the table, depending on
whether the argument is sound or unsound in this world. (Remember that only valid
arguments can be sound; invalid arguments are automatically unsound.)


3. Open Wittgenstein’s World and fill in the third column of the table.
4. For each argument that you have marked invalid in the table, construct a world in
which the argument’s premises are all true but the conclusion is false. Submit the
world as World 2.1.x, where x is the number of the argument. (If you have trouble
doing this, you may want to rethink your assessment of the argument’s validity.) Turn
in your completed table to your instructor.

This problem makes a very important point, one that students of logic sometimes forget. The
point is that the validity of an argument depends only on the argument, not on facts about
the specific world the statements are about. The soundness of an argument, on the other hand,
depends on both the argument and the world.

2.2 (Classifying arguments) For each of the arguments below, identify the premises and conclusion
✎ by putting the argument into Fitch format. Then say whether the argument is valid. For the
first five arguments, also give your opinion about whether they are sound. (Remember that
only valid arguments can be sound.) If your assessment of an argument depends on particular
interpretations of the predicates, explain these dependencies.
1. Anyone who wins an academy award is famous. Meryl Streep won an academy award.
Hence, Meryl Streep is famous.
2. Harrison Ford is not famous. After all, actors who win academy awards are famous,
and he has never won one.
3. The right to bear arms is the most important freedom. Charlton Heston said so, and
he’s never wrong.
4. Al Gore must be dishonest. After all, he’s a politician and hardly any politicians are
honest.
5. Mark Twain lived in Hannibal, Missouri, since Sam Clemens was born there, and Mark
Twain is Sam Clemens.
6. No one under 21 bought beer here last night, officer. Geez, we were closed, so no one
bought anything last night.
7. Claire must live on the same street as Laura, since she lives on the same street as Max
and he and Laura live on the same street.
2.3 For each of the arguments below, identify the premises and conclusion by putting the argument
✎ into Fitch format, and state whether the argument is valid. If your assessment of an argument
depends on particular interpretations of the predicates, explain these dependencies.
1. Many of the students in the film class attend film screenings. Consequently, there must
be many students in the film class.
2. There are few students in the film class, but many of them attend the film screenings.
So there are many students in the film class.


3. There are many students in the film class. After all, many students attend film screen-
ings and only students in the film class attend screenings.
4. There are thirty students in my logic class. Some of the students turned in their
homework on time. Most of the students went to the all-night party. So some student
who went to the party managed to turn in the homework on time.
5. There are thirty students in my logic class. Some student who went to the all-night
party must have turned in the homework on time. Some of the students turned in their
homework on time, and they all went to the party.
6. There are thirty students in my logic class. Most of the students turned in their home-
work on time. Most of the students went to the all-night party. Thus, some student
who went to the party turned in the homework on time.

2.4 (Validity and truth) Can a valid argument have false premises and a false conclusion? False
✎ premises and a true conclusion? True premises and a false conclusion? True premises and a
true conclusion? If you answer yes to any of these, give an example of such an argument. If
your answer is no, explain why.

Section 2.2
Methods of proof

Our description of the logical consequence relation is fine, as far as it goes.
But it doesn’t give us everything we would like. In particular, it does not tell
us how to show that a given conclusion S follows, or does not follow, from
some premises P, Q, R, . . . . In the examples we have looked at, this may not
seem very problematic, since the answers are fairly obvious. But when we are
dealing with more complicated sentences or more subtle reasoning, things are
sometimes far from simple.
In this course you will learn the fundamental methods of showing when
claims follow from other claims and when they do not. The main technique
for doing the latter, for showing that a given conclusion does not follow from
some premises, is to find a possible circumstance in which the premises are
true but the conclusion false. In fact we have already used this method to
show that the argument about Lucretius was invalid. We will use the method
repeatedly, and introduce more precise versions of it as we go on.
What methods are available to us for showing that a given claim is a
logical consequence of some premises? Here, the key notion is that of a proof.
A proof is a step-by-step demonstration that a conclusion (say S) follows
from some premises (say P, Q, R). The way a proof works is by establishing a
series of intermediate conclusions, each of which is an obvious consequence of
the original premises and the intermediate conclusions previously established.


The proof ends when we finally establish S as an obvious consequence of the
original premises and the intermediate conclusions. For example, from P, Q, R
it might be obvious that S1 follows. And from all of these, including S1 , it
might be obvious that S2 follows. Finally, from all these together we might
be able to draw our desired conclusion S. If our individual steps are correct,
then the proof shows that S is indeed a consequence of P, Q, R. After all, if
the premises are all true, then our intermediate conclusions must be true as
well. And in that case, our final conclusion must be true, too.
Consider a simple, concrete example. Suppose we want to show that Socrates
sometimes worries about dying is a logical consequence of the four premises
Socrates is a man, All men are mortal, No mortal lives forever, and Everyone
who will eventually die sometimes worries about it. A proof of this conclusion
might pass through the following intermediate steps. First we note that from
the first two premises it follows that Socrates is mortal. From this intermedi-
ate conclusion and the third premise (that no mortal lives forever), it follows
that Socrates will eventually die. But this, along with the fourth premise,
gives us the desired conclusion, that Socrates sometimes worries about dying.
By the way, when we say that S is a logical consequence of premises
P, Q, . . . , we do not insist that each of the premises really play an essen-
tial role. So, for example, if S is a logical consequence of P then it is also a
logical consequence of P and Q. This follows immediately from the definition
of logical consequence. But it has a corollary for our notion of proof: We do
not insist that each of the premises in a proof actually be used in the proof.
A proof that S follows from premises P1 , . . . , Pn may be quite long and
complicated. But each step in the proof is supposed to provide absolutely
incontrovertible evidence that the intermediate conclusion follows from things
already established. Here, the logician’s standards of rigor are extreme. It is
not enough to show that each step in a purported proof almost certainly
follows from the ones that come before. That may be good enough for getting
around in our daily life, but it is not good enough if our aim is to demonstrate
that S must be true provided P1 , . . . , Pn are all true.
There is a practical reason for this demand for rigor. In ordinary life,
we frequently reason in a step-by-step fashion, without requiring absolute
certainty at each step. For most purposes, this is fine, since our everyday
“proofs” generally traverse only a small number of intermediate conclusions.
But in many types of reasoning, this is not the case.
Think of what you did in high school geometry. First you started with a
small number of axioms that stated the basic premises of Euclidean geometry.
You then began to prove conclusions, called theorems, from these axioms. As
you went on to prove more interesting theorems, your proofs would cite earlier
theorems. These earlier theorems were treated as intermediate conclusions in
justifying the new results. What this means is that the complete proofs of
the later theorems really include the proofs of the earlier theorems that they
presuppose. Thus, if they were written out in full, they would contain hundreds
or perhaps thousands of steps. Now suppose we only insisted that each step
show with probability .99 that the conclusion follows from the premises. Then
each step in such a proof would be a pretty good bet, but given a long enough
proof, the proof would carry virtually no weight at all about the truth of the
conclusion.
This demand for certainty becomes even more important in proofs done by
computers. Nowadays, theorems are sometimes proven by computers, and the
proofs can be millions of steps long. If we allowed even the slightest uncertainty
in the individual steps, then this uncertainty would multiply until the alleged
“proof” made the truth of the conclusion no more likely than its falsity.
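To get a feel for how quickly this kind of uncertainty compounds, here is a
quick back-of-the-envelope calculation in Python (our illustration, not part of
the text or its software); it simply multiplies out a .99-per-step confidence for
proofs of various lengths, treating the steps as independent.

    # If each step is only 99% certain, a chain of n steps is at best
    # 0.99**n certain (assuming the uncertainties are independent).
    for n in [10, 100, 1000, 1_000_000]:
        print(n, 0.99 ** n)
    # 10 steps: about 0.90; 100 steps: about 0.37; 1000 steps: about 0.00004;
    # a million steps: the result underflows to 0.0 in floating point.

Even a modest proof by the standards of mathematics or of computer-checked
proofs would, on these terms, tell us essentially nothing about its conclusion.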
Each time we introduce new types of expressions into our language, we will
discuss new methods of proof supported by those expressions. We begin by
discussing the main informal methods of proof used in mathematics, science,
and everyday life, emphasizing the more important methods like indirect and
conditional proof. Following this discussion we will “formalize” the methods
by incorporating them into what we call a formal system of deduction. A
formal system of deduction uses a fixed set of rules specifying what counts as
an acceptable step in a proof.
The difference between an informal proof and a formal proof is not one of
rigor, but of style. An informal proof of the sort used by mathematicians is
every bit as rigorous as a formal proof. But it is stated in English and is usu-
ally more free-wheeling, leaving out the more obvious steps. For example, we
could present our earlier argument about Socrates in the form of the following
informal proof:

Proof: Since Socrates is a man and all men are mortal, it follows
that Socrates is mortal. But all mortals will eventually die, since
that is what it means to be mortal. So Socrates will eventually die.
But we are given that everyone who will eventually die sometimes
worries about it. Hence Socrates sometimes worries about dying.

A formal proof, by contrast, employs a fixed stock of rules and a highly styl-
ized method of presentation. For example, the simple argument from Cube(c)
and c = b to Cube(b) discussed in the last section will, in our formal system,
take the following form:


1. Cube(c)
2. c = b
3. Cube(b) = Elim: 1, 2

As you can see, we use an extension of the Fitch format as a way of presenting
formal proofs. The main difference is that a formal proof will usually have more
than one step following the Fitch bar (though not in this example), and each
of these steps will be justified by citing a rule of the formal system. We will
explain later the various conventions used in formal proofs.
In the course of this book you will learn how to give both informal and
formal proofs. We do not want to give the impression that formal proofs are
somehow better than informal proofs. On the contrary, for purposes of proving
things for ourselves, or communicating proofs to others, informal methods are
usually preferable. Formal proofs come into their own in two ways. One is that
they display the logical structure of a proof in a form that can be mechanically
checked. There are advantages to this, if you are a logic teacher grading lots
of homework, a computer, or not inclined to think for some other reason. The
other is that they allow us to prove things about provability itself, such as
Gödel’s Completeness Theorem and Incompleteness Theorems, discussed in
the final section of this book.

Remember

1. A proof of a statement S from premises P1 , . . . , Pn is a step-by-step
demonstration which shows that S must be true in any circumstances
in which the premises P1 , . . . , Pn are all true.

2. Informal and formal proofs differ in style, not in rigor.

Proofs involving the identity symbol


We have already seen one example of an important method of proof. If we can
prove, from whatever our premises happen to be, that b = c, then we know
that anything that is true of b is also true of c. After all, b is c. In philosophy,
this simple observation sometimes goes by the fancy name the indiscernibility
of identicals and sometimes by the less pretentious name substitution. Shake-
speare no doubt had this principle in mind when he wrote “A rose, by any
other name, would smell as sweet.”
We will call the formal rule corresponding to this principle Identity Elimi-
nation, abbreviated = Elim. The reason for this name is that an application
of this rule “eliminates” a use of the identity symbol when we move from the
premises of the argument to its conclusion. We will have another rule that
introduces the identity symbol.
The principle of identity elimination is used repeatedly in mathematics.
For example, the following derivation uses the principle in conjunction with
the well-known algebraic identity x² − 1 = (x − 1)(x + 1):

x² > x² − 1
so
x² > (x − 1)(x + 1)
We are all familiar with reasoning that uses such substitutions repeatedly.
Another principle, so simple that one often overlooks it, is the so-called
reflexivity of identity. The formal rule corresponding to it is called Identity
Introduction, or = Intro, since it allows us to introduce identity statements
into proofs. It tells us that any sentence of the form a = a can be validly
inferred from whatever premises are at hand, or from no premises at all. This
is because of the assumption made in fol that names always refer to one and
only one object. This is not true about English, as we have noted before. But
it is in fol, which means that in a proof you can always take any name a
that is in use and assert a = a, if it suits your purpose for some reason. (As a
matter of fact, it is rarely of much use.) Gertrude Stein was surely referring
to this principle when she observed “A rose is a rose is a rose.”
Another principle, a bit more useful, is that of the symmetry of identity. It
allows us to conclude b = a from a = b. Actually, if we wanted, we could derive
this as a consequence of our first two principles, by means of the following
proof.
Proof: Suppose that a = b. We know that a = a, by the reflexivity
of identity. Now substitute the name b for the first use of the name
a in a = a, using the indiscernibility of identicals. We come up with
b = a, as desired.

The previous paragraph is another example of an informal proof. In an
informal proof, we often begin by stating the premises or assumptions of the
proof, and then explain in a step-by-step fashion how we can get from these
assumptions to the desired conclusion. There are no strict rules about how
detailed the explanation needs to be. This depends on the sophistication of
the intended audience for the proof. But each step must be phrased in clear
and unambiguous English, and the validity of the step must be apparent. In
the next section, we will see how to formalize the above proof.


A third principle about identity that bears noting is its so-called transitivity.
If a = b and b = c are both true, then so is a = c. This is so obvious that there
is no particular need to prove it, but it can be proved using the indiscernibility
of identicals. (See Exercise 2.5.)
If you are using a language that contains function symbols (introduced
in the optional Section 1.5), the identity principles we’ve discussed also hold
for complex terms built up using function symbols. For example, if you know
that Happy(john) and john = father(max), you can use identity elimination to
conclude Happy(father(max)), even though father(max) is a complex term, not
a name. In fact, the example where we substituted (x − 1)(x + 1) for x2 − 1
also applied the indiscernibility of identicals to complex terms.

Remember

There are four important principles that hold of the identity relation:

1. = Elim: If b = c, then whatever holds of b holds of c. This is also
known as the indiscernibility of identicals.

2. = Intro: Sentences of the form b = b are always true (in fol). This
is also known as the reflexivity of identity.

3. Symmetry of Identity: If b = c, then c = b.

4. Transitivity of Identity: If a = b and b = c, then a = c.

The latter two principles follow from the first two.

Proofs involving other predicates and relations


Sometimes there will be logical dependencies among the predicate symbols in
a first-order language, dependencies similar to those just discussed involving
the identity symbol. This is the case, for example, in the blocks language.
When this is so, proofs may need to exploit such relationships. For example,
the sentence Larger(a, c) is a consequence of Larger(a, b) and Larger(b, c). This
is because the larger-than relation, like the identity relation, is transitive. It is
because of this that any world where the latter two sentences are true will also
be one in which the first is true. Similar examples are given in the problems.
Another example of this sort that is used frequently in mathematics in-
volves the transitivity of the less-than relation. You frequently encounter
proofs written in the following form:

k1 < k2
k2 < k3
k3 < k4
so
k1 < k4
This proof contains two implicit uses of the transitivity of <.
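Written out with those uses made explicit (our gloss, not the book’s), the
reasoning is: from k1 < k2 and k2 < k3, transitivity gives k1 < k3; from
k1 < k3 and k3 < k4, a second application of transitivity gives k1 < k4.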
There is no way to catalog all the legitimate inferences involving predicate
and relation symbols in all the languages we might have occasion to deal with.
But the example of identity gives us a few things to look for. Many relations
besides identity are transitive: larger than and less than are just two examples.
And many are reflexive and/or symmetric: being the same size as and being in
the same row as are both. But you will run across other logical dependencies
that don’t fall under these headings. For instance, you might be told that b
is larger than c and want to infer that c is smaller than b. This holds because
larger than and smaller than are what is known as “inverses”: they refer to
the same relation but point, so to speak, in opposite directions. Usually, you
will have no trouble spotting the logical dependencies among predicates, but
in giving a proof, you need to make explicit the ones you are assuming.
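For readers who like to experiment, the following small Python sketch (ours,
not part of the text or its software) checks whether a relation, represented as
a set of ordered pairs over a finite domain, is reflexive, symmetric, or transitive,
and whether two relations are inverses of one another.

    def reflexive(rel, domain):
        # every object is related to itself
        return all((x, x) in rel for x in domain)

    def symmetric(rel):
        # whenever x bears the relation to y, y bears it to x
        return all((y, x) in rel for (x, y) in rel)

    def transitive(rel):
        # whenever x R y and y R z, we also have x R z
        return all((x, z) in rel
                   for (x, y1) in rel for (y2, z) in rel if y1 == y2)

    def inverses(rel1, rel2):
        # rel2 relates y to x exactly when rel1 relates x to y
        return rel2 == {(y, x) for (x, y) in rel1}

    # A tiny example: larger than and smaller than on three blocks.
    larger = {("a", "b"), ("b", "c"), ("a", "c")}
    smaller = {(y, x) for (x, y) in larger}
    print(transitive(larger), inverses(larger, smaller))   # True True

Of course, such a check only covers the finitely many objects listed; recognizing
that a predicate must behave this way in every world is the logical observation
at issue here.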
Let’s look at one final example before trying our hand at some exercises.
Suppose we were asked to give an informal proof of the following argument:

RightOf(b, c)
LeftOf(d, e)
b=d
LeftOf(c, e)

Our informal proof might run like this:

Proof: We are told that b is to the right of c. So c must be to the
left of b, since right of and left of are inverses of one another. And
since b = d, c is left of d, by the indiscernibility of identicals. But we
are also told that d is left of e, and consequently c is to the left of e,
by the transitivity of left of. This is our desired conclusion.


Exercises

2.5 (Transitivity of Identity) Give an informal proof of the following argument
✎ using only indiscernibility of identicals. Make sure you say which name is
being substituted for which, and in what sentence.

b = c
a = b
a = c

2.6 Give an informal proof that the following argument is valid. If you proved
✎ the transitivity of identity by doing Exercise 2.5, you may use this principle;
otherwise, use only the indiscernibility of identicals.

SameRow(a, a)
a = b
b = c
SameRow(c, a)

2.7 Consider the following sentences.



1. Max and Claire are not related.
2. Nancy is Max’s mother.
3. Nancy is not Claire’s mother.

Does (3) follow from (1) and (2)? Does (2) follow from (1) and (3)? Does (1) follow from (2) and
(3)? In each case, if your answer is no, describe a possible circumstance in which the premises
are true and the conclusion false.

Given the meanings of the atomic predicates in the blocks language, assess the following arguments for
validity. (You may again assume any general facts about the worlds that can be built in Tarski’s World.)
If the argument is valid, give an informal proof of its validity and turn it in on paper to your instructor.
If the conclusion is not a consequence of the premises, submit a world in which the premises are true
and the conclusion false.

2.8 Large(a)
➶|✎ Larger(a, c)
Small(c)

2.9 LeftOf(a, b)
➶|✎ b = c
RightOf(c, a)

2.10 SameSize(b, c)
➶|✎ SameShape(b, c)
b = c

2.11 LeftOf(a, b)
➶|✎ RightOf(c, a)
LeftOf(b, c)

2.12 BackOf(a, b)
➶|✎ FrontOf(a, c)
FrontOf(b, c)

2.13 SameSize(a, b)
➶|✎ Larger(a, c)
Smaller(d, c)
Smaller(d, b)


2.14 Between(b, a, c)
➶|✎ LeftOf(a, c)
LeftOf(a, b)

Section 2.3
Formal proofs

In this section we will begin introducing our system for presenting formal
proofs, what is known as a “deductive system.” There are many different
styles of deductive systems. The system we present in the first three parts of
the book, which we will call F, is a “Fitch-style” system, so called because
Frederic Fitch first introduced this format for giving proofs. We will look at
a very different deductive system in Part IV, one known as the resolution
method, which is of considerable importance in computer science.
In the system F, a proof of a conclusion S from premises P, Q, and R, looks
very much like an argument presented in Fitch format. The main difference is
that the proof displays, in addition to the conclusion S, all of the intermediate
conclusions S1 , . . . , Sn that we derive in getting from the premises to the
conclusion S:

P
Q
R
S1 Justification 1
⋮ ⋮
Sn Justification n
S Justification n+1

There are two graphical devices to notice here, the vertical and horizontal
lines. The vertical line that runs on the left of the steps draws our attention
to the fact that we have a single purported proof consisting of a sequence
of several steps. The horizontal Fitch bar indicates the division between the
claims that are assumed and those that allegedly follow from them. Thus the
fact that P, Q, and R are above the bar shows that these are the premises of
our proof, while the fact that S1 , . . . , Sn , and S are below the bar shows that
these sentences are supposed to follow logically from the premises.


Notice that on the right of every step below the Fitch bar, we give a
justification of the step. In our deductive system, a justification indicates
which rule allows us to make the step, and which earlier steps (if any) the rule
is applied to. In giving an actual formal proof, we will number the steps, so
we can refer to them in justifying later steps.
We already gave one example of a formal proof in the system F, back on
page 48. For another example, here is a formalization of our informal proof of
the symmetry of identity.

1. a = b
2. a = a = Intro
3. b = a = Elim: 2, 1

In the right hand margin of this proof you find a justification for each step
below the Fitch bar. These are applications of rules we are about to introduce.
The numbers at the right of step 3 show that this step follows from steps 2
and 1 by means of the rule cited.
The first rule we use in the above proof is Identity Introduction. This
rule allows you to introduce, for any name (or complex term) n in use in
the proof, the assertion n = n. You are allowed to do this at any step in the
proof, and need not cite any earlier step as justification. We will abbreviate
our statement of this rule in the following way:

Identity Introduction (= Intro):

▹ n = n

We have used an additional graphical device in stating this rule. This is
the symbol ▹. We will use it in stating rules to indicate which step is being
licensed by the rule. In this example there is only one step mentioned in the
rule, but in other examples there will be several steps.
The second rule of F is Identity Elimination. It tells us that if we have
proven a sentence containing n (which we indicate by writing P(n)) and a
sentence of the form n = m, then we are justified in asserting any sentence
which results from P(n) by replacing some or all of the occurrences of n by m.


Identity Elimination (= Elim):

P(n)
⋮
n = m
⋮
▹ P(m)

When we apply this rule, it does not matter which of P(n) and n = m occurs
first in the proof, as long as they both appear before P(m), the inferred step.
In justifying the step, we cite the name of the rule, followed by the steps in
which P(n) and n = m occur, in that order.
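For instance (our illustration): given Larger(a, a) and a = b, the rule licenses
any of Larger(b, a), Larger(a, b), and Larger(b, b), depending on whether we
replace the first, the second, or both occurrences of a.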
We could also introduce rules justified by the meanings of other predicates
besides = into the system F . For example, we could introduce a formal rule
of the following sort:

Bidirectionality of Between:

Between(a, b, c)
⋮
▹ Between(a, c, b)

We don’t do this because there are just too many such rules. We could state
them for a few predicates, but certainly not all of the predicates you will
encounter in first-order languages.
There is one rule that is not technically necessary, but which will make
some proofs look more natural. This rule is called Reiteration, and simply
allows you to repeat an earlier step, if you so desire.

Reiteration (Reit):

P
⋮
▹ P

To use the Reiteration rule, just repeat the sentence in question and, on the
right, write “Reit: x,” where x is the number of the earlier occurrence of the
sentence.


Reiteration is obviously a valid rule of inference, since any sentence is a
logical consequence of itself. The reason for having the rule will become clear
as proofs in the system F become more complicated. For now, let’s just say
that it is like remarking, in the course of giving an informal proof, “we have
already shown that P.” This is often a helpful reminder to the person reading
the proof.
Now that we have the first three rules of F, let’s try our hand at construct-
ing a formal proof. Suppose we were asked to prove SameRow(b, a) from the
premises SameRow(a, a) and b = a. We might begin by writing down the
premises and the conclusion, leaving space in between to fill in the inter-
mediate steps in our proof.

1. SameRow(a, a)
2. b = a
⋮
?. SameRow(b, a)

It might at first seem that this proof should be a one step application of
= Elim. But notice that the way we have stated this rule requires that we
replace the first name in the identity sentence, b, for the second, a, but we
want to substitute the other way around. So we need to derive a = b as an
intermediate conclusion before we can apply = Elim.

1. SameRow(a, a)
2. b = a
⋮
?. a = b
?. SameRow(b, a) = Elim: 1, ?

Since we have already seen how to prove the symmetry of identity, we can
now fill in all the steps of the proof. The finished proof looks like this. Make
sure you understand why all the steps are there and how we arrived at them.

1. SameRow(a, a)
2. b = a
3. b = b = Intro
4. a = b = Elim: 3, 2
5. SameRow(b, a) = Elim: 1, 4


Section 2.4
Constructing proofs in Fitch

Writing out a long formal proof in complete detail, let alone reading or check-
ing it, can be a pretty tedious business. The system F makes this less painful
than many formal systems, but it’s still not easy. This book comes with a sec-
ond program, Fitch, that makes constructing formal proofs much less painful.
Fitch can also check your proof, telling you whether it is correct, and if it isn’t,
which step or steps are mistaken. This means you will never be in any doubt
about whether your formal proofs meet the standard of rigor demanded of
them. And, as a practical matter, you can make sure they are correct before
submitting them.
There are other ways in which Fitch makes life simpler, as well. One is that
Fitch is more flexible than the system F. It lets you take certain shortcuts
that are logically correct but do not, strictly speaking, fall under the rules of
F. You can always go back and expand a proof in Fitch to a formally correct
F proof, but we won’t often insist on this.
Let us now use Fitch to construct a simple formal proof. Before going on,
you will want to read the first few sections of the chapter on how to use Fitch
in the manual.

You try it
................................................................
1. We are going to use Fitch to construct the formal proof of SameRow(b, a)
from premises SameRow(a, a) and b = a. Launch Fitch and open the file
Identity 1. Here we have the beginnings of the formal proof. The premises
appear above the Fitch bar. It may look slightly different from the proofs
we have in the book, since in Fitch the steps don’t have to be numbered,
for reasons we’ll soon find out. (If you would like to have numbered steps,
you can choose Show Step Numbers from the Proof menu. But don’t
try this yet.)
2. Before we start to construct the proof, notice that at the bottom of the
proof window there is a separate pane called the “goal strip,” contain-
ing the goal of the proof. In this case the goal is to prove the sentence
SameRow(b, a). If we successfully satisfy this goal, we will be able to get
Fitch to put a checkmark to the right of the goal.
3. Let’s construct the proof. What we need to do is fill in the steps needed
to complete the proof, just as we did at the end of the last section. Add
a new step to the proof by choosing Add Step After from the Proof
menu. In the new step, enter the sentence a = b, either by typing it in or
by using the toolbar at the top of the proof window. We will first use this
step to get our conclusion and then go back and prove this step.

4. Once you have entered a = b, add another step below this and enter the
goal sentence SameRow(b, a). Use the mouse to click on the word Rule?
that appears to the right of SameRow(b, a). In the menu that pops up, go
to the Elimination Rules and select =. If you did this right, the rule name
should now say = Elim. If not, try again.

5. Next cite the first premise and the intermediate sentence you first entered.
You do this in Fitch by clicking on the two sentences, in either order. If
you click on the wrong one, just click again and it will be un-cited. Once
you have the right sentences cited, choose Verify Proof from the Proof
menu. The last step should now check out, as it is a valid instance of =
Elim. The step containing a = b will not check out, since we haven’t yet
indicated what it follows from. Nor will the goal check out, since we don’t
yet have a complete proof of SameRow(b, a). All in good time.

6. Now add a step before the first introduced step (the one containing a = b),
and enter the sentence b = b. Do this by moving the focus slider (the
triangle in the left margin) to the step containing a = b and choosing
Add Step Before from the Proof menu. (If the new step appears in
the wrong place, choose Delete Step from the Proof menu.) Enter the
sentence b = b and justify it by using the rule = Intro. Check the step.

7. Finally, justify the step containing a = b by using the = Elim rule. You
will need to move the focus slider to this step, and then cite the second
premise and the sentence b = b. Now the whole proof, including the goal,
should check out. To find out if it does, choose Verify Proof from the
Proof menu. The proof should look like the completed proof on page 57,
except for the absence of numbers on the steps. (Try out Show Step
Numbers from the Proof menu now. The highlighting on support steps
will go away and numbers will appear, just like in the book.)

8. We mentioned earlier that Fitch lets you take some shortcuts, allowing
you to do things in one step that would take several if we adhered strictly
to F . This proof is a case in point. We have constructed a proof that falls
under F but Fitch actually has symmetry of identity built into = Elim.
So we could prove the conclusion directly from the two premises, using a
single application of the rule = Elim. We’ll do this next.


9. Add another step at the very end of your proof. Here’s a trick you will find
handy: Click on the goal sentence at the very bottom of the window. This
puts the focus on the goal sentence. Choose Copy from the Edit menu,
and then click back on the empty step at the end of your proof. Choose
Paste from the Edit menu and the goal sentence will be entered into this
step. This time, justify the new step using = Elim and citing just the two
premises. You will see that the step checks out.
10. Save your proof as Proof Identity 1.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Since the proof system F does not have any rules for atomic predicates
other than identity, neither does Fitch. However, Fitch does have a mecha-
nism that, among other things, lets you check for consequences among atomic
sentences that involve many of the predicates in the blocks world language.1
This is a rule we call Analytic Consequence or Ana Con for short. Ana
Con is not restricted to atomic sentences, but that is the only application
of the rule we will discuss at the moment. This rule allows you to cite some
sentences in support of a claim if any world that makes the cited sentences
true also makes the conclusion true, given the meaning of the predicates as
used in Tarski’s World. Let’s get a feeling for Ana Con with some examples.

You try it
................................................................
1. Use Fitch to open the file Ana Con 1. In this file you will find nine premises
followed by six conclusions that are consequences of these premises. Indeed,
each of the conclusions follows from three or fewer of the premises.

2. Position the focus slider (the little triangle) at the first conclusion following
the Fitch bar, SameShape(c, b). We have invoked the rule Ana Con but
we have not cited any sentences. This conclusion follows from Cube(b) and
Cube(c). Cite these sentences and check the step.

3. Now move the focus slider to the step containing SameRow(b, a). Since
the relation of being in the same row is symmetric and transitive, this
follows from SameRow(b, c) and SameRow(a, c). Cite these two sentences
and check the step.
1 This mechanism does not handle the predicates Adjoins and Between, due to the
complexity of the ways the meanings of these predicates interact with the others.


4. The third conclusion, BackOf(e, c), follows from three of the premises. See
if you can find them. Cite them. If you get it wrong, Fitch will give you
an X when you try to check the step.

5. Now fill in the citations needed to make the fourth and fifth conclusions
check out. For these, you will have to invoke the Ana Con rule yourself.
(You will find the rule on the Con submenu of the Rule? popup.)

6. The final conclusion, SameCol(b, b), does not require that any premises be
cited in support. It is simply an analytic truth, that is, true in virtue of
its meaning. Specify the rule and check this step.

7. When you are done, choose Verify Proof to see that all the goals check
out. Save your work as Proof Ana Con 1.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
The Ana Con mechanism is not really a rule, technically speaking, though
we will continue to call it that since it appears on the Rule? menu in Fitch.
This mechanism, along with the two others appearing on the Con submenu,
apply complicated procedures to see whether the sentence in question follows
from the cited sentences. As we will explain later, these three items try to find
proofs of the sentence in question “behind the scenes,” and then give you a
checkmark if they succeed. The proof they find may in fact apply many, many
different rules in getting from the cited steps to the target sentence.
The main difference you will run into between the genuine rules in Fitch
and the mechanisms appearing on the Con menu is that the latter “rules”
will sometimes fail even though your step is actually correct. With the genuine
rules, Fitch will always give your step either a checkmark or an X, depending
on whether the rule is applied correctly. But with the Con mechanisms, Fitch
will sometimes try to find a proof of the target sentence but fail. In these
cases, Fitch will give the step a question mark rather than a check or an X,
since there might be a complicated proof that it just couldn’t find.
To mark the difference between the genuine rules of F and the three con-
sequence mechanisms, Fitch displays the rule names in green and the conse-
quence mechanisms in blue. Because the Con mechanisms look for a proof
behind the scenes, we will often ask you not to use them in giving solutions to
homework problems. After all, the point is not to have Fitch do your home-
work for you! In the following problems, you should only use the Ana Con
rule if we explicitly say you can. To see whether a problem allows you to use
any of the Con mechanisms, double click on the goal or choose See Goal
Constraints from the Goal menu.


Remember

The deductive system you will be learning is a Fitch-style deductive sys-
tem, named F. The computer application that assists you in constructing
proofs in F is therefore called Fitch. If you write out your proofs on paper,
you are using the system F, but not the program Fitch.

Exercises

2.15 If you skipped the You try it sections, go back and do them now. Submit the files Proof
➶ Identity 1 and Proof Ana Con 1.

2.16 Use Fitch to give a formal version of the informal proof you gave in Exercise 2.5. Remember,
➶ you will find the problem setup in the file Exercise 2.16. You should begin your proof from this
saved file. Save your completed proof as Proof 2.16.

In the following exercises, use Fitch to construct a formal proof that the conclusion is a consequence of
the premises. Remember, begin your proof by opening the corresponding file, Exercise 2.x, and save your
solution as Proof 2.x. We’re going to stop reminding you.

2.17 SameCol(a, b)
➶ b = c
c = d
SameCol(a, d)

2.18 Between(a, d, b)
➶ a = c
e = b
Between(c, d, e)

2.19 Smaller(a, b)
➶ Smaller(b, c)
Smaller(a, c)

You will need to use Ana Con in this proof. This proof shows that the predicate
Smaller in the blocks language is transitive.

2.20 RightOf(b, c)
➶ LeftOf(d, e)
b = d
LeftOf(c, e)

Make your proof parallel the informal proof we gave on page 52, using both an
identity rule and Ana Con (where necessary).


Section 2.5
Demonstrating nonconsequence

Proofs come in a variety of different forms. When a mathematician proves
a theorem, or when a prosecutor proves a defendant’s guilt, they are show-
ing that a particular claim follows from certain accepted information, the
information they take as given. This kind of proof is what we call a proof of
consequence, a proof that a particular piece of information must be true if the
given information, the premises of the argument, are correct.
A very different, but equally important kind of proof is a proof of nonconse-
quence. When a defense attorney shows that the crime might have been com-
mitted by someone other than the client, say by the butler, the attorney is
trying to prove that the client’s guilt does not follow from the evidence in the
case. When mathematicians show that the parallel postulate is not a conse-
quence of the other axioms of Euclidean geometry, they are doing the same
thing: they are showing that it would be possible for the claim in question (the
parallel postulate) to be false, even if the other information (the remaining
axioms) is true.
We have introduced a few methods for demonstrating the validity of an
argument, for showing that its conclusion is a consequence of its premises. We
will be returning to this topic repeatedly in the chapters that follow, adding
new tools for demonstrating consequence as we add new expressions to our
language. In this section, we discuss the most important method for demon-
strating nonconsequence, that is, for showing that some purported conclusion
is not a consequence of the premises provided in the argument.
Recall that logical consequence was defined in terms of the validity of
arguments. An argument is valid if every possible circumstance that makes
the premises of the argument true also makes the conclusion true. Put the
other way around, the argument is invalid if there is some circumstance that
makes the premises true but the conclusion false. Finding such a circumstance
is the key to demonstrating nonconsequence.
To show that a sentence Q is not a consequence of premises P1, . . . , Pn ,
we must show that the argument with premises P1 , . . . , Pn and conclusion Q
is invalid. This requires us to demonstrate that it is possible for P1 , . . . , Pn to
be true while Q is simultaneously false. That is, we must show that there is
a possible situation or circumstance in which the premises are all true while
the conclusion is false. Such a circumstance is said to be a counterexample to
the argument.
Informal proofs of nonconsequence can resort to many ingenious ways for
showing the existence of a counterexample. We might simply describe what is
clearly a possible situation, one that makes the premises true and the conclu-
sion false. This is the technique used by defense attorneys, who hope to create
a reasonable doubt that their client is guilty (the prosecutor’s conclusion) in
spite of the evidence in the case (the prosecution’s premises). We might draw
a picture of such a situation or build a model out of Lego blocks or clay.
We might act out a situation. Anything that clearly shows the existence of a
counterexample is fair game.
Recall the following argument from an earlier exercise.

Al Gore is a politician.
Hardly any politicians are honest.
Al Gore is dishonest.

If the premises of this argument are true, then the conclusion is likely. But
still the argument is not valid: the conclusion is not a logical consequence of
the premises. How can we see this? Well, imagine a situation where there are
10,000 politicians, and that Al Gore is the only honest one of the lot. In such
circumstances both premises would be true but the conclusion would be false.
Such a situation is a counterexample to the argument; it demonstrates that
the argument is invalid.
What we have just given is an informal proof of nonconsequence. Are
there such things as formal proofs of nonconsequence, similar to the formal
proofs of validity constructed in F? In general, no. But we will define the
notion of a formal proof of nonconsequence for the blocks language used in
Tarski’s World. These formal proofs of nonconsequence are simply stylized
counterparts of informal counterexamples.
For the blocks language, we will say that a formal proof that Q is not a
consequence of P1 , . . . , Pn consists of a sentence file with P1 , . . . , Pn labeled
as premises, Q labeled as conclusion, and a world file that makes each of
P1 , . . . , Pn true and Q false. The world depicted in the world file will be called
the counterexample to the argument in the sentence file.

You try it
................................................................
1. Launch Tarski’s World and open the sentence file Bill’s Argument. This
argument claims that Between(b, a, d) follows from these three premises:
Between(b, c, d), Between(a, b, d), and Left(a, c). Do you think it does?
2. Start a new world and put four blocks, labeled a, b, c, and d on one row
of the grid.


3. Arrange the blocks so that the conclusion is false. Check the premises. If
any of them are false, rearrange the blocks until they are all true. Is the
conclusion still false? If not, keep trying.

4. If you have trouble, try putting them in the order d, a, b, c. Now you will
find that all the premises are true but the conclusion is false. This world is
a counterexample to the argument. Thus we have demonstrated that the
conclusion does not follow from the premises.

5. Save your counterexample as World Counterexample 1.


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations

Remember

To demonstrate the invalidity of an argument with premises P1 , . . . , Pn
and conclusion Q, find a counterexample: a possible circumstance that
makes P1 , . . . , Pn all true but Q false. Such a counterexample shows that
Q is not a consequence of P1 , . . . , Pn .

Exercises

2.21 If you have skipped the You try it section, go back and do it now. Submit the world file World
➶ Counterexample 1.

2.22 Is the following argument valid? Sound? If it is valid, give an informal proof of it. If it is not
✎ valid, give an informal counterexample to it.
All computer scientists are rich. Anyone who knows how to program a computer is a
computer scientist. Bill Gates is rich. Therefore, Bill Gates knows how to program a
computer.

2.23 Is the following argument valid? Sound? If it is valid, give an informal proof of it. If it is not
✎ valid, give an informal counterexample to it.
Philosophers have the intelligence needed to be computer scientists. Anyone who be-
comes a computer scientist will eventually become wealthy. Anyone with the intelli-
gence needed to be a computer scientist will become one. Therefore, every philosopher
will become wealthy.


Each of the following problems presents a formal argument in the blocks language. If the argument is
valid, submit a proof of it using Fitch. (You will find Exercise files for each of these in the usual place.)
Important: if you use Ana Con in your proof, cite at most two sentences in each application. If the
argument is not valid, submit a counterexample world using Tarski’s World.

2.24 Larger(b, c)
➶ Smaller(b, d)
SameSize(d, e)
Larger(e, c)

2.25 FrontOf(a, b)
➶ LeftOf(a, c)
SameCol(a, b)
FrontOf(c, b)

2.26 SameRow(b, c)
➶ SameRow(a, d)
SameRow(d, f)
LeftOf(a, b)
LeftOf(f, c)

2.27 SameRow(b, c)
➶ SameRow(a, d)
SameRow(d, f)
FrontOf(a, b)
FrontOf(f, c)

Section 2.6
Alternative notation

You will often see arguments presented in the following way, rather than
in Fitch format. The symbol ∴ (read “therefore”) is used to indicate the
conclusion:

All men are mortal.
Socrates is a man.
∴ Socrates is mortal.

There is a huge variety of formal deductive systems, each with its own
notation. We can’t possibly cover all of these alternatives, though we describe
one, the resolution method, in Chapter 17.

Chapter 3

The Boolean Connectives


So far, we have discussed only atomic claims. To form complex claims, fol pro-
vides us with connectives and quantifiers. In this chapter we take up the three
simplest connectives: conjunction, disjunction, and negation, corresponding
to simple uses of the English and, or, and it is not the case that. Because they
were first studied systematically by the English logician George Boole, they
are called the Boolean operators or Boolean connectives.
The Boolean connectives are also known as truth-functional connectives.
There are additional truth-functional connectives which we will talk about
later. These connectives are called “truth functional” because the truth value
of a complex sentence built up using these connectives depends on nothing
more than the truth values of the simpler sentences from which it is built.
Because of this, we can explain the meaning of a truth-functional connective
in a couple of ways. Perhaps the easiest is by constructing a truth table, a
table that shows how the truth value of a sentence formed with the connec-
tive depends on the truth values of the sentence’s immediate parts. We will
give such tables for each of the connectives we introduce. A more interesting
way, and one that can be particularly illuminating, is by means of a game,
sometimes called the Henkin-Hintikka game, after the logicians Leon Henkin
and Jaakko Hintikka.
Imagine that two people, say Max and Claire, disagree about the truth
value of a complex sentence. Max claims it is true, Claire claims it is false. The
two repeatedly challenge one another to justify their claims in terms of simpler
claims, until finally their disagreement is reduced to a simple atomic claim,
one involving an atomic sentence. At that point they can simply examine the
world to see whether the atomic claim is true—at least in the case of claims
about the sorts of worlds we find in Tarski’s World. These successive challenges
can be thought of as a game where one player will win, the other will lose. The
legal moves at any stage depend on the form of the sentence. We will explain
them below. The one who can ultimately justify his or her claims is the winner.
When you play this game in Tarski’s World, the computer takes the side
opposite you, even if it knows you are right. If you are mistaken in your initial
assessment, the computer will be sure to win the game. If you are right,
though, the computer plugs away, hoping you will blunder. If you slip up, the
computer will win the game. We will use the game rules as a second way of
explaining the meanings of the truth-functional connectives.


Section 3.1
Negation symbol: ¬
The symbol ¬ is used to express negation in our language, the notion we
commonly express in English using terms like not, it is not the case that, non-
and un-. In first-order logic, we always apply this symbol to the front of a
sentence to be negated, while in English there is a much more subtle system
for expressing negative claims. For example, the English sentences John isn’t
home and It is not the case that John is home have the same first-order
translation:
¬Home(john)
This sentence is true if and only if Home(john) isn’t true, that is, just in case
John isn’t home.
In English, we generally avoid double negatives—negatives inside other
negatives. For example, the sentence It doesn’t make no difference is problem-
atic. If someone says it, they usually mean that it doesn’t make any difference.
In other words, the second negative just functions as an intensifier of some
sort. On the other hand, this sentence could be used to mean just what it
says, that it does not make no difference, it makes some difference.
Fol is much more systematic. You can put a negation symbol in front of
any sentence whatsoever, and it always negates it, no matter how many other
negation symbols the sentence already contains. For example, the sentence
¬¬Home(john)
negates the sentence
¬Home(john)
and so is true if and only if John is home.
The negation symbol, then, can apply to complex sentences as well as to
atomic sentences. We will say that a sentence is a literal if it is either atomic
or the negation of an atomic sentence. This notion of a literal will be useful
later on.
We will abbreviate negated identity claims, such as ¬(b = c), using ≠, as
in b ≠ c. The symbol ≠ is available on the keyboard palettes in both Tarski’s
World and Fitch.

Semantics and the game rule for negation


Given any sentence P of fol (atomic or complex), there is another sentence
¬P. This sentence is true if and only if P is false. This can be expressed in
terms of the following truth table.


P      ¬P
true   false
false  true

The game rule for negation is very simple, since you never have to do
anything. Once you commit yourself to the truth of ¬P this is the same as
committing yourself to the falsity of P. Similarly, if you commit yourself to
the falsity of ¬P, this is tantamount to committing yourself to the truth of
P. So in either case Tarski’s World simply replaces your commitment about
the more complex sentence by the opposite commitment about the simpler
sentence.

You try it
................................................................
1. Open Wittgenstein’s World. Start a new sentence file and write the following
sentence.
¬¬¬¬¬Between(e, d, f)

2. Use the Verify button to check the truth value of the sentence.

3. Now play the game, choosing whichever commitment you please. What
happens to the number of negation symbols as the game proceeds? What
happens to your commitment?

4. Now play the game again with the opposite commitment. If you won the
first time, you should lose this time, and vice versa. Don’t feel bad about
losing.

5. There is no need to save the sentence file when you are done.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations

Remember

1. If P is a sentence of fol, then so is ¬P.

2. The sentence ¬P is true if and only if P is not true.

3. A sentence that is either atomic or the negation of an atomic sentence
is called a literal.


Exercises

3.1 If you skipped the You try it section, go back and do it now. There are no files to submit,
but you wouldn’t want to miss it.

3.2 (Assessing negated sentences) Open Boole’s World and Brouwer’s Sentences. In the sentence file
➶ you will find a list of sentences built up from atomic sentences using only the negation symbol.
Read each sentence and decide whether you think it is true or false. Check your assessment. If
the sentence is false, make it true by adding or deleting a negation sign. When you have made
all the sentences in the file true, submit the modified file as Sentences 3.2.

3.3 (Building a world) Start a new sentence file. Write the following sentences in your file and save
➶ the file as Sentences 3.3.
1. ¬Tet(f)
2. ¬SameCol(c, a)
3. ¬¬SameCol(c, b)
4. ¬Dodec(f)
5. c ≠ b
6. ¬(d ≠ e)
7. ¬SameShape(f, c)
8. ¬¬SameShape(d, c)
9. ¬Cube(e)
10. ¬Tet(c)

Now start a new world file and build a world where all these sentences are true. As you modify
the world to make the later sentences true, make sure that you have not accidentally falsified
any of the earlier sentences. When you are done, submit both your sentences and your world.

3.4 Let P be a true sentence, and let Q be formed by putting some number of negation symbols
✎ in front of P. Show that if you put an even number of negation symbols, then Q is true, but
that if you put an odd number, then Q is false. [Hint: A complete proof of this simple fact
would require what is known as “mathematical induction.” If you are familiar with proof by
induction, then go ahead and give a proof. If you are not, just explain as clearly as you can
why this is true.]
Now assume that P is atomic but of unknown truth value, and that Q is formed as before.
No matter how many negation symbols Q has, it will always have the same truth value as a
literal, namely either the literal P or the literal ¬P. Describe a simple procedure for determining
which.


Section 3.2
Conjunction symbol: ∧
The symbol ∧ is used to express conjunction in our language, the notion we
normally express in English using terms like and, moreover, and but. In first-
order logic, this connective is always placed between two sentences, whereas in
English we can also conjoin other parts of speech, such as nouns. For example,
the English sentences John and Mary are home and John is home and Mary
is home have the same first-order translation:
Home(john) ∧ Home(mary)
This sentence is read aloud as “Home John and home Mary.” It is true if and
only if John is home and Mary is home.
In English, we can also conjoin verb phrases, as in the sentence John slipped
and fell. But in fol we must translate this the same way we would translate
John slipped and John fell :
Slipped(john) ∧ Fell(john)
This sentence is true if and only if the atomic sentences Slipped(john) and
Fell(john) are both true.
A lot of times, a sentence of fol will contain ∧ when there is no visible
sign of conjunction in the English sentence at all. How, for example, do you
think we might express the English sentence d is a large cube in fol? If you
guessed
Large(d) ∧ Cube(d)
you were right. This sentence is true if and only if d is large and d is a cube—
that is, if d is a large cube.
Some uses of the English and are not accurately mirrored by the fol
conjunction symbol. For example, suppose we are talking about an evening
when Max and Claire were together. If we were to say Max went home and
Claire went to sleep, our assertion would carry with it a temporal implication,
namely that Max went home before Claire went to sleep. Similarly, if we were to
reverse the order and assert Claire went to sleep and Max went home it would
suggest a very different sort of situation. By contrast, no such implication,
implicit or explicit, is intended when we use the symbol ∧. The sentence
WentHome(max) ∧ FellAsleep(claire)
is true in exactly the same circumstances as
FellAsleep(claire) ∧ WentHome(max)

Section 3.2
72 / The Boolean Connectives

Semantics and the game rule for ∧


Just as with negation, we can put complex sentences as well as simple ones
together with ∧. A sentence P ∧ Q is true if and only if both P and Q are true.
Thus P ∧ Q is false if either or both of P or Q is false. This can be summarized
by the following truth table.
P      Q      P ∧ Q
true   true   true
true   false  false
false  true   false
false  false  false
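Because ∧ and ¬ are truth functional, these tables can be generated mechanically.
Here is a small Python sketch (our illustration, not part of the text’s software)
that computes them; it reproduces the table for ∧ above, and the earlier table
for ¬.

    from itertools import product

    def NOT(p):            # the truth function expressed by ¬
        return not p

    def AND(p, q):         # the truth function expressed by ∧
        return p and q

    # Each row lists P, Q, and the value of P ∧ Q.
    for p, q in product([True, False], repeat=2):
        print(p, q, AND(p, q))
    # The table for ¬ is just: NOT(True) is False, NOT(False) is True.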

The Tarski’s World game is more interesting for conjunctions than nega-
tions. The way the game proceeds depends on whether you have committed
to true or to false. If you commit to the truth of P ∧ Q then you have
implicitly committed yourself to the truth of each of P and Q. Thus, Tarski’s
World gets to choose either one of these simpler sentences and hold you to the
truth of it. (Which one will Tarski’s World choose? If one or both of them are
false, it will choose a false one so that it can win the game. If both are true,
it will choose at random, hoping that you will make a mistake later on.)
If you commit to the falsity of P ∧ Q, then you are claiming that at least
one of P or Q is false. In this case, Tarski’s World will ask you to choose one of
the two and thereby explicitly commit to its being false. The one you choose
had better be false, or you will eventually lose the game.
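Tarski’s World’s choice of conjunct can be pictured schematically as follows (a
sketch of the strategy just described, not the program’s actual code; the function
name and arguments are ours).

    def tarskis_choice(conjuncts, is_true):
        # You have committed to TRUE for the whole conjunction, so Tarski's
        # World picks a conjunct and holds you to its truth.  It picks a false
        # conjunct if it can find one (guaranteeing that you lose), and
        # otherwise picks arbitrarily, hoping for a later mistake.
        for c in conjuncts:
            if not is_true(c):
                return c
        return conjuncts[0]

When you commit to FALSE for the conjunction, the roles are reversed: the
choice of a (hopefully false) conjunct is yours.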

You try it
................................................................
1. Open Claire’s World. Start a new sentence file and enter the sentence

¬Cube(a) ∧ ¬Cube(b) ∧ ¬Cube(c)

2. Notice that this sentence is false in this world, since c is a cube. Play
the game committed (mistakenly) to the truth of the sentence. You will
see that Tarski’s World immediately zeros in on the false conjunct. Your
commitment to the truth of the sentence guarantees that you will lose the
game, but along the way, the reason the sentence is false becomes apparent.
3. Now begin playing the game committed to the falsity of the sentence.
When Tarski’s World asks you to choose a conjunct you think is false,
pick the first sentence. This is not the false conjunct, but select it anyway
and see what happens after you choose OK.


4. Play until Tarski’s World says that you have lost. Then click on Back a
couple of times, until you are back to where you are asked to choose a
false conjunct. This time pick the false conjunct and resume the play of
the game from that point. This time you will win.

5. Notice that you can lose the game even when your original assessment
is correct, if you make a bad choice along the way. But Tarski’s World
always allows you to back up and make different choices. If your original
assessment is correct, there will always be a way to win the game. If it
is impossible for you to win the game, then your original assessment was
wrong.

6. Save your sentence file as Sentences Game 1 when you are done.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations

Remember

1. If P and Q are sentences of fol, then so is P ∧ Q.

2. The sentence P ∧ Q is true if and only if both P and Q are true.

Exercises

3.5 If you skipped the You try it section, go back and do it now. Make sure you follow all the
➶ instructions. Submit the file Sentences Game 1.
3.6 Start a new sentence file and open Wittgenstein’s World. Write the following sentences in the
➶ sentence file.
1. Tet(f) ∧ Small(f)
2. Tet(f) ∧ Large(f)
3. Tet(f) ∧ ¬Small(f)
4. Tet(f) ∧ ¬Large(f)
5. ¬Tet(f) ∧ ¬Small(f)
6. ¬Tet(f) ∧ ¬Large(f)
7. ¬(Tet(f) ∧ Small(f))
8. ¬(Tet(f) ∧ Large(f))


9. ¬(¬Tet(f) ∧ ¬Small(f))
10. ¬(¬Tet(f) ∧ ¬Large(f))

Once you have written these sentences, decide which you think are true. Record your eval-
uations, to help you remember. Then go through and use Tarski’s World to evaluate your
assessments. Whenever you are wrong, play the game to see where you went wrong.
If you are never wrong, playing the game will not be very instructive. Play the game a
couple times anyway, just for fun. In particular, try playing the game committed to the falsity
of sentence 9. Since this sentence is true in Wittgenstein’s World, Tarski’s World should be able
to beat you. Make sure you understand everything that happens as the game proceeds.
Next, change the size or shape of block f , predict how this will affect the truth values of
your ten sentences, and see if your prediction is right. What is the maximum number of these
sentences that you can get to be true in a single world? Build a world in which the maximum
number of sentences are true. Submit both your sentence file and your world file, naming them
as usual.

3.7 (Building a world) Open Max’s Sentences. Build a world where all these sentences are true.
➶ You should start with a world with six blocks and make changes to it, trying to make all the
sentences true. Be sure that as you make a later sentence true you do not inadvertently falsify
an earlier sentence.

Section 3.3
Disjunction symbol: ∨

The symbol ∨ is used to express disjunction in our language, the notion we
express in English using or. In first-order logic, this connective, like the con-
junction sign, is always placed between two sentences, whereas in English we
can also disjoin nouns, verbs, and other parts of speech. For example, the
English sentences John or Mary is home and John is home or Mary is home
both have the same first-order translation:

Home(john) ∨ Home(mary)

This fol sentence is read “Home John or home Mary.”


Although the English or is sometimes used in an “exclusive” sense, to say
that exactly one (i.e., one but no more than one) of the two disjoined sentences
is true, the first-order logic ∨ is always given an “inclusive” interpretation: it
means that at least one and possibly both of the two disjoined sentences is
true. Thus, our sample sentence is true if John is home but Mary is not, if
Mary is home but John is not, or if both John and Mary are home.


If we wanted to express the exclusive sense of or in the above example, we
could do it as follows:

[Home(john) ∨ Home(mary)] ∧ ¬[Home(john) ∧ Home(mary)]

As you can see, this sentence says that John or Mary is home, but it is not
the case that they are both home.
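The difference between the two senses of or can also be tabulated directly. The following small Python sketch (ours, purely illustrative) compares the inclusive ∨ with the exclusive construction just given, over all four assignments:

    from itertools import product

    for p, q in product([True, False], repeat=2):
        inclusive = p or q
        exclusive = (p or q) and not (p and q)   # [P ∨ Q] ∧ ¬[P ∧ Q]
        print(p, q, inclusive, exclusive)

The two columns differ only in the row where both disjuncts are true.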
Many students are tempted to say that the English expression either . . . or
expresses exclusive disjunction. While this is sometimes the case (and indeed
the simple or is often used exclusively), it isn’t always. For example, suppose
Pris and Scruffy are in the next room and the sound of a cat fight suddenly
breaks out. If we say Either Pris bit Scruffy or Scruffy bit Pris, we would not
be wrong if each had bit the other. So this would be translated as

Bit(pris, scruffy) ∨ Bit(scruffy, pris)

We will see later that the expression either sometimes plays a different logical
function.
Another important English expression that we can capture without intro-
ducing additional symbols is neither. . . nor. Thus Neither John nor Mary is
at home would be expressed as:

¬(Home(john) ∨ Home(mary))

This says that it’s not the case that at least one of them is at home, i.e., that
neither of them is home.

Semantics and the game rule for ∨


Given any two sentences P and Q of fol, atomic or not, we can combine them
using ∨ to form a new sentence P ∨ Q. The sentence P ∨ Q is true if at least
one of P or Q is true. Otherwise it is false. Here is the truth table.

P      Q      P ∨ Q
true   true   true
true   false  true
false  true   true
false  false  false

The game rules for ∨ are the “duals” of those for ∧. If you commit yourself
to the truth of P ∨ Q, then Tarski’s World will make you live up to this by
committing yourself to the truth of one or the other. If you commit yourself to
the falsity of P ∨ Q, then you are implicitly committing yourself to the falsity


of each, so Tarski’s World will choose one and hold you to the commitment
that it is false. (Tarski’s World will, of course, try to win by picking a true
one, if it can.)

You try it
................................................................
1. Open the file Ackermann’s World. Start a new sentence file and enter the
sentence

Cube(c) ∨ ¬(Cube(a) ∨ Cube(b))

Make sure you get the parentheses right!


2. Play the game committed (mistakenly) to this sentence being true. Since
the sentence is a disjunction, and you are committed to true, you will
be asked to pick a disjunct that you think is true. Since the first one is
obviously false, pick the second.
3. You now find yourself committed to the falsity of a (true) disjunction.
Hence you are committed to the falsity of each disjunct. Tarski’s World
will then point out that you are committed to the falsity of Cube(b). But
this is clearly wrong, since b is a cube. Continue until Tarski’s World says
you have lost.
4. Play the game again, this time committed to the falsity of the sentence.
You should be able to win the game this time. If you don’t, back up and
try again.
5. Save your sentence file as Sentences Game 2.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations

Remember

1. If P and Q are sentences of fol, then so is P ∨ Q.

2. The sentence P ∨ Q is true if and only if P is true or Q is true (or both
are true).

Exercises

3.8 If you skipped the You try it section, go back and do it now. You’ll be glad you did. Well,
➶ maybe. Submit the file Sentences Game 2.


3.9 Open Wittgenstein’s World and the sentence file Sentences 3.6 that you created for Exercise 3.6.
➶ Edit the sentences by replacing ∧ by ∨ throughout, saving the edited list as Sentences 3.9.
Once you have changed these sentences, decide which you think are true. Again, record your
evaluations to help you remember them. Then go through and use Tarski’s World to evaluate
your assessment. Whenever you are wrong, play the game to see where you went wrong. If you
are never wrong, then play the game anyway a couple times, knowing that you should win. As
in Exercise 3.6, find the maximum number of sentences you can make true by changing the
size or shape (or both) of block f . Submit both your sentences and world.

3.10 Open Ramsey’s World and start a new sentence file. Type the following four sentences into the
➶ file:
1. Between(a, b, c) ∨ Between(b, a, c)
2. FrontOf(a, b) ∨ FrontOf(c, b)
3. ¬SameRow(b, c) ∨ LeftOf(b, a)
4. RightOf(b, a) ∨ Tet(a)

Assess each of these sentences in Ramsey’s World and check your assessment. Then make a single
change to the world that makes all four of the sentences come out false. Save the modified world
as World 3.10. Submit both files.

Section 3.4
Remarks about the game

We summarize the game rules for the three connectives, ¬, ∧, and ∨, in
Table 3.1. The first column indicates the form of the sentence in question,
and the second indicates your current commitment, true or false. Which
player moves depends on this commitment, as shown in the third column.
The goal of that player’s move is indicated in the final column. Notice that
although the player to move depends on the commitment, the goal of that
move does not depend on the commitment. You can see why this is so by
thinking about the first row of the table, the one for P ∨ Q. When you are
committed to true, it is clear that your goal should be to choose a true
disjunct. But when you are committed to false, Tarski’s World is committed
to true, and so also has the same goal of choosing a true disjunct.

Table 3.1: Game rules for ∧, ∨, and ¬

Form      Your commitment    Player to move    Goal
P ∨ Q     true               you               Choose one of P, Q that is true.
          false              Tarski’s World
P ∧ Q     true               Tarski’s World    Choose one of P, Q that is false.
          false              you
¬P        either             —                 Replace ¬P by P and switch commitment.

There is one somewhat subtle point that should be made about our way of
describing the game. We have said, for example, that when you are committed
to the truth of the disjunction P ∨ Q, you are committed to the truth of one
of the disjuncts. This of course is true, but does not mean you necessarily
know which of P or Q is true. For example, if you have a sentence of the form
P ∨ ¬P, then you know that it is true, no matter how the world is. After all,
if P is not true, then ¬P will be true, and vice versa; in either event P ∨ ¬P
will be true. But if P is quite complex, or if you have imperfect information
about the world, you may not know which of P or ¬P is true. Suppose P
is a sentence like There is a whale swimming below the Golden Gate Bridge
right now. In such a case you would be willing to commit to the truth of the
disjunction (since either there is or there isn’t) without knowing just how to
play the game and win. You know that there is a winning strategy for the
game, but just don’t know what it is.
Since there is a moral imperative to live up to one’s commitments, the
use of the term “commitment” in describing the game is a bit misleading.
You are perfectly justified in asserting the truth of P ∨ ¬P, even if you do
not happen to know your winning strategy for playing the game. Indeed, it
would be foolish to claim that the sentence is not true. But if you do claim
that P ∨ ¬P is true, and then play the game, you will be asked to say which
of P or ¬P you think is true. With Tarski’s World, unlike in real life, you can
always get complete information about the world by going to the 2D view,
and so always live up to such commitments.


Exercises

Here is a problem that illustrates the remarks we made about sometimes being able to tell that a sentence
is true, without knowing how to win the game.

3.11 Make sure Tarski’s World is set to display the world in 3D. Then open Kleene’s World and
✎ Kleene’s Sentences. Some objects are hidden behind other objects, thus making it impossible
to assess the truth of some of the sentences. Each of the six names a, b, c, d, e, and f is in use,
naming some object. Now even though you cannot see all the objects, some of the sentences in
the list can be evaluated with just the information at hand. Assess the truth of each claim, if
you can, without recourse to the 2-D view. Then play the game. If your initial commitment is
right, but you lose the game, back up and play over again. Then go through and add comments
to each sentence explaining whether you can assess its truth in the world as shown, and why.
Finally, display the 2-D view and check your work. We have annotated the first sentence for you
to give you the idea. (The semicolon “;” tells Tarski’s World that what follows is a comment.)
When you are done, print out your annotated sentences to turn in to your instructor.

Section 3.5
Ambiguity and parentheses
When we first described fol, we stressed the lack of ambiguity of this language
as opposed to ordinary languages. For example, English allows us to say things
like Max is home or Claire is home and Carl is happy. This sentence can be
understood in two quite different ways. One reading claims that either Claire
is home and Carl is happy, or Max is home. On this reading, the sentence
would be true if Max was home, even if Carl was unhappy. The other reading
claims both that Max or Claire is home and that Carl is happy.
Fol avoids this sort of ambiguity by requiring the use of parentheses, much
the way they are used in algebra. So, for example, fol would not have one
sentence corresponding to the ambiguous English sentence, but two:
Home(max) ∨ (Home(claire) ∧ Happy(carl))
(Home(max) ∨ Home(claire)) ∧ Happy(carl)
The parentheses in the first indicate that it is a disjunction, whose second
disjunct is itself a conjunction. In the second, they indicate that the sentence
is a conjunction whose first conjunct is a disjunction. As a result, the truth
conditions for the two are quite different. This is analogous to the difference
in algebra between the expressions 2 + (x × 3) and (2 + x) × 3. This analogy
between logic and algebra is one we will come back to later.
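If you want to see the analogy at work, here is a small Python sketch (our own illustration; the truth values assigned to the atomic sentences are assumptions chosen just to make the two readings come apart):

    # Grouping matters in algebra ...
    x = 1
    print(2 + (x * 3), (2 + x) * 3)    # 5 versus 9

    # ... and in logic.  Suppose Max is home, Claire is not, and Carl is unhappy.
    home_max, home_claire, happy_carl = True, False, False
    print(home_max or (home_claire and happy_carl))   # True: the disjunctive reading
    print((home_max or home_claire) and happy_carl)   # False: the conjunctive reading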


Parentheses are also used to indicate the “scope” of a negation symbol
when it appears in a complex sentence. So, for example, the two sentences
¬Home(claire) ∧ Home(max)
¬(Home(claire) ∧ Home(max))
mean quite different things. The first is a conjunction of literals, the first of
which says Claire is not home, the second of which says that Max is home. By
contrast, the second sentence is a negation of a sentence which itself is a con-
junction: it says that they are not both home. You have already encountered
this use of parentheses in earlier exercises.
Many logic books require that you always put parentheses around any pair
of sentences joined by a binary connective (such as ∧ or ∨). These books do
not allow sentences of the form:
P∧Q∧R
but instead require one of the following:
((P ∧ Q) ∧ R)
(P ∧ (Q ∧ R))
The version of fol that we use in this book is not so fussy, in a couple of ways.
First of all, it allows you to conjoin any number of sentences without using
parentheses, since the result is not ambiguous, and similarly for disjunctions.
Second, it allows you to leave off the outermost parentheses, since they serve
no useful purpose. You can also add extra parentheses (or brackets or braces)
if you want to for the sake of readability. For the most part, all we will require
is that your expression be unambiguous.

Remember

Parentheses must be used whenever ambiguity would result from their
omission. In practice, this means that conjunctions and disjunctions must
be “wrapped” in parentheses whenever combined by means of some other
connective.

You try it
................................................................
1. Let’s try our hand at evaluating some sentences built up from atomic
sentences using all three connectives ∧, ∨, ¬. Open Boole’s Sentences and
Wittgenstein’s World. If you changed the size or shape of f while doing
Exercises 3.6 and 3.9, make sure that you change it back to a large tetra-
hedron.


2. Evaluate each sentence in the file and check your assessment. If your as-
sessment is wrong, play the game to see why. Don’t go from one sentence
to the next until you understand why it has the truth value it does.

3. Do you see the importance of parentheses? After you understand all the
sentences, go back and see which of the false sentences you can make true
just by adding, deleting, or moving parentheses, but without making any
other changes. Save your file as Sentences Ambiguity 1.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations

Exercises

To really master a new language, you have to use it, not just read about it. The exercises and problems
that follow are intended to let you do just that.

3.12 If you skipped the You try it section, go back and do it now. Submit the file Sentences
➶ Ambiguity 1.

3.13 (Building a world) Open Schröder’s Sentences. Build a single world where all the sentences
➶ in this file are true. As you work through the sentences, you will find yourself successively
modifying the world. Whenever you make a change in the world, be careful that you don’t
make one of your earlier sentences false. When you are finished, verify that all the sentences
are really true.

3.14 (Parentheses) Show that the sentence
➶
¬(Small(a) ∨ Small(b))

is not a consequence of the sentence

¬Small(a) ∨ Small(b)

You will do this by submitting a counterexample world in which the second
sentence is true but the first sentence is false.

3.15 (More parentheses) Show that
➶
Cube(a) ∧ (Cube(b) ∨ Cube(c))

is not a consequence of the sentence

(Cube(a) ∧ Cube(b)) ∨ Cube(c)

You will do this by submitting a counterexample world in which the second
sentence is true but the first sentence is false.
3.16 (DeMorgan Equivalences) Open the file DeMorgan’s Sentences. Construct a world where all the
➶ odd numbered sentences are true. Notice that no matter how you do this, the even numbered
sentences also come out true. Submit this as World 3.16.1. Next build a world where all the
odd numbered sentences are false. Notice that no matter how you do it, the even numbered
sentences also come out false. Submit this as World 3.16.2.


3.17 In Exercise 3.16, you noticed an important fact about the relation between the even and odd
✎ numbered sentences in DeMorgan’s Sentences. Try to explain why each even numbered sentence
always has the same truth value as the odd numbered sentence that precedes it.

Section 3.6
Equivalent ways of saying things

Every language has many ways of saying the same thing. This is particularly
true of English, which has absorbed a remarkable number of words from other
languages in the course of its history. But in any language, speakers always
have a choice of many synonymous ways of getting across their point. The
world would be a boring place if there were just one way to make a given
claim.
Fol is no exception, even though it is far less rich in its expressive capaci-
ties than English. In the blocks language, for example, none of our predicates
is synonymous with another predicate, though it is obvious that we could
do without many of them without cutting down on the claims expressible in
the language. For instance, we could get by without the predicate RightOf by
expressing everything we need to say in terms of the predicate LeftOf, sys-
tematically reversing the order of the names to get equivalent claims. This is
not to say that RightOf means the same thing as LeftOf—it obviously does
not—but just that the blocks language offers us a simple way to construct
equivalent claims using these predicates. In the exercises at the end of this
section, we explore a number of equivalences made possible by the predicates
of the blocks language.
Some versions of fol are more parsimonious with their basic predicates
than the blocks language, and so may not provide equivalent ways of express-
ing atomic claims. But even these languages cannot avoid multiple ways of
expressing more complex claims. For example, P ∧ Q and Q ∧ P express the
same claim in any first-order language. More interesting, because of the su-
perficial differences in form, are the equivalences illustrated in Exercise 3.16,
known as DeMorgan’s laws. The first of DeMorgan’s laws tells us that the
negation of a conjunction, ¬(P ∧ Q), is logically equivalent to the disjunction
of the negations of the original conjuncts: ¬P ∨ ¬Q. The other tells us that
the negation of a disjunction, ¬(P ∨ Q), is equivalent to the conjunction of
the negations of the original disjuncts: ¬P ∧ ¬Q. These laws are simple con-
sequences of the meanings of the Boolean connectives. Writing S1 ⇔ S2 to
indicate that S1 and S2 are logically equivalent, we can express DeMorgan’s


laws in the following way:

¬(P ∧ Q) ⇔ (¬P ∨ ¬Q)


¬(P ∨ Q) ⇔ (¬P ∧ ¬Q)
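Since each side of these equivalences is truth functional, they can be checked mechanically by running through all four assignments. Here is a small Python sketch (an informal check of DeMorgan's laws and the double negation principle mentioned below, not a replacement for the discussion in the next chapter):

    from itertools import product

    for p, q in product([True, False], repeat=2):
        assert (not (p and q)) == ((not p) or (not q))   # first DeMorgan law
        assert (not (p or q)) == ((not p) and (not q))   # second DeMorgan law
        assert (not (not p)) == p                        # double negation
    print("All three equivalences hold under every assignment.")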

There are many other equivalences that arise from the meanings of the
Boolean connectives. Perhaps the simplest is known as the principle of double
negation. Double negation says that a sentence of the form ¬¬P is equivalent
to the sentence P. We will systematically discuss these and other equiva-
lences in the next chapter. In the meantime, we simply note these important
equivalences before going on. Recognizing that there is more than one way of
expressing a claim is essential before we tackle complicated claims involving
the Boolean connectives.

Remember

(Double negation and DeMorgan’s Laws) For any sentences P and Q:

1. Double negation: ¬¬P ⇔ P

2. DeMorgan: ¬(P ∧ Q) ⇔ (¬P ∨ ¬Q)

3. DeMorgan: ¬(P ∨ Q) ⇔ (¬P ∧ ¬Q)

Exercises

3.18 (Equivalences in the blocks language) In the blocks language used in Tarski’s World there are
➶ a number of equivalent ways of expressing some of the predicates. Open Bernays’ Sentences.
You will find a list of atomic sentences, where every other sentence is left blank. In each blank,
write a sentence that is equivalent to the sentence above it, but does not use the predicate
used in that sentence. (In doing this, you may presuppose any general facts about Tarski’s
World, for example that blocks come in only three shapes.) If your answers are correct, the odd
numbered sentences will have the same truth values as the even numbered sentences in every
world. Check that they do in Ackermann’s World, Bolzano’s World, Boole’s World, and Leibniz’s
World. Submit the modified sentence file as Sentences 3.18.

3.19 (Equivalences in English) There are also equivalent ways of expressing predicates in English.
✎ For each of the following sentences of fol, find an atomic sentence in English that expresses
the same thing. For example, the sentence Man(max) ∧ ¬Married(max) could be expressed in


English by means of the atomic sentence Max is a bachelor.


1. FatherOf(chris, alex) ∨ MotherOf(chris, alex)
2. BrotherOf(chris, alex) ∨ SisterOf(chris, alex)
3. Human(chris) ∧ Adult(chris) ∧ ¬Woman(chris)
4. Number(4) ∧ ¬Odd(4)
5. Person(chris) ∧ ¬Odd(chris)
6. mother(mother(alex)) = mary ∨ mother(father(alex)) = mary [Notice that mother and
father are function symbols. If you did not cover Section 1.5, you may skip this sen-
tence.]

Section 3.7
Translation

An important skill that you will want to master is that of translating from
English to fol, and vice versa. But before you can do that, you need to know
how to express yourself in both languages. The problems below are designed
to help you learn these related skills.
How do we know if a translation is correct? Intuitively, a correct translation
is a sentence with the same meaning as the one being translated. But what
is the meaning? Fol finesses this question, settling for “truth conditions.”
What we require of a correct translation in fol is that it be true in the same
circumstances as the original sentence. If two sentences are true in exactly
the same circumstances, we say that they have the same truth conditions. For
sentences of Tarski’s World, this boils down to being true in the very same
worlds.
Note that it is not sufficient that the two sentences have the same truth
value in some particular world. If that were so, then any true sentence of
English could be translated by any true sentence of fol. So, for example,
if Claire and Max are both at home, we could translate Max is at home by
means of Home(claire). No, having the same actual truth value is not enough.
They have to have the same truth values in all circumstances.

Remember

In order for an fol sentence to be a good translation of an English sen-
tence, it is sufficient that the two sentences have the same truth values
in all possible circumstances, that is, that they have the same truth con-
ditions.


In general, this is all we require of translations into and out of fol. Thus,
given an English sentence and a good fol translation of it, say S, any other
sentence S′ that is equivalent to S will also count as an acceptable translation
of it, since S and S′ have the same truth conditions. But there is a matter of
style. Some good translations are better than others. You want sentences that
are easy to understand. But you also want to keep the fol connectives close
to the English, if possible.
For example, a good translation of It is not true that Claire and Max are
both at home would be given by

¬(Home(claire) ∧ Home(max))

This is equivalent to the following sentence (by the first DeMorgan law), so
we count it too as an acceptable translation:

¬Home(claire) ∨ ¬Home(max)

But there is a clear stylistic sense in which the first is a better translation, since
it conforms more closely to the form of the original. There are no hard and
fast rules for determining which among several logically equivalent sentences
is the best translation of a given sentence.
Many stylistic features of English have nothing to do with the truth con-
ditions of a sentence, and simply can’t be captured in an fol translation. For
example, consider the English sentence Pris is hungry but Carl is not. This
sentence tells us two things, that Pris is hungry and that Carl is not hungry.
So it would be translated into fol as

Hungry(pris) ∧ ¬Hungry(carl)

When it comes to truth conditions, but expresses the same truth function
as and. Yet it is clear that but carries an additional suggestion that and does
not, namely, that the listener may find the sentence following the but a bit sur-
prising, given the expectations raised by the sentence preceding it. The words
but, however, yet, nonetheless, and so forth, all express ordinary conjunction,
and so are translated into fol using ∧. The fact that they also communicate
a sense of unexpectedness is just lost in the translation. Fol, as much as we
love it, sometimes sacrifices style for clarity.
In Exercise 3.21, sentences 1, 8, and 10, you will discover an important
function that the English phrases either. . . or and both. . . and sometimes play.
Either helps disambiguate the following or by indicating how far to the left
its scope extends; similarly both indicates how far to the left the following
and extends. For example, Either Max is home and Claire is home or Carl


is happy is unambiguous, whereas it would be ambiguous without the either.
What it means is that

[Home(max) ∧ Home(claire)] ∨ Happy(carl)

In other words, either and both can sometimes act as left parentheses act in
fol. The same list of sentences demonstrates many other uses of either and
both.

Remember

1. The English expression and sometimes suggests a temporal ordering;
the fol expression ∧ never does.

2. The English expressions but, however, yet, nonetheless, and moreover
are all stylistic variants of and.

3. The English expressions either and both are often used like parentheses
to clarify an otherwise ambiguous sentence.

Exercises

3.20 (Describing a simple world) Open Boole’s World. Start a new sentence file, named Sen-
➶ tences 3.20, where you will describe some features of this world. Check each of your sentences
to see that it is indeed a sentence and that it is true in this world.
1. Notice that f (the large dodecahedron in the back) is not in front of a. Use your first
sentence to say this.
2. Notice that f is to the right of a and to the left of b. Use your second sentence to say
this.
3. Use your third sentence to say that f is either in back of or smaller than a.
4. Express the fact that both e and d are between c and a.
5. Note that neither e nor d is larger than c. Use your fifth sentence to say this.
6. Notice that e is neither larger than nor smaller than d. Use your sixth sentence to say
this.
7. Notice that c is smaller than a but larger than e. State this fact.
8. Note that c is in front of f; moreover, it is smaller than f . Use your eighth sentence
to state these things.


9. Notice that b is in the same row as a but is not in the same column as f. Use your
ninth sentence to express this fact.
10. Notice that e is not in the same column as either c or d. Use your tenth sentence to
state this.

Now let’s change the world so that none of the above mentioned facts hold. We can do this as
follows. First move f to the front right corner of the grid. (Be careful not to drop it off the
edge. You might find it easier to make the move from the 2-D view. If you accidentally drop
it, just open Boole’s World again.) Then move e to the back left corner of the grid and make
it large. Now none of the facts hold; if your answers to 1–10 are correct, all of the sentences
should now be false. Verify that they are. If any are still true, can you figure out where you went
wrong? Submit your sentences when you think they are correct. There is no need to submit
the modified world file.

3.21 (Some translations) Tarski’s World provides you with a very useful way to check whether your
➶ translation of a given English sentence is correct. If it is correct, then it will always have the
same truth value as the English sentence, no matter what world the two are evaluated in. So
when you are in doubt about one of your translations, simply build some worlds where the
English sentence is true, others where it is false, and check to see that your translation has
the right truth values in these worlds. You should use this technique frequently in all of the
translation exercises.
Start a new sentence file, and use it to enter translations of the following English sentences
into first-order logic. You will only need to use the connectives ∧, ∨, and ¬.
1. Either a is small or both c and d are large.
2. d and e are both in back of b.
3. d and e are both in back of b and larger than it.
4. Both d and c are cubes, however neither of them is small.
5. Neither e nor a is to the right of c and to the left of b.
6. Either e is not large or it is in back of a.
7. c is neither between a and b, nor in front of either of them.
8. Either both a and e are tetrahedra or both a and f are.
9. Neither d nor c is in front of either c or b.
10. c is either between d and f or smaller than both of them.
11. It is not the case that b is in the same row as c.
12. b is in the same column as e, which is in the same row as d, which in turn is in the
same column as a.

Before you submit your sentence file, do the next exercise.


3.22 (Checking your translations) Open Wittgenstein’s World. Notice that all of the English sentences
➶ from Exercise 3.21 are true in this world. Thus, if your translations are accurate, they will also
be true in this world. Check to see that they are. If you made any mistakes, go back and fix
them. But as we have stressed, even if one of your sentences comes out true in Wittgenstein’s
World, it does not mean that it is a proper translation of the corresponding English sentence.
All you know for sure is that your translation and the original sentence have the same truth
value in this particular world. If the translation is correct, it will have the same truth value as
the English sentence in every world. Thus, to have a better test of your translations, we will
examine them in a number of worlds, to see if they have the same truth values as their English
counterparts in all of these worlds.
Let’s start by making modifications to Wittgenstein’s World. Make all the large or medium
objects small, and the small objects large. With these changes in the world, the English sen-
tences 1, 3, 4, and 10 become false, while the rest remain true. Verify that the same holds for
your translations. If not, correct your translations. Next, rotate your modified Wittgenstein’s
World 90◦ clockwise. Now sentences 5, 6, 8, 9, and 11 should be the only true ones that remain.
Let’s check your translations in another world. Open Boole’s World. The only English sen-
tences that are true in this world are sentences 6 and 11. Verify that all of your translations
except 6 and 11 are false. If not, correct your translations.
Now modify Boole’s World by exchanging the positions of b and c. With this change, the
English sentences 2, 5, 6, 7, and 11 come out true, while the rest are false. Check that the same
is true of your translations.
There is nothing to submit except Sentences 3.21.
3.23 Start a new sentence file and translate the following into fol. Use the names and predicates
➶ presented in Table 1.2 on page 30.
1. Max is a student, not a pet.
2. Claire fed Folly at 2 pm and then ten minutes later gave her to Max.
3. Folly belonged to either Max or Claire at 2:05 pm.
4. Neither Max nor Claire fed Folly at 2 pm or at 2:05 pm.
5. 2:00 pm is between 1:55 pm and 2:05 pm.
6. When Max gave Folly to Claire at 2 pm, Folly wasn’t hungry, but she was an hour
later.
3.24 Referring again to Table 1.2, page 30, translate the following into natural, colloquial English.
✎ Turn in your translations to your instructor.
1. Student(claire) ∧ ¬Student(max)
2. Pet(pris) ∧ ¬Owned(max, pris, 2:00)
3. Owned(claire, pris, 2:00) ∨ Owned(claire, folly, 2:00)
4. ¬(Fed(max, pris, 2:00) ∧ Fed(max, folly, 2:00))


5. ((Gave(max, pris, claire, 2:00) ∧ Hungry(pris, 2:00)) ∨
(Gave(max, folly, claire, 2:00) ∧ Hungry(folly, 2:00))) ∧
Angry(claire, 2:05)

3.25 Translate the following into fol, introducing names, predicates, and function symbols as
✎? needed. Explain the meaning of each predicate and function symbol, unless it is completely
obvious.
1. AIDS is less contagious than influenza, but more deadly.
2. Abe fooled Stephen on Sunday, but not on Monday.
3. Sean or Brad admires Meryl and Harrison.
4. Daisy is a jolly miller, and lives on the River Dee.
5. Polonius’s eldest child was neither a borrower nor a lender.

Section 3.8
Alternative notation

As we mentioned in Chapter 2, there are various dialect differences among
users of fol. It is important to be aware of these so that you will not be
stymied by superficial differences. In fact, you will run into alternate symbols
being used for each of the three connectives studied in this chapter.
The most common variant of the negation sign, ¬, is the symbol known
as the tilde, ∼. Thus you will frequently encounter ∼ P where we would write
¬P. A more old-fashioned alternative is to draw a bar completely across the
negated sentence, as in P̄. This has one advantage over ¬, in that it allows
you to avoid certain uses of parentheses, since the bar indicates its own scope
by what lies under it. For example, where we have to write ¬(P ∧ Q), the
bar equivalent would simply be P ∧ Q with a single bar drawn across the whole conjunction. None of these symbols are available
on all keyboards, a serious problem in some contexts, such as programming
languages. Because of this, many programming languages use an exclamation
point to indicate negation. In the Java programming language, for example,
¬P would be written !P.
There are only two common variants of ∧. By far the most common is
&, or sometimes (as in Java), &&. An older notation uses a centered dot, as
in multiplication. To make things more confusing still, the dot is sometimes
omitted, again as in multiplication. Thus, for P ∧ Q you might see any of the
following: P&Q, P&&Q, P · Q, or just PQ.
Happily, the symbol ∨ is pretty standard. The only exception you may
encounter is a single or double vertical line, used in programming languages.
So if you see P | Q or P ‖ Q, what is meant is probably P ∨ Q. Unfortunately,


though, some old textbooks use P | Q to express not both P and Q.

Alternatives to parentheses
There are ways to get around the use of parentheses in fol. At one time, a
common alternative to parentheses was a system known as dot notation. This
system involved placing little dots next to connectives indicating their relative
“power” or scope. In this system, the two sentences we write as P ∨ (Q ∧ R)
and (P ∨ Q) ∧ R would have been written P ∨. Q ∧ R and P ∨ Q .∧ R, respec-
tively. With more complex sentences, multiple dots were used. Fortunately,
this notation has just about died out, and the present authors never speak to
anyone who uses it.
Another approach to parentheses is known as Polish notation. In Polish
notation, the usual infix notation is replaced by prefix notation, and this
makes parentheses unnecessary. Thus the distinction between our ¬(P ∨ Q)
and (¬P ∨ Q) would, in prefix form, come out as ¬ ∨ PQ and ∨¬PQ, the
order of the connectives indicating which includes the other in its scope.
Besides prefix notation, Polish notation uses certain capital letters for
connectives (N for ¬, K for ∧, and A for ∨), and lower case letters for its atomic
sentences (to distinguish them from connectives). So an actual sentence of the
Polish dialect would look like this:
ApKNqr
Since this expression starts with A, we know right away that it is a disjunction.
What follows must be its two disjuncts, in sequence. So the first disjunct is p
and the second is KNqr, that is, the conjunction of the negation of q and of r.
So this is the Polish version of

P ∨ (¬Q ∧ R)

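Reading a Polish-notation string back into our dialect can be done with a simple recursive procedure. Here is a minimal Python sketch (our own, with invented helper names; it handles only N, K, A, and lower-case atomic sentences):

    def to_infix(s, i=0):
        # Return (infix string, index just past the sentence starting at s[i]).
        c = s[i]
        if c == 'N':                       # negation takes one argument
            arg, j = to_infix(s, i + 1)
            return '¬' + arg, j
        if c in 'KA':                      # K is ∧, A is ∨; each takes two arguments
            left, j = to_infix(s, i + 1)
            right, k = to_infix(s, j)
            op = '∧' if c == 'K' else '∨'
            return '(' + left + ' ' + op + ' ' + right + ')', k
        return c.upper(), i + 1            # a lower-case atomic sentence

    print(to_infix('ApKNqr')[0])           # (P ∨ (¬Q ∧ R))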
Though Polish notation may look hard to read, many of you have already
mastered a version of it. Calculators use two styles for entering formulas. One
is known as algebraic style, the other as RPN style. The RPN stands for
“reverse Polish notation.” If you have a calculator that uses RPN, then to
calculate the value of, say, (7 × 8) + 3 you enter things in this order: 7, 8, ×,
3, +. This is just the reverse of the Polish, or prefix, ordering.
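If you have never used such a calculator, the following Python sketch (ours, purely illustrative) shows why no parentheses are needed in RPN: each operator simply consumes the two most recently computed values on a stack.

    def eval_rpn(tokens):
        stack = []
        for tok in tokens:
            if tok in ('+', '*'):
                b, a = stack.pop(), stack.pop()
                stack.append(a + b if tok == '+' else a * b)
            else:
                stack.append(int(tok))
        return stack[0]

    print(eval_rpn(['7', '8', '*', '3', '+']))   # (7 × 8) + 3 = 59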
In order for Polish notation to work without parentheses, the connectives
must all have a fixed arity. If we allowed conjunction to take an arbitrary num-
ber of sentences as arguments, rather than requiring exactly two, a sentence
like KpNKqrs would be ambiguous. It could either mean P ∧ ¬(Q ∧ R) ∧ S or
P ∧ ¬(Q ∧ R ∧ S), and these aren’t equivalent.


Remember

The following table summarizes the alternative notations discussed so far.

Our notation    Common equivalents
¬P              ∼ P,  P̄,  !P,  Np
P ∧ Q           P&Q,  P&&Q,  P · Q,  PQ,  Kpq
P ∨ Q           P | Q,  P ‖ Q,  Apq

Exercises

3.26 (Overcoming dialect differences) The following are all sentences of fol. But
➶ they’re in different dialects. Submit a sentence file in which you’ve translated
them into our dialect.
1. P&Q
2. !(P ‖ (Q&&P))
3. (∼ P ∨ Q) · P
4. P(∼ Q ∨ RS)

3.27 (Translating from Polish) Try your hand at translating the following sentences
➶ from Polish notation into our dialect. Submit the resulting sentence file.
1. NKpq
2. KNpq
3. NAKpqArs
4. NAKpAqrs
5. NAKApqrs

3.28 (Boolean searches) You have probably heard of tools for searching data that permit “full
✎ Boolean searches.” This means that the search language allows you to use the Boolean connec-
tives we have been studying. Before you can do a search, though, you have to figure out what
dialect the search mechanism uses. Let’s try out a search engine that uses !, &, and |.
Using your web browser, go to the Alta Vista search page at http://www.altavista.com/.
Click on the link for Advanced Search, since only the advanced search page allows full
Boolean searches. Suppose you want to find information about the use of Tarski’s World at
other colleges and universities.
1. Type tarski in the search field and then click Search. You will find that there are way
too many web sites that mention Tarski, who was, after all, a famous logician.
2. Type tarski’s world in the search field and then click Search. This will find all sites
that contain the words “tarski’s world.” There are still quite a few, and many of them
are not colleges or universities, but book dealers and such. We need to exclude sites
whose web addresses end with “.com” or “.org.”


Figure 3.1: Boolean combinations of solids: A ∨ B, A ∧ ¬B, ¬A ∧ B, and A ∧ B.

3. Type tarski’s world & !(domain:com | domain:org) in the search field and then click
Search. This will find all sites that contain “tarski’s world” but do not contain either
the “.com” or “.org” domains in their web addresses.
4. Type tarski’s world & !(domain:com | domain:org) & (fitch | proof) in the search field
and then click Search. This will find all sites that contain “tarski’s world,” do not
contain either the “.com” or “.org” domains in their web addresses, but do contain
either the word “fitch” or the word “proof.”
5. Construct a search to find any web pages containing references to Tarski’s World and
Boole, but neither Fitch nor Submit. Print the list of sites that result from your search,
write your Boolean expression at the top, and turn it in to your instructor.

3.29 (Boolean solids) When we do a Boolean search, we are really using a generalization of the
✎ Boolean truth functions. We specify a Boolean combination of words as a criterion for finding
documents that contain (or do not contain) those words. Another generalization of the Boolean
operations is to spatial objects. In Figure 3.1 we show four ways to combine a vertical cylinder
(A) with a horizontal cylinder (B) to yield a new solid. Give an intuitive explanation of how the
Boolean connectives are being applied in this example. Then describe what the object ¬(A ∧ B)
would be like and explain why we didn’t give you a picture of this solid.

Chapter 4

The Logic of Boolean Connectives

The connectives ∧, ∨, and ¬ are truth-functional connectives. Recall what
this means: the truth value of a complex sentence built by means of one of
these symbols can be determined simply by looking at the truth values of the
sentence’s immediate constituents. So to know whether P ∨ Q is true, we need
only know the truth values of P and Q. This particularly simple behavior is
what allows us to capture the meanings of truth-functional connectives using
truth tables.
Other connectives we could study are not this simple. Consider the sen-
tence it is necessarily the case that S. Since some true claims are necessarily
true, that is, could not have been false (for instance, a = a), while other true
claims are not necessarily true (for instance, Cube(a)), we can’t figure out the
truth value of the original sentence if we are only told the truth value of its
constituent sentence S. It is necessarily the case, unlike it is not the case, is
not truth-functional.
The fact that the Boolean connectives are truth functional makes it very
easy to explain their meanings. It also provides us with a simple but power-
ful technique to study their logic. The technique is an extension of the truth
tables used to present the meanings of the connectives. It turns out that we
can often calculate the logical properties of complex sentences by construct-
ing truth tables that display all possible assignments of truth values to the
atomic constituents from which the sentences are built. The technique can,
for example, tell us that a particular sentence S is a logical consequence of
some premises P1 , . . . , Pn . And since logical consequence is one of our main
concerns, the technique is an important one to learn.
In this chapter we will discuss what truth tables can tell us about three
related logical notions: the notions of logical consequence, logical equivalence,
and logical truth. Although we’ve already discussed logical consequence at
some length, we’ll tackle these in reverse order, since the related truth table
techniques are easier to understand in that order.


Section 4.1
Tautologies and logical truth

We said that a sentence S is a logical consequence of a set of premises P1, . . . , Pn
if it is impossible for the premises all to be true while the conclusion S is false.
That is, the conclusion must be true if the premises are true.
Notice that according to this definition there are some sentences that are
logical consequences of any set of premises, even the empty set. This will be
true of any sentence whose truth is itself a logical necessity. For example,
given our assumptions about fol, the sentence a = a is necessarily true. So
of course, no matter what your initial premises may be, it will be impossible
for those premises to be true and for a = a to be false—simply because it is
impossible for a = a to be false! We will call such logically necessary sentences
logical truths.
The intuitive notions of logical possibility and logical necessity have al-
ready come up several times in this book in characterizing valid arguments
and the consequence relation. But this is the first time we have applied them
to individual sentences. Intuitively, a sentence is logically possible if it could
be (or could have been) true, at least on logical grounds. There might be some
other reasons, say physical, why the statement could not be true, but there are
no logical reasons preventing it. For example, it is not physically possible to
go faster than the speed of light, though it is logically possible: they do it on
Star Trek all the time. On the other hand, it is not even logically possible for
an object not to be identical to itself. That would simply violate the meaning
of identity. The way it is usually put is that a claim is logically possible if
there is some logically possible circumstance (or situation or world) in which
the claim is true. Similarly, a sentence is logically necessary if it is true in
every logically possible circumstance.
These notions are quite important, but they are also annoyingly vague.
As we proceed through this book, we will introduce several precise concepts
that help us clarify these notions. The first of these precise concepts, which
we introduce in this section, is the notion of a tautology.
How can a precise concept help clarify an imprecise, intuitive notion? Let’s
think for a moment about the blocks language and the intuitive notion of
logical possibility. Presumably, a sentence of the blocks language is logically
possible if there could be a blocks world in which it is true. Clearly, if we can
construct a world in Tarski’s World that makes it true, then this demonstrates
that the sentence is indeed logically possible. On the other hand, there are
logically possible sentences that can’t be made true in the worlds you can


build with Tarski’s World. For example, the sentence

¬(Tet(b) ∨ Cube(b) ∨ Dodec(b))

is surely logically possible, say if b were a sphere or an icosahedron. You can’t
build such a world with Tarski’s World, but that is not logic’s fault, just as it’s
not logic’s fault that you can’t travel faster than the speed of light. Tarski’s
World has its non-logical laws and constraints just like the physical world.
The Tarski’s World program gives rise to a precise notion of possibility
for sentences in the blocks language. We could say that a sentence is tw-
possible if it is true in some world that can be built using the program. Our
observations in the preceding paragraph could then be rephrased by saying
that every tw-possible sentence is logically possible, but that the reverse is
not in general true. Some logically possible sentences are not tw-possible.
It may seem surprising that we can make such definitive claims involving
a vague notion like logical possibility. But really, it’s no more surprising than
the fact that we can say with certainty that a particular apple is red, even
though the boundaries of the color red are vague. There may be cases where
it is hard to decide whether something is red, but this doesn’t mean there
aren’t many perfectly clear-cut cases.
Tarski’s World gives us a precise method for showing that a sentence of
the blocks language is logically possible, since whatever is possible in Tarski’s
World is logically possible. In this section, we will introduce another precise
method, one that can be used to show that a sentence built up using truth-
functional connectives is logically necessary. The method uses truth tables
to show that certain sentences cannot possibly be false, due simply to the
meanings of the truth-functional connectives they contain. Like the method
given to us by Tarski’s World, the truth table method works only in one
direction: when it says that a sentence is logically necessary, then it definitely
is. On the other hand, some sentences are logically necessary for reasons that
the truth table method cannot detect.
Suppose we have a complex sentence S with n atomic sentences, A1 , . . . , An .
To build a truth table for S, one writes the atomic sentences A1 ,. . . , An across
the top of the page, with the sentence S to their right. It is customary to
draw a double line separating the atomic sentences from S. Your truth table
will have one row for every possible way of assigning true and false to the
atomic sentences. Since there are two possible assignments to each atomic
sentence, there will be 2ⁿ rows. Thus if n = 1 there will be two rows, if n = 2
there will be four rows, if n = 3 there will be eight rows, if n = 4 there will
be sixteen rows, and so forth. It is customary to make the leftmost column
have the top half of the rows marked true, the second half false. The next


row splits each of these, marking the first and third quarters of the rows with
true, the second and fourth quarters with false, and so on. This will result
in the last column having true and false alternating down the column.
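This ordering is easy to produce mechanically. Here is a small Python sketch (our own illustration, not the Boole program that comes with the book) that prints the reference columns for three atomic sentences in exactly the customary order:

    from itertools import product

    n = 3   # number of atomic sentences
    for row in product(['T', 'F'], repeat=n):
        print(' '.join(row))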
Let’s start by looking at a very simple example of a truth table, one for the
sentence Cube(a) ∨ ¬Cube(a). Since this sentence is built up from one atomic
sentence, our truth table will contain two rows, one for the case where Cube(a)
is true and one for when it is false.

Cube(a)    Cube(a) ∨ ¬Cube(a)
   T
   F

In a truth table, the column or columns under the atomic sentences are
called reference columns. Once the reference columns have been filled in, we
are ready to fill in the remainder of the table. To do this, we construct columns
of T’s and F’s beneath each connective of the target sentence S. These columns
are filled in one by one, using the truth tables for the various connectives. We
start by working on connectives that apply only to atomic sentences. Once
this is done, we work on connectives that apply to sentences whose main
connective has already had its column filled in. We continue this process until
the main connective of S has had its column filled in. This is the column that
shows how the truth of S depends on the truth of its atomic parts.
Our first step in filling in this truth table, then, is to calculate the truth
values that should go in the column under the innermost connective, which in
this case is the ¬. We do this by referring to the truth values in the reference
column under Cube(a), switching values in accord with the meaning of ¬.

Cube(a)    ¬Cube(a)    Cube(a) ∨ ¬Cube(a)
   t           F
   f           T

Once this column is filled in, we can determine the truth values that should
go under the ∨ by looking at the values under Cube(a) and those under the
negation sign, since these correspond to the values of the two disjuncts to
which ∨ is applied. (Do you understand this?) Since there is at least one T in
each row, the final column of the truth table looks like this.

Cube(a)    ¬Cube(a)    Cube(a) ∨ ¬Cube(a)
   t           f               T
   f           t               T


Not surprisingly, our table tells us that the sentence Cube(a) ∨ ¬Cube(a)
cannot be false. It is what we will call a tautology, an especially simple kind
of logical truth. We will give a precise definition of tautologies later. Our
sentence is in fact an instance of a principle, P ∨ ¬P, that is known as the law
of the excluded middle. Every instance of this principle is a tautology.
Let’s next look at a more complex truth table, one for a sentence built up
from three atomic sentences.
(Cube(a) ∧ Cube(b)) ∨ ¬Cube(c)
In order to make our table easier to read, we will abbreviate the atomic
sentences by A, B, and C. Since there are three atomic sentences, our table
will have eight (2³) rows. Look carefully at how we’ve arranged the T’s and
F’s and convince yourself that every possible assignment is represented by one
of the rows.
A B C (A ∧ B) ∨ ¬C
T T T
T T F
T F T
T F F
F T T
F T F
F F T
F F F

Since two of the connectives in the target sentence apply to atomic sen-
tences whose values are specified in the reference column, we can fill in these
columns using the truth tables for ∧ and ¬ given earlier.

A  B  C    A ∧ B    ¬C    (A ∧ B) ∨ ¬C
t  t  t      T       F
t  t  f      T       T
t  f  t      F       F
t  f  f      F       T
f  t  t      F       F
f  t  f      F       T
f  f  t      F       F
f  f  f      F       T

This leaves only one connective, the main connective of the sentence. We fill
in the column under it by referring to the two columns just completed, using
the truth table for ∨.


A  B  C    A ∧ B    ¬C    (A ∧ B) ∨ ¬C
t  t  t      t       f          T
t  t  f      t       t          T
t  f  t      f       f          F
t  f  f      f       t          T
f  t  t      f       f          F
f  t  f      f       t          T
f  f  t      f       f          F
f  f  f      f       t          T

When we inspect the final column of this table, the one beneath the con-
nective ∨, we see that the sentence will be false in any circumstance where
Cube(c) is true and one of Cube(a) or Cube(b) is false. This table shows that
our sentence is not a tautology. Furthermore, since there clearly are blocks
worlds in which c is a cube and either a or b is not, the claim made by our
original sentence is not logically necessary.
Let’s look at one more example, this time for a sentence of the form

¬(A ∧ (¬A ∨ (B ∧ C))) ∨ B

This sentence, though it has the same number of atomic constituents, is con-
siderably more complex than our previous example. We begin the truth table
by filling in the columns under the two connectives that apply directly to
atomic sentences.
A  B  C    ¬A    B ∧ C    ¬(A ∧ (¬A ∨ (B ∧ C))) ∨ B
t  t  t     F      T
t  t  f     F      F
t  f  t     F      F
t  f  f     F      F
f  t  t     T      T
f  t  f     T      F
f  f  t     T      F
f  f  f     T      F

We can now fill in the column under the ∨ that connects ¬A and B ∧ C by
referring to the columns just filled in. This column will have an F in it if and
only if both of the constituents are false.


A  B  C    ¬A    B ∧ C    ¬A ∨ (B ∧ C)    ¬(A ∧ (¬A ∨ (B ∧ C))) ∨ B
t  t  t     f      t            T
t  t  f     f      f            F
t  f  t     f      f            F
t  f  f     f      f            F
f  t  t     t      t            T
f  t  f     t      f            T
f  f  t     t      f            T
f  f  f     t      f            T

We now fill in the column under the remaining ∧. To do this, we need to


refer to the reference column under A, and to the just completed column. The
best way to do this is to run two fingers down the relevant columns and enter
a T in only those rows where both your fingers are pointing to T’s.

A  B  C    ¬A    B ∧ C    ¬A ∨ (B ∧ C)    A ∧ (¬A ∨ (B ∧ C))    ¬(A ∧ (¬A ∨ (B ∧ C))) ∨ B
t  t  t     f      t            t                  T
t  t  f     f      f            f                  F
t  f  t     f      f            f                  F
t  f  f     f      f            f                  F
f  t  t     t      t            t                  F
f  t  f     t      f            t                  F
f  f  t     t      f            t                  F
f  f  f     t      f            t                  F

We can now fill in the column for the remaining ¬ by referring to the previously
completed column. The ¬ simply reverses T’s and F’s.

A  B  C    ¬A    B ∧ C    ¬A ∨ (B ∧ C)    A ∧ (¬A ∨ (B ∧ C))    ¬(A ∧ (¬A ∨ (B ∧ C)))
t  t  t     f      t            t                  t                      F
t  t  f     f      f            f                  f                      T
t  f  t     f      f            f                  f                      T
t  f  f     f      f            f                  f                      T
f  t  t     t      t            t                  f                      T
f  t  f     t      f            t                  f                      T
f  f  t     t      f            t                  f                      T
f  f  f     t      f            t                  f                      T

Finally, we can fill in the column under the main connective of our sentence.
We do this with the two-finger method: running our fingers down the reference
column for B and the just completed column, entering T whenever at least
one finger points to a T.


A  B  C    ¬A    B ∧ C    ¬A ∨ (B ∧ C)    A ∧ (¬A ∨ (B ∧ C))    ¬(A ∧ (¬A ∨ (B ∧ C)))    ¬(A ∧ (¬A ∨ (B ∧ C))) ∨ B
t  t  t     f      t            t                  t                      f                           T
t  t  f     f      f            f                  f                      t                           T
t  f  t     f      f            f                  f                      t                           T
t  f  f     f      f            f                  f                      t                           T
f  t  t     t      t            t                  f                      t                           T
f  t  f     t      f            t                  f                      t                           T
f  f  t     t      f            t                  f                      t                           T
f  f  f     t      f            t                  f                      t                           T

We will say that a tautology is any sentence whose truth table has only T’s
in the column under its main connective. Thus, we see from the final column
of the above table that any sentence of the form

¬(A ∧ (¬A ∨ (B ∧ C))) ∨ B

is a tautology.
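The same verdict can be reached mechanically: since the sentence is truth functional, checking all eight assignments settles the matter. Here is a brief Python sketch (an informal check of our own, not the Boole program used below):

    from itertools import product

    def sentence(a, b, c):
        # ¬(A ∧ (¬A ∨ (B ∧ C))) ∨ B
        return (not (a and ((not a) or (b and c)))) or b

    print(all(sentence(a, b, c)
              for a, b, c in product([True, False], repeat=3)))   # True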

You try it
................................................................
1. Open the program Boole from the software that came with the book. We
will use Boole to reconstruct the truth table just discussed. The first thing
to do is enter the sentence ¬(A ∧ (¬A ∨ (B ∧ C))) ∨ B at the top, right of
the table. To do this, use the toolbar to enter the logical symbols and
the keyboard to type the letters A, B, and C. (You can also enter the
logical symbols from the keyboard by typing &, |, and ∼ for ∧, ∨, and
¬, respectively. If you enter the logical symbols from the keyboard, make
sure you add spaces before and after the binary connectives so that the
columns under them will be reasonably spaced out.) If your sentence is
well formed, the small “(1)” above the sentence will turn green.

2. To build the reference columns, click in the top left portion of the table to
move your insertion point to the top of the first reference column. Enter C
in this column. Then choose Add Column Before from the Table menu
and enter B. Repeat this procedure and add a column headed by A. To fill
in the reference columns, click under each of them in turn, and type the
desired pattern of T’s and F’s.
3. Click under the various connectives in the target sentence, and notice
that turquoise squares appear in the columns whose values the connective
depends upon. Select a column so that the highlighted columns are already


filled in, and fill in that column with the appropriate truth values. Continue
this process until your table is complete. When you are done, click on the
button Verify Table to see if all the values are correct and your table
complete.

4. Once you have a correct and complete truth table, click on the Assess-
ment button under the toolbar. This will allow you to say whether you
think the sentence is a tautology. Say that it is (since it is), and check your
assessment by clicking on the button Verify Assess. Save your table as
Table Tautology 1.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
There is a slight problem with our definition of a tautology, in that it
assumes that every sentence has a main connective. This is almost always the
case, but not in sentences like:

P ∧ Q ∧ R

For purposes of constructing truth tables, we will assume that the main con-
nective in conjunctions with more than two conjuncts is always the rightmost
∧. That is to say, we will construct a truth table for P ∧ Q ∧ R the same way
we would construct a truth table for:

(P ∧ Q) ∧ R

More generally, we construct the truth table for:

P1 ∧ P2 ∧ P3 ∧ . . . ∧ Pn

as if it were “punctuated” like this:

(((P1 ∧ P2 ) ∧ P3) ∧ . . .) ∧ Pn

We treat long disjunctions similarly.
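
As an aside, this left-grouped reading of a long conjunction (or disjunction) is just a left-to-right fold over its parts. A small Python sketch, with made-up truth values of our own choosing, makes the point:

from functools import reduce

# Hypothetical truth values for P1, P2, P3, P4 in one row of a table
conjuncts = [True, True, False, True]

# (((P1 ∧ P2) ∧ P3) ∧ P4, computed by folding ∧ in from the left
print(reduce(lambda x, y: x and y, conjuncts))   # False: one false conjunct falsifies the whole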


Any tautology is logically necessary. After all, its truth is guaranteed simply by its structure and the meanings of the truth-functional connectives.
Tautologies are logical necessities in a very strong sense. Their truth is inde-
pendent of both the way the world happens to be and even the meanings of
the atomic sentences out of which they are composed.
It should be clear, however, that not all logically necessary claims are
tautologies. The simplest example of a logically necessary claim that is not
a tautology is the fol sentence a = a. Since this is an atomic sentence, its
truth table would contain one T and one F. The truth table method is too
coarse to recognize that the row containing the F does not represent a genuine
possibility.


Figure 4.1: The relation between tautologies, logical truths, and tw-necessities.

You should be able to think of any number of sentences that are not
tautological, but which nonetheless seem logically necessary. For example, the
sentence
¬(Larger(a, b) ∧ Larger(b, a))
cannot possibly be false, yet a truth table for the sentence will not show this.
The sentence will be false in the row of the truth table that assigns T to both
Larger(a, b) and Larger(b, a).
We now have two methods for exploring the notions of logical possibility
and necessity, at least for the blocks language. First, there are the blocks
worlds that can be constructed using Tarski’s World. If a sentence is true
in some such world, we have called it tw-possible. Similarly, if a sentence is
true in all worlds that we can construct using Tarski’s World, we can call it
tw-necessary. The second method is that of truth tables. If a sentence comes
out true in every row of its truth table, we could call it tt-necessary or, more
traditionally, tautological. If a sentence is true in at least one row of its truth
table, we will call it tt-possible.
None of these concepts correspond exactly to the vague notions of logical possibility and necessity. But there are clear and important relationships
between the notions. On the necessity side, we know that all tautologies are
logically necessary, and that all logical necessities are tw-necessary. These
relationships are depicted in the “Euler circle” diagram in Figure 4.1, where
we have represented the set of logical necessities as the interior of a circle
with a fuzzy boundary. The set of tautologies is represented by a precise cir-
cle contained inside the fuzzy circle, and the set of Tarski’s World necessities
is represented by a precise circle containing both these circles.
There is, in fact, another method for showing that a sentence is a logical
truth, one that uses the technique of proofs. If you can prove a sentence using
no premises whatsoever, then the sentence is logically necessary. In the fol-
lowing chapters, we will give you some more methods for giving proofs. Using
these, you will be able to prove that sentences are logically necessary without
constructing their truth tables. When we add quantifiers to our language, the
gap between tautologies and logical truths will become very apparent, making
the truth table method less useful. By contrast, the methods of proof that we
discuss later will extend naturally to sentences containing quantifiers.

Remember

Let S be a sentence of fol built up from atomic sentences by means of


truth-functional connectives alone. A truth table for S shows how the
truth of S depends on the truth of its atomic parts.

1. S is a tautology if and only if every row of the truth table assigns true
to S.

2. If S is a tautology, then S is a logical truth (that is, is logically necessary).

3. Some logical truths are not tautologies.

4. S is tt-possible if and only if at least one row of the truth table assigns
true to S.


Exercises

In this chapter, you will often be using Boole to construct truth tables. Although Boole has the capability
of building and filling in reference columns for you, do not use this feature. To understand truth tables,
you need to be able to do this yourself. In later chapters, we will let you use the feature, once you’ve
learned how to do it yourself. The Grade Grinder will, by the way, be able to tell if Boole constructed
the reference columns.
4.1 If you skipped the You try it section, go back and do it now. Submit the file Table Tautology 1.

4.2 Assume that A, B, and C are atomic sentences. Use Boole to construct truth tables for each of
➶ the following sentences and, based on your truth tables, say which are tautologies. Name your
tables Table 4.2.x, where x is the number of the sentence.
1. (A ∧ B) ∨ (¬A ∨ ¬B)
2. (A ∧ B) ∨ (A ∧ ¬B)
3. ¬(A ∧ B) ∨ C
4. (A ∨ B) ∨ ¬(A ∨ (B ∧ C))
4.3 In Exercise 4.2 you should have discovered that two of the four sentences are tautologies, and
✎? hence logical truths.
1. Suppose you are told that the atomic sentence A is in fact a logical truth (for example,
a = a). Can you determine whether any additional sentences in the list (1)-(4) are
logically necessary based on this information?
2. Suppose you are told that A is in fact a logically false sentence (for example, a ≠ a).
Can you determine whether any additional sentences in the list (1)-(4) are logical
truths based on this information?
In the following four exercises, use Boole to construct truth tables and indicate whether the sentence
is tt-possible and whether it is a tautology. Remember how you should treat long conjunctions and
disjunctions.
4.4 ➶ ¬(B ∧ ¬C ∧ ¬B)
4.5 ➶ A ∨ ¬(B ∨ ¬(C ∧ A))
4.6 ➶ ¬[¬A ∨ ¬(B ∧ C) ∨ (A ∧ B)]
4.7 ➶ ¬[(¬A ∨ B) ∧ ¬(C ∧ D)]
4.8 Make a copy of the Euler circle diagram on page 102 and place the numbers of the following
✎ sentences in the appropriate region.
1. a = b
2. a = b ∨ b = b


3. a = b ∧ b = b
4. ¬(Large(a) ∧ Large(b) ∧ Adjoins(a, b))
5. Larger(a, b) ∨ ¬Larger(a, b)
6. Larger(a, b) ∨ Smaller(a, b)
7. ¬Tet(a) ∨ ¬Cube(b) ∨ a ≠ b
8. ¬(Small(a) ∧ Small(b)) ∨ Small(a)
9. SameSize(a, b) ∨ ¬(Small(a) ∧ Small(b))
10. ¬(SameCol(a, b) ∧ SameRow(a, b))

4.9 (Logical dependencies) Use Tarski’s World to open Weiner’s Sentences. Fill in a table of the
➶|✎ following sort for the ten sentences in this file.
Sentence tw-possible tt-possible
1
2
3
...
10

1. In the first column, put yes if the sentence is tw-possible, that is, if it is possible to
make the sentence true by building a world, and no otherwise. If your answer is yes
for a sentence, then construct such a world and save it as World 4.9.x, where x is the
number of the sentence in question. Submit these worlds.
2. In the second column, put yes if the sentence is tt-possible, that is, if there is a row of
the truth table which makes the sentence true. If you think any sentence is tt-possible
but not tw-possible, construct a truth table in Boole for the sentence and submit it
as Table 4.9.x, where x is the number of the sentence in question.
3. Are any of the sentences tw-possible but not tt-possible? Explain why not. Turn in
your table and explanation to your instructor.

4.10 Draw an Euler circle diagram similar to the diagram on page 102, but this time showing the
?
✎ relationship between the notions of logical possibility, tw-possibility, and tt-possibility. For
each region in the diagram, indicate an example sentence that would fall in that region. Don’t
forget the region that falls outside all the circles.
All necessary truths are obviously possible: since they are true in all possible circumstances,
they are surely true in some possible circumstances. Given this reflection, where would the
sentences from our previous diagram on page 102 fit into the new diagram?

4.11 Suppose that S is a tautology, with atomic sentences A, B, and C. Suppose that we replace
✎?? all occurrences of A by another sentence P, possibly complex. Explain why the resulting sentence


is still a tautology. This is expressed by saying that substitution preserves tautologicality.


Explain why substitution of atomic sentences does not always preserve logical truth, even
though it preserves tautologies. Give an example.

Section 4.2
Logical and tautological equivalence
In the last chapter, we introduced the notion of logically equivalent sentences,
sentences that have the same truth values in every possible circumstance.
When two sentences are logically equivalent, we also say they have the same
truth conditions, since the conditions under which they come out true or false
are identical.
The notion of logical equivalence, like logical necessity, is somewhat vague,
but not in a way that prevents us from studying it with precision. For here too
we can introduce precise concepts that bear a clear relationship to the intuitive
notion we aim to understand better. The key concept we will introduce in this
section is that of tautological equivalence. Two sentences are tautologically
equivalent if they can be seen to be equivalent simply in virtue of the meanings
of the truth-functional connectives. As you might expect, we can check for
tautological equivalence using truth tables.
Suppose we have two sentences, S and S′, that we want to check for tau-
tological equivalence. What we do is construct a truth table with a reference
column for each of the atomic sentences that appear in either of the two sen-
tences. To the right, we write both S and S′, with a vertical line separating
them, and fill in the truth values under the connectives as usual. We call this
a joint truth table for the sentences S and S′. When the joint truth table is
completed, we compare the column under the main connective of S with the
column under the main connective of S′. If these columns are identical, then
we know that the truth conditions of the two sentences are the same.
Let’s look at an example. Using A and B to stand for arbitrary atomic
sentences, let us test the first DeMorgan law for tautological equivalence. We
would do this by means of the following joint truth table.
A B ¬ (A ∧ B) ¬A ∨ ¬B
t t F t f F f
t f T f f T t
f t T f t T f
f f T f t T t

In this table, the columns in bold correspond to the main connectives of the


two sentences. Since these columns are identical, we know that the sentences
must have the same truth values, no matter what the truth values of their
atomic constituents may be. This holds simply in virtue of the structure of
the two sentences and the meanings of the Boolean connectives. So, the two
sentences are indeed tautologically equivalent.
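
Since a joint truth table is just a row-by-row comparison, the test is easy to mechanize. The following Python sketch is our own illustration (the helper name taut_equivalent is made up); it checks the first DeMorgan law by comparing the two sentences on all four rows:

from itertools import product

def taut_equivalent(s1, s2, n):
    # Compare the two sentences on every row of a joint truth table with n atomic sentences
    return all(s1(*row) == s2(*row) for row in product([True, False], repeat=n))

# The first DeMorgan law: ¬(A ∧ B) versus ¬A ∨ ¬B
print(taut_equivalent(lambda A, B: not (A and B),
                      lambda A, B: (not A) or (not B), 2))   # True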
Let’s look at a second example, this time to see whether the sentence
¬((A ∨ B) ∧ ¬C) is tautologically equivalent to (¬A ∧ ¬B) ∨ C. To construct a
truth table for this pair of sentences, we will need eight rows, since there are
three atomic sentences. The completed table looks like this.

A B C ¬ ((A ∨ B) ∧ ¬C) (¬A ∧ ¬B) ∨ C


t t t T t f f f f f T
t t f F t t t f f f F
t f t T t f f f f t T
t f f F t t t f f t F
f t t T t f f t f f T
f t f F t t t t f f F
f f t T f f f t t t T
f f f T f f t t t t T

Once again, scanning the final columns under the two main connectives reveals
that the sentences are tautologically equivalent, and hence logically equivalent.
All tautologically equivalent sentences are logically equivalent, but the
reverse does not in general hold. Indeed, the relationship between these notions is the same as that between tautologies and logical truths. Tautological equivalence is a strict form of logical equivalence, one that won’t apply to
some logically equivalent pairs of sentences. Consider the pair of sentences:

a = b ∧ Cube(a)
a = b ∧ Cube(b)

These sentences are logically equivalent, as is demonstrated in the following


informal proof.

Proof: Suppose that the sentence a = b ∧ Cube(a) is true. Then


a = b and Cube(a) are both true. Using the indiscernibility of identi-
cals (Identity Elimination), we know that Cube(b) is true, and hence
that a = b ∧ Cube(b) is true. So the truth of a = b ∧ Cube(a) logically
implies the truth of a = b ∧ Cube(b).
The reverse holds as well. For suppose that a = b ∧ Cube(b) is true.
Then by symmetry of identity, we also know b = a. From this and
Cube(b) we can conclude Cube(a), and hence that a = b ∧ Cube(a)


is true. So the truth of a = b ∧ Cube(b) implies the truth of a = b ∧


Cube(a).
Thus a = b ∧ Cube(a) is true if and only if a = b ∧ Cube(b) is true.

This proof shows that these two sentences have the same truth values in
any possible circumstance. For if one were true and the other false, this would
contradict the conclusion of one of the two parts of the proof. But consider
what happens when we construct a joint truth table for these sentences. Three atomic sentences appear in the pair of sentences, so the joint table will look
like this. (Notice that the ordinary truth table for either of the sentences alone
would have only four rows, but that the joint table must have eight. Do you
understand why?)

a=b Cube(a) Cube(b) a = b ∧ Cube(a) a = b ∧ Cube(b)


t t t T T
t t f T F
t f t F T
t f f F F
f t t F F
f t f F F
f f t F F
f f f F F

This table shows that the two sentences are not tautologically equivalent,
since it assigns the sentences different values in the second and third rows.
Look closely at those two rows to see what’s going on. Notice that in both
of these rows, a = b is assigned T while Cube(a) and Cube(b) are assigned
different truth values. Of course, we know that neither of these rows corre-
sponds to a logically possible circumstance, since if a and b are identical, the
truth values of Cube(a) and Cube(b) must be the same. But the truth table
method doesn’t detect this, since it is sensitive only to the meanings of the
truth-functional connectives.
As we expand our language to include quantifiers, we will find many logical
equivalences that are not tautological equivalences. But this is not to say
there aren’t a lot of important and interesting tautological equivalences. We’ve
already highlighted three in the last chapter: double negation and the two
DeMorgan equivalences. We leave it to you to check that these principles are,
in fact, tautological equivalences. In the next section, we will introduce other
principles and see how they can be used to simplify sentences of fol.


Remember

Let S and S′ be sentences of fol built up from atomic sentences


by means of truth-functional connectives alone. To test for tautological
equivalence, we construct a joint truth table for the two sentences.

1. S and S′ are tautologically equivalent if and only if every row of the


joint truth table assigns the same values to S and S′.

2. If S and S′ are tautologically equivalent, then they are logically equiv-


alent.

3. Some logically equivalent sentences are not tautologically equivalent.

Exercises

In Exercises 4.12-4.18, use Boole to construct joint truth tables showing that the pairs of sentences are
logically (indeed, tautologically) equivalent. To add a second sentence to your joint truth table, choose
Add Column After from the Table menu. Don’t forget to specify your assessments, and remember,
you should build and fill in your own reference columns.

4.12 (DeMorgan)
➶ ¬(A ∨ B) and ¬A ∧ ¬B

4.13 (Associativity)
➶ (A ∧ B) ∧ C and A ∧ (B ∧ C)

4.14 (Associativity)
➶ (A ∨ B) ∨ C and A ∨ (B ∨ C)

4.15 (Idempotence)
➶ A ∧ B ∧ A and A ∧ B

4.16 (Idempotence)
➶ A ∨ B ∨ A and A ∨ B

4.17 (Distribution)
➶ A ∧ (B ∨ C) and (A ∧ B) ∨ (A ∧ C)

4.18 (Distribution)
➶ A ∨ (B ∧ C) and (A ∨ B) ∧ (A ∨ C)

4.19 (tw-equivalence) Suppose we introduced the notion of tw-equivalence, saying that two sen-
✎ tences of the blocks language are tw-equivalent if and only if they have the same truth value
in every world that can be constructed in Tarski’s World.
1. What is the relationship between tw-equivalence, tautological equivalence and logical
equivalence?
2. Give an example of a pair of sentences that are tw-equivalent but not logically equiv-
alent.


Section 4.3
Logical and tautological consequence
Our main concern in this book is with the logical consequence relation, of
which logical truth and logical equivalence can be thought of as very special
cases: A logical truth is a sentence that is a logical consequence of any set
of premises, and logically equivalent sentences are sentences that are logical
consequences of one another.
As you’ve probably guessed, truth tables allow us to define a precise notion
of tautological consequence, a strict form of logical consequence, just as they
allowed us to define tautologies and tautological equivalence, strict forms of
logical truth and logical equivalence.
Let’s look at the simple case of two sentences, P and Q, both built from
atomic sentences by means of truth-functional connectives. Suppose you want
to know whether Q is a consequence of P. Create a joint truth table for P
and Q, just like you would if you were testing for tautological equivalence.
After you fill in the columns for P and Q, scan the columns under the main
connectives for these sentences. In particular, look at every row of the table in
which P is true. If each such row is also one in which Q is true, then Q is said
to be a tautological consequence of P. The truth table shows that if P is true,
then Q must be true as well, and that this holds simply due to the meanings
of the truth-functional connectives.
Just as tautologies are logically necessary, so too any tautological conse-
quence Q of a sentence P must also be a logical consequence of P. We can
see this by proving that if Q is not a logical consequence of P, then it can’t
possibly pass our truth table test for tautological consequence.
Proof: Suppose Q is not a logical consequence of P. Then by our def-
inition of logical consequence, there must be a possible circumstance
in which P is true but Q is false. This circumstance will determine
truth values for the atomic sentences in P and Q, and these values
will correspond to a row in the joint truth table for P and Q, since
all possible assignments of truth values to the atomic sentences are
represented in the truth table. Further, since P and Q are built up
from the atomic sentences by truth-functional connectives, and since
the former is true in the original circumstance and the latter false,
P will be assigned T in this row and Q will be assigned F. Hence, Q
is not a tautological consequence of P.
Let’s look at a very simple example. Suppose we wanted to check to see
whether A ∨ B is a consequence of A ∧ B. The joint truth table for these sentences looks like this.


A B A∧B A∨B
t t T T
t f F T
f t F T
f f F F
When you compare the columns under these two sentences, you see that the
sentences are most definitely not tautologically equivalent. No surprise. But
we are interested in whether A ∧ B logically implies A ∨ B, and so the only
rows we care about are those in which the former sentence is true. A ∧ B is only
true in the first row, and A ∨ B is also true in that row. So this table shows
that A ∨ B is a tautological consequence (and hence a logical consequence) of
A ∧ B.
Notice that our table also shows that A ∧ B is not a tautological conse-
quence of A ∨ B, since there are rows in which the latter is true and the former
false. Does this show that A ∧ B is not a logical consequence of A ∨ B? Well,
we have to be careful. A ∧ B is not in general a logical consequence of A ∨ B,
but it might be in certain cases, depending on the sentences A and B. We’ll
ask you to come up with an example in the exercises.
Not every logical consequence of a sentence is a tautological consequence
of that sentence. For example, the sentence a = c is a logical consequence of the sentence (a = b ∧ b = c), but it is not a tautological consequence of it.
Think about the row that assigns T to the atomic sentences a = b and b = c,
but F to the sentence a = c. Clearly this row, which prevents a = c from being
a tautological consequence of (a = b ∧ b = c), does not respect the meanings
of the atomic sentences out of which the sentences are built. It does not
correspond to a genuinely possible circumstance, but the truth table method
does not detect this.
The truth table method of checking tautological consequence is not re-
stricted to just one premise. You can apply it to arguments with any number
of premises P1 , . . . , Pn and conclusion Q. To do so, you have to construct a
joint truth table for all of the sentences P1 , . . . , Pn and Q. Once you’ve done
this, you need to check every row in which the premises all come out true to
see whether the conclusion comes out true as well. If so, the conclusion is a
tautological consequence of the premises.
Let’s try this out on a couple of simple examples. First, suppose we want
to check to see whether B is a consequence of the two premises A ∨ B and ¬A.
The joint truth table for these three sentences comes out like this. (Notice
that since one of our target sentences, the conclusion B, is atomic, we have
simply repeated the reference column when this sentence appears again on


the right.)

A B A∨B ¬A B
t t T F T
t f T F F
f t T T T
f f F T F

Scanning the columns under our two premises, A ∨ B and ¬A, we see that
there is only one row where both premises come out true, namely the third.
And in the third row, the conclusion B also comes out true. So B is indeed a
tautological (and hence logical) consequence of these premises.
In both of the examples we’ve looked at so far, there has been only one
row in which the premises all came out true. This makes the arguments easy
to check for validity, but it’s not at all something you can count on. For
example, suppose we used the truth table method to check whether A ∨ C
is a consequence of A ∨ ¬B and B ∨ C. The joint truth table for these three
sentences looks like this.
A B C A ∨ ¬B B∨C A∨C
t t t T f T T
t t f T f T T
t f t T t T T
t f f T t F T
f t t F f T T
f t f F f T F
f f t T t T T
f f f T t F F

Here, there are four rows in which the premises, A ∨ ¬B and B ∨ C, are
both true: the first, second, third, and seventh. But in each of these rows the
conclusion, A ∨ C, is also true. The conclusion is true in other rows as well, but
we don’t care about that. This inference, from A ∨ ¬B and B ∨ C to A ∨ C, is
logically valid, and is an instance of an important pattern known in computer
science as resolution.
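
The same row-by-row idea is easy to mechanize for any number of premises. Here is a rough Python sketch (our own helper, not part of Boole or Fitch) that checks the resolution inference above by looking only at rows where both premises come out true:

from itertools import product

def taut_consequence(premises, conclusion, n):
    # In every row where all premises are true, the conclusion must be true as well
    return all(conclusion(*row)
               for row in product([True, False], repeat=n)
               if all(p(*row) for p in premises))

# From A ∨ ¬B and B ∨ C, infer A ∨ C (the resolution pattern)
premises = [lambda A, B, C: A or (not B),
            lambda A, B, C: B or C]
print(taut_consequence(premises, lambda A, B, C: A or C, 3))   # True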
We should look at an example where the truth table method reveals that
the conclusion is not a tautological consequence of the premises. Actually, the
last truth table will serve this purpose. For this table also shows that the
sentence A ∨ ¬B is not a tautological consequence of the two premises B ∨ C
and A ∨ C. Can you find the row that shows this? (Hint: It’s got to be the
first, second, third, fifth, or seventh, since these are the rows in which B ∨ C
and A ∨ C are both true.)


Remember

Let P1 , . . . , Pn and Q be sentences of fol built up from atomic sentences


by means of truth functional connectives alone. Construct a joint truth
table for all of these sentences.

1. Q is a tautological consequence of P1 , . . . , Pn if and only if every row


that assigns T to each of P1, . . . , Pn also assigns T to Q.

2. If Q is a tautological consequence of P1 , . . . , Pn , then Q is also a logical


consequence of P1, . . . , Pn .

3. Some logical consequences are not tautological consequences.

Exercises

For each of the arguments below, use the truth table method to determine whether the conclusion is a
tautological consequence of the premises. Your truth table for Exercise 4.24 will be fairly large. It’s good
for the soul to build a large truth table every once in a while. Be thankful you have Boole to help you.
(But make sure you build your own reference columns!)

4.20 (Tet(a) ∧ Small(a)) ∨ Small(b) 4.21 Taller(claire, max) ∨ Taller(max, claire)


➶ ➶ Taller(claire, max)
Small(a) ∨ Small(b)
¬Taller(max, claire)

4.22 Large(a)
➶ Cube(a) ∨ Dodec(a)
(Cube(a) ∧ Large(a)) ∨ (Dodec(a) ∧ Large(a))

4.23 A ∨ ¬B
➶? B ∨ C
C ∨ D
A ∨ ¬D

4.24 ¬A ∨ B ∨ C
➶? ¬C ∨ D
¬(B ∧ ¬E)
D ∨ ¬A ∨ E

4.25 Give an example of two different sentences A and B in the blocks language such that A ∧ B is
?
✎ a logical consequence of A ∨ B. [Hint: Note that A ∧ A is a logical consequence of A ∨ A, but
here we insist that A and B be distinct sentences.]


Section 4.4
Tautological consequence in Fitch
We hope you solved Exercise 4.24, because the solution gives you a sense
of both the power and the drawbacks of the truth table method. We were
tempted to ask you to construct a table requiring 64 rows, but thought better
of it. Constructing large truth tables may build character, but like most things
that build character, it’s a drag.
Checking to see if Q is a tautological consequence of P1 , . . . , Pn is a me-
chanical procedure. If the sentences are long it may require a lot of tedious
work, but it doesn’t take any originality. This is just the sort of thing that
computers are good at. Because of this, we have built a mechanism into Fitch,
called Taut Con, that is similar to Ana Con but checks to see whether a
sentence is a tautological consequence of the sentences cited in support. Like
Ana Con, Taut Con is not really an inference rule (we will introduce infer-
ence rules for the Boolean connectives in Chapter 6), but is useful for quickly
testing whether one sentence follows tautologically from others.

You try it
................................................................
1. Launch Fitch and open the file Taut Con 1. In this file you will find an
argument that has the same form as the argument in Exercise 4.23. (Ignore
the two goal sentences. We’ll get to them later.) Move the focus slider to
the last step of the proof. From the Rule? menu, go down to the Con
submenu and choose Taut Con.
2. Now cite the three premises as support for this sentence and check the
step. The step will not check out since this sentence is not a tautological
consequence of the premises, as you discovered if you did Exercise 4.23,
which has the same form as this inference.
3. Edit the step that did not check out to read:

Home(max) ∨ Hungry(carl)

This sentence is a tautological consequence of two of the premises. Figure


out which two and cite just them. If you cited the right two, the step
should check out. Try it.
4. Add another step to the proof and enter the sentence:

Hungry(carl) ∨ (Home(max) ∧ Hungry(pris))


Use Taut Con to see if this sentence follows tautologically from the three
premises. Choose Verify Proof from the Proof menu. You will find that
although the step checks out, the goal does not. This is because we have
put a special constraint on your use of Taut Con in this exercise.

5. Choose See Goal Constraints from the Goal menu. You will find that
in this proof, you are allowed to use Taut Con, but can only cite two
or fewer support sentences when you use it. Close the goal window to get
back to the proof.

6. The sentence you entered also follows from the sentence immediately above
it plus just one of the three premises. Uncite the three premises and see
if you can get the step to check out citing just two sentences in support.
Once you succeed, verify the proof and save it as Proof Taut Con 1
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
You are probably curious about the relationship between Taut Con and
Ana Con—and for that matter, what the other mysterious item on the Con
menu, FO Con, might do. These are in fact three increasingly strong methods that Fitch uses to test for logical consequence. Taut Con is the weakest. It
checks to see whether the current step follows from the cited sentences in virtue
of the meanings of the truth-functional connectives. It ignores the meanings of
any predicates that appear in the sentence and, when we introduce quantifiers
into the language, it will ignore those as well.
FO Con, which stands for “first-order consequence,” pays attention to the
truth-functional connectives, the quantifiers, and the identity predicate when
it checks for consequence. FO Con would, for example, identify a = c as a
consequence of a = b ∧ b = c. It is stronger than Taut Con in the sense that
any consequence that Taut Con recognizes as valid will also be recognized
by FO Con. But it may take longer since it has to apply a more complex
procedure, thanks to identity and the quantifiers. After we get to quantifiers,
we’ll talk more about the procedure it is applying.
The strongest rule of the three is Ana Con, which tries to recognize con-
sequences due to truth-functional connectives, quantifiers, identity, and most
of the blocks language predicates. (Ana Con ignores Between and Adjoins,
simply for practical reasons.) Any inference that checks out using either Taut
Con or FO Con should, in principle, check out using Ana Con as well. In
practice, though, the procedure that Ana Con uses may bog down or run
out of memory in cases where the first two have no trouble.
As we said before, you should only use a procedure from the Con menu
when the exercise makes clear that the procedure is allowed in the solution.


Moreover if an exercise asks you to use Taut Con, don’t use FO Con or Ana
Con instead, even if these more powerful rules seem to work just as well. If
you are in doubt about which rules you are allowed to use, choose See Goal
Constraints from the Goal menu.

You try it
................................................................
1. Open the file Taut Con 2. You will find a proof containing ten steps whose
rules have not been specified.
2. Focus on each step in turn. You will find that the supporting steps have
already been cited. Convince yourself that the step follows from the cited
sentences. Is it a tautological consequence of the sentences cited? If so,
change the rule to Taut Con and see if you were right. If not, change it
to Ana Con and see if it checks out. (If Taut Con will work, make sure
you use it rather than the stronger Ana Con.)
3. When all of your steps check out using Taut Con or Ana Con, go back
and find the one step whose rule can be changed from Ana Con to the
weaker FO Con.
4. When each step checks out using the weakest Con rule possible, save your
proof as Proof Taut Con 2.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations

Exercises

4.26 If you skipped the You try it sections, go back and do them now. Submit the files Proof Taut
➶ Con 1 and Proof Taut Con 2.
For each of the following arguments, decide whether the conclusion is a tautological consequence of the
premises. If it is, submit a proof that establishes the conclusion using one or more applications of Taut
Con. Do not cite more than two sentences at a time for any of your applications of Taut Con. If
the conclusion is not a consequence of the premises, submit a counterexample world showing that the
argument is not valid.

4.27 Cube(a) ∨ Cube(b)
➶ Dodec(c) ∨ Dodec(d)
¬Cube(a) ∨ ¬Dodec(c)
Cube(b) ∨ Dodec(d)

4.28 Large(a) ∨ Large(b)
➶ Large(a) ∨ Large(c)
Large(a) ∧ (Large(b) ∨ Large(c))


4.29 Small(a) ∨ Small(b)
➶ Small(b) ∨ Small(c)
Small(c) ∨ Small(d)
Small(d) ∨ Small(e)
¬Small(c)
Small(a) ∨ Small(e)

4.30 Tet(a) ∨ ¬(Tet(b) ∧ Tet(c))
➶ ¬(¬Tet(b) ∨ ¬Tet(d))
(Tet(e) ∧ Tet(c)) ∨ (Tet(c) ∧ Tet(d))
Tet(a)

Section 4.5
Pushing negation around
When two sentences are logically equivalent, each is a logical consequence of
the other. As a result, in giving an informal proof, you can always go from
an established sentence to one that is logically equivalent to it. This fact
makes observations like the DeMorgan laws and double negation quite useful
in giving informal proofs.
What makes these equivalences even more useful is the fact that logically
equivalent sentences can be substituted for one another in the context of a larger sentence and the resulting sentences will also be logically equivalent.
An example will help illustrate what we mean. Suppose we start with the
sentence:

¬(Cube(a) ∧ ¬¬Small(a))

By the principle of double negation, we know that Small(a) is logically equiv-


alent to ¬¬Small(a). Since these have exactly the same truth conditions, we
can substitute Small(a) for ¬¬Small(a) in the context of the above sentence,
and the result,

¬(Cube(a) ∧ Small(a))

will be logically equivalent to the original, a fact that you can check by con-
structing a joint truth table for the two sentences.
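
If you would rather not write out the rows by hand, a brute-force comparison does the same job. A quick sketch (our own, treating Cube(a) and Small(a) as two independent atomic sentences):

from itertools import product

before = lambda cube_a, small_a: not (cube_a and (not (not small_a)))   # ¬(Cube(a) ∧ ¬¬Small(a))
after  = lambda cube_a, small_a: not (cube_a and small_a)               # ¬(Cube(a) ∧ Small(a))
print(all(before(c, s) == after(c, s)
          for c, s in product([True, False], repeat=2)))                # True: same truth conditions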
We can state this important fact in the following way. Let’s write S(P)
for an fol sentence that contains the (possibly complex) sentence P as a
component part, and S(Q) for the result of substituting Q for P in S(P). Then
if P and Q are logically equivalent:

P⇔Q

it follows that S(P) and S(Q) are also logically equivalent:


S(P) ⇔ S(Q)
This is known as the principle of substitution of logical equivalents.
We won’t prove this principle at the moment, because it requires a proof
by induction, a style of proof we get to in a later chapter. But the observation
allows us to use a few simple equivalences to do some pretty amazing things.
For example, using only the two DeMorgan laws and double negation, we can
take any sentence built up with ∧, ∨, and ¬, and transform it into one where
¬ applies only to atomic sentences. Another way of expressing this is that any
sentence built out of atomic sentences using the three connectives ∧, ∨, and
¬ is logically equivalent to one built from literals using just ∧ and ∨.
To obtain such a sentence, you simply drive the ¬ in, switching ∧ to ∨,
∨ to ∧, and canceling any pair of ¬’s that are right next to each other, not
separated by any parentheses. Such a sentence is said to be in negation normal form or NNF. Here is an example of a derivation of the negation normal form
of a sentence. We use A, B, and C to stand for any atomic sentences of the
language.
¬((A ∨ B) ∧ ¬C) ⇔ ¬(A ∨ B) ∨ ¬¬C
⇔ ¬(A ∨ B) ∨ C
⇔ (¬A ∧ ¬B) ∨ C
In reading and giving derivations of this sort, remember that the symbol
⇔ is not itself a symbol of the first-order language, but a shorthand way of
saying that two sentences are logically equivalent. In this derivation, the first
step is an application of the first DeMorgan law to the whole sentence. The
second step applies double negation to the component ¬¬C. The final step is
an application of the second DeMorgan law to the component ¬(A ∨ B). The
sentence we end up with is in negation normal form, since the negation signs
apply only to atomic sentences.
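
The recipe just illustrated (drive each ¬ inward with DeMorgan, cancel double negations) is easy to express as a recursive procedure. The sketch below is our own illustration; it assumes a home-made representation in which an atomic sentence is a string and complex sentences are tuples of the forms ('not', S), ('and', S1, S2), ('or', S1, S2). Nothing like this is built into Tarski's World or Boole.

def nnf(s):
    # Drive ¬ inward using the DeMorgan laws and double negation
    if isinstance(s, str):
        return s                                  # an atomic sentence
    op = s[0]
    if op in ('and', 'or'):
        return (op, nnf(s[1]), nnf(s[2]))
    inner = s[1]                                  # op == 'not'
    if isinstance(inner, str):
        return s                                  # negation already applies to an atom
    if inner[0] == 'not':
        return nnf(inner[1])                      # double negation
    flipped = 'or' if inner[0] == 'and' else 'and'
    return (flipped, nnf(('not', inner[1])), nnf(('not', inner[2])))   # DeMorgan

# ¬((A ∨ B) ∧ ¬C)  becomes  (¬A ∧ ¬B) ∨ C
print(nnf(('not', ('and', ('or', 'A', 'B'), ('not', 'C')))))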
We end this section with a list of some additional logical equivalences
that allow us to simplify sentences in useful ways. You already constructed
truth tables for most of these equivalences in Exercises 4.13-4.16 at the end
of Section 4.2.
1. (Associativity of ∧) An fol sentence P ∧ (Q ∧ R) is logically equivalent
to (P ∧ Q) ∧ R, which is in turn equivalent to P ∧ Q ∧ R. That is,
P ∧ (Q ∧ R) ⇔ (P ∧ Q) ∧ R ⇔ P ∧ Q ∧ R

2. (Associativity of ∨) An fol sentence P ∨ (Q ∨ R) is logically equivalent


to (P ∨ Q) ∨ R, which is in turn equivalent to P ∨ Q ∨ R. That is,
P ∨ (Q ∨ R) ⇔ (P ∨ Q) ∨ R ⇔ P ∨ Q ∨ R


3. (Commutativity of ∧) A conjunction P ∧ Q is logically equivalent to


Q ∧ P. That is,
P∧Q ⇔ Q∧P
As a result, any rearrangement of the conjuncts of an fol sentence is
logically equivalent to the original. For example, P ∧ Q ∧ R is equivalent
to R ∧ Q ∧ P.

4. (Commutativity of ∨) A disjunction P ∨ Q is logically equivalent to


Q ∨ P. That is,
P∨Q ⇔ Q∨P
As a result, any rearrangement of the disjuncts of an fol sentence is
logically equivalent to the original. For example, P ∨ Q ∨ R is equivalent
to R ∨ Q ∨ P.

5. (Idempotence of ∧) A conjunction P ∧ P is equivalent to P. That is,

P∧P ⇔ P

More generally (given Commutativity), any conjunction with a repeated


conjunct is equivalent to the result of removing all but one occurrence
of that conjunct. For example, P ∧ Q ∧ P is equivalent to P ∧ Q.

6. (Idempotence of ∨) A disjunction P ∨ P is equivalent to P. That is,

P∨P ⇔ P

More generally (given Commutativity), any disjunction with a repeated


disjunct is equivalent to the result of removing all but one occurrence
of that disjunct. For example, P ∨ Q ∨ P is equivalent to P ∨ Q.

Here is an example where we use some of these laws to show that the first
sentence in the following list is logically equivalent to the last. Once again (as
in what follows), we use A, B, and C to stand for arbitrary atomic sentences
of fol. Thus the result is in negation normal form.
(A ∨ B) ∧ C ∧ (¬(¬B ∧ ¬A) ∨ B) ⇔ (A ∨ B) ∧ C ∧ ((¬¬B ∨ ¬¬A) ∨ B)
⇔ (A ∨ B) ∧ C ∧ ((B ∨ A) ∨ B)
⇔ (A ∨ B) ∧ C ∧ (B ∨ A ∨ B)
⇔ (A ∨ B) ∧ C ∧ (B ∨ A)
⇔ (A ∨ B) ∧ C ∧ (A ∨ B)
⇔ (A ∨ B) ∧ C


We call a demonstration of this sort a chain of equivalences. The first step
in this chain is justified by one of the DeMorgan laws. The second step involves
two applications of double negation. In the next step we use associativity to
remove the unnecessary parentheses. In the fourth step, we use idempotence
of ∨. The next to the last step uses commutativity of ∨, while the final step
uses idempotence of ∧.
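
Any chain of this kind can also be double-checked by brute force: since each step preserves logical equivalence, the first and last sentences must agree on every row. A quick Python check of the chain above (our own sketch):

from itertools import product

first = lambda A, B, C: (A or B) and C and ((not ((not B) and (not A))) or B)
last  = lambda A, B, C: (A or B) and C
print(all(first(*row) == last(*row)
          for row in product([True, False], repeat=3)))   # True: the chain preserved truth conditions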

Remember

1. Substitution of equivalents: If P and Q are logically equivalent:

P⇔Q

then the results of substituting one for the other in the context of a
larger sentence are also logically equivalent:

S(P) ⇔ S(Q)

2. A sentence is in negation normal form (NNF) if all occurrences of ¬


apply directly to atomic sentences.

3. Any sentence built from atomic sentences using just ∧, ∨, and ¬ can
be put into negation normal form by repeated application of the De-
Morgan laws and double negation.

4. Sentences can often be further simplified using the principles of asso-


ciativity, commutativity, and idempotence.

Exercises

4.31 (Negation normal form) Use Tarski’s World to open Turing’s Sentences. You will find the fol-
➶ lowing five sentences, each followed by an empty sentence position.
1. ¬(Cube(a) ∧ Larger(a, b))
3. ¬(Cube(a) ∨ ¬Larger(b, a))
5. ¬(¬Cube(a) ∨ ¬Larger(a, b) ∨ a ≠ b)
7. ¬(Tet(b) ∨ (Large(c) ∧ ¬Smaller(d, e)))
9. Dodec(f) ∨ ¬(Tet(b) ∨ ¬Tet(f) ∨ ¬Dodec(f))

In the empty positions, write the negation normal form of the sentence above it. Then build
any world where all of the names are in use. If you have gotten the negation normal forms


correct, each even numbered sentence will have the same truth value in your world as the odd
numbered sentence above it. Verify that this is so in your world. Submit the modified sentence
file as Sentences 4.31.
4.32 (Negation normal form) Use Tarski’s World to open the file Sextus’ Sentences. In the odd
➶ numbered slots, you will find the following sentences.
1. ¬(Home(carl) ∧ ¬Home(claire))
3. ¬[Happy(max) ∧ (¬Likes(carl, claire) ∨ ¬Likes(claire, carl))]
5. ¬¬¬[(Home(max) ∨ Home(carl)) ∧ (Happy(max) ∨ Happy(carl))]

Use Double Negation and DeMorgan’s laws to put each sentence into negation normal form in
the slot below it. Submit the modified file as Sentences 4.32.
In each of the following exercises, use associativity, commutativity, and idempotence to simplify the
sentence as much as you can using just these rules. Your answer should consist of a chain of logical
equivalences like the chain given on page 119. At each step of the chain, indicate which principle you
are using.

4.33 ✎ (A ∧ B) ∧ A
4.34 ✎ (B ∧ (A ∧ B ∧ C))
4.35 ✎ (A ∨ B) ∨ (C ∧ D) ∨ A
4.36 ✎ (¬A ∨ B) ∨ (B ∨ C)
4.37 ✎ (A ∧ B) ∨ C ∨ (B ∧ A) ∨ A

Section 4.6
Conjunctive and disjunctive normal forms
We have seen that with a few simple principles of Boolean logic, we can
start with a sentence and transform it into a logically equivalent sentence
in negation normal form, one where all negations occur in front of atomic
sentences. We can improve on this by introducing the so-called distributive
laws. These additional equivalences will allow us to transform sentences into
what are known as conjunctive normal form (CNF) and disjunctive normal
form (DNF). These normal forms are quite important in certain applications
of logic in computer science, as we discuss in Chapter 17. We will also use
disjunctive normal form to demonstrate an important fact about the Boolean
connectives in Chapter 7.
Recall that in algebra you learned that multiplication distributes over ad-
dition: a × (b + c) = (a × b) + (a × c). The distributive laws of logic look formally


much the same. One version tells us that P ∧ (Q ∨ R) is logically equivalent to


(P ∧ Q) ∨ (P ∧ R). That is, ∧ distributes over ∨. To see that this is so, notice
that the first sentence is true if and only if P plus at least one of Q or R
is true. But a moment’s thought shows that the second sentence is true in
exactly the same circumstances. This can also be confirmed by constructing
a joint truth table for the two sentences, which you’ve already done if you did
Exercise 4.17.
In arithmetic, + does not distribute over ×. However, ∨ does distribute
over ∧. That is to say, P ∨ (Q ∧ R) is logically equivalent to (P ∨ Q) ∧ (P ∨ R),
as you also discovered in Exercise 4.18.

Remember

(The distributive laws) For any sentences P, Q, and R:

1. Distribution of ∧ over ∨: P ∧ (Q ∨ R) ⇔ (P ∧ Q) ∨ (P ∧ R)

2. Distribution of ∨ over ∧: P ∨ (Q ∧ R) ⇔ (P ∨ Q) ∧ (P ∨ R)

As you may recall from algebra, the distributive law for × over + is in-
credibly useful. It allows us to transform any algebraic expression involving +
and ×, no matter how complex, into one that is just a sum of products. For
example, the following transformation uses distribution three times.
(a + b)(c + d) = (a + b)c + (a + b)d
= ac + bc + (a + b)d
= ac + bc + ad + bd
In exactly the same way, the distribution of ∧ over ∨ allows us to transform
any sentence built up from literals by means of ∧ and ∨ into a logically
equivalent sentence that is a disjunction of (one or more) conjunctions of
(one or more) literals. That is, using this first distributive law, we can turn
any sentence in negation normal form into a sentence that is a disjunction of
conjunctions of literals. A sentence in this form is said to be in disjunctive normal form.
Here is an example that parallels our algebraic example. Notice that, as
in the algebraic example, we are distributing in from the right as well as the
left, even though our statement of the rule only illustrates distribution from
the left.
(A ∨ B) ∧ (C ∨ D) ⇔ [(A ∨ B) ∧ C] ∨ [(A ∨ B) ∧ D]
⇔ (A ∧ C) ∨ (B ∧ C) ∨ [(A ∨ B) ∧ D]
⇔ (A ∧ C) ∨ (B ∧ C) ∨ (A ∧ D) ∨ (B ∧ D)


As you can see, distribution of ∧ over ∨ lets us drive conjunction signs


deeper and deeper, just as the DeMorgan laws allow us to move negations
deeper. Thus, if we take any sentence and first use DeMorgan (and double
negation) to get a sentence in negation normal form, we can then use this
first distribution law to get a sentence in disjunctive normal form, one in
which all the conjunction signs apply to literals.
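
Mechanically, this is one more recursive pass over the sentence, this time pushing ∧ inside ∨. The sketch below is our own (it reuses the hypothetical tuple representation from the NNF sketch earlier: atoms are strings, complex sentences are ('not', S), ('and', S1, S2), ('or', S1, S2)) and assumes its input is already in negation normal form; swapping the roles of 'and' and 'or' gives the corresponding CNF transformation.

def dnf(s):
    # Distribute ∧ over ∨ in a sentence already in negation normal form
    if isinstance(s, str) or s[0] == 'not':
        return s                                   # a literal: leave it alone
    left, right = dnf(s[1]), dnf(s[2])
    if s[0] == 'or':
        return ('or', left, right)
    # s[0] == 'and': if either conjunct is a disjunction, distribute over it
    if isinstance(left, tuple) and left[0] == 'or':
        return ('or', dnf(('and', left[1], right)), dnf(('and', left[2], right)))
    if isinstance(right, tuple) and right[0] == 'or':
        return ('or', dnf(('and', left, right[1])), dnf(('and', left, right[2])))
    return ('and', left, right)

# (A ∨ B) ∧ (C ∨ D) becomes a disjunction of four conjunctions of literals
# (the grouping and ordering may differ from the hand derivation, but the result is equivalent)
print(dnf(('and', ('or', 'A', 'B'), ('or', 'C', 'D'))))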
Likewise, using distribution of ∨ over ∧, we can turn any negation normal
form sentence into one that is a conjunction of one or more sentences, each of
which is a disjunction of one or more literals. A sentence in this form is said
to be in conjunctive normal form (CNF). Here’s an example, parallel to the one given above but with ∧ and ∨ interchanged:

(A ∧ B) ∨ (C ∧ D) ⇔ [(A ∧ B) ∨ C] ∧ [(A ∧ B) ∨ D]
⇔ (A ∨ C) ∧ (B ∨ C) ∧ [(A ∧ B) ∨ D]
⇔ (A ∨ C) ∧ (B ∨ C) ∧ (A ∨ D) ∧ (B ∨ D)

On page 118, we showed how to transform the sentence ¬((A ∨ B) ∧ ¬C)


into one in negation normal form. The result was (¬A ∧ ¬B) ∨ C. This sen-
tence just happens to be in disjunctive normal form. Let us repeat our earlier
transformation, but continue until we get a sentence in conjunctive normal
form.

¬((A ∨ B) ∧ ¬C) ⇔ ¬(A ∨ B) ∨ ¬¬C


⇔ ¬(A ∨ B) ∨ C
⇔ (¬A ∧ ¬B) ∨ C
⇔ (¬A ∨ C) ∧ (¬B ∨ C)

It is important to remember that a sentence can count as being in both


conjunctive and disjunctive normal forms at the same time. For example, the
sentence

Home(claire) ∧ ¬Home(max)

is in both DNF and CNF. On the one hand, it is in disjunctive normal form
since it is a disjunction of one sentence (itself) which is a conjunction of two
literals. On the other hand, it is in conjunctive normal form since it is a
conjunction of two sentences, each of which is a disjunction of one literal.
In case you find this last remark confusing, here are simple tests for
whether sentences are in disjunctive normal form and conjunctive normal
form. The tests assume that the sentence has no unnecessary parentheses and
contains only the connectives ∧, ∨, and ¬.

To check whether a sentence is in DNF, ask yourself whether all the negation signs apply directly to atomic sentences and whether all


the conjunction signs apply directly to literals. If both answers are
yes, then the sentence is in disjunctive normal form.
To check whether a sentence is in CNF, ask yourself whether all
the negation signs apply directly to atomic sentences and all the
disjunction signs apply directly to literals. If both answers are yes,
then the sentence is in conjunctive normal form.

Now look at the above sentence again and notice that it passes both of
these tests (in the CNF case because it has no disjunction signs).
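
Both tests are easy to state as recursive checks. Here is a rough sketch, again over our home-made tuple representation (atoms are strings; complex sentences are ('not', S), ('and', S1, S2), ('or', S1, S2)); none of this is part of the courseware programs:

def is_literal(s):
    return isinstance(s, str) or (s[0] == 'not' and isinstance(s[1], str))

def is_conjunction_of_literals(s):
    if is_literal(s):
        return True
    return s[0] == 'and' and all(is_conjunction_of_literals(part) for part in s[1:])

def is_disjunction_of_literals(s):
    if is_literal(s):
        return True
    return s[0] == 'or' and all(is_disjunction_of_literals(part) for part in s[1:])

def is_dnf(s):
    # A disjunction of one or more conjunctions of one or more literals
    if is_conjunction_of_literals(s):
        return True
    return s[0] == 'or' and all(is_dnf(part) for part in s[1:])

def is_cnf(s):
    # A conjunction of one or more disjunctions of one or more literals
    if is_disjunction_of_literals(s):
        return True
    return s[0] == 'and' and all(is_cnf(part) for part in s[1:])

# Home(claire) ∧ ¬Home(max) passes both tests
s = ('and', 'Home(claire)', ('not', 'Home(max)'))
print(is_dnf(s), is_cnf(s))   # True True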

Remember

1. A sentence is in disjunctive normal form (DNF) if it is a disjunction


of one or more conjunctions of one or more literals.

2. A sentence is in conjunctive normal form (CNF) if it is a conjunction


of one or more disjunctions of one or more literals.

3. Distribution of ∧ over ∨ allows you to transform any sentence in nega-


tion normal form into disjunctive normal form.

4. Distribution of ∨ over ∧ allows you to transform any sentence in nega-


tion normal form into conjunctive normal form.

5. Some sentences are in both CNF and DNF.

You try it
................................................................
1. Use Tarski’s World to open the file DNF Example. In this file you will find
two sentences. The second sentence is the result of putting the first into
disjunctive normal form, so the two sentences are logically equivalent.
2. Build a world in which the sentences are true. Since they are equivalent,
you could try to make either one true, but you will find the second one
easier to work on.
3. Play the game for each sentence, committed correctly to the truth of the
sentence. You should be able to win both times. Count the number of steps
it takes you to win.


4. In general it is easier to evaluate the truth value of a sentence in disjunctive


normal form. This comes out in the game, which takes at most three steps
for a sentence in DNF, one each for ∨, ∧, and ¬, in that order. There is
no limit to the number of steps a sentence in other forms may take.

5. Save the world you have created as World DNF 1.


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations

Exercises

4.38 If you skipped the You try it section, go back and do it now. Submit the file World DNF 1.

4.39 Open CNF Sentences. In this file you will find the following conjunctive normal form sentences
➶ in the odd numbered positions, but you will see that the even numbered positions are blank.
1. (LeftOf(a, b) ∨ BackOf(a, b)) ∧ Cube(a)
3. Larger(a, b) ∧ (Cube(a) ∨ Tet(a) ∨ a = b)
5. (Between(a, b, c) ∨ Tet(a) ∨ ¬Tet(b)) ∧ Dodec(c)
7. Cube(a) ∧ Cube(b) ∧ (¬Small(a) ∨ ¬Small(b))
9. (Small(a) ∨ Medium(a)) ∧ (Cube(a) ∨ ¬Dodec(a))

In the even numbered positions you should fill in a DNF sentence logically equivalent to the
sentence above it. Check your work by opening several worlds and checking to see that each
of your sentences has the same truth value as the one above it. Submit the modified file as
Sentences 4.39.

4.40 Open More CNF Sentences. In this file you will find the following sentences in every third
➶ position.
1. ¬[(Cube(a) ∧ ¬Small(a)) ∨ (¬Cube(a) ∧ Small(a))]
4. ¬[(Cube(a) ∨ ¬Small(a)) ∧ (¬Cube(a) ∨ Small(a))]
7. ¬(Cube(a) ∧ Larger(a, b)) ∧ Dodec(b)
10. ¬(¬Cube(a) ∧ Tet(b))
13. ¬¬Cube(a) ∨ Tet(b)

The two blanks that follow each sentence are for you to first transform the sentence into negation
normal form, and then put that sentence into CNF. Again, check your work by opening several
worlds to see that each of your sentences has the same truth value as the original. When you
are finished, submit the modified file as Sentences 4.40.


In Exercises 4.41-4.43, use a chain of equivalences to convert each sentence into an equivalent sentence
in disjunctive normal form. Simplify your answer as much as possible using the laws of associativity,
commutativity, and idempotence. At each step in your chain, indicate which principle you are applying.
Assume that A, B, C, and D are literals.

4.41 ✎ C ∧ (A ∨ (B ∧ C))
4.42 ✎ B ∧ (A ∧ B ∧ (A ∨ B ∨ (B ∧ C)))
4.43 ✎ A ∧ (A ∧ (B ∨ (A ∧ C)))

Chapter 5

Methods of Proof for Boolean Logic

Truth tables give us powerful techniques for investigating the logic of the
Boolean operators. But they are by no means the end of the story. Truth
tables are fine for showing the validity of simple arguments that depend only
on truth-functional connectives, but the method has two very significant lim-
itations.
First, truth tables get extremely large as the number of atomic sentences goes up. An argument involving seven atomic sentences is hardly unusual, but testing it for validity would call for a truth table with 2⁷ = 128 rows. Testing
an argument with 14 atomic sentences, just twice as many, would take a table
containing over 16 thousand rows. You could probably get a Ph.D. in logic for
building a truth table that size. This exponential growth severely limits the
practical value of the truth table method.
The second limitation is, surprisingly enough, even more significant. Truth
table methods can’t be easily extended to reasoning whose validity depends
on more than just truth-functional connectives. As you might guess from the
artificiality of the arguments looked at in the previous chapter, this rules out
most kinds of reasoning you’ll encounter in everyday life. Ordinary reasoning
relies heavily on the logic of the Boolean connectives, make no mistake about
that. But it also relies on the logic of other kinds of expressions. Since the
truth table method detects only tautological consequence, we need a method
of applying Boolean logic that can work along with other valid principles of
reasoning.
Methods of proof, both formal and informal, give us the required exten-
sibility. In this chapter we will discuss legitimate patterns of inference that
arise when we introduce the Boolean connectives into a language, and show
how to apply the patterns in informal proofs. In Chapter 6, we’ll extend our
formal system with corresponding rules. The key advantage of proof methods
over truth tables is that we’ll be able to use them even when the validity of
our proof depends on more than just the Boolean operators.
The Boolean connectives give rise to many valid patterns of inference.
Some of these are extremely simple, like the entailment from the sentence
P ∧ Q to P. These we will refer to as valid inference steps, and will discuss


them briefly in the first section. Much more interesting are two new methods
of proof that are allowed by the new expressions: proof by cases and proof by
contradiction. We will discuss these later, one at a time.

Section 5.1
Valid inference steps

Here’s an important rule of thumb: In an informal proof, it is always legiti-


mate to move from a sentence P to another sentence Q if both you and your
“audience” (the person or people you’re trying to convince) already know
that Q is a logical consequence of P. The main exception to this rule is when
you give informal proofs to your logic instructor: presumably, your instructor
knows the assigned argument is valid, so in these circumstances, you have to
pretend you’re addressing the proof to someone who doesn’t already know
that. What you’re really doing is convincing your instructor that you see that
the argument is valid and that you could prove it to someone who did not.
The reason we start with this rule of thumb is that you’ve already learned
several well-known logical equivalences that you should feel free to use when
giving informal proofs. For example, you can freely use double negation or
idempotence if the need arises in a proof. Thus a chain of equivalences of the
sort we gave on page 119 is a legitimate component of an informal proof. Of
course, if you are asked to prove one of the named equivalences, say one of
the distribution or DeMorgan laws, then you shouldn’t presuppose it in your
proof. You’ll have to figure out a way to prove it to someone who doesn’t
already know that it is valid.
A special case of this rule of thumb is the following: If you already know
that a sentence Q is a logical truth, then you may assert Q at any point in
your proof. We already saw this principle at work in Chapter 2, when we
discussed the reflexivity of identity, the principle that allowed us to assert a
sentence of the form a = a at any point in a proof. It also allows us to assert
other simple logical truths, like excluded middle (P ∨ ¬P), at any point in a
proof. Of course, the logical truths have to be simple enough that you can be
sure your audience will recognize them.
There are three simple inference steps that we will mention here that don’t
involve logical equivalences or logical truths, but that are clearly supported
by the meanings of ∧ and ∨. First, suppose we have managed to prove a
conjunction, say P ∧ Q, in the course of our proof. The individual conjuncts
P and Q are clearly consequences of this conjunction, because there is no way
for the conjunction to be true without each conjunct being true. Thus, we


are justified in asserting either. More generally, we are justified in inferring,
from a conjunction of any number of sentences, any one of its conjuncts. This
inference pattern is sometimes called conjunction elimination or simplification,
when it is presented in the context of a formal system of deduction. When it
is used in informal proofs, however, it usually goes by without comment, since
it is so obvious.
Only slightly more interesting is the converse. Given the meaning of ∧, it
is clear that P ∧ Q is a logical consequence of the pair of sentences P and Q:
there is no way the latter could be true without the former also being true. Thus
if we have managed to prove P and to prove Q from the same premises, then
we are entitled to infer the conjunction P ∧ Q. More generally, if we want to
prove a conjunction of a bunch of sentences, we may do so by proving each
conjunct separately. In a formal system of deduction, steps of this sort are
sometimes called conjunction introduction or just conjunction. Once again,
in real life reasoning, these steps are too simple to warrant mention. In our
informal proofs, we will seldom point them out explicitly.
Finally, let us look at one valid inference pattern involving ∨. It is a simple
step, but one that strikes students as peculiar. Suppose that you have proven
Cube(b). Then you can conclude Cube(a) ∨ Cube(b) ∨ Cube(c), if you should
want to for some reason, since the latter is a consequence of the former.
More generally, if you have proven some sentence P then you can infer any
disjunction that has P as one of its disjuncts. After all, if P is true, so is any
such disjunction.
What strikes newcomers to logic as peculiar about such a step is that using
it amounts to throwing away information. Why in the world would you want
to conclude P ∨ Q when you already know the more informative claim P? But
as we will see, this step is actually quite useful when combined with some
of the methods of proof to be discussed later. Still, in mathematical proofs,
it generally goes by unnoticed. In formal systems, it is dubbed disjunction
introduction, or (rather unfortunately) addition.

Matters of style

Informal proofs serve two purposes. On the one hand, they are a method of
discovery; they allow us to extract new information from information already
obtained. On the other hand, they are a method of communication; they allow
us to convey our discoveries to others. As with all forms of communication,
this can be done well or done poorly.
When we learn to write, we learn certain basic rules of punctuation, capi-
talization, paragraph structure and so forth. But beyond the basic rules, there
are also matters of style. Different writers have different styles. And it is a


good thing, since we would get pretty tired of reading if everyone wrote with
the very same style. So too in giving proofs. If you go on to study mathemat-
ics, you will read lots of proofs, and you will find that every writer has his or
her own style. You will even develop a style of your own.
Every step in a “good” proof, besides being correct, should have two prop-
erties. It should be easily understood and significant. By “easily understood”
we mean that other people should be able to follow the step without undue
difficulty: they should be able to see that the step is valid without having to
engage in a piece of complex reasoning of their own. By “significant” we mean
that the step should be informative, not a waste of the reader’s time.
These two criteria pull in opposite directions. Typically, the more signif-
icant the step, the harder it is to follow. Good style requires a reasonable
balance between the two. And that in turn requires some sense of who your
audience is. For example, if you and your audience have been working with
logic for a while, you will recognize a number of equivalences that you will
want to use without further proof. But if you or your audience are beginners,
the same inference may require several steps.

Remember

1. In giving an informal proof from some premises, if Q is already


known to be a logical consequence of sentences P1 , . . . , Pn and each of
P1 , . . . , Pn has been proven from the premises, then you may assert Q
in your proof.

2. Each step in an informal proof should be significant but easily under-


stood.

3. Whether a step is significant or easily understood depends on the


audience to whom it is addressed.

4. The following are valid patterns of inference that generally go unmen-


tioned in informal proofs:

◦ From P ∧ Q, infer P.
◦ From P and Q, infer P ∧ Q.
◦ From P, infer P ∨ Q.


Exercises

In the following exercises we list a number of patterns of inference, only some of which are valid. For
each pattern, determine whether it is valid. If it is, explain why it is valid, appealing to the truth tables
for the connectives involved. If it is not, give a specific example of how the step could be used to get from
true premises to a false conclusion.

5.1 (✎) From P ∨ Q and ¬P, infer Q.

5.2 (✎) From P ∨ Q and Q, infer ¬P.

5.3 (✎) From ¬(P ∨ Q), infer ¬P.

5.4 (✎) From ¬(P ∧ Q) and P, infer ¬Q.

5.5 (✎) From ¬(P ∧ Q), infer ¬P.

5.6 (✎?) From P ∧ Q and ¬P, infer Q.

Section 5.2
Proof by cases

The simple forms of inference discussed in the last section are all instances of
the principle that you can use already established cases of logical consequence
in informal proofs. But the Boolean connectives also give rise to two entirely
new methods of proof, methods that are explicitly applied in all types of
rigorous reasoning. The first of these is the method of proof by cases. In our
formal system F , this method will be called disjunction elimination, but don’t
be misled by the ordinary sounding name: it is far more significant than, say,
disjunction introduction or conjunction elimination.
We begin by illustrating proof by cases with a well-known piece of math-
ematical reasoning. The reasoning proves that there are irrational numbers b
and c such that b^c is rational. First, let’s review what this means. A number
is said to be rational if it can be expressed as a fraction n/m, for integers
n and m. If it can’t be so expressed, then it is irrational. Thus 2 is rational
(2 = 2/1), but √2 is irrational. (We will prove this latter fact in the next sec-
tion, to illustrate proof by contradiction; for now, just take it as a well-known
truth.) Here now is our proof:

Proof: To show that there are irrational numbers b and c such that
b^c is rational, we will consider the number √2^√2. We note that this
number is either rational or irrational.

If √2^√2 is rational, then we have found our b and c; namely, we take
b = c = √2.

Suppose, on the other hand, that √2^√2 is irrational. Then we take
b = √2^√2 and c = √2 and compute b^c:

b^c = (√2^√2)^√2
    = √2^(√2·√2)
    = √2^2
    = 2

Thus, we see that in this case, too, b^c is rational.

Consequently, whether √2^√2 is rational or irrational, we know that
there are irrational numbers b and c such that b^c is rational.
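As a purely numerical sanity check of the exponent arithmetic (in no way a substitute for the proof), you can evaluate the expression in floating point; this short Python snippet is our own illustration.

import math

b = math.sqrt(2) ** math.sqrt(2)   # the number √2^√2 considered in the proof
c = math.sqrt(2)
print(b ** c)                      # approximately 2.0, matching the computation above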
What interests us here is not the result itself but the general structure of
the argument. We begin with a desired goal that we want to prove, say S, and
a disjunction we already know, say P ∨ Q. We then show two things: that S
follows if we assume that P is the case, and that S follows if we assume that
Q is the case. Since we know that one of these must hold, we then conclude
that S must be the case. This is the pattern of reasoning known as proof by
cases.
In proof by cases, we aren’t limited to breaking into just two cases, as we
did in the example. If at any stage in a proof we have a disjunction containing
n disjuncts, say P1 ∨ . . . ∨ Pn , then we can break into n cases. In the first we
assume P1 , in the second P2 , and so forth for each disjunct. If we are able to
prove our desired result S in each of these cases, we are justified in concluding
that S holds.
Let’s look at an even simpler example of proof by cases. Suppose we want
to prove that Small(c) is a logical consequence of

(Cube(c) ∧ Small(c)) ∨ (Tet(c) ∧ Small(c))

This is pretty obvious, but the proof involves breaking into cases, as you will
notice if you think carefully about how you recognize this. For the record,
here is how we would write out the proof.
Proof: We are given

(Cube(c) ∧ Small(c)) ∨ (Tet(c) ∧ Small(c))

as a premise. We will break into two cases, corresponding to the two


disjuncts. First, assume that Cube(c) ∧ Small(c) holds. But then (by


conjunction elimination, which we really shouldn’t even mention) we


have Small(c). But likewise, if we assume Tet(c) ∧ Small(c), then it
follows that Small(c). So, in either case, we have Small(c), as desired.

Our next example shows how the odd step of disjunction introduction
(from P infer P ∨ Q) can be used fruitfully with proof by cases. Suppose we
know that either Max is home and Carl is happy, or Claire is home and Scruffy
is happy, i.e.,

(Home(max) ∧ Happy(carl)) ∨ (Home(claire) ∧ Happy(scruffy))

We want to prove that either Carl or Scruffy is happy, that is,

Happy(carl) ∨ Happy(scruffy)

A rather pedantic, step-by-step proof would look like this:

Proof: Assume the disjunction:

(Home(max) ∧ Happy(carl)) ∨ (Home(claire) ∧ Happy(scruffy))

Then either:
Home(max) ∧ Happy(carl)
or:
Home(claire) ∧ Happy(scruffy).
If the first alternative holds, then Happy(carl), and so we have

Happy(carl) ∨ Happy(scruffy)

by disjunction introduction. Similarly, if the second alternative holds,


we have Happy(scruffy), and so

Happy(carl) ∨ Happy(scruffy)

So, in either case, we have our desired conclusion. Thus our conclu-
sion follows by proof by cases.
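If you would like to check this particular entailment mechanically, the following sketch (our own, in Python) runs through all sixteen truth assignments to the four atomic sentences and confirms that no row makes the premise true and the conclusion false, which is just the principle behind proof by cases at work.

from itertools import product

# Truth values for Home(max), Happy(carl), Home(claire), Happy(scruffy), in that order.
for hm, hc, hcl, hs in product([True, False], repeat=4):
    premise = (hm and hc) or (hcl and hs)
    conclusion = hc or hs
    assert not premise or conclusion   # never: premise true, conclusion false

print("Happy(carl) ∨ Happy(scruffy) holds in every row where the premise is true.")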

Arguing by cases is extremely useful in everyday reasoning. For example,


one of the authors (call him J) and his wife recently realized that their parking
meter had expired several hours earlier. J argued in the following way that
there was no point in rushing back to the car (logicians argue this way; don’t
marry one):


Proof: At this point, either we’ve already gotten a ticket or we


haven’t. If we’ve gotten a ticket, we won’t get another one in the
time it takes us to get to the car, so rushing would serve no purpose.
If we haven’t gotten a ticket in the past several hours, it is extremely
unlikely that we will get one in the next few minutes, so again,
rushing would be pointless. In either event, there’s no need to rush.

J’s wife responded with the following counterargument (showing that many
years of marriage to a logician has an impact):

Proof: Either we are going to get a ticket in the next few minutes or
we aren’t. If we are, then rushing might prevent it, which would be
a good thing. If we aren’t, then it will still be good exercise and will
also show our respect for the law, both of which are good things. So
in either event, rushing back to the car is a good thing to do.

J’s wife won the argument.


The validity of proof by cases cannot be demonstrated by the simple truth
table method introduced in Chapter 4. The reason is that we infer the con-
clusion S from the fact that S is provable from each of the disjuncts P and
Q. It relies on the principle that if S is a logical consequence of P, and also a
logical consequence of Q, then it is a logical consequence of P ∨ Q. This holds
because any circumstance that makes P ∨ Q true must make at least one of P
or Q true, and hence S as well, by the fact that S is a consequence of both.

Remember

Proof by cases: To prove S from P1 ∨ . . . ∨ Pn using this method, prove


S from each of P1 , . . . , Pn .

Exercises

The next two exercises present valid arguments. Turn in informal proofs of the arguments’ validity. Your
proofs should be phrased in complete, well-formed English sentences, making use of first-order sentences
as convenient, much in the style we have used above. Whenever you use proof by cases, say so. You don’t
have to be explicit about the use of simple proof steps like conjunction elimination. By the way, there is
typically more than one way to prove a given result.


5.7 (✎)
Home(max) ∨ Home(claire)
¬Home(max) ∨ Happy(carl)
¬Home(claire) ∨ Happy(carl)
----------
Happy(carl)

5.8 (✎)
LeftOf(a, b) ∨ RightOf(a, b)
BackOf(a, b) ∨ ¬LeftOf(a, b)
FrontOf(b, a) ∨ ¬RightOf(a, b)
SameCol(c, a) ∧ SameRow(c, b)
----------
BackOf(a, b)

5.9 Assume the same four premises as in Exercise 5.8. Is LeftOf(b, c) a logical consequence of
➶|✎ these premises? If so, turn in an informal proof of the argument’s validity. If not, submit a
counterexample world.

5.10 Suppose Max’s favorite basketball team is the Chicago Bulls and favorite football team is the
✎ Denver Broncos. Max’s father John is returning from Indianapolis to San Francisco on United
Airlines, and promises that he will buy Max a souvenir from one of his favorite teams on the
way. Explain John’s reasoning, appealing to the annoying fact that all United flights between
Indianapolis and San Francisco stop in either Denver or Chicago. Make explicit the role proof
by cases plays in this reasoning.

5.11 Suppose the police are investigating a burglary and discover the following facts. All the doors
✎ to the house were bolted from the inside and show no sign of forced entry. In fact, the only
possible ways in and out of the house were a small bathroom window on the first floor that
was left open and an unlocked bedroom window on the second floor. On the basis of this, the
detectives rule out a well-known burglar, Julius, who weighs two hundred and fifty pounds and
is arthritic. Explain their reasoning.

5.12 In our proof that there are irrational numbers b and c where b^c is rational, one of our steps
✎ was to assert that √2^√2 is either rational or irrational. What justifies the introduction of this
claim into our proof?

5.13 Describe an everyday example of reasoning by cases that you have performed in the last few
✎ days.

5.14 Give an informal proof that if S is a tautological consequence of P and a tautological conse-
✎? quence of Q, then S is a tautological consequence of P ∨ Q. Remember that the joint truth
table for P ∨ Q and S may have more rows than either the joint truth table for P and S, or the
joint truth table for Q and S. [Hint: Assume you are looking at a single row of the joint truth
table for P ∨ Q and S in which P ∨ Q is true. Break into cases based on whether P is true or Q
is true and prove that S must be true in either case.]


Section 5.3
Indirect proof: proof by contradiction

One of the most important methods of proof is known as proof by contradic-


tion. It is also called indirect proof or reductio ad absurdum. Its counterpart
in F is called negation introduction.
The basic idea is this. Suppose you want to prove a negative sentence, say
¬S, from some premises, say P1, . . . , Pn. One way to do this is by temporarily
assuming S and showing that a contradiction follows from this assumption. If
you can show this, then you are entitled to conclude that ¬S is a logical conse-
quence of the original premises. Why? Because your proof of the contradiction
shows that S, P1, . . . , Pn cannot all be true simultaneously. (If they were, the
contradiction would have to be true, and it can’t be.) Hence if P1 , . . . , Pn are
true in any set of circumstances, then S must be false in those circumstances.
Which is to say, if P1 , . . . , Pn are all true, then ¬S must be true as well.
Let’s look at a simple indirect proof. Assume Cube(c) ∨ Dodec(c) and
Tet(b). Let us prove ¬(b = c).

Proof: In order to prove ¬(b = c), we assume b = c and attempt


to get a contradiction. From our first premise we know that either
Cube(c) or Dodec(c). If the first is the case, then we conclude Cube(b)
by the indiscernibility of identicals, which contradicts Tet(b). But
similarly, if the second is the case, we get Dodec(b) which contra-
dicts Tet(b). So neither case is possible, and we have a contradiction.
Thus our initial assumption that b = c must be wrong. So proof by
contradiction gives us our desired conclusion, ¬(b = c). (Notice that
this argument also uses the method of proof by cases.)

Let us now give a more interesting and famous example of this method of
proof. The Greeks were shocked to discover that the square root of 2 could
not be expressed as a fraction, or, as we would put it, is irrational. The proof
of this fact proceeds via contradiction. Before we go through the proof, let’s
review some simple numerical facts that were well known to the Greeks. The
first is that any rational number can be expressed as a fraction p/q where at
least one of p and q is odd. (If not, keep dividing both the numerator and
denominator by 2 until one of them is odd.) The other fact follows from the
observation that when you square an odd number, you always get an odd
number. So if n² is an even number, then so is n. And from this, we see that
if n² is even, it must be divisible by 4.
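In more detail: if n is odd, say n = 2k + 1, then n² = (2k + 1)² = 4k² + 4k + 1 = 2(2k² + 2k) + 1, which is odd; so if n² is even, n must itself be even, say n = 2k, and in that case n² = (2k)² = 4k², which is divisible by 4.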
Now we’re ready for the proof that √2 is irrational.


Proof: With an eye toward getting a contradiction, we will assume
that √2 is rational. Thus, on this assumption, √2 can be expressed
in the form p/q, where at least one of p and q is odd. Since p/q = √2,
we can square both sides to get:

p²/q² = 2

Multiplying both sides by q², we get p² = 2q². But this shows that
p² is an even number. As we noted before, this allows us to conclude
that p is even and that p² is divisible by 4. Looking again at the
equation p² = 2q², we see that if p² is divisible by 4, then 2q² is
divisible by 4 and hence q² must be divisible by 2. In which case, q is
even as well. So both p and q are even, contradicting the fact that at
least one of them is odd. Thus, our assumption that √2 is rational
led us to a contradiction, and so we conclude that it is irrational.

In both of these examples, we used the method of indirect proof to prove a


sentence that begins with a negation. (Remember, “irrational” simply means
not rational.) You can also use this method to prove a sentence S that does not
begin with a negation. In this case, you would begin by assuming ¬S, obtain
a contradiction, and then conclude that ¬¬S is the case, which of course is
equivalent to S.
In order to apply the method of proof by contradiction, it is important
that you understand what a contradiction is, since that is what you need
to prove from your temporary assumption. Intuitively, a contradiction is any
claim that cannot possibly be true, or any set of claims which cannot all
be true simultaneously. Examples are a sentence Q and its negation ¬Q, a
pair of inconsistent claims like Cube(c) and Tet(c) or x < y and y < x, or a
single sentence of the form a ≠ a. We can take the notion of a contradictory
or inconsistent set of sentences to be any set of sentences that could not all
be true in any single situation.
The symbol ⊥ is often used as a short-hand way of saying that a contra-
diction has been obtained. Different people read ⊥ as “contradiction,” “the
absurd,” and “the false,” but what it means is that a conclusion has been
reached which is logically impossible, or that several conclusions have been
derived which, taken together, are impossible.
Notice that a sentence S is a logical impossibility if and only if its negation
¬S is logically necessary. This means that any method we have of demonstrat-
ing that a sentence is logically necessary also demonstrates that its negation
is logically impossible, that is, a contradiction. For example, if a truth table
shows that ¬S is a tautology, then we know that S is a contradiction.


Similarly, the truth table method gives us a way of showing that a col-
lection of sentences are mutually contradictory. Construct a joint truth table
for P1, . . . , Pn. These sentences are tt-contradictory if every row has an F as-
signed to at least one of the sentences. If the sentences are tt-contradictory,
we know they cannot all be true at once, simply in virtue of the meanings
of the truth functional connectives out of which they are built. We have al-
ready mentioned one such example: any pair of sentences, one of which is the
negation of the other.
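This row-by-row test is easy to mechanize. The sketch below is our own Python illustration (the helper name tt_contradictory is ours, not part of any software mentioned in this book); it looks for a row that makes every sentence true, and reports the set tt-contradictory if there is none.

from itertools import product

def tt_contradictory(sentences, atoms):
    """True if every truth assignment makes at least one sentence false."""
    for row in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, row))
        if all(s(v) for s in sentences):
            return False        # found a row in which all the sentences are true
    return True

# A ∨ B, ¬A, and ¬B form a tt-contradictory set: no row satisfies all three.
sentences = [lambda v: v["A"] or v["B"],
             lambda v: not v["A"],
             lambda v: not v["B"]]
print(tt_contradictory(sentences, ["A", "B"]))   # True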
The method of proof by contradiction, like proof by cases, is often encoun-
tered in everyday reasoning, though the derived contradiction is sometimes
left implicit. People will often assume a claim for the sake of argument and
then show that the assumption leads to something else that is known to be
false. They then conclude the negation of the original claim. This sort of rea-
soning is in fact an indirect proof: the inconsistency becomes explicit if we
add the known fact to our set of premises.
Let’s look at an example of this kind of reasoning. Imagine a defense
attorney presenting the following summary to the jury:
The prosecution claims that my client killed the owner of the KitKat
Club. Assume that they are correct. You’ve heard their own experts
testify that the murder took place at 5:15 in the afternoon. We also
know the defendant was still at work at City Hall at 4:45, according
to the testimony of five co-workers. It follows that my client had to
get from City Hall to the KitKat Club in 30 minutes or less. But
to make that trip takes 35 minutes under the best of circumstances,
and police records show that there was a massive traffic jam the day
of the murder. I submit that my client is innocent.
Clearly, reasoning like this is used all the time: whenever we assume some-
thing and then rule out the assumption on the basis of its consequences.
Sometimes these consequences are not contradictions, or even things that we
know to be false, but rather future consequences that we consider unaccept-
able. You might for example assume that you will go to Hawaii for spring
break, calculate the impact on your finances and ability to finish the term
papers coming due, and reluctantly conclude that you can’t make the trip.
When you reason like this, you are using the method of indirect proof.

Remember

Proof by contradiction: To prove ¬S using this method, assume S and


prove a contradiction ⊥.


Exercises

In the following exercises, decide whether the displayed argument is valid. If it is, turn in an infor-
mal proof, phrased in complete, well-formed English sentences, making use of first-order sentences as
convenient. Whenever you use proof by cases or proof by contradiction, say so. You don’t have to be
explicit about the use of simple proof steps like conjunction elimination. If the argument is invalid, con-
struct a counterexample world in Tarski’s World. (Argument 5.16 is valid, and so will not require a
counterexample.)

5.15 (➶|✎)
b is a tetrahedron.
c is a cube.
Either c is larger than b or else they are identical.
----------
b is smaller than c.

5.16 (✎)
Max or Claire is at home but either Scruffy or Carl is unhappy.
Either Max is not home or Carl is happy.
Either Claire is not home or Scruffy is unhappy.
----------
Scruffy is unhappy.

5.17 (➶|✎)
Cube(a) ∨ Tet(a) ∨ Large(a)
¬Cube(a) ∨ a = b ∨ Large(a)
¬Large(a) ∨ a = c
¬(c = c ∧ Tet(a))
----------
a = b ∨ a = c

5.18 (➶|✎)
Cube(a) ∨ Tet(a) ∨ Large(a)
¬Cube(a) ∨ a = b ∨ Large(a)
¬Large(a) ∨ a = c
¬(c = c ∧ Tet(a))
----------
¬(Large(a) ∨ Tet(a))

5.19 Consider the following sentences.


✎ 1. Folly was Claire’s pet at 2 pm or at 2:05 pm.
2. Folly was Max’s pet at 2 pm.
3. Folly was Claire’s pet at 2:05 pm.

Does (3) follow from (1) and (2)? Does (2) follow from (1) and (3)? Does (1) follow from (2)
and (3)? In each case, give either a proof of consequence, or describe a situation that makes the
premises true and the conclusion false. You may assume that Folly can only be one person’s
pet at any given time.

5.20 Suppose it is Friday night and you are going out with your boyfriend. He wants to see a romantic
✎ comedy, while you want to see the latest Wes Craven slasher movie. He points out that if he
watches the Wes Craven movie, he will not be able to sleep because he can’t stand the sight of
blood, and he has to take the MCAT test tomorrow. If he does not do well on the MCAT, he
won’t get into medical school. Analyze your boyfriend’s argument, pointing out where indirect
proof is being used. How would you rebut his argument?


5.21 Describe an everyday example of an indirect proof that you have used in the last few days.

5.22 Prove that indirect proof is a tautologically valid method of proof. That is, show that if
✎? P1 , . . . , Pn , S is tt-contradictory, then ¬S is a tautological consequence of P1 , . . . , Pn .

In the next three exercises we ask you to prove simple facts about the natural numbers. We do not expect
you to phrase the proofs in fol. You will have to appeal to basic facts of arithmetic plus the definitions
of even and odd number. This is OK, but make these appeals explicit. Also make explicit any use of proof
by contradiction.

5.23 (✎?) Assume that n² is odd. Prove that n is odd.

5.24 (✎?) Assume that n + m is odd. Prove that n × m is even.

5.25 (✎?) Assume that n² is divisible by 3. Prove that n² is divisible by 9.

5.26 A good way to make sure you understand a proof is to try to generalize it. Prove that √3 is
✎?? irrational. [Hint: You will need to figure out some facts about divisibility by 3 that parallel the
facts we used about even and odd, for example, the fact expressed in Exercise 5.25.] Can you
generalize these two results?

Section 5.4
Arguments with inconsistent premises
What follows from an inconsistent set of premises? If you look back at our
definition of logical consequence, you will see that every sentence is a conse-
quence of such a set. After all, if the premises are contradictory, then there
are no circumstances in which they are all true. Thus, there are no circum-
stances in which the premises are true and the conclusion is false. Which is
to say, in any situation in which the premises are all true (there aren’t any
of these!), the conclusion will be true as well. Hence any argument with an
inconsistent set of premises is trivially valid. In particular, if one can establish
a contradiction ⊥ on the basis of the premises, then one is entitled to assert
any sentence at all.
This often strikes students as a very odd method of reasoning, and for very
good reason. For recall the distinction between a valid argument and a sound
one. A sound argument is a valid argument with true premises. Even though
any argument with an inconsistent set of premises is valid, no such argument
is sound, since there is no way the premises of the argument can all be true.
For this reason, an argument with an inconsistent set of premises is not worth


much on its own. After all, the reason we are interested in logical consequence
is because of its relation to truth. If the premises can’t possibly be true, then
even knowing that the argument is valid gives us no clue as to the truth or
falsity of the conclusion. An unsound argument gives no more support for its
conclusion than an invalid one.
In general, methods of proof don’t allow us to show that an argument
is unsound. After all, the truth or falsity of the premises is not a matter of
logic, but of how the world happens to be. But in the case of arguments with
inconsistent premises, our methods of proof do give us a way to show that at
least one of the premises is false (though we might not know which one), and
hence that the argument is unsound. To do this, we prove that the premises
are inconsistent by deriving a contradiction.
Suppose, for example, you are given a proof that the following argument
is valid:

Home(max) ∨ Home(claire)
¬Home(max)
¬Home(claire)
Home(max) ∧ Happy(carl)

While it is true that this conclusion is a consequence of the premises, your


reaction should not be to believe the conclusion. Indeed, using proof by cases
we can show that the premises are inconsistent, and hence that the argument
is unsound. There is no reason to be convinced of the conclusion of an unsound
argument.

Remember

A proof of a contradiction ⊥ from premises P1 , . . . , Pn (without addi-


tional assumptions) shows that the premises are inconsistent. An argu-
ment with inconsistent premises is always valid, but more importantly,
always unsound.

Exercises

5.27 Give two different proofs that the premises of the above argument are inconsistent. Your first
✎ should use proof by cases but not DeMorgan’s law, while your second can use DeMorgan but
not proof by cases.

Chapter 6

Formal Proofs and Boolean Logic

The deductive system F is what is known as a system of natural deduction.
Such systems are intended to be models of the valid principles of reasoning
used in informal proofs. In this chapter, we will present the inference rules of
F that correspond to the informal principles of Boolean reasoning discussed in
the previous chapter. You will easily recognize the rules as formal counterparts
of some of the principles we’ve already discussed.
Although natural deduction systems like F are meant to model informal
reasoning, they are also designed to be relatively spare or “stripped down”
versions of such reasoning. For example, we told you that in giving an informal
proof, you can always presuppose steps that you and your audience already
know to be logically valid. So if one of the equivalence laws is not at issue
in a proof, you can simply apply it in a single step of your informal proof.
However, in F we will give you a very elegant but restricted collection of
inference rules that you must apply in constructing a formal proof. Many of
the valid inference steps that we have seen (like the DeMorgan Laws) are not
allowed as single steps; they must be justified in terms of more basic steps.
The advantage to this “lean and mean” approach is that it makes it easier to
prove results about the deductive system, since the fewer the rules, the simpler
the system. For example, one of the things we can prove is that anything you
could demonstrate with a system that contained rules for all of the named
logical equivalences of Chapter 4 can be proved in the leaner system F .
Systems of natural deduction like F use two rules for each connective,
one that allows us to prove statements containing the symbol, and one that
allows us to prove things from statements containing the symbol. The former
are called introduction rules since they let us introduce these symbols into
proofs. By contrast, the latter are called elimination rules. This is similar to
our treatment of the identity predicate in Chapter 2. If you go on to study
proof theory in more advanced logic courses, you will see that this elegant
pairing of rules has many advantages over systems that include more inference
steps as basic.
The formal rules of F are all implemented in the program Fitch, allowing
you to construct formal proofs much more easily than if you had to write
them out by hand. Actually, Fitch’s interpretation of the introduction and


elimination rules is a bit more generous in spirit than F. It doesn’t allow you
to do anything that F wouldn’t permit, but there are cases where Fitch will
let you do in one step what might take several in F. Also, many of Fitch’s
rules have “default applications” that can save you a lot of time. If you want
the default use of some rule, all you have to do is specify the rule and cite
the step or steps you are applying it to; Fitch will then fill in the appropriate
conclusion for you. At the end of each section below we’ll explain the default
uses of the rules introduced in that section.

Section 6.1
Conjunction rules

The simplest principles to formalize are those that involve the conjunction
symbol ∧. These are the rules of conjunction elimination and conjunction
introduction.

Conjunction elimination

The rule of conjunction elimination allows you to assert any conjunct Pi of a


conjunctive sentence P1 ∧ . . . ∧ Pi ∧ . . . ∧ Pn that you have already derived
in the proof. (Pi can, by the way, be any conjunct, including the first or the
last.) You justify the new step by citing the step containing the conjunction.
We abbreviate this rule with the following schema:

Conjunction Elimination (∧ Elim):

P1 ∧ . . . ∧ Pi ∧ . . . ∧ Pn
⋮
▷ Pi

You try it
................................................................
1. Open the file Conjunction 1. There are three sentences that you are asked
to prove. They are shown in the goal strip at the bottom of the proof
window as usual.

2. The first sentence you are to prove is Tet(a). To do this, first add a new
step to the proof and write the sentence Tet(a).


3. Next, go to the popup Rule? menu and under the Elimination Rules,
choose ∧.
4. If you try to check this step, you will see that it fails, because you have
not yet cited any sentences in support of the step. In this example,
you need to cite the single premise in support. Do this and then check the
step.
5. You should be able to prove each of the other sentences similarly, by means
of a single application of ∧ Elim. When you have proven these sentences,
check your goals and save the proof as Proof Conjunction 1.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations

Conjunction introduction

The corresponding introduction rule, conjunction introduction, allows you to


assert a conjunction P1 ∧ . . . ∧ Pn provided you have already established each
of its constituent conjuncts P1 through Pn . We will symbolize this rule in the
following way:

Conjunction Introduction (∧ Intro):

P1
⋮
Pn
⋮
▷ P1 ∧ . . . ∧ Pn

In this rule, we have used the notation:

P1
⋮
Pn

to indicate that each of P1 through Pn must appear in the proof before you
can assert their conjunction. The order in which they appear does not matter,
and they do not have to appear one right after another. They just need to
appear somewhere earlier in the proof.
Here is a simple example of our two conjunction rules at work together. It
is a proof of C ∧ B from A ∧ B ∧ C.


1. A ∧ B ∧ C
2. B ∧ Elim: 1
3. C ∧ Elim: 1
4. C ∧ B ∧ Intro: 3, 2

Let’s try our hand using both conjunction rules in Fitch.

You try it
................................................................
1. Open the file Conjunction 2. We will help you prove the two sentences
requested in the goals. You will need to use both of the conjunction rules
in each case.

2. The first goal is Medium(d) ∧ ¬Large(c). Add a new step and enter this
sentence. (Remember that you can copy the sentence from the goal pane
and paste it into the new step. It’s faster than typing it in.)

3. Above the step you just created, add two more steps, typing one of the
conjuncts in each. If you can prove these, then the conclusion will follow
by ∧ Intro. Show this by choosing this rule at the conjunction step and
citing the two conjuncts in support.

4. Now all you need to do is prove each of the conjuncts. This is easily done
using the rule ∧ Elim at each of these steps. Do this, cite the appropriate
support sentences, and check the proof. The first goal should check out.

5. Prove the second goal sentence similarly. Once both goals check out, save
your proof as Proof Conjunction 2.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations

Default and generous uses of the ∧ rules

As we said, Fitch is generous in its interpretation of the inference rules of F.


For example, Fitch considers the following to be an acceptable use of ∧ Elim:

17. Tet(a) ∧ Tet(b) ∧ Tet(c) ∧ Tet(d)


⋮
26. Tet(d) ∧ Tet(b) ∧ Elim: 17


What we have done here is pick two of the conjuncts from step 17 and assert
the conjunction of these in step 26. Technically, F would require us to de-
rive the two conjuncts separately and, like Humpty Dumpty, put them back
together again. Fitch does this for us.
Since Fitch lets you take any collection of conjuncts in the cited sentence
and assert their conjunction in any order, Fitch’s interpretation of ∧ Elim
allows you to prove that conjunction is “commutative.” In other words, you
can use it to take a conjunction and reorder its conjuncts however you please:

13. Tet(a) ∧ Tet(b)


⋮
21. Tet(b) ∧ Tet(a) ∧ Elim: 13
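One way to picture Fitch’s generous ∧ Elim is as a simple test on conjuncts: every conjunct of the concluded sentence must occur among the conjuncts of the cited sentence, in any order. The following sketch is our own Python illustration of that test, not Fitch’s actual implementation; the function name is ours.

def generous_and_elim(cited_conjuncts, concluded_conjuncts):
    """Model of Fitch's ∧ Elim: the conclusion may be any conjunction (or single
    conjunct) built from conjuncts of the cited sentence, in any order."""
    return all(c in cited_conjuncts for c in concluded_conjuncts)

cited = ["Tet(a)", "Tet(b)", "Tet(c)", "Tet(d)"]
print(generous_and_elim(cited, ["Tet(d)", "Tet(b)"]))    # True: like step 26 above
print(generous_and_elim(cited, ["Tet(d)", "Small(a)"]))  # False: Small(a) is not a cited conjunct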

You try it
................................................................
1. Open the file Conjunction 3. Notice that there are two goals. The first goal
asks you to prove Tet(c) ∧ Tet(a) from the premise. Strictly speaking, this
would take two uses of ∧ Elim followed by one use of ∧ Intro. However,
Fitch lets you do this with a single use of ∧ Elim. Try this and then check
the step.
2. Verify that the second goal sentence also follows by a single application of
Fitch’s rule of ∧ Elim. When you have proven these sentences, check your
goals and save the proof as Proof Conjunction 3.
3. Next try out other sentences to see whether they follow from the given
sentence by ∧ Elim. For example, does Tet(c) ∧ Small(a) follow? Should
it?

4. When you are satisfied you understand conjunction elimination, close the
file, but don’t save the changes you made in step 3.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
The ∧ Intro rule implemented in Fitch is also less restrictive than our dis-
cussion of the formal rule might suggest. First of all, Fitch does not care about
the order in which you cite the supporting sentences. Second, if you cite a sen-
tence, that sentence can appear more than once as a conjunct in the concluding
sentence. For example, you can use this rule to conclude Cube(a) ∧ Cube(a)
from the sentence Cube(a), if you want to for some reason.


Both of the conjunction rules have default uses. If at a new step you cite
a conjunction and specify the rule as ∧ Elim, then when you check the step
(or choose Check Proof), Fitch will fill in the blank step with the leftmost
conjunct in the cited sentence. If you cite several sentences and apply ∧ Intro,
Fitch will fill in the conjunction of those steps, ordering conjuncts in the same
order they were cited.

You try it
................................................................
1. Open the file Conjunction 4.

2. Move the focus to the first blank step, the one immediately following the
premises. Notice that this step has a rule specified, as well as a support
sentence cited. Check the step to see what default Fitch generates.

3. Then, focus on each successive step, try to predict what the default will
be, and check the step. (The last two steps give different results because
we entered the support steps in different orders.)

4. When you have checked all the steps, save your proof as Proof Conjunc-
tion 4.

5. Feel free to experiment with the rule defaults some more, to see when they
are useful.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
One final point: In applying conjunction introduction, you will sometimes
have to be careful about parentheses, due to our conventions about dropping
outermost parentheses. If one of the conjuncts is itself a conjunction, then
of course there is no need to add any parentheses before forming the larger
conjunction, unless you want to. For example, the following are both correct
applications of the rule. (The first is what Fitch’s default mechanism would
give you.)
Correct: 1. A ∧ B
2. C
3. (A ∧ B) ∧ C ∧ Intro: 1, 2

Correct: 1. A ∧ B
2. C
3. A ∧ B ∧ C ∧ Intro: 1, 2


However, if one of the conjuncts is a disjunction (or some other complex


sentence), to prevent ambiguity you may need to reintroduce the parentheses
that you omitted before. Thus the first of the following is a correct proof, but
the second contains a faulty application of conjunction introduction, since it
concludes with an ambiguous sentence.
Correct: 1. A ∨ B
2. C
3. (A ∨ B) ∧ C ∧ Intro: 1, 2

Wrong: 1. A ∨ B
2. C
3. A ∨ B ∧ C ∧ Intro: 1, 2

Section 6.2
Disjunction rules
We know: the conjunction rules were boring. Not so the disjunction rules,
particularly disjunction elimination.

Disjunction introduction

The rule of disjunction introduction allows you to go from a sentence Pi to


any disjunction that has Pi among its disjuncts, say P1 ∨ . . . ∨ Pi ∨ . . . ∨ Pn .
In schematic form:

Disjunction Introduction (∨ Intro):

Pi
⋮
▷ P1 ∨ . . . ∨ Pi ∨ . . . ∨ Pn

Once again, we stress that Pi may be the first or last disjunct of the conclusion.
Further, as with conjunction introduction, some thought ought to be given to
whether parentheses must be added to Pi to prevent ambiguity.
As we explained in Chapter 5, disjunction introduction is a less peculiar
rule than it may at first appear. But before we look at a sensible example of
how it is used, we need to have at our disposal the second disjunction rule.


Disjunction elimination

We now come to the first rule that corresponds to what we called a method
of proof in the last chapter. This is the rule of disjunction elimination, the
formal counterpart of proof by cases. Recall that proof by cases allows you
to conclude a sentence S from a disjunction P1 ∨ . . . ∨ Pn if you can prove
S from each of P1 through Pn individually. The form of this rule requires us
to discuss an important new structural feature of the Fitch-style system of
deduction. This is the notion of a subproof.
A subproof, as the name suggests, is a proof that occurs within the context
of a larger proof. As with any proof, a subproof generally begins with an as-
sumption, separated from the rest of the subproof by the Fitch bar. But the
assumption of a subproof, unlike a premise of the main proof, is only temporar-
ily assumed. Throughout the course of the subproof itself, the assumption acts
just like an additional premise. But after the subproof, the assumption is no
longer in force.
Before we give the schematic form of disjunction elimination, let’s look at
a particular proof that uses the rule. This will serve as a concrete illustration
of how subproofs appear in F.

1. (A ∧ B) ∨ (C ∧ D)

2. A ∧ B
3. B ∧ Elim: 2
4. B ∨ D ∨ Intro: 3
5. C ∧ D
6. D ∧ Elim: 5
7. B ∨ D ∨ Intro: 6
8. B ∨ D ∨ Elim: 1, 2–4, 5–7

With appropriate replacements for A, B, C, and D, this is a formalization


of the proof given on page 133. It contains two subproofs. One of these runs
from line 2 to 4, and shows that B ∨ D follows if we (temporarily) assume
A ∧ B. The other runs from line 5 to 7, and shows that the same conclu-
sion follows from the assumption C ∧ D. These two proofs, together with the
premise (A ∧ B) ∨ (C ∧ D), are just what we need to apply the method of proof
by cases—or as we will now call it, the rule of disjunction elimination.
Look closely at this proof and compare it to the informal proof given
on page 133 to see if you can understand what is going on. Notice that the


assumption steps of our two subproofs do not have to be justified by a rule any
more than the premise of the larger “parent” proof requires a justification.
This is because we are not claiming that these assumptions follow from what
comes before, but are simply assuming them to show what follows from their
supposition. Notice also that we have used the rule ∨ Intro twice in this proof,
since that is the only way we can derive the desired sentence in each subproof.
Although it seems like we are throwing away information when we infer B ∨ D
from the stronger claim B, when you consider the overall proof, it is clear that
B ∨ D is the strongest claim that follows from the original premise.
We can now state the schematic version of disjunction elimination.

Disjunction Elimination (∨ Elim):

P1 ∨ . . . ∨ Pn
⋮
   P1
   ⋮
   S
⋮
   Pn
   ⋮
   S
⋮
▷ S

What this says is that if you have established a disjunction P1 ∨. . . ∨Pn , and
you have also shown that S follows from each of the disjuncts P1 through Pn ,
then you can conclude S. Again, it does not matter what order the subproofs
appear in, or even that they come after the disjunction. When applying the
rule, you will cite the step containing the disjunction, plus each of the required
subproofs.
Let’s look at another example of this rule, to emphasize how justifications
involving subproofs are given. Here is a proof showing that A follows from the
sentence (B ∧ A) ∨ (A ∧ C).


1. (B ∧ A) ∨ (A ∧ C)

2. B ∧ A
3. A ∧ Elim: 2
4. A ∧ C
5. A ∧ Elim: 4
6. A ∨ Elim: 1, 2–3, 4–5

The citation for step 6 shows the form we use when citing subproofs. The
citation “n–m” is our way of referring to the subproof that begins on line n
and ends on line m.
Sometimes, in using disjunction elimination, you will find it natural to use
the reiteration rule introduced in Chapter 3. For example, suppose we modify
the above proof to show that A follows from (B ∧ A) ∨ A.

1. (B ∧ A) ∨ A

2. B ∧ A
3. A ∧ Elim: 2
4. A
5. A Reit: 4
6. A ∨ Elim: 1, 2–3, 4–5

Here, the assumption of the second subproof is A, exactly the sentence we


want to prove. So all we need to do is repeat that sentence to get the subproof
into the desired form. (We could also just give a subproof with one step, but
it is more natural to use reiteration in such cases.)

You try it
................................................................
1. Open the file Disjunction 1. In this file, you are asked to prove

Medium(c) ∨ Large(c)
from the sentence

(Cube(c) ∧ Large(c)) ∨ Medium(c)


We are going to step you through the construction of the following proof:

1. (Cube(c) ∧ Large(c)) ∨ Medium(c)

2. Cube(c) ∧ Large(c)
3. Large(c) ∧ Elim: 2
4. Medium(c) ∨ Large(c) ∨ Intro: 3
5. Medium(c)
6. Medium(c) ∨ Large(c) ∨ Intro: 5
7. Medium(c) ∨ Large(c) ∨ Elim: 1, 2–4, 5–6
2. To use ∨ Elim in this case, we need to get two subproofs, one for each
of the disjuncts in the premise. It is a good policy to begin by specifying
both of the necessary subproofs before doing anything else. To start a
subproof, add a new step and choose New Subproof from the Proof
menu. Fitch will indent the step and allow you to enter the sentence you
want to assume. Enter the first disjunct of the premise, Cube(c) ∧ Large(c),
as the assumption of this subproof.

3. Rather than work on this subproof now, let’s specify the second case before
we forget what we’re trying to do. To do this, we need to end the first
subproof and start a second subproof after it. You end the current subproof
by choosing End Subproof from the Proof menu. This will give you a
new step outside of, but immediately following the subproof.

4. Start your second subproof at this new step by choosing New Subproof
from the Proof menu. This time type the other disjunct of the premise,
Medium(c). We have now specified the assumptions of the two cases we
need to consider. Our goal is to prove that the conclusion follows in both
of these cases.
5. Go back to the first subproof and add a step following the assumption. (Fo-
cus on the assumption step of the subproof and choose Add Step After
from the Proof menu.) In this step use ∧ Elim to prove Large(c). Then
add another step to that subproof and prove the goal sentence, using ∨
Intro. In both steps, you will have to cite the necessary support sentences.
6. After you’ve finished the first subproof and all the steps check out, move
the focus slider to the assumption step of the second subproof and add a
new step. Use ∨ Intro to prove the goal sentence from your assumption.


7. We’ve now derived the goal sentence in both of the subproofs, and so are
ready to add the final step of our proof. While focussed on the last step of
the second subproof, choose End Subproof from the Proof menu. Enter
the goal sentence into this new step.

8. Specify the rule in the final step as ∨ Elim. For support, cite the two
subproofs and the premise. Check your completed proof. If it does not
check out, compare your proof carefully with the proof displayed above.
Have you accidentally gotten one of your subproofs inside the other one?
If so, delete the misplaced subproof by focusing on the assumption and
choosing Delete Step from the Proof menu. Then try again.

9. When the entire proof checks out, save it as Proof Disjunction 1.


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations

Default and generous uses of the ∨ rules

There are a couple of ways in which Fitch is more lenient in checking ∨ Elim
than the strict form of the rule suggests. First, the sentence S does not have
to be the last sentence in the subproof, though usually it will be. S simply has
to appear on the “main level” of each subproof, not necessarily as the very
last step. Second, if you start with a disjunction containing more than two
disjuncts, say P ∨ Q ∨ R, Fitch doesn’t require three subproofs. If you have
one subproof starting with P and one starting with Q ∨ R, or one starting
with Q and one starting with P ∨ R, then Fitch will still be happy, as long as
you’ve proven S in each of these cases.
Both disjunction rules have default applications, though they work rather
differently. If you cite appropriate support for ∨ Elim (i.e., a disjunction
and subproofs for each disjunct) and then check the step without typing a
sentence, Fitch will look at the subproofs cited and, if they all end with the
same sentence, insert that sentence into the step. If you cite a sentence and
apply ∨ Intro without typing a sentence, Fitch will insert the cited sentence
followed by ∨, leaving the insertion point after the ∨ so you can type in the
rest of the disjunction you had in mind.

You try it
................................................................
1. Open the file Disjunction 2. The goal is to prove the sentence

(Cube(b) ∧ Small(b)) ∨ (Cube(b) ∧ Large(b))


The required proof is almost complete, though it may not look like it.
2. Focus on each empty step in succession, checking the step so that Fitch
will fill in the default sentence. On the second empty step you will have to
finish the sentence by typing in the second disjunct, (Cube(b) ∧ Large(b)),
of the goal sentence. (If the last step does not generate a default, it is
because you have not typed the right thing in the ∨ Intro step.)
3. When you are finished, see if the proof checks out. Do you understand the
proof? Could you have come up with it on your own?
4. Save your completed proof as Proof Disjunction 2.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations

Exercises

6.1 If you skipped any of the You try it sections, go back and do them now. Submit the files Proof
➶ Conjunction 1, Proof Conjunction 2, Proof Conjunction 3, Proof Conjunction 4, Proof Disjunction
1, and Proof Disjunction 2.

6.2 Open the file Exercise 6.2, which contains an incomplete formal proof. As it stands, none of
➶ the steps check out, either because no rule has been specified, no support steps cited, or no
sentence typed in. Provide the missing pieces and submit the completed proof.

Use Fitch to construct formal proofs for the following arguments. You will find Exercise files for each
argument in the usual place. As usual, name your solutions Proof 6.x.

6.3 (➶)
a = b ∧ b = c ∧ c = d
----------
a = c ∧ b = d

6.4 (➶)
(A ∧ B) ∨ C
----------
C ∨ B

6.5 (➶)
A ∧ (B ∨ C)
----------
(A ∧ B) ∨ (A ∧ C)

6.6 (➶)
(A ∧ B) ∨ (A ∧ C)
----------
A ∧ (B ∨ C)

Section 6.3
Negation rules
Last but not least are the negation rules. It turns out that negation introduc-
tion is our most interesting and complex rule.


Negation elimination

The rule of negation elimination corresponds to a very trivial valid step, from
¬¬P to P. Schematically:

Negation Elimination (¬ Elim):

¬¬P
⋮
▷ P

Negation elimination gives us one direction of the principle of double nega-


tion. You might reasonably expect that our second negation rule, negation
introduction, would simply give us the other direction. But if that’s what you
guessed, you guessed wrong.

Negation introduction

The rule of negation introduction corresponds to the method of indirect proof


or proof by contradiction. Like ∨ Elim, it involves the use of a subproof, as
will the formal analogs of all nontrivial methods of proof. The rule says that
if you can prove a contradiction ⊥ on the basis of an additional assumption
P, then you are entitled to infer ¬P from the original premises. Schematically:

Negation Introduction (¬ Intro):

   P
   ⋮
   ⊥
▷ ¬P

There are different ways of understanding this rule, depending on how


we interpret the contradiction symbol ⊥. Some authors interpret it simply
as shorthand for any contradiction of the form Q ∧ ¬Q. If we construed the
schema that way, we wouldn’t have to say anything more about it. But we
will treat ⊥ as a symbol in its own right, to be read “contradiction.” This
has several advantages that will become apparent when you use the rule. The
one disadvantage is that we need to have rules about this special symbol. We
introduce these rules next.


⊥ Introduction

The rule of ⊥ Introduction (⊥ Intro) allows us to obtain the contradiction


symbol if we have established an explicit contradiction in the form of some
sentence P and its negation ¬P.
⊥ Introduction (⊥ Intro):

P
⋮
¬P
⋮
▷ ⊥

Ordinarily, you will only apply ⊥ Intro in the context of a subproof, to


show that the subproof’s assumption leads to a contradiction. The only time
you will be able to derive ⊥ in your main proof (as opposed to a subproof) is
when the premises of your argument are themselves inconsistent. In fact, this
is how we give a formal proof that a set of premises is inconsistent. A formal
proof of inconsistency is a proof that derives ⊥ at the main level of the proof.
Let’s try out the rules of ⊥ Intro and ¬ Intro to see how they work.

You try it
................................................................
1. To illustrate these rules, we will show you how to prove ¬¬A from A.
This is the other direction of double negation. Use Fitch to open the file
Negation 1.
2. We will step you through the construction of the following simple proof.

1. A

2. ¬A
3. ⊥ ⊥ Intro: 1, 2
4. ¬¬A ¬ Intro: 2–3
3. To construct this proof, add a step immediately after the premise. Turn it
into a subproof by choosing New Subproof from the Proof menu. Enter
the assumption ¬A.
4. Add a new step to the subproof and enter ⊥, changing the rule to ⊥ Intro.
Cite the appropriate steps and check the step.


5. Now end the subproof and enter the final sentence, ¬¬A, after the sub-
proof. Specify the rule as ¬ Intro, cite the preceding subproof and check
the step. Your whole proof should now check out.

6. Notice that in the third line of your proof you cited a step outside the
subproof, namely the premise. This is legitimate, but raises an important
issue. Just what steps can be cited at a given point in a proof? As a first
guess, you might think that you can cite any earlier step. But this turns
out to be wrong. We will explain why, and what the correct answer is, in
the next section.

7. Save your proof as Proof Negation 1.


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
The contradiction symbol ⊥ acts just like any other sentence in a proof. In
particular, if you are reasoning by cases and derive ⊥ in each of your subproofs,
then you can use ∨ Elim to derive ⊥ in your main proof. For example, here
is a proof that the premises A ∨ B, ¬A, and ¬B are inconsistent.
1. A ∨ B
2. ¬A
3. ¬B

4. A
5. ⊥ ⊥ Intro: 4, 2
6. B
7. ⊥ ⊥ Intro: 6, 3
8. ⊥ ∨ Elim: 1, 4–5, 6–7

The important thing to notice here is step 8, where we have applied ∨


Elim to extract the contradiction symbol from our two subproofs. This is
clearly justified, since we have shown that whichever of A or B holds, we
immediately arrive at a contradiction. Since the premises tell us that one or
the other holds, the premises are inconsistent.

Other ways of introducing ⊥

The rule of ⊥ Intro recognizes only the most blatant contradictions, those
where you have established a sentence P and its negation ¬P. What if in the
course of a proof you come across an inconsistency of some other form? For


example, suppose you manage to derive a single tt-contradictory sentence


like ¬(A ∨ ¬A), or the two sentences ¬A ∨ ¬B and A ∧ B, which together form
a tt-contradictory set?
It turns out that if you can prove any tt-contradictory sentence or sen-
tences, the rules we’ve already given you will allow you to prove ⊥. It may
take a fair amount of effort and ingenuity, but it is possible. We’ll eventually
prove this, but for now you’ll have to take our word for it.
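For instance, here is one way a derivation of ⊥ from the second pair of
sentences, ¬A ∨ ¬B and A ∧ B, might go, using only the rules introduced so
far (proof by cases, with a contradiction reached in each case):

1. ¬A ∨ ¬B
2. A ∧ B

3.   ¬A
4.   A        ∧ Elim: 2
5.   ⊥        ⊥ Intro: 4, 3
6.   ¬B
7.   B        ∧ Elim: 2
8.   ⊥        ⊥ Intro: 7, 6
9. ⊥          ∨ Elim: 1, 3–5, 6–8
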
One way to check whether some sentences are tt-contradictory is to try to
derive ⊥ from them using a single application of Taut Con. In other words,
enter ⊥, cite the sentences, and choose Taut Con from the Rule? menu. If
Taut Con tells you that ⊥ follows from the cited sentences, then you can be
sure that it is possible to prove this using just the introduction and elimination
rules for ∧, ∨, ¬, and ⊥.
Of course, there are other forms of contradiction besides tt-contradictions.
For example, suppose you manage to prove the three sentences Cube(b), b = c,
and ¬Cube(c). These sentences are not tt-contradictory, but you can see
that a single application of = Elim will give you the tt-contradictory pair
Cube(c) and ¬Cube(c). If you suspect that you have derived some sentences
whose inconsistency results from the Boolean connectives plus the identity
predicate, you can check this using the FO Con mechanism, since FO Con
understands the meaning of =. If FO Con says that ⊥ follows from the cited
sentences (and if those sentences do not contain quantifiers), then you should
be able to prove ⊥ using just the introduction and elimination rules for =, ∧,
∨, ¬, and ⊥.
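
To give a feel for what is involved, here is a sketch of how such a proof might
go for the three sentences just mentioned:

1. Cube(b)
2. b = c
3. ¬Cube(c)

4. Cube(c)    = Elim: 1, 2
5. ⊥          ⊥ Intro: 4, 3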
The only time you may arrive at a contradiction but not be able to prove
⊥ using the rules of F is if the inconsistency depends on the meanings of
predicates other than identity. For example, suppose you derived the contra-
diction n < n, or the contradictory pair of sentences Cube(b) and Tet(b). The
rules of F give you no way to get from these sentences to a contradiction of
the form P and ¬P, at least without some further premises.
What this means is that in Fitch, the Ana Con mechanism will let you
establish contradictions that can’t be derived in F. Of course, the Ana Con
mechanism only understands predicates in the blocks language (and even
there, it excludes Adjoins and Between). But it will allow you to derive ⊥
from, for example, the two sentences Cube(b) and Tet(b). You can either do
this directly, by entering ⊥ and citing the two sentences, or indirectly, by
using Ana Con to prove, say, ¬Cube(b) from Tet(b).
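For instance, the indirect route just mentioned might look something like this:

1. Cube(b)
2. Tet(b)

3. ¬Cube(b)   Ana Con: 2
4. ⊥          ⊥ Intro: 1, 3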


You try it
................................................................
1. Open Negation 2 using Fitch. In this file you will find an incomplete proof.
As premises, we have listed a number of sentences, several groups of which
are contradictory.
2. Focus on each step that contains the ⊥ symbol. You will see that various
sentences are cited in support of the step. Only one of these steps is an
application of the ⊥ Intro rule. Which one? Specify the rule for that step
as ⊥ Intro and check it.
3. Among the remaining steps, you will find one where the cited sentences
form a tt-contradictory set of sentences. Which one? Change the justifi-
cation at that step to Taut Con and check the step. Since it checks out,
we assure you that you can derive ⊥ from these same premises using just
the Boolean rules.
4. Of the remaining steps, the supports of two are contradictory in view of the
meaning of the identity symbol =. Which steps? Change the justification
at those steps to FO Con and check the steps. To derive ⊥ from these
premises, you would need the identity rules (in one case = Elim, in the
other = Intro).
5. Verify that the remaining steps cannot be justified by any of the rules ⊥
Intro, Taut Con or FO Con. Change the justification at those steps to
Ana Con and check the steps.
6. Save your proof as Proof Negation 2. (Needless to say, this is a formal proof
of inconsistency with a vengeance!)
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations

⊥ Elimination

As we remarked earlier, if in a proof, or more importantly in some subproof,


you are able to establish a contradiction, then you are entitled to assert any
fol sentence P whatsoever. In our formal system, this is modeled by the rule
of ⊥ Elimination (⊥ Elim).
⊥ Elimination (⊥ Elim):


⊥
...
▷ P


The following You try it section illustrates both of the ⊥ rules. Be sure
to go through it, as it presents a proof tactic you will have several occasions
to use.

You try it
................................................................
1. It often happens in giving proofs using ∨ Elim that one really wants
to eliminate one or more of the disjuncts, because they contradict other
assumptions. The form of the ∨ Elim rule does not permit this, though.
The proof we will construct here shows how to get around this difficulty.

2. Using Fitch, open the file Negation 3. We will use ∨ Elim and the two ⊥
rules to prove P from the premises P ∨ Q and ¬Q.
3. Start two subproofs, the first with assumption P, the second with assump-
tion Q. Our goal is to establish P in both subproofs.

4. In the first subproof, we can simply use reiteration to repeat the assump-
tion P.
5. In the second subproof, how will we establish P? In an informal proof,
we would simply eliminate this case, because the assumption contradicts
one of the premises. In a formal proof, though, we must establish our goal
sentence P in both subproofs, and this is where ⊥ Elim is useful. First use
⊥ Intro to show that this case is contradictory. You will cite the assumed
sentence Q and the second premise ¬Q. Once you have ⊥ as the second
step of this subproof, use ⊥ Elim to establish P in this subproof.

6. Since you now have P in both subproofs, you can finish the proof using ∨
Elim. Complete the proof.
7. Save your proof as Proof Negation 3.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
It turns out that we do not really need the rule of ⊥ Elim. You can prove
any sentence from a contradiction without it; it just takes longer. Suppose, for
example, that you have established a contradiction at step 17 of some proof.
Here is how you can introduce P at step 21 without using ⊥ Elim.


17. ⊥
18. ¬P
19. ⊥ Reit: 17
20. ¬¬P ¬ Intro: 18–19
21. P ¬ Elim: 20

Still, we include ⊥ Elim to make our proofs shorter and more natural.

Default and generous uses of the ¬ rules

The rule of ¬ Elim allows you to take off two negation signs from the front of
a sentence. Repeated uses of this rule would allow you to remove four, six, or
indeed any even number of negation signs. For this reason, the implementation
of ¬ Elim in Fitch allows you to remove any even number of negation signs
in one step.
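For example, from ¬¬¬¬A you could get to A by two applications of ¬ Elim,
or by a single application that removes all four negation signs at once. The
two-step version would look roughly like this:

1. ¬¬¬¬A

2. ¬¬A       ¬ Elim: 1
3. A         ¬ Elim: 2
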
Both of the negation rules have default applications. In a default application
of ¬ Elim, Fitch will remove as many negation signs as possible from the front
of the cited sentences (the number must be even, of course) and insert the
resulting sentence at the ¬ Elim step. In a default application of ¬ Intro,
the inserted sentence will be the negation of the assumption step of the cited
subproof.

You try it
................................................................
1. Open the file Negation 4. First look at the goal to see what sentence we
are trying to prove. Then focus on each step in succession and check the
step. Before moving to the next step, make sure you understand why the
step checks out and, more important, why we are doing what we are doing
at that step. At the empty steps, try to predict which sentence Fitch will
provide as a default before you check the step.
2. When you are done, make sure you understand the completed proof. Save
your file as Proof Negation 4.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations

Exercises

6.7 If you skipped any of the You try it sections, go back and do them now. Submit the files
➶ Proof Negation 1, Proof Negation 2, Proof Negation 3, and Proof Negation 4.


6.8 (Substitution) In informal proofs, we allow you to substitute logically equivalent sentences
➶ for one another, even when they occur in the context of a larger sentence. For example, the
following inference results from two uses of double negation, each applied to a part of the whole
sentence:
P ∧ (Q ∨ ¬¬R)
¬¬P ∧ (Q ∨ R)

How would we prove this using F, which has no substitution rule? Open the file Exercise 6.8,
which contains an incomplete formal proof of this argument. As it stands, none of the proof’s
steps check out, because no rules or support steps have been cited. Provide the missing justi-
fications and submit the completed proof.

Evaluate each of the following arguments. If the argument is valid, use Fitch to give a formal proof using
the rules you have learned. If it is not valid, use Tarski’s World to construct a counterexample world. In
the last two proofs you will need to use Ana Con to show that certain atomic sentences contradict one
another to introduce ⊥. Use Ana Con only in this way. That is, your use of Ana Con should cite
exactly two atomic sentences in support of an introduction of ⊥. If you have difficulty with any of these
exercises, you may want to skip ahead and read Section 6.5.

6.9 ➶    Cube(b)
         ¬(Cube(c) ∧ Cube(b))
         -----
         ¬Cube(c)

6.10 ➶   Cube(a) ∨ Cube(b)
         ¬(Cube(c) ∧ Cube(b))
         -----
         ¬Cube(c)

6.11 ➶   Dodec(e)
         Small(e)
         ¬Dodec(e) ∨ Dodec(f) ∨ Small(e)
         -----
         Dodec(f)

6.12 ➶   Dodec(e)
         ¬Small(e)
         ¬Dodec(e) ∨ Dodec(f) ∨ Small(e)
         -----
         Dodec(f)

6.13 ➶   Dodec(e)
         Large(e)
         ¬Dodec(e) ∨ Dodec(f) ∨ Small(e)
         -----
         Dodec(f)

6.14 ➶   SameRow(b, f) ∨ SameRow(c, f) ∨ SameRow(d, f)
         ¬SameRow(c, f)
         FrontOf(b, f)
         ¬(SameRow(d, f) ∧ Cube(f))
         -----
         ¬Cube(f)


In the following two exercises, determine whether the sentences are consistent. If they are, use Tarski’s
World to build a world where the sentences are both true. If they are inconsistent, use Fitch to give a
proof that they are inconsistent (that is, derive ⊥ from them). You may use Ana Con in your proof,
but only applied to literals (that is, atomic sentences or negations of atomic sentences).

6.15 ➶   ¬(Larger(a, b) ∧ Larger(b, a))
         ¬SameSize(a, b)

6.16 ➶   Smaller(a, b) ∨ Smaller(b, a)
         SameSize(a, b)

Section 6.4
The proper use of subproofs

Subproofs are the characteristic feature of Fitch-style deductive systems. It


is important that you understand how to use them properly, since if you are
not careful, you may “prove” things that don’t follow from your premises. For
example, the following formal proof looks like it is constructed according to
our rules, but it purports to prove that A ∧ B follows from (B ∧ A) ∨ (A ∧ C),
which is clearly not right.

1. (B ∧ A) ∨ (A ∧ C)

2. B ∧ A
3. B ∧ Elim: 2
4. A ∧ Elim: 2
5. A ∧ C
6. A ∧ Elim: 5
7. A ∨ Elim: 1, 2–4, 5–6
8. A ∧ B ∧ Intro: 7, 3

The problem with this proof is step 8. In this step we have used step
3, a step that occurs within an earlier subproof. But it turns out that this
sort of justification—one that reaches back inside a subproof that has already
ended—is not legitimate. To understand why it’s not legitimate, we need to
think about what function subproofs play in a piece of reasoning.
A subproof typically looks something like this:


P
...
Q
    R
    ...
    S
T
...

Subproofs begin with the introduction of a new assumption, in this exam-


ple R. The reasoning within the subproof depends on this new assumption,
together with any other premises or assumptions of the parent proof. So in
our example, the derivation of S may depend on both P and R. When the
subproof ends, indicated by the end of the vertical line that ties the subproof
together, the subsequent reasoning can no longer use the subproof’s assump-
tion, or anything that depends on it. We say that the assumption has been
discharged or that the subproof has been ended.
When an assumption has been discharged, the individual steps of its sub-
proof are no longer accessible. It is only the subproof as a whole that can be
cited as justification for some later step. What this means is that in justifying
the assertion of T in our example, we could cite P, Q, and the subproof as a
whole, but we could not cite individual items in the subproof like R or S. For
these steps rely on assumptions we no longer have at our disposal. Once the
subproof has been ended, they are no longer accessible.
This, of course, is where we went wrong in step 8 of the fallacious proof
given earlier. We cited a step in a subproof that had been ended, namely,
step 3. But the sentence at that step, B, had been proven on the basis of the
assumption B ∧ A, an assumption we only made temporarily. The assumption
is no longer in force at step 8, and so cannot be used at that point.
This injunction does not prevent you from citing, from within a subproof,
items that occur earlier outside the subproof, as long as they do not occur in
subproofs that ended before that step. For example, in the schematic proof
given above, the justification for S could well include the step that contains Q.
This observation becomes more pointed when you are working in a sub-
proof of a subproof. We have not yet seen any examples where we needed to
have subproofs within subproofs, but such examples are easy to come by. Here
is one, which is a proof of one direction of the first DeMorgan law.


1. ¬(P ∧ Q)

2. ¬(¬P ∨ ¬Q)

3. ¬P
4. ¬P ∨ ¬Q ∨ Intro: 3
5. ⊥ ⊥ Intro: 4, 2
6. ¬¬P ¬ Intro: 3–5
7. P ¬ Elim: 6
8. ¬Q
9. ¬P ∨ ¬Q ∨ Intro: 8
10. ⊥ ⊥ Intro: 9, 2
11. ¬¬Q ¬ Intro: 8–10
12. Q ¬ Elim: 11
13. P∧Q ∧ Intro: 7, 12
14. ¬(P ∧ Q) Reit: 1
15. ⊥ ⊥ Intro: 13, 14
16. ¬¬(¬P ∨ ¬Q) ¬ Intro: 2–15
17. ¬P ∨ ¬Q ¬ Elim: 16

Notice that the subproof 2–15 contains two subproofs, 3–5 and 8–10. In
step 5 of subproof 3–5, we cite step 2 from the parent subproof 2–15. Similarly,
in step 10 of the subproof 8–10, we cite step 2. This is legitimate since the
subproof 2–15 has not been ended by step 10. While we did not need to in
this proof, we could in fact have cited step 1 in either of the sub-subproofs.
Another thing to note about this proof is the use of the Reiteration rule at
step 14. We did not need to use Reiteration here, but did so just to illustrate
a point. When it comes to subproofs, Reiteration is like any other rule: when
you use it, you can cite steps outside of the immediate subproof, if the proofs
that contain the cited steps have not yet ended. But you cannot cite a step
inside a subproof that has already ended. For example, if we replaced the
justification for step 15 with “Reit: 10,” then our proof would no longer be
correct.
As you’ll see, most proofs in F require subproofs inside subproofs—what
we call nested subproofs. To create such a subproof in Fitch, you just choose
New Subproof from the Proof menu while you’re inside the first subproof.
You may already have done this by accident in constructing earlier proofs. In
the exercises that follow, you’ll have to do it on purpose.


Remember

◦ In justifying a step of a subproof, you may cite any earlier step con-
tained in the main proof, or in any subproof whose assumption is still
in force. You may never cite individual steps inside a subproof that
has already ended.

◦ Fitch enforces this automatically by not permitting the citation of


individual steps inside subproofs that have ended.

Exercises

6.17 Try to recreate the following “proof” using Fitch.



1. (Tet(a) ∧ Large(c)) ∨ (Tet(a) ∧ Dodec(b))

2. Tet(a) ∧ Large(c)
3. Tet(a) ∧ Elim: 2
4. Tet(a) ∧ Dodec(b)
5. Dodec(b) ∧ Elim: 4
6. Tet(a) ∧ Elim: 4
7. Tet(a) ∨ Elim: 1, 2–3, 4–6
8. Tet(a) ∧ Dodec(b) ∧ Intro: 7, 5

What step won’t Fitch let you perform? Why? Is the conclusion a consequence of the premise?
Discuss this example in the form of a clear English paragraph, and turn your paragraph in to
your instructor.

Use Fitch to give formal proofs for the following arguments. You will need to use subproofs within
subproofs to prove these.

6.18 ➶   A ∨ B
         -----
         A ∨ ¬¬B

6.19 ➶   A ∨ B
         ¬B ∨ C
         -----
         A ∨ C

6.20 ➶   A ∨ B
         A ∨ C
         -----
         A ∨ (B ∧ C)


Section 6.5
Strategy and tactics

Many students try constructing formal proofs by blindly piecing together a se-
quence of steps permitted by the introduction and elimination rules, a process
no more related to reasoning than playing solitaire. This approach occasion-
ally works, but more often than not it will fail—or at any rate, make it harder
to find a proof. In this section, we will give you some advice about how to
go about finding proofs when they don’t jump right out at you. The advice
consists of two important strategies and an essential maxim.
Here is the maxim: Always keep firmly in mind what the sentences in your
proof mean! Students who pay attention to the meanings of the sentences avoid
innumerable pitfalls, among them the pitfall of trying to prove a sentence that
doesn’t really follow from the information given. Your first step in trying to
construct a proof should always be to convince yourself that the claim made
by the conclusion is a consequence of the premises. You should do this even if
the exercise tells you that the argument is valid and simply asks you to find a
proof. For in the process of understanding the sentences and recognizing the
argument’s validity, you will often get some idea how to prove it.
After you’re convinced that the argument is indeed valid, the first strategy
for finding a formal proof is to try giving an informal proof, the kind you might
use to convince a fellow classmate. Often the basic structure of your informal
reasoning can be directly formalized using the rules of F. For example, if
you find yourself using an indirect proof, then that part of the reasoning will
probably require negation introduction in F . If you use proof by cases, then
you’ll almost surely formalize the proof using disjunction elimination.
Suppose you have decided that the argument is valid, but are having trou-
ble finding an informal proof. Or suppose you can’t see how your informal
proof can be converted into a proof that uses just the rules of F. The second
strategy is helpful in either of these cases. It is known as “working backwards.”
What you do is look at the conclusion and see what additional sentence or
sentences would allow you to infer that conclusion. Then you simply insert
these steps into your proof, not worrying about exactly how they will be jus-
tified, and cite them in support of your goal sentence. You then take these
intermediate steps as new goals and see if you can prove them. Once you do,
your proof will be complete.
Let’s work through an example that applies both of these strategies. Sup-
pose you are asked to give a formal proof of the argument:


¬P ∨ ¬Q
¬(P ∧ Q)

You’ll recognize this as an application of one of the DeMorgan laws, so you


know it’s valid. But when you think about it (applying our maxim) you may
find that what convinces you of its validity is the following observation, which
is hard to formalize: if the premise is true, then either P or Q is false, and
that will make P ∧ Q false, and hence the conclusion true. Though this is
a completely convincing argument, it is not immediately clear how it would
translate into the introduction and elimination rules of F .
Let’s try working backwards to see if we can come up with an informal
proof that is easier to formalize. Since the conclusion is a negation, we could
prove it by assuming P ∧ Q and deriving a contradiction. So let’s suppose
P ∧ Q and take ⊥ as our new goal. Now things look a little clearer. For the
premise tells us that either ¬P or ¬Q is true, but either of these cases directly
contradicts one of the conjuncts of our assumption. So proof by cases will
allow us to derive a contradiction. For the record, here is how we would state
this as an informal proof:

Proof: We are given ¬P ∨ ¬Q and want to prove ¬(P ∧ Q). For pur-
poses of reductio, we will assume P ∧ Q and attempt to derive a con-
tradiction. There are two cases to consider, since we are given that
either ¬P or ¬Q is true. But each of these contradicts the assump-
tion P ∧ Q: ¬P contradicts the first conjunct and ¬Q contradicts the
second. Consequently, our assumption leads to a contradiction, and
so our proof is complete.

In the following, we lead you through the construction of a formal proof


that models this informal reasoning.

You try it
................................................................
1. Open the file Strategy 1. Begin by entering the desired conclusion in a new
step of the proof. We will construct the proof working backwards, just
like we found our informal proof. Add a step before the conclusion you’ve
entered so that your proof looks something like this:

1. ¬P ∨ ¬Q
2. . . . Rule?
3. ¬(P ∧ Q) Rule?


2. The main method used in our informal proof was reductio, which corre-
sponds to negation introduction. So change the blank step into a subproof
with the assumption P ∧ Q and the contradiction symbol at the bottom.
Also add a step in between these to remind you that that’s where you still
need to fill things in, and enter your justification for the final step, so you
remember why you added the subproof. At this point your proof should
look roughly like this:

1. ¬P ∨ ¬Q

2. P ∧ Q
3. . . . Rule?
4. ⊥ Rule?
5. ¬(P ∧ Q) ¬ Intro: 2–4

3. Our informal proof showed that there was a contradiction whichever of ¬P


or ¬Q was the case. The formal counterpart of proof by cases is disjunction
elimination, so the next step is to fill in two subproofs, one assuming ¬P,
the other assuming ¬Q, and both concluding with ⊥. Make sure you put
in the justification for the step where you apply ∨ Elim, and it’s a good
idea to add empty steps to remind yourself where you need to continue
working. Here’s what your proof should look like now:

1. ¬P ∨ ¬Q

2. P ∧ Q

3. ¬P
4. . . . Rule?
5. ⊥ Rule?
6. ¬Q
7. . . . Rule?
8. ⊥ Rule?
9. ⊥ ∨ Elim: 1, 3–5, 6–8
10. ¬(P ∧ Q) ¬ Intro: 2–9

4. Filling in the remaining steps is easy. Finish your proof as follows:


1. ¬P ∨ ¬Q

2. P ∧ Q

3. ¬P
4. P ∧ Elim: 2
5. ⊥ ⊥ Intro: 4, 3
6. ¬Q
7. Q ∧ Elim: 2
8. ⊥ ⊥ Intro: 7, 6
9. ⊥ ∨ Elim: 1, 3–5, 6–8
10. ¬(P ∧ Q) ¬ Intro: 2–9

5. Save your proof as Proof Strategy 1.


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Working backwards can be a very useful technique, since it often allows
you to replace a complex goal with simpler ones or to add new assumptions
from which to reason. But you should not think that the technique can be
applied mechanically, without giving it any thought. Each time you add new
intermediate goals, whether they are sentences or subproofs, it is essential
that you stop and check whether the new goals are actually reasonable. If
they don’t seem plausible, you should try something else.
Here’s a simple example of why this constant checking is so important.
Suppose you were asked to prove the sentence A ∨ C from the given sentence
(A ∧ B) ∨ (C ∧ D). Working backwards you might notice that if you could
prove A, from this you could infer the desired conclusion by the rule ∨ Intro.
Sketched in, your partial proof would look like this:
1. (A ∧ B) ∨ (C ∧ D)
2. A Rule?
3. A ∨ C ∨ Intro

The problem with this is that A does not follow from the given sentence,
and no amount of work will allow you to prove that it does. If you didn’t no-
tice this from the outset, you could spend a lot of time trying to construct an
impossible proof! But if you notice it, you can try a more promising approach.
(In this case, disjunction elimination is clearly the right way to go.) Work-
ing backwards, though a valuable tactic, is no replacement for good honest
thinking.


When you’re constructing a formal proof in Fitch, you can avoid trying
to prove an incorrect intermediate conclusion by checking the step with Taut
Con. In the above example, for instance, if you use Taut Con at the second
step, citing the premise as support, you would immediately find that it is
hopeless to try to prove A from the given premise.
Many of the problems in this book ask you to determine whether an argu-
ment is valid and to back up your answer with either a proof of consequence
or a counterexample, a proof of non-consequence. You will approach these
problems in much the same way we’ve described, first trying to understand
the claims involved and deciding whether the conclusion follows from the
premises. If you think the conclusion does not follow, or really don’t have a
good hunch one way or the other, try to find a counterexample. You may
succeed, in which case you will have shown the argument to be invalid. If you
cannot find a counterexample, trying to find one often gives rise to insights
about why the argument is valid, insights that can help you find the required
proof.
We can summarize our strategy advice with a seven step procedure for
approaching problems of this sort.

Remember

In assessing the validity of an argument, use the following method:

1. Understand what the sentences are saying.

2. Decide whether you think the conclusion follows from the premises.

3. If you think it does not follow, or are not sure, try to find a counterex-
ample.

4. If you think it does follow, try to give an informal proof.

5. If a formal proof is called for, use the informal proof to guide you in
finding one.

6. In giving consequence proofs, both formal and informal, don’t forget


the tactic of working backwards.

7. In working backwards, though, always check that your intermediate


goals are consequences of the available information.

One final warning: One of the nice things about Fitch is that it will give
you instant feedback about whether your proof is correct. This is a valuable


learning tool, but it can be misused. You should not use Fitch as a crutch,
trying out rule applications and letting Fitch tell you if they are correct. If
you do this, then you are not really learning the system F . One way to check
up on yourself is to write a formal proof out on paper every now and then. If
you try this and find you can’t do it without Fitch’s help, then you are using
Fitch as a crutch, not a learning tool.

Exercises

6.21 If you skipped the You try it section, go back and do it now. Submit the file Proof Strategy 1.

6.22 ➶   Give a formal proof mirroring the informal proof on page 136 of ¬(b = c)
         from the premises Cube(c) ∨ Dodec(c) and Tet(b). You may apply Ana Con
         to literals in establishing ⊥.

6.23 ✎   Give an informal proof that might have been used by the authors in
         constructing the formal proof shown on page 165.

In each of the following exercises, give an informal proof of the validity of the indicated argument. (You
should never use the principle you are proving in your informal proof; for example, in Exercise 6.24,
you should not use DeMorgan in your informal proof.) Then use Fitch to construct a formal proof that
mirrors your informal proof as much as possible. Turn in your informal proofs to your instructor and
submit the formal proof in the usual way.

6.24 ➶|✎   ¬(A ∨ B)
           -----
           ¬A ∧ ¬B

6.25 ➶|✎   ¬A ∧ ¬B
           -----
           ¬(A ∨ B)

6.26 ➶|✎   A ∨ (B ∧ C)
           ¬B ∨ ¬C ∨ D
           -----
           A ∨ D

6.27 ➶|✎   (A ∧ B) ∨ (C ∧ D)
           (B ∧ C) ∨ (D ∧ E)
           -----
           C ∨ (A ∧ E)

In each of the following exercises, you should assess whether the argument is valid. If it is, use Fitch to
construct a formal proof. You may use Ana Con but only involving literals and ⊥. If it is not valid,
use Tarski’s World to construct a counterexample.

6.28 ➶   Cube(c) ∨ Small(c)
         Dodec(c)
         -----
         Small(c)

6.29 ➶   Larger(a, b) ∨ Larger(a, c)
         Smaller(b, a) ∨ ¬Larger(a, c)
         -----
         Larger(a, b)


6.30 ➶   ¬(¬Cube(a) ∧ Cube(b))
         ¬(¬Cube(b) ∨ Cube(c))
         -----
         Cube(a)

6.31 ➶   Dodec(b) ∨ Cube(b)
         Small(b) ∨ Medium(b)
         ¬(Small(b) ∧ Cube(b))
         -----
         Medium(b) ∧ Dodec(b)

6.32 ➶   Dodec(b) ∨ Cube(b)
         Small(b) ∨ Medium(b)
         ¬Small(b) ∧ ¬Cube(b)
         -----
         Medium(b) ∧ Dodec(b)

Section 6.6
Proofs without premises

Not all proofs begin with the assumption of premises. This may seem odd,
but in fact it is how we use our deductive system to show that a sentence is
a logical truth. A sentence that can be proven without any premises at all is
necessarily true. Here’s a trivial example of such a proof, one that shows that
a = a ∧ b = b is a logical truth.

1. a = a = Intro
2. b = b = Intro
3. a = a ∧ b = b ∧ Intro: 1, 2

The first step of this proof is not a premise, but an application of = Intro.
You might think that any proof without premises would have to start with
this rule, since it is the only one that doesn’t have to cite any supporting steps
earlier in the proof. But in fact, this is not a very representative example of
such proofs. A more typical and interesting proof without premises is the
following, which shows that ¬(P ∧ ¬P) is a logical truth.


1. P ∧ ¬P
2. P ∧ Elim: 1
3. ¬P ∧ Elim: 1
4. ⊥ ⊥ Intro: 2, 3
5. ¬(P ∧ ¬P) ¬ Intro: 1–4

Notice that there are no assumptions above the first horizontal Fitch bar,
indicating that the main proof has no premises. The first step of the proof is
the subproof’s assumption. The subproof proceeds to derive a contradiction,
based on this assumption, thus allowing us to conclude that the negation
of the subproof’s assumption follows without the need of premises. In other
words, it is a logical truth.
When we want you to prove that a sentence is a logical truth, we will use
Fitch notation to indicate that you must prove this without assuming any
premises. For example the above proof shows that the following “argument”
is valid:

¬(P ∧ ¬P)

We close this section with the following reminder:

Remember

A proof without any premises shows that its conclusion is a logical truth.

Exercises

6.33 (Excluded Middle) Open the file Exercise 6.33. This contains an incomplete proof of the law
➶ of excluded middle, P ∨ ¬P. As it stands, the proof does not check out because it’s missing
some sentences, some support citations, and some rules. Fill in the missing pieces and submit
the completed proof as Proof 6.33. The proof shows that we can derive excluded middle in F
without any premises.


In the following exercises, assess whether the indicated sentence is a logical truth in the blocks language.
If so, use Fitch to construct a formal proof of the sentence from no premises (using Ana Con if
necessary, but only applied to literals). If not, use Tarski’s World to construct a counterexample. (A
counterexample here will simply be a world that makes the purported conclusion false.)

6.34 ➶   ¬(a = b ∧ Dodec(a) ∧ ¬Dodec(b))

6.35 ➶   ¬(a = b ∧ Dodec(a) ∧ Cube(b))

6.36 ➶   ¬(a = b ∧ b = c ∧ a ≠ c)

6.37 ➶   ¬(a ≠ b ∧ b ≠ c ∧ a = c)

6.38 ➶   ¬(SameRow(a, b) ∧ SameRow(b, c) ∧ FrontOf(c, a))

6.39 ➶   ¬(SameCol(a, b) ∧ SameCol(b, c) ∧ FrontOf(c, a))

The following sentences are all tautologies, and so should be provable in F . Although the informal proofs
are relatively simple, F makes fairly heavy going of them, since it forces us to prove even very obvious
steps. Use Fitch to construct formal proofs. You may want to build on the proof of Excluded Middle
given in Exercise 6.33. Alternatively, with the permission of your instructor, you may use Taut Con,
but only to justify an instance of Excluded Middle. The Grade Grinder will indicate whether you used
Taut Con or not.

6.40 ➶?   A ∨ ¬(A ∧ B)

6.41 ➶?   (A ∧ B) ∨ ¬A ∨ ¬B

6.42 ➶?   ¬A ∨ ¬(¬B ∧ (¬A ∨ B))

Chapter 7

Conditionals
There are many logically important constructions in English besides the Boolean
connectives. Even if we restrict ourselves to words and phrases that connect
two simple indicative sentences, we still find many that go beyond the Boolean
operators. For example, besides saying:
Max is home and Claire is at the library,

and

Max is home or Claire is at the library,


we can combine these same atomic sentences in the following ways, among
others:

Max is home if Claire is at the library,


Max is home only if Claire is at the library,
Max is home if and only if Claire is at the library,
Max is not home nor is Claire at the library,
Max is home unless Claire is at the library,
Max is home even though Claire is at the library,
Max is home in spite of the fact that Claire is at the library,
Max is home just in case Claire is at the library,
Max is home whenever Claire is at the library,
Max is home because Claire is at the library.
And these are just the tip of the iceberg. There are also constructions that
combine three atomic sentences to form new sentences:

If Max is home then Claire is at the library, otherwise Claire is


concerned,

and constructions that combine four:

If Max is home then Claire is at the library, otherwise Claire is


concerned unless Carl is with him,

and so forth.
Some of these constructions are truth functional, or have important truth-
functional uses, while others do not. Recall that a connective is truth func-
tional if the truth or falsity of compound statements made with it is completely


determined by the truth values of its constituents. Its meaning, in other words,
can be captured by a truth table.
Fol does not include connectives that are not truth functional. This is
not to say that such connectives aren’t important, but their meanings tend to
be vague and subject to conflicting interpretations. The decision to exclude
them is analogous to our assumption that all the predicates of fol have precise
interpretations.
Whether or not a connective in English can be, or always is, used truth
functionally is a tricky matter, about which we’ll have more to say later in
the chapter. Of the connectives listed above, though, there is one that is very
clearly not truth functional: the connective because. This is not hard to prove.

Proof: To show that the English connective because is not truth


functional, it suffices to find two possible circumstances in which the
sentence Max is home because Claire is at the library would have
different truth values, but in which its constituents, Max is home
and Claire is at the library, have the same truth values.
Why? Well, suppose that the meaning of because were captured by a
truth table. These two circumstances would correspond to the same
row of the truth table, since the atomic sentences have the same
values, but in one circumstance the sentence is true and in the other
it is false. So the purported truth table must be wrong, contrary to
our assumption.
For the first circumstance, imagine that Max learned that Claire
would be at the library, hence unable to feed Carl, and so rushed
home to feed him. For the second circumstance, imagine that Max
is at home, expecting Claire to be there too, but she unexpectedly
had to go to the library to get a reference book for a report. In both
circumstances the sentences Max is home and Claire is at the library
are true. But the compound sentence Max is home because Claire
is at the library is true in the first, false in the second.

The reason because is not truth functional is that it typically asserts some
sort of causal connection between the facts described by the constituent sen-
tences. This is why our compound sentence was false in the second situation:
the causal connection was missing.
In this chapter, we will introduce two new truth-functional connectives,
known as the material conditional and the material biconditional, both stan-
dard features of fol. It turns out that, as we’ll show at the end of the chapter,
these new symbols do not actually increase the expressive power of fol. They


do, however, make it much easier to say and prove certain things, and so are
valuable additions to the language.

Section 7.1
Material conditional symbol: →
The symbol → is used to combine two sentences P and Q to form a new
sentence P → Q, called a material conditional. The sentence P is called the
antecedent of the conditional, and Q is called the consequent of the conditional.
We will discuss the English counterparts of this symbol after we explain its
meaning.

Semantics and the game rule for the conditional


The sentence P → Q is true if and only if either P is false or Q is true (or
both). This can be summarized by the following truth table.
P   Q   P → Q
t   t     T
t   f     F
f   t     T
f   f     T

A second’s thought shows that P → Q is really just another way of saying


¬P ∨ Q. Tarski’s World in fact treats the former as an abbreviation of the
latter. In particular, in playing the game, Tarski’s World simply replaces a
statement of the form P → Q by its equivalent ¬P ∨ Q.

Remember

1. If P and Q are sentences of fol, then so is P → Q.

2. The sentence P → Q is false in only one case: if the antecedent P is


true and the consequent Q is false. Otherwise, it is true.

English forms of the material conditional


We can come fairly close to an adequate English rendering of the material
conditional P → Q with the sentence If P then Q. At any rate, it is clear that


this English conditional, like the material conditional, is false if P is true and
Q is false. Thus, we will translate, for example, If Max is home then Claire is
at the library as:
Home(max) → Library(claire)
In this course we will always translate if. . . then. . . using →, but there
are in fact many uses of the English expression that cannot be adequately
expressed with the material conditional. Consider, for example, the sentence,

If Max had been at home, then Carl would have been there too.

This sentence can be false even if Max is not in fact at home. (Suppose the
speaker mistakenly thought Carl was with Max, when in fact Claire had taken
him to the vet.) But the first-order sentence,

Home(max) → Home(carl)

is automatically true if Max is not at home. A material conditional with a


false antecedent is always true.
We have already seen that the connective because is not truth functional
since it expresses a causal connection between its antecedent and consequent.
The English construction if. . . then. . . can also be used to express a sort of
causal connection between antecedent and consequent. That’s what seems to
be going on in the above example. As a result, many uses of if. . . then. . .
in English just aren’t truth functional. The truth of the whole depends on
something more than the truth values of the parts; it depends on there being
some genuine connection between the subject matter of the antecedent and
the consequent.
Notice that we started with the truth table for → and decided to read
it as if. . . then. . . . What if we had started the other way around, looking for
a truth-functional approximation of the English conditional? Could we have
found a better truth table to go with if. . . then. . . ? The answer is clearly no.
While the material conditional is sometimes inadequate for capturing sub-
tleties of English conditionals, it is the best we can do with a truth-functional
connective. But these are controversial matters. We will take them up further
in Section 7.3.

Necessary and sufficient conditions

Other English expressions that we will translate using the material conditional
P → Q include: P only if Q, Q provided P, and Q if P. Notice in particular
that P only if Q is translated P → Q, while P if Q is translated Q → P. To


understand why, we need to think carefully about the difference between only
if and if.
In English, the expression only if introduces what is called a necessary
condition, a condition that must hold in order for something else to obtain.
For example, suppose your instructor announces at the beginning of the course
that you will pass the course only if you turn in all the homework assignments.
Your instructor is telling you that turning in the homework is a necessary
condition for passing: if you don’t do it, you won’t pass. But the instructor is
not guaranteeing that you will pass if you do turn in the homework: clearly,
there are other ways to fail, such as skipping the tests and getting all the
homework problems wrong.
The assertion that you will pass only if you turn in all the homework
really excludes just one possibility: that you pass but did not turn in all the
homework. In other words, P only if Q is false only when P is true and Q is
false, and this is just the case in which P → Q is false.
Contrast this with the assertion that you will pass the course if you turn
in all the homework. Now this is a very different kettle of fish. An instructor
who makes this promise is establishing a very lax grading policy: just turn in
the homework and you’ll get a passing grade, regardless of how well you do
on the homework or whether you even bother to take the tests!
In English, the expression if introduces what is called a sufficient condition,
one that guarantees that something else (in this case, passing the course) will
obtain. Because of this an English sentence P if Q must be translated as
Q → P. The sentence rules out Q being true (turning in the homework) and
P being false (failing the course).
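
To make the contrast concrete, suppose we abbreviate the two claims with the
purely illustrative atomic sentences Pass and TurnIn. Then the instructor’s
two announcements would be translated along these lines:

You will pass only if you turn in the homework:   Pass → TurnIn
You will pass if you turn in the homework:        TurnIn → Pass

The first leaves open that you might turn in the homework and still fail; the
second rules that out.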

Other uses of →

In fol we also use → in combination with ¬ to translate sentences of the form


Unless P, Q or Q unless P. Consider, for example, the sentence Claire is at the
library unless Max is home. Compare this with the sentence Claire is at the
library if Max is not home. While the focus of these two sentences is slightly
different, a moment’s thought shows that they are false in exactly the same
circumstances, namely, if Claire is not at the library, yet Max is not home
(say they are both at the movies). More generally, Unless P, Q or Q unless
P are true in the same circumstances as Q if not P, and so are translated as
¬P → Q. A good way to remember this is to whisper if not whenever you see
unless. If you find this translation of unless counterintuitive, be patient. We’ll
say more about it in Section 7.3.
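Applied to our example, the whispering trick gives the following translation of
Claire is at the library unless Max is home:

¬Home(max) → Library(claire)

which, as noted above, is false just in case Max is not home and yet Claire is
not at the library.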
It turns out that the most important use of → in first-order logic is not
in connection with the above expressions at all, but rather with universally


quantified sentences, sentences of the form All A’s are B’s and Every A is a
B. The analogous first-order sentences have the form:
For every object x (A(x) → B(x))
This says that any object you pick will either fail to be an A or else be a B.
We’ll learn about such sentences in Part II of this book.
There is one other thing we should say about the material conditional,
which helps explain its importance in logic. The conditional allows us to reduce
the notion of logical consequence to that of logical truth, at least in cases
where we have only finitely many premises. We said that a sentence Q is a
consequence of premises P1, . . . , Pn if and only if it is impossible for all the
premises to be true while the conclusion is false. Another way of saying this
is that it is impossible for the single sentence (P1 ∧ . . . ∧ Pn ) to be true while
Q is false.
Given the meaning of →, we see that Q is a consequence of P1 , . . . , Pn if
and only if it is impossible for the single sentence

(P1 ∧ . . . ∧ Pn ) → Q
to be false, that is, just in case this conditional sentence is a logical truth. Thus,
one way to verify the tautological validity of an argument in propositional
logic, at least in theory, is to construct a truth table for this sentence and see
whether the final column contains only true. In practice, this method is not
very practical, since the truth tables quickly get too large to be manageable.
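As a quick illustration, consider the argument with premises A and A → B and
conclusion B. By the above observation, this argument is valid just in case the
sentence (A ∧ (A → B)) → B is a logical truth, and a small truth table confirms
that its final column contains only true:

A   B   (A ∧ (A → B)) → B
t   t           T
t   f           T
f   t           T
f   f           T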

Remember

1. The following English constructions are all translated P → Q: If P


then Q; Q if P; P only if Q; and Provided P, Q.

2. Unless P, Q and Q unless P are translated ¬P → Q.

3. Q is a logical consequence of P1 , . . . , Pn if and only if the sentence


(P1 ∧ . . . ∧ Pn ) → Q is a logical truth.

Section 7.2
Biconditional symbol: ↔
Our final connective is called the material biconditional symbol. Given any
sentences P and Q there is another sentence formed by connecting these by


means of the biconditional: P ↔ Q. A sentence of the form P ↔ Q is true if


and only if P and Q have the same truth value, that is, either they are both
true or both false. In English this is commonly expressed using the expression
if and only if. So, for example, the sentence Max is home if and only if Claire
is at the library would be translated as:
Home(max) ↔ Library(claire)
Mathematicians and logicians often write “iff” as an abbreviation for “if
and only if.” Upon encountering this, students and typesetters generally con-
clude it’s a spelling mistake, to the consternation of the authors. But in fact it
is shorthand for the biconditional. Mathematicians also use “just in case” as a
way of expressing the biconditional. Thus the mathematical claims n is even
iff n² is even, and n is even just in case n² is even, would both be translated
as:
Even(n) ↔ Even(n²)
This use of “just in case” is, we admit, one of the more bizarre quirks of
mathematicians, having nothing much to do with the ordinary meaning of
this phrase. In this book, we use the phrase in the mathematician’s sense,
just in case you were wondering.
An important fact about the biconditional symbol is that two sentences
P and Q are logically equivalent if and only if the biconditional formed from
them, P ↔ Q, is a logical truth. Another way of putting this is to say that
P ⇔ Q is true if and only if the fol sentence P ↔ Q is logically necessary.
So, for example, we can express one of the DeMorgan laws by saying that the
following sentence is a logical truth:
¬(P ∨ Q) ↔ (¬P ∧ ¬Q)
This observation makes it tempting to confuse the symbols ↔ and ⇔. This
temptation must be resisted. The former is a truth-functional connective of
fol, while the latter is an abbreviation of “is logically equivalent to.” It is
not a truth-functional connective and is not an expression of fol.

Semantics and the game rule for ↔


The semantics for the biconditional is given by the following truth table.
P   Q   P ↔ Q
t   t     T
t   f     F
f   t     F
f   f     T


Notice that the final column of this truth table is the same as that for
(P → Q) ∧ (Q → P). (See Exercise 7.3 below.) For this reason, logicians often
treat a sentence of the form P ↔ Q as an abbreviation of (P → Q) ∧ (Q → P).
Tarski’s World also uses this abbreviation in the game. Thus, the game rule
for P ↔ Q is simple. Whenever a sentence of this form is encountered, it is
replaced by (P → Q) ∧ (Q → P).

Remember

1. If P and Q are sentences of fol, then so is P ↔ Q.

2. The sentence P ↔ Q is true if and only if P and Q have the same truth
value.

Exercises

For the following exercises, use Boole to determine whether the indicated pairs of sentences are tauto-
logically equivalent. Feel free to have Boole build your reference columns and fill them out for you. Don’t
forget to indicate your assessment.

7.1 ➶   A → B and ¬A ∨ B.

7.2 ➶   ¬(A → B) and A ∧ ¬B.

7.3 ➶   A ↔ B and (A → B) ∧ (B → A).

7.4 ➶   A ↔ B and (A ∧ B) ∨ (¬A ∧ ¬B).

7.5 ➶   (A ∧ B) → C and A → (B ∨ C).

7.6 ➶   (A ∧ B) → C and A → (B → C).

7.7 ➶   A → (B → (C → D)) and ((A → B) → C) → D.

7.8 ➶   A ↔ (B ↔ (C ↔ D)) and ((A ↔ B) ↔ C) ↔ D.

7.9 (Just in case) Prove that the ordinary (nonmathematical) use of just in case does not express
✎ a truth-functional connective. Use as your example the sentence Max went home just in case
Carl was hungry.

7.10 (Evaluating sentences in a world) Using Tarski’s World, run through Abelard’s Sentences, eval-
➶ uating them in Wittgenstein’s World. If you make a mistake, play the game to see where you
have gone wrong. Once you have gone through all the sentences, go back and make all the false
ones true by changing one or more names used in the sentence. Submit your edited sentences
as Sentences 7.10.


7.11 (Describing a world) Launch Tarski’s World and choose Hide Labels from the Display menu.
➶ Then, with the labels hidden, open Montague’s World. In this world, each object has a name,
and no object has more than one name. Start a new sentence file where you will describe some
features of this world. Check each of your sentences to see that it is indeed a sentence and that
it is true in this world.
1. Notice that if c is a tetrahedron, then a is not a tetrahedron. (Remember, in this world
each object has exactly one name.) Use your first sentence to express this fact.
2. However, note that the same is true of b and d. That is, if b is a tetrahedron, then d
isn’t. Use your second sentence to express this.
3. Finally, observe that if b is a tetrahedron, then c isn’t. Express this.
4. Notice that if a is a cube and b is a dodecahedron, then a is to the left of b. Use your
next sentence to express this fact.
5. Use your next sentence to express the fact that if b and c are both cubes, then they
are in the same row but not in the same column.
6. Use your next sentence to express the fact that b is a tetrahedron only if it is small.
[Check this sentence carefully. If your sentence evaluates as false, then you’ve got the
arrow pointing in the wrong direction.]
7. Next, express the fact that if a and d are both cubes, then one is to the left of the
other. [Note: You will need to use a disjunction to express the fact that one is to the
left of the other.]
8. Notice that d is a cube if and only if it is either medium or large. Express this.
9. Observe that if b is neither to the right nor left of d, then one of them is a tetrahedron.
Express this observation.
10. Finally, express the fact that b and c are the same size if and only if one is a tetrahedron
and the other is a dodecahedron.

Save your sentences as Sentences 7.11. Now choose Show Labels from the Display menu.
Verify that all of your sentences are indeed true. When verifying the first three, pay particular
attention to the truth values of the various constituents. Notice that sometimes the conditional
has a false antecedent and sometimes a true consequent. What it never has is a true antecedent
and a false consequent. In each of these three cases, play the game committed to true. Make
sure you understand why the game proceeds as it does.

7.12 (Translation) Translate the following English sentences into fol. Your translations will use all
➶ of the propositional connectives.
1. If a is a tetrahedron then it is in front of d.
2. a is to the left of or right of d only if it’s a cube.
3. c is between either a and e or a and d.
4. c is to the right of a, provided it (i.e., c) is small.


5. c is to the right of d only if b is to the right of c and left of e.


6. If e is a tetrahedron, then it’s to the right of b if and only if it is also in front of b.
7. If b is a dodecahedron, then if it isn’t in front of d then it isn’t in back of d either.
8. c is in back of a but in front of e.
9. e is in front of d unless it (i.e., e) is a large tetrahedron.
10. At least one of a, c, and e is a cube.
11. a is a tetrahedron only if it is in front of b.
12. b is larger than both a and e.
13. a and e are both larger than c, but neither is large.
14. d is the same shape as b only if they are the same size.
15. a is large if and only if it’s a cube.
16. b is a cube unless c is a tetrahedron.
17. If e isn’t a cube, either b or d is large.
18. b or d is a cube if either a or c is a tetrahedron.
19. a is large just in case d is small.
20. a is large just in case e is.

Save your list of sentences as Sentences 7.12. Before submitting the file, you should complete
Exercise 7.13.

7.13 (Checking your translations) Open Bolzano’s World. Notice that all the English sentences from
➶ Exercise 7.12 are true in this world. Thus, if your translations are accurate, they will also be
true in this world. Check to see that they are. If you made any mistakes, go back and fix them.
Remember that even if one of your sentences comes out true in Bolzano’s World, it does not
mean that it is a proper translation of the corresponding English sentence. If the translation is
correct, it will have the same truth value as the English sentence in every world. So let’s check
your translations in some other worlds.
Open Wittgenstein’s World. Here we see that the English sentences 3, 5, 9, 11, 12, 13, 14,
and 20 are false, while the rest are true. Check to see that the same holds of your translations.
If not, correct your translations (and make sure they are still true in Bolzano’s World).
Next open Leibniz’s World. Here half the English sentences are true (1, 2, 4, 6, 7, 10, 11, 14,
18, and 20) and half false (3, 5, 8, 9, 12, 13, 15, 16, 17, and 19). Check to see that the same
holds of your translations. If not, correct your translations.
Finally, open Venn’s World. In this world, all of the English sentences are false. Check to
see that the same holds of your translations and correct them if necessary.
There is no need to submit any files for this exercise, but don’t forget to submit Sentences
7.12.


7.14 (Figuring out sizes and shapes) Open Euler’s Sentences. The nine sentences in this file uniquely
➶ determine the shapes and sizes of blocks a, b, and c. See if you can figure out the solution just
by thinking about what the sentences mean and using the informal methods of proof you’ve
already studied. When you’ve figured it out, submit a world in which all of the sentences are
true.

7.15 (More sizes and shapes) Start a new sentence file and use it to translate the following English
➶? sentences.
1. If a is a tetrahedron, then b is also a tetrahedron.
2. c is a tetrahedron if b is.
3. a and c are both tetrahedra only if at least one of them is large.
4. a is a tetrahedron but c isn’t large.
5. If c is small and d is a dodecahedron, then d is neither large nor small.
6. c is medium only if none of d, e, and f are cubes.
7. d is a small dodecahedron unless a is small.
8. e is large just in case it is a fact that d is large if and only if f is.
9. d and e are the same size.
10. d and e are the same shape.
11. f is either a cube or a dodecahedron, if it is large.
12. c is larger than e only if b is larger than c.

Save these sentences as Sentences 7.15. Then see if you can figure out the sizes and shapes of
a, b, c, d, e, and f. You will find it helpful to approach this problem systematically, filling in
the following table as you reason about the sentences:

a b c d e f
Shape:
Size:

When you have filled in the table, use it to guide you in building a world in which the twelve
English sentences are true. Verify that your translations are true in this world as well. Submit
both your sentence file and your world file.

7.16 (Name that object) Open Sherlock’s World and Sherlock’s Sentences. You will notice that none
➶? of the objects in this world has a name. Your task is to assign the names a, b, and c in such a
way that all the sentences in the list come out true. Submit the modified world as World 7.16.

7.17 (Building a world) Open Boolos’ Sentences. Submit a world in which all five sentences in this
➶?? file are true.

7.18 Using the symbols introduced in Table 1.2, page 30, translate the following sentences into fol.
➶ Submit your translations as a sentence file.
1. If Claire gave Folly to Max at 2:03 then Folly belonged to her at 2:00 and to him at
2:05.
2. Max fed Folly at 2:00 pm, but if he gave her to Claire then, Folly was not hungry five
minutes later.
3. If neither Max nor Claire fed Folly at 2:00, then she was hungry.
4. Max was angry at 2:05 only if Claire fed either Folly or Scruffy five minutes before.
5. Max is a student if and only if Claire is not.

7.19 Using Table 1.2 on page 30, translate the following into colloquial English.
✎ 1. (Fed(max, folly, 2:00) ∨ Fed(claire, folly, 2:00)) → Pet(folly)
2. Fed(max, folly, 2:30) ↔ Fed(claire, scruffy, 2:00)
3. ¬Hungry(folly, 2:00) → Hungry(scruffy, 2:00)
4. ¬(Hungry(folly, 2:00) → Hungry(scruffy, 2:00))

7.20 Translate the following into fol as best you can. Explain any predicates and function symbols
✎ you use, and any shortcomings in your first-order translations.
1. If Abe can fool Stephen, surely he can fool Ulysses.
2. If you scratch my back, I’ll scratch yours.
3. France will sign the treaty only if Germany does.
4. If Tweedledee gets a party, so will Tweedledum, and vice versa.
5. If John and Mary went to the concert together, they must like each other.

7.21 (The monkey principle) One of the stranger uses of if. . . then. . . in English is as a roundabout
➶|✎ way to express negation. Suppose a friend of yours says If Keanu Reeves is a great actor, then
I’m a monkey’s uncle. This is simply a way of denying the antecedent of the conditional, in
this case that Keanu Reeves is a great actor. Explain why this works. Your explanation should
appeal to the truth table for →, but it will have to go beyond that. Turn in your explanation
and also submit a Boole table showing that A → ⊥ is equivalent to ¬A.

Section 7.3
Conversational implicature

In translating from English to fol, there are many problematic cases. For
example, many students resist translating a sentence like Max is home unless
Claire is at the library as:

¬Library(claire) → Home(max)

These students usually think that the meaning of this English sentence would
be more accurately captured by the biconditional claim:

¬Library(claire) ↔ Home(max)

The reason the latter seems natural is that when we assert the English sen-
tence, there is some suggestion that if Claire is at the library, then Max is
not at home.
To resolve problematic cases like this, it is often useful to distinguish be-
tween the truth conditions of a sentence, and other things that in some sense
follow from the assertion of the sentence. To take an obvious case, suppose
someone asserts the sentence It is a lovely day. One thing you may conclude
from this is that the speaker understands English. This is not part of what
the speaker said, however, but part of what can be inferred from his saying it.
The truth or falsity of the claim has nothing to do with the speaker’s linguistic
abilities.
The philosopher H. P. Grice developed a theory of what he called con-
versational implicature to help sort out the genuine truth conditions of a
sentence from other conclusions we may draw from its assertion. These other
conclusions are what Grice called implicatures. We won’t go into this theory
in detail, but knowing a little bit about it can be a great aid in translation,
so we present an introduction to Grice’s theory.
Suppose we have an English sentence S that someone asserts, and we
are trying to decide whether a particular conclusion we draw is part of the
meaning of S or, instead, one of its implicatures. Grice pointed out that if
the conclusion is part of the meaning, then it cannot be “cancelled” by some
further elaboration by the speaker. Thus, for example, the conclusion that
Max is home is part of the meaning of an assertion of Max and Claire are
home, so we can’t cancel this conclusion by saying Max and Claire are home,
but Max isn’t home. We would simply be contradicting ourselves.
Contrast this with the speaker who said It is a lovely day. Suppose he
had gone on to say, perhaps reading haltingly from a phrase book: Do you
speak any French? In that case, the suggestion that the speaker understands
English is effectively cancelled.
A more illuminating use of Grice’s cancellability test concerns the expres-
sion either. . . or. . . . Recall that we claimed that this should be translated into
fol as an inclusive disjunction, using ∨. We can now see that the suggestion
that this phrase expresses exclusive disjunction is generally just a conversa-
tional implicature. For example, if the waiter says You can have either soup or
salad, there is a strong suggestion that you cannot have both. But it is clear
that this is just an implicature, since the waiter could, without contradicting
himself, go on to say And you can have both, if you want. Had the original
either. . . or. . . expressed the exclusive disjunction, this would be like saying
You can have soup or salad but not both, and you can have both, if you want.
Let’s go back now to the sentence Max is at home unless Claire is at the
library. Earlier we denied that the correct translation was

¬Library(claire) ↔ Home(max)

which is equivalent to the conjunction of the correct translation

¬Library(claire) → Home(max)

with the additional claim

Library(claire) → ¬Home(max)

Is this second claim part of the meaning of the English sentence, or is it
simply a conversational implicature? Grice’s cancellability test shows that it
is just an implicature. After all, it makes perfectly good sense for the speaker
to go on to say On the other hand, if Claire is at the library, I have no idea
where Max is. This elaboration takes away the suggestion that if Claire is at
the library, then Max isn’t at home.
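
The equivalence used a moment ago, that the biconditional translation amounts to the correct conditional plus this additional claim, is easy to verify by checking all four combinations of truth values. Here is a small sketch in Python (our own illustration, not part of the software that accompanies this book); the names L and H simply abbreviate Library(claire) and Home(max).

    from itertools import product

    def implies(p, q):                  # the truth table for "if p then q"
        return (not p) or q

    for L, H in product([True, False], repeat=2):
        biconditional = ((not L) == H)                               # ¬L ↔ H
        two_conditionals = implies(not L, H) and implies(L, not H)   # (¬L → H) ∧ (L → ¬H)
        assert biconditional == two_conditionals

Since the assertion never fails, the biconditional and the pair of conditionals agree in every row.
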
Another common implicature arises with the phrase only if, which people
often construe as the stronger if and only if. For example, suppose a father
tells his son, You can have dessert only if you eat all your lima beans. We’ve
seen that this is not a guarantee that if the child does eat his lima beans he
will get dessert, since only if introduces a necessary, not sufficient, condition.
Still it is clear that the father’s assertion suggests that, other things equal, the
child can have dessert if he eats the dreaded beans. But again, the suggestion
can be cancelled. Suppose the father goes on to say: If you eat the beans, I’ll
check to see if there’s any ice cream left. This cancels the implication that
dessert is guaranteed.

Remember

If the assertion of a sentence carries with it a suggestion that could be
cancelled (without contradiction) by further elaboration by the speaker,
then the suggestion is a conversational implicature, not part of the content
of the original claim.

Exercises

7.22 Suppose Claire asserts the sentence Max managed to get Carl home. Does this logically imply,
✎ or just conversationally implicate, that it was hard to get Carl home? Justify your answer.

7.23 Suppose Max asserts the sentence We can walk to the movie or we can drive. Does his assertion
✎ logically imply, or merely implicate, that we cannot both walk and drive? How does this differ
from the soup or salad example?

7.24 Consider the sentence Max is home in spite of the fact that Claire is at the library. What would
✎? be the best translation of this sentence into fol? Clearly, whether you would be inclined to use
this sentence is not determined simply by the truth values of the atomic sentences Max is home
and Claire is at the library. This may be because in spite of the fact is, like because, a non-truth-
functional connective, or because it carries, like but, additional conversational implicatures. (See
our discussion of because earlier in this chapter and the discussion of but in Chapter 3.) Which
explanation do you think is right? Justify your answer.

Section 7.4
Truth-functional completeness
We now have at our disposal five truth-functional connectives, one unary
(¬), and four binary (∧, ∨, →, ↔). Should we introduce any more? Though
we’ve seen a few English expressions that can’t be expressed in fol, like
because, these have not been truth functional. We’ve also run into others, like
neither. . . nor. . . , that are truth functional, but which we can easily express
using the existing connectives of fol.
The question we will address in the current section is whether there are any
truth-functional connectives that we need to add to our language. Is it possible
that we might encounter an English construction that is truth functional but
which we cannot express using the symbols we have introduced so far? If so,
this would be an unfortunate limitation of our language.
How can we possibly answer this question? Well, let’s begin by thinking
about binary connectives, those that apply to two sentences to make a third.
How many binary truth-functional connectives are possible? If we think about
the possible truth tables for such connectives, we can compute the total num-
ber. First, since we are dealing with binary connectives, there are four rows
in each table. Each row can be assigned either true or false, so there are
2^4 = 16 ways of doing this. For example, here is the table that captures the
truth function expressed by neither. . . nor. . . .

P Q Neither P nor Q
t t F
t f F
f t F
f f T
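
Tables like this one can be enumerated mechanically. The following short Python sketch (our own illustration, not one of the programs that come with this text) generates every possible final column, confirming the count of 2^4 = 16 and picking out the column for neither P nor Q:

    from itertools import product

    rows = list(product([True, False], repeat=2))       # the four rows (P, Q)
    columns = list(product([True, False], repeat=4))    # every way of filling in the final column
    print(len(columns))                                  # 16, that is, 2^4
    print([not (p or q) for p, q in rows])               # the column for neither P nor Q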

Since there are only 16 different ways of filling in the final column of such
a table, there are only 16 binary truth functions, and so 16 possible binary
truth-functional connectives. We could look at each of these tables in turn
and show how to express the truth function with existing connectives, just as
we captured neither P nor Q with ¬(P ∨ Q). But there is a more general and
systematic way to show this.
Suppose we are thinking about introducing a binary truth-functional con-
nective, say ?. It will have a truth table like the following, with one of the
values true or false in each row.
P Q P?Q
t t 1st value
t f 2nd value
f t 3rd value
f f 4th value
If all four values are false, then we can clearly express P ? Q with the
sentence P ∧ ¬P ∧ Q ∧ ¬Q. So suppose at least one of the values is true.
How can we express P ? Q? One way would be this. Let C1 , . . . , C4 stand for
the following four conjunctions:

C1 = (P ∧ Q)
C2 = (P ∧ ¬Q)
C3 = (¬P ∧ Q)
C4 = (¬P ∧ ¬Q)

Notice that sentence C1 will be true if the truth values of P and Q are as
specified in the first row of the truth table, and that if the values of P and Q
are anything else, then C1 will be false. Similarly with C2 and the second row
of the truth table, and so forth. To build a sentence that gets the value true
in exactly the same rows as P ? Q, all we need do is take the disjunction of the
appropriate C’s. For example, if P ? Q is true in rows 2 and 4, then C2 ∨ C4 is
equivalent to this sentence.
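
The recipe just described is completely mechanical, and it may help to see it written out as a short program. The sketch below (Python; our own illustration, not part of the book's software) takes the final column of a binary truth table and assembles the disjunction of the appropriate C's, using not, and, and or in place of ¬, ∧, and ∨.

    ROWS = [(True, True), (True, False), (False, True), (False, False)]
    C = ["(P and Q)", "(P and not Q)", "(not P and Q)", "(not P and not Q)"]   # C1, ..., C4

    def dnf(column):
        # column gives the desired truth value for each of the four rows
        disjuncts = [C[i] for i, wanted in enumerate(column) if wanted]
        return " or ".join(disjuncts) if disjuncts else "(P and not P and Q and not Q)"

    target = [False, True, False, True]          # true in rows 2 and 4 only
    formula = dnf(target)                        # (P and not Q) or (not P and not Q)
    for (P, Q), wanted in zip(ROWS, target):
        assert eval(formula) == wanted           # the formula has exactly the intended table
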
What this shows is that all binary truth functions are already expressible
using just the connectives ¬, ∧, and ∨. In fact, it shows that they can be ex-
pressed using sentences in disjunctive normal form, as described in Chapter 4.

It’s easy to see that a similar procedure allows us to express all possible
unary truth functions. A unary connective, say \, will have a truth table like
this:
P \P
t 1st value
f 2nd value

If both of the values under \P are false, then we can express it using the
sentence P ∧ ¬P. Otherwise, we can express \P as a disjunction of one or
more of the following:
C1 = P
C2 = ¬P
C1 will be included as one of the disjuncts if the first value is true, and C2
will be included if the second value is true. (Of course, in only one case will
there be more than one disjunct.)
Once we understand how this procedure is working, we see that it will
apply equally well to truth-functional connectives of any arity. Suppose, for
example, that we want to express the ternary truth-functional connective
defined by the following truth table:
P Q R ♣(P, Q, R)
t t t T
t t f T
t f t F
t f f F
f t t T
f t f F
f f t T
f f f F

A fairly good English translation of ♣(P, Q, R) is if P then Q, else R. When
we apply the above method to express this connective, we get the following
sentence:

(P ∧ Q ∧ R) ∨ (P ∧ Q ∧ ¬R) ∨ (¬P ∧ Q ∧ R) ∨ (¬P ∧ ¬Q ∧ R)
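
You can confirm that this sentence really does capture if P then Q, else R by comparing the two on all eight rows. Here is one way to do so in Python (our own illustration, not part of the book's software):

    from itertools import product

    for P, Q, R in product([True, False], repeat=3):
        intended = (Q if P else R)               # "if P then Q, else R"
        formula = ((P and Q and R) or (P and Q and not R)
                   or (not P and Q and R) or (not P and not Q and R))
        assert intended == formula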

More generally, if ♠ expresses an n-ary connective, then we can use this
procedure to get a sentence that is tautologically equivalent to ♠(P1, . . . , Pn).
First, we define the conjunctions C1, . . . , C2^n that correspond to the 2^n rows
of the truth table. We then form a disjunction D that contains Ck as a disjunct
if and only if the kth row of the truth table has the value true. (If all rows
contain false, then we let D be P1 ∧ ¬P1.) For reasons we’ve already noted,
this disjunction is tautologically equivalent to ♠(P1, . . . , Pn).
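
Since the procedure is the same for every arity, it too can be written as a short program. The following Python sketch (our own illustration, not one of the programs that accompany this text) builds the disjunction D for an arbitrary truth table, here applied to the connective ♣ discussed above:

    from itertools import product

    def dnf(table, names):
        # table maps each row (a tuple of truth values, one per name)
        # to the desired output; the result uses only not, and, and or
        disjuncts = []
        for row in product([True, False], repeat=len(names)):
            if table[row]:
                literals = [n if v else "not " + n for n, v in zip(names, row)]
                disjuncts.append("(" + " and ".join(literals) + ")")
        if not disjuncts:                                   # every row is false
            return "(" + names[0] + " and not " + names[0] + ")"
        return " or ".join(disjuncts)

    club = {row: (row[1] if row[0] else row[2])             # if P then Q, else R
            for row in product([True, False], repeat=3)}
    print(dnf(club, ["P", "Q", "R"]))

Applied to the table for ♣, this prints exactly the four-disjunct sentence displayed above, written with and, or, and not.
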
We have just sketched a proof that any truth function, of any arity what-
soever, can be expressed using just the Boolean connectives ¬, ∧, and ∨. This
is a sufficiently interesting fact that it deserves to be highlighted. We’ll say
that a set of connectives is truth-functionally complete if the connectives in
the set allow us to express any truth function. We can then highlight this
important fact as a theorem:
Theorem The Boolean connectives ¬, ∧, and ∨ are truth-functionally com-
plete.
A theorem is just a conclusion the author finds particularly interesting or is
particularly proud of having proven.
There are other collections of operators that are truth-functionally com-
plete. In fact, we could get rid of either ∧ or ∨ without losing truth-functional
completeness. For example, P ∨ Q can be expressed using just ¬ and ∧ as
follows:
¬(¬P ∧ ¬Q)
What this means is that we could get rid of all the occurrences of ∨ in our
sentences in favor of ¬ and ∧. Alternatively, we could get rid of ∧ in favor
of ¬ and ∨, as you’ll see in Exercise 7.25. Of course either way, the resulting
sentences would be much longer and harder to understand.
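
If you want to check this particular equivalence yourself, a brute-force test over the four rows suffices (Python; our own illustration, not part of the book's software):

    from itertools import product

    for P, Q in product([True, False], repeat=2):
        assert (P or Q) == (not (not P and not Q))
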
We could in fact be even more economical in our choice of connectives.
Suppose we used P ↓ Q to express neither P nor Q. It turns out that the
connective ↓ is, all by itself, truth-functionally complete. To see this, notice
that ¬P can be expressed as:

P↓P

which says neither P nor P, and P ∧ Q can be expressed as:

(P ↓ P) ↓ (Q ↓ Q)

which says neither not P nor not Q. Thus in theory we could use just this one
truth-functional connective and express anything we can now express using
our current five.
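
Both equivalences behind this claim can be verified by brute force as well. In the Python sketch below (our own illustration, not part of the book's software), the function nor plays the role of ↓:

    from itertools import product

    def nor(p, q):                          # P ↓ Q: neither P nor Q
        return not (p or q)

    for P, Q in product([True, False], repeat=2):
        assert nor(P, P) == (not P)                         # expresses ¬P
        assert nor(nor(P, P), nor(Q, Q)) == (P and Q)       # expresses P ∧ Q
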
There are two disadvantages to economizing on connectives. First, as we’ve
already said, the fewer connectives we have, the harder it is to understand our