
Logic 2: Modal Logics – Week 1


Propositional Logic: A Recap
⋅ Vocabulary of Propositional Logic (PL):

⋅ Sentence Letters: p, q, r, ... (Since there is an infinite number of sentence letters, if we run out of letters, we will simply use superscripts: p1, p2, p3, ..., pn.)
⋅ Logical Connectives: ¬, ∨
⋅ Parentheses: (, )

⋅ Syntax of PL:

(a) Every sentence letter is a well-formed formula (wff).


(b) If f and y are wffs, then (f ∨ y) and ¬f are also wffs.
(c) Nothing else is a wff.

⋅ Additional Connectives: We define the additional standard connectives
in terms of '¬' and '∨'. (Keeping the primitive connectives to a
minimum is beneficial if various metalogical results such as soundness
and completeness are to be proved. However, since we are not going to
do this in this class, this is purely for illustrative purposes.)

⋅ f ∧ y =def ¬(¬f ∨ ¬y)
⋅ f → y =def ¬f ∨ y
⋅ f ↔ y =def (f → y) ∧ (y → f)

⋅ Semantics of PL:

In PL, meaning is explicated in terms of truth. In general, a (total)
assignment of truth values to sentence letters is referred to as an
interpretation (or a model). (That is, a model for PL consists of a
domain of truth values and, where S is the set of all propositional
variables, a (total) function I: S → {0,1}. This function is
standardly called the interpretation function.)

definition of an interpretation:
A PL-interpretation is a total function I that assigns to each
sentence letter the value 1 or 0.

⋅ So, interpretations may look as follows:

⋅ I(p) = 1 ⋅ I ′ (p) = 0 ⋅ I ′′ (p) = 0


⋅ I(q) = 0 ⋅ I ′ (q) = 1 ⋅ I ′′ (q) = 1
⋅ I(r) = 1 ⋅ I ′ (r) = 1 ⋅ I ′′ (r) = 0
... ... ...
⋅ I(pn ) = 0 ⋅ I ′ (pn ) = 0 ⋅ I ′′ (pn ) = 0

⋅ The connectives in PL are truth functional, cf. the truth tables


below, so given an interpretation I, the truth value of any complex
wff is determined by the truth tables for the relevant connectives.

¬:   ¬1 = 0,  ¬0 = 1
∨:   1 ∨ 1 = 1,  1 ∨ 0 = 1,  0 ∨ 1 = 1,  0 ∨ 0 = 0

⋅ However, for full generality, we include a formal definition of a


function, a valuation function V, that assigns truth values to com-
plex formulas as a function of the truth values of the sentence
letters, viz. as a function of the values determined by the interpre-
tation function I.

⋅ definition of a valuation
For any PL-interpretation I, the valuation function V is the
function that assigns to each wff either 1 or 0, and which is such
that for any sentence letter a and wff f and y:
⋅ V I (a) = I(a)
⋅ V I (¬f) = 1 iff V I (f) = 0
⋅ V I (f ∨ y) = 1 iff V I (f) = 1 or V I (y) = 1
⋅ And accordingly ...
⋅ V I (f ∧ y) = 1 iff V I (f) = 1 and V I (y) = 1
⋅ V I (f → y) = 1 iff V I (f) = 0 or V I (y) = 1
⋅ V I (f ↔ y) = 1 iff V I (f) = V I (y)
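⋅ To make the recursive definition concrete, here is a minimal Python
sketch of an interpretation plus the valuation function for the
primitive '¬'/'∨' fragment. The tuple encoding of wffs and the function
name are illustrative choices, not part of the official definition.

# A minimal sketch of PL valuation (illustrative names; wffs are nested tuples).
# Sentence letters are strings; complex wffs are ('not', f) and ('or', f, g).

def valuate(wff, interp):
    """Return 1 or 0 for wff under the interpretation interp (a dict from letters to 0/1)."""
    if isinstance(wff, str):                       # sentence letter
        return interp[wff]
    op = wff[0]
    if op == 'not':
        return 1 - valuate(wff[1], interp)
    if op == 'or':
        return max(valuate(wff[1], interp), valuate(wff[2], interp))
    raise ValueError(f"unknown connective: {op}")

# Example: V_I(¬p ∨ q) under I(p) = 1, I(q) = 0 is 0.
print(valuate(('or', ('not', 'p'), 'q'), {'p': 1, 'q': 0}))  # -> 0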

⋅ Next, we define validity in PL:

definition of validity in pl
A wff f is PL-valid iff for every PL-interpretation I, V I (f) = 1.
(That a wff f is PL-valid is typically written as '⊨pl f'. The PL
subscript is ignored when it is obvious that the language in question
is PL.)

⋅ And logical consequence:

logical consequence in pl
A wff f is a PL-logical consequence of a set of wffs G iff for every
PL-interpretation I, if V I (g) = 1 for each g such that g ∈ G, then
V I (f) = 1. (That a wff f is a logical consequence of a set of wffs G
is typically written as 'G ⊨pl f'. Again, the PL subscript is ignored
when it is obvious that the language in question is PL.)

⋅ That is, a wff f is a PL-logical consequence of a set of wffs G iff
whenever all the wffs in G are true, f is true too. In other words,
on this (semantic) conception of logical consequence, logical con-
sequence is truth preservation (while keeping the meaning of the
connectives fixed).

⋅ Note that if f is PL-valid, it is a PL-logical consequence of any G,
including when G = ∅, i.e. when G is empty.
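⋅ Since a wff contains only finitely many sentence letters, PL-validity
can be checked by brute force over all relevant interpretations, exactly
as the definition says. A sketch, reusing the valuate function above
(names are illustrative):

# Brute-force PL-validity check.
from itertools import product

def letters(wff):
    """Collect the sentence letters occurring in a wff."""
    if isinstance(wff, str):
        return {wff}
    return set().union(*(letters(sub) for sub in wff[1:]))

def is_valid(wff):
    """True iff the wff gets value 1 under every interpretation of its letters."""
    ls = sorted(letters(wff))
    return all(valuate(wff, dict(zip(ls, vals))) == 1
               for vals in product([0, 1], repeat=len(ls)))

print(is_valid(('or', ('not', 'p'), 'p')))   # ¬p ∨ p is valid -> True
print(is_valid(('or', ('not', 'p'), 'q')))   # ¬p ∨ q is not   -> False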

⋅ Establishing Validity: Semantic Trees/Tableaux


There are several methods for checking whether a formula is valid.
For example, one method is truth tables. In a truth table it is easily
checked whether a formula is true under every interpretation (i.e.
in every row for the main connective). To illustrate, consider the
wff in (1).

(1) (p → q) → (¬q → ¬p)


⋅ This wff is valid, and this can be demonstrated by constructing a
truth table. (If we were being absolutely precise, the formulas
containing negation in the last columns should be given values both
for the sentence itself (the atomic formula) and for the complex
formula including the negation.)

p  q  |  p → q  |  ¬q  |  ¬p  |  ¬q → ¬p  |  (p → q) → (¬q → ¬p)
1  1  |    1    |  0   |  0   |     1     |           1
1  0  |    0    |  1   |  0   |     0     |           1
0  1  |    1    |  0   |  1   |     1     |           1
0  0  |    1    |  1   |  1   |     1     |           1

⋅ However, this method becomes cumbersome quite quickly, espe-


cially as the number of atomic formulas increases.

⋅ A less cumbersome method is so-called semantic trees or semantic


tableaux. The purpose of constructing a semantic tree is to search
for a counterexample. If the wff is valid, the search is going to lead
to a contradiction. Here is an example of the process:

⋅ One starts by Negating the Target Formula (NTF).

1. ¬((p → q) → (¬q → ¬p))    NTF
(In the column on the right, we indicate the line that justifies the
extension of the tree.)
⋅ Next, the NTF has to be "unpacked" into its logical conse-
quences. Since the NTF in line 1 is a false conditional, the
semantics for '¬' and '→' tell us that the antecedent of the
conditional must be true and that the consequent must be false.
So, we unpack the wff by adding to the tree as follows:

2. (p → q) l.1
3. ¬(¬q → ¬p) l.1

⋅ Next, we need to unpack the formulas in lines 2 and 3. In line 3


we have another false conditional, so we unpack this using the
same procedure, viz. antecedent true, consequent false.

4. ¬q l.3
5. ¬¬p l.3

⋅ Given the semantics for '¬', we unpack 5. as follows (since ¬¬p ⊨pl p):

6. p l.5

⋅ Now, we need to unpack the formula in line 2. There are two
options sufficient for this conditional to be true: either the an-
tecedent is false or the consequent is true. Since we do not know
which, we must consider both options. As a result, the tree now
branches into the two relevant options, namely ¬p and q.

7.

¬p q

⋅ But notice that each branch (starting from the bottom and work-
ing your way to the top) will now contain either p and ¬p or
q and ¬q. In other words, by working our way up the branch,
we can generate a contradiction. Whenever a branch leads to a
contradiction, we say that the branch closes. If all branches of a
tree close, this shows that when the target formula is negated,
it inevitably leads to a contradiction. So, the negation of the
negation of the target formula must be true. And so, the target
formula is valid.
⋅ The full derivation tree looks as follows:

1. ¬((p → q) → (¬q → ¬p))    NTF
2. p → q                      l.1
3. ¬(¬q → ¬p)                 l.1
4. ¬q                         l.3
5. ¬¬p                        l.3
6. p                          l.5
7. ¬p    |    q               l.2
   ×          ×

(In general, you should include line numbers on the left and an
indication of the line that justifies the move on the right. The
checkmarks are only meant for bookkeeping, to be added as the
relevant formulas are unpacked.)

⋅ By contrast, if a target formula is invalid, the tree will not close.


For illustration, consider the formula below:

(2) (p → q) → (p → r)

⋅ The semantic tree for this wff looks as follows:

1. ¬((p → q) → (p → r))    NTF



2. p→q l.1

3. ¬(p → r) l.1
4. p l.3
5. ¬r l.3

6. ¬p q l.2
× ↑

⋅ The full set of construction rules for semantic trees is given below.
(These rules can, of course, be mechanically applied, but it is
important that you take the time to understand why the rules are as
they are.)

Construction Rules for Semantic Trees in PL

¬¬f:        add f
f ∧ y:      add f and y
¬(f ∧ y):   branch into ¬f | ¬y
(f ∨ y):    branch into f | y
¬(f ∨ y):   add ¬f and ¬y
(f → y):    branch into ¬f | y
¬(f → y):   add f and ¬y
(f ↔ y):    branch into f, y | ¬f, ¬y
¬(f ↔ y):   branch into f, ¬y | ¬f, y

exercise: Show that the following wffs are valid by constructing
semantic trees:
(a) ¬(p ∧ q) → (¬p ∨ ¬q)
(b) ¬(p → (q → p)) → ¬(r → q)
(c) (r → (r → (q ∧ p))) → (q → (p ∨ q))
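⋅ For a mechanical picture of how such rules drive a tree, here is a
rough Python sketch of the tableau test for the primitive '¬'/'∨'
fragment only (the other connectives are defined), using the tuple
encoding of wffs from earlier. The encoding and function names are
illustrative assumptions, not a fixed implementation of the course
rules.

def closes(branch):
    """branch: a list of wffs (¬/∨ fragment); True iff every branch of
    the tree generated from it closes."""
    branch = list(branch)
    for i, wff in enumerate(branch):
        # skip literals: sentence letters and negated sentence letters
        if isinstance(wff, str) or (wff[0] == 'not' and isinstance(wff[1], str)):
            continue
        rest = branch[:i] + branch[i + 1:]
        if wff[0] == 'or':                          # f ∨ y: branch on the disjuncts
            return closes(rest + [wff[1]]) and closes(rest + [wff[2]])
        inner = wff[1]                              # wff is ¬(complex formula)
        if inner[0] == 'not':                       # ¬¬f: add f
            return closes(rest + [inner[1]])
        # ¬(f ∨ y): add ¬f and ¬y
        return closes(rest + [('not', inner[1]), ('not', inner[2])])
    # only literals left: the branch closes iff it contains some p and ¬p
    atoms = {w for w in branch if isinstance(w, str)}
    return any(not isinstance(w, str) and w[1] in atoms for w in branch)

def valid_by_tree(wff):
    """A wff is valid iff the tree for its negation (the NTF) closes."""
    return closes([('not', wff)])

print(valid_by_tree(('or', ('not', 'p'), 'p')))   # ¬p ∨ p is valid -> True
print(valid_by_tree(('or', 'p', 'q')))            # p ∨ q is not    -> False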

⋅ Invalid Formulas

⋅ If a wff is valid, it is true under every interpretation, so a wff
will be invalid iff there is an interpretation (i.e. an assignment of
truth values to the relevant atomic formulas) under which the wff is
false. Hence, to demonstrate that a formula is invalid, one must
demonstrate that such an interpretation exists. (This is also
standardly referred to as constructing a "countermodel".)

⋅ Consider again the example in (2) above. This is invalid, because
there is an interpretation under which the sentence is false, namely
the following:
⋅ I(p) = 1 ⋅ I(q) = 1 ⋅ I(r) = 0
⋅ Given this interpretation, we get the following valuations:
⋅ V I (p → q) = 1
⋅ V I (p → r) = 0
⋅ And hence,
⋅ V I ((p → q) → (p → r)) = 0
⋅ Notice that while a specific interpretation must be determined
in order to establish that a wff is invalid, such an interpretation
can be easily established by using a semantic tree. In particular,
by simply working your way up an open branch of the tree, the
relevant interpretation is easily determined, cf. above.

Modal Propositional Logic


⋅ Modal Propositional Logic (MPL) is an extension of propositional
logic (PL) that allows us to characterize the validity and invalidity
of arguments with modal premises or conclusions.

⋅ Specifically, modal logic is intended to help account for the validity
of arguments that involve statements such as (3)–(7). (Later, we will
use the modal framework to also consider statements such as (a)–(e)
below.)

(3) It is necessary that f.        (a) It will be that p.
(4) It must be that f.             (b) a knows that p.
(5) It is possible that f.         (c) a believes that p.
(6) It might be that f.            (d) It is obligatory that p.
(7) It may be that f.              (e) It is permitted that p.

⋅ In modal propositional logic (MPL), the logical vocabulary of
propositional logic (PL) is enriched with two monadic operators,
namely '□' and '◇', which combine with any atomic or complex PL
formula. (That is, '□' and '◇' are operators that can be combined
with a wff, i.e. '□f' and '◇f'. When a modal operator is combined
with a sentence f, we refer to f as the embedded sentence or the
embedded formula.)

⋅ The meaning of '□' is roughly 'it is necessary that' (or 'it must
be that' or 'necessarily') and the meaning of '◇' is roughly 'it is
possible that' (or 'it might be that' or 'possibly'). Moreover, the
meaning of '□' and '◇' is explicated in terms of possible worlds
(more on this in a bit).

⋅ Simplifying somewhat, the meaning of '□f' and '◇f' is going to
be the following:

'□f' = 1 iff 'f' = 1 in every possible world.
'◇f' = 1 iff 'f' = 1 in some possible world.

⋅ So '□' and '◇' are essentially monadic operators that function as
quantifiers over possible worlds. (Specifically, you can think of □
as a universal quantifier over possible worlds and ◇ as an existential
quantifier; more on this later.)

⋅ When '□' and '◇' are added to PL, we will be able to translate
various modal sentences in the following way:

(8) It's necessary that water is H2O.  ⇝  □p
(9) If water is H2O, then bread must contain H2O.  ⇝  □(p → q)
(10) Possibly, water is not H2O, but necessarily, it is.  ⇝  ◇¬p ∧ □p
(11) Necessarily, water is either H2O or it might be XYZ.  ⇝  □(p ∨ ◇r)

⋅ In order for this to make sense, we need to make various changes


to our initial PL system. Most importantly, we need to amend the
models to contain possible worlds and make formulas true or false
relative to such worlds.

⋅ But before engaging in this project, it is worth considering why
going to all this trouble is really needed. For example, why couldn't
we just stick with PL and introduce □ and ◇ as truth functional
connectives (on the model of negation)? (Remember, the truth of a
formula containing a truth functional connective is determined
entirely by the truth values of the arguments of the connective.)

⋅ The Problem with a Truth-Functional Analysis of � and �:


A connective is truth functional under the following condition:
Whenever it is combined with one or more atomic sentences to
form a new (complex) sentence f, the truth value of f is a function
of the truth values of its component sentence(s).

⋅ It is now easily demonstrated that a truth functional analysis of


modal expressions is inadequate. Consider the sentences below:

(12) Brian Rabern is in Quebec.


(13) It’s possible that Brian Rabern is in Quebec.
(14) 4 is a prime number.
(15) It’s possible that 4 is a prime number.

⋅ Let's start by looking at '◇': What should the truth table for ◇
look like?

f    | 1  0
◇f   | ?  ?

⋅ If (12) is true, i.e. if it is true that Brian Rabern is in Quebec,


then (13) is intuitively true too. Similarly, if (14) had been true,
then (15) would have been true too (as it happens, (15) is not
true). Hence, predicting that ◇f is true whenever f is true
seems correct. So, we fill out the truth table accordingly.

f    | 1  0
◇f   | 1  ?

⋅ But what about when the embedded sentence is false? Let’s


consider both options:

false: This leads to the prediction that (13) is false. However that
seems incorrect. It seems perfectly (metaphysically) possible that
Brian is in Quebec.
true: This leads to the prediction that (15) is true. However that
is clearly incorrect. It is impossible for the number 4 to be prime
(given the definition of a prime number).

⋅ In conclusion, simply treating ◇ as a truth functional connective is
not going to work in general.

⋅ What about '□'?

f    | 1  0
□f   | ?  ?

⋅ Suppose that (13) is true. Does this mean that it's necessarily true?
Clearly not. So, that 'f' is true shouldn't automatically entail that
'□f' is true too. So,

f    | 1  0
□f   | 0? ?

⋅ But this raises an obvious problem, namely that one cannot con-
clude that '□f' is false simply from the observation that 'f' is true.
For example, the sentence 'bachelors are unmarried' is true, but it
is also necessarily true. In conclusion, there is no adequate way of
filling out the truth table above.

⋅ In short, we need something more complex than a truth functional


analysis to correctly capture the meaning of expressions such as ‘it
is necessary that’ and ‘it is possible that’.

⋅ Vocabulary for Modal Propositional Logic (MPL):

The vocabulary of MPL is identical to that of PL except that we add
the two modal operators '□' and '◇' to the inventory of expressions.
However, '◇' is defined in terms of '□'.

⋅ Sentence Letters: p, q, r, ...
⋅ Connectives: ¬, ∨, ∧, →, ↔. (Here, we just pretend that we have
  defined '∧', '→', and '↔' in terms of '¬' and '∨'.)
⋅ Parentheses: (, ).
⋅ Modal Operators: □.

⋅ Additional Modal Operator: We define ◇ as follows:

⋅ ◇f =def ¬□¬f

(Girle also introduces other operators such as '∇' as a monadic
operator for contingency, where ∇f =def (◇f ∧ ◇¬f); '⥽' as a dyadic
operator for strict implication, where (f ⥽ y) =def □(f → y); and '○'
as a dyadic operator for compatibility, where (f ○ y) =def ◇(f ∧ y).
I will mostly ignore these.)

⋅ Syntax of MPL

⋅ Every sentence letter is a well-formed formula (wff).


⋅ If f and y are wffs, then ¬f, (f ∨ y), (f ∧ y), (f → y), and (f ↔ y) are also wffs.
⋅ If f is a wff, then □f and ◇f are also wffs.
⋅ Nothing else is a wff.

⋅ Semantics

⋅ The key difference between PL and MPL is that in MPL, truth
values are relativized to possible worlds. Whereas in PL, when
giving an interpretation, the interpretation function I is a func-
tion from sentence letters to truth values, in MPL we need a
slightly more complex function, namely a function from pairs of
sentence letters and possible worlds to truth values.
⋅ And so, for the semantics of MPL, we need something beyond
the simple interpretations we used in PL.

⋅ kripke models
As is standard, we use a Kripke model. (Named after the logician
Saul Kripke.) A Kripke model M is a tuple ⟨F, I⟩ consisting of a
frame F = ⟨W, R⟩ and an interpretation function I:

⋅ W is the set of all possible worlds, viz. W = {w1, w2, w3, ...}
⋅ R is a binary (accessibility) relation defined on W, viz. R ⊆ W×W.
  (That is, R is a set of pairs of worlds, e.g. {⟨w1,w2⟩, ⟨w2,w3⟩,
  ⟨w3,w1⟩}.)
⋅ I is a 2-place function that assigns 0 or 1 to pairs of sentence
  letters (p, q, r, ...) and worlds (w1, w2, ..., wn). I.e. let S be
  the set of all sentence letters p, q, r, ...; then I: (S×W) → {0,1}.
⋅ The frame F provides the structure of the model, viz. the space of
possible worlds W and the accessibility relation R ⊆ W×W among the
worlds. (We are going to talk a lot more about accessibility
relations later.)

⋅ The interpretation function I assigns semantic values to all the
sentence letters relative to worlds.
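⋅ For concreteness, a Kripke model can be written down as plain data.
A small illustrative Python sketch (the particular worlds, relation,
and truth values are made up for the example):

# A minimal sketch of a Kripke model as plain Python data.
W = {'w1', 'w2', 'w3'}                                   # possible worlds
R = {('w1', 'w2'), ('w2', 'w3'), ('w3', 'w1')}           # accessibility relation, R ⊆ W×W
I = {('p', 'w1'): 1, ('p', 'w2'): 0, ('p', 'w3'): 1,     # interpretation: (letter, world) -> 0/1
     ('q', 'w1'): 0, ('q', 'w2'): 1, ('q', 'w3'): 0}
M = (W, R, I)                                            # the model ⟨⟨W, R⟩, I⟩, flattened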

⋅ Next, as we did in PL, we need to define a valuation function V


which permits us to determine the truth values of both simple and
complex sentences.

Valuation Function
Where a model M = ⟨F, I⟩, a valuation function V for M, V M,
is defined as the 2-place function that assigns 0 or 1 to each wff
relative to each w ∈ W, subject to the following constraints:

⋅ Where a is any sentence letter and f and y are wffs and w is
any member of W:

V M (a, w) = 1 iff I(a, w) = 1
V M (¬f, w) = 1 iff V M (f, w) = 0
V M (f ∨ y, w) = 1 iff V M (f, w) = 1 or V M (y, w) = 1
V M (□f, w) = 1 iff for all w′ ∈ W, if R(w, w′), then V M (f, w′) = 1

⋅ Given the definitions of the connectives and '◇', we get the
following results:

V M (f ∧ y, w) = 1 iff V M (f, w) = 1 and V M (y, w) = 1
V M (f → y, w) = 1 iff V M (f, w) = 0 or V M (y, w) = 1
V M (f ↔ y, w) = 1 iff V M (f, w) = V M (y, w)
V M (◇f, w) = 1 iff there is a w′ ∈ W such that R(w, w′) and V M (f, w′) = 1
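⋅ Here is a sketch of the corresponding valuation function in code,
extending the earlier PL evaluator with clauses for □ and ◇ (encoded,
as an illustrative assumption, as 'box' and 'dia'), and using the model
M sketched above.

def val(M, wff, w):
    """Sketch of V_M(wff, w) for the tuple-encoded wffs used earlier."""
    W, R, I = M
    if isinstance(wff, str):
        return I[(wff, w)]
    op = wff[0]
    if op == 'not':
        return 1 - val(M, wff[1], w)
    if op == 'or':
        return max(val(M, wff[1], w), val(M, wff[2], w))
    if op == 'box':   # true at w iff true at every world accessible from w
        return int(all(val(M, wff[1], v) == 1 for v in W if (w, v) in R))
    if op == 'dia':   # true at w iff true at some world accessible from w
        return int(any(val(M, wff[1], v) == 1 for v in W if (w, v) in R))
    raise ValueError(op)

# Using the model M defined above: is □p true at w1?
print(val(M, ('box', 'p'), 'w1'))   # -> 0, since p is false at the accessible world w2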

⋅ Validity and Logical Consequence

The next step is to define validity and logical consequence for
MPL. However, since formulas are not true or false simpliciter, the
definition of validity needs to be slightly more complicated than it
was for PL.

definition of validity in an mpl-model:
An MPL-wff f is valid in an MPL-model M, where M = ⟨F, I⟩ and
F = ⟨W, R⟩, iff for every w ∈ W, V M (f, w) = 1.

⋅ So, that’s validity in a model M. What about validity more gen-


erally? Well, that’s going to depend on the modal system in ques-
tion. There are several modal systems, each of which gives different
results with respect to which wffs are valid.

⋅ We are going to start by looking at the “normal” modal systems,


namely K, T, B, S4, and S5.

⋅ For these, we can define validity and logical consequence as fol-


lows:

definition of mpl-validity
An MPL-wff is valid in D (where D is one of K, T, B, S4, or S5)
iff it is valid in every D-model.

definition of logical consequence in mpl
An MPL-wff f is a logical consequence in system D of a set of MPL-
wffs G iff for every D-model ⟨W, R, I⟩ and each w ∈ W, if V M (g, w) = 1
for each g ∈ G, then V M (f, w) = 1.

⋅ Next, we will look at semantic trees and countermodels for the


simplest of the five systems, namely S5.

Logic 2: Modal Logics – Week 2


Semantic Trees for S5
⋅ Remember that '◇' was defined as the dual of '□', i.e.

◇f =def ¬□¬f

⋅ This is analogous to the way in which the existential quantifier is
the dual of the universal (i.e. ∃xf =def ¬∀x¬f). And so, for similar
reasons, the following equivalences hold:
⋅ ¬□f ↔ ◇¬f
⋅ ¬◇f ↔ □¬f
⋅ The aim for today is to establish the rules for constructing se-
mantic trees for the simplest of the normal modal systems S5. In
general, I will follow Girle (2009) in using the following notation:
⋅ ‘f(w) = 1’ means f is true in world w.
⋅ ‘f(w) = 0’ means f is false in world w.
⋅ These conditions determine the following tree construction rules
which we refer to as (MN)-rules (Modal Negation rules).

1. ¬□f (w)  √                       1. ¬◇f (w)  √
⋮                                   ⋮
n. ◇¬f (w)    l.1, (MN)             n. □¬f (w)    l.1, (MN)

⋅ Next, we introduce rules specifically for standard modal formulas.
It's important to note that these rules are specific to the modal
system S5. For this reason, we refer to these rules as the (◇S5)-rule
and the (□S5)-rule:

◇S5-rule    1. ◇f (w)
            ⋮
            n. f (v)    l.1, (◇S5)    [v is a new world to the path]
(NB! For the (◇S5) rule, it is crucial that the world v introduced in
line n is new to that part of the tree.)

□S5-rule    1. □f (w)
            ⋮
            n. f (v)    l.1, (□S5)    [v can be any world]
(NB! For the (□S5) rule, the world v introduced in line n can be any
world, including w.)

⋅ The construction rules introduced in lecture 1 remain basically
the same, although they need to be revised to account for the
fact that formulas are true/false relative to worlds. So, we revise
as follows:

Propositional Logic Tree Rules (PTR) for MPL

¬¬f (w):        add f (w)
f ∧ y (w):      add f (w) and y (w)
¬(f ∧ y) (w):   branch into ¬f (w) | ¬y (w)
(f ∨ y) (w):    branch into f (w) | y (w)
¬(f ∨ y) (w):   add ¬f (w) and ¬y (w)
(f → y) (w):    branch into ¬f (w) | y (w)
¬(f → y) (w):   add f (w) and ¬y (w)
(f ↔ y) (w):    branch into f (w), y (w) | ¬f (w), ¬y (w)
¬(f ↔ y) (w):   branch into f (w), ¬y (w) | ¬f (w), y (w)

⋅ Note that the MN and PTR rules are all "single-world" rules. That
is, applying any of these rules never leads to a change of evaluation
world.
⋅ Let's now put these construction rules to work. We will start by
showing that the set of premises {□(p → q), ¬◇q} entails the con-
clusion ¬p:

(A1) {□(p → q), ¬◇q} ⊨S5 ¬p

⋅ Semantic Tree for (A1)

1. □(p → q) (w)           Premise
2. ¬◇q (w)                Premise
3. ¬¬p (w)                Negated Conclusion
4. p (w)                  l.3
5. □¬q (w)                l.2, MN
6. ¬q (w)                 l.5, □S5
7. (p → q) (w)            l.1, □S5
8. ¬p (w)    |    q (w)   l.7
   ×              ×

⋅ Let's look at another one:

(1) ⊨S5 □p → □□p

⋅ Proof.

1. ¬(□p → □□p) (w)    NTF
2. □p (w)             l.1
3. ¬□□p (w)           l.1  √
4. ◇¬□p (w)           l.3, MN  √
5. ¬□p (v)            l.4, ◇S5  √
6. ◇¬p (v)            l.5, MN  √
7. ¬p (u)             l.6, ◇S5
8. p (u)              l.2, □S5
   ×

(Worlds introduced in the tree: w, v, u; the branch closes since u
contains both p and ¬p.)
⋅ Counterexamples and Countermodels in MPL
In MPL, formulas are true relative to worlds, so an atomic sentence
might have one truth value relative to one world and a different
truth value relative to another.

     w   v   u   ...
p    1   0   0
q    0   1   0
r    1   1   1

⋅ Moreover, remember that for the normal modal systems in MPL,
validity is defined as follows: a formula f is valid iff it is true in
every world of every model of the system. So, in order to establish
a counterexample to a formula, one need only demonstrate that the
formula is false in at least one world of a relevant model. For
example, if the system in question is S5, then a counterexample
requires an S5-model. (When we start considering accessibility
relations, it will be easier to appreciate the difference between an
S5-model and models for other normal systems.)
⋅ Let's consider an example.

(2) ⊭S5 ◇p → □p

⋅ A countermodel to this formula requires a model M where the
antecedent ('◇p') is true and the consequent ('□p') is false.
⋅ A countermodel contains the following ingredients:
(a) A specification of a set of possible worlds W.
(b) A specification of the relevant accessibility relation(s).
(c) A specification of an interpretation function taking pairs of
atomic sentences and worlds as arguments and truth values as
outputs.
⋅ To simplify, we ignore (b) for the present example. Here is a
countermodel to (2):

S5-model M:  W: {w, v}
             I: {⟨⟨p,w⟩, 0⟩, ⟨⟨p,v⟩, 1⟩}

⋅ In this model, 'p' is true at v, but false at w. This suffices to gener-
ate a counterexample to (2). If 'p' is true at v, then '◇p' is true at
w. However, since 'p' is false at w, '□p' is false at w.
⋅ As is the case in PL, one can search for a countermodel using a
semantic tree.

1. ¬(◇p → □p) (w)    NTF
2. ◇p (w)            l.1
3. ¬□p (w)           l.1
4. ◇¬p (w)           l.3, MN
5. p (v)             l.2, ◇S5
6. ¬p (u)            l.4, ◇S5
   ↑

(The open branch describes a countermodel with worlds w, v, u, where p
is true at v and false at u.)

⋅ Note, there is no contradiction between l.5 and l.6 as these formu-


las are not evaluated at the same worlds.

World-Generators and World-Fillers

⋅ Above we assumed the rules ◇S5 and □S5. These can be charac-
terized as world-generator and world-filler rules respectively.
⋅ The rule ◇S5 is a world-generator rule since whenever we en-
counter a true possibility statement, we unpack the embedded
formula in a new world to the path.
⋅ By contrast, □S5 is a world-filler rule since whenever we
encounter a true necessity statement, we unpack this by adding
the embedded formula to one (or more) already existing worlds
on the path.
⋅ The □S5 rule is, however, completely unrestricted: It allows us to
unpack the embedded formula in any world whatsoever.
⋅ So, one way of changing the logic is to restrict the world-filler rule.
We will now consider a variety of ways of doing this.

Accessibility
⋅ In general, we will say that a world w has access to another world
v only if v has somehow been generated from w. To indicate that a
world w has access to another world v, we write wRv. (Accessibility
relations are sometimes annotated as follows: 'Rwv' or 'R(w, v)'.)
⋅ Moreover, we refine the rules for '◇' and '□' in the following way:
⋅ If ◇f(w) = 1, then there is a world v accessible from w such that
f(v) = 1.
⋅ If □f(w) = 1, then for all worlds v accessible from w, f(v) = 1.
⋅ And this yields the following general truth conditions:
⋅ ◇f(w) = 1 iff ∃v(wRv ∧ f(v) = 1)
⋅ □f(w) = 1 iff ∀v(wRv → f(v) = 1)
⋅ Now, to keep track of the relevant accessibility relations in a tree,
we need to modify the modal tree rules:

◇R-rule    1. ◇f (w)
           ⋮
           n.   wRv
           n+1. f (v)    l.1, (◇R)    [v is a new world to the path]
(NB! For the (◇R) rule, it is still crucial that the world v introduced
in line n is new to that part of the tree.)

□R-rule    1. □f (w)
           2. wRv
           ⋮
           n. f (v)    l.1–2, (□R)
(NB! For the (□R) rule, notice that antecedent access to some world is
required in order to "discharge" the box.)

Other Modal Systems


⋅ So far we have introduced the following construction rules:

⋅ PTR: Propositional Logic Tree Rules for MPL (cf. p.2)
⋅ MN: Modal Negation Rules (cf. p.1)
⋅ ◇S5, □S5 (cf. p.1)
⋅ ◇R, □R (cf. p.5)

⋅ The tree construction rules for every modal system are going to
include PTR and MN (as these are single-world rules). So, the tree
rules for the system S5 (S5Tr) are going to be the union of PTR,
MN, and the S5-rules: viz. (S5Tr) = PTR ∪ MN ∪ {◇S5, □S5}

⋅ We will now consider the standard range of systems weaker than
S5, namely K, T, S4, and B.

System K
⋅ The tree rules for K (KTr) are simply PTR ∪ MN ∪ {◇R, □R}.
⋅ It is noticeable just how much weaker this system is than S5. For
example, in K some of even the most simple modal statements that
it might seem should be valid are invalid. Consider, for example,
(3) below.

(3) □p → p

⋅ To construct a countermodel for this wff in K, let's start by con-
structing a tree:

1. ¬(□p → p) (w)    NTF
2. □p (w)           l.1
3. ¬p (w)           l.1

⋅ When we only have the □R-rule at our disposal, we can only
discharge a box if there is an antecedently accessible world. And
since there is no such world available, we get stuck almost imme-
diately. (Notice that □f(w) is true when there are no accessible
worlds from w. This follows from the semantics for □f(w):
□f(w) = 1 iff ∀v(wRv → f(v) = 1). Since the antecedent of this
conditional is false whenever w has access to no worlds, the
conditional is true, and so □f(w) is true.)
⋅ So, a simple countermodel will look as follows:

K-model M:  W: {w}
            R: ∅
            I: {⟨⟨p,w⟩, 0⟩}

⋅ In other words, this means that in the system K, one cannot even
prove that if it is necessary that f, then f is true.
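⋅ The countermodel can be checked mechanically with the val sketch
from above (the encoding is the same illustrative one); note how □p
comes out vacuously true at a world with no accessible worlds.

# Checking the K-countermodel: R is empty, so □p is vacuously true at w.
M_K = ({'w'}, set(), {('p', 'w'): 0})
print(val(M_K, ('box', 'p'), 'w'))   # -> 1 (vacuously true)
print(val(M_K, 'p', 'w'))            # -> 0, so □p → p fails at w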

System T
⋅ One way to increase the power of the system (which will ensure
the validity of '□f → f') is to stipulate specific properties of the
accessibility relation. For example, we might stipulate that the
accessibility relation is reflexive. (Reflexivity is, formally
speaking, a property of relations. Specifically, if a relation R is
reflexive, then the following holds: ∀x(Rxx).)

(Refl.)    ⋮
           n. wRw    Refl.    [for any w in the path]

⋅ If we add Refl. to KTr we get the modal system T. The tree rules
for T are thus the following: (TTr) = PTR ∪ MN ∪ {◇R, □R, Refl}
⋅ It's easy to see that with TTr, the formula in (3) is valid:


1. ¬(□p → p) (w)    NTF
2. □p (w)           l.1
3. ¬p (w)           l.1
4. wRw              Refl.
5. p (w)            l.2, l.4, □R
   ×

⋅ Hence, (3) is valid in T, but invalid in K.

System S4
⋅ To get the system S4, we impose the further requirement that the
accessibility relation be transitive. (Transitivity is, formally
speaking, a property of relations. Specifically, if a relation R is
transitive, then the following holds: ∀x∀y∀z((Rxy ∧ Ryz) → Rxz).)

(Trans.)    1. wRv
            2. vRu
            ⋮
            n. wRu    l.1, l.2, Trans.

⋅ The tree rules for S4 are thus as follows:
(S4Tr) = PTR ∪ MN ∪ {◇R, □R, Refl, Trans}

⋅ Transitivity is needed in order for the formula in (4) to come out
valid:

(4) □p → □□p

⋅ Let's look at the proof.

1. ¬(□p → □□p) (w)    NTF
2. □p (w)             l.1
3. ¬□□p (w)           l.1
4. ◇¬□p (w)           l.3, MN
5. wRv                l.4, ◇R
6. ¬□p (v)            l.4, ◇R
7. ◇¬p (v)            l.6, MN
8. vRu                l.7, ◇R
9. ¬p (u)             l.7, ◇R
10. wRu               l.5, l.8, Trans
11. p (u)             l.2, l.10, □R
    ×

⋅ Notice that Trans is crucial here. Without this rule, it would not
be possible to discharge the box in line 2 in u, which is needed to
generate a contradiction.
⋅ So, (4) is valid in S4, but invalid in K and T.

System B
⋅ If we assume that the accessibility relation is symmetric (in addition
to being reflexive), we end up with the system B. (This system is also
sometimes referred to as Br. It is named after the mathematician
L.E.J. Brouwer.)
⋅ When symmetry is assumed, we get the following tree rule.
(Symmetry is, formally speaking, a property of relations.
Specifically, if a relation R is symmetric, then the following holds:
∀x∀y(Rxy → Ryx).)

(Symm.)    1. wRv
           ⋮
           n. vRw    l.1, Symm.

⋅ The formula which characterizes the system B is the wff in (5).

(5) p → □◇p

⋅ That (5) is valid in B can be proved as follows:

1. ¬(p → □◇p) (w)    NTF
2. p (w)             l.1
3. ¬□◇p (w)          l.1
4. ◇¬◇p (w)          l.3, MN
5. wRv               l.4, ◇R
6. ¬◇p (v)           l.4, ◇R
7. □¬p (v)           l.6, MN
8. vRw               l.5, Symm.
9. ¬p (w)            l.7, l.8, □R
   ×

⋅ The tree rules for B are thus the following:
(BTr) = PTR ∪ MN ∪ {◇R, □R, Refl., Symm.}

Back to System S5
⋅ If the accessibility relation is assumed to be reflexive, transitive,
and symmetric, we then end up with the system with which we
started, S5. (A relation that is reflexive, transitive, and symmetric
is also known as an equivalence relation.)
⋅ So, instead of characterizing S5 in terms of special dedicated rules
for □ and ◇, we can think of S5 simply as a system that includes
the following rules:
(S5Tr) = PTR ∪ MN ∪ {◇R, □R, Refl., Trans., Symm.}
⋅ Hence, in S5, the formulas in (3), (4), and (5) are all valid:
⋅ ⊨S5 □p → p (Refl.)
⋅ ⊨S5 □p → □□p (Trans.)
⋅ ⊨S5 p → □◇p (Symm.)
⋅ Moreover, in S5 the following is also valid (which is not valid in
any of the other systems):
⋅ ⊨S5 ◇p → □◇p    (exercise: Prove this.)

Frames
⋅ This brings us back to the notion of a frame F which was intro-
duced when we talked about Kripke models. Remember, a frame F
is a pair of a set of worlds W and an accessibility relation R.
(Strictly speaking, R is a subset of the Cartesian product of W with
itself: R ⊆ (W×W).)
⋅ So, if the accessibility relation R in F is reflexive, we say that it
is a T-frame.
⋅ And if the accessibility relation R in F is reflexive and transitive,
we say that it is an S4-frame.
⋅ And if the accessibility relation R in F is reflexive and symmetric,
we say that it is a B-frame.
⋅ And, finally, if the accessibility relation R in F is reflexive, tran-
sitive, and symmetric, we say that it is an S5-frame.
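⋅ For finite frames, these defining properties are easy to test
directly. A small illustrative Python sketch (the frame itself is made
up for the example):

def reflexive(W, R):  return all((w, w) in R for w in W)
def symmetric(W, R):  return all((v, w) in R for (w, v) in R)
def transitive(W, R): return all((w, u) in R
                                 for (w, v) in R for (x, u) in R if v == x)

W = {'w', 'v'}
R = {('w', 'w'), ('v', 'v'), ('w', 'v'), ('v', 'w')}
print(reflexive(W, R), transitive(W, R), symmetric(W, R))  # -> True True True: an S5-frame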

The Relation between the Normal Modal Systems

⋅ The tree rules for each of the normal modal systems thus look as
follows:

KTr  = PTR ∪ MN ∪ {◇R, □R}
TTr  = PTR ∪ MN ∪ {◇R, □R, Refl.}
S4Tr = PTR ∪ MN ∪ {◇R, □R, Refl., Trans.}
BTr  = PTR ∪ MN ∪ {◇R, □R, Refl., Symm.}
S5Tr = PTR ∪ MN ∪ {◇R, □R, Refl., Trans., Symm.}

⋅ It follows from these relations between the normal modal systems
that:

⋅ All K-valid formulas are T-valid.
⋅ All T-valid formulas are S4-valid.
⋅ All T-valid formulas are B-valid.
⋅ All S4-valid formulas are S5-valid.
⋅ All B-valid formulas are S5-valid.

(These inferential relations between the systems can be diagram-
matically represented: K → T (reflexive) → S4 (reflexive, transitive)
and B (reflexive, symmetric) → S5 (reflexive, transitive, symmetric).)

Logic 2: Modal Logics – Week 4


Natural Deduction in PL
⋅ A proof in natural deduction consists of a sequence of sentences
which are either premises, assumptions, or derived from previous
sentences via various inference rules or replacement rules. The last
line of a sequence serves as the conclusion of the proof.
⋅ Every line in a natural deduction proof must be numbered on the
left and given a justification on the right. For example, a proof of
(1) would look as follows:

(1) p, (p → q), (q → r) ⊢PL r

1 p premise

2 p→q premise

3 q→r premise

4 q MP, 1, 2

5 r MP, 3, 4

⋅ This proof of r relies on the premises in lines 1, 2, and 3, and on
applications of an inference rule, namely modus ponens. So, this is
a proof that from the premises in 1 to 3 and the rule of modus
ponens, one can derive the conclusion r. (Various inference rules,
including modus ponens, are explicated below.)


⋅ Another example:

(2) (p → q), (q → r), ¬r ⊢PL ¬p

1  p → q    premise
2  q → r    premise
3  ¬r       premise
4  ¬q       MT, 2, 3
5  ¬p       MT, 1, 4

⋅ This proof relies on the premises in lines 1, 2, and 3, and on applica-
tions of the inference rule of modus tollens (cf. below).
⋅ Different natural deduction systems will assume different basic in-
ference and replacement rules. We are going to be quite generous
in this regard and assume a fairly extensive list of these.
⋅ Replacement Rules: A replacement is a rule that allows the re-
placement of one formula for another. For example, the two so-
called DeMorgan replacement rules look as follows:

rule name abbreviation


(¬f ∨ ¬y) :: ¬(f ∧ y) DeMorgan (DeM)
(¬f ∧ ¬y) :: ¬(f ∨ y) DeMorgan (DeM)

⋅ These rules simply say that the formulas flanking the ‘::’ may be
substituted for one another at any point in a proof. When using
a replacement rule, one must however, as usual, cite the line on
which the replacement relies and the name of the rule. Here is an
example:

(3) (¬p ∧ ¬q), (¬(p ∨ q) → ¬q) ⊢PL ¬q

1  ¬p ∧ ¬q           premise
2  ¬(p ∨ q) → ¬q     premise
3  ¬(p ∨ q)          DeM, 1
4  ¬q                MP, 2, 3

⋅ Here is the list of replacement rules we will assume. (If we were
being more rigorous, we would prove for each of these replacement
rules that they follow from more basic rules of inference. For
example, we can prove the commutativity of conjunction using only
the inference rules Conj and Simp, cf. below. exercise (easy): Prove
the commutativity of conjunction. exercise (harder): Prove the
commutativity of disjunction.)

⋅ Replacement Rules

¬¬f :: f                                   double negation (DN)
(f ∧ y) :: (y ∧ f)                         commutativity for ∧ (Com)
(f ∨ y) :: (y ∨ f)                         commutativity for ∨ (Com)
(f ∧ (y ∧ c)) :: ((f ∧ y) ∧ c)             associativity for ∧ (Assoc)
(f ∨ (y ∨ c)) :: ((f ∨ y) ∨ c)             associativity for ∨ (Assoc)
(f ∧ (y ∨ c)) :: ((f ∧ y) ∨ (f ∧ c))       distribution (Dist)
(f ∨ (y ∧ c)) :: ((f ∨ y) ∧ (f ∨ c))       distribution (Dist)
(¬f ∨ ¬y) :: ¬(f ∧ y)                      DeMorgan (DeM)
(¬f ∧ ¬y) :: ¬(f ∨ y)                      DeMorgan (DeM)
(f ∨ f) :: f                               idempotence (Idem)
(f ∧ f) :: f                               idempotence (Idem)
(f → y) :: (¬f ∨ y)                        material implication (IMP)
(f → y) :: (¬y → ¬f)                       contraposition (Cont)
((f ∧ y) → c) :: (f → (y → c))             exportation (Exp)
(f → (y → c)) :: (y → (f → c))             permutation (Per)
(f ↔ y) :: ((f → y) ∧ (y → f))             equivalence (Equiv)
(f ↔ y) :: ((f ∧ y) ∨ (¬f ∧ ¬y))           equivalence (Equiv)
f :: (f ∧ (y ∨ ¬y))                        taut. conj. (TConj)
f :: (f ∨ (y ∧ ¬y))                        taut. disj. (TDisj)

⋅ Inference Rules: An inference rule is a rule permitting the infer-
ence of a formula (or formulas) on the basis of another formula (or
formulas). For example, the rule of modus ponens says that if f and
(f → y) are lines in a proof, one may then add y as a line too.

⋅ We will assume the following inference rules:

modus ponens (MP):            from (f → y) and f, infer y
modus tollens (MT):           from (f → y) and ¬y, infer ¬f
simplification (Simp):        from (f ∧ y), infer f (or infer y)
conjunction (Conj):           from f and y, infer (f ∧ y)
addition (Add):               from f, infer (f ∨ y) (or (y ∨ f))
disjunctive syllogism (DS):   from (f ∨ y) and ¬f, infer y;
                              from (f ∨ y) and ¬y, infer f
hypothetical syllogism (HS):  from (f → y) and (y → c), infer (f → c)
constructive dilemma (CD):    from (f → y), (c → d), and (f ∨ c),
                              infer (y ∨ d)
conditional proof (CP):       if from the assumption f one derives y,
                              discharge the assumption and infer (f → y)
reductio ad absurdum (RAA):   if from the assumption f one derives a
                              contradiction ⊥, discharge the assumption
                              and infer ¬f
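⋅ As an aside, individual proof steps can be checked mechanically. Here
is a small illustrative Python sketch for MP and MT only; the encoding
of formulas and proofs is an assumption made up for the example.

def check_step(lines, i):
    """Verify line i (1-indexed) if it is justified by MP or MT.
    Each line is (formula, rule, cited_lines); ('imp', f, g) encodes (f → y)."""
    formula, rule, cites = lines[i - 1]
    if rule not in ('MP', 'MT'):
        return True                                  # premises/assumptions accepted here
    a, b = (lines[j - 1][0] for j in cites)          # the two cited formulas
    if rule == 'MP':                                 # from (f → y) and f, infer y
        return a == ('imp', b, formula)
    # MT: from (f → y) and ¬y, infer ¬f
    return a[0] == 'imp' and b == ('not', a[2]) and formula == ('not', a[1])

proof = [(('imp', 'p', 'q'), 'premise', []),
         ('p',               'premise', []),
         ('q',               'MP',      [1, 2])]
print(all(check_step(proof, i) for i in range(1, len(proof) + 1)))   # -> True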

⋅ Conditional Proofs
One of the inference rules above is the rule of conditional proof. A
conditional proof is just that, i.e. a proof of a conditional state-
ment.
⋅ Conditional proofs ordinarily make use of assumptions which are
importantly different from premises. An assumption is, in a sense,
a temporary premise. That is, it is a line in a proof that serves to
derive certain consequences, but which (in contrast to a premise)
must be “discharged” (i.e. discarded) before the end of the proof.
This is to ensure that the conclusion of the proof does not rely on
mere assumptions.
⋅ One way to discharge an assumption is by using conditional proof
(CP). This rule says that if from an assumption f, you can derive
y (say, by using various replacement and inference rules), you
have then given a conditional proof (a proof with a conditional
conclusion), namely that (f → y) follows from no assumptions.
⋅ Since discharging assumptions is crucial, keeping track of these is
imperative. To do this, we use the following notation: When mak-
ing an assumption, we add an indented vertical line (to indicate
that we are starting a subproof that makes use of an assumption).
Moreover, we underline the relevant assumption to indicate that
this is what we are currently aiming to discharge before returning to
the main proof.
⋅ Let's consider an example. (This is effectively a proof of the infer-
ence rule hypothetical syllogism, so we will assume that we are not
allowed to use this rule for this proof.)

(4) (p → q), (q → r) ⊢PL (p → r)

1 p→q premise

2 q→r premise

3 p assumption

4 q MP, 1, 3

5 r MP, 2, 4

6 p→r CP, 3, 5

⋅ Proofs using assumptions can also be used to prove formulas from


no premises (or the empty set of premises). For example:

(5) ⊢PL p → (p ∨ q)

1 p assumption

2 p∨q Add, 1

3 p → (p ∨ q) CP, 1, 2

⋅ This proof shows that from the assumption p, one can derive (p ∨
q) using the inference rule addition. Consequently, we are licensed
to conclude that (p → (p ∨ q)) is a tautology, i.e. it follows from the
empty set of premises (viz. no premises and no assumptions).

⋅ Indirect Proofs
Another method of proof is indirect proof. (Indirect proofs are also
commonly referred to as proofs by contradiction, or simply
reductios.) An indirect proof proceeds by assuming the negation of
the target formula and then demonstrating (using replacement and
inference rules) that this leads to a contradiction. From the
derivation of a contradiction, one then infers the negation of the
initial assumption (which thereby discharges the assumption), and
this then shows that the target formula is true.
⋅ Here is an example:

(6) ⊢PL (p ∨ ¬p)

1  ¬(p ∨ ¬p)       assumption
2  ¬p ∧ ¬¬p        DeM, 1
3  ¬p              Simp., 2
4  ¬¬p             Simp., 2
5  ⊥               3, 4
6  ¬¬(p ∨ ¬p)      RAA, 1, 5
7  (p ∨ ¬p)        DN, 6

⋅ Also, it is important to note that strictly speaking any conclusion


will follow from a contradiction. This is easily proved.

(7) (p ∧ ¬p) ⊢PL q

1  p ∧ ¬p     premise
2  ¬q         assumption
3  p          Simp., 1
4  ¬p         Simp., 1
5  ⊥          3, 4
6  ¬¬q        RAA, 2, 5
7  q          DN, 6

Natural Deduction in MPL


⋅ Next, we need to extend the natural deduction system for PL to
MPL. However, we will only consider natural deduction systems
for the logics T, S4, and S5.

⋅ We start by assuming the following modal replacement rules and


modal rules of inference.

⋅ Modal Replacement Rules

¬□f :: ◇¬f      modal negation (MN)
¬◇f :: □¬f      modal negation (MN)
¬□¬f :: ◇f      modal negation (MN)
¬◇¬f :: □f      modal negation (MN)

⋅ Modal Rules of Inference

⋅ In addition to the inference rules for PL above, we assume the
following modal inference rules. (Note: The MR-rules are system
specific: For a derivation in T, one may only use MRT. For a
derivation in S4, one may only use MRT and MRS4. For a derivation
in S5, one may use MRT, MRS4, and MRS5.)

possibility introduction (PI):   from f, one may infer ◇f
modal reiteration T (MRT):       from □f, one may infer f
modal reiteration S4 (MRS4):     from □f, one may infer □f
modal reiteration S5 (MRS5):     from ◇f, one may infer ◇f

⋅ Null Assumption Proofs


In addition to the modal inference rules, one extra ingredient is
needed, namely the notion of null assumption proof.

⋅ A null assumption proof has the following features:

1. There is no assumption (or a null assumption).


2. Every formula inside the scope of a null assumption must be
deduced from preceding formulas inside the scope of that null
assumption, unless it is deduced by modal reiteration for the
appropriate modal logic.
3. The proof ends with a discharge of the null assumption line.
4. After the discharge of the null assumption line, where the for-
mula immediately before the discharge is f, the next formula is
□f.

⋅ This yields the following inference rule for null assumption proofs.

1  null assumption
2  ⋮
3  f
4  □f    NI

necessity introduction (NI)

⋅ In other words, in combination with the MR-rules above, a null


assumption proof allows us to adjoin boxes to formulas.

⋅ Let’s consider a couple of simple examples of derivations that use


the modal inference rules.

(8) ⊢T □□(p → ¬q) → (¬p ∨ ¬q)

1  □□(p → ¬q)                    assumption
2  □(p → ¬q)                     MRT, 1
3  p → ¬q                        MRT, 2
4  (¬p ∨ ¬q)                     IMP, 3
5  □□(p → ¬q) → (¬p ∨ ¬q)        CP, 1–4

(9) ⊢T □p → □¬¬p

1  □p                            assumption
2     null assumption
3     p                          MRT, 1
4     ¬¬p                        DN, 3
5  □¬¬p                          NI, 2–4
6  □p → □¬¬p                     CP, 1–5

(10) ⊢S5 ◇p → □◇◇◇p

1  ◇p                   assumption
2  ◇◇p                  PI, 1
3  ◇◇◇p                 PI, 2
4     null assumption
5     ◇◇◇p              MRS5, 3
6  □◇◇◇p                NI, 4, 5
7  ◇p → □◇◇◇p           CP, 1–6

⋅ One thing that might not be entirely obvious is how, given the
rules available, we are supposed to prove “bare” necessities such
as (11) and (12).

(11) □(p → p)
(12) □(p → (p ∨ q))

⋅ Remember, to introduce a '□', we need to use a null assumption
proof, but since the only lines in a null assumption proof must be
derived from previous lines (within the null assumption proof) or
by modal reiteration rules, it is not obvious what to do here.
⋅ The solution that we will adopt here is to assume that it is permit-
ted to have a line in a null assumption proof that is not derived
from previous lines or by modal reiteration rules, provided that the
line has been derived by doing a subproof within the null assumption
proof and that the line relies on no assumptions.
⋅ Here is an example.

(13) ⊢T □(p → p)

1     null assumption
2        p              assumption
3           ¬p          assumption
4           ⊥           2, 3
5        ¬¬p            RAA, 3, 4
6        p              DN, 5
7     p → p             CP, 2–6
8  □(p → p)             NI, 1, 7

Logic 2: Modal Logics – Week 5


Predicate Logic – A Recap
⋅ We start today with a recap of the syntax and semantics of First
Order Logic (fol). First we need a vocabulary (or a lexicon) for
our formal language L.

Primitive Vocabulary of L
⋅ The language L contains the following primitive expression types.

⋅ Connectives: ¬, ∧, ∨, →, ↔
⋅ Individual Variables: x, y, z, ...
⋅ Individual Constants: a, b, c, ...
⋅ Predicate Constants: F1, F2, F3, ..., R1, R2, R3, ...
⋅ Quantifiers: ∀, ∃
⋅ Parentheses: (, )

(We will add numerical superscripts to variables and constants when we
need more than the alphabet provides. We will use F for 1-place
predicates and R for relations. However, it is assumed that our
vocabulary also contains predicates of higher arity.)

⋅ Variables and constants are both referred to as terms.

Syntax of L
⋅ Next, we state the syntactic rules of L, i.e. the rules that determine
whether a string of L is a well formed formula (wff).

1. If P is an n-place predicate and a1 ...an are terms, then Pa1 ...an


is a well-formed formula (wff).
2. If f and y are wffs, and a is a variable, then the following are
wffs:

¬f  |  (f ∧ y)  |  (f ∨ y)  |  (f → y)  |  (f ↔ y)  |  ∀af  |  ∃af

3. Only strings formed on the basis of 1. and 2. are wffs.

Variable Binding
⋅ A formula is closed if and only if all of its variables are bound. The
notion of binding is defined as follows.

binding: An occurrence of a variable a in a wff f is


bound in f iff a is within an occurrence of some wff of
the form ∀ay or ∃ay within f. Otherwise a is free.

⋅ For example, in the formulas below, x is bound but y is free.


∃xFy  |  ∀xxRy  |  ∀x(Fx ∧ Fy)  |  ∀x∃yFx ∧ Fy

⋅ Notice that both open and closed formulas count as wffs.


⋅ Finally, notice that ∀ and ∃ are duals.

∀af ⇔ ¬∃a¬f        ∃af ⇔ ¬∀a¬f

Semantics and Models for L


⋅ Next, we need a semantics for L, i.e. a compositional method for
determining the truth conditions for the wffs of L. This requires a
model for L.
⋅ A model M is an ordered pair ⟨D, I⟩, where:
⋅ D is a non-empty set (the domain).
⋅ I is an interpretation function which satisfies the following two
conditions:
1. if a is a constant, then I(a) ∈ D.
2. if P is an n-place predicate, then I(P) is an n-place relation
over D.
⋅ In short, the model provides an extension (i.e. meaning) of the
non-logical constants, viz. individual constants and predicate
constants.
⋅ Next, we require a recursive definition of truth for the wffs of L,
but since our vocabulary now includes both variables and quan-
tifiers, we need to say something about the interpretation of these
expressions.

Variables in L
⋅ A variable assignment g for a model M is a function from vari-
ables in the object language L to objects in the domain D. I.e. let G
be the set of variables, then:

g: G → D

⋅ Here is an example of a variable assignment g:

g = { x ↦ Pluto,
      y ↦ The Dugald Stewart Building,
      z ↦ Theresa May,
      ⋮ }

⋅ It is important to distinguish between the interpretation of con-
stants and variables: Constants are interpreted relative to the in-
terpretation function I in M whereas variables are interpreted
relative to some variable assignment g for M.
⋅ Let V M,g (a) be the valuation of a term a relative to M and g. If so,

V M,g (a) = I(a) if a is a constant
V M,g (a) = g(a) if a is a variable

Valuations and Truth-in-a-Model


⋅ More generally, a valuation function V for a model M and some
variable assignment g is a function which assigns to each wff ei-
ther 0 or 1 under the following constraints.

For any n-place predicate P and any terms a1 ...an:

V M,g (Pa1 ...an ) = 1 iff ⟨[a1 ]M,g , ..., [an ]M,g ⟩ ∈ I(P)

⋅ For any wffs f, y, and any variable a:

V M,g (¬f) = 1 iff V M,g (f) = 0
V M,g (f ∧ y) = 1 iff V M,g (f) = 1 and V M,g (y) = 1
V M,g (f ∨ y) = 1 iff V M,g (f) = 1 or V M,g (y) = 1
V M,g (f → y) = 1 iff V M,g (f) = 0 or V M,g (y) = 1
V M,g (∀af) = 1 iff for every i ∈ D, V M,g[i/a] (f) = 1
V M,g (∃af) = 1 iff for at least one i ∈ D, V M,g[i/a] (f) = 1

⋅ We now define truth-in-a-model as follows.

truth-in-a-model: f is true in a model M iff V M,g (f) = 1,


for each variable assignment g for M.

Validity and Logical Consequence


⋅ As usual, validity is defined as truth in all models and logical
consequence is defined in terms of truth preservation.

validity: f is valid in L iff f is true in every model M.

logical consequence: f is a logical consequence of a


set of wffs G in L iff for every model M and every variable
assignment g for M, if V M,g (g) = 1 for every g ∈ G, then
V M,g (f) = 1.
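⋅ For a finite model, this recursive definition can be mirrored directly
in code. A minimal illustrative Python sketch (the encoding of wffs,
and the toy model, are assumptions made up for the example):

# FOL valuation for a finite model. Wffs: ('F', 'x'), ('not', f),
# ('and', f, g), ('or', f, g), ('all', 'x', f), ('some', 'x', f).

def V(M, g, wff):
    D, I = M
    term = lambda t: I[t] if t in I else g[t]           # constants via I, variables via g
    op = wff[0]
    if op == 'not':  return 1 - V(M, g, wff[1])
    if op == 'and':  return min(V(M, g, wff[1]), V(M, g, wff[2]))
    if op == 'or':   return max(V(M, g, wff[1]), V(M, g, wff[2]))
    if op == 'all':  return int(all(V(M, {**g, wff[1]: i}, wff[2]) == 1 for i in D))
    if op == 'some': return int(any(V(M, {**g, wff[1]: i}, wff[2]) == 1 for i in D))
    # otherwise an atomic wff: a predicate applied to terms
    return int(tuple(term(t) for t in wff[1:]) in I[op])

# A toy model: D = {1, 2}; F is true of 1 only; the constant a denotes 1.
M = ({1, 2}, {'F': {(1,)}, 'a': 1})
print(V(M, {}, ('some', 'x', ('F', 'x'))))   # -> 1
print(V(M, {}, ('all', 'x', ('F', 'x'))))    # -> 0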

Existential Import
⋅ It is standard in logic to treat the quantifiers in fol as having exis-
tential import. That is, everything in the domain of quantification
is assumed to exist. This means that the meaning of e.g. the exis-
tential quantifier can be paraphrased as follows:





∃xFx  =  At least one existing individual x is such that x is F
      =  There exists at least one x such that x is F
      =  For some x in the domain of existing things D, x is F

⋅ So, existence statements in standard fol are formulated using


the existential quantifier rather than a predicate. For example, the
statement that unicorns exist is formulated as a claim about the
properties of the existing individuals/objects in the domain, i.e.

(1) 'Unicorns exist'  ⇝  ∃xUnicornx

⋅ And, similarly, the statement that there are no unicorns is also for-
mulated as a claim about the properties of the existing individuals
or objects in the domain, i.e.

(2) 'Unicorns do not exist'  ⇝  ¬∃xUnicornx

⋅ One immediate problem with assuming that the existential quanti-


fier has existential import is that it renders it impossible to capture
the meaning of sentences such as (3) since the meaning of (3) is
essentially equivalent to the seemingly contradictory (3a) and (3b).

(3) Some things do not exist.


a. ∼ At least one x (in the domain of existing things) does
not exist.
b. ∼ There is at least one x (in the domain of existing
things) such that x does not exist.

Relinquishing Existential Import


⋅ Rather than accepting existential import, one could instead as-
sume that the domain contains both existing and non-existing
objects/individuals (including impossible objects such as round
squares) and that the “existential” quantifier is simply an indicator
of proportion.

⋅ I.e. the statement 'something is F' is simply the statement that at
least one x in the domain (of existing and non-existing individuals/
objects) has the property F. (However, as Girle notes, this approach
is not the orthodox approach in logic/philosophy and is strongly
opposed by many. question: What kinds of objections might one raise
to the assumption that the existential quantifier does not have
existential import?)

⋅ When this approach is taken, existence is simply treated as a 1-
place predicate, analogous to other 1-place predicates, e.g.

(4) Some man runs.  ⇝  ∃xRunx
(5) Some man exists.  ⇝  ∃xExistx
⋅ One argument in favor of having existential quantifiers without
existential import is that it might seem perfectly reasonable to have
discussions/disagreements about what things exist.
⋅ For example, suppose one encounters an individual who believes
that there are round squares. It seems plausible to assume that one
might respond to such an individual by saying:
(6) The things you are talking about, i.e. round squares, do not
exist.
⋅ In any event, once modal statements are taken into consideration,
we are going to want to be able to express the possible existence of
various objects/individuals. For example, (7) seems true.
(7) There might be another planet in the universe very similar
to Venus.
⋅ If the domain is assumed to consist of both (actually) existing
and (actually) non-existing individuals/objects, capturing this is
straightforward. (To be fair, this can also be captured while
assuming that the quantifiers have existential import, for example by
assuming that the domain of existing things can vary from possible
world to possible world (more on this later). When quantifiers are
assumed to have existential import, but in this world-relative way,
this is referred to as actualist quantification.)

⋅ For example, by giving up existential import, one could assume
that the domain contains a planet very similar to Venus already
and then interpret (7) as stating that this planet might exist. So, a
paraphrase of (7) would be that given by (7a):

(7) a. ∼ For some planet very similar to Venus x, it is possible
that x exists.

⋅ So, we could translate (7) into quantified modal logic as follows:

(8) ∃x similar-to-Venus x ◇exist x

Weak Existential Import

⋅ Another option is to assume that the quantifiers have weak exis-
tential import. This is the assumption that the domain of quanti-
fication consists of both actually and merely possibly existing
objects/individuals (but not impossible objects such as round
squares). (This is referred to as possibilist quantification.)
⋅ In other words, on this view the quantifiers quantify over individ-
uals/objects that might exist. This means that a formula such as (9)
is translated as follows:

(9) ∃xFx  ⇝  For some possibly existing x, x is F.

⋅ However, one might worry that possibilist quantification, i.e. weak
existential import, fails to capture the actual meaning of the exis-
tential and universal quantifiers—at least as they are used in
English. (question: What precisely would the worry be here? This
worry would also apply to giving up existential import in the more
general sense described above.)

Domains Across Possible Worlds


⋅ When moving to quantified modal logic, the issues concerning
quantificational domains become increasingly difficult.

⋅ Setting aside the issue of existential import, in quantified modal


logic, a question arises as to the nature of the quantificational
domains.

⋅ Generally speaking, there are three main options:

⋅ Constant Domain Semantics: The domain of quantification is
identical for all possible worlds.
(figure: Constant Domains. One quantificational domain for all
possible worlds.)

⋅ Distinct Domain Semantics: The domain of quantification is
distinct for each possible world.
(figure: Distinct Domains. One distinct quantificational domain for
each possible world.)

⋅ Variable Domain Semantics: The domain of quantification
varies from world to world. So, some worlds may overlap in
domain with other worlds, and some worlds may have entirely
distinct domains of quantification.
(figure: Variable Domains. Multiple quantificational domains; some
overlapping, some distinct.)

⋅ For example, in the illustration above, the quantificational do-


mains of w1 , w6 , and w3 all overlap, whereas the quantificational
domain of w5 is distinct from the domains of every other world.

⋅ Furthermore, with a variable domain semantics, there are two


prominent versions:

⋅ One where for each accessible possible world, the domain expands
(an ascending chain of domains):
(figure: Variable Ascending Domain. The domain increases with each
accessible world.)

⋅ And one where for each accessible world, the domain contracts:
(figure: Variable Descending Domain. The domain decreases with each
accessible world.)

⋅ For the remainder of these notes, we will assume a constant do-


main semantics (since this makes things significantly easier).

From FOL to QML


⋅ When moving from fol to quantified modal logic (qml), we need
to make a number of changes to the logic (just as we did when we
extended pl to mpl).

1. We enrich our lexicon by adding the two monadic operators □
and ◇ to the lexicon.
2. We add the following syntactic/formation rule: If f is a wff,
then □f and ◇f are also formulas.
3. We add to our model a set of possible worlds W and an
accessibility relation R, where R ⊆ (W×W).
4. We change our interpretation function I and our valuation
function V to reflect that predicate extensions are world-relative,
so that the truth value of wffs can change depending on the
world of evaluation, i.e. (Specifically, the interpretation function
I maps predicates to extensions at possible worlds.)

A Valuation Function V for QML:

For any n-place predicate P and any terms a1 ...an:
V M,g,w (Pa1 ...an ) = 1 iff ⟨[a1 ]M,g , ..., [an ]M,g ⟩ ∈ I(P, w)

5. We give the standard semantic interpretation of □ and ◇, i.e.

V M,g (□f, w) = 1 iff for all w′ ∈ W, if wRw′, then V M,g (f, w′) = 1
V M,g (◇f, w) = 1 iff for some w′ ∈ W, wRw′ and V M,g (f, w′) = 1
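⋅ Putting the pieces together, here is a rough Python sketch of
constant-domain QML valuation for finite models, combining the FOL
evaluator above with world-relative predicate extensions. The encoding,
names, and toy model are illustrative assumptions; the example formulas
also preview the de dicto/de re contrast discussed below.

def Vq(M, g, wff, w):
    """Sketch of constant-domain QML valuation; M = (W, R, D, I)."""
    W, R, D, I = M
    term = lambda t: I[t] if t in I else g[t]
    op = wff[0]
    if op == 'not':  return 1 - Vq(M, g, wff[1], w)
    if op == 'and':  return min(Vq(M, g, wff[1], w), Vq(M, g, wff[2], w))
    if op == 'all':  return int(all(Vq(M, {**g, wff[1]: i}, wff[2], w) == 1 for i in D))
    if op == 'some': return int(any(Vq(M, {**g, wff[1]: i}, wff[2], w) == 1 for i in D))
    if op == 'box':  return int(all(Vq(M, g, wff[1], v) == 1 for v in W if (w, v) in R))
    if op == 'dia':  return int(any(Vq(M, g, wff[1], v) == 1 for v in W if (w, v) in R))
    return int(tuple(term(t) for t in wff[1:]) in I[(op, w)])   # atomic wff at world w

# Toy model: two mutually accessible worlds, constant domain {1, 2};
# F's extension varies by world.
M = ({'w', 'v'}, {('w', 'w'), ('w', 'v'), ('v', 'v'), ('v', 'w')}, {1, 2},
     {('F', 'w'): {(1,)}, ('F', 'v'): {(1,), (2,)}})
print(Vq(M, {}, ('box', ('some', 'x', ('F', 'x'))), 'w'))   # □∃xFx at w -> 1
print(Vq(M, {}, ('some', 'x', ('box', ('F', 'x'))), 'w'))   # ∃x□Fx at w -> 1 (witness: 1)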

The Barcan Formula and Its Converse


⋅ One interesting consequence of assuming a constant domain se-
mantics is an equivalence now commonly known as the Barcan These equivalences are named after
Formula (and the Converse Barcan Formula). the logician Ruth Barcan Marcus who
proved them in 1946/1947.
⋅ The Barcan formula is stated in (10) which is equivalent to (11):

(10) ∀x�Fx → �∀xFx


(11) �∃xFx → ∃x�Fx

⋅ If the Barcan Formula is assumed as an axiom, it implies that there are no merely possible individuals/objects. That is, if it is possible that an individual (with a certain property) exists, then that individual actually exists. This thesis is also known as actualism.
⋅ In other words, if the Barcan Formula is assumed as an axiom, it entails that the domain of the quantifiers cannot expand across accessible possible worlds.

exercise: Whether the Barcan Formula is valid remains controversial. If you think about what the BF means, can you think of a reason why it might be invalid?

⋅ The Converse Barcan Formula is stated in (12), which is equivalent to (13):

(12) □∀xFx → ∀x□Fx
(13) ∃x◇Fx → ◇∃xFx

⋅ If the Converse Barcan formula is assumed as an axiom in a modal


system, this implies that the domains cannot shrink across accessi-
ble possible worlds.
⋅ As should be fairly clear, if the quantificational domain is assumed
to be constant, both the BF and the CBF are going to be valid.
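⋅ To see concretely why constant domains matter here, the following hedged sketch encodes a variable-domain countermodel to the ◇/∃ form of the Barcan Formula; the particular worlds, domains, and extension of F are illustrative assumptions of mine.

```python
# An illustrative variable-domain countermodel to the Barcan Formula at w0,
# in its diamond/existential form: ◇∃xFx → ∃x◇Fx.  Sketch only.

W = {'w0', 'w1'}
R = {('w0', 'w1')}
dom = {'w0': {'a'}, 'w1': {'a', 'b'}}   # the domain expands from w0 to w1
F = {'w0': set(), 'w1': {'b'}}          # only the "new" individual b is ever F

def dia_exists_F(w):
    # ◇∃xFx at w: some accessible world v has an F in v's own domain
    return any(F[v] & dom[v] for v in W if (w, v) in R)

def exists_dia_F(w):
    # ∃x◇Fx at w: some individual already in w's domain is F at some accessible world
    return any(any(d in F[v] for v in W if (w, v) in R) for d in dom[w])

print(dia_exists_F('w0'))   # True:  w1 is accessible and b is F there
print(exists_dia_F('w0'))   # False: the only individual in dom(w0) is a, and a is never F
```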

Modalities De Dicto and De Re


⋅ It is standard to distinguish between to distinct types of modal-
ity, namely de dicto and de re. Roughly speaking, these can be ex-
plained as follows:

⋅ de dicto modality: When necessity or possibility is attributed to a proposition. (This could be considered a misleading way of putting things for various reasons, but for a first rough gloss this will do.)
⋅ de re modality: When an individual or object is said to possibly or necessarily have a certain property.

⋅ Formally, the distinction between de dicto and de re modality can


be captured in terms of scope. Again, simplifying somewhat, if
a modal is taking scope over a closed sentence, the modality is
de dicto, cf. (14). By contrast, if the modal is taking scope over an
open sentence (where the variable in that open sentence is bound
from the outside), the modality is de re, cf. (15).

(14) □∃xFx (de dicto modality)
(15) ∃x□Fx (de re modality)

The de dicto/de re distinction has been the subject of much discussion in philosophy. Quine, for example, who is generally skeptical of quantified modal logic, has raised a worry about quantifying into modal environments. According to Quine, it is borderline non-sensical to attribute properties to objects necessarily when this attribution fails to take into account how the object is described (which is precisely what can happen in de re attributions).

⋅ In (14), the sentence '∃xFx' is claimed to be necessary, i.e. the proposition expressed by that sentence is claimed to be true in every accessible world. By contrast, what is claimed in (15) is that for some individual x, x has the property F necessarily, i.e. in every accessible world x is F.
⋅ In other words, while (14) is true as long as in every accessible
world some individual x is F, (15) is only true if some specific
individual x is F in every accessible world.

The Interpretation of Quantified Modal Sentences


⋅ In order to explicate when (and why) quantified modal sentences
are true/false relative to various models, let’s consider a simple
setup:

⋅ Suppose we have only three possible worlds w1 , w2 , and w3 with a


constant and finite domain consisting of only three individuals, a,
b, and c.

figure: the extensions of F and ¬F for the individuals a, b, and c at each of the worlds w1, w2, and w3.

⋅ Moreover, we assume a simple reflexive T-model setup where


w1 has access to w2 , and w2 has access to w3 . There are no other
accessibility relations.
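⋅ For checking the exercises below, the model can be encoded as data. Note that the extension of F is my reading of the (partly garbled) diagram above, namely that exactly one of a, b, c is F at each world; treat it as an assumption and adjust it if your copy of the figure differs.

```python
# The toy model of this section, encoded as data (sketch).  The accessibility
# relation follows the text: reflexive, plus w1Rw2 and w2Rw3.  The extension of
# F is an assumption based on my reading of the diagram; adjust if it differs.

W = {'w1', 'w2', 'w3'}
D = {'a', 'b', 'c'}                                      # constant domain
R = {(w, w) for w in W} | {('w1', 'w2'), ('w2', 'w3')}
F = {'w1': {'a'}, 'w2': {'b'}, 'w3': {'c'}}              # assumed extensions of F

# For instance, ◇∃xFx at w1 asks whether some world accessible from w1 contains an F:
print(any(F[v] for v in W if ('w1', v) in R))
```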

Exercises
⋅ Let’s now consider some quantified modal statements and their
status relative to this model.

(16) V M, g (�∃xFx, w1 ) = ? (20) V M, g (�∀xFx, w1 ) = ?


(17) V M, g (�∃xFx, w1 ) = ? (21) V M, g (�∀xFx, w3 ) = ?
(18) V M, g (∃x�Fx, w3 ) = ? (22) V M, g (∀x�Fx, w2 ) = ?
(19) V M, g (∃x�Fx, w2 ) = ? (23) V M, g (∀x�Fx, w1 ) = ?

⋅ A couple of slightly harder ones:

(24) V M, g (�∀x(Fx → �∃y¬Fy), w1 ) = ?


(25) V M, g (�∀x(Fx → ∃y�Fy), w1 ) = ?

Logic 2: Modal Logics – Week 8


Individual Constants
⋅ In the previous lecture, we focused on the introduction of quan-
tifiers into modal logic and we discussed issues about existential
import and the nature of the quantificational domains. Now, we
turn our attention to singular terms—starting with individual
constants.
⋅ So, remember, in standard predicate logic, our lexicon includes a
set of individual constants a, b, c, etc. The meaning/extension of
an individual constant is determined by the assignment function
which maps these to individuals in the domain. So, one interpreta-
tion function I might provide the following mapping.

I:  a ↦ Pluto
    b ↦ The Dugald Stewart Building
    c ↦ Theresa May
    ⋮

⋅ Individual constants thus contrast with variables, which are not interpreted by I but by a variable assignment g instead.
⋅ So, essentially, we are going to think of individual constants as names, and we are going to more or less ignore variables for now. (Although, as I mentioned in class, one might think of variables as pronouns.)

Numerical Identity
⋅ To add expressive power to our logical system, we are going to add numerical identity—expressed by the identity symbol '='. So, we add the following syntactic rule: If a and b are singular terms, then 'a = b' is a formula.
⋅ With ‘=’ in our inventory, we can now express statements of nu-
merical identity, e.g.

(1) Cicero is Tully.


(2) Superman is Clark Kent.
(3) Hesperus is Hesperus.

⋅ In our formal language, identity statements look as follows:

(4) a=b
(5) c=c

⋅ Without numerical identity in predicate logic, it is impossible to


formulate a range of claims, e.g. those below.

(6) There is exactly one president


∃x(Presidentx ∧ ∀y(Presidenty → x = y))
(7) There are at most two frogs.
∀x∀y∀z((Frogx ∧ Frogy ∧ Frogz) → (x = y ∨ x = z ∨ y = z))
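⋅ Both formalizations can be sanity-checked by brute force over a small finite domain; the following sketch (my own illustration, with arbitrary domains and extensions) does just that.

```python
from itertools import product

# Brute-force sanity check of the two formalizations above, over a small domain
# and extensions of various sizes.  Purely illustrative.

def exactly_one(domain, P):
    # ∃x(Px ∧ ∀y(Py → x = y))
    return any(P(x) and all((not P(y)) or x == y for y in domain) for x in domain)

def at_most_two(domain, P):
    # ∀x∀y∀z((Px ∧ Py ∧ Pz) → (x = y ∨ x = z ∨ y = z))
    return all((not (P(x) and P(y) and P(z))) or x == y or x == z or y == z
               for x, y, z in product(domain, repeat=3))

domain = {1, 2, 3, 4}
for extension in [set(), {1}, {1, 2}, {1, 2, 3}]:
    P = lambda x, ext=extension: x in ext
    print(len(extension), exactly_one(domain, P), at_most_two(domain, P))
# "Exactly one" holds only for the one-element extension; "at most two" holds
# for extensions of size 0, 1 and 2 and fails for the three-element extension.
```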

Identity vs. Predication


⋅ In English, numerical identity is expressed using the word ‘is’ (or
possibly the more cumbersome ‘is identical to’), but it is important
to distinguish between different kinds of uses of ‘is’, namely the
‘is’ of predication vs. the ‘is’ of identity.

⋅ In the examples above, two individuals are asserted to be numer-


ically identical. However, in the examples below, this is not the
case.

(8) Frank is tired.


(9) Louise is a professor.

⋅ That is, in (8), Frank is not asserted to be identical to a property


and in (9), Louise is not asserted to be identical to a professor.
Rather, in these sentences, a property is predicated of Frank and
Louise.

⋅ It just so happens that in English, the word 'is' is used to predicate properties of individuals as well. And when the property is not expressed by a simple predicate of its own (e.g. 'laughs', 'snores', or 'tall'), one must use the somewhat cumbersome 'is a' instead (as in 'is a professor', 'is a master of disguise', or 'is a genius'). (Similarly, in (9) the word 'a' is not a quantifier expression either.)

Existential Import and Logical Principles


⋅ There are a range of logical inferences that are going to be valid
when it comes to individual constants, e.g.

Fa
∃xFx

⋅ However, the correct translation of these formulas will depend on


assumptions about the domain of quantification. For example,

⋅ If our semantics has actualist quantification, the conclusion of


this inference is that some existing individual is F.
⋅ If our semantics has possibilist quantification, the conclusion of
this inference is merely that some existing or possibly existing
individual is F.

⋅ In standard predicate logic, the following is also a logical truth:

(10) ∃x(x = a)

⋅ However, what the correct translation of this sentence is depends


on assumptions about the nature of the quantificational domain. If
the quantifiers have existential import (and if, say, a is a name for
Andy Murray), then the meaning of the sentence above is simply:

(11) Andy Murray exists.

⋅ To see that (10) is a logical truth, simply consider its negation.

(12) ¬∃x(x = a)

⋅ From (12), we can infer ∀x¬(x = a) and by universal instantiation,


we reach the absurd conclusion ¬(a = a).

⋅ So, since it seems a bit strange to think that (11) is a logical truth,
this might be a reason to consider giving up existential import. If
we give up existential import, we can maintain that (10) is a logical
truth without having to accept that it has the meaning in (11).

⋅ If our semantics has possibilist quantification, the right translation


of (10) is (13). Intuitively, accepting this as a logical truth is less
controversial.

(13) Andy Murray might exist.

⋅ Also, the negation of (13) would be (14), which presumably should be rejected.

(14) It’s impossible for Andy Murray to exist.

Descriptive Singular Terms


⋅ In standard predicate logic, individual constants and individual
variables (both singular terms) have no descriptive content. The
“meaning” of these expressions is simply an individual, the indi-
vidual assigned by the interpretation function (for constants) or
the variable assignment (for variables).

⋅ However, there is a class of expressions in natural language that


very much seem like singular terms but also appear to have sub-
stantial descriptive content, namely definite descriptions:

(15) The president of the United States


(16) The king of France
(17) The table

⋅ There are a number of different ways of trying to introduce defi-


nite descriptions into predicate logic. Girle considers the following
two options:

⋅ singular definite description operator


One option is to introduce a dedicated singular quantifier expression, ιx, which combines with formulas just as other quantifiers do, i.e.

ιxFx      ιx(Fx ∧ Gx)



⋅ However, syntactically, this kind of expression can only occur in the same positions as singular terms, i.e.
Frog(x)      Frog(a)      Frog(ιxFx)
⋅ The latter sentence is then supposed to be translated as follows:

(18) The unique individual who is F is a frog.

⋅ Girle does not say much more than this about this way of going
for definite descriptions and unfortunately this leaves open a
number of rather complicated questions.

⋅ the quantificational theory


Another option is Russell’s proposed analysis where definite
descriptions are given a contextual definition. So, as in the pre-
vious case, a definite description operator is introduced, viz. ιx, which combines with open formulas.
⋅ So, again, we can write formulas such as:
Frog(ιxFx)

⋅ However, such a sentence is really shorthand for something


more complex, namely the formula in (19):

(19) ∃x(Fx ∧ ∀y(Fy → y = x) ∧ Frogx)

⋅ Russell’s analysis of descriptions avoids many of the problems


for the previous approach and it has a range of additional ad-
vantages. However, Russell’s analysis also has a number of
problems and while it was once true that this is, as Girle puts it,
the ’most widely accepted’ view of definite descriptions, that’s
not obviously true anymore.
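⋅ Russell's contextual definition can also be checked mechanically over a small model. The sketch below is a toy illustration of my own; the domain and predicates are invented for the example.

```python
# A toy sketch of Russell's contextual definition of 'The F is G':
# ∃x(Fx ∧ ∀y(Fy → y = x) ∧ Gx).  The model below is invented for illustration.

def the_F_is_G(domain, F, G):
    return any(F(x)
               and all((not F(y)) or y == x for y in domain)    # x is the unique F
               and G(x)
               for x in domain)

domain = {'holmes', 'watson', 'moriarty'}
is_detective = lambda x: x == 'holmes'
is_violinist = lambda x: x == 'holmes'

print(the_F_is_G(domain, is_detective, is_violinist))     # True: a unique detective exists and is a violinist
print(the_F_is_G(domain, lambda x: False, is_violinist))  # False: an empty description makes the claim false
print(the_F_is_G(domain, lambda x: True, is_violinist))   # False: no unique satisfier of the description
```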

Existential Import: Options


⋅ We’ve already seen that when it comes to the quantifiers of pred-
icate logic, a choice has to be made as to whether these have exis-
tential import or not – i.e. does the domain consist of only existing
individuals or does it include non-existent individuals. We are
faced with a similar choice for the individual constants.
⋅ In standard predicate logic, there are no constants in the language without an extension (i.e. without a referent). This means that the existential commitments incurred by individual constants depend on antecedent choices about the nature of the domain.
⋅ For example, if we assume that the domain consists of only exist-
ing individuals (actualist quantification), the individual constants
will then have existential import.
⋅ And if we assume that the domain consists of both existing and merely possibly existing individuals, the individual constants will not have existential import.

⋅ There is, however, an intermediate option: Let the domain consist of both existing and non-existing individuals, but assume that the quantifiers only range over a subset of this domain, namely the subset of existing individuals. This is referred to as Free Logic.

Free Logic
⋅ In Free Logic, the quantifiers have existential import, but the indi-
vidual constants do not.

⋅ So, an inference from Fa to ∃xFx is therefore invalid.

⋅ However, the formula below is a logical truth:

(∃x(x = a) ∧ Fa) → ∃xFx

⋅ So, in free logic there is no problem with statements of non-


existence involving names (let a be name of Vulcan):

(20) Vulcan does not exist.


(21) ¬∃x(x = a)

⋅ In other words, in Free Logic, names might fail to refer.

⋅ We can visualize the nature of the domain in Free Logic as follows:

figure: the domain in free logic — the constants a, b, c, ... can pick out anything in the domain, while ∃ and ∀ range only over a subset of it (the existing individuals).
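⋅ This picture can be made concrete with a small sketch. Everything in it (the outer domain, the set of existing things, the constant a naming Vulcan, and the extension of F) is an illustrative assumption, but it shows why Fa no longer entails ∃xFx and why '¬∃x(x = a)' can come out true.

```python
# A sketch of the free-logic picture: constants may pick out things outside the
# range of the quantifiers, which covers only what exists.  All particular
# choices below (domains, the name a, the extension of F) are illustrative.

outer_domain = {'pluto', 'vulcan'}    # everything a constant can denote
exists       = {'pluto'}              # the range of ∃ and ∀
I            = {'a': 'vulcan'}        # the constant a names Vulcan
F            = {'vulcan'}             # extension of F

Fa       = I['a'] in F                            # Fa
some_F   = any(d in F for d in exists)            # ∃xFx
a_exists = any(d == I['a'] for d in exists)       # ∃x(x = a)

print(Fa, some_F, a_exists)
# Fa is true while ∃xFx is false, so the inference from Fa to ∃xFx is blocked,
# and ¬∃x(x = a) comes out true, as desired for 'Vulcan does not exist'.
```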

⋅ Free Logic is particularly well suited to deal with languages con-


taining names for fictional entities, e.g. ‘Sherlock Holmes’, ‘Bat-
man’, etc.

⋅ For example, one can easily represent the truth or falsity of sen-
tences such as the following:

(22) Sherlock Holmes lives on 221B Baker Street.


(23) Batman is Bruce Wayne
(24) Superman is faster than every airplane.

⋅ However, Free Logic also has a range of somewhat strange con-


sequences: For example, if definite descriptions are treated as
Russell suggested, then while e.g. the sentence in (24) is true, the
corresponding sentence in (25) is going to be false.

(25) The famous superhero born on Krypton is faster than an


airplane.

⋅ This seems somewhat odd given that the description in (25) is


supposed to pick out Superman.
⋅ Girle suggests that one way to avoid this problem would be to introduce a new quantifier with the meaning 'exactly one' that ranges over the entire domain and does not have existential import. This would then be the quantifier that is used to capture the meaning of 'the F is G'.
⋅ But it’s not clear why we should be so worried about definite
descriptions failing to pick out the right things if we are not also
concerned about statements such as those below.

(26) Superman defeated at least one other superhero in a recent


movie.
(27) Every superhero wears a special suit.

Two New Quantifiers


⋅ Yet another option is to introduce two new quantifiers, namely S (at least one) and P (every), where neither of these has existential import.
⋅ If we did this, we would be able to capture the meaning of (28)
below (which we were not able to previously).

(28) At least one thing does not exist.


(29) Sx∀y(x ≠ y)

⋅ We can then use these new quantifiers to rescue the Russellian


theory of descriptions, because we can simply assume that these
quantifiers are used for descriptions (when needed).
⋅ Moreover, we can now capture the meaning of the sentences in
(26) and (27).
⋅ One consequence of this view, however, is that natural language
quantifiers are genuinely ambiguous. On one interpretation, they
have existential import and on the other interpretation they do
not. But it is not clear that there is any compelling evidence that
natural language quantifiers are ambiguous in this way.
⋅ Again, one might be tempted to think that we should either keep
existential import or simply give it up so that the standard quanti-
fiers range over even non-existent (but possibly existing) individu-
als — i.e. possibilism.

Individual Constants in Modal Environments


⋅ Let’s now consider how to evaluate formulas in MPL involving
individual constants, e.g. (30) and (31)

(30) �Fa
(31) �Fa

⋅ Consider (30). Whether this formula is true depends on (a) our assumptions about the domain, and (b) our assumptions about domains across worlds.
exercise: Work out the truth conditions for (31) given these options.
(AC) Actualist Quantification + Constant Domains
(30) is going to be true only if there is an accessible world v
such that a (who exists) is F at v.

(PC) Possibilist Quantification + Constant Domains


(30) is going to be true only if there is an accessible world v
such that a (who might exist) is F at v.

(AV) Actualist Quantification with Variable Domains


(30) is going to be true only if there is an accessible world v
such that a exists at v and a is F at v.

(PV) Possibilist Quantification with Variable Domains


(30) is going to be true only if there is an accessible world v
such that a is in the domain of v and a is F at v.

⋅ (AC): The assumptions that yield the least complexity are actualist quantification and constant domains, i.e. everything in the domain exists and the quantifiers range over the same domain in every world.
This makes life easier in many ways, but it also means that it be-
comes difficult to capture the truth of certain sentences, e.g. (32)
below.

(32) Vulcan might exist.

⋅ (PC): Moving to possibilist quantification with constant domains


fixes this problem. The sentences above are easily captured if we
assume that the domain includes possibly existing objects.
⋅ But once we accept possibilist quantification, we are assuming that
the quantifiers range over merely possibly existing individuals. As
discussed earlier, this might seem dubious.
⋅ (AV): Assuming actualist quantification but variable domains
might seem the most natural option. This way we retain existential
import (our quantifiers only quantify over existing individuals),
but we allow for the possibility that certain objects/individuals
(that do not exist in the actual world) may exist in other worlds.

⋅ However, our semantics now gets vastly more complicated because


in order to check for validity, we now need to consider not only
every possible assignment of extensions to our terms, but also
every possible variation in the domains.

⋅ (PV): This is the most expressively powerful but also the most complex option. Again, with this option, we would be able to capture sentences such as (32) above. However, it is not entirely clear why, for every world, we need the domain to include both existing and non-existing objects.

Names and Rigid Designation


⋅ It has been implicitly assumed in all of the above that names are
rigid designators. That is, a name a refers to the same individual in
every possible world (assuming the object exists).

⋅ Using natural language as a guide, this seems like a sensible as-


sumption about names, cf. Kripke’s many arguments in Naming
and Necessity.

⋅ If it is assumed that names are rigid, then it follows that identity


statements are necessary. That is, sentences such as those below
are necessary truths.

(33) Hesperus is Phosphorus.


(34) Agatha Christie is Mary Westmacott.

⋅ The necessity of identity is a substantial philosophical assumption


and one might have reasons to think that identity is contingent.
However, this would require that names are not treated as rigid
designators, i.e. it would need to be possible for a constant a to
have different extensions relative to different possible worlds.

⋅ The most natural way to do this is to treat “constants” similar


to the way that predicates are treated, namely as having world-
variable extensions. So, instead of thinking of the interpretation
function I simply as mapping constants to individuals in the do-
main, we should think of I as mapping pairs of constants and
worlds to extensions (where this extension might change depend-
ing on the world).
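⋅ Here is a toy sketch of that world-relative treatment of constants. The names and assignments are invented; the point is only that, once I maps constant-world pairs to individuals, an identity statement can be true at one world and false at another.

```python
# A toy world-relative interpretation of "constants" (sketch): I maps pairs of a
# constant and a world to an individual, so identities can vary across worlds.
# The names and assignments below are invented for illustration.

I = {
    ('hesperus',   'w1'): 'venus',  ('hesperus',   'w2'): 'venus',
    ('phosphorus', 'w1'): 'venus',  ('phosphorus', 'w2'): 'mars',
}

def identity_true(c1, c2, w):
    return I[(c1, w)] == I[(c2, w)]

print(identity_true('hesperus', 'phosphorus', 'w1'))   # True at w1
print(identity_true('hesperus', 'phosphorus', 'w2'))   # False at w2: identity comes out contingent
```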

⋅ Obviously, assuming that names are non-rigid greatly complicates


the logic, but whether names are rigid or not is an empirical issue
that should not be decided based on convenience.

⋅ That said, the arguments for the assumption that names are rigid are quite compelling and have convinced most researchers in philosophy of language, logic, and linguistics.

Exercises
⋅ Construct countermodels for the following formulas:

1. �T (�Fa ∧ �Ga) → �(Fa ∧ Ga)


2. �S4 (�∃xFx ∧ �∃yGy) → �(∃xGx ∧ ∃yFy)
3. �T (�Fa → ∃y�Fy) ∧ �(∃y�Fy → Fa)

Logic 2: Modal Logics – Week 8


Epistemic and Doxastic Logics
⋅ On the epistemic interpretation of modal logic, ‘�f’ is interpreted
as ‘it is known that f’.

⋅ On a doxastic interpretation of modal logic, ‘�f’ is interpreted as ‘it


is believed that f’.

⋅ In other words, these are the logics for knowledge and belief.

Agents and Operators


⋅ In epistemic and doxastic logic, it is customary to use subscripts on the modal operators to indicate the agent in question. Moreover, standardly, the □ is written as K in epistemic logic and B in doxastic logic.

⋅ So, the sentences below would be formalized as follows:

(1) Betty knows that f. � K b f


(2) Andy believes that f. � B a f

⋅ If we are considering a multi-agent epistemic or doxastic logic, we


will need a different modal operator for each agent. For example,
if we’re considering a logic with four agents, namely Andy, Betty,
Carly, and Deryn, we will need K a , K b , K c , and K d .

Duals
⋅ As in standard modal logics, the □ has a dual, namely ◇.

⋅ In epistemic logic, the dual of K x is P x and in doxastic logic the


dual of B x is C x . These are defined as follows:

P x =def ¬K x ¬
C x =def ¬B x ¬

⋅ So, a formula such as (3) below, would be the translation of e.g.


(3a), (3b), or (3c). Same for (4) and (4a), (4b), and (4c).

(3) Pa f
a. It’s not the case that Andy knows that not-f.
b. Andy doesn’t know that not-f.
c. f is compatible with Andy’s knowledge.

(4) Ca f
a. It’s not the case that Andy believes that not-f.
b. Andy doesn’t believe that not-f.
c. f is compatible with Andy’s beliefs.

Epistemic Logic: S4
⋅ The main difference between single modality logics (e.g. alethic
modality logic) and multi-modal logics (i.e. a multiple agent logic)
is that the models will contain an accessibility relation for each
agent.

⋅ In epistemic logic, we use E for the accessibility relation and,


again, this will be indexed to an agent: E a , Eb , Ec , ...

Tree Rules for Epistemic Logic S4


⋅ The standard tree rules for propositional logic, PTr, remain un-
changed, but we add the following rules for S4 epistemic logic:

KPN-rules:
¬K_x φ (w)  ⟹  P_x ¬φ (w)    (KPN)
¬P_x φ (w)  ⟹  K_x ¬φ (w)    (KPN)
K_x ¬φ (w)  ⟹  ¬P_x φ (w)    (KPN)
P_x ¬φ (w)  ⟹  ¬K_x φ (w)    (KPN)

PR:  from P_x φ (w), add wE_x v and φ (v), where v is new to the path.    (PR)
KR:  from K_x φ (w) and wE_x v, add φ (v), for any v.    (KR)

KT:  from K_x φ (w), add φ (w).    (KT)
KKR: from K_x φ (w) and wE_x v, add K_x φ (v), for any v.    (KKR)

⋅ The KPN-rules are the correlate in epistemic logic of the standard


modal negation rules (MN-rules).

⋅ PR says that if x doesn’t know that not-f, then there is a world


accessible (i.e. a world compatible with x’s knowledge) such that f
is true there.

⋅ Similarly, KR says that for f to be known by x, f must be true


in every accessible world (i.e. every world compatible with x’s
knowledge).

⋅ KT is the correlate of the T-rule in standard modal logic. It simply says that for φ to be known by x, φ must be true. This is also sometimes referred to as factivity or veridicality. (So, instead of this rule, we could just have (Refl).)
⋅ KKR is the specific S4 rule. This guarantees that if a knows that φ, then a knows that he knows that φ. (So, instead of this rule, we could just add (Trans). To see this, just consider what a model would have to look like to make K_x φ true at w, but false at v where wE_x v. This is not possible if the accessibility relation is transitive.)

Multi-Agent Rules
⋅ The rules above are all single agent rules, but we also add the following multi-agent rule, referred to as the Transmission of Knowledge Rule.

TrKR-rule:  from K_x K_y φ (w), add K_x φ (w).    (TrKR)

in class exercises:
Using semantic trees, show that the formulas below
are valid:

1. �S4 K a ¬K a (p → p) → K a (p → K a p)

2. �S4 (K a (p → q) ∧ K a p) → ¬P a K a ¬q

3. �S4 (K a K b (p → q) ∧ K a ¬q) → K a (p ∨ r)

4. �S4 P a P a f → P a f

⋅ From an axiomatic point of view, the S4 version of epistemic logic


that we are currently considering includes the following axioms:

(K1) K x (f → y) → (K x f → K x y) Also known as Distribution.



(K2) K x f → f Also known as Factivity or Veridicality.

(K3) K x f → K x K x f Also known as Positive Introspection or


the KK-Principle.
(K4) K x K y f → K x f Also known as Transmissibility of Knowl-
edge
⋅ Of these axioms, only (K2) is generally considered uncontroversial.
We will discuss the consequences of these axioms in turn.
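⋅ Before turning to those consequences, the semantic fact behind (K3) can be checked by brute force on a small frame. The following sketch is my own illustration, not part of Girle's presentation: it searches every transitive accessibility relation and every valuation of a single proposition p over three worlds and finds no counterexample to K_a p → K_a K_a p.

```python
from itertools import product

# Brute-force sketch: over a three-world frame, if the accessibility relation is
# transitive, then K_a p -> K_a K_a p has no counterexample, matching the earlier
# remark that the KKR rule corresponds to (Trans).

W = ['w1', 'w2', 'w3']
pairs = [(u, v) for u in W for v in W]

def K(E, val, w):            # K_a p is true at w iff p holds at every E_a-successor of w
    return all(val[v] for v in W if (w, v) in E)

counterexamples = 0
for bits in product([False, True], repeat=len(pairs)):
    E = {pair for pair, b in zip(pairs, bits) if b}
    transitive = all((u, x) in E for u in W for v in W for x in W
                     if (u, v) in E and (v, x) in E)
    if not transitive:
        continue
    for vals in product([False, True], repeat=len(W)):
        val = dict(zip(W, vals))
        for w in W:
            if K(E, val, w) and not all(K(E, val, v) for v in W if (w, v) in E):
                counterexamples += 1

print(counterexamples)       # 0: no transitive counterexample to K_a p -> K_a K_a p
```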

(The Problem of) Omniscience


⋅ Subtle questions arise when using modal logics to model knowledge as regards what exactly we are modelling. For example, are we trying to model the structure of knowledge in an ideal (and non-finite) knower? Or are we trying to model the structure of human knowledge?
⋅ If the latter, how much should we assume that a human agent
knows? Does the agent know the laws of logic? The laws of epis-
temic logic? Any logic?
⋅ Epistemic logics generally imply a certain kind of omniscience on
the part of its agents, but to understand which kinds, let’s consider
a couple of different definitions:

⋅ Logical Omniscience
⋅ Deductive Omniscience
⋅ Factual Omniscience

⋅ Logical Omniscience: Following Girle, let’s distinguish between


strong and weak logical omniscience.

⋅ If an agent is represented as automatically knowing every log-


ical truth (including modal logical truths), we’ll say that the
agent is strongly logically omniscient (SLOT).
⋅ If an agent is represented as automatically knowing every logi- As Girle points out, Descartes seemed
cal truth of propositional logic or first order logic, we’ll say that to be an advocate of WLOT.

the agent is weakly logically omniscient (WLOT).

⋅ Deductive Omniscience
⋅ If an agent is represented as knowing the logical consequences
of every known proposition, we’ll say that the agent is deduc-
tively omniscient (DOT).
⋅ Factual Omniscience

⋅ If an agent is represented as knowing every true proposition, i.e. if for every proposition p the agent knows whether p or ¬p, we'll say that the agent is factually omniscient (FOT).

⋅ Girle considers a range of non-normal modal logics, but here we’ll


focus on the normal modal logics T, S4, and S5.

Logical Omniscience
⋅ In standard axiomatic approaches to modal logic, there is a rule of necessitation. This rule says the following (here D is a variable ranging over the normal modal logics):

⊢_D φ  ⇒  ⊢_D □φ

⋅ That is, if there is a proof (from the empty set of premises) of a formula φ, then there is a proof of □φ.
⋅ From a semantic point of view, this makes perfect sense. If there is a proof of φ from the empty set of premises, this means that φ is a logical truth, i.e. φ cannot be false. If φ is a logical truth, then φ holds in every possible world. Hence, □φ is true. (What we should say here is really this: If the logic is sound, which all the normal modal logics are, it follows that ⊢_D φ ⇒ ⊨_D φ.)
⋅ So, we also have:

⊨_D φ  ⇒  ⊨_D □φ

⋅ Consider, for example, the rule of necessity introduction (NI) for


semantic trees. This is a general rule applying to T, S4, and S5.
Since K x is simply the epistemic correlate of �, and since it seems
natural to assume that anything that is provable from the empty
set is necessary, there is a corresponding introduction rule for K x .

⋅ As a result, we get the following in T, S4, and S5. We refer to this as epistemic necessitation.

⊢_D φ  ⇒  ⊢_D K_x φ

⋅ In other words, in all of the normal modal logics, agents are going to be strongly logically omniscient.

⋅ One way to avoid this result is to give up necessitation in favour of something weaker, e.g. weak necessitation:

⊢_PL φ  ⇒  ⊢_D □φ

⋅ This rule says that if a formula can be proven from the empty set
in PL, then it is necessary. If weak necessitation is adopted, then
when transposing it into epistemic logic, it would have the more
limited consequence that only every logical truth in PL is known.

⋅ However, it would also mean that there are no modal logical truths
which seems bizarre.

Deductive Omniscience
⋅ The weakest of the normal modal logics is K and every other normal modal logic is an extension of K.
⋅ In K (and hence every other normal modal logic) the following is an axiom (it is therefore known as the K-axiom).

□(φ → ψ) → (□φ → □ψ)

⋅ In epistemic logics that are simply extensions of one of the normal


modal logics, we therefore get the corresponding (K1) axiom.

(K1) K x (f → y) → (K x f → K x y)

⋅ Now, this axiom, the K-axiom (or the distribution axiom), is not itself going to lead to deductive omniscience. Deductive omniscience requires the following theorem.

⊨_D (φ → ψ) → (K_x φ → K_x ψ)    (Deductive Omniscience)

⋅ But all we can get from the K-axiom on its own is:

⊨_D K_x (φ → ψ) → (K_x φ → K_x ψ)

⋅ And this doesn’t seem obviously unreasonable.

⋅ But when distribution is combined with strong omniscience, we


get deductive omniscience as a result.

Proof. Suppose that φ ⊨_D ψ. If so, then given logical omniscience it follows that ⊨_D K_x (φ → ψ). And then using the distribution axiom, it follows that ⊨_D K_x φ → K_x ψ. So, if φ ⊨_D ψ, it follows that K_x φ ⊨_D K_x ψ. QED

⋅ In conclusion, in T, S4, and S5, agents are both strongly omniscient


and deductively omniscient.

⋅ Let’s now consider some of the consequences of the individual


logical systems – starting with S4.

Knowing That You Know


⋅ In S4, agents are strongly omniscient and deductively omniscient.
However, remember, in S4, we also have the following axiom:

(K3) K x f → K x K x f Also known as Positive Introspection or


the KK-Principle.
⋅ So, in S4, for any proposition f, if an agent knows that f, the agent
will know that she knows that f.

⋅ The S4-axiom is controversial in epistemic logic and there are many sophisticated arguments purporting to show that it fails. (One prominent example is Timothy Williamson's arguments in 'Knowledge and Its Limits' (Oxford, 2000).)
⋅ Here is what might seem a very simple argument against the S4-
axiom:

If the S4-axiom governs knowledge, then everyone


should not only know that it is true, they should also
know that they know that it is true, know that they
know that they know that it is true, and so on.
So, suppose that it is an axiom governing the nature of
knowledge. If so, we know (by necessitation) that it’s
true. Moreover, we should know that we know that it
is true. But since certain people deny that it is true, it
is clear that they do not know that they know that it is
true. So, it must be false.

⋅ This argument basically says that if the S4 axiom holds, it should


not be possible to deny that it holds.

⋅ Yet, one might think that the immediate problem that this argu-
ment reveals is not with the S4-axiom itself, but rather necessita-
tion.

Limits on Knowledge in S4
⋅ In S4 there are some limits on agents' knowledge. For example, in S4, it is possible for an agent to be mistaken about what she knows.

⋅ That is, the following are consistent formulas:

(5) ¬f ∧ ¬K a ¬K a f
(6) ¬K a f ∧ ¬K a ¬K a f

⋅ In other words, the S4-agent can be ignorant about what she


knows and what she doesn’t know.

Knowing That You Don’t Know


⋅ The axiom characterizing S5 is the following:

◇φ → □◇φ

⋅ So, for an S5 epistemic logic, we get the corresponding axiom:

(K5) P x f → K x P x f

⋅ The equivalent of which is:

¬K x ¬f → K x ¬K x ¬f

⋅ From this axiom, the following are going to be theorems:

(T1) ¬f → K x ¬K x f The Platonic Principle

(T2) ¬K x f → K x ¬K x f Negative Introspection

Re: (T1). Let f be ¬p. So, suppose ¬¬p is true. That is, p is
true. If so, ¬p is false. If ¬p is false, it cannot be known. So, it
follows that ¬K a ¬p. But then by (K5), it follows that K a ¬K a ¬p,
i.e. K a ¬K a f.

⋅ In other words, in S5 epistemic logic, every agent knows exactly


how ignorant they are. For any proposition f, if f is false, they
know that they don’t know that f is true.

⋅ This does not quite amount to factual omniscience, since it is pos-


sible for an agent to know that they don’t know that f, but also to
know that they don’t know that ¬f.

⋅ But the S5 agent is nevertheless an incredibly epistemically powerful agent: she is logically omniscient, deductively omniscient, and knows, for any false proposition, that she does not know that proposition. (Girle writes that there is only one possible S5 knower, namely an omniscient god. But one might think that an omniscient god would also be factually omniscient.)

Knowledge in T
⋅ In the modal logic T, we have the axioms (K1), (K2), and epistemic
necessitation. So, agents in T are strongly omniscient and deduc-
tively omniscient.

⋅ However, since neither the S4-axiom nor the S5-axiom holds in T,


the T-agent is much more restricted in her knowledge. For exam-
ple, it is possible for the T-agent to know that f without knowing
that she knows that f (failure of the S4-axiom). It’s also possible
for f to be false without the agent knowing that she doesn’t know
that f (failure of 5-axiom).

⋅ But despite the lack of the S4-axiom in T, epistemic necessitation entails that every logical truth is known, and since these knowledge claims are themselves theorems of T, it follows that knowledge of logical truths is itself known.

⋅ In other words, even in T, the KK-thesis holds in a restricted way:

– Given necessitation:

⊢_T φ  ⇒  ⊢_T K_x φ

– So, it follows that if φ is a theorem in T, then ⊢_T K_x K_x φ


Logic 2: Modal Logics – Week 10


Natural Language Conditionals
⋅ The analysis of natural language conditionals as material implications is unsatisfactory for a variety of reasons. Specifically, this analysis validates a number of inferences that seem dubious.

⋅ negated conditional to conjunction

¬(φ → ψ) → (φ ∧ ¬ψ)
(Girle refers to the converse of this principle, namely (φ ∧ ¬ψ) → ¬(φ → ψ), as Jackson's uncontested principle.)

⋅ The formula above is valid in standard propositional logic. So, if


material implication is the right analysis of conditionals in English,
this means that the following inference should be valid:

(1) It’s not true that if there will be a minor earthquake in


Edinburgh tomorrow, the Dugald Stewart Building will
collapse.
Therefore, there will be a minor earthquake in Edinburgh to-
morrow and the Dugald Stewart Building will not collapse.

⋅ antecedent strengthening (this is also sometimes referred to as augmentation and, confusingly, weakening)

(φ → ψ) → ((φ ∧ χ) → ψ)

⋅ This formula is also valid in standard PL, so this means that the
following inference should be valid:

(2) If Jack flips the switch, the light comes on.


Therefore, if Jack flips the switch and turns off the electricity,
the light comes on.

⋅ contraposition

(f → y) → (¬y → ¬f)

⋅ From the validity of contraposition, it follows that the inference


below should be valid:

(3) If Herman moves in, Ernie will not move out.
Therefore, if Ernie moves out, then Herman will not move in.
(To see why this inference is problematic, just suppose Ernie really wants to live in the same house as Herman, but that Herman has no specific desire to live in the same house as Ernie — Herman just wants to live in that particular house.)

⋅ paradoxes of material implication

f → (y → f)
¬f → (f → y)

⋅ The problem commonly referred to as the ‘paradoxes of material


implication’ is essentially that when conditionals are analyzed in
terms of material implication, a conditional will be true as long
as the antecedent is false or the consequent is true. This produces
some very counterintuitive results as there needs to be no relevant
relation between the content of the antecedent and consequent, e.g.

(4) If it’s 3:10pm, then Goldbach’s conjecture is false.


(5) If everyone got 100 on the midterm exam, then there is no
sugar in Diet Coke.
(6) If there are exactly two people in this room, then there are exactly 100 people.

⋅ Ideally, an analysis of natural language conditionals should avoid


licensing any of these inferences while still validating various
other inferences that are intuitively valid, e.g. modus ponens and
modus tollens.

f→y f→y
f ¬y
MP MT
y ¬f

⋅ Some people have alleged that there are counterexamples to MP One famous alleged counterexample to
and MT, but they are generally considered valid for natural lan- MP is McGee (1985) and a more recent
alleged counterexample to MT is Yalcin
guage conditionals: (2012).

(7) If Louise passed the exam, then John passed too. Louise
passed. Therefore, John passed.
(8) If Louise passed the exam, then John passed too. John
failed. Therefore, Louise failed.

⋅ Conditional logics were initially conceived as an attempt to avoid these problems (or at least some of them).

Strict Implication
⋅ C. I. Lewis proposed that natural language conditionals be interpreted by use of a new connective, ⥽, which Lewis defined as follows:

φ ⥽ ψ  =def  ¬◇(φ ∧ ¬ψ)



⋅ On this analysis, a natural language conditional is equivalent to a necessary material implication, viz. (φ ⥽ ψ) ≡ □(φ → ψ). For this reason, this type of analysis is standardly referred to as the strict implication analysis.

Proof. Assume ¬◇(φ ∧ ¬ψ). Since ◇ is the dual of □, we get □¬(φ ∧ ¬ψ). By De Morgan, □¬(φ ∧ ¬ψ) is equivalent to □(¬φ ∨ ¬¬ψ), which is equivalent to □(¬φ ∨ ψ). So, by the definition of material implication, we get □(φ → ψ).

⋅ Instead of considering the merits of Lewis’ axiomatic systems


S1-S5, we will simply consider the merits of a strict implication
analysis within normal modal logics (the problems that arise are
essentially the same).

Avoiding the Paradoxes of Material Implication


⋅ The main advantage of the strict implication analysis is that it
helps resolve the paradoxes of material implication. Consider the
strict implication versions of the paradoxes of material implication:

A1. φ ⥽ (ψ ⥽ φ)
A2. ¬φ ⥽ (φ ⥽ ψ)

⋅ (A1) is equivalent to □(φ → □(ψ → φ)) and (A2) is equivalent to □(¬φ → □(φ → ψ)). Both of these are invalid in all of the normal modal logics. (exercise: Give a countermodel in the weakest system possible.)
⋅ So, the strict implication analysis of conditionals avoids the predic-
tions in (4)–(6).

Avoiding Negated Conditional to Conjunction


⋅ The strict implication analysis also avoids the problem with infer-
ences from negated conditionals to conjunctions. (A3) below is the
strict implication version of that inference.

A3. ¬(φ ⥽ ψ) ⥽ (φ ∧ ¬ψ)

⋅ However, (A3) is equivalent to □(¬□(φ → ψ) → (φ ∧ ¬ψ)), which is invalid in all the normal modal logics. (exercise: Give a countermodel in the weakest system possible.)

Antecedent Strengthening and Contraposition


⋅ As regards antecedent strengthening and contraposition, the strict implication analysis is unfortunately of no help, as the formulas in A4. and A5. are both valid in every normal modal logic: (exercise: Give countermodel in the weakest system possible.)

A4. (φ ⥽ ψ) ⥽ ((φ ∧ χ) ⥽ ψ)
A5. (φ ⥽ ψ) ⥽ (¬ψ ⥽ ¬φ)

⋅ This means that the strict implication analysis of conditionals is


still going to make the predictions in (2) and (3).

The Paradoxes of Material Implication Redux


⋅ Moreover, the paradoxes of material implication do not disappear entirely on the strict implication analysis. The strict implication analysis still validates formulas of the following kind (here ⊥ is a necessary falsehood and ⊤ is a necessary truth):

⊥ ⥽ φ
φ ⥽ ⊤

⋅ In other words, on the strict implication analysis, we predict con-


ditionals with necessarily false antecedents or necessarily true
consequents are always true, e.g.

(9) If every even number greater than 2 is not the sum of two
primes, then no one would be surprised.
(10) If every even number greater than 2 is not the sum of two
primes, then every even number greater than 2 is the sum
of two primes.

Conditional Logic: The C Logics


⋅ The conditional logic C is essentially an attempt at a formalization
of an analysis of conditionals proposed by Stalnaker (1968).

⋅ The general idea here is that a natural language conditional ex-


presses a special type of necessity claim. So, we are going to need
not only a modal semantics (that is, a model theory that includes
possible worlds and accessibility relations), but also a new connec-
tive to represent natural language conditionals.

⋅ So, we introduce a new binary connective, ‘>’, governed by the


following syntactic rule.

If f and y are formulas, then so is (f > y).

⋅ The conceptual idea behind Stalnaker’s analysis is that for a nat-


ural language conditional, if p then q, to be true, one needs only consider what is the case in possible worlds where p is true. If q is true in those worlds, the conditional is true. (In contrast to the strict implication analysis, which says that if the conditional is true, then in every world either p is false or q is true.)

⋅ So, one way to implement this idea formally is to think of 'φ > ψ' as expressing a restricted necessity claim, viz. in every φ-world, ψ is true. We can express this idea in conditional logics by using a variation on the interpretation of □ and ◇ where these are indexed to a formula (viz. the antecedent of the conditional):

(11) (φ > ψ) ↝ □_φ ψ

⋅ That is, the □-formula in (11) is to be interpreted as the statement that ψ holds in every accessible φ-world. In other words, it is a necessity claim, but about a restricted set of possible worlds.

⋅ A similar variation of ◇ can then be defined in terms of the □ in (11).

(12) ◇_φ ψ  =def  ¬□_φ ¬ψ

⋅ Hence, the formula 'φ > ψ' is to be read: In all accessible φ-worlds, ψ is true. (Again, this might seem very similar to the strict implication analysis, but there is one key difference, namely that on the strict analysis, accessible worlds where the antecedent is false are still considered relevant to the evaluation of the conditional. This is not the case given this alternative analysis.)
⋅ Since we are treating 'φ > ψ' as '□_φ ψ', '¬(φ > ψ)' is going to be equivalent to '¬□_φ ψ', which given the definition of ◇ above is equivalent to ◇_φ ¬ψ.

⋅ As is usual in modal logic, ◇ is a world-generator and □ a world-filler.

⋅ However, since in the current setup accessibility relations are in-


dexed to formulas, we are going to use A to represent these (rather
than the customary R).

⋅ So, we can now state the truth conditions for '>' as follows:

(13) (φ > ψ)(w) = □_φ ψ(w) = 1 iff ∀v(wA_φ v → ψ(v) = 1)
(14) ¬(φ > ψ)(w) = ◇_φ ¬ψ(w) = 1 iff ∃v(wA_φ v ∧ ψ(v) = 0)

⋅ Using this semantics, we can now construct tree rules that will
allow us to show the validity of various formulas in C.

Semantic Trees in C
⋅ The tree rules for conditional logic C will consist of the standard rules for propositional logic (PTr) plus the two rules below. (It is imperative to note that in both these rules, the antecedent of the conditional and the subscript on the accessibility relation must match.)

>R:  from φ > ψ (w) and wA_φ v, add ψ (v).    (>R)
¬>R: from ¬(φ > ψ) (w), add wA_φ v and ¬ψ (v), where v is new to the path.    (¬>R)

⋅ So, the tree rules for conditional logic C are: PTr ∪ {>R, ¬>R}

⋅ Let’s consider a simple proof.

⋅ (p > q), (p > r) ⊨_C p > (q ∧ r)



1. p>q (w) Pr

2. p>r (w) Pr

3. ¬(p > (q ∧ r)) (w) NC
4. wA p v 3, ¬>R

5. ¬(q ∧ r) (v) 3, ¬>R
6. q (v) 1, 4, >R
7. r (v) 2, 4, >R

8. ¬q (v) ¬r (v) 5, PL
× ×

⋅ In C, we can use semantic trees to demonstrate that antecedent


strengthening (see Girle, p.97), negated conditional to conjunction (see
Girle, p.96), and contraposition are all invalid (cf. below).

⋅ contraposition in C: (p > q) > (¬q > ¬p)



1. ¬((p > q) > (¬q > ¬p)) (w) NTF
2. wA(p > q) v 1, ¬>R

3. ¬(¬q > ¬p) (v) 1,2, ¬>R
4. vA¬q u 3, ¬>R

5. ¬¬p (u) 3,4, ¬>R
6. p (u) 5, PL

⋅ One countermodel would thus look as follows (remember that we are analyzing contraposition as the nested restricted necessity claim □_{p>q} □_{¬q} ¬p):
⋅ W = {w, v, u}
⋅ A_{p>q} = {⟨w, v⟩}
⋅ A_{¬q} = {⟨v, u⟩}
⋅ I: {⟨⟨p, w⟩, 1⟩, ⟨⟨p, v⟩, 1⟩, ⟨⟨q, w⟩, 0⟩, ⟨⟨q, v⟩, 1⟩, ⟨⟨p, u⟩, 1⟩, ⟨⟨q, u⟩, 0⟩}
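⋅ The countermodel can also be verified mechanically. The encoding below is my own; it simply evaluates (p > q) > (¬q > ¬p) at w using the antecedent-indexed accessibility relations given above.

```python
# Verifying the countermodel above (my own encoding): (p > q) > (¬q > ¬p) is false at w.

A = {'p>q': {('w', 'v')}, 'not-q': {('v', 'u')}}    # antecedent-indexed accessibility
val = {('p', 'w'): 1, ('q', 'w'): 0,
       ('p', 'v'): 1, ('q', 'v'): 1,
       ('p', 'u'): 1, ('q', 'u'): 0}

def not_q_then_not_p(world):
    # ¬q > ¬p at a world: ¬p must hold at every world A_{¬q}-accessible from it
    return all(val[('p', x)] == 0 for (y, x) in A['not-q'] if y == world)

def contraposition(world):
    # (p > q) > (¬q > ¬p): ¬q > ¬p must hold at every A_{p>q}-accessible world
    return all(not_q_then_not_p(x) for (y, x) in A['p>q'] if y == world)

print(contraposition('w'))   # False: w sees v under A_{p>q}, v sees u under A_{¬q}, and p is true at u
```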

⋅ So far so good. Conditional logic C ensures that these intuitively


problematic inferences are not valid for >.

⋅ However, the reason might just be that this logic is too weak. For
example, in C, modus ponens fails for >.

⋅ modus ponens in C:

1. p > q (w)    Premise
2. p (w)        Premise
3. ¬q (w)       NC

⋅ Given the rules at our disposal, there is no way forward in this tree. Hence, we are unable to close the tree. Predictably, we get the same result for modus tollens.
⋅ While we do want to avoid having antecedent strengthening, contraposition, negated conditionals to conjunction, and the paradoxes of material implication hold for natural language conditionals, we do not want an analysis that invalidates modus ponens and modus tollens.
⋅ We thus need to amend C in order to avoid this result.

Conditional Logic C+
⋅ The problem with C is similar to the problem with the normal
modal logic K. Since there are no constraints on accessibility, ac-
cess can only be generated by negated conditionals, and this is
essentially the reason that MP and MT fail.
⋅ So, in C+ we add a rule which allows us to evaluate the possible
consequences of true and false conditionals at the world of evalua-
tion, viz. a reflexivity rule.

Refl.: given (φ > ψ) (w), branch into ¬φ (w) on one side and, on the other, φ (w) together with wA_φ w (for any w in the path).
The same branching applies given ¬(φ > ψ) (w): branch into ¬φ (w) on one side and, on the other, φ (w) together with wA_φ w (for any w in the path).

⋅ In addition to (Refl), we add another rule that, unlike the ¬>R-rule, makes explicit that the antecedent of a false conditional, viz. ¬(p > q), is true in the world that is made accessible, viz. the rule ¬>R+:

¬>R+: from ¬(φ > ψ) (w), add wA_φ v, φ (v), and ¬ψ (v), where v is new to the path.    (¬>R+)

⋅ With these rules added, it can now be shown that modus ponens is The same can be shown for modus
valid for > in C+ : tollens in C+ , cf. Girle p.98-99.


1. p>q (w) Premise
2. p (w) Premise
3. ¬q (w) NC

4. ¬p (w) p (w) 1, Refl


× wA p w
5. q (w) 1, 4, >R
×

⋅ But as desired contraposition still fails for > in C+ .


1. ¬((p > q) > (¬q > ¬p)) (w) NTF
2. wA p > q v 1, ¬>R+

3. p>q (v) 1, 2, ¬>R+

4. ¬(¬q > ¬p) (v) 1, 2, ¬>R+
5. vA¬q u 4, ¬>R+
6. ¬q (u) 4, 5, ¬>R+
7. ¬p (u) 4, 5, ¬>R+

8. ¬p (v) p (v) 3, Refl


↑ vA p v
9. q (v) 3, 8, >R

⋅ It can also be shown that antecedent strengthening, negated condi-


tional to conjunction, and the paradoxes of material implication remain
invalid in C+ .

exercises:
Check whether the following formulas are valid in C+ using a semantic tree. If invalid,
give a countermodel:

1. (p > q) > ((p ∧ r) > q)
2. p > (q > p)
3. ((p > q) ∧ (q > r)) > (p > r)
4. p > (¬p > q)
5. ((p ∨ q) ∧ ¬q) > p

Logic 2: Modal Logics – Week 9


Deontic Logic
⋅ Deontic Logic is the logic of obligations and permissions. So, in deontic logic, □ is interpreted as follows: (Strictly speaking, we should probably include an agent-parameter, but we will ignore this complication here.)

□φ ↝ 'It is obligatory to bring it about that φ'
     'One must bring it about that φ'
     'One ought to bring it about that φ'
(I will use these interchangeably throughout these notes.)

⋅ As usual, ◇ is the dual of □, so the interpretation of ◇ is the following:

◇φ ↝ 'It's not obligatory to bring it about that not-φ'.

⋅ Or equivalently:

◇φ ↝ 'It is permissible to bring it about that φ'.
     'One may bring it about that φ'

⋅ As usual, we assume that the truth of ‘�f’ requires f to obtain in


every accessible world (i.e. that f has been brought about in every
accessible world) and that the truth of ‘�f’ requires that there is
an accessible world where f obtains (i.e. that f has been brought
about in at least one accessible world).

Ideal Worlds and Non-Reflexivity


⋅ It is standardly assumed that the worlds quantified over in deontic
logic are “ideal worlds”, i.e. worlds where the laws/rules/regulations
are obeyed.

⋅ So, for a sentence such as ‘it is obligatory to bring it about that f’


to be true, what is required is that f is true in all the accessible
worlds where these are construed as the ideal worlds. Similarly,
for ‘it is permissible to bring it about that f’ to be true, what is
required is simply that there is an accessible (ideal) world where f
is true (where f is brought about).

⋅ With respect to deontic logic, it is immediately clear that all of the reflexive modal logics (e.g. T, S4, B, and S5) are going to make a bad prediction. These modal logics all include the axiom '□φ → φ' — which model-theoretically corresponds to a reflexivity constraint on the accessibility relation.
⋅ Including this axiom in a deontic logic would mean that the actual world is always accessible, and hence that the actual world is included among the ideal worlds. As a result, for φ to be obligatory, it would have to be true. Or, in other words, if φ is not true, then it's not obligatory.

⋅ So, the only reasonable starting point for a logic of obligations and permissions is a non-reflexive logic. In standard deontic logic (which I'll refer to as SDL), the system D is preferred. The modal logic D is simply K with the following axiom added.

D: □φ → ◇φ

⋅ Model-theoretically, this corresponds to adding a seriality con-


straint to the accessibility relation. Specifically, the accessibility
relation must satisfy the following condition:

∀w∃v(wRv)

⋅ The addition of this axiom to K ensures that an obligation to bring


it about that f entails the permissibility of bringing it about that f.
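⋅ The role of seriality can be checked by brute force over a small frame. The sketch below (an illustration of my own, not part of the standard presentation) searches all two-world models and finds counterexamples to Op → Pp only when the accessibility relation is allowed to be non-serial.

```python
from itertools import product

# Brute-force sketch: seriality is exactly what blocks counterexamples to the
# D-axiom, Op -> Pp, on these small frames.

W = ['w1', 'w2']
pairs = [(u, v) for u in W for v in W]

def O(R, val, w): return all(val[v] for v in W if (w, v) in R)   # true at all accessible worlds
def P(R, val, w): return any(val[v] for v in W if (w, v) in R)   # true at some accessible world

for serial_only in (True, False):
    bad = 0
    for bits in product([False, True], repeat=len(pairs)):
        R = {pair for pair, b in zip(pairs, bits) if b}
        if serial_only and not all(any((w, v) in R for v in W) for w in W):
            continue
        for vals in product([False, True], repeat=len(W)):
            val = dict(zip(W, vals))
            bad += sum(1 for w in W if O(R, val, w) and not P(R, val, w))
    print(serial_only, bad)
# With seriality imposed there are 0 counterexamples; without it, Op holds
# vacuously at dead-end worlds while Pp fails there.
```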

⋅ Following standard conventions, I will use 'O' (obligation) for '□' and 'P' (permission) for '◇'.

The Starting Point of Standard Deontic Logic


⋅ In the original formulations deontic logic, the following axioms
were assumed:

(A1) Op ↔ ¬P¬p
(A2) Pp ∨ P¬p
(A3) P(p ∨ q) ↔ (Pp ∨ Pq) * This principle was actually rejected in
(A4∗) ¬P(p ∧ ¬p) the original formulation of basic deontic
logic by von Wright.
(A5) (p ↔ q) → (Pp ↔ Pq)

⋅ Brief explanation of these principles:

(A1) This is simply the definition of P as the dual of O.

(A2) This is the principle of permission. It states that for any propo-
sition p, either it is permissible to bring about that p or it is
permissible to bring about that ¬p. This seems intuitively rea-
sonable.

(A3) This is the principle of deontic distribution. It states that if it is permissible to bring about p or q, then it is permissible to bring about p or it is permissible to bring about q — and vice versa, i.e. if it is permissible to bring about p or it is permissible to bring about q, then it is permissible to bring about p or q. (Notice that ¬P(p ∧ ¬p) is equivalent to O(p ∨ ¬p).)

(A4) This is essentially a necessitation principle: This states that it


is not permissible to bring about a contradiction — or equiva-
lently it is obligatory to bring about tautologies. This is admit-
tedly an odd obligation, but it is trivially satisfied.

(A5) This is essentially a substitution principle: This states that


if two propositions are logically equivalent, then the claim that
one is permitted is logically equivalent to the claim that the
other is permitted.

Tree Rules for Standard Deontic Logic (SDL)


⋅ The previous axioms are all included in the standard deontic logic
system DT. The tree rules for DT are the following:

OPN-rules:
¬Pφ (w)  ⟹  O¬φ (w)    (OPN)
¬Oφ (w)  ⟹  P¬φ (w)    (OPN)

PR: from Pφ (w), add wAv and φ (v), where v is new to the path.    (PR)
OD: from Oφ (w), add Pφ (w).    (OD)
OR: from Oφ (w) and wAv, add φ (v).    (OR)

⋅ If the following rule is added we get the system K4.

OOR: from Oφ (w) and wAv, add Oφ (v).    (OOR)

⋅ This is the rule corresponding to (Trans) in standard normal modal


logics.

exercises:
Consider the formulas below. Determine whether
these are valid or invalid. If valid, show this using a
semantic tree. If invalid, give a countermodel:

1. O(p → q) → (Op → Oq)


2. OPp ∨ PO¬p
3. Pp → (Oq → Op)

Puzzles in Standard Deontic Modal Logic


Ross' Paradox
(These problems, and others, are nicely explicated in some recent lecture notes by Will Starr, cf. [Link].)
⋅ One of the most discussed problems for deontic logics is Ross' Paradox. This is the observation that the following inference is valid in SDL (this is easily proved using the rules above):
above.

(1) a. Op → O(p ∨ q)
b. Pp → P(p ∨ q)

However, this means that inferences such as the following should


be valid:

(2) I ought to clean my room.


So, I ought to clean my room or burn the house down.
(3) I am permitted to write an essay.
So, I am permitted to write an essay or plagiarize an essay.

⋅ The extent to which these are a problem has been the subject of
much debate. Girle seems to think that these are not much of a
problem simply because the worlds quantified over in SDL are
‘deontically perfect’ worlds — and hence if there is an obligation
to bring it about that p, then p is true in every world, and so (p ∨
q) is true in every world as well.

Material Implication and Conditional Obligations


⋅ If conditionals are analyzed simply in terms of material implica-
tion, a number of very counterintuitive consequences follow in
deontic logic. For example, in deontic logic, we get a variant of the
paradox of material implication, namely the following:

(4) O¬p → O(p → q)

⋅ In other words, if p is forbidden (one is obligated to bring about


not-p), then if one were to bring about p, one would be obligated
to bring about anything and everything. But that seems absurd.

Free Choice Permission


⋅ A problem that arises with respect to permission is that these seem
to intuitively license something resembling the rule of simplifica-
tion but for disjunction. Normally, from a disjunction, one may not
infer either disjunct, i.e. (p ∨ q) does not entail p.

⋅ But now consider (5). This sentence intuitively entails both (5a)
and (5b).

(5) You may have ice cream or cake. (you are permitted ...)

a. You may have ice cream.


b. You may have cake.

⋅ From (5a) and (5b), however, it seems we may infer (6).

(6) You may have ice cream and you may have cake.

⋅ But this seems to suggest that you may have both ice cream and
cake and it is not clear that this follows intuitively from (5).

⋅ The problem is that it is not clear how to capture these facts. On


the one hand, it looks as if we want our logic to license the follow-
ing inferences:

P(p ∨ q) P(p ∨ q)

Pp Pq

⋅ Yet these inferences are clearly not going to be valid with our
current set of rules.

⋅ But even if these inferences could somehow be made to come out


valid, we would then need some way of blocking the following
inference:

P(p ∨ q)

Pp ∧ Pq

⋅ And this would follow from the simple rule of conjunction intro-
duction (conj.).

Forrester’s Paradox
⋅ Consider the statements below:

(7) It is forbidden for Jack to hit his son.


(8) If Jack does hit his son, he is obligated to hit him gently.
(9) Jack hits his son.

⋅ Intuitively, there is nothing inconsistent about these sentences, i.e.


it seems perfectly possible for these sentences to be simultaneously
true.

⋅ But now consider their translations into SDL: Assume:


p = ‘Jack hits his son’
q = ‘Jack hits his son gently’
(10) O¬p
(11) p → Oq
(12) p

⋅ The problem is that these sentences are jointly inconsistent:

⋅ Proof. From (11) and (12) plus modus ponens, we get Oq. How-
ever, q � p, viz. if Jack hit his son gently, it seems to follow that
Jack hit his son. So, then from Oq it follows that Op — and this
is, of course, inconsistent with (10).

⋅ The problem here is the substitution of logical entailments under


O. However, in order to avoid this consequence, a quite radical
departure from SDL is needed.

Chisholm’s Paradox
⋅ Consider the following set of intuitively consistent sentences: Assume:
p = ‘Jones goes’
q = ‘Jones tells his neighbors he is
(13) Jones ought to go (assist his neighbors). coming’
(14) It ought to be that if Jones goes, he tells them he is coming.
(15) If Jones does not go, he ought to not tell them he is coming.
(16) As a matter of fact, Jones did not go

⋅ Translating these into standard deontic logic, we get:

(17) Op
(18) O(p → q)
(19) ¬p → O¬q
(20) ¬p

⋅ As in the case of Forrester’s paradox (cf. above), we find that these


sentences are inconsistent in standard deontic logic:

⋅ Proof. From (19) and (20) plus modus ponens, we get O¬q.
From the proof we did in the exercise above, we know that
(Op → Oq) follows from (18). But this combined with (17) and
modus ponens yields Oq which is inconsistent with O¬q.

Must vs. Ought


⋅ We have, so far, been treating both ‘must’ and ‘ought’ as having
the same meaning, namely as O, viz.

Of � ‘It is obligatory to bring it about that f’


‘One must bring it about that f’
‘One ought to bring it about that f’

⋅ However, there are good reasons to suspect that ‘must’ and ‘ought’
have different meanings. For example, the sentence below seems
clearly consistent.

(21) You may skip the talk, but you ought to attend.

⋅ If we translate (21) into SDL, we get the following: Assume:


p = ‘you attend the talk’

(22) P¬p ∧ Op

⋅ But this is inconsistent.

⋅ Proof. From P¬p, it follows that there is an accessible ¬p-world


but from Op it follows that every accessible world is a p-world.
Contradiction.

⋅ Also, notice that the sentence sounds weird with 'must' substituted for 'ought'.

(23) # You may skip the talk, but you must attend.

⋅ So, in conclusion, it seems that we cannot treat ‘ought’ and ‘must’


as having the same quantificational force, viz. the same meaning.

The Miners Puzzle


⋅ Recently, a new puzzle about ‘if’ and ‘ought’ and their interaction was presented by Kolodny and MacFarlane (2010):

“Ten miners are trapped either in shaft A or in shaft B, but we do


not know which. Flood waters threaten to flood the shafts. We have
enough sandbags to block one shaft, but not both. If we block one
shaft, all the water will go into the other shaft, killing any miners
inside it. If we block neither shaft, both shafts will fill halfway with
water, and just one miner, the lowest in the shaft, will be killed.”
(K&M, 2010, p.1-2)

⋅ Given this scenario, the following claims are all intuitively true.

(24) The miners are in shaft A or shaft B.


(25) If the miners are in shaft A, we ought to block A.
(26) If the miners are in shaft B, we ought to block B.
(27) We ought to block neither shaft.

exercise:
Translate each of these sentences into formulas of stan-
dard deontic logic and show that they jointly lead to a
contradiction.
1

Logic 2: Modal Logics – Revision Week


Modal Logics: Tree Rules, Axioms, and Countermodels
⋅ Truth conditions for □ and ◇:

⋅ V M (□f, w) = 1 iff ∀v(wRv → V M (f, v) = 1)

⋅ V M (◇f, w) = 1 iff ∃v(wRv ∧ V M (f, v) = 1)
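⋅ Purely as an illustration of how these two clauses can be computed over a small finite model (this sketch, including the toy model and helper names such as 'holds', is my own and not part of the handout):

worlds = {'w', 'v'}
R = {('w', 'v'), ('v', 'v')}                 # accessibility relation
V = {('p', 'w'): False, ('p', 'v'): True}    # interpretation of sentence letters

def holds(formula, w):
    # formulas are nested tuples, e.g. ('box', 'p'), ('not', ('diamond', 'p'))
    if isinstance(formula, str):             # sentence letter
        return V[(formula, w)]
    op, *args = formula
    if op == 'not':
        return not holds(args[0], w)
    if op == 'or':
        return holds(args[0], w) or holds(args[1], w)
    if op == 'box':                          # true iff the argument holds at every accessible world
        return all(holds(args[0], v) for v in worlds if (w, v) in R)
    if op == 'diamond':                      # true iff the argument holds at some accessible world
        return any(holds(args[0], v) for v in worlds if (w, v) in R)
    raise ValueError(op)

print(holds(('box', 'p'), 'w'))       # True: the only world accessible from w is v, and p holds there
print(holds(('diamond', 'p'), 'v'))   # True: v sees itself, and p holds at v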

Basic Tree Rules for □ and ◇


◇R-rule    ◇f (w)                 NB! For the (◇R) rule, it is still crucial that the
           wRv                     world v introduced in line n is new to that part
           f (v)     l.1, (◇R)     of the tree.

           new world to path


□R-rule    □f (w)                 NB! For the (□R) rule, notice that antecedent
           wRv                     access to some world is required in order to
           f (v)     l.1-2, (□R)   “discharge” the box.

Modal Negation Rules


⋅ Since □ and ◇ are duals, we have the following rules for the interaction of negation with □ and ◇:

√                            √
1. ¬□f (w)                   1. ¬◇f (w)
⋮                            ⋮
n  ◇¬f (w)   l.1, (MN)       n  □¬f (w)   l.1, (MN)

⋅ The basic rules and the MN rules combined with the standard
rules for propositional logic (PTr) yield the modal logic K.

Modal Logic T
⋅ The modal logic T is characterized by having a reflexive accessibil-
ity relation, so we add the following tree rule:

Refl. ⋮
wRw (Refl.)

for any w in the path
2

⋅ In axiomatic approaches to modal logic, the axiom that character-


izes T is the following:

T: □f → f

Modal Logic S4
⋅ The modal logic S4 is characterized by having a reflexive and
transitive accessibility relation, so in addition to (Refl.) we add the
following rule:

Trans. wRv
vRu

wRu l.1, l.2, (Trans.)

⋅ In axiomatic approaches to modal logic, the axiom that character-


izes S4 is the following:

S4: □f → □□f

Modal Logic B
⋅ The modal logic B is characterized by having a reflexive and sym-
metric accessibility relation, so in addition to (Refl.) we add the
following rule:

Symm. wRv

vRw l.1, (Symm.)

⋅ In axiomatic approaches to modal logic, the axiom that character-


izes B is the following:

B: f → □◇f

Modal Logic S5
⋅ The modal logic S5 is characterized by having a reflexive, transitive, and symmetric accessibility relation, so the tree rules for S5 simply include ◇R, □R, (Refl.), (Trans.), and (Symm.).

⋅ In axiomatic approaches to modal logic, the axiom that character-


izes S5 is the following:

S5: ◇f → □◇f
3

Summary of Tree Rules for the Normal Modal Logics


⋅ The tree rules for each of the normal modal systems thus look as
follows:

KTr = PTr ∪ MN ∪ {◇R, □R}
TTr = PTr ∪ MN ∪ {◇R, □R, Refl.}
S4Tr = PTr ∪ MN ∪ {◇R, □R, Refl., Trans.}
BTr = PTr ∪ MN ∪ {◇R, □R, Refl., Symm.}
S5Tr = PTr ∪ MN ∪ {◇R, □R, Refl., Trans., Symm.}

⋅ Since K is the weakest of the systems, anything valid in K will be valid in every other system. Similarly, since S5 is the strongest of the systems, anything invalid in S5 will be invalid in every weaker system. The diagram below indicates the relative strength of the normal modal logics:

K  →  T  →  S4  →  S5
K  →  T  →  B   →  S5

(T: reflexive; S4: reflexive, transitive; B: reflexive, symmetric; S5: reflexive, transitive, symmetric)
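⋅ On a finite frame these conditions can be checked mechanically. The following sketch is my own illustration (the frame and the names such as 'is_reflexive' are invented) of the constraints that separate T, D, S4, B, and S5:

W = {'w', 'v', 'u'}
R = {('w', 'w'), ('v', 'v'), ('u', 'u'),
     ('w', 'v'), ('v', 'w'), ('v', 'u'), ('u', 'v')}

def is_reflexive(W, R):
    return all((x, x) in R for x in W)

def is_serial(W, R):
    return all(any((x, y) in R for y in W) for x in W)

def is_transitive(W, R):
    return all((x, z) in R for (x, y) in R for (y2, z) in R if y == y2)

def is_symmetric(W, R):
    return all((y, x) in R for (x, y) in R)

# This frame is reflexive and symmetric but not transitive (wRv and vRu but not wRu):
# a B-frame that is not an S4-frame.
print(is_reflexive(W, R), is_serial(W, R), is_transitive(W, R), is_symmetric(W, R))
# True True False True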

Epistemic Logic
⋅ The tree rules for epistemic logic are more or less identical to the
rules in standard modal logic — the main difference being that
the modal operators, K and P, and the accessibility relation, E, are
relativized to agents.

⋅ Truth Conditions for K and P:

⋅ V M (K x f, w) = 1 iff ∀v(wE x v → V M (f, v) = 1)


⋅ V M (P x f, w) = 1 iff ∃v(wE x v ∧ V M (f, v) = 1)
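⋅ The only change relative to the earlier clauses for □ and ◇ is that the accessibility relation now carries an agent index. As a rough illustration only (the encoding of E as a dictionary of relations and the name 'knows' are mine, not the handout's), the K x clause might be computed like this:

E = {'a': {('w', 'w'), ('w', 'v')},     # worlds agent a cannot rule out from w
     'b': {('w', 'w')}}                 # agent b can rule out v
V = {('p', 'w'): True, ('p', 'v'): False}
worlds = {'w', 'v'}

def knows(agent, letter, w):
    # K_agent letter holds at w iff letter is true at every E_agent-accessible world
    return all(V[(letter, v)] for v in worlds if (w, v) in E[agent])

print(knows('b', 'p', 'w'))   # True: p holds at every world b considers possible
print(knows('a', 'p', 'w'))   # False: a cannot rule out v, and p fails at v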

Tree Rules for Epistemic Logic

√ √
KPN-rules ¬K x f (w) ¬P x f (w)
⋮ ⋮
P x ¬f (w) (KPN) K x ¬f (w) (KPN)
4

√ √
PR Px f (w) KR Kx f (w)
⋮ wE x v
wE x v ⋮
f (v) (PR) f (v) (KR)
↑ ↑
where v is new to path. for any v.

√ √
KT Kx f (w) KKR K x f (w)
⋮ wE x v
f (w) (KT) ⋮
Kx f (v) (KKR)

for any v.

⋅ The KT-rule is the correlate of reflexivity guaranteeing factivity,


hence with this rule added we get an epistemic logic based on
modal logic T:

⋅ Tree Rules for Epistemic Logic T = {PTr, KPN, PR, KR, KT}.

⋅ The KKR-rule is the correlate of transitivity guaranteeing the so-


called KK-principle, viz. K x f → K x K x f. Thus, if this rule is added
to the tree rules for T, we get the epistemic logic S4.

⋅ Tree Rules for Epistemic Logic S4 = {PTr, KPN, PR, KR, KT,
KKR}.

⋅ For multiagent epistemic logic, we might add the following tree


rule as well:

TrKR K x K y f (w)

Kx f (w) (TrKR)

Deontic Logic
⋅ Similar to Epistemic Logic, we are only going to consider the tree
rules for two deontic modal logics, namely DT and D4.

⋅ In deontic logic, as usual we have two modal operators O (obliga-


tion) and P (permission) which have the standard truth conditions:

⋅ V M (Of, w) = 1 iff ∀v(wAv → V M (f, v) = 1)


⋅ V M (Pf, w) = 1 iff ∃v(wAv ∧ V M (f, v) = 1)
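⋅ These clauses have exactly the shape of the □ and ◇ clauses, with A picking out the deontically ideal alternatives. The toy model below (my own illustration, not from the handout) shows the structural point developed later in this section: A is serial but not reflexive, so Of and ¬f can hold together at a world.

worlds = {'w', 'i'}
A = {('w', 'i'), ('i', 'i')}                 # serial: every world sees some ideal world
V = {('p', 'w'): False, ('p', 'i'): True}

def O(letter, w):
    return all(V[(letter, v)] for v in worlds if (w, v) in A)

def P(letter, w):
    return any(V[(letter, v)] for v in worlds if (w, v) in A)

print(O('p', 'w'), P('p', 'w'))              # True True: Op holds and, the frame being serial, so does Pp
print(O('p', 'w') and not V[('p', 'w')])     # True: Op holds at w even though p is false there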
5

Tree Rules for Deontic Logic


√ √
OPN ¬Pf (w) OPN ¬Of (w)
⋮ ⋮
O¬f (w) (OPN) P¬f (w) (OPN)

√ √
PR Pf (w) OR Of (w)
⋮ wAv
wAv ⋮
f (v) (PR) f (v) (OR)

where v is new to path.

√ √
OD Of (w) OOR Of (w)
⋮ wAv
Pf (w) (OD) ⋮
Of (v) (OOR)

⋅ The key thing to remember about the deontic logics is that these
are non-reflexive logics. So, the name DT is slightly misleading as
none of the tree rules above correspond to a reflexivity constraint
on the accessibility relation.

⋅ Although deontic logics are non-reflexive, they do include the rule


OD, which corresponds to a seriality constraint on the accessibility
relation. The axiom that characterizes modal logics with a serial
accessibility relation is the following:

□f → ◇f

⋅ Tree Rules for DT: {PTr, OPN, PR, OR, OD}.

⋅ The rule OOR is the correlate of transitivity in deontic logic. This


rule validates the following formula: Of → OOf.

⋅ When the OOR rule is added to DT, we get the system D4.

⋅ Tree Rules for D4: {PTr, OPN, PR, OR, OD, OOR}.

Conditional Logic C+
⋅ Conditional Logic C+ is a type of modal propositional logic. How-
ever, instead of □ and ◇, C+ has just one (binary) modal operator,
namely >.
6

⋅ The semantics for > is the following: The notation ‘wAf v’ here means that v
is an accessible f-world from w.
⋅ V M (f > y, w) = 1 iff ∀v(wAf v → V M (y, v) = 1)
⋅ V M (¬(f > y), w) = 1 iff ∃v(wAf v ∧ V M (y, v) = 0)
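⋅ Because the accessibility relation is indexed by the antecedent, a C+ model has to say, for each world w and antecedent f, which f-worlds are accessible from w. The sketch below is only a rough illustration of the clause for > (the dictionary encoding and the name 'conditional' are mine; a full model would also have to respect the Refl. constraint used in the tree rules):

acc = {('w', 'p'): {'v'},                   # w A_p v: v is an accessible p-world from w
       ('w', 'q'): set()}                   # no accessible q-worlds from w
V = {('p', 'w'): False, ('p', 'v'): True,
     ('q', 'w'): False, ('q', 'v'): True}

def conditional(antecedent, consequent, w):
    # f > y is true at w iff y holds at every accessible f-world from w
    return all(V[(consequent, v)] for v in acc.get((w, antecedent), set()))

print(conditional('p', 'q', 'w'))   # True: the only accessible p-world, v, is a q-world
print(conditional('q', 'p', 'w'))   # True (vacuously): there are no accessible q-worlds from w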

Semantic Trees in Conditional Logic C+


⋅ In addition the standard tree rules for propositional logic (PTr), we
add the following rules in C+ :

√ √
>R f > y (w) ¬>R ¬(f > y) (w)
wAf v ⋮
⋮ wAf v
y (v) (>R) ¬y (v) (¬>R)

where v is new to path

Refl. (f > y) (w)


¬f (w) f (w) (Refl.)


wAf w
(for any w in the path)

Refl. ¬(f > y) (w)


¬f (w) f (w) (Refl.)


wAf w
(for any w in the path)

¬>R+ ¬(f > y) (w)


⋮
wAf v
f (v)
¬y (v) (¬>R+ )

⋅ Tree Rules for Conditional Logic C+ : {PTr, >R, ¬>R, ¬>R+ , Refl.}
7

Countermodels
⋅ To show that a formula is invalid one constructs a model in which
the target formula is false. Since validity consists in truth in every
world of every model (for the relevant system), the existence of
such a model (viz. a countermodel) is proof that the formula is
invalid.

Countermodels in MPL
⋅ A model in modal propositional logic consists of the following:
⋅ A non-empty set of worlds.
⋅ A (possibly empty) set of accessibility relations.
⋅ A specification of the interpretation function, i.e. an assignment
of truth values to the sentence letters in the formula relative to
each world of the model.
⋅ The modal system in question will constrain which accessibility
relations must be included.
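⋅ To make the recipe concrete, here is a sketch of a K-countermodel to ◇p → □p, checked by brute force. The example formula, the model, and the helper names are my own illustration and are not taken from the handout's exercises.

worlds = {'w', 'u', 'v'}
R = {('w', 'u'), ('w', 'v')}
V = {('p', 'w'): False, ('p', 'u'): True, ('p', 'v'): False}

def box_p(w):
    return all(V[('p', x)] for x in worlds if (w, x) in R)

def diamond_p(w):
    return any(V[('p', x)] for x in worlds if (w, x) in R)

# ◇p is true at w (u is an accessible p-world), but □p is false at w (v is accessible and ¬p),
# so the material conditional ◇p → □p is false at w: we have a countermodel.
print(diamond_p('w'), box_p('w'))            # True False
print((not diamond_p('w')) or box_p('w'))    # False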

Countermodels in QML
⋅ In quantified modal logic, models are more involved than in
propositional modal logic. A model must include:
⋅ A domain of individuals. If the modal logic in question has
⋅ A non-empty set of worlds (possibly a singleton set). variable domains, then world-specific
domains need to be specified.
⋅ A set of accessibility relations (possibly empty).
⋅ An interpretation function which specifies:
a. The extension of the relevant constants: a, b, c, ...
b. The extension of the relevant predicates relative to the rele-
vant worlds: F1 , F2 , F3 , ...
c. The extension of the relevant relations relative to the relevant
worlds: R1 , R2 , R3 , ...
⋅ Again, the modal system in question will constrain which acces-
sibility relations must be included.

Countermodels in Propositional Epistemic Logic


⋅ Countermodels in epistemic propositional logic are very similar to
countermodels in standard modal propositional logic except that
the accessibility relations must be agent-relative. So, a model in
epistemic logic must include:
⋅ A non-empty set of worlds (possibly a singleton set).
⋅ A (possibly empty) set of accessibility relations relativized to
the relevant agent(s).
⋅ A specification of the interpretation function, i.e. an assignment
of truth values to the sentence letters in the formula relative to
each world of the model.
8

Countermodels in Propositional Deontic Logic


⋅ Countermodels in deontic propositional logic are structurally
identical to countermodels in standard modal propositional logic,
i.e. these must include a set of worlds, set of accessibility relations,
and an interpretation function.

Countermodels in Conditional Logic C+


⋅ Countermodels in Conditional Logic C+ must contain the follow-
ing:

⋅ A non-empty set of possible worlds.


⋅ A non-empty set of accessibility relations.
⋅ An interpretation function assigning truth values to each sentence letter in the relevant formula relative to each possible world in the model.

(Note: in deontic logic, by contrast, the set of accessibility relations cannot be empty, as there is a seriality constraint on the accessibility relation.)
1

Logic 2: Modal Logics — Exercise Booklet


Translations: Exercises
⋅ Translate the following sentences into formulas of Modal Proposi-
tional Logic.

1. It is possible that it might rain.


2. If Sue runs for office, Louise might run too.
3. We must block door 1 or door 2.
4. To start the engine, the key must be turned.
5. The garbage truck can only lift the bins if they are closed.
6. Sue must not be happy.
7. If parents routinely question their doctor, they might not do
what is right for their child.
8. Fred or Mary might have stolen the diamonds, but not both.

Solutions on the next pages...


2

Translations: Solutions
1. It is possible that it might rain. p: ‘it rains’.

(a) ◇◇p
(b) ◇p

One could argue that the second modal is superfluous in English, i.e. it doesn't actually change the meaning, and hence translate this with just one ◇. That would be acceptable in this case.

2. If Sue runs for office, Louise might run too. p: ‘Sue runs for office’
q: ’Louise runs for office’
(a) p → ◇q

⋅ Follow up question: Why is this translation better than ‘◇(p → q)’?

3. We must block door 1 or door 2. p: ‘We block door 1’


q: ’We block door 2’
(a) □(p ∨ q)
(b) □p ∨ □q

Both these translations are correct; the sentence is ambiguous. Notice that these do not mean the same thing. In (a), we have an obligation but are free to choose how to satisfy this obligation. In (b), we also have an obligation, but the sentence could be true even if we only have one specific obligation, e.g. to see to it that p.

4. To start the engine, the key must be turned. p: ‘The engine starts’
q: ’The key is turned’
(a) □(p → q)

⋅ Follow up question: Why is this translation better than ‘p → □q’?

Notice that this conditional states a necessary condition, not a sufficient condition. In other words, it is necessary for starting the engine that the key is turned. Hence, necessarily, if the engine started, the key was turned.

5. The garbage truck can only lift the bins if they are closed. p: ‘The garbage truck lifts the bins’
q: ’The bins are closed’
(a) ◇p → q

Again, this conditional states a necessary condition, namely that in order for it to be possible for the bins to be lifted by the truck, they have to be closed. It follows that if it is possible for the truck to lift the bins, they are closed.

6. Sue must not be happy. p: ‘Sue is happy’

(a) □¬p
3

7. If parents routinely question their doctor, they might not do what


is right for their child. p: ‘parents routinely question their
doctor’
(a) p → ◇¬q q: ‘parents do what is right for their
child’

8. Fred or Mary might have stolen the diamonds, but not both. p: ‘Fred stole the diamonds’
q: ‘Mary stole the diamonds’
(a) (◇p ∧ ◇q) ∧ ¬◇(p ∧ q)
4

Semantic Trees: Exercises

⋅ Using semantic trees, show the following formulas are semanti-


cally valid. For these you may use the PL-rules, MN-rules, ◇S5-rule, and □S5-rule.

1. ⊨S5 (◇p → □◇p)
2. ⊨S5 □(p → q) → □(□p → □q)
3. ⊨S5 (◇◇p → ◇p)
4. ⊨S5 ◇(p ∨ q) ↔ ¬(¬◇p ∧ ¬◇q)
5. ⊨S5 (□p → p)
6. ⊨S5 □p → ¬◇¬□p
7. ⊨S5 □□(p → p)
8. ⊨S5 (□(q → p) ∧ □(¬q → p)) ↔ □p

⋅ For these, you may only use the PL-rules, MN-rules, ◇R-rule, □R-rule, and also the rules (Refl.), (Trans.), and (Symm.) when appropriate. That is, you are not allowed to use ◇S5 and □S5.

9. �K �¬p → �(p → q)
10. �T ��p → �p
11. �T ¬�(p ∨ q) → ¬��q
12. �S4 ���p ∨ ¬�p
13. �S4 �p → ���p

⋅ Additional Exercises
If you've finished the above exercises, go back and do 1-8 without using the ◇S5-rule or the □S5-rule, but the basic ◇R- and □R-rules together with (Refl.), (Trans.), (Symm.) instead.

Solutions on the next pages...


5

Semantic Trees: Solutions


Rules admissible for 1-8: PL, MN, �S5,
1. �S5 (�p → ��p) �S5.

1. ¬(�p → ��p) (w) NTF


2. �p (w) l.1, PL
3. ¬��p (w) l.1, PL
4. p (v) l.2, �S5
5. �¬�p (w) l.3, MN
6. ¬�p (u) l.5, �S5
7. �¬p (u) l.6, MN
8. ¬p (v) l.7, �S5
×

2. �S5 �(p → q) → �(�p → �q)

1. ¬(�(p → q) → �(�p → �q)) (w) NTF


2. �(p → q) (w) l.1, PL
3. ¬�(�p → �q) (w) l.1, PL
4. �¬(�p → �q) (w) l.3, MN
5. ¬(�p → �q) (v) l.4, �S5
6. �p (v) l.5, PL
7. ¬�q (v) l.5, PL
8. �¬q (v) l.6, MN
9. ¬q (u) l.8, �S5
10. p (u) l.6, �S5
11. p→q (u) l.2, �S5

12. ¬p (u) q (u) l.11, PL


× ×

3. �S5 (��p → �p)

1. ¬(��p → �p) (w) NTF


2. ��p (w) 1, PL
3. ¬�p (w) 1, PL
4. �p (v) 2, �S5
5. p (u) 4, �S5
6. �¬p (w) 3, MN
7. ¬p (u) 6, �S5
×
6

4. �S5 �(p ∨ q) ↔ ¬(¬�p ∧ ¬�q)

1. ¬(�(p ∨ q) ↔ ¬(¬�p ∧ ¬�q)) (w) NTF

2. �(p ∨ q) (w) ¬�(p ∨ q) (w) 1, PL


¬¬(¬�p ∧ ¬�q) (w) ¬(¬�p ∧ ¬�q) (w)

4. ¬�p ∧ ¬�q (w) 4, PL


5. ¬�p (w) l.4, PL
6. ¬�q (w) l.4, PL
7. �¬p (w) l.6, PL
8. �¬q (w) l.7, PL
9. p∨q (v) l.2, �S5

10. p (v) q (v) 9, PL


11. ¬p (v) ¬q (v) l.7, l.8, �S5
× ×
12.

left as exercise – check your answer at office hours

5. �S5 (�p → p)

1. ¬(�p → p) (w) NTF


2. �p (w) l.1, PL
3. ¬p (w) l.1, PL
4. p (w) l.2, �S5
×

6. �S5 �p → ¬�¬�p

1. ¬(�p → ¬�¬�p) (w) NTF


2. �p (w) l.1, PL
3. ¬¬�¬�p (w) l.1, PL
4. �¬�p (w) l.3, PL
5. ¬�p (v) l.4, �S5
6. �¬p (v) l.5, MN
7. ¬p (u) l.6, �S5
8. p (u) l.2, �S5
×
7

7. �S5 ��(p → p)

1. ¬��(p → p) (w) NTF


2. �¬�(p → p) (w) l.1, MN
3. ¬�(p → p) (w) l.2, �S5
4. �¬(p → p) (w) l.3, MN
5. ¬(p → p) (w) l.4, �S5
6. p (w) l.5, PL
7. ¬p (w) l.5, PL
×

8. �S5 (�(q → p) ∧ �(¬qi → p)) → �p

1. ¬((�(q → p) ∧ �(¬q → p)) → �p) (w) NTF


2. (�(q → p) ∧ �(¬q → p)) (w) l.1, PL
3. ¬�p (w) l.1, PL
4. �(q → p) (w) l.2, PL
5. �(¬q → p) (w) l.2, PL
6. �¬p (w) l.3, MN
7. ¬p (v) l.6, �S5
8. q→p (v) l.4, �S5
9. ¬q → p (v) l.5, �S5

10. ¬q p (v) 8, PL
×
11. ¬¬q p (v) l.9, PL
× ×
Rules admissible for 9-13: PL, MN,
�R, �R. The rules (Refl.), (Trans.), and
9. �K �¬p → �(p → q) (Symm.) should also be used when
appropriate.

1. ¬(�¬p → �(p → q)) (w) NTF


2. �¬p (w) l.1
3. ¬�(p → q) (w) l.1
4. �¬(p → q) (w) l.3, MN
5. wRv l.4, �R
6. ¬(p → q) (v) 4-5, �R
7. p (v) l.6
8. ¬q (v) l.6
9. ¬p (v) l.2, l.5, �R
×

10. �T ��p → �p
8

1. ¬(��p → �p) (w) NTF


2. ��p (w) l.1
3. ¬�p (w) l.1
4. �¬p (w) l.3, MN
5. wRw (refl.)
6. ¬p (w) l.4, l.5, �R
7. �p (w) l.2,l.5
8. p (w) l.5, l.7, �R
×

11. �T ¬�(p ∨ q) → ¬��q

1. ¬(¬�(p ∨ q) → ¬��q) (w) NTF


2. ¬�(p ∨ q) (w) l.1
3. ¬¬��q (w) l.1
4. ��q (w) l.3
5. wRv l.4, �R
6. �q (v) l.4, l.5,�R
7. �¬(p ∨ q) (w) l.2, MN
8. ¬(p ∨ q) (v) l.7, l.5, �R
9. ¬p (v) l.8
10. ¬q (v) l.8
11. vRv (Refl.)
12. q (v) l.6, l.11, �R
×

12. �S4 ���p ∨ ¬�p

1. ¬(���p ∨ ¬�p) (w) NTF


2. ¬���p (w) l.1
3. ¬¬�p (w) l.1
4. �¬��p (w) l.2, MN
5. wRw (w) (Refl.)
6. ¬��p (w) l.4, l.5, �R
7. �¬�p (w) l.6, MN
8. wRv l.7, �R
9. ¬�p (v) l.7, l.8, �R
10. �¬p (v) l.9, MN
11. vRu l.10, �R
12. ¬p (u) l.10, l.11, �R
13. �p (w) l.3
14. wRu l.8, l.11, (Trans.)
15. p (u) l.13, l.14, �R
×
9

13. �S4 �p → ���p

1. ¬(�p → ���p) (w) NTF


2. �p (w) l.1
3. ¬���p (w) l.1
4. �¬��p (w) l.3, MN
5. wRv l.4, �R
6. ¬��p (v) l.4, l.5, �R
7. �¬�p (v) l.7, MN
8. vRv (Refl.)
9. ¬�p (v) l.7, l.8, �R
10. �¬p (v) l.9, MN
11. vRu l.10, �R
12. ¬p (u) l.10, l.11, �R
13. wRu l.5, l.11, (Trans.)
14. p (u) l.2, l.13, �R
×
10

Translations and Semantic Trees: Exercises


⋅ Translate these arguments into Modal Propositional Logic and show
that they are S5-valid using semantic trees.

Argument 1: God exists


God’s existence is either necessary or impossible. Moreover, p: ‘God exists’
God’s existence is conceivable. If God’s existence is conceivable, q: ‘God’s existence is conceivable’
then his existence is possible. So, God exists.

Argument 2: There is no external world.


If I have hands, there must be an external world. If there is an p: ‘I have hands’
external world, then I cannot be a brain in a vat. It’s possible that q: ‘There is an external world’
I’m a brain in a vat. So, it is possible that I do not have hands. r: ‘I’m a brain in a vat’

Solutions on the next pages...


11

Argument 1, Translation: Solution


Premise 1: �p ∨ ¬�p
Premise 2: q
Premise 3: q → �p
Conclusion: p

Argument 1, Semantic Tree: Solution

1. �p ∨ ¬�p (w) Premise


2. q (w) Premise
3. q → �p (w) Premise
4. ¬p (w) NC

5. ¬q (w) �p (w) l.3, PL


×
6. �p (w) ¬�p (w) l.1, PL
×
7. wRw Refl.
8. p l.6-7, �R
×

Argument 2, Translation: Solution


Premise 1: �(p → q)
Premise 2: �(q → ¬r)
Premise 3: �r
Conclusion: �¬p

Argument 2, Semantic Tree: Solution

1. �(p → q) (w) Premise


2. �(q → ¬r) (w) Premise
3. �r (w) Premise
4. ¬�¬p (w) NC
5. �p (w) l.4, MN
6. wRv l.3, �R
7. r (v) l.3, �R
8. p (v) l.5,l.6, �R
9. p→q (v) l.1,l.6, �R
10. q → ¬r (v) l.2,l.6, �R

11. ¬p (v) q (v) l.9, PL


×
12. ¬q (v) ¬r (v) l.10, PL
× ×
12

Countermodels: Exercises
1. �K ¬(�¬p → �(p → ¬p))

2. �S4 ¬�(�p ∨ �q) ∨ ¬��p�

3. �S5 ¬�¬�(�p → �q) → ¬�(p → q)�

4. �S5 (�(p ∨ q) → �r) → �(p ∨ �q)

Solutions on the next pages...


13

Countermodels: Solutions
1. �K ¬(�¬p → �(p → ¬p))

This is a negated conditional, so it is true only if the antecedent w

is true and the consequent false. So, to find a countermodel we v


p
simply need to find a model where either the antecedent is false or p
the consequent true.

model W w, v
R wRv
I ⟨⟨p,w⟩,1⟩, ⟨⟨p,v⟩,1⟩

Since in this model V M (p,v) = 1 and wRv ∈ R, this means that


V M (�¬p,w) = 0. So, the antecedent of the conditional is false,
which means that the conditional is true. This in turn means that
the negation of the conditional is false. And hence we have a coun-
termodel.

2. �S4 ¬�(�p ∨ �q) ∨ ¬��p�

Since this is a negated disjunction, by DeMorgan it is equivalent to


¬(�p ∨ �q) ∧ ¬¬��p. So, to find a countermodel, we simply need w

to find an S4-model where one of these conjuncts is false. ¬p

model W w NB! This counts as an S4-model since


the transitivity condition on R is
R wRw satisfied. That condition is:
I ⟨⟨p,w⟩,0⟩
∀x∀y∀z[(xRy ∧ yRz) → xRz]

In this model, V M (��p, w) = 0, so it follows that V M (¬¬��p, w) =


But since there is only one world w,
there is only one possible instantiation:
0. Given this, one of the conjuncts is false, so the conjunction as a
(wRw ∧ wRw) → wRw.
whole is false. Hence we have a countermodel.
And this condition is clearly satisfied.
14

3. �S5 ¬�¬�(�p → �q) → ¬�(p → q)�

This is a negated conditional, so it is true only if the antecedent


is true and the consequent false. So, to find a countermodel we
simply need to find a model where either the antecedent is false or w
the consequent true.
p, ¬q

model W w
R wRw
I ⟨⟨p,w⟩,1⟩, ⟨⟨q,w⟩,0⟩

In this model, V M ((p → q), w) = 0. Consequently, V M (¬(p → q), w)


= 1. Since wRw, it follows that V M (�¬(p → q), w) = 1 and this, of
course is equivalent to V M (¬�(p → q), w) = 1. This means that the
conditional is true, and hence that the negation of the conditional
false. So, we have a countermodel.

4. �S5 (�(p ∨ q) → �r) → �(p ∨ �q)

To find a countermodel here, we need a model where the an-


tecedent of the conditional is true and the consequent false. Again,
we only need one world. w

¬p, ¬q
model W w
R wRw
I ⟨⟨p,w⟩,0⟩, ⟨⟨q,w⟩,0⟩

In this model, V M ((p ∨ q), w) = 0. Since there is only one accessible


world w from w, it follows that V M (�(p ∨ q), w) = 0. Given this, it
follows that the antecedent of 4 is true. However, V M (p, w) = 0 and
V M (�q, w) = 0, so V M ((p ∨ �q), w) = 0. And, again, since only w
is accessible from w, it follows that V M (�(p ∨ �q), w) = 0. So, the
consequent of 4 is false and hence we have a countermodel.
15

Natural Deduction in PL: Basic Inference Rules


These are the basic inference rules for
natural deduction in propositional
(f → y) (f → y)
logic (PL). We will add a set of modal
inference rules below in order to extend
f ¬y the system to modal propositional logic
(MPL).

y ¬f In addition, we will add several derived


modus ponens (MP) modus tollens (MT) rules to our inventory by proving these
from our set of basic inference rules.

(f ∧ y) f f
y y
f
y (f ∧ y) (y ∧ f)
simplification (Simp) conjunction (Conj)

(f ∨ y) (f ∨ y)
f f ¬f ¬y

(f ∨ y) (y ∨ f) y f
addition (Add) disjunctive syllogism (DS)

f→y f↔y
¬¬f f y→f
f→y
f ¬¬f f↔y y→f
double negation (DN) biconditional (BC)

f ass. f ass.

⋮ ⋮
⊥ y
¬f raa f→y cp

Reductio ad Absurdum (RAA) Conditional Proof (CP)


16

Natural Deduction in PL: Exercises


⋅ Prove the theorems below using the basic inference rules above.     Note: Once a theorem is proved, you may appeal to that theorem in derivations of other theorems, cf. info on replacement rules below.

1. ⊢PL ¬(f ∨ y) ↔ (¬f ∧ ¬y)
2. ⊢PL ¬(f ∧ y) ↔ (¬f ∨ ¬y)
3. ⊢PL (f → y) ↔ (¬f ∨ y)
4. ⊢PL f ↔ (f ∧ f)
5. ⊢PL f ↔ (f ∨ f)
6. ⊢PL (f → y) ↔ (¬y → ¬f)
7. ⊢PL (f ∧ y) ↔ (y ∧ f)
8. ⊢PL (f ∨ y) ↔ (y ∨ f)
9. ⊢PL (f ∧ (y ∧ c)) ↔ ((f ∧ y) ∧ c)
10. ⊢PL (f ∨ (y ∨ c)) ↔ ((f ∨ y) ∨ c)
11. ⊢PL (f ∧ (y ∨ c)) ↔ ((f ∧ y) ∨ (f ∧ c))
12. ⊢PL (f ∨ (y ∧ c)) ↔ ((f ∨ y) ∧ (f ∨ c))
13. ⊢PL ((f ∧ y) → c) ↔ (f → (y → c))
14. ⊢PL (f → (y → c)) ↔ (y → (f → c))
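⋅ Before attempting a derivation, it can be useful to confirm by brute force that the target really is a PL-tautology. The sketch below is my own illustration (the lambda encoding of formulas is not part of the handout); it checks theorem 1 over all assignments to p and q, together with a non-theorem for contrast:

from itertools import product

def is_tautology(formula, letters=('p', 'q')):
    # 'formula' is a function from an assignment (a dict of truth values) to a bool
    return all(formula(dict(zip(letters, vals)))
               for vals in product([True, False], repeat=len(letters)))

# Theorem 1: ¬(p ∨ q) ↔ (¬p ∧ ¬q)
demorgan = lambda a: (not (a['p'] or a['q'])) == ((not a['p']) and (not a['q']))
print(is_tautology(demorgan))    # True

# Not a theorem: (p → q) ↔ (q → p)
converse = lambda a: ((not a['p']) or a['q']) == ((not a['q']) or a['p'])
print(is_tautology(converse))    # False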

Solutions on the next pages...


17

Natural Deduction in PL: Solutions (Proofs of Replacement Rules)


1. �PL ¬(p ∨ q) ↔ (¬p ∧ ¬q)

Proof.

1 ¬(p ∨ q) assumption

2 p assumption

3 p∨q Add., 2

4 � 1,3

5 ¬p RAA, 2, 4

6 q assumption

7 p∨q Add, 6

8 � 1,7

9 ¬q RAA, 6, 8

10 (¬p ∧ ¬q) Conj., 5, 9

11 ¬(p ∨ q) → (¬p ∧ ¬q) CP, 1, 10

12 ¬p ∧ ¬q assumption

13 p∨q assumption

14 ¬p Simp., 12

15 q DS, 13, 14

16 ¬q Simp., 12

17 � 15,16

18 ¬(p ∨ q) RAA, 13, 17

19 (¬p ∧ ¬q) → ¬(p ∨ q) CP, 12, 18

20 ¬(p ∨ q) ↔ (¬p ∧ ¬q) BC, 11, 19


18

2. �PL ¬(p ∧ q) ↔ (¬p ∨ ¬q)

Proof.
1 ¬(p ∧ q) assumption

2 ¬(¬p ∨ ¬q) assumption

3 ¬p assumption

4 ¬p ∨ ¬q Add., 3

5 � 2,4

6 ¬¬p RAA, 3, 5

7 p DN, 6

8 ¬q assumption

9 ¬p ∨ ¬q Add., 8

10 � 2,9

11 ¬¬q RAA, 8, 10

12 q DN, 11

13 p∧q Conj., 7, 12

14 ¬(¬p ∨ ¬q) → (p ∧ q) CP, 4, 13

15 ¬¬(¬p ∨ ¬q) MT, 1, 14

16 (¬p ∨ ¬q) DN, 15

17 ¬(p ∧ q) → (¬p ∨ ¬q) CP, 1, 16

18 ¬p ∨ ¬q assumption

19 p∧q assumption

20 p Simp., 19

21 ¬q DS, 19, 20

22 q Simp., 19

23 � 21,22

24 ¬(p ∧ q) RAA, 19, 23

25 (¬p ∨ ¬q) → ¬(p ∧ q) CP, 19, 24

26 ¬(p ∧ q) ↔ (¬p ∨ ¬q) BC, 17, 25


19

3. �PL (p → q) ↔ (¬p ∨ q)

Proof.

1 p→q assumption

2 ¬(¬p ∨ q) assumption

3 p assumption

4 q MP, 1, 3

5 ¬p ∨ q Add., 4

6 � 2,4

7 ¬p RAA, 3, 6

8 ¬p ∨ q Add., 7

9 � 2,8

10 ¬¬(¬p ∨ q) RAA, 2, 9

11 ¬p ∨ q DN, 10

12 (p → q) → (¬p ∨ q) CP, 1, 11

13 ¬p ∨ q assumption

14 p assumption

15 q DS, 13, 14

16 p→q CP, 14, 15

17 (¬p ∨ q) → (p → q) CP, 13, 16

18 (p → q) ↔ (¬p ∨ q) BC, 12, 17


20

4. �PL p ↔ (p ∧ p)

Proof.

1 p assumption

2 ¬(p ∧ p) assumption

3 ¬p ∨ ¬p DeM, 2

4 ¬p DS, 1, 3

5 � 1,4

6 ¬¬(p ∧ p) RAA, 2, 5

7 (p ∧ p) DN, 6

8 p → (p ∧ p) CP, 1, 7

9 p∧p assumption

10 p Simp., 9

11 (p ∧ p) → p CP, 9, 10

12 p ↔ (p ∧ p) BC, 8, 11

5. �PL p ↔ (p ∨ p)

Proof.

1 p assumption

2 p∨p Add., 1

3 p → (p ∨ p) CP, 1, 2

4 p∨p assumption

5 ¬p assumption

6 p DS, 4, 5

7 � 5,6

8 ¬¬p RAA, 4, 7

9 p DN, 8

10 (p ∨ p) → p CP, 4, 9

11 (p ∨ p) ↔ p BC, 3, 10
21

6. �PL (p → q) ↔ (¬q → ¬p)

Proof.

1 p→q assumption

2 ¬q assumption

3 p assumption

4 q MP, 1, 3

5 � 2,4

6 ¬p RAA, 3, 5

7 ¬q → ¬p CP, 2, 6

8 (p → q) → (¬q → ¬p) CP, 1, 7

9 ¬q → ¬p assumption

10 p assumption

11 ¬q assumption

12 ¬p MP, 9, 11

13 � 10,12

14 ¬¬q RAA, 11, 13

15 q DN, 14

16 p→q CP, 10, 15

17 (¬q → ¬p) → (p → q) CP, 9, 16

18 (p → q) ↔ (¬q → ¬p) BC, 8, 17


22

7. ⊢PL (p ∧ q) → (q ∧ p)     Note: We only prove the left-to-right directions here since proving the opposite direction is trivial.

Proof.

1 p∧q assumption

2 p Simp., 1

3 q Simp., 1

4 q∧p Conj., 2, 3

5 (p ∧ q) → (q ∧ p) CP, 1, 4

8. �PL (p ∨ q) → (q ∨ p)

Proof.

1 p∨q assumption

2 ¬(q ∨ p) assumption

3 ¬q ∧ ¬p DeM., 2

4 ¬q Simp., 3

5 p DS, 1, 4

6 ¬p Simp., 3

7 � 5,6

8 ¬¬(q ∨ p) RAA, 2, 7

9 q∨p DN, 8

10 (p ∨ q) → (q ∨ p) CP, 1, 9

9. �PL (f ∧ (y ∧ c)) ↔ ((f ∧ y) ∧ c)

Proof.

10. �PL (f ∨ (y ∨ c)) ↔ ((f ∨ y) ∨ c)

Proof.

11. �PL (f ∧ (y ∨ c)) ↔ ((f ∧ y) ∨ (f ∧ c))

Proof.
23

12. �PL (f ∨ (y ∧ c)) ↔ ((f ∨ y) ∧ (f ∨ c))

Proof.

13. �PL ((f ∧ y) → c) ↔ (f → (y → c))

Proof.

14. �PL (f → (y → c)) ↔ (y → (f → c))

Proof.
24

Replacement Rules
⋅ For each theorem above, we add a corresponding replacement rule.
A replacement rule, ‘f :: y’, allows the replacement of f with y at any point in a proof, and vice versa (citing the relevant rule).

1. (¬f ∨ ¬y) :: ¬(f ∧ y) DeMorgan (DeM)

2. (¬f ∧ ¬y) :: ¬(f ∨ y) DeMorgan (DeM)

3. (f → y) :: (¬f ∨ y) Material Implication (IMP)


4. (f ∨ f) :: f Idempotence (Idem)

5. (f ∧ f) :: f Idempotence (Idem)

6. (f → y) :: (¬y → ¬f) Contraposition (Cont)

7. (f ∧ y) :: (y ∧ f) Commutativity (Com)

8. (f ∨ y) :: (y ∨ f) Commutativity (Com)

9. (f ∧ (y ∧ c)) :: ((f ∧ y) ∧ c) Associativity (Assoc)

10. (f ∨ (y ∨ c)) :: ((f ∨ y) ∨ c) Associativity (Assoc)

11. (f ∧ (y ∨ c)) :: ((f ∧ y) ∨ (f ∧ c)) distribution (Dist)

12. (f ∨ (y ∧ c)) :: ((f ∨ y) ∧ (f ∨ c)) distribution (Dist)

13. ((f ∧ y) → c) :: (f → (y → c)) exportation (Exp)

14. (f → (y → c)) :: (y → (f → c)) permutation (Per)


25

Natural Deduction in MPL: Basic Modal Inference Rules


Note: These rules only cover the modal
logics T, S4, and S5.

¬□¬f          ◇f           ¬◇¬f          □f

◇f            ¬□¬f         □f            ¬◇¬f
modal negation (MN)        modal negation (MN)

¬□f           ¬◇f          ◇¬f           □¬f

◇¬f           □¬f          ¬□f           ¬◇f

modal negation (MN)        modal negation (MN)

f                               □f

◇f                              f
possibility introduction (PI)   modal reiteration T (MRT)

□f                              ◇f

□f                              ◇f
modal reiteration S4 (MRS4)     modal reiteration S5 (MRS5)

Rules for null assumption proofs:


null ass.
1. There is no assumption (or a null
assumption).
⋮ 2. Every formula inside the scope of a
null assumption must be deduced
f from preceding formulas inside the
scope of that null assumption unless
□f ni it is deduced by modal reiteration for
the appropriate modal logic.
3. The proof ends with a discharge of
Necessity Introduction (NI)
the null assumption line. After the
discharge of the null assumption line
where the formula immediately before
the discharge is f, the next formula is
�f.
4. A null assumption proof may contain
a subproof, but anything derived via
a subproof must rely on no assump-
tions.
26

Natural Deduction in MPL: Exercises


⋅ Prove the theorems below using the basic inference rules and the
basic modal inference rules above.

1. ⊢S4 □p → □□□p
2. ⊢T ¬□p ∨ (p ∨ q)
3. ⊢S4 □p → □□◇p
4. ⊢T □((p → q) → (p → p))
5. ⊢T □(p → (p ∨ q))
6. ⊢T □(p → q) → (□p → □q)
7. ◇◇p ⊢S4 ◇p
8. ⊢T (□p ∧ ¬◇q) → ¬(¬p ∨ □q)
9. □(p ∨ q), □(p → r), □(q → r) ⊢T □r
10. ⊢T ◇¬¬p → ◇p

Solutions on the next pages...


27

Natural Deduction in MPL: Solutions


1. �S4 �p → ���p

1 �p assumption

2 null assumption

3 �p MRS4, 1

4 ��p NI, 3

5 null assumption

6 ��p MRS4, 3

7 ���p NI, 6

8 �p → ���p CP, 1, 8

2. �T ¬�p ∨ (p ∨ q)

1 �p assumption

2 p MRT, 1

3 p∨q Add, 2

4 �p → (p ∨ q) CP, 1, 3

5 ¬�p ∨ (p ∨ q) IMP, 1, 3
28

3. �S4 �p → ���p

1 �p assumption

2 null assumption

3 �p MRS4, 1

4 p MRT, 3

5 �p PI, 3

6 ��p NI, 5

7 null assumption

8 ��p MRS4, 7

9 ���p NI, 8

10 �p → ���p CP, 1, 10

4. �T ��(p → q) → (p → p)�

1 null assumption

2 p→q assumption

3 p assumption

4 p∧p Idem, 3

5 p Simp, 4

6 p→p CP, 3, 5

7 (p → q) → (p → p) CP, 2, 6

8 ��(p → q) → (p → p)� NI, 7

5. ⊢T □(p → (p ∨ q))

1 null assumption

2 p assumption

3 p∨q Add, 2

4 p → (p ∨ q) CP, 2–3

5 □(p → (p ∨ q)) NI, 1–4


29

6. �T �(p → q) → (�p → �q)

1 �(p → q) ass.

2 �p ass.

3 null assumption

4 (p → q) MRT, 1

5 p MRT, 2

6 q MP, 4, 5

7 �q NI, 3–6

8 �p → �q CP, 2, 8

9 �(p → q) → (�p → �q) CP, 1–8

7. ��p �S4 �p

1 ��p premise

2 �¬p ass.

3 null assumption

4 �¬p MRS4, 2

5 ��¬p NI, 2–4

6 �¬p → ��¬p CP, 2, 5

7 ¬��¬p → ¬�¬p Cont., 6

8 �¬�¬p → ¬�¬p MN, 7

9 ��p → ¬�¬p MN, 8

10 ��p → �p MN, 9

11 �p MP, 1, 10
30

8. �T (�p ∧ ¬�q) → ¬(¬p ∨ �q)

1 �p ∧ ¬�q ass.

2 �p Simp., 1

3 ¬�q Simp., 1

4 �¬q MN, 3

5 ¬q MRT, 4

6 �¬q PI, 5

7 ¬�q MN, 6

8 p MRT, 2

9 p ∧ ¬�q Conj, 7, 8

10 ¬(¬p ∨ ¬¬�q) DeM, 9

11 ¬(¬p ∨ �q) DN, 10

12 (�p ∧ ¬�q) → ¬(¬p ∨ �q) CP, 1, 11


31

9. �(p ∨ q), �(p → r), �(q → r) �T �r

1 �(p ∨ q) premise

2 �(p → r) premise

3 �(q → r) premise

4 null assumption

5 p∨q MRT, 1

6 p→r MRT, 2

7 q→r MRT, 3

8 ¬r assumption

9 ¬p MT, 6, 8

10 q DS, 5, 9

11 r MP, 7, 10

12 � 8, 11

13 ¬¬r RAA, 8–12

14 r DN, 13

15 �r NI, 4–13
32

10. �T �¬¬p → �p Proof without illicit use of DN within


complex formulas.

This includes a proof of the theorem:


1 null assumption �(p → q) → (�p → �q)

2 ¬¬p assumption

3 p DN, 2

4 ¬¬p → p CP, 2, 3

5 �(¬¬p → p) NI, 1–4

6 �¬¬p assumption

7 ¬�¬¬¬p def. �, 6

8 �¬p assumption

9 null assumption

10 ¬¬p → p MRT, 5

11 ¬p MRT, 8

12 ¬¬¬p MT, 10, 11

13 �¬¬¬p NI, 9–12

14 �¬p → �¬¬¬p CP, 8, 13

15 ¬�¬p MT, 7, 14

16 �p def. �, 15

17 �¬¬p → �p CP, 6, 16
33

11. �T (�p ∨ �q) → �(p ∨ q)

1 �p assumption

2 ¬�(p ∨ q) assumption

3 �¬(p ∨ q) MN, 2

4 null assumption

5 ¬(p ∨ q) MRT, 3

6 ¬p ∧ ¬q DeM, 5

7 ¬p Simp., 6

8 �¬p NI, 4–7

9 ¬�p MN, 8

10 � 1, 9

11 ¬¬�(p ∨ q) RAA, 2, 10

12 �(p ∨ q) DN, 11

13 �p → �(p ∨ q) CP, 1, 12

14 �q assumption

15 ¬�(p ∨ q) assumption

16 �¬(p ∨ q) MN, 15

17 null assumption

18 ¬(p ∨ q) MRT, 16

19 ¬p ∧ ¬q DeM, 18

20 ¬q Simp., 19

21 �¬q NI, 17–20

22 ¬�q MN, 21

23 � 14, 22

24 ¬¬�(p ∨ q) RAA, 15, 23

25 �(p ∨ q) DN, 24

26 �q → �(p ∨ q) CP, 14, 25


34

27 ¬�(p ∨ q) assumption

28 ¬�p MT, 13, 27

29 ¬�q MT, 26, 27

30 ¬�p ∧ ¬�q Conj, 28, 29

31 ¬(�p ∨ �q) DeM, 30

32 ¬�(p ∨ q) → ¬(�p ∨ �q) CP, 27, 31

33 (�p ∨ �q) → �(p ∨ q) Cont, 32
