Discrete Mathematics
Miguel A. Lerma
Contents
Introduction
Chapter 1. Logic, Proofs
1.1. Propositions
1.2. Predicates, Quantifiers
1.3. Proofs
Chapter 2. Sets, Functions, Relations
2.1. Set Theory
2.2. Functions
2.3. Relations
Chapter 3. Algorithms, Integers
3.1. Algorithms
3.2. The Euclidean Algorithm
3.3. Modular Arithmetic, RSA Algorithm
Chapter 4. Induction, Recurrences
4.1. Sequences and Strings
4.2. Mathematical Induction
4.3. Recurrence Relations
Chapter 5. Counting
5.1. Basic Principles
5.2. Combinatorics
5.3. Generalized Permutations and Combinations
5.4. Binomial Coefficients
5.5. The Pigeonhole Principle
Chapter 6. Probability
6.1. Probability
Chapter 7. Graph Theory
7.1. Graphs
7.2. Representations of Graphs
7.3. Paths and Circuits
Miguel A. Lerma
mlerma@math.northwestern.edu
Northwestern University
Spring 2005
http://www.math.northwestern.edu/~mlerma/courses/cs310-05s/
CHAPTER 1
Logic, Proofs
1.1. Propositions
p q ¬p p ∧ q p ∨ q p ⊕ q p → q p ↔ q
T T F T T F T T
T F F F T T F F
F T T F T T T F
F F T F F F T T
p ¬p p ∨ ¬p p ∧ ¬p
T F T F
F T T F
Here p ∨ ¬p is a tautology (true for every truth value of p) and p ∧ ¬p
is a contradiction (false for every truth value of p).
p q ¬p ¬p ∨ q p → q
T T F T T
T F F F F
F T T T T
F F T T T
p → q ≡ ¬p ∨ q
¬(p ∨ q) ≡ ¬p ∧ ¬q
¬(p ∧ q) ≡ ¬p ∨ ¬q
p q ¬p ¬q p ∨ q ¬(p ∨ q) ¬p ∧ ¬q p ∧ q ¬(p ∧ q) ¬p ∨ ¬q
T T F F T F F T F F
T F F T T F F F T T
F T T F T F F F T T
F F T T F T T F T T
p ↔ q ≡ (p → q) ∧ (q → p)
p q p → q q → p (p → q) ∧ (q → p) p ↔ q
T T T T T T
T F F T F F
F T T F F F
F F T T T T
¬(p → q) ≡ p ∧ ¬q
p → q ≡ ¬q → ¬p
¬(p ↔ q) ≡ p ⊕ q
In everyday language the words "inverse" and "converse" are often used
interchangeably, but in mathematics, and especially in logic, they name
different propositions: the converse of p → q is q → p, while its inverse
is ¬p → ¬q.
1.1.5. Converse, Contrapositive. The converse of a conditional
proposition p → q is the proposition q → p. As we have seen, the bi-
conditional proposition is equivalent to the conjunction of a conditional
proposition and its converse.
p ↔ q ≡ (p → q) ∧ (q → p)
So, for instance, saying that “John is married if and only if he has a
spouse” is the same as saying “if John is married then he has a spouse”
and “if he has a spouse then he is married”.
On the other hand, if ∀x P (x) is false then it is not true that for
every x, P (x) holds, hence for some x, P (x) must be false. Thus:
¬∀x P (x) ≡ ∃x ¬P (x) .
These two rules can be applied in successive steps to find the negation
of a more complex quantified statement, for instance:
¬∃x∀y P (x, y) ≡ ∀x ¬∀y P (x, y) ≡ ∀x ∃y ¬P (x, y) .
Exercise: Write formally the statement “for every real number there
is a greater real number”. Write the negation of that statement.
1.3. Proofs
p1
p2
...
pn
∴ q
or
p1 , p2 , . . . , pn / ∴ q
p→q
p
∴ q
p q p→q p q
T T T T T
T F F T F
F T T F T
F F T F F
If we look now at the rows in which both p → q and p are true (just
the first row) we see that also q is true, so the argument is valid.
p
q
∴ p∧q
6. Hypothetical Syllogism:
p→q
q→r
∴ p→r
7. Disjunctive Syllogism:
p∨q
¬p
∴ q
8. Resolution:
p∨q
¬p ∨ r
∴ q∨r
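Rules of inference like the ones above can be checked mechanically by examining every truth assignment. The following Python sketch (the function names are my own, not from the notes) declares an argument valid when the conclusion is true in every row where all the premises are true:

```python
from itertools import product

def is_valid(premises, conclusion, num_vars):
    """Check an argument form by brute force over all truth assignments."""
    for values in product([True, False], repeat=num_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # premises hold but the conclusion fails
    return True

# Resolution: p ∨ q, ¬p ∨ r  ∴  q ∨ r
premises = [lambda p, q, r: p or q,
            lambda p, q, r: (not p) or r]
conclusion = lambda p, q, r: q or r
print(is_valid(premises, conclusion, 3))  # True
```

The same checker rejects fallacies such as affirming the consequent (p → q, q ∴ p).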
Arguments are usually written using three columns. Each row con-
tains a label, a statement and the reason that justifies the introduction
of that statement in the argument. That justification can be one of the
following:
then from ¬p(a) we can derive ¬∀x p(x). Answer : The argument is as
follows:
step statement reason
1) ¬p(a) Premise
2) ∃x ¬p(x) Existential Generalization
3) ¬∀x p(x) Negation of Universal Statement
CHAPTER 2
Sets, Functions, Relations
[Figures omitted: Venn diagrams of set operations on sets A and B, and
a three-set Venn diagram with regions labeled by counts for sets M, P
and C.]
1. Associative Laws:
A ∪ (B ∪ C) = (A ∪ B) ∪ C
A ∩ (B ∩ C) = (A ∩ B) ∩ C
2. Commutative Laws:
A∪B =B∪A
A∩B =B∩A
3. Distributive Laws:
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
4. Identity Laws:
A∪∅=A
A∩U=A
5. Complement Laws:
A ∪ Ā = U
A ∩ Ā = ∅
(Ā denotes the complement of A)
6. Idempotent Laws:
A∪A=A
A∩A=A
7. Bound Laws:
A∪U=U
A∩∅=∅
8. Absorption Laws:
A ∪ (A ∩ B) = A
A ∩ (A ∪ B) = A
9. Involution Law:
Ā̄ = A (the complement of the complement of A is A itself)
2.1. SET THEORY 25
Example: The division of the integers Z into even and odd numbers
is a partition: S = {E, O}, where E = {2n | n ∈ Z}, O = {2n + 1 | n ∈
Z}.
Analogously we can define triples or 3-tuples (a, b, c), 4-tuples (a, b, c, d),
. . . , n-tuples (a1 , a2 , . . . , an ), and the corresponding 3-fold, 4-fold,. . . ,
n-fold Cartesian products:
A1 × A2 × · · · × An =
{(a1 , a2 , . . . , an ) | (a1 ∈ A1 ) ∧ (a2 ∈ A2 ) ∧ · · · ∧ (an ∈ An )} .
If all the sets in a Cartesian product are the same, then we can use
an exponent: A2 = A × A, A3 = A × A × A, etc. In general:
An = A × A × · · · × A (n times).
2.2. Functions
[Figure 2.8 omitted: the correspondence x ↦ ±√x between N and Z.
Additional figures, the graph of a function and arrow diagrams between
sets A and B, are also omitted.]
f : A → B and g : B → C compose to give a function A → C:
x ↦ y = f (x) ↦ z = g(y) = g(f (x)) .
2.3. Relations
The set
{a ∈ A | a R b for some b ∈ B}
is called the domain of R. The set
{b ∈ B | a R b for some a ∈ A}
is called the range of R. For instance, in the relation “married to”
above, the domain is the set of married men, and the range is the set
of married women.
Arrow diagrams. Venn diagrams and arrows can be used for repre-
senting relations between given sets. As an example, figure 2.14 rep-
resents the relation from A = {a, b, c, d} to B = {1, 2, 3, 4} given by
R = {(a, 1), (b, 1), (c, 2), (c, 3)}. In the diagram an arrow from x to y
means that x is related to y. This kind of graph is called a directed
graph, or digraph.
Before writing an algorithm to solve a problem, you have to choose a
suitable representation to enter the problem data into the computer.
2.3. RELATIONS 33
[Figure 2.14 omitted: arrow diagram of the relation R from A = {a, b, c, d}
to B = {1, 2, 3, 4}. A second figure shows the digraph of a relation on
the set {1, 2, . . . , 9}.]
Examples:
1. Reflexive: a = a · 1 ⇒ a|a.
2. Antisymmetric: a|b ⇒ b = at for some t, and b|a ⇒ a = bt′ for
some t′. Hence a = att′, which implies tt′ = 1, so t′ is the inverse
of t. The only invertible positive integer is 1, so t = t′ = 1 ⇒ a = b.
3. Transitive: a|b and b|c imply b = at for some t and c = bt′ for
some t′, hence c = att′, i.e., a|c.
8 9
4 6
5 7
2 3
1. Ai ∩ Aj = ∅ for i ≠ j,
2. ⋃n An = A.
Algorithms, Integers
3.1. Algorithms
procedure ProcedureName(Input)
Instructions...
end ProcedureName
1: procedure max(a,b,c)
2: x := a
3: if b>x then
4: x := b
5: if c>x then
6: x := c
7: return x
8: end max
Next we show a few common operations in pseudocode.
x := y
if p then
action
if p then
action1
else
action2
1: while p
2: action
The following is the structure of a for loop:
1: for var := init to limit
2: action
3.1. ALGORITHMS 40
A block of instructions is enclosed between begin and end:
begin
Instruction1
Instruction2
Instruction3
...
end
Comments are enclosed between curly brackets:
{This is a comment}
return output
call Procedure(arguments...)
3.1.2. Recursiveness.
• Factorial:
1. 0! = 1
2. n! = n · (n − 1)! (n ≥ 1)
• Power:
1. a0 = 1
2. an = an−1 a (n ≥ 1)
factorial recursive(n)
→ factorial recursive(n-1)
→ factorial recursive(n-2)
→ ...
→ factorial recursive(0)
After reaching the basic case the procedure returns a value to the last
call, which returns a value to the previous call, and so on up to the
first invocation of the procedure.
1: procedure fibonacci(n)
2: if n=0 then
3: return 0
4: if n=1 then
5: return 1
6: return fibonacci(n-1) + fibonacci(n-2)
7: end fibonacci
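A direct transcription of this pseudocode in Python (a sketch, not part of the notes):

```python
def fibonacci(n):
    """Recursive Fibonacci, mirroring the pseudocode above."""
    if n == 0:
        return 0
    if n == 1:
        return 1
    # each call spawns two further calls, so this runs in exponential time
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(10))  # 55
```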
1: for i := 1 to n
2: x := x + 1
The following double loop performs it n2 times:
1: for i := 1 to n
2: for j := 1 to n
3: x := x + 1
1: for i := 1 to n
2: for j := 1 to i
3: x := x + 1
Since the time that takes to execute an algorithm usually depends
on the input, its complexity must be expressed as a function of the
input, or more generally as a function of the size of the input. Since
the execution time may be different for inputs of the same size, we
define the following kinds of times:
|f (n)| ≤ C1 |g(n)|
|f (n)| ≥ C2 |g(n)|
Order Name
Θ(1) Constant
Θ(log log n) Log log
Θ(log n) Logarithmic
Θ(n) Linear
Θ(n log n) n log n
Θ(n²) Quadratic
Θ(n³) Cubic
Θ(nᵏ) Polynomial
Θ(aⁿ) Exponential
Let’s see now how we find the complexity of algorithms like bubble sort
and merge sort.
(n − 1) + (n − 2) + · · · + 1 = n(n − 1)/2
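The count (n − 1) + (n − 2) + · · · + 1 = n(n − 1)/2 can be confirmed by instrumenting a bubble sort (a sketch; the notes do not give this code):

```python
def bubble_sort(a):
    """Bubble sort that counts comparisons; returns (sorted list, count)."""
    a = list(a)
    n = len(a)
    comparisons = 0
    for i in range(n - 1):          # pass i performs n - 1 - i comparisons
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

sorted_a, count = bubble_sort([5, 1, 4, 2, 8, 3])
print(count, 6 * 5 // 2)  # 15 15
```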
1: procedure gcd(a,b)
2: if a<b then {make a the largest}
3: swap(a,b)
4: while b ≠ 0
5: begin
6: r := a mod b
7: a := b
8: b := r
9: end
10: return a
11: end gcd
3.2. THE EUCLIDEAN ALGORITHM 51
1. a ≡ b (mod m).
2. a = b + km for some integer k.
3. a and b have the same remainder when divided by m.
As an example, tables 3.3.1 and 3.3.2 show the addition and multi-
plication tables for Z5 and Z6 respectively.
+ 0 1 2 3 4
0 0 1 2 3 4
1 1 2 3 4 0
2 2 3 4 0 1
3 3 4 0 1 2
4 4 0 1 2 3

· 0 1 2 3 4
0 0 0 0 0 0
1 0 1 2 3 4
2 0 2 4 1 3
3 0 3 1 4 2
4 0 4 3 2 1

Table 3.3.1. Operational tables for Z5
+ 0 1 2 3 4 5
0 0 1 2 3 4 5
1 1 2 3 4 5 0
2 2 3 4 5 0 1
3 3 4 5 0 1 2
4 4 5 0 1 2 3
5 5 0 1 2 3 4

· 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 1 2 3 4 5
2 0 2 4 0 2 4
3 0 3 0 3 0 3
4 0 4 2 0 4 2
5 0 5 4 3 2 1

Table 3.3.2. Operational tables for Z6
Z∗2 = {1}
Z∗3 = {1, 2}
Z∗4 = {1, 3}
Z∗5 = {1, 2, 3, 4}
Z∗6 = {1, 5}
Z∗7 = {1, 2, 3, 4, 5, 6}
Z∗8 = {1, 3, 5, 7}
Z∗9 = {1, 2, 4, 5, 7, 8}
64 = 3 · 17 + 13 → r = 13
17 = 1 · 13 + 4 → r =4
13 = 3·4+1 → r =1
4 = 4·1+0 → r =0
1 = 13 − 3 · 4 = 13 − 3 · (17 − 1 · 13) = 4 · 13 − 3 · 17
= 4 · (64 − 3 · 17) − 3 · 17 = 4 · 64 − 15 · 17 .
Hence (−15) · 17 ≡ 1 (mod 64), but −15 ≡ 49 (mod 64), so the in-
verse of 17 in (Z∗64 , ·) is 49. We will denote this by writing 17−1 = 49
(mod 64), or 17−1 mod 64 = 49.
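The back-substitution above is exactly the extended Euclidean algorithm; a Python sketch (names are my own) that computes modular inverses the same way:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    """Inverse of a modulo m, assuming gcd(a, m) = 1."""
    g, x, _ = extended_gcd(a, m)
    return x % m

print(mod_inverse(17, 64))  # 49
```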
3.3. MODULAR ARITHMETIC, RSA ALGORITHM 55
For instance
φ(15) = φ(3 · 5) = φ(3) · φ(5) = (3 − 1) · (5 − 1) = 2 · 4 = 8 .
Induction, Recurrences
2. Product notation:
∏_{i=m}^{n} ai = am · am+1 · am+2 · · · · · an
For instance, if an = 2n + 1 then
∏_{n=3}^{6} an = a3 · a4 · a5 · a6 = 7 · 9 · 11 · 13 = 9009 .
Remark : In the basis step we may replace 1 with some other integer
m. Then the conclusion is that the property is true for every integer n
greater than or equal to m.
Example: Prove that the sum of the first n odd positive integers is
n², i.e., 1 + 3 + 5 + · · · + (2n − 1) = n².
This completes the induction, and shows that the property is true for
all positive integers.
Example: Prove that 2n + 1 ≤ 2ⁿ for n ≥ 3.
This completes the induction, and shows that the property is true for
all n ≥ 3.
• 1 + 2 + 3 + · · · + n = n(n + 1)/2 .
• 1² + 2² + 3² + · · · + n² = n(n + 1)(2n + 1)/6 .
• 1³ + 2³ + 3³ + · · · + n³ = (1 + 2 + 3 + · · · + n)² .
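These identities are easy to spot-check numerically before proving them by induction (a sketch, not from the notes):

```python
def check_sum_formulas(limit=50):
    """Verify the three closed forms for 1..limit."""
    for n in range(1, limit + 1):
        nums = range(1, n + 1)
        assert sum(nums) == n * (n + 1) // 2
        assert sum(k * k for k in nums) == n * (n + 1) * (2 * n + 1) // 6
        assert sum(k ** 3 for k in nums) == sum(nums) ** 2
    return True

print(check_sum_formulas())  # True
```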
and
xn = A + c n if r = 1 .
So:
x10 = 110,462,317 .
Counting
For instance, assume that a license plate contains two letters fol-
lowed by three digits. How many different license plates can be printed?
Answer : each letter can be printed in 26 ways, and each digit can be
printed in 10 ways, so 26 · 26 · 10 · 10 · 10 = 676000 different plates can
be printed.
5.1. BASIC PRINCIPLES 70
5.2. Combinatorics
P (n, n) = n! .
By convention 0! = 1.
For instance, there are 3! = 6 permutations of the 3 letters a, b, c.
The number of permutations of size 2 of the 4 letters a, b, c, d is P (4, 2) =
4 × 3 = 12.
Exercise: Given a set A with m elements and a set B with n ele-
ments, find the number of one-to-one functions from A to B.
One way to derive the formula for C(n, r) is the following. Let A
be a set with n objects. In order to generate all possible permutations
of size r of the elements of A we 1) take all possible subsets of size
r in the set A, and 2) permute the r elements in each subset in all
possible ways. Task 1) can be performed in C(n, r) ways, and task
2) can be performed in P (r, r) ways. By the product rule we have
P (n, r) = C(n, r) × P (r, r), hence
C(n, r) = P (n, r)/P (r, r) = n!/(r! (n − r)!) .
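The formula can be checked against a direct enumeration of subsets, using only the standard library (a sketch):

```python
from itertools import combinations
from math import factorial

def C(n, r):
    """Binomial coefficient via C(n, r) = n! / (r! (n - r)!)."""
    return factorial(n) // (factorial(r) * factorial(n - r))

# compare with the number of r-element subsets of an n-element set
for n in range(8):
    for r in range(n + 1):
        assert C(n, r) == len(list(combinations(range(n), r)))

print(C(4, 2))  # 6
```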
5.3. GENERALIZED PERMUTATIONS AND COMBINATIONS 73
P (n; n1 , n2 , . . . , nk ) = n!/(n1 ! n2 ! · · · nk !) .
Exercise: Prove
∑_{k=0}^{n} C(n, k) = 2ⁿ and ∑_{k=0}^{n} (−1)ᵏ C(n, k) = 0 .
Hint: Apply the binomial theorem to (1 + 1)ⁿ and (1 − 1)ⁿ.
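Both identities can be verified numerically for small n before attempting the proof (a sketch):

```python
from math import comb

def check_binomial_sums(limit=20):
    """Check the two binomial-coefficient identities for n = 1..limit."""
    for n in range(1, limit + 1):
        assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n
        assert sum((-1) ** k * comb(n, k) for k in range(n + 1)) == 0
    return True

print(check_binomial_sums())  # True
```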
Probability
6.1. Probability
1. The probability that the shooter does not hit the target in one
shot.
2. The probability that the shooter does not hit the target three
times in a row.
3. The probability that the shooter hits the target at least once
after shooting three times.
Answer :
Graph Theory
7.1. Graphs
Since deg(v) is even for v ∈ Ve , the first sum in the right hand side of
the equality is even. The total sum must be 2e, which is even, so the
second sum must be even too. But its terms are all odd, so there must
be an even number of them.
7.1. GRAPHS 85
[Figures omitted: a weighted graph on vertices a, b, c, d; the 3-cube
with vertices labeled by the binary strings 000–111; and further example
graphs.]
Regular Graph: a simple graph whose vertices have all the same
degree. For instance, the n-cube is regular.
  a b c d
a 0 1 0 1
b 1 0 2 0
c 0 2 0 0
d 1 0 0 1
[Graph omitted: vertices a, b, c, d with edges e1 , . . . , e5 and a loop
at d.]
Figure 7.8
1For some authors if i = j then the entry is twice the number of loops incident
on i; so in the example of figure 7.8 entry (d, d) would be 2 instead of 1.
7.2. REPRESENTATIONS OF GRAPHS 89
e1 e2 e3 e4 e5
a 1 0 0 1 0
b 1 1 1 0 0
c 0 1 1 0 0
d 0 0 0 1 1
[Figure omitted: two isomorphic graphs, one on vertices a1 , b1 , c1 ,
d1 , e1 and one on vertices a2 , b2 , c2 , d2 , e2 .]
Two graphs are isomorphic if and only if for some ordering of their
vertices their adjacency matrices are equal.
During the eighteenth century the city of Königsberg (in East Prus-
sia)1 was divided into four sections, including the island of Kneiphof,
by the Pregel river. Seven bridges connected the regions, as shown in
figure 7.11. It was said that residents spent their Sunday walks trying
to find a way to walk about the city so as to cross each bridge exactly
once and then return to the starting point. The first person to solve
the problem (in the negative) was the Swiss mathematician Leonhard
Euler in 1736. He represented the sections of the city and the seven
bridges by the graph of figure 7.12, and proved that it is impossible to
find a path in it that traverses every edge of the graph exactly once.
In the next section we study why this is so.
1The city is currently called Kaliningrad and is part of the Russian republic.
7.3. PATHS AND CIRCUITS 92
is not greater than L.2 Currently it is not known whether the class
NP is strictly larger than the class P, although it is strongly suspected
that it is. The class NP contains a subclass called NP-complete con-
taining the “hardest” problems of the class, so that their complexity
must be higher than polynomial unless P=NP. The TSP is one of these
problems.
Gray Codes. A Gray code is a sequence s1 , s2 , . . . , s2ⁿ of binary
strings of length n verifying the following conditions:
For instance: 000, 001, 011, 010, 110, 111, 101, 100.
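The listed sequence is the standard reflected Gray code, in which consecutive strings differ in exactly one bit: take the code for n − 1 bits, prefix it with 0, then append its reversal prefixed with 1. A sketch:

```python
def gray_code(n):
    """Reflected binary Gray code on n bits."""
    if n == 0:
        return [""]
    previous = gray_code(n - 1)
    # prefix the list with 0, then its mirror image with 1
    return ["0" + s for s in previous] + ["1" + s for s in reversed(previous)]

print(gray_code(3))
# ['000', '001', '011', '010', '110', '111', '101', '100']
```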
1: procedure dijkstra(w,a,z,L)
2: L(a) := 0
3: for all vertices x ≠ a
4: L(x) := ∞
5: T := set of all vertices
6: {T is the set of all vertices whose shortest}
7: {distance from a has not been found yet}
8: while z in T
9: begin
10: choose v in T with minimum L(v)
11: T := T - {v}
12: for each x in T adjacent to v
13: L(x) := min{L(x),L(v)+w(v,x)}
2Informally, P problems are "easy to solve", and NP problems are problems
whose answer is "easy to check". In a sense the P=NP problem consists of de-
termining whether every problem whose solution is easy to check is also easy to
solve.
14: end
15: return L(z)
16: end dijkstra
[Figure omitted: a weighted graph on vertices a, b, c, d, e, f, z used in
the following example.]
The algorithm would label the vertices in the following way in each
iteration (the boxed vertices are the ones removed from T ):
iteration a b c d e f z
0 0 ∞ ∞ ∞ ∞ ∞ ∞
1 0 2 ∞ 3 4 ∞ ∞
2 0 2 3 3 4 ∞ ∞
3 0 2 3 3 4 ∞ 6
4 0 2 3 3 4 ∞ 4
5 0 2 3 3 4 6 4
6 0 2 3 3 4 6 4
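The same labeling process can be implemented with a priority queue. The sketch below uses a made-up graph (not the one from the figure) whose shortest a–z distance is 4:

```python
import heapq

def dijkstra(graph, a, z):
    """Shortest distance from a to z; graph maps vertex -> {neighbor: weight}."""
    L = {v: float("inf") for v in graph}
    L[a] = 0
    heap = [(0, a)]
    done = set()
    while heap:
        d, v = heapq.heappop(heap)
        if v in done:
            continue
        if v == z:
            return d            # z has been removed from T: distance is final
        done.add(v)
        for x, w in graph[v].items():
            if d + w < L[x]:    # L(x) := min{L(x), L(v) + w(v,x)}
                L[x] = d + w
                heapq.heappush(heap, (L[x], x))
    return L[z]

# a hypothetical weighted graph
g = {"a": {"b": 2, "d": 3}, "b": {"a": 2, "c": 1}, "c": {"b": 1, "z": 1},
     "d": {"a": 3, "z": 4}, "z": {"c": 1, "d": 4}}
print(dijkstra(g, "a", "z"))  # 4
```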
1. χ(Kn ) = n.
2. Let G be a simple graph. The following statements are equiva-
lent:
(a) χ(G) = 2.
(b) G is bipartite.
(c) Every circuit in G has even length.
3. Five Color Theorem (Kempe, Heawood) (not hard to prove):
Every simple, planar graph is 5-colorable.
Trees
8.1. Trees
We draw rooted trees with the root at the top. The arrows indicat-
ing the directions of the edges can be omitted.
[Figure 8.1 omitted: a rooted tree with root r, children a, b, c, and
descendants d, e, f, g, h, i, j, k, l, m, n, o, p.]
The level of a vertex v is the length of the simple path from the root
to v. The height of a rooted tree is the maximum level of its vertices.
Exercise: Consider the tree in Figure 8.1 as a graph, and draw a new
rooted tree in which the vertex a is the root. Then describe the new
relation between r and d.
8.2. BINARY TREES 102
Example: Figure 8.2 contains a binary search tree for the set S =
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}. In order to find an element we start at the
root and compare it to the data in the current vertex (initially the root).
If the element is greater we continue through the right child, if it is
smaller we continue through the left child, if it is equal we have found
it. If we reach a terminal vertex without finding the element, then
that element is not present in S.
[Figure 8.2 omitted: a binary search tree for S, obtained by inserting
the elements in the order 5, 2, 9, 1, 3, 7, 10, 4, 6, 8.]
compare it to the data in the current vertex v. If the new data item is
greater than the data in the current vertex then we move to the right
child, if it is less we move to the left child. If there is no such child
then we create one and put the new data in it. For instance, the tree in
figure 8.3 has been made from the following list of words choosing them
in the order they occur: “IN A PLACE OF LA MANCHA WHOSE
NAME I DO NOT WANT TO REMEMBER”.
[Figure 8.3 omitted: a binary search tree with root IN containing the
words A, PLACE, I, OF, WHOSE, DO, LA, WANT, MANCHA, TO,
NAME, REMEMBER, NOT.]
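The insertion procedure just described can be sketched in Python (the class and function names are my own), building a tree from the same list of words:

```python
class Node:
    def __init__(self, data):
        self.data, self.left, self.right = data, None, None

def insert(root, data):
    """Insert data into a binary search tree, returning the (possibly new) root."""
    if root is None:
        return Node(data)
    if data > root.data:
        root.right = insert(root.right, data)
    elif data < root.data:
        root.left = insert(root.left, data)
    return root  # equal keys are already present; nothing to do

def inorder(root):
    return inorder(root.left) + [root.data] + inorder(root.right) if root else []

words = "IN A PLACE OF LA MANCHA WHOSE NAME I DO NOT WANT TO REMEMBER".split()
root = None
for w in words:
    root = insert(root, w)

print(root.data)                       # IN
print(inorder(root) == sorted(words))  # True
```

An inorder traversal of a binary search tree always lists the keys in sorted order, which gives a quick correctness check.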
8.3. DECISION TREES, TREE ISOMORPHISMS 104
[Figure omitted: decision tree for sorting three elements a1 , a2 , a3 by
successive comparisons such as a1 < a2 ?, with the six orderings at the
leaves.]
[Figure 8.6 omitted: trees T1 , T2 , T3 with different kinds of
isomorphisms.]
[Figure omitted: trees T1 , . . . , T5 .]
[Figure omitted: two binary trees with edges labeled 1 (left) and 0
(right), giving a fixed-length code and a Huffman code for the
characters a, b, c, d.]
character frequency
a 2
b 3
c 7
d 8
e 12
2, 3, 7, 8, 12 → 5, 7, 8, 12 → 12, 8, 12 → 20, 12 → 32
(at each step the two smallest values are merged: 2 + 3 = 5, 5 + 7 = 12,
8 + 12 = 20, 12 + 20 = 32)
[Figure omitted: the Huffman tree, with leaves a (2), b (3), c (7), d (8),
e (12), internal vertices 5, 12, 20, root 32, and edges labeled 1 (left)
and 0 (right).]
character code
a 1111
b 1110
c 110
d 10
e 0
2 + 3 + 7 + 8 + 12 = 32
The other choice yields the following:
4·2 + 4·3 + 3·7 + 2·8 + 1·12 = 69
L : bit length per character
L = 69/32 ≈ 2.16
[Figure omitted: an alternative prefix-code tree with root 32 and
subtree weights 5, 12, 20 over the characters a, b, c, d, e.]
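The repeated merging of the two smallest frequencies is easy to carry out with a heap, and the total number of encoded bits equals the sum of the weights created by the merges (a sketch, not from the notes):

```python
import heapq

def huffman_total_bits(frequencies):
    """Total encoded length: sum of the weights created by each merge."""
    heap = list(frequencies)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        x = heapq.heappop(heap)
        y = heapq.heappop(heap)
        total += x + y          # every symbol below this node gains one bit
        heapq.heappush(heap, x + y)
    return total

print(huffman_total_bits([2, 3, 7, 8, 12]))  # 69
```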
[Figure omitted: game tree for the coin game, with every vertex labeled
0 or 1 as assigned by the minimax procedure.]
value of the labels of its children. This process is called the minimax
procedure. Every vertex labeled "1" will represent a position in which
the first player has advantage and can win if he/she works without
making mistakes; on the other hand, vertices labeled "0" represent
positions for which the second player has advantage. Now the strategy
is for the first player to select at each position a child with maximum
value, while the second player will be interested in selecting a child
with minimum value. If the starting position has been labeled "1" that
means that the first player has a winning strategy, otherwise the second
player has advantage. For instance in the present game the first player
has advantage at the initial position, and only one favorable movement
at that point: (3, 1) → (0, 1), i.e., he/she must remove all 3 coins from
the first pile. If for any reason the first player makes a mistake and
removes say one coin from the first pile, going to position (2, 1), then
the second player has one favorable move, to vertex (0, 1), which is the
one with minimum "value".
[Figure 8.14 omitted: expression tree for a + b ∗ c + d ↑ e ∗ (f + h).]
T1 T2 ... Tk
Figure 8.15. Ordering of trees.
8.5. SPANNING TREES 116
5. Go to step 2.
1: procedure bfs(V,E)
2: S := (v1) {ordered list of vertices of a fix level}
3: V’ := {v1} {v1 is the root of the spanning tree}
4: E’ := {} {no edges in the spanning tree yet}
5: while true
6: begin
7: for each x in S, in order,
8: for each y in V - V’
9: if (x,y) is an edge then
10: add edge (x,y) to E’ and vertex y to V’
11: if no edges were added then
12: return T = (V',E') {the spanning tree}
13: S := children of S
14: end
15: end bfs
Figure 8.17 shows the spanning tree obtained using the breadth-first
search algorithm on the graph with its vertices ordered lexicographi-
cally: a, b, c, d, e, f, g, h, i.
1: procedure dfs(V,E)
2: V’ := {v1} {v1 is the root of the spanning tree}
3: E’ := {} {no edges in the spanning tree yet}
4: w := v1
5: while true
6: begin
7: while there is an edge (w,v) that when added
8: to T does not create a cycle in T
9: begin
10: Choose first v such that (w,v)
11: does not create a cycle in T
12: add (w,v) to E’
13: add v to V’
14: w := v
15: end
16: if w = v1 then
17: return T
18: w := parent of w in T {backtrack}
19: end
20: end dfs
Figure 8.18 shows the spanning tree obtained using the depth-first
search algorithm on the graph with its vertices ordered lexicographi-
cally: a, b, c, d, e, f, g, h, i.
[Figure omitted: a weighted graph on vertices a, b, c, d, e, f with edge
weights 1–6, used in the minimal spanning tree examples.]
1: procedure prim(V,w,s)
2: V' := {s} {vertex set starts with s}
3: E' := {} {edge set initially empty}
4: for i := 1 to n-1 {put n-1 edges in spanning tree}
5: begin
6: find x in V' and y in V - V' with minimum w(x,y)
7: add y to V'
8: add (x,y) to E'
9: end
10: return E'
11: end prim
[Figure 8.20 omitted: a weighted graph on vertices including a, c and z,
with edge weights 1, 2, 4, 100 and 101.]
1: procedure kruskal(E,w,n)
2: V' := V
3: E' := {}
4: while |E'| < n-1
5: begin
6: among all edges not completing a cycle in T
7: choose e of minimum weight and add it to E'
8: end
9: T' := (V',E')
10: return T'
11: end kruskal
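Kruskal's algorithm is commonly implemented with a union-find structure to detect cycles; a sketch with a hypothetical edge list (vertices 0–5):

```python
def kruskal(n, edges):
    """Minimal spanning tree of vertices 0..n-1; edges are (weight, u, v)."""
    parent = list(range(n))

    def find(v):                    # union-find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = []
    for w, u, v in sorted(edges):   # consider edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                # skip edges that would complete a cycle
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

edges = [(4, 0, 1), (2, 0, 2), (5, 1, 3), (1, 2, 3), (3, 2, 4), (6, 3, 5), (2, 4, 5)]
tree = kruskal(6, edges)
print(len(tree), sum(w for w, _, _ in tree))  # 5 12
```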
Boolean Algebras
OR GATE: inputs x1 , x2 ; output x1 + x2 .
AND GATE: inputs x1 , x2 ; output x1 · x2 .
NOT GATE: input x; output x′ (the complement of x).
x1 x2 x1 + x2
1 1 1
1 0 1
0 1 1
0 0 0
OR GATE
9.1. COMBINATORIAL CIRCUITS 123
x1 x2 x1 · x2
1 1 1
1 0 0
0 1 0
0 0 0
AND GATE
x x
1 0
0 1
NOT GATE
[Circuit diagram omitted: a combinatorial circuit with inputs x1 , x2 ,
x3 and output y.]
x1 x2 x3 y = ((x1 + x2 ) · x3 )′ (′ denotes NOT)
1 1 1 0
1 1 0 1
1 0 1 0
1 0 0 1
0 1 1 0
0 1 0 1
0 0 1 1
0 0 0 1
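Reading the rows, the output column is the complement of (x1 + x2) · x3 (an overall overline that is easy to lose in print). A quick Python check of all eight rows (a sketch):

```python
def y(x1, x2, x3):
    """Output of the circuit: the complement of (x1 OR x2) AND x3."""
    return int(not ((x1 or x2) and x3))

rows = [(1, 1, 1), (1, 1, 0), (1, 0, 1), (1, 0, 0),
        (0, 1, 1), (0, 1, 0), (0, 0, 1), (0, 0, 0)]
print([y(*r) for r in rows])  # [0, 1, 0, 1, 0, 1, 1, 1]
```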
stays so even after x2 goes back to its original value 0. That way we
can store a bit. We can “delete” it by switching input x1 to 0.
[Flip-flop circuit diagram omitted: two cross-coupled gates with inputs
x1 and x2 .]
1. Associative
(a + b) + c = a + (b + c)
(a · b) · c = a · (b · c)
2. Commutative
a+b=b+a
a·b=b·a
3. Distributive
a · (b + c) = (a · b) + (a · c)
a + (b · c) = (a + b) · (a + c)
4. Identity
a+0=a
a·1=a
5. Complement
a + a′ = 1
a · a′ = 0
(a′ denotes the complement of a)
x y (x + y)′ x′ · y′
1 1 0 0
1 0 0 0
0 1 0 0
0 0 1 1
1. Associative
(x ∨ y) ∨ z = x ∨ (y ∨ z)
(x ∧ y) ∧ z = x ∧ (y ∧ z)
2. Commutative
x∨y =y∨x
x∧y =y∧x
3. Distributive
x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)
x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z)
4. Identity
x∨0=x
x∧1=x
5. Complement
x ∨ x′ = 1
x ∧ x′ = 0
(x′ denotes the complement of x)
1. Idempotent
x∨x=x
x∧x=x
2. Bound
x∨1=1
x∧0=0
3. Absorption
x ∨ (x ∧ y) = x
x ∧ (x ∨ y) = x
4. Involution
x′′ = x
5. 0 and 1
0′ = 1
1′ = 0
6. De Morgan's
(x ∨ y)′ = x′ ∧ y′
(x ∧ y)′ = x′ ∨ y′
For instance the first idempotent law can be proved like this: x =
x ∨ 0 = x ∨ (x ∧ x′) = (x ∨ x) ∧ (x ∨ x′) = (x ∨ x) ∧ 1 = x ∨ x.
9.2. BOOLEAN FUNCTIONS, APPLICATIONS 127
x1 x2 x1 ⊕ x2
1 1 0
1 0 1
0 1 1
0 0 0
[Circuit diagrams omitted: gate realizations of functions such as
x1 · x2 and x1 + x2 from inputs x1 , x2 , and of functions of x and y.]
x y z f(x,y,z)
1 1 1 1
1 1 0 1
1 0 1 0
1 0 0 1
0 1 1 0
0 1 0 0
0 0 1 0
0 0 0 0
x y s c
1 1 0 1
1 0 1 0
0 1 1 0
0 0 0 0
[Circuit diagram omitted: a circuit with inputs x, y producing the sum
bit s and the carry bit c.]
        f           g
S \ I   a    b      a    b
σ0      σ0   σ1     1    0
σ1      σ1   σ1     0    0
10.1. FINITE STATE MACHINES 134
[State diagram omitted: start state σ0 loops on a/1 and goes to σ1 on
b/0; σ1 loops on a/0 and b/0.]
[State diagram omitted: a machine with start state σ0 and states σ1 ,
σ2 , with transitions labeled a/0, b/0, b/1 and a/1.]
[State diagram omitted: two states NC and C, with NC the start state
and transitions labeled by input pairs and outputs such as 11/0, 10/0,
00/1, 10/1 and 11/1.]
4. An initial state σ ∈ S.
5. A subset F ⊆ S of accepting or final states.
[State diagram omitted: start state σ0 loops on b/0 and goes to σ1 on
a/1; σ1 loops on a/1 and returns to σ0 on b/0.]
The second kind of diagram omits the outputs and represents the
accepting states with double circles:
[State diagram omitted: the same automaton with the accepting state
σ1 drawn as a double circle; σ0 loops on b and goes to σ1 on a, while
σ1 loops on a and returns to σ0 on b.]
Two finite-state automata that accept exactly the same set of strings
are said to be equivalent. For instance the following automaton also
accepts precisely the strings of a's and b's that end with an a, so it is
equivalent to the automaton shown above:
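The two-state automaton accepting strings that end with an a can be simulated directly with a transition table (a sketch; the state names are my own):

```python
def accepts(dfa, start, accepting, string):
    """Run a deterministic automaton over a string and test acceptance."""
    state = start
    for symbol in string:
        state = dfa[(state, symbol)]
    return state in accepting

# s0 loops on b and goes to s1 on a; s1 loops on a and returns to s0 on b
dfa = {("s0", "a"): "s1", ("s0", "b"): "s0",
       ("s1", "a"): "s1", ("s1", "b"): "s0"}

print(accepts(dfa, "s0", {"s1"}, "abba"))  # True
print(accepts(dfa, "s0", {"s1"}, "ab"))    # False
```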
[State diagram omitted: an equivalent automaton with three states σ0 ,
σ1 , σ2 and transitions on a and b.]
[State diagram omitted: an automaton with states E (accepting) and O,
with transitions on a between them.]
[State diagram omitted: an automaton with states σ0 , σ1 , σ2 (σ1
accepting) and transitions on a and b.]
[State diagram omitted: an automaton with states σ0 , σ1 , σ2 , σ3 (σ3
accepting) and transitions on a and b.]
10.2. LANGUAGES AND GRAMMARS 137
E → T,
or the sum of an algebraic expression and a term
E → E + T.
T →T ∗F
A factor may consist of an algebraic expression between parentheses
F → (E),
F → z.
Those expressions are called productions, and tell us how we can
generate syntactically correct algebraic expressions by replacing suc-
cessively the symbols on the left by the expressions on the right. For
instance the algebraic expression "y + (x ∗ y + y) ∗ x" can be generated
like this:
E ⇒ E + T ⇒ T + T ⇒ F + T ⇒ y + T ⇒ y + T ∗ F ⇒ y + F ∗ F ⇒
y + (E) ∗ F ⇒ y + (E + T ) ∗ F ⇒ y + (T + T ) ∗ F ⇒ y + (T ∗ F + T ) ∗ F ⇒
y + (F ∗ F + T ) ∗ F ⇒ y + (x ∗ F + T ) ∗ F ⇒ y + (x ∗ y + T ) ∗ F ⇒
y + (x ∗ y + F ) ∗ F ⇒ y + (x ∗ y + y) ∗ F ⇒ y + (x ∗ y + y) ∗ x .
We write G = (V, T, σ, P ).
A→B.
E ::= T | E + T
T ::= F | T ∗ F
F ::= (E) | x | y | z
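This grammar can be turned into a small recognizer by rewriting the left recursion E ::= E + T as the iteration E ::= T (+ T)*, and likewise for T. A Python sketch (not from the notes):

```python
def parses(s):
    """Recursive-descent recognizer for E ::= T | E+T, T ::= F | T*F,
    F ::= (E) | x | y | z, with left recursion rewritten as iteration."""
    pos = 0

    def peek():
        return s[pos] if pos < len(s) else None

    def expr():                # E ::= T ('+' T)*
        nonlocal pos
        if not term():
            return False
        while peek() == "+":
            pos += 1
            if not term():
                return False
        return True

    def term():                # T ::= F ('*' F)*
        nonlocal pos
        if not factor():
            return False
        while peek() == "*":
            pos += 1
            if not factor():
                return False
        return True

    def factor():              # F ::= '(' E ')' | 'x' | 'y' | 'z'
        nonlocal pos
        if peek() in ("x", "y", "z"):
            pos += 1
            return True
        if peek() == "(":
            pos += 1
            if expr() and peek() == ")":
                pos += 1
                return True
        return False

    return expr() and pos == len(s)

print(parses("y+(x*y+y)*x"))  # True
print(parses("y+*x"))         # False
```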
There are also type 0 languages that are not type 1, but they are
harder to describe.
E → T, E → E + T,
T → F, T →T ∗F
F → (E), F → L,
L → x, L → y, L → z.
Example: The Peano curve. The Peano curve is a space filling curve,
i.e., a function f : [0, 1] → [0, 1]2 such that the range of f is the whole
square [0, 1]2 , defined as the limit of the sequence of curves shown in
the figures below.
T = {l, r}, N = {C, L, R}, with start symbol C, and productions
C → LLLL ,
L → RLLLR , R → RLR ,
L → l, R → r, l → l, r → r.
10.3. LANGUAGE RECOGNITION 144
So, for instance, the expression a∗ bb∗ represents all strings of the
form aⁿbᵐ with n ≥ 0, m > 0, a∗ (b + c) is the set of strings consisting
of any number of a's followed by a b or a c, and a(a + b)∗ b is the set of
strings over {a, b} that start with a and end with b, etc.
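These regular expressions translate directly into Python's re syntax (∗ becomes *, and + between alternatives becomes |); a quick check:

```python
import re

# a*bb*: any number of a's followed by at least one b
assert re.fullmatch("a*bb*", "aaabb")
assert not re.fullmatch("a*bb*", "aaa")

# a(a+b)*b: strings over {a, b} that start with a and end with b
assert re.fullmatch("a(a|b)*b", "abab")
assert not re.fullmatch("a(a|b)*b", "ba")

print("all patterns behave as described")
```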
[State diagram omitted: an automaton with states σ (start), C and F
(accepting); σ loops on b and goes to C on a; C loops on b and can
also move to F on b.]
σ → bσ , σ → aC , C → bC , C → bF , F → λ.
f (S, x) = {S′ | S → xS′ ∈ P } ∪ {F | S → x ∈ P }.
F = {F } ∪ {S | S → λ ∈ P } .
1. S′ = P(S).
2. I′ = I.
3. σ′ = {σ}.
4. F′ = {X ⊆ S | X ∩ F ≠ ∅}.
5. f ′(X, x) = ⋃_{S∈X} f (S, x) , f ′(∅, x) = ∅ .
[State diagram repeated from above: states σ (start), C and F
(accepting), with σ looping on b and going to C on a, and C moving to
C or F on b.]
Answer : The set of input symbols is the same as that of the given
automaton: I′ = I = {a, b}. The set of states is the set of subsets of
S = {σ, C, F }, i.e.:
S′ = {∅, {σ}, {C}, {F }, {σ, C}, {σ, F }, {C, F }, {σ, C, F }} .
The starting state is {σ}. The final states of A′ are the elements of S′
containing some final state of A:
F′ = {{F }, {σ, F }, {C, F }, {σ, C, F }} .
[State diagram omitted: the deterministic automaton on the eight
subset-states, with accepting states those containing F.]
We notice that some states are unreachable from the starting state.
After removing the unreachable states we get the following simplified
version of the finite-state automaton:
[State diagram omitted: the simplified automaton with states {σ}
(start), {C}, {C, F } (accepting) and ∅; {σ} loops on b and goes to
{C} on a; {C} goes to {C, F } on b and to ∅ on a; {C, F } loops on b
and goes to ∅ on a; ∅ goes to itself on a and b.]
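The powerset construction, restricted to the subset-states actually reachable from {σ}, can be carried out mechanically (a sketch; the nondeterministic transition table is the one from the example above):

```python
def determinize(nfa, start, alphabet):
    """Powerset construction, keeping only states reachable from {start}."""
    start_set = frozenset([start])
    dfa = {}
    todo, seen = [start_set], {start_set}
    while todo:
        X = todo.pop()
        for x in alphabet:
            # union of the images of the states in X under input x
            image = frozenset(q for S in X for q in nfa.get((S, x), ()))
            dfa[(X, x)] = image
            if image not in seen:
                seen.add(image)
                todo.append(image)
    return dfa, seen

# nondeterministic automaton: σ loops on b, goes to C on a; C goes to C or F on b
nfa = {("σ", "b"): {"σ"}, ("σ", "a"): {"C"},
       ("C", "b"): {"C", "F"}}
dfa, states = determinize(nfa, "σ", "ab")
print(sorted("".join(sorted(X)) for X in states))
# ['', 'C', 'CF', 'σ'] — only four of the eight subsets are reachable
```

This reproduces the simplification noted above: the unreachable subset-states never appear.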
A.1. EFFICIENT COMPUTATION OF POWERS MODULO M 151
/* modular exponentiation a^x mod m (header and initializations
   reconstructed; the names are illustrative) */
int pow_mod(int a, int x, int m)
{
    int p = 1;          /* accumulates the partial result */
    int y = 1 << 30;    /* mask running over the bits of x, highest first */
    a %= m;
    while (y) {
        p *= p;         /* squaring step */
        p %= m;
        if (x & y) {    /* multiply in a when the current bit of x is 1 */
            p *= a;
            p %= m;
        }
        y >>= 1;
    }
    return p;
}
[Figure omitted: a Turing machine, consisting of a control unit with a
tape head moving along an infinite tape.]
The execution of a Turing machine stops when it enters the Halt state
or when it enters a state for which there is no valid move. The output
of the Turing machine is the contents of the tape when the machine
stops.
We say that an input string is accepted by a Turing machine if
the machine enters the Halt state. Otherwise the string is rejected.
This can happen in two ways: by entering a state other than the Halt
state from which there is no move, or by running forever (for instance
executing an infinite loop).
If a Turing machine has at least two instructions with the same state
and input letter, then the machine is nondeterministic. Otherwise it is
deterministic.