Theory of Computation
Dr. Sarmad Abbasi
Lecture 21: Overview
Big-Oh notation.
Little-o notation.
Time Complexity Classes
Non-deterministic TMs
The Class P
Resource Bounded Computations
Let us now look at resource-bounded computations.
The most important resources are:
1 Time
2 Space
Resource Bounded Computations
Let
A = {0^k 1^k : k ≥ 0}.
We saw an algorithm that accepts A in time O(n^2).
Big Oh Notation
Let f, g : N → R. We say that
f(n) = O(g(n))
if there are positive integers c and n0 such that for all n ≥ n0,
f(n) ≤ c·g(n).
When f(n) = O(g(n)), we say that g is an asymptotic upper bound on f.
Sometimes we just say that g is an upper bound on f.
Big Oh Notation
Let
f(n) = 13n^4 + 7n^2 + 13;
then
f(n) = O(n^4).
Furthermore,
f(n) = O(n^5).
But it is not true that
f(n) = O(n^3).
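To make this concrete, here is one choice of constants witnessing the first bound (using nothing beyond the definition above): for all n ≥ 1,
\[
13n^4 + 7n^2 + 13 \le 13n^4 + 7n^4 + 13n^4 = 33n^4,
\]
so the definition is satisfied with c = 33 and n_0 = 1. No such constants can exist for n^3, since 13n^4/(c\,n^3) = 13n/c eventually exceeds 1 for every fixed c.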
Big Oh Notation
Suppose that
f(n) = log_b n
where b is a constant. Note that
log_b n = (log_2 n) / (log_2 b).
Hence, we can say
f(n) = O(log n).
Let
f_2(n) = 5n log n + 200n log log n + 17;
then f_2(n) = O(n log n).
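As a quick check of the last claim (a sketch using only the definition above): log log n ≤ log n for n ≥ 2, and 17 ≤ n log n once n ≥ 8, so for all n ≥ 8,
\[
5n\log n + 200n\log\log n + 17 \le 5n\log n + 200n\log n + n\log n = 206\,n\log n,
\]
i.e. f_2(n) = O(n log n) with c = 206 and n_0 = 8.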
Big Oh Notation
We can also use O notation in exponents. Thus
2^{O(n)}
stands for a function that is upper bounded by
2^{cn}
for some constant c.
Big Oh Notation
We have to be careful though. For example, consider
2^{O(log n)}.
Well, remember that O(log n) has a constant attached to it. Thus this
function is upper bounded by
2^{c log n}
for some constant c. Now, we get
2^{c log n} = (2^{log n})^c = n^c.
Thus 2^{O(log n)} is polynomial.
Big Oh Notation
Bounds of the form n^c are called polynomial bounds, whereas bounds of
the form 2^{n^c} are called exponential bounds.
Little-o Notation
The Big-Oh notation roughly says that f is upper bounded by g. Now we
want to say that
f grows strictly slower than g.
Let f, g : N → R^+. We say that
f(n) = o(g(n))
if
lim_{n→∞} f(n)/g(n) = 0.
Thus the ratio of f and g becomes closer and closer to 0.
Little-o Notation
1 √n = o(n)
2 n = o(n log log n)
3 n log log n = o(n log n)
4 n log n = o(n^2)
5 n^2 = o(n^3)
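As an illustration of how such claims are verified, here is item 4 worked out with the limit definition:
\[
\lim_{n\to\infty} \frac{n\log n}{n^2} = \lim_{n\to\infty} \frac{\log n}{n} = 0,
\]
since log n grows more slowly than n. The other items follow from similar limit computations.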
Little-o Notation
Let's look at a new TM that recognizes
A = {0^k 1^k : k ≥ 0}.
1 Scan the tape and make sure all the 0s come before the 1s.
2 Repeat as long as some 0s and some 1s remain on the tape:
1 Scan the tape and, if the total number of 0s and 1s is odd, reject.
2 Scan again and cross off every other 0; do the same with every other 1.
3 If all 0s and 1s are crossed off, accept.
Thus, if we have 13 0s, in the next stage there would be 6.
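The following Python sketch mirrors the crossing-off idea at a high level; it is an illustration of the argument (the function name and counters are my own), not a literal single-tape machine:

```python
def accepts(w: str) -> bool:
    """Decide A = {0^k 1^k : k >= 0} by repeatedly halving the counts,
    mirroring the O(n log n) crossing-off TM sketched above."""
    # Stage 1: all 0s must come before all 1s.
    if "10" in w:
        return False
    zeros, ones = w.count("0"), w.count("1")
    # Stage 2: repeat while some 0s and some 1s remain.
    while zeros > 0 and ones > 0:
        # If the total number of remaining 0s and 1s is odd, reject.
        if (zeros + ones) % 2 == 1:
            return False
        # Crossing off every other 0 (and every other 1) halves each
        # count, rounding down: e.g. 13 zeros become 6.
        zeros //= 2
        ones //= 2
    # Accept only if everything has been crossed off.
    return zeros == 0 and ones == 0

assert accepts("000111") and accepts("")
assert not accepts("0011100") and not accepts("0101")
```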
Little-o Notation
This algorithm shows that there is an
O(n log n)
time TM that decides A. Since
n log n = o(n^2),
this points to a real algorithmic improvement.
Time complexity
Time Complexity Classes
Let t : N → N be a function. Define the time complexity class
TIME(t(n))
to be
TIME(t(n)) = {L | L is decided by an O(t(n))-time Turing machine}.
Time complexity
So for example:
1 TIME(n) is the set of all languages, L, for which there is a linear-time
TM that accepts L.
2 TIME(n^2) is the set of all languages, L, for which there is a quadratic-time
TM that accepts L.
Time complexity
Recall
A = {0^k 1^k : k ≥ 0}.
Our first algorithm (or TM) showed that
A ∈ TIME(n^2).
The second one showed that
A ∈ TIME(n log n).
Notice that
TIME(n log n) ⊆ TIME(n^2).
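This containment is a special case of a general fact that follows directly from the definitions:
\[
f(n) = O(g(n)) \;\Longrightarrow\; \mathrm{TIME}(f(n)) \subseteq \mathrm{TIME}(g(n)),
\]
since any O(f(n))-time TM is in particular an O(g(n))-time TM; here f(n) = n log n and g(n) = n^2.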
Time complexity
What will happen if we allow ourselves a 2-tape TM?
Can we have a faster TM?
Yes, if we have two tapes we can do the following:
Time complexity
1 Scan across the input to check that all 0s come before the 1s.
2 Copy all the 0s to the second tape.
3 Scan the first tape in the forward direction and the second tape in
the reverse direction, crossing off 0s and 1s in pairs.
4 If all 0s and all 1s have been crossed off, accept; else reject.
In fact, one can design a 2-tape TM that accepts A while scanning the
input only once!
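Here is a rough Python analogue of this 2-tape idea, using a list for the second tape; it is a sketch for intuition (the names are mine), since a real 2-tape TM works with head movements rather than Python lists:

```python
def accepts_two_tape(w: str) -> bool:
    """Linear-time idea behind the 2-tape TM for A = {0^k 1^k : k >= 0}."""
    # Step 1: all 0s must appear before all 1s.
    if "10" in w:
        return False
    # Step 2: copy the 0s onto the "second tape".
    tape2 = [c for c in w if c == "0"]
    # Step 3: scan the 1s on the first tape while moving backwards over
    # the 0s on the second tape, crossing off one 0 per 1.
    ones = [c for c in w if c == "1"]
    for _ in ones:
        if not tape2:          # more 1s than 0s
            return False
        tape2.pop()            # cross off one 0
    # Step 4: accept only if every 0 has also been crossed off.
    return not tape2

assert accepts_two_tape("0011") and not accepts_two_tape("011")
```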
Time complexity
This leads to an important question:
Question
Can we design a 1-tape TM that accepts A in o(n log n) time, preferably
linear time?
The answer is
NO!
Time complexity
In fact, there is a language L such that
1 L is accepted by a 2-tape TM in time O(n).
2 L cannot be accepted by any 1-tape TM in time o(n^2).
Time complexity
Thus the definition of TIME(t(n)) changes if we talk about 1-tape TMs or
2-tape TMs. Fortunately, it does not change by too much, as the following
theorem shows.
Theorem
Let t(n) be a function where t(n) ≥ n. Then every t(n)-time k-tape TM
has an equivalent O(t^2(n))-time single-tape TM.
Time complexity
Notice the difference from computability theory. Previously we showed:
Theorem
Every k-tape TM has an equivalent 1-tape TM.
(We ignored time.) Now, we are doing a more detailed analysis.
Time complexity
The proof of the theorem is just a more careful study of the previous
proof. Remember, given a k-tape TM M, we made a 1-tape TM N. N
works as follows:
1 On input x, convert the input to #q0#x#···#, the start
configuration of M. This configuration says that x is on the first
tape, the rest of the tapes are empty, and the machine is in q0.
2 In each pass over the tape, change the current configuration to the
next one.
3 If an accepting configuration is reached, accept.
4 If a rejecting configuration is reached, reject.
Now, we just have to estimate how much time N requires.
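To make step 1 concrete, here is a small Python sketch of one possible way N could lay out a k-tape configuration of M on its single tape; the specific encoding (a '#' between tapes and a dot before each scanned symbol) is an illustrative choice, not the only one:

```python
def encode_configuration(state: str, tapes: list[str], heads: list[int]) -> str:
    """Flatten a k-tape configuration (state, tape contents, head positions)
    into a single string, with '#' separating the tapes and a '.' placed
    just before the symbol each head is scanning."""
    parts = []
    for tape, h in zip(tapes, heads):
        parts.append(tape[:h] + "." + tape[h:])
    return "#" + state + "#" + "#".join(parts) + "#"

# Start configuration for input x = "0011" on a 2-tape machine:
# tape 1 holds x, tape 2 is blank ('_'), and both heads are on cell 0.
print(encode_configuration("q0", ["0011", "_"], [0, 0]))
# prints: #q0#.0011#._#
```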
Time complexity
On any input x of length n, we make the following claims:
1 M uses at most t(n) cells on each of its k tapes.
2 Thus any configuration has length at most k·t(n) = O(t(n)).
3 Thus each pass of N requires at most O(t(n)) steps.
4 N makes at most t(n) passes over its tape (after that M must halt).
This shows that N runs in time
O(t(n) × t(n)) = O(t^2(n)).
Time complexity
Where did we use the fact that
t(n) ≥ n?
In the first step the machine converts x to the initial configuration. This
takes time O(n). Thus the total time is actually
O(n) + O(t^2(n)) = O(t^2(n)).
This is a reasonable assumption, as M requires time O(n) just to read its
input, which is necessary for most problems: the answer usually depends on the
entire input.
Time complexity
You have studied many non-deterministic models such as:
1 NFA: Non-deterministic finite automata.
2 NPDA: Non-deterministic pushdown automata.
We will now define non-deterministic TMs.
Time complexity
A non-deterministic TM is formally given as M = (Q, Σ, Γ, q0, qa, qr, δ),
where
1 Q is the finite set of states.
2 Σ is the input alphabet.
3 Γ is the work alphabet.
4 q0, qa, qr are the start, accept and reject states.
5 δ is now a transition function:
δ : Q × Γ → P(Q × Γ × {L, R}).
For example, we may have
δ(q5, a) = {(q7, b, R), (q4, c, L), (q3, a, R)}.
Thus the machine has three possible ways to proceed.
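One convenient way to picture such a transition function is as a table mapping each (state, symbol) pair to a set of possible moves. The Python fragment below is just that picture, using the states from the example above; it is illustrative only:

```python
# Each entry maps (state, scanned symbol) to a set of possible moves,
# where a move is (next state, symbol to write, head direction).
delta = {
    ("q5", "a"): {("q7", "b", "R"), ("q4", "c", "L"), ("q3", "a", "R")},
    # ... one entry for every (state, symbol) pair of the machine
}

# The machine in state q5 reading 'a' may proceed in any of 3 ways:
print(len(delta[("q5", "a")]))   # prints 3
```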
Time Complexity
We can think of a non-deterministic computation as a tree: the root is the start configuration, and the children of a node are the configurations reachable from it in one step.
Time complexity
We will say that a non-deterministic TM M accepts x if x is accepted on
at least one branch.
On the other hand, M rejects x if all branches of the tree reach the reject
state.
Time complexity
The nondeterministic running time of M is the length of the longest
branch. More formally, the running time of a nondeterministic TM M is
the function f : N → N such that f(n) is the maximum number of steps
M uses on any branch of its computation on any input of length n.
Time complexity
A few comments on the definition:
1 The notion of non-deterministic time does not correspond to any
real-world machine we can build.
2 It is merely a mathematical definition.
3 We will see that this definition is extremely useful. It allows us to
capture many interesting problems.
Time complexity
Let us prove a relation between non-deterministic time and
deterministic time.
Theorem
Let t(n) be a function, where t(n) ≥ n. Then every t(n)-time
nondeterministic single-tape TM has an equivalent
2^{O(t(n))}
time deterministic single-tape TM.
Thus this theorem says that we can get rid of non-determinism at the cost
of an exponential increase in time!
Time complexity
Let N be a non-deterministic TM that runs in time t(n). Let us first
estimate how big the computation tree of N can be:
1 The tree has depth t(n).
2 Let b be the maximum number of legal choices given by N's transition
function.
3 Then this tree can have b^{t(n)} leaves.
4 The total number of nodes can be at most
1 + b + b^2 + · · · + b^{t(n)} ≤ 2b^{t(n)}.
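The last bound is the usual geometric-series estimate; assuming b ≥ 2 (a machine with at most one choice per step is already deterministic),
\[
1 + b + b^2 + \cdots + b^{t(n)} \;=\; \frac{b^{t(n)+1} - 1}{b - 1} \;\le\; \frac{b}{b-1}\, b^{t(n)} \;\le\; 2\, b^{t(n)}.
\]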
Time complexity
A deterministic TM D simulates the computation tree of N. There are
several ways to do this. One is given in the book. Here is a slightly
different one. To begin with, D is going to be a multi-tape TM.
Time complexity
Consider a string over R = {1, 2, . . . , b} of length t(n). So the string
12313412 corresponds to a computation of N where
1 starting from the root, we go to the first child c1 of the root;
2 then to the second child of c1, call it c2;
3 then to the third child of c2, call it c3;
4 then to the first child of c3, and so on.
Thus each such string corresponds to a branch in the tree.
Time complexity
Thus, D is the following TM:
1 On input x of length n:
2 For each string s ∈ R^{t(n)}:
3 Simulate N, making the choices dictated by s.
4 If N accepts, then accept.
5 If all branches of N reject, then reject.
This method assumes that t(n) can be computed quickly. However, we
can also eliminate this requirement.
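Assuming we already have some helper run_with_choices(N, x, s) that simulates N on x while resolving each non-deterministic step according to the string s (this helper is hypothetical, not something defined in the lecture), D's outer loop can be sketched in Python as follows:

```python
from itertools import product

def deterministic_simulation(N, x, t, b, run_with_choices):
    """Brute-force simulation of a nondeterministic TM N on input x:
    try every choice string s in {1, ..., b}^t and accept if any branch does.
    `run_with_choices(N, x, s)` is an assumed helper returning True
    if N accepts x along the branch dictated by s."""
    for s in product(range(1, b + 1), repeat=t):
        if run_with_choices(N, x, s):
            return True          # some branch accepts, so D accepts
    return False                 # every branch rejects, so D rejects
```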
Time complexity
For each string s the simulation can take O(t(n)) time. There are b^{t(n)}
strings to simulate. Thus the total time is
O(t(n)) · b^{t(n)} = 2^{O(t(n))}.
The last part is to convert this multi-tape TM into a single-tape TM. If we
use the previous theorem, then the total time would be
(2^{O(t(n))})^2 = 2^{2·O(t(n))} = 2^{O(t(n))}.
We will come back to NTMs again.
Time complexity
We want to know which problems can be solved in practice. Thus we
define the class P as follows:
P = ⋃_k TIME(n^k),
or, written more elaborately,
P = TIME(n) ∪ TIME(n^2) ∪ TIME(n^3) ∪ TIME(n^4) ∪ · · · .
Time complexity
The class P is important because:
1 P is invariant for all models of computation that are polynomially
equivalent to the deterministic single-tape machine.
2 P roughly corresponds to the problems that we can solve realistically
on these models.
Thus, in defining P, we can talk about single-tape TMs, Java programs, or
multi-tape TMs: they all give the same definition of P.
Time complexity
The second point is that problems in P can be solved in reasonable time.
Some people may object to this, pointing out that if a problem can only be
solved in time
O(n^5000),
then such an algorithm is of no practical value. This objection is valid up to a
certain point. However, taking P as the threshold between practical and
impractical problems has been extremely useful.