Module 1 - Algorithm Analysis
Introduction
What is Analysis of an Algorithm?
Analyzing an algorithm means calculating or predicting the resources that the
algorithm requires.
Analysis provides a theoretical estimate of the resources an algorithm needs
to solve a specific computational problem.
The two most important resources are computing time (time complexity) and
storage space (space complexity).
Why is Analysis Required?
By analyzing several candidate algorithms for a problem, the most
efficient one can be identified.
Efficiency of Algorithm
• The efficiency of an algorithm is a measure of the amount of resources
consumed in solving a problem of size 𝑛.
• An algorithm must be analyzed to determine its resource usage.
• Two major computational resources are execution time and memory
space.
• Memory space requirements cannot always be compared directly, so the
resource of primary interest is the computation time required by an algorithm.
• Measuring the efficiency of an algorithm requires measuring its execution
time, using either of the following approaches (see the timing sketch below):
1. Empirical approach: run the algorithm and measure how much processor time
is needed.
2. Theoretical approach: mathematically compute how much time is needed as
a function of input size.
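As a minimal illustration of the empirical approach (the function and input sizes below are illustrative choices, not from the slides), one can time a simple algorithm on growing inputs:

```python
import time

def linear_search(data, target):
    """Illustrative algorithm to be timed: scan until target is found."""
    for i, x in enumerate(data):
        if x == target:
            return i
    return -1

# Measure processor/wall time for growing input sizes.
for n in [10_000, 100_000, 1_000_000]:
    data = list(range(n))
    start = time.perf_counter()
    linear_search(data, -1)   # target absent: worst case, scans all n items
    elapsed = time.perf_counter() - start
    print(f"n = {n:>9,}: {elapsed:.6f} s")
```

The numbers printed depend on the machine and the runtime; that dependence is exactly the drawback of the empirical approach that the comparison below highlights.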
How Analysis is Done?
Empirical (posteriori) approach:
• Program the different competing techniques and run them on various inputs
using a computer.
• Implementing the different techniques may be difficult.
• The same hardware and software environments must be used when comparing
two algorithms.
• Results may not be indicative of the running time on other inputs not
included in the experiment.

Theoretical (priori) approach:
• Determine mathematically the resources needed by each algorithm.
• Uses the algorithm instead of an implementation.
• The speed of an algorithm can be determined independently of the
hardware/software environment.
• Characterizes running time as a function of the input size 𝑛, and
considers all possible values.
Time Complexity
• Time complexity of an algorithm quantifies the amount of time taken by
an algorithm to run as a function of the length of the input.
• Running time of an algorithm depends upon,
1. Input Size
2. Nature of Input
• Generally, time grows with the size of the input; for example, sorting 100
numbers will take less time than sorting 10,000 numbers.
• So, the running time of an algorithm is usually measured as a function of
input size.
• Instead of measuring the actual time required to execute each statement in
the code, we consider how many times each statement is executed (see the
counting sketch below).
• So, in the theoretical computation of time complexity, running time is
measured in terms of the number of steps/primitive operations performed.
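A small sketch of this idea (an added illustration, not from the slides): instead of timing the loop, count how many times its body executes.

```python
def summation_steps(n):
    """Count how many times the loop body executes while summing 0..n-1."""
    steps = 0
    total = 0
    for i in range(n):   # the body below runs exactly n times
        total += i
        steps += 1
    return steps

# The step count is a machine-independent function of the input size n.
for n in [100, 200, 400]:
    print(n, summation_steps(n))   # prints 100, 200, 400: linear growth
```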
Asymptotic Notations
The theoretical (priori) approach to analyzing an algorithm's efficiency
does not depend on the implementation of the algorithm.
In this approach, the running time of an algorithm is described using
asymptotic notations.
Expressing the running time of an algorithm's operations in mathematical
units of computation, and defining a mathematical formula for its run-time
performance, is referred to as asymptotic analysis.
An algorithm may not have the same performance for different types of
inputs; as the input size increases, the performance changes.
Asymptotic analysis studies how the performance of the algorithm changes
with the order of the input size.
Using asymptotic analysis, we can define the best-case, average-case,
and worst-case scenarios of an algorithm.
Asymptotic Notations
• Asymptotic notations are mathematical notations used to represent the
time complexity of algorithms for Asymptotic analysis.
• Following are the commonly used asymptotic notations to calculate the
running time complexity of an algorithm.
1. Ο Notation
2. Ω Notation
3. θ Notation
• This is also known as an algorithm’s growth rate.
• Asymptotic Notations are used,
1. To characterize the complexity of an algorithm.
2. To compare the performance of two or more algorithms solving the same
problem.
1. O-Notation (Big O notation) (Upper Bound)
• The notation O(g(n)) is the formal way to express the upper bound of an
algorithm's running time.
• It measures the worst-case time complexity, i.e., the longest amount of time
an algorithm can possibly take to complete.
• For a given function g(n), we denote by O(g(n)) the set of functions
O(g(n)) = { f(n) : there exist positive constants c and n₀ such that
0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }
Big (O) Notation
[Figure: f(n) lies below c·g(n) for all n ≥ n₀, illustrating f(n) = O(g(n)).]
g(n) is an asymptotic upper bound for f(n).
f(n) = O(g(n)) implies: f(n) “≤” c·g(n).
An upper bound g(n) of an algorithm defines the maximum time required; we can
always solve the problem in at most g(n) time.
The time taken by a known algorithm to solve a problem on its worst-case
input gives the upper bound.
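As a worked illustration of the definition (an added example, not from the slides), take f(n) = 3n + 2 and g(n) = n:

```latex
f(n) = 3n + 2 \;\le\; 3n + n \;=\; 4n \;=\; c \cdot g(n) \qquad \text{for all } n \ge 2
```

So c = 4 and n₀ = 2 witness 3n + 2 = O(n). The constants are not unique: any larger c, paired with a suitable n₀, also satisfies the definition.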
2. Ω-Notation (Omega notation) (Lower Bound)
• Big Omega notation (Ω) is used to define the lower bound of an algorithm,
that is, its best case.
• It indicates the minimum time required by an algorithm for any input,
and therefore describes the best case.
• When the time complexity of an algorithm is expressed in big-Ω form,
the algorithm takes at least this much time to complete its execution;
it can definitely take more time than this.
• For a given function g(n), we denote by Ω(g(n)) the set of functions
Ω(g(n)) = { f(n) : there exist positive constants c and n₀ such that
0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ }
Big (Ω) Notation
[Figure: f(n) lies above c·g(n) for all n ≥ n₀, illustrating f(n) = Ω(g(n)).]
g(n) is an asymptotic lower bound for f(n).
f(n) = Ω(g(n)) implies: f(n) “≥” c·g(n).
A lower bound g(n) of an algorithm defines the minimum time required; it is
not possible to have any other algorithm (for the same problem) whose time
complexity is less than g(n) for arbitrary inputs.
3. θ-Notation (Theta notation) (Same order)
• The notation θ(g(n)) is the formal way to enclose both the lower bound and
the upper bound of an algorithm's running time.
• Since it represents both the upper and the lower bound of the running time
of an algorithm, it is often used when analyzing the average-case complexity
of an algorithm.
• The time complexity represented by the big-θ notation is the range within
which the actual running time of the algorithm will lie.
• So, it defines the exact asymptotic behavior of an algorithm.
• For a given function g(n), we denote by θ(g(n)) the set of functions
θ(g(n)) = { f(n) : there exist positive constants c₁, c₂, and n₀ such that
0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀ }
θ-Notation
[Figure: f(n) lies between c₁·g(n) and c₂·g(n) for all n ≥ n₀, illustrating
f(n) = θ(g(n)).]
θ(g(n)) is a set, so we can write f(n) ∈ θ(g(n)) to indicate that f(n) is a
member of θ(g(n)).
g(n) is an asymptotically tight bound for f(n).
f(n) = θ(g(n)) implies: c₁·g(n) ≤ f(n) ≤ c₂·g(n).
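As a worked illustration (an added example, not from the slides), take f(n) = n²/2 − 3n and g(n) = n²:

```latex
\tfrac{1}{4}\,n^2 \;\le\; \tfrac{1}{2}\,n^2 - 3n \;\le\; \tfrac{1}{2}\,n^2 \qquad \text{for all } n \ge 12
```

The upper inequality holds for every n ≥ 0, and the lower one reduces to 3n ≤ n²/4, i.e., n ≥ 12. Hence c₁ = 1/4, c₂ = 1/2, and n₀ = 12 witness f(n) = θ(n²).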
Asymptotic Notations
1. O-Notation (Big O notation) (Upper Bound): f(n) = O(g(n))
O(g(n)) = { f(n) : there exist positive constants c and n₀ such that
0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }
2. Ω-Notation (Omega notation) (Lower Bound): f(n) = Ω(g(n))
Ω(g(n)) = { f(n) : there exist positive constants c and n₀ such that
0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ }
3. θ-Notation (Theta notation) (Same order): f(n) = θ(g(n))
θ(g(n)) = { f(n) : there exist positive constants c₁, c₂, and n₀ such that
0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀ }
Asymptotic Notations – Examples
• Example 1: f(n) = n² and g(n) = n, the running times of Algorithm 1 and
Algorithm 2, respectively.
f(n) ≥ g(n) ⟹ f(n) = Ω(g(n))

n   f(n) = n²   g(n) = n
1   1           1
2   4           2
3   9           3
4   16          4
5   25          5

• Example 2: f(n) = n and g(n) = n², the running times of Algorithm 1 and
Algorithm 2, respectively.
f(n) ≤ g(n) ⟹ f(n) = O(g(n))

n   f(n) = n   g(n) = n²
1   1          1
2   2          4
3   3          9
4   4          16
5   5          25
Asymptotic Notations – Examples
• Example 3: f(n) = n² and g(n) = 2ⁿ
f(n) ≤ g(n) ⟹ f(n) = O(g(n))

n   f(n) = n²   g(n) = 2ⁿ   comparison
1   1           2           f(n) < g(n)
2   4           4           f(n) = g(n)
3   9           8           f(n) > g(n)
4   16          16          f(n) = g(n)
5   25          32          f(n) < g(n)
6   36          64          f(n) < g(n)
7   49          128         f(n) < g(n)

Here, for n ≥ 4, f(n) ≤ g(n); so n₀ = 4.
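A small sketch (an added illustration; it fixes c = 1 and checks only up to a finite limit, so it suggests rather than proves n₀) that recovers the n₀ of Example 3 by direct search:

```python
def find_n0(f, g, c=1, limit=100):
    """Smallest n0 with f(n) <= c*g(n) for every n from n0 up to limit."""
    n0 = None
    for n in range(1, limit + 1):
        if f(n) <= c * g(n):
            if n0 is None:
                n0 = n           # candidate start of the "always below" range
        else:
            n0 = None            # inequality violated: discard the candidate
    return n0

# Example 3: f(n) = n^2, g(n) = 2^n. The inequality fails at n = 3,
# then holds from n = 4 onward, matching n0 = 4 above.
print(find_n0(lambda n: n * n, lambda n: 2 ** n))   # prints 4
```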
Common Orders of Magnitude
n      log n   n log n   n²         n³            2ⁿ             n!
4      2       8         16         64            16             24
16     4       64        256        4096          65536          2.09 × 10¹³
64     6       384       4096       262144        1.84 × 10¹⁹    1.27 × 10⁸⁹
256    8       2048      65536      16777216      1.15 × 10⁷⁷    ∞
1024   10      10240     1048576    1.07 × 10⁹    1.79 × 10³⁰⁸   ∞
4096   12      49152     16777216   6.87 × 10¹⁰   ≈ 10¹²³³       ∞
(∞ denotes values too large to be meaningful.)
Growth of Function
[Figure: growth curves, from fastest- to slowest-growing:
n! > eⁿ > 2ⁿ > n³ > n² > n log n > n > log n > 1.]
• Arrange the given notations in increasing order of their growth.
1. n, n log n, 2ⁿ, log n, √n, eⁿ, n² + log n, n², log log n, n³, (log n)², n!
2. n⁸, (n² − n + 1)⁴, nⁿ, n^(1+ε), (1 + ε)ⁿ, n² log n, n^(log n)
• For each of the following pairs of functions, either f(n) is O(g(n)), f(n)
is Ω(g(n)), or f(n) = θ(g(n)). Determine which relationship is correct.
1. f(n) = n; g(n) = n²
2. f(n) = log log n; g(n) = log n
3. f(n) = n; g(n) = log² n
4. f(n) = n log n + n; g(n) = log n
5. f(n) = 10; g(n) = log 10
6. f(n) = 2ⁿ; g(n) = 10n²
7. f(n) = 2n; g(n) = 3n
8. f(n) = n; g(n) = log n²
Asymptotic Notations in Equations
• Consider the example of buying elephants and goldfish:
Cost = cost_of_elephants + cost_of_goldfish
Cost ≈ cost_of_elephants (the goldfish cost is negligible)
• Maximum Rule: let f, g : N → R⁺. The max rule says that
O(f(n) + g(n)) = O(max(f(n), g(n)))
1. n⁴ + 100n² + 10n + 50 is O(n⁴)
2. 10n³ + 2n² is O(n³)
3. n³ − n² is O(n³)
• The low-order terms in a function are relatively insignificant for large n:
n⁴ + 100n² + 10n + 50 ≈ n⁴
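To make example 1 concrete (an added derivation with one valid, but not unique, choice of constants):

```latex
100n^2 + 10n + 50 \;\le\; n^4 \quad \text{for all } n \ge 11
\;\;\Longrightarrow\;\;
n^4 + 100n^2 + 10n + 50 \;\le\; 2n^4 \quad \text{for all } n \ge 11
```

So c = 2 and n₀ = 11 establish n⁴ + 100n² + 10n + 50 = O(n⁴); at n = 11 the first inequality reads 12260 ≤ 14641.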
Expressing Functions in terms of O, Ω, θ Notation
• Find Big O notation for f(n) = 2ⁿ + 6n² + 3n
2ⁿ + 6n² + 3n ≤ 2ⁿ + 6n² + n²    (since 3n ≤ n² for n ≥ 3)
= 2ⁿ + 7n² ≤ 2ⁿ + 2ⁿ             (since 7n² ≤ 2ⁿ for n ≥ 10)
f(n) ≤ 2 · 2ⁿ
c = 2, g(n) = 2ⁿ
f(n) = O(2ⁿ)
Expressing Function in terms of 𝑂, 𝛺, θ notation
• Find Big O, Omega (Ω), and Theta (θ) notation for f(n) = 2n³ + 7n² + n + 7
2n³ + 7n² + n + 7 ≤ 2n³ + 7n² + n + n    (since 7 ≤ n for n ≥ 7)
= 2n³ + 7n² + 2n ≤ 2n³ + 7n² + n²        (since 2n ≤ n² for n ≥ 2)
= 2n³ + 8n² ≤ 2n³ + n³                   (since 8n² ≤ n³ for n ≥ 8)
= 3n³
Thus, c = 3, g(n) = n³ (valid for all n ≥ 8).
Big O notation: f(n) ≤ 3n³, so f(n) = O(n³)
Omega (Ω) notation: f(n) ≥ 2n³, so f(n) = Ω(n³)
Theta (θ) notation: 2n³ ≤ f(n) ≤ 3n³, so f(n) = θ(n³)
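A quick numeric check of the bounds just derived (an added illustration; checking finitely many values is a sanity test, not a proof):

```python
def f(n):
    """The function from the example: f(n) = 2n^3 + 7n^2 + n + 7."""
    return 2 * n**3 + 7 * n**2 + n + 7

# Verify the sandwich 2n^3 <= f(n) <= 3n^3 on a finite range with n >= 8,
# the point from which all three inequalities in the derivation hold.
assert all(2 * n**3 <= f(n) <= 3 * n**3 for n in range(8, 1000))
print("2n^3 <= f(n) <= 3n^3 holds for 8 <= n < 1000")
```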
Exercises
1. Express the function n³/1000 − 100n² − 100n + 3 in terms of θ notation.
2. Express 20n³ + 10n log n + 5 in terms of O notation.
3. Express 5n log n + 2n in terms of O notation.
4. Prove or disprove: (i) Is 2^(n+1) = O(2ⁿ)? (ii) Is 2^(2n) = O(2ⁿ)?
5. Check the correctness of the following equality: 5n³ + 2n = O(n³)
6. Find θ notation for the following function:
a. F(n) = 3 · 2ⁿ + 4n² + 5n + 2
7. Find O notation for the following functions:
a. F(n) = 2ⁿ + 6n² + 3n
b. F(n) = 4n³ + 2n + 3
8. Find Ω notation for the following functions:
a. F(n) = 4 · 2ⁿ + 3n
b. F(n) = 5n³ + n² + 3n + 2
Recurrence Equation
Introduction
• Many algorithms (e.g., divide and conquer) are recursive in nature.
• When we analyze them, we get a recurrence relation for the time complexity:
the running time on an input of size 𝒏 is expressed in terms of the
running time on inputs of smaller sizes.
• A recurrence is a recursive description of a function, or a description of a
function in terms of itself.
• A recurrence relation recursively defines a sequence where the next term
is a function of the previous terms.
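As an added illustration (not from the slides), here is a recursive function whose running time satisfies the recurrence T(n) = T(n − 1) + n analyzed in the next section: it does n units of work at the current level, then recurses on an input of size n − 1.

```python
def triangular(n):
    """Compute 1 + 2 + ... + n recursively; running time T(n) = T(n-1) + n."""
    if n == 0:
        return 0
    total = 0
    for _ in range(n):                # Theta(n) work at this level
        total += 1
    return total + triangular(n - 1)  # plus T(n-1) for the smaller instance

print(triangular(5))   # 5 + 4 + 3 + 2 + 1 = 15
```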
Substitution Method – Example 1
• We make a guess at the solution and then use mathematical induction
(repeated substitution) to prove the guess correct or incorrect.
• Example 1:
T(n) = T(n − 1) + n      (1)
Here T(n − 1) is the time to solve the instance of size n − 1, and n is the
additional work for the instance of size n.
• Replacing n by n − 1 and n − 2, we can write the following equations:
T(n − 1) = T(n − 2) + (n − 1)      (2)
T(n − 2) = T(n − 3) + (n − 2)      (3)
• Substituting equation (3) into (2), and equation (2) into (1), we now have:
T(n) = T(n − 3) + (n − 2) + (n − 1) + n      (4)
• From equation (4), we can write the general form as
T(n) = T(n − k) + (n − k + 1) + (n − k + 2) + … + n
• If we take k = n, then
T(n) = T(0) + 1 + 2 + … + n
T(n) = n(n + 1)/2 = O(n²)
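A quick numeric confirmation of the closed form (an added sanity check, assuming the base case T(0) = 0):

```python
def T(n):
    """Evaluate T(n) = T(n-1) + n directly, with T(0) = 0."""
    return 0 if n == 0 else T(n - 1) + n

# The closed form n(n+1)/2 matches the recurrence exactly.
assert all(T(n) == n * (n + 1) // 2 for n in range(200))
print("T(n) = n(n+1)/2 verified for 0 <= n < 200")
```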
Substitution Method – Example 2
t(n) = c₁                  if n = 0
t(n) = c₂ + t(n − 1)       otherwise
• Rewrite the equation: t(n) = c₂ + t(n − 1)
• Now, replace n by n − 1 and n − 2:
t(n − 1) = c₂ + t(n − 2)      ∴ t(n − 1) = c₂ + c₂ + t(n − 3)
t(n − 2) = c₂ + t(n − 3)
• Substitute the expressions for t(n − 1) and t(n − 2):
t(n) = c₂ + c₂ + c₂ + t(n − 3)
• In general,
t(n) = k·c₂ + t(n − k)
• If we take k = n, then
t(n) = n·c₂ + t(n − n) = n·c₂ + t(0) = c₂·n + c₁ = O(n)
Substitution Method Exercises
• Solve the following recurrences using substitution method.
1. T(n) = 1 if n = 0 or 1; T(n) = T(n − 1) + (n − 1) otherwise.
2. T(n) = T(n − 1) + 1, with T(1) = θ(1).
Master Theorem
• The master theorem is a cookbook method for solving recurrences.
• Suppose you have a recurrence of the form
T(n) = a·T(n/b) + f(n)
where a is the number of sub-problems, n/b is the size of each sub-problem,
and f(n) is the time to divide and recombine.
• This recurrence arises in the analysis of recursive algorithms: when the
input size 𝒏 is large, the problem is divided into 𝒂 sub-problems, each of
size 𝒏/𝒃. The sub-problems are solved recursively and the results are
recombined.
• The work to split the problem into sub-problems and recombine the results
is 𝒇(𝒏).
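The slides stop at the form of the recurrence; for reference, the standard statement of the theorem (the common CLRS form, with a ≥ 1 and b > 1, comparing f(n) against n^(log_b a)) is:

```latex
\begin{aligned}
&\text{Case 1: } f(n) = O\!\big(n^{\log_b a - \varepsilon}\big) \text{ for some } \varepsilon > 0
  &&\Longrightarrow\; T(n) = \Theta\!\big(n^{\log_b a}\big)\\[2pt]
&\text{Case 2: } f(n) = \Theta\!\big(n^{\log_b a}\big)
  &&\Longrightarrow\; T(n) = \Theta\!\big(n^{\log_b a}\log n\big)\\[2pt]
&\text{Case 3: } f(n) = \Omega\!\big(n^{\log_b a + \varepsilon}\big) \text{ for some } \varepsilon > 0,
  \text{ and } a\,f(n/b) \le c\,f(n) \text{ for some } c < 1
  &&\Longrightarrow\; T(n) = \Theta\!\big(f(n)\big)
\end{aligned}
```

For example, merge sort's T(n) = 2T(n/2) + n has n^(log₂ 2) = n and f(n) = Θ(n), so Case 2 gives T(n) = Θ(n log n).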
Recurrence Tree Method
• In a recursion tree, each node represents the cost of a single sub-problem
somewhere in the set of recursive function invocations.
• We sum the costs within each level of the tree to obtain a set of per-level
costs.
• Then we sum all the per-level costs to determine the total cost of all
levels of the recursion.
• Here, while solving recurrences, we divide the problem into sub-problems of
equal size.
• E.g., T(n) = a·T(n/b) + f(n), where a > 1, b > 1, and f(n) is a given
function; f(n) is the cost of splitting or combining the sub-problems.
Recurrence Tree Method – Example 1: T(n) = 2T(n/2) + n
[Figure: recursion tree with cost n at the root, two nodes of cost n/2,
four of cost n/4, …, leaves of cost 1, over log₂ n levels.]
When we add the values across the levels of the recursion tree, we get a
value of n for every level: n + n + n + … (log₂ n times).
The bottom level has 2^(log₂ n) = n nodes, each contributing the cost T(1).
T(n) = Σ_{i=0}^{log₂ n − 1} n + 2^(log₂ n) · T(1)
T(n) = n · log₂ n + n
T(n) = O(n log n)
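A numeric sanity check of this result (an added illustration, assuming T(1) = 1 and powers-of-two inputs so that n/2 is exact):

```python
import math

def T(n):
    """Evaluate T(n) = 2T(n/2) + n directly, with T(1) = 1."""
    return 1 if n == 1 else 2 * T(n // 2) + n

# The tree sum n*log2(n) + n matches the recurrence exactly at powers of 2.
for k in range(1, 6):
    n = 2 ** k
    print(n, T(n), n * int(math.log2(n)) + n)   # last two columns agree
```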
Recurrence Tree Method – Example 2: T(n) = T(n/3) + T(2n/3) + n
[Figure: recursion tree with cost n at the root and children of cost n/3 and
2n/3; grandchildren of cost n/9, 2n/9, 2n/9, 4n/9; the shortest path (always
taking n/3) has log₃ n levels, the longest (always taking 2n/3) has
log₃/₂ n levels.]
When we add the values across the levels of the recursion tree, we get a
value of n for every full level.
T(n) = Σ_{i=0}^{log₃/₂ n − 1} n + n^(log₃/₂ 2) · T(1)
T(n) ∈ O(n · log₃/₂ n) = O(n log n)
Recurrence Tree Method – Example 3: T(n) = 2T(n/2) + c·n²
[Figure: recursion tree with cost c·n² at the root, two nodes of cost
c·(n/2)², four of cost c·(n/4)², ….]
Sub-problem size at level i is n/2ⁱ.
Level i contains 2ⁱ sub-problems, each costing c·(n/2ⁱ)², so the total cost
of level i is (1/2)ⁱ · c·n².
Total cost:
T(n) ≤ Σ_{i=0}^{log₂ n − 1} (1/2)ⁱ · c·n² ≤ c·n² · Σ_{i=0}^{∞} (1/2)ⁱ = 2c·n²
T(n) = O(n²)
Recurrence Tree Method - Exercises
• Exercise 1: T(n) = T(n/4) + T(3n/4) + c·n
• Exercise 2: T(n) = 3T(n/4) + c·n²
• Exercise 3: T(n) = T(n/4) + T(n/2) + n²
• Exercise 4: T(n) = T(n/3) + T(2n/3) + n