Analyzing Algorithm Efficiency
1. Algorithm vs pseudocode
2. Differences among best, expected, and worst case behaviours of an algorithm
3. Asymptotic analysis
   - O notation
   - Omega notation
   - Theta notation
4. Time complexity with simple algorithms
5. Space complexity with simple algorithms
6. Recursion with examples
7. Analyzing recurrence relations (there are broadly three methods):
   - Iteration method
   - Recursion tree method
   - Master method
Pseudocode Conventions
4. Control Structures
Use IF...THEN...ELSE for conditional statements.
Use FOR, WHILE, or REPEAT...UNTIL for loops.
5. Boolean Conditions
Write conditions using operators like AND, OR, NOT.
Use relational operators like <, >, = for comparisons.
7. Comments
Use comments to clarify logic (e.g., // This is a comment).
8. General Guidelines
Avoid language-specific syntax.
Keep it simple and clear for better readability.
Example: Calculate the Sum of the First n Natural Numbers.
#include <iostream>
using namespace std;
int main() {
    int n, sum = 0;
    cout << "Enter a positive integer: ";
    cin >> n;
    for (int i = 1; i <= n; i++) {
        sum += i; // Add i to sum
    }
    cout << "The sum of the first " << n << " natural numbers is: " << sum << endl;
    return 0;
}
Input:
Enter a positive integer: 5
Output:
The sum of the first 5 natural numbers is: 15
Pseudocode vs Algorithms
Pseudocode (using a loop):
1. BEGIN
2. INPUT n
3. sum ← 0
4. FOR i ← 1 TO n DO
5.   sum ← sum + i
6. ENDFOR
7. PRINT "The sum is", sum
8. END
Algorithm:
Input: A positive integer n.
Output: The sum of the first n natural numbers.
1. Start
2. Input the value of n.
3. Initialize sum to 0.
4. Repeat the following step for i from 1 to n:
   • Add i to sum.
5. End loop.
6. Output the value of sum.
7. Stop
Example : Adding two numbers
#include <iostream>
using namespace std;
int main() {
int num1, num2, sum;
cout << "Enter first number: ";
cin >> num1;
cout << "Enter second number: ";
cin >> num2;
sum = num1 + num2;
cout << "The sum of " << num1 << " and " << num2 << " is: " << sum << endl;
return 0;
}
Example: Adding two numbers
Pseudocode:
1. BEGIN
2. INPUT num1
3. INPUT num2
4. sum ← num1 + num2
5. PRINT "The sum is", sum
6. END
Algorithm to Add Two Numbers:
Input: Two integers num1 and num2.
Output: The sum of the two numbers.
1. Start
2. Input the first number num1.
3. Input the second number num2.
4. Add the two numbers: sum = num1 + num2.
5. Output the value of sum.
6. Stop
Analysis of an Algorithm
Introduction
What is Analysis of an Algorithm?
Analyzing an algorithm means calculating/predicting the resources that the algorithm
requires.
Analysis provides theoretical estimation for the required resources of an algorithm to solve a
specific computational problem.
Two most important resources are computing time (time complexity) and storage space
(space complexity).
Why is analysis required?
By analyzing some of the candidate algorithms for a problem, the most efficient one can be easily identified.
How? There are two approaches: the empirical/experimental approach and the theoretical approach.
Linear Search – Example
Compare the value at the i-th index with the given element, one by one, until we find the required element or reach the end of the array.
Array: 2 9 3 1 8, searching for 1:
Step 1: i=1, A[1]=2, 2 ≠ 1
Step 2: i=2, A[2]=9, 9 ≠ 1
Step 3: i=3, A[3]=3, 3 ≠ 1
Step 4: i=4, A[4]=1 — element found at the i-th index, i=4
Linear Search – Algorithm
# Input : Array A, element x
# Output : First index of element x in A or -1 if not found
Algorithm: Linear_Search
for i = 1 to last index of A
if A[i] equals element x
return i
return -1
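
As a concrete illustration, here is one possible C++ version of the pseudocode above (a minimal sketch, not from the original slides; note it uses 0-based indexing as is conventional in C++, whereas the pseudocode counts from 1):

#include <iostream>
using namespace std;

// Returns the first (0-based) index of x in A, or -1 if not found
int linearSearch(const int A[], int n, int x) {
    for (int i = 0; i < n; i++) {
        if (A[i] == x)
            return i;
    }
    return -1;
}

int main() {
    int A[] = {2, 9, 3, 1, 8};
    cout << linearSearch(A, 5, 1) << endl; // prints 3: element 1 sits at 0-based index 3
    return 0;
}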
Linear Search – Analysis
The required element can be found in the given array at:
1. E.g. 2: the first position
   Best case: the minimum number of comparisons is required.
2. E.g. 3 or 1: anywhere after the first position
   Average case: an average number of comparisons is required.
3. E.g. 7: the last position, or the element is not found at all
   Worst case: the maximum number of comparisons is required.
Array: 2 9 3 1 8 7 — searching for 2 (first position) is the best case, for 3 or 1 (in the middle) the average case, and for 7 (last position) the worst case.
Number Sorting – Example
Suppose you are sorting numbers in ascending/increasing order. The initial arrangement of the given numbers can be in any of the following three orders:
Case 1: Numbers are already in the required (ascending) order, so no change is required. E.g., 5 9 12 23 32 41.
Case 2: Numbers are randomly arranged initially, so some numbers will change their position. E.g., 9 5 12 32 23 41.
Case 3: Numbers are initially arranged in descending order, so all numbers will change their position. E.g., 41 32 23 12 9 5.
Best, Average, & Worst Case
• So instead of measuring the actual running time, we consider the input size that is going to be given to the algorithm, and we calculate how the time required for the execution of the program increases with the input size.
Other examples:
Autonomous Vehicles: In autonomous vehicles, real-time calculations are essential for route
optimization, obstacle detection, and decision-making to ensure safety. Delays in processing,
such as taking too long to brake in an emergency or avoid an obstacle, could lead to accidents,
making the vehicle unsafe and unreliable for everyday use.
E-commerce Checkout: A delayed checkout process can lead to cart abandonment, resulting in
lost sales and frustrated customers.
Online Banking Systems: Slow transaction processing can cause frustration, leading to
abandoned payments and reduced customer trust.
1. O-Notation (Big O notation) (Upper Bound)
The notation O(n) is the formal way to express the upper bound of an algorithm's running time. It measures the worst-case time complexity, i.e., the longest amount of time an algorithm can possibly take to complete.
For a given function g(n), we denote by O(g(n)) the set of functions:
O(g(n)) = {f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀}
Big-O Notation: g(n) is an asymptotic upper bound for f(n).
2. Ω-Notation (Omega notation) (Lower Bound)
For a given function g(n), we denote by Ω(g(n)) the set of functions:
Ω(g(n)) = {f(n) : there exist positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀}
Big-Ω Notation: g(n) is an asymptotic lower bound for f(n).
[Figure: f(n) = Ω(g(n)); f(n) stays above c·g(n) for all n ≥ n₀]
3. θ-Notation (Theta notation) (Same order)
The notation θ(n) is the formal way to enclose both the lower bound and the upper bound of an algorithm's running time.
Since it represents both the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm.
The time complexity represented by the Big-θ notation is the range within which the actual running time of the algorithm will lie. So, it defines the exact asymptotic behaviour of an algorithm.
For a given function g(n), we denote by θ(g(n)) the set of functions:
θ(g(n)) = {f(n) : there exist positive constants c₁, c₂ and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀}
Since θ(g(n)) is a set, we can write f(n) ∈ θ(g(n)) to indicate that f(n) is a member of θ(g(n)).
[Figure: f(n) = θ(g(n)); f(n) lies between c₁·g(n) and c₂·g(n) for all n ≥ n₀]
Asymptotic Notations
1. O-Notation (Big O notation) (Upper Bound) — worst case
f(n) = O(g(n)): O(g(n)) = {f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀}
2. Ω-Notation (Omega notation) (Lower Bound)
f(n) = Ω(g(n)): Ω(g(n)) = {f(n) : there exist positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀}
3. θ-Notation (Theta notation) (Tight Bound)
f(n) = θ(g(n)): θ(g(n)) = {f(n) : there exist positive constants c₁, c₂ and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀}
Asymptotic Notations – Examples
Example 1: f(n) = n² and g(n) = n. Here f(n) ≥ g(n) for n ≥ 1, so f(n) = Ω(g(n)).
Example 2: f(n) = n and g(n) = n². Here f(n) ≤ g(n) for n ≥ 1, so f(n) = O(g(n)).
Example 3: f(n) = n² and g(n) = 2ⁿ
n | f(n) = n² | g(n) = 2ⁿ | comparison
1 | 1   | 2   | f(n) < g(n)
2 | 4   | 4   | f(n) = g(n)
3 | 9   | 8   | f(n) > g(n)
4 | 16  | 16  | f(n) = g(n)
5 | 25  | 32  | f(n) < g(n)
6 | 36  | 64  | f(n) < g(n)
7 | 49  | 128 | f(n) < g(n)
Here for n ≥ 4, f(n) ≤ g(n), so n₀ = 4 and f(n) = O(g(n)).
Asymptotic Notations – Examples
Example 4: f(n) = 30n + 8 is in the order of n, or O(n); g(n) = n² + 1 is order n², or O(n²). Here f(n) = O(g(n)).
[Figure: values of f(n) = 30n + 8 and g(n) = n² + 1 against increasing n; g(n) eventually dominates]
In general, any O(n²) function grows faster than any O(n) function.
Big-O and Growth Rate
The big-O notation gives an upper bound on the growth rate of a function.
The statement "f(n) is O(g(n))" means that the growth rate of f(n) is no more than the growth rate of g(n).
We can use the big-O notation to rank functions according to their growth rate.
Common Orders of Magnitude
n    | 1 | log n | n    | n log n | n²       | n³          | 2ⁿ           | n!
4    | 1 | 2     | 4    | 8       | 16       | 64          | 16           | 24
16   | 1 | 4     | 16   | 64      | 256      | 4096        | 65536        | 2.09 × 10¹³
64   | 1 | 6     | 64   | 384     | 4096     | 262144      | 1.84 × 10¹⁹  | 1.27 × 10⁸⁹
256  | 1 | 8     | 256  | 2048    | 65536    | 16777216    | 1.15 × 10⁷⁷  | ∞
1024 | 1 | 10    | 1024 | 10240   | 1048576  | 1.07 × 10⁹  | 1.79 × 10³⁰⁸ | ∞
4096 | 1 | 12    | 4096 | 49152   | 16777216 | 6.87 × 10¹⁰ | 10¹²³³       | ∞
For each of the following pairs of functions, either f(n) is O(g(n)), f(n) is Ω(g(n)), or f(n) = θ(g(n)). Determine which relationship is correct.
O(1) < O(log n) < O(√n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(n!)
Asymptotic Notations in Equations
Consider the example of the travelling cost of a flight plus a train:
Cost = Cost for flight + Cost for train (negligible)
Total Cost ≈ Cost for flight (approximation)
Maximum Rule: Let f, g : N → R. The max rule says that:
O(f(n) + g(n)) = O(max(f(n), g(n)))
For example, O(n² + n log n) = O(n²).
More Big-O Examples
3n³ + 20n² + 5 is O(n³):
need c > 0 and n₀ ≥ 1 such that 3n³ + 20n² + 5 ≤ c·n³ for n ≥ n₀;
this is true for c = 4 and n₀ = 21.
7n − 2 is O(n):
need c > 0 and n₀ ≥ 1 such that 7n − 2 ≤ c·n for n ≥ n₀;
this is true for c = 7 and n₀ = 1.
3 log n + 5 is O(log n):
need c > 0 and n₀ ≥ 1 such that 3 log n + 5 ≤ c·log n for n ≥ n₀;
this is true for c = 8 and n₀ = 2.
Expressing a Function in terms of O, Ω, θ Notation
Find the Big O, Omega Ω, and Theta θ notation for f(n) = 2n³ + 7n² + n + 7.
2n³ + 7n² + n + 7 ≤ 2n³ + 7n² + n + n    (for n ≥ 7)
                  = 2n³ + 7n² + 2n
                  ≤ 2n³ + 7n² + n²       (for n ≥ 2)
                  = 2n³ + 8n²
                  ≤ 2n³ + n³             (for n ≥ 8)
                  = 3n³
Thus, c = 3 and g(n) = n³.
Big O notation: f(n) ≤ 3n³, so f(n) = O(n³).
Omega Ω notation: f(n) ≥ 2n³, so f(n) = Ω(n³).
Theta θ notation: 2n³ ≤ f(n) ≤ 3n³, so f(n) = θ(n³).
This is why the same function f(n) = 2n³ + 7n² + n + 7 belongs to all three classes with g(n) = n³.
Example of Big-O
Q1: 3n + 2 = O(g(n)). What is a possible value of g(n)?
A. g(n) = 1
B. g(n) = 3n + 2
C. g(n) = n
D. g(n) = n²
Example of Big-O
Q2: 3n + 2 = O(n). Why?
A. 3n + 2 ≤ n
B. 3n + 2 ≤ 2n
C. 3n + 2 ≤ 3n
D. 3n + 2 ≤ 4n for all n ≥ 2. [Here n₀ = 2 and c = 4 from the definition]
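
A quick way to convince yourself that option D is the right one is to check the inequality numerically; the following sketch (not part of the slides) prints both sides for a few values of n:

#include <iostream>
using namespace std;

int main() {
    // Check 3n + 2 <= 4n for all n >= n0 = 2, with c = 4
    for (int n = 2; n <= 10; n++)
        cout << "n = " << n << ": 3n+2 = " << 3 * n + 2
             << " <= 4n = " << 4 * n << "\n";
    return 0;
}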
Example of Big-O
Q3: 100n + 6 = O(n). Which one is correct, and what are possible values of c and n₀?
A. 100n + 6 ≤ 20n — then for which n and c?
B. 100n + 6 ≤ 50n — then for which n and c?
D. 100n + 6 ≤ c·n for all n ≥ 6. [Here n₀ = 6 and c = 101 from the definition]
E. 100n + 6 ≤ c·n² for all n ≥ 2. [Here n₀ = 2 and c = 101 from the definition]
Example of Big-O
Q4: 10n² + 4n + 2 — what is this in terms of Big-O notation?
A. Error
B. O(1)
C. O(n)
D. O(n²)
Example of Big-O
Q5: 10n² + 4n + 2 = O(n²); then n₀ = ? and c = ?
A. 10n² + 4n + 2 ≤ 2n, where n₀ = ____ and c = ____
B. 10n² + 4n + 2 ≤ 5n², where n₀ = ____ and c = ____
C. 10n² + 4n + 2 ≤ 8n³, where n₀ = ____ and c = ____ (here n₀ = 2 and c = 8 from the definition)
D. 10n² + 4n + 2 ≤ 11n², where n₀ = ____ and c = ____ (here n₀ = 5 and c = 11 from the definition)
Example of Big-O
Q6: 6·2ⁿ + n² = O(g(n)). What is g(n)?
A. O(2ⁿ)
B. O(nⁿ)
C. O(n²)
D. O(n)
Example of Big-Ω
3n + 2 = Ω(n). Why? Because 3n + 2 ≥ c·n holds with c = 3 for all n ≥ 1. [Here n₀ = 1 and c = 3 from the definition]
Asymptotic Time Complexity
for (i = 0; i < 1000; i++) {
    print(i)
}
What is the time complexity of the above statement? O(1). Why?
• The number of statements is fixed and does not depend on the input characteristics (the number of iterations is fixed).
• We can say that the statements run a fixed number of times, regardless of the input.
The same holds if the loop starts at i = 1 instead of i = 0: the iteration count is still a constant, so the complexity remains O(1).
Asymptotic Time Complexity
for (i = 1; i <= n; i++) {
    // some statement
}
What is the time complexity of the above statement? O(n). Why? The loop body runs once for each value of i from 1 to n, i.e., n times.
Asymptotic Time Complexity
for (i = 1; i < n; i = 2 * i) {
    // some statement — some fixed steps (constant time)
}
What is the time complexity of the above statement? O(log₂ n). Why? i doubles on every iteration (1, 2, 4, 8, …), so the loop runs about log₂ n times.
Asymptotic Time Complexity
for (i = n; i > 1; i = i / 2) {
    // some statement — some fixed steps (constant time)
}
This loop is also O(log₂ n). Why? i is halved on every iteration.
Step | i         | i > 1
1    | n         | T
2    | n/2       | T
3    | n/2²      | T
4    | n/2³      | T
…    | …         | T
k    | n/2^(k−1) | F
The loop ends when n/2^(k−1) ≤ 1, i.e., n ≤ 2^(k−1), i.e., k ≥ log₂ n + 1.
Asymptotic Time Complexity
for (i = 2; i < n; i = i * i) {
    // some statement — some fixed steps (constant time)
}
What is the time complexity of the above statement? O(log₂(log₂ n)). Why?
Step | i           | i < n
1    | 2           | T
2    | 2²          | T
3    | 4² = 2⁴     | T
4    | 16² = 2⁸    | T
…    | …           | T
k    | 2^(2^(k−1)) | F
The loop ends when i ≥ n, i.e., 2^(2^(k−1)) ≥ n. Taking log₂ twice: 2^(k−1) ≥ log₂ n, so k − 1 ≥ log₂(log₂ n), i.e., k ≥ log₂(log₂ n) + 1.
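
To see these growth rates empirically, the sketch below (illustrative, not from the slides) counts the iterations of the doubling loop and the squaring loop for a large n and compares them with log₂ n and log₂(log₂ n):

#include <cmath>
#include <iostream>
using namespace std;

int main() {
    long long n = 1000000;
    int doubling = 0, squaring = 0;
    for (long long i = 1; i < n; i = 2 * i) doubling++; // ~log2(n) iterations
    for (long long i = 2; i < n; i = i * i) squaring++; // ~log2(log2(n)) iterations
    cout << "doubling loop: " << doubling
         << " iterations (log2 n = " << log2((double)n) << ")\n";
    cout << "squaring loop: " << squaring
         << " iterations (log2 log2 n = " << log2(log2((double)n)) << ")\n";
    return 0;
}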
Analyzing Control Statements
Example 1:
sum = a + b;                    → cost c
The statement is executed once only, so the execution time T(n) is some constant c ≈ O(1).
Example 2:
for i = 1 to n do               → c₁(n + 1)
    sum = a + b;                → c₂·n
Total time is denoted as:
T(n) = c₁(n + 1) + c₂·n = n(c₁ + c₂) + c₁ ≈ O(n)
Example 3:
for i = 1 to n do               → c₁(n + 1)
    for j = 1 to n do           → c₂·n(n + 1)
        sum = a + b;            → c₃·n·n
Analysis:
T(n) = c₁(n + 1) + c₂·n(n + 1) + c₃·n²
     = c₁·n + c₁ + c₂·n² + c₂·n + c₃·n²
     = n²(c₂ + c₃) + n(c₁ + c₂) + c₁
     = an² + bn + c
T(n) = O(n²)
Analyzing Control Statements
Example 4:
l = 0
for i = 1 to n do
    for j = 1 to i do
        for k = j to n do
            l = l + 1
t(n) = θ(n³)
Example 5:
l = 0
for i = 1 to n do
    for j = 1 to n do
        for k = 1 to n do
            l = l + 1
t(n) = θ(n³)
Example 6:
for j = 1 to n do
    for k = 1 to j do
        sum = sum + j * k       → θ(n²)
for l = 1 to n do
    sum = sum − l + 1           → θ(n)
printf("sum is now %d", sum)    → θ(1)
t(n) = θ(n²) + θ(n) + θ(1) = θ(n²)
Asymptotic Control Statements
Along the same lines as the previous slides, we can say the asymptotic time complexities of the following algorithms are O(log₂ n) and O(log₃ n) respectively. (Note that i must start at 1, not 0: multiplying 0 by 2 would leave it at 0 forever.)
int i = 1;
while (i < n) {
    // statements
    i *= 2;
}
// O(log₂ n)
int i = 1;
while (i < n) {
    // statements
    i *= 3;
}
// O(log₃ n)
Analyzing Control Statements
int i = 1;
while (i < n)        // outer loop: i doubles, so it runs O(log₂ n) times
{
    j = n;
    while (j > 1)    // inner loop: j halves, so it runs O(log₂ n) times
    {
        j = j / 2;   // O(log₂ n)
    }
    i = 2 * i;
}
The inner loop alone is O(log₂ n); since the outer loop also runs O(log₂ n) times, the total is O(log₂ n · log₂ n) = O(log² n).
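
Counting the total number of inner-loop iterations confirms the O(log² n) behaviour; this is a small sketch under the same doubling/halving assumptions as the code above:

#include <cmath>
#include <iostream>
using namespace std;

int main() {
    long long n = 1 << 20; // n = 2^20
    long long count = 0;
    for (long long i = 1; i < n; i = 2 * i)     // outer: ~log2(n) passes
        for (long long j = n; j > 1; j = j / 2) // inner: ~log2(n) passes
            count++;
    cout << "total inner iterations: " << count
         << " (log2(n)^2 = " << log2((double)n) * log2((double)n) << ")\n";
    return 0;
}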
Space Complexity
Whenever a solution to a problem is written, some memory is required to complete it. For any algorithm, memory may be used for the following:
1. Variables (this includes constant values and temporary values)
2. Program instructions
3. Execution
Space complexity is the amount of memory used by the algorithm (including the input values to the algorithm) to execute and produce the result.
Sometimes Auxiliary Space is confused with Space Complexity, but Auxiliary Space is the extra space or the temporary space used by the algorithm during its execution.
TYPE                | SIZE
bool, char          | 1 byte
short               | 2 bytes
int, long, float    | 4 bytes
double, long double | 8 bytes
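
These sizes are typical for common desktop platforms but are not guaranteed by the C++ standard; a quick way to verify them on your own compiler (a sketch, not from the slides) is:

#include <iostream>
using namespace std;

int main() {
    cout << "bool:   " << sizeof(bool)   << " byte(s)\n";
    cout << "char:   " << sizeof(char)   << " byte(s)\n";
    cout << "short:  " << sizeof(short)  << " byte(s)\n";
    cout << "int:    " << sizeof(int)    << " byte(s)\n";
    cout << "float:  " << sizeof(float)  << " byte(s)\n";
    cout << "double: " << sizeof(double) << " byte(s)\n";
    return 0;
}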
Example 1
void basicFunction(int n) {
    int x = 10;      // O(1) space
    int y = 20;      // O(1) space
    int sum = x + y; // O(1) space
    cout << sum;
}
Space complexity:
Total space for the variables (x, y, sum): 4 + 4 + 4 = 12 bytes, a constant independent of n.
Space Complexity: O(1)
Example 2: 2D Array (Matrix)
void matrixFunction(int n) {
    int matrix[n][n]; // 4 bytes * n * n
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            matrix[i][j] = i * j;
        }
    }
    cout << matrix[n-1][n-1];
}
Space complexity:
Matrix matrix[n][n]: 4·n·n bytes
Other variables (i, j): 4 bytes each (constant)
Memory usage: 4n² bytes for the matrix
Space Complexity: O(n²)
Example 3: Singly Linked List
Node structure — each node consists of:
• data: 4 bytes (integer)
• next: 8 bytes (pointer to the next node)
Total size of one node: 4 + 8 = 12 bytes
Additional pointers:
• Node* head: 8 bytes
• Node* current: 8 bytes
Example for n = 5:
Number of nodes: 5
Space for nodes: 12 × 5 = 60 bytes
Pointers (head and current): 8 + 8 = 16 bytes
Total memory usage: 60 + 16 = 76 bytes
Total space complexity:
• Dynamic memory: O(n)
• Fixed memory: O(1)
• Memory usage for a doubly linked list (4-byte data plus two 8-byte pointers per node): 20n + 16 bytes
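
For reference, a node matching the byte counts above might be declared as follows (a sketch; the 8-byte pointer assumes a 64-bit platform, and note that sizeof(Node) often reports 16 rather than 12 because of alignment padding):

#include <iostream>
using namespace std;

struct Node {
    int data;   // 4 bytes
    Node* next; // 8 bytes on a 64-bit platform
};

int main() {
    // Build a small list: head -> 1 -> 2 -> 3
    Node* head = new Node{1, new Node{2, new Node{3, nullptr}}};
    for (Node* current = head; current != nullptr; current = current->next)
        cout << current->data << " ";
    cout << "\nsizeof(Node) = " << sizeof(Node) << " bytes\n";
    while (head) { Node* t = head; head = head->next; delete t; } // free the list
    return 0;
}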
Linear Arrays
We already saw various algorithms to perform certain operations on arrays in our previous
lectures.
Runtime
– Accessing an element: O(1)
– Inserting an element at the first index: O(n)
– Inserting an element at ith index: O(n)
– Searching an element (unsorted array): O(n)
– Deleting an element at the first index: O(n)
– Deleting an element at ith index: O(n)
Singly Linked Lists
Runtime
– Accessing an element: O(n)
– Inserting an element at the first index (at Head node): O(1)
– Inserting an element at ith index: O(n)
– Searching an element: O(n)
– Deleting an element at the first index (at Head node): O(1)
– Deleting an element at ith index: O(n)
Stack
Runtime
– Push operation: O(1)
● Hence runtime to insert n items will require O(n) time.
– Pop operation: O(1)
– Checking whether stack is full or empty: O(1)
● Extra space required for above operations: O(1)
Queue
Runtime
– Enqueue operation: O(1)
Hence runtime to insert n items will require O(n) time.
– Dequeue operation: O(1)
– Checking whether queue is full or empty: O(1)
Extra space required for above operations: O(1)
Factorial – Iterative Algorithm
factorial_iterative(n)
{
    result = 1
    for (i = 2; i <= n; i++)
        result = result * i;
    return result;
}
Time complexity:
• Initialization: result = 1 is executed once before the loop starts → O(1).
• Loop condition: the condition i <= n is checked about n times, and incrementing i is performed n − 1 times → O(n).
• The key operation result = result * i is performed n − 1 times → O(n).
• Return statement: returning the value of result is a single operation → O(1).
Total time complexity: adding all the steps, O(1) + O(n) + O(n) + O(n) + O(1) = O(n).
Thus, the overall time complexity of the iterative factorial algorithm is O(n).
Factorial – Iterative Algorithm
factorial_iterative(n)
{
    result = 1
    for (i = 2; i <= n; i++)
        result = result * i;
    return result;
}
Space complexity:
Variables: n (integer, 4 bytes), result (integer, 4 bytes), i (integer, 4 bytes)
Fixed memory usage — total for the variables: 4 + 4 + 4 = 12 bytes
Space Complexity: O(1) (constant space)
Factorial – Recursive Algorithm
factorial(int n)
{
    if (n == 0 || n == 1)
        return 1;
    return n * factorial(n - 1);
}
Runtime analysis? How do we do the runtime analysis for recursive function calls?
Factorial-based Algorithms
Recursion
Recursion is defined as a process in which a function calls itself directly or indirectly; the corresponding function is called a recursive function.
Recursion involves calling the same function within itself, which leads to a call stack.
A recursive function must have a base case or stopping criterion to avoid infinite recursion.
The primary property of recursion is the ability to solve a problem by breaking it down into smaller subproblems, each of which can be solved in the same way.
Many algorithms (e.g., divide and conquer) are recursive in nature.
Recursion
Breaking a bigger problem into smaller problems:
2³ = 2 × 2²
   = 2 × 2 × 2
aⁿ = a × aⁿ⁻¹
   = a × a × aⁿ⁻²
   = a × a × a × aⁿ⁻³
   …
   = a × a × a × … × a (n times)
Recursion
Factorial problem: Given an integer n, find its factorial n!
Breaking the bigger problem into smaller problems:
n! = n × (n−1) × (n−2) × … × 1
Algorithm:
Fact(n):
    if (n == 0)
        return 1
    else
        return n * Fact(n-1)    // recursive function call
Analyzing a recursive algorithm involves two steps:
1. Deriving the recurrence relation.
2. Analyzing/solving the recurrence relation.
Deriving the Recurrence Relation
Factorial problem: Given an integer n, find its factorial n!
Algorithm:
Fact(n):
    if (n == 0)
        return 1
    else
        return n * Fact(n-1)
Let us assume the call to this function with argument n takes T(n) time.
n = 0 is the base case where the recursion stops: it takes constant time, O(1).
Fact(n−1) is a call to the factorial function with argument n−1, hence it takes T(n−1) time. The multiplication takes O(1) time.
Total time:
T(n) = T(n−1) + O(1)  when n > 0
T(n) = 1              when n = 0
Analyzing a Recurrence Relation
Analyzing a recurrence relation means deriving an analytical expression for it. There are broadly three methods: the iteration method, the recursion tree method, and the Master method.
For example, unrolling a recurrence of the form T(n) = T(n−1) + O(n) level by level down to the base case T(0) gives the total work:
T(n) = O(n) + O(n−1) + ⋯ + O(1)
T(n) = O(n(n+1)/2)
T(n) = O(n²)
Example 7:
void test( n ){
    if(n==0)
        return;
    for(i=1;i<n;i=2*i)
        print(n);
    test(n-1);
}
The loop in each call runs about log₂ n times, so T(n) = T(n−1) + 2·log₂ n + 1 for n > 0; we can also write T(n) = T(n−1) + O(log₂ n).
Tracing the smallest calls: test(2) runs print(2) about log₂(2) = 1 time and then calls test(1); test(1) runs print(1) log₂(1) = 0 times and then calls test(0); test(0) is the base case (no more calls).
Summing the per-call work over all calls gives T(n) = O(log n + log(n−1) + ⋯ + log 1) = O(log n!) = O(n log n).
Examples
T(n) = T(n−1) + 1     → O(n)
T(n) = T(n−1) + n     → O(n²)
T(n) = T(n−1) + log n → O(n log n)
T(n) = T(n−2) + 1     → O(n)
T(n) = T(n−1) + n²    → O(n³)
T(n) = T(n−100) + n   → O(n²)
T(n) = 2T(n−1) + 1    → O(2ⁿ)
T(n) = 3T(n−1) + n    → O(3ⁿ)
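
As a sanity check on the second entry, expanding T(n) = T(n−1) + n by the iteration method (worked here; the slides state only the result):
T(n) = T(n−1) + n
     = T(n−2) + (n−1) + n
     = ⋯
     = T(0) + 1 + 2 + ⋯ + n
     = T(0) + n(n+1)/2 = O(n²)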
Master Theorem (Additional Material)
Master Theorem
T(n) = aT(n/b) + f(n),
where,
n = size of input
a = number of subproblems in the recursion
n/b = size of each subproblem. All subproblems are assumed to have the same size.
f(n) = cost of the work done outside the recursive call,
which includes the cost of dividing the problem and cost of merging the solutions
Here, a ≥ 1 and b > 1 are constants, and f(n) is an asymptotically positive function.
Master Theorem
If a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function, then the time complexity of a recurrence relation of the form
T(n) = a·T(n/b) + f(n)
is given by the following asymptotic bounds:
1. If f(n) = O(n^(log_b a − ε)), then T(n) = θ(n^(log_b a))
2. If f(n) = θ(n^(log_b a)), then T(n) = θ(n^(log_b a) · log n)
3. If f(n) = Ω(n^(log_b a + ε)), then T(n) = θ(f(n))
where ε > 0 is a constant.
Example 1: T(n) = 2T(n/2) + 1
Example 2: T(n) = 4T(n/2) + n
Example 3: T(n) = 8T(n/2) + n
Example 4: T(n) = 2T(n/2) + n
Example 5: T(n) = 4T(n/2) + n²
Example 6: T(n) = 2T(n/2) + O(n²)
Example 7: T(n) = 3T(n/4) + n log n
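
Applying the Master Theorem to each example (worked out here as a check; compare f(n) against n^(log_b a) in each case):
Example 1: a = 2, b = 2, n^(log₂ 2) = n, f(n) = 1 → Case 1 → T(n) = θ(n)
Example 2: a = 4, b = 2, n^(log₂ 4) = n², f(n) = n → Case 1 → T(n) = θ(n²)
Example 3: a = 8, b = 2, n^(log₂ 8) = n³, f(n) = n → Case 1 → T(n) = θ(n³)
Example 4: a = 2, b = 2, n^(log₂ 2) = n, f(n) = n → Case 2 → T(n) = θ(n log n)
Example 5: a = 4, b = 2, n^(log₂ 4) = n², f(n) = n² → Case 2 → T(n) = θ(n² log n)
Example 6: a = 2, b = 2, n^(log₂ 2) = n, f(n) = n² → Case 3 → T(n) = θ(n²)
Example 7: a = 3, b = 4, n^(log₄ 3) ≈ n^0.79, f(n) = n log n → Case 3 → T(n) = θ(n log n)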
Problem 1: Tower of Hanoi
Tower of Hanoi is a mathematical puzzle where we are given three rods and n disks. The objective of the puzzle is to move the entire stack of disks to another rod, following the rules given below:
1. Only one disk can be moved at a time.
2. Each move consists of taking the upper disk from one of the stacks and placing it on top of another stack, i.e., a disk can only be moved if it is the uppermost disk on a stack.
3. No disk may be placed on top of a smaller disk.
Tower of Hanoi
Algorithm:
Input: Number of disks n, Source rod A, Destination rod C, Auxiliary rod B
Output: Step-by-step sequence of moves to transfer all disks from A to C using B
Base Case:
If n == 0, return (no disks to move).
Recursive Steps:
Move n-1 disks from A to B using C as auxiliary.
Move the largest disk (n) from A to C.
Move n-1 disks from B to C using A as auxiliary.
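
A direct C++ translation of this algorithm might look as follows (a minimal sketch, not from the slides; rod names are passed as characters):

#include <iostream>
using namespace std;

// Move n disks from rod 'from' to rod 'to' using rod 'aux'
void towerOfHanoi(int n, char from, char to, char aux) {
    if (n == 0) return;                 // base case: no disks to move
    towerOfHanoi(n - 1, from, aux, to); // move n-1 disks from source to auxiliary
    cout << "Move disk " << n << " from " << from << " to " << to << "\n";
    towerOfHanoi(n - 1, aux, to, from); // move n-1 disks from auxiliary to destination
}

int main() {
    towerOfHanoi(3, 'A', 'C', 'B'); // prints 2^3 - 1 = 7 moves
    return 0;
}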
Tower of Hanoi
Let T(n) represent the number of time steps needed to reach the solution.
Move n−1 disks from rod A to rod B: it takes T(n−1) time steps.
Move the largest disk (disk n) from rod A to rod C: it takes 1 time step.
Move the n−1 disks from rod B to rod C to get the final outcome on rod C: it takes T(n−1) time steps.
Total time steps — summing the above intermediate time steps we get:
T(n) = T(n−1) + 1 + T(n−1)
T(n) = 2T(n−1) + O(1)  if n > 0
T(n) = 1               if n = 0 (base case)
Iteration Method
T(n) = 2T(n−1) + O(1)  if n > 0
T(n) = 1               if n = 0 (base case)
T(n) = 2T(n−1) + 1
Since T(n−1) = 2T(n−2) + 1:
T(n) = 2(2T(n−2) + 1) + 1 = 2²T(n−2) + 2 + 1
T(n) = 2³T(n−3) + 2² + 2 + 1
If we continue, we see the pattern:
T(n) = 2^k T(n−k) + (2^k − 1)
For k = n, we reach T(0) = 1:
T(n) = 2ⁿT(0) + (2ⁿ − 1)
T(n) = 2ⁿ + 2ⁿ − 1
T(n) = 2^(n+1) − 1
Exponential time complexity: O(2ⁿ)
Problem 2: Fibonacci Series
int fib( n ){
    if(n==0 or n==1)
        return n;
    return fib(n-1)+fib(n-2);
}
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, …
F(n) = F(n−1) + F(n−2), for n ≥ 2
F(0) = 0 (base case)
F(1) = 1 (base case)
Fibonacci Series – Analysis
The recursion tree for T(n) branches into T(n−1) and T(n−2), which in turn branch into T(n−2), T(n−3), T(n−3), T(n−4), and so on; in the worst case the number of calls roughly doubles at every level:
T(n) ≤ 1 + 2 + 4 + 8 + ⋯ + 2^(n−1) = 2ⁿ − 1
so the running time is O(2ⁿ).
For a tighter bound, consider the recurrence:
T(n) = T(n−1) + T(n−2)
Step 1: Assume an exponential solution T(n) = rⁿ, where r is a constant. Substituting into the recurrence:
rⁿ = rⁿ⁻¹ + rⁿ⁻²
Dividing both sides by rⁿ⁻² (assuming r ≠ 0):
r² = r + 1, i.e., r² − r − 1 = 0
The positive root of this quadratic is the golden ratio ϕ = (1 + √5)/2 ≈ 1.618, so the running time grows as θ(ϕⁿ) ≈ θ(1.618ⁿ).
Problem 3: Staircase Problem
You are given a staircase of n steps, and you are at the bottom of the staircase. At any given time step you can move up either one step or two steps. Your objective is to reach the top of the staircase.
How many distinct ways are there to get to the top of the staircase?
Staircase Problem
You can reach the topmost stair either from the second-to-last stair (n−1) or from the third-to-last stair (n−2).
If T(n) is the number of distinct ways to reach the nth step, then T(n) = T(n−1) + T(n−2), where T(n−1) and T(n−2) are the numbers of ways to reach the (n−1)th and (n−2)th stairs respectively.
You add them because reaching step n from step n−1 and reaching it from step n−2 are two different scenarios.
Staircase problem
int countWays(int n) {
// Base cases: If there are 0 or 1 stairs,
// there is only one way to reach the top.
if (n == 0 || n == 1)
return 1;
return countWays(n - 1) + countWays(n - 2);
}
T(n) = T(n−1) + T(n−2), for n ≥ 2
T(0) = 1 (base case)
T(1) = 1 (base case)
Staircase Problem
You can see that the recurrence of this problem resembles that of the Fibonacci series recursion. Hence we can solve this problem using the Fibonacci program.
The time complexity will be θ(1.618ⁿ).
Bitstrings of Length n
Find the total number of possible bitstrings of length n such that there are no two consecutive zeros.
For example: if n = 2, then 01, 10 and 11 are valid bitstrings whereas 00 is not. The total number of valid bitstrings is 3.
If n = 3, then 000, 100 and 001 are not valid bitstrings; all the others are valid. The total number of valid bitstrings is 5.
Try to find the answer using recursion. Formulate the recurrence relation.
Bitstrings of Length n
int countBitstrings(int n) {
    if (n == 1) return 2;
    if (n == 2) return 3;
    return countBitstrings(n - 1) + countBitstrings(n - 2);
}
T(n) = T(n−1) + T(n−2), for n > 2
T(1) = 2 (base case)
T(2) = 3 (base case)
Why this recurrence holds: a valid string of length n either ends in 1, preceded by any valid string of length n−1, or ends in 0, which must be preceded by a 1, i.e., any valid string of length n−2 followed by "10".
Bitstrings of Length n
Computing the first few terms using the recurrence relation:
Base cases: T(1) = 2 and T(2) = 3
Recursive computation:
T(3) = T(2) + T(1) = 3 + 2 = 5, T(4) = T(3) + T(2) = 5 + 3 = 8, T(5) = T(4) + T(3) = 8 + 5 = 13,
T(6) = T(5) + T(4) = 13 + 8 = 21, T(7) = T(6) + T(5) = 21 + 13 = 34
So the sequence begins as: 2, 3, 5, 8, 13, 21, 34, …
Since this follows a Fibonacci-like recurrence, the values grow exponentially. The general formula for Fibonacci-like sequences involves the golden ratio ϕ ≈ 1.618, so T(n) ≈ O(ϕⁿ).
Space Complexity (Recursive Factorial)
#include <iostream>
using namespace std;
int fact(int n) {
    if (n == 0) return 1;    // base case: Fact(0) = 1
    return n * fact(n - 1);
}
The space used by recursion is determined by the call stack: the space complexity is proportional to the highest number of activation records (function calls) alive on the stack at once. For fact(n), the calls fact(n), fact(n−1), …, fact(0) are all on the stack before any of them returns, so the space complexity is O(n).
Space Complexity (Fibonacci Sequence)
int fib(int n) {
    if (n <= 1) return n; // Base cases
    return fib(n - 1) + fib(n - 2); // Recursive calls
}
int main() {
    int num = 4;
    cout << fib(num) << endl;
    return 0;
}
Call tree for fib(4): fib(4) = 3 calls fib(3) = 2 and fib(2) = 1; fib(3) calls fib(2) = 1 and fib(1) = 1; each fib(2) calls fib(1) = 1 and fib(0) = 0. Although the tree has exponentially many calls, only one root-to-leaf path is on the call stack at any time, so the space complexity is O(n).
Iterative implementation:
Space Complexity: O(1), since it only uses a constant amount of space regardless of the input size.
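
For comparison, an iterative Fibonacci might look like this (a sketch, not from the slides); it keeps only the last two values, so the extra space is constant:

#include <iostream>
using namespace std;

int fibIterative(int n) {
    if (n <= 1) return n;       // base cases F(0) = 0, F(1) = 1
    int prev = 0, curr = 1;
    for (int i = 2; i <= n; i++) {
        int next = prev + curr; // F(i) = F(i-1) + F(i-2)
        prev = curr;
        curr = next;
    }
    return curr;                // O(n) time, O(1) extra space
}

int main() {
    cout << fibIterative(4) << endl; // prints 3, matching fib(4) above
    return 0;
}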
End of Chapter 2
Thank You