
Outlines

1. Algorithm vs pseudocode
2. Differences among best, expected, and worst case behaviours of an algorithm
3. Asymptotic analysis
   - O notation
   - Omega notation
   - Theta notation
4. Time Complexity with simple examples and algorithms
5. Space Complexity with simple examples and algorithms
6. Recursion with examples
7. Analyzing recurrence relations (there are broadly three methods):
   - Iteration based method
   - Recursion-tree based method
   - Master method
Algorithms

Dr. Ankit Soni


Outlines

 Algorithms vs pseudocode

 Differences among best, expected, and worst case behaviors of an algorithm

 Asymptotic analysis

 Time Complexity with examples

 Space Complexity with examples


Algorithm
 An algorithm is a step-by-step, well-defined set of instructions or rules to solve a particular
problem. It is presented in natural language, mathematical terms, or a structured form that can
be easily understood by humans.
 Characteristics of an Algorithm:
1. Finite Steps: Must terminate after a finite number of steps.
2. Input: Accepts zero or more inputs.
3. Output: Produces at least one output.
4. Definiteness: Each step is clear and unambiguous.
5. Effectiveness: Steps are basic enough to be carried out.

Source: Computer Algorithms, by Ellis Horowitz, Sartaj Sahni and S. Rajasekaran


Algorithm
 After writing an algorithm, we use it to implement a computer program/code.
 A computer program is the expression of an algorithm in a programming language.
 When we write computer programs, we ask several questions:
 Is it correct?
 How easy is it to understand the program/code?
 Is the program well documented?
 How easy is it to make changes to the program?
 How much time does the program take while running?
 How much memory does the program use?
 Is the code general enough to run on a large range of inputs?
 Can the program be run on various machines and OS?
Four Key Areas of Algorithm Study
1. How to Devise Algorithms? (Algorithm Design)
 Use techniques like Greedy, Dynamic Programming, Divide & Conquer, etc.
 Break problems into smaller parts or make step-by-step optimal decisions.

2. How to Validate Algorithms?
 Prove correctness using methods like loop invariants or mathematical induction.
 Test edge cases to ensure correct output for all valid inputs.

3. How to Analyze Algorithms? (Algorithm Analysis)
 Measure time and space complexity for efficiency.
 Study best, worst, and average-case performance.

4. How to Test Programs?
 Debug and test with different inputs, including edge cases.
 Ensure the program is robust and performs as expected.
Pseudocode Conventions
1. Structure and Indentation
 Use proper indentation to show hierarchy (e.g., loops, conditions).
 Start and end blocks clearly (e.g., BEGIN...END).

2. Input and Output


 Represent input as READ or INPUT.
 Represent output as PRINT or OUTPUT.

3. Variables and Assignment


 Use meaningful variable names.
 Represent assignment with ← (e.g., sum ← a + b).

4. Control Structures
 Use IF...THEN...ELSE for conditional statements.
 Use FOR, WHILE, or REPEAT...UNTIL for loops.
Pseudocode Conventions
5. Boolean Conditions
 Write conditions using operators like AND, OR, NOT.
 Use relational operators like <, >, = for comparisons.

6. Functions and Procedures


 Define with PROCEDURE or FUNCTION.
 Clearly state inputs and outputs.

7. Comments
 Use comments to clarify logic (e.g., // This is a comment).

8. General Guidelines
 Avoid language-specific syntax.
 Keep it simple and clear for better readability.
Example: Calculate the Sum of the First n Natural Numbers.
#include <iostream>
using namespace std;
int main() {
    int n, sum = 0;
    cout << "Enter a positive integer: ";
    cin >> n;
    for (int i = 1; i <= n; i++) {
        sum += i; // Add i to sum
    }
    cout << "The sum of the first " << n << " natural numbers is: " << sum << endl;
    return 0;
}

Input: Enter a positive integer: 5
Output: The sum of the first 5 natural numbers is: 15
Pseudocode vs Algorithms

Pseudocode Using Loop:
1. BEGIN
2. INPUT n
3. sum ← 0
4. FOR i ← 1 TO n DO
5.   sum ← sum + i
6. ENDFOR
7. PRINT "The sum is", sum
8. END

Algorithm:
Input: A positive integer n.
Output: The sum of the first n natural numbers.
1. Start
2. Input the value of n.
3. Initialize sum to 0.
4. Repeat the following step for i from 1 to n: add i to sum.
5. End loop.
6. Output the value of sum.
7. Stop
Example : Adding two numbers
#include <iostream>
using namespace std;
int main() {
    int num1, num2, sum;
    cout << "Enter first number: ";
    cin >> num1;
    cout << "Enter second number: ";
    cin >> num2;
    sum = num1 + num2;
    cout << "The sum of " << num1 << " and " << num2 << " is: " << sum << endl;
    return 0;
}
Example : Adding two numbers

Pseudocode:
1. BEGIN
2. INPUT num1
3. INPUT num2
4. sum ← num1 + num2
5. PRINT "The sum is", sum
6. END

Algorithm to Add Two Numbers:
Input: Two integers num1 and num2.
Output: The sum of the two numbers.
1. Start
2. Input the first number num1.
3. Input the second number num2.
4. Add the two numbers: sum = num1 + num2
5. Output the value of sum.
6. Stop
Analysis of an
Algorithm
Introduction
 What is Analysis of an Algorithm?
 Analyzing an algorithm means calculating/predicting the resources that the algorithm
requires.
 Analysis provides theoretical estimation for the required resources of an algorithm to solve a
specific computational problem.
 Two most important resources are computing time (time complexity) and storage space
(space complexity).
 Why Analysis is required?
 By analyzing some of the candidate algorithms for a problem, the most efficient one can be
easily identified.
How?

Empirical/Experimental approach:
 Programming different competing techniques & running them on various inputs using a computer.
 Implementation of different techniques may be difficult.
 The same hardware and software environments must be used for comparing two algorithms.
 Results may not be indicative of the running time on other inputs not included in the experiment.

Theoretical approach:
 Determining mathematically the resources needed by each algorithm.
 Uses the algorithm instead of an implementation.
 The speed of an algorithm can be determined independent of the hardware/software environment.
 Characterizes running time as a function of the input size 𝒏, considering all possible values.
Linear Search – Example

Search for 1 in the given array [2, 9, 3, 1, 8].

Compare the value at the i-th index with the given element one by one, until we find the required
element or reach the end of the array.

Step 1: i=1, A[1]=2, 2 ≠ 1
Step 2: i=2, A[2]=9, 9 ≠ 1
Step 3: i=3, A[3]=3, 3 ≠ 1
Step 4: i=4, A[4]=1, element found at index i=4
Linear Search – Algorithm
# Input : Array A, element x
# Output : First index of element x in A or -1 if not found

Algorithm: Linear_Search
for i = 1 to last index of A
if A[i] equals element x
return i
return -1
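
A minimal C++ sketch of this algorithm (our own translation, not from the slides; it uses 0-based
indexing, so the element 1 from the example above is reported at index 3 rather than position 4):

#include <iostream>
using namespace std;

// Linear search: returns the first (0-based) index of x in A[0..n-1],
// or -1 if x is not present.
int linearSearch(const int A[], int n, int x) {
    for (int i = 0; i < n; i++) {
        if (A[i] == x)      // compare the i-th element with x
            return i;       // best case: found at the first position
    }
    return -1;              // worst case: n comparisons, element absent
}

int main() {
    int A[] = {2, 9, 3, 1, 8};
    cout << linearSearch(A, 5, 1) << endl;   // prints 3
    return 0;
}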
Linear Search - Analysis
 The required element in the given array can be found at,
1. E.g. 2: it is at the first position
   Best Case: minimum comparisons are required
2. E.g. 3 or 1: anywhere after the first position
   Average Case: an average number of comparisons is required
3. E.g. 7: at the last position, or the element is not found at all
   Worst Case: maximum comparisons are required

(Figure: searching the array [2, 9, 3, 1, 8, 7]; 2 at the first position is the best case, 3 in the middle is the average case, 7 at the last position is the worst case.)
Linear Search - Analysis
 The required element in the given array can be found at,

Case 1: element 2, which is at the first position, so minimum comparisons are required – Best Case
Case 2: element 3, anywhere after the first position, so an average number of comparisons is required – Average Case
Case 3: element 7, at the last position, or the element is not found at all, so maximum comparisons are required – Worst Case

(Figure: the same array [2, 9, 3, 1, 8, 7] annotated with the best, average, and worst case positions.)
Analysis of Algorithm

Best Case: resource usage is minimum; the algorithm's behavior under optimal conditions;
minimum number of steps or operations; lower bound on running time; generally does not occur in real inputs.

Average Case: resource usage is average; the algorithm's behavior under random conditions;
average number of steps or operations; average bound on running time.

Worst Case: resource usage is maximum; the algorithm's behavior under the worst condition;
maximum number of steps or operations; upper bound on running time.

Average and worst-case performances are the most used in algorithm analysis.
Book Finder Example
 Suppose you are writing a program to find a book on a shelf.
 For any required book, it starts checking books one by one from the bottom.
 If you wanted Engineering, it would take only 3 actions (or tries) because it is the third book in
the sequence.
 If Mathematics, it is the last book, so it would have to check all 5 books.
 What if there are 10 books in total? How about 10,00,000 books? It would take 1 million tries.
Number Sorting - Example
 Suppose you are sorting numbers in Ascending / Increasing order.
 The initial arrangement of given numbers can be in any of the following three orders.
1. Numbers are already in required order, i.e., Ascending order
No change is required – Best Case
2. Numbers are randomly arranged initially.
Some numbers will change their position – Average Case
3. Numbers are initially arranged in Descending or Decreasing order.
All numbers will change their position – Worst Case

(Figure: [5, 9, 12, 23, 32, 41] already sorted – best case; [9, 5, 12, 32, 23, 41] randomly arranged – average case; [41, 32, 23, 12, 9, 5] in descending order – worst case.)
Number Sorting - Example
 Suppose you are sorting numbers in Ascending / Increasing order.
 The initial arrangement of given numbers can be in any of the following three orders.

Case 1: Numbers are already in the required order, i.e., ascending order; no change is required – Best Case
Case 2: Numbers are randomly arranged initially; some numbers will change their position – Average Case
Case 3: Numbers are initially arranged in descending order, so all numbers will change their position – Worst Case

(Figure: the same three arrangements of [5, 9, 12, 23, 32, 41] as above.)
Best, Average, & Worst Case

Problem        | Best Case                     | Average Case                       | Worst Case
Linear Search  | Element at the first position | Element in any middle position     | Element at the last position or not present
Book Finder    | The first book                | Any book in-between                | The last book
Sorting        | Already sorted                | Randomly arranged                  | Sorted in reverse order

Asymptotic
Notations
Introduction
 The theoretical approach of analyzing an algorithm to measure its efficiency does not depend
on the implementation of the algorithm.
 In this approach, the running time of an algorithm is described using asymptotic notations.
 Computing the running time of an algorithm's operations in mathematical units of computation
and defining the mathematical formula of its run-time performance is referred to as
Asymptotic Analysis.
 An algorithm may not have the same performance for different types of inputs. With the
increase in the input size, the performance will change.
 Asymptotic analysis accomplishes the study of change in performance of the algorithm with
the change in the order of the input size.
 Using Asymptotic analysis, we can very well define the best case, average case, and worst
case scenario of an algorithm.
Asymptotic Notations
 Asymptotic notations are mathematical notations used to represent the time complexity of
algorithms for Asymptotic analysis.
 Following are the commonly used asymptotic notations to calculate the running time
complexity of an algorithm.
1. Ο Notation
2. Ω Notation
3. θ Notation

 This is also known as an algorithm’s growth rate.


 Asymptotic Notations are used,
1. To characterize the complexity of an algorithm.
2. To compare the performance of two or more algorithms solving the same problem.
Example
 Out of two given algorithms, which one is better?
 Imagine a text editor that can handle 1000 pages but can spell-check only one page in half an
hour.
Or
 Imagine an image editor that takes 1 hour to rotate your image left or right by 90 degrees.
Would you be using it?

• So instead of measuring the actual running time, we consider the input size that is going to be
given to the algorithm, and we calculate how the time required for execution of the program
increases with the input size.
Any other example
 Autonomous Vehicles: In autonomous vehicles, real-time calculations are essential for route
optimization, obstacle detection, and decision-making to ensure safety. Delays in processing,
such as taking too long to brake in an emergency or avoid an obstacle, could lead to accidents,
making the vehicle unsafe and unreliable for everyday use.
 E-commerce Checkout: A delayed checkout process can lead to cart abandonment, resulting in
lost sales and frustrated customers.
 Online Banking Systems: Slow transaction processing can cause frustration, leading to
abandoned payments and reduced customer trust.
1. 𝐎-Notation (Big 𝐎 notation) (Upper Bound)
 The notation Ο(𝑛) is the formal way to express the upper bound of an algorithm's running time.
 It measures the worst case time complexity or the longest amount of time an algorithm can
possibly take to complete.
 For a given function 𝑔(𝑛), we denote by Ο(𝑔(𝑛)) the set of functions,

Ο(g(n)) = {f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0}
Big(𝐎) Notation
 𝑔(𝑛) is an asymptotic upper bound for 𝑓(𝑛).
 𝑓(𝑛) = 𝑂(𝑔(𝑛)) implies: 𝒇(𝒏) “≤” 𝒄·𝒈(𝒏)
 An upper bound 𝑔(𝑛) of an algorithm defines the maximum time required; we can always
solve the problem in at most 𝑔(𝑛) time.
 Time taken by a known algorithm to solve a problem with worst-case input gives the
upper bound.

(Figure: 𝒇(𝒏) stays below 𝒄·𝒈(𝒏) for all 𝒏 ≥ 𝒏𝟎, illustrating 𝒇(𝒏) = 𝑶(𝒈(𝒏)).)
2. 𝛀-Notation (Omega notation) (Lower Bound)
 Big Omega notation (Ω ) is used to define the lower bound of any algorithm or we can say the
best case of any algorithm.
 This always indicates the minimum time required for any algorithm for all input values,
therefore the best case of any algorithm.
 When the time complexity of an algorithm is represented in the form of big-Ω, it means that the
algorithm will take at least this much time to complete its execution. It can definitely take
more time than this too.
 For a given function 𝑔(𝑛), we denote by Ω(𝑔(𝑛)) the set of functions,

Ω(g(n)) = {f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0}
2. 𝛀-Notation (Omega notation) (Lower Bound)
 For a given function 𝑔(𝑛), we denote by Ω(𝑔(𝑛)) the set of functions,

Ω(g(n)) = {f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0}

Big(𝛀) Notation
 𝑔(𝑛) is an asymptotic lower bound for 𝑓(𝑛).
 𝑓(𝑛) = Ω(𝑔(𝑛)) implies: 𝒇(𝒏) “≥” 𝒄·𝒈(𝒏)
 A lower bound 𝑔(𝑛) of an algorithm defines the minimum time required; it is not possible
for any other algorithm (for the same problem) to have a time complexity less than
𝑔(𝑛) for random input.

(Figure: 𝒇(𝒏) stays above 𝒄·𝒈(𝒏) for all 𝒏 ≥ 𝒏𝟎, illustrating 𝒇(𝒏) = 𝜴(𝒈(𝒏)).)
3. 𝛉-Notation (Theta notation) (Same order)
 The notation θ(n) is the formal way to enclose both the lower bound and the upper bound of an
algorithm's running time.
 Since it represents the upper and the lower bound of the running time of an algorithm, it is used
for analyzing the average case complexity of an algorithm.
 The time complexity represented by the Big-θ notation is the range within which the actual
running time of the algorithm will be.
 So, it defines the exact Asymptotic behavior of an algorithm.
 For a given function 𝑔(𝑛), we denote by θ(𝑔(𝑛)) the set of functions,

θ(g(n)) = {f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0}
𝛉-Notation
 𝜃(𝑔(𝑛)) is a set; we can write 𝑓(𝑛) ∈ 𝜃(𝑔(𝑛)) to indicate that 𝑓(𝑛) is a member of 𝜃(𝑔(𝑛)).
 𝑔(𝑛) is an asymptotically tight bound for 𝑓(𝑛).
 𝑓(𝑛) = 𝜃(𝑔(𝑛)) implies: 𝒇(𝒏) “=” 𝒄·𝒈(𝒏)

(Figure: 𝒇(𝒏) lies between 𝒄𝟏·𝒈(𝒏) and 𝒄𝟐·𝒈(𝒏) for all 𝒏 ≥ 𝒏𝟎, illustrating 𝒇(𝒏) = 𝜽(𝒈(𝒏)).)
Asymptotic Notations
1. O-Notation (Big O notation) (Upper Bound) – Worst Case: f(n) = O(g(n))
Ο(g(n)) = {f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0}

2. Ω-Notation (Omega notation) (Lower Bound) – Best Case: f(n) = Ω(g(n))
Ω(g(n)) = {f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0}

3. θ-Notation (Theta notation) (Same order) – Average Case: f(n) = θ(g(n))
θ(g(n)) = {f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0}
Asymptotic Notations – Examples

 Example 1: 𝑓(𝑛) = 𝑛² (Algo. 1 running time) and 𝑔(𝑛) = 𝑛 (Algo. 2 running time)
𝑓(𝑛) ≥ 𝑔(𝑛) ⟹ 𝑓(𝑛) = Ω(𝑔(𝑛))

 n | f(n) = n² | g(n) = n
 1 |     1     |    1
 2 |     4     |    2
 3 |     9     |    3
 4 |    16     |    4
 5 |    25     |    5

 Example 2: 𝑓(𝑛) = 𝑛 (Algo. 1 running time) and 𝑔(𝑛) = 𝑛² (Algo. 2 running time)
𝑓(𝑛) ≤ 𝑔(𝑛) ⟹ 𝑓(𝑛) = O(𝑔(𝑛))

 n | f(n) = n | g(n) = n²
 1 |    1     |     1
 2 |    2     |     4
 3 |    3     |     9
 4 |    4     |    16
 5 |    5     |    25
Asymptotic Notations – Examples
 Example 3: 𝑓(𝑛) = 𝑛² and 𝑔(𝑛) = 2ⁿ
𝑓(𝑛) ≤ 𝑔(𝑛) ⟹ 𝑓(𝑛) = O(𝑔(𝑛))

 n | f(n) = n² | g(n) = 2ⁿ |
 1 |     1     |     2     | f(n) < g(n)
 2 |     4     |     4     | f(n) = g(n)
 3 |     9     |     8     | f(n) > g(n)
 4 |    16     |    16     | f(n) = g(n)
 5 |    25     |    32     | f(n) < g(n)
 6 |    36     |    64     | f(n) < g(n)
 7 |    49     |   128     | f(n) < g(n)

Here for 𝑛 ≥ 4, 𝑓(𝑛) ≤ 𝑔(𝑛), so 𝑛₀ = 4.
 Example 4:
Asymptotic Notations – Examples
f(n) = 30n + 8 is in the order of n, or O(n)
g(n) = n² + 1 is in the order of n², or O(n²)
𝒇(𝒏) = 𝑶(𝒈(𝒏))

(Figure: plotting both functions against increasing n, g(n) = n² + 1 eventually dominates f(n) = 30n + 8.)

 In general, any 𝑂(𝑛²) function grows faster than any 𝑂(𝑛) function.
Big-O and Growth Rate
 The big-O notation gives an upper bound on the growth rate of a function
 The statement “f(n) is O(g(n))” means that the growth rate of f(n) is no more than the growth
rate of g(n)
 We can use the big-O notation to rank functions according to their growth rate:
Big-O and Growth Rate
Common Orders of Magnitude

 1 | log n |  n   | n log n |    n²    |     n³     |     2ⁿ      |     n!
 1 |   2   |   4  |    8    |    16    |     64     |     16      |     24
 1 |   4   |  16  |   64    |   256    |    4096    |    65536    | 2.09 × 10¹³
 1 |   6   |  64  |  384    |   4096   |   262144   | 1.84 × 10¹⁹ | 1.27 × 10⁸⁹
 1 |   8   | 256  |  2048   |  65536   |  16777216  | 1.15 × 10⁷⁷ | ∞
 1 |  10   | 1024 |  10240  |  1048576 | 1.07 × 10⁹ | 1.79 × 10³⁰⁸ | ∞
 1 |  12   | 4096 |  49152  | 16777216 | 6.87 × 10¹⁰ |   10¹²³³    | ∞

n = 4, 16, 64, 256, 1024, 4096, …
n = 2², 2⁴, 2⁶, 2⁸, 2¹⁰, 2¹²
Growth of Function
𝑛! > 𝑒ⁿ > 2ⁿ > 𝑛³ > 𝑛² > 𝑛 log 𝑛 > 𝑛 > √𝑛 > log 𝑛 > 1

 For each of the following pairs of functions, either 𝑓(𝑛) is 𝑂(𝑔(𝑛)), 𝑓(𝑛) is Ω(𝑔(𝑛)), or
𝑓(𝑛) = θ(𝑔(𝑛)). Determine which relationship is correct.

 𝑓(𝑛) = 𝑛; 𝑔(𝑛) = 𝑛²
 𝑓(𝑛) = 10; 𝑔(𝑛) = log 10
 𝑓(𝑛) = log log 𝑛; 𝑔(𝑛) = log 𝑛
 𝑓(𝑛) = 2ⁿ; 𝑔(𝑛) = 10𝑛
 𝑓(𝑛) = 𝑛; 𝑔(𝑛) = log 𝑛
 𝑓(𝑛) = 2𝑛; 𝑔(𝑛) = 3𝑛
 𝑓(𝑛) = 𝑛 log 𝑛 + 𝑛; 𝑔(𝑛) = log 𝑛
 𝑓(𝑛) = 𝑛; 𝑔(𝑛) = log 𝑛
 𝑓(𝑛) = 5𝑛 + 𝑛²; 𝑔(𝑛) = 1 + 𝑛 + 𝑛²

𝑶(𝟏) < 𝑶(𝒍𝒐𝒈𝒏) < 𝑶(√𝒏) < 𝑶(𝒏) < 𝑶(𝒏𝒍𝒐𝒈𝒏) < 𝑶(𝒏𝟐) < 𝑶(𝒏𝟑) < 𝑶(𝟐𝒏) < 𝑶(𝒏!)
Asymptotic Notations in Equations
 Consider an example of the travelling cost by flight and train:
Cost = Cost for flight + Cost for train (negligible)
Total Cost ≈ Cost for flight (approximation)
 Maximum Rule: Let 𝑓, 𝑔: 𝑁 → 𝑅; the max rule says that:

𝑂(𝑓(𝑛) + 𝑔(𝑛)) = 𝑂(max(𝑓(𝑛), 𝑔(𝑛)))

1. n⁴ + 100n² + 10n + 50 is O(n⁴)
2. 10n³ + 2n² is O(n³)
3. n³ - n² is O(n³)
 The low order terms in a function are relatively insignificant for large 𝒏:
n⁴ + 100n² + 10n + 50 ≈ n⁴
𝑶(𝟏) < 𝑶(𝒍𝒐𝒈𝒏) < 𝑶(√𝒏) < 𝑶(𝒏) < 𝑶(𝒏𝒍𝒐𝒈𝒏) < 𝑶(𝒏𝟐) < 𝑶(𝒏𝟑) < 𝑶(𝟐𝒏) < 𝑶(𝒏!)
Examples
Given functions f(n) and g(n), we say that f(n) is
O(g(n)) if there are positive constants
c and n0 such that
f(n) ≤ cg(n) for n ≥ n0
Example: f(n) = 2n + 10 and g(n)= n
2n + 10 ≤ cn
(c − 2) n ≥ 10
n ≥ 10/(c − 2)
Pick c = 3 and n0 = 10

𝑶(𝟏) < 𝑶(𝒍𝒐𝒈𝒏) < 𝑶(√𝒏) < 𝑶(𝒏) < 𝑶(𝒏𝒍𝒐𝒈𝒏) < 𝑶(𝒏𝟐) < 𝑶(𝒏𝟑) < 𝑶(𝟐𝒏) < 𝑶(𝒏!)
More Big-O Examples
 3n³ + 20n² + 5 is O(n³)
need c > 0 and n0 ≥ 1 such that 3n³ + 20n² + 5 ≤ c·n³ for n ≥ n0
this is true for c = 4 and n0 = 21

 7n − 2 is O(n)
need c > 0 and n0 ≥ 1 such that 7n − 2 ≤ c·n for n ≥ n0
this is true for c = 7 and n0 = 1

 3 log n + 5 is O(log n)
need c > 0 and n0 ≥ 1 such that 3 log n + 5 ≤ c·log n for n ≥ n0
this is true for c = 8 and n0 = 2

𝑶(𝟏) < 𝑶(𝒍𝒐𝒈𝒏) < 𝑶(√𝒏) < 𝑶(𝒏) < 𝑶(𝒏𝒍𝒐𝒈𝒏) < 𝑶(𝒏𝟐) < 𝑶(𝒏𝟑) < 𝑶(𝟐𝒏) < 𝑶(𝒏!)
Expressing Function in terms of 𝑂, Ω, θ notation
 Find Big O, Omega Ω, Theta θ notation for f(n) = 2n³ + 7n² + n + 7
2n³ + 7n² + n + 7 ≤ 2n³ + 7n² + n + n     (since 7 ≤ n for n ≥ 7)
                  ≤ 2n³ + 7n² + 2n
                  ≤ 2n³ + 7n² + n²        (since 2n ≤ n² for n ≥ 2)
                  ≤ 2n³ + 8n²
                  ≤ 2n³ + n³              (since 8n² ≤ n³ for n ≥ 8)
                  ≤ 3n³
Thus, c = 3, 𝑔(n) = n³

Big O Notation: f(n) ≤ 3n³, so f(n) = O(n³)
Omega Ω Notation: f(n) ≥ 2n³, so f(n) = Ω(n³)
Theta θ Notation: 2n³ ≤ f(n) ≤ 3n³, so f(n) = θ(n³)

𝑶(𝟏) < 𝑶(𝒍𝒐𝒈𝒏) < 𝑶(√𝒏) < 𝑶(𝒏) < 𝑶(𝒏𝒍𝒐𝒈𝒏) < 𝑶(𝒏𝟐) < 𝑶(𝒏𝟑) < 𝑶(𝟐𝒏) < 𝑶(𝒏!)
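
As a compact check of the derivation above (the value n0 = 8 is our computation; the slide does not state it):

% Bounds for f(n) = 2n^3 + 7n^2 + n + 7; n0 = 8 computed here, not given on the slide
\[
  2n^3 \;\le\; 2n^3 + 7n^2 + n + 7 \;\le\; 3n^3 \qquad \text{for all } n \ge 8,
\]
\[
  \text{so } f(n) = \Omega(n^3),\quad f(n) = O(n^3),\quad f(n) = \theta(n^3)
  \qquad (c_1 = 2,\ c_2 = 3,\ n_0 = 8).
\]

At n = 8: f(8) = 1024 + 448 + 8 + 7 = 1487 ≤ 3·512 = 1536, while at n = 7 the upper inequality still fails (1043 > 1029), so n0 = 8 is the smallest valid choice here.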
Proof why all three are of the same order
 2n³ + 7n² + n + 7
Example of Big- O
Q1: 3n+2 = O(g(n)); what is a possible value of g(n)?

A. g(n) = 1
B. g(n) = 3n+2
C. g(n) = n
D. g(n) = n²
Example of Big- O
Q2: 3n+2 = O(n) why???
How:
A. 3n+2 <= n
B. 3n+2 <= 2n
C. 3n+2 <= 3n
D. 3n+2 <= 4n for all n>=2. [Here n0=2 and c=4 from the definition]
Example of Big-O
Q3: 100n+6 = O(n). Which one is correct, and what are possible values of c and n0?
A. 100n+6 <= 20n, then n > ? and c > ?
B. 100n+6 <= 50n, then n > ? and c > ?
C. 100n+6 < cn, then n > ? and c > ?
D. 100n+6 <= cn, then n > ? and c > ?  For all n >= 6. [Here n0 = 6 and c = 101 from the definition]
E. 100n+6 <= cn², then n > ? and c > ?  For all n >= 2. [Here n0 = 2 and c = 101 from the definition]
Example of Big-O
Q4: 10n²+4n+2 in terms of Big-O notation?
A. Error
B. O(1)
C. O(n)
D. O(n²)
Example of Big-O
Q5: 10n²+4n+2 = O(n²); then n0 = ? and c = ?
A. 10n²+4n+2 <= 2n, where n0 = ____ and c = ____
B. 10n²+4n+2 <= 5n², where n0 = ____ and c = ____
C. 10n²+4n+2 <= 8n³, where n0 = ____ and c = ____  Here n0 = 2 and c = 8 from the definition
D. 10n²+4n+2 <= 11n², where n0 = ____ and c = ____  Here n0 = 5 and c = 11 from the definition
Example of Big-O
Q6: 6·2ⁿ + n² = O(g(n)). What is g(n)?
A. O(2ⁿ)
B. O(nⁿ)
C. O(n²)
D. O(n)

What is the value of c and n0 for option A?
6·2ⁿ + n² <= 7·2ⁿ for all n >= 4 [Here n0 = 4 and c = 7]
Example of Big-Ω
 3n+2 = Ω(g(n)); what is g(n)?
 3n+2 = Ω(n). Why?
 3n+2 >= c·n. [Here n0 = 1 and c = 3 from the definition]
 100n+6 = Ω(n) as 100n+6 >= 100n for all n >= 1
 10n²+4n+2 = Ω(n²) as 10n²+4n+2 >= 10n² for n >= 1
 6·2ⁿ + n² = Ω(2ⁿ) since 6·2ⁿ + n² >= 6·2ⁿ for all n >= 1
Example of Big-Ω
 Here we can also say that 3n+3 = Ω(1) because 3n+3 >= 1
 6·2ⁿ + n² = Ω(2ⁿ),
 6·2ⁿ + n² = Ω(n¹⁰⁰),
 6·2ⁿ + n² = Ω(n³),
 6·2ⁿ + n² = Ω(n²),
 6·2ⁿ + n² = Ω(n),
 6·2ⁿ + n² = Ω(1)
• Omega gives the idea of "at least", i.e., a lower bound on the growth rate.
• For the function 6·2ⁿ + n², the lower bound can be any of the above, but the desired lower
bound is Ω(2ⁿ).
• So we want to make the lower bound as large (tight) as possible.
Example of Big-Θ
 3n+2 = Θ(n) as
3n+2 >= 3n for n >= 2
and
3n+2 <= 4n for n >= 2. [Here c1 = 3, c2 = 4 and n0 = 2]
 100n+6 = Θ(n)
 10n²+4n+2 = Θ(n²)
 6·2ⁿ + n² = Θ(2ⁿ)
Example of Big-Θ
 Here
 6·2ⁿ + n² = Θ(2ⁿ),
 6·2ⁿ + n² != Θ(n¹⁰⁰),
 6·2ⁿ + n² != Θ(n³),
 6·2ⁿ + n² != Θ(n²),
 6·2ⁿ + n² != Θ(n),
 6·2ⁿ + n² != Θ(1)
 Theta notation is more precise than Big-O and Omega notations.
 It depicts a function g(n) which is both an upper and a lower bound on f(n).
Analyzing Control
Statements
Asymptotic Time Complexity
for (i=0;i<10;i++){
    print(i)            O(1) WHY?
}
● What is the time complexity of the above statement?

• The number of statements executed is fixed and does not depend upon the input
characteristics (the number of iterations is fixed).
• We can say that the statements run a fixed number of times.
Asymptotic Time Complexity
for (i=0;i<1000;i++){
    print(i)            O(1) WHY?
}
● What is the time complexity of the above statement?

• Here also, the number of statements is fixed and does not depend upon
the input characteristics (the number of iterations is fixed).
• We can say that the statements run a fixed number of times.
Asymptotic Time Complexity
for (i=1;i<1000;i++){
    print(i)            O(1) WHY?
}
● What is the time complexity of the above statement?

• Here also, the number of statements is fixed and does not depend upon
the input characteristics (the number of iterations is fixed).
• We can say that the statements run a fixed number of times.
Asymptotic Time Complexity
for (i=1;i<=n;i++){
// some statement O(n) WHY?
}
● What is the time complexity of above statement?

• The loop runs from i = 1 to i = n.


• In each iteration, the value of i is incremented by 1.
• Therefore, the loop executes n iterations.
Asymptotic Time Complexity
for (i=0;i<n;i++){
    // some statement   O(n) WHY?
}
● What is the time complexity of the above statement?

• The loop runs from i = 0 to i < n.
• In each iteration, the value of i is incremented by 1.
• Therefore, the loop body executes n times (the condition is checked n+1 times).
Asymptotic Time Complexity
 Time complexity of an algorithm with for loop iterating from 1 to n steps
Asymptotic Time Complexity - for loop (1 to n)
# Input : int A[n], array of n integers
# Output : Sum of all numbers in array A

Algorithm: int Sum(int A[], int n)
{
    int s=0;                     // 1
    for (int i=0; i<n; i++)      // n+1
        s = s + A[i];            // n
    return s;                    // 1
}

Total time taken = (n+1) + n + 2 = 2n+3
Time Complexity f(n) = 2n+3 ≈ O(n)
Asymptotic Time Complexity- Nested for loop(1-n),(1-m)
 Time complexity of an algorithm with nested for loops (here two for loops) iterating from 1 to
n and 1 to m steps respectively.
Asymptotic Time Complexity - Nested for loop (1 to n), (1 to n)
for i = 1 to n do              // c1·(n+1)
    for j = 1 to n do          // c2·n(n+1)
        sum = a + b;           // c3·n·n

 Analysis
T(n) = c1(n+1) + c2·n(n+1) + c3·n·n
T(n) = c1·n + c1 + c2·n² + c2·n + c3·n²
T(n) = n²(c2 + c3) + n(c1 + c2) + c1
T(n) = an² + bn + c
T(n) = O(n²)
Asymptotic Time Complexity
Along the same lines as the previous slides, we can say the asymptotic time complexity of the
following algorithms is O(n) and O(n²) respectively.

for (i=0;i<n;i++){              // n+1 condition checks → O(n)
    //some statement            // some FIXED steps, n times
}

for (i=0;i<n;i++){              // n+1 condition checks
    for (j=0;j<n;j++){          // n+1 checks for each of the n iterations of the outer loop
        //some statement        // some FIXED steps, n² times for both loops
    }
}
O(n²)
Asymptotic Time Complexity
Along the same lines as the previous slides, we can say the asymptotic time complexity of the
following algorithms is O(n) and O(n²) respectively.

for (i=0;i<n;i+=2){             // about n/2 condition checks → O(n)
    //some statement            // some FIXED steps, n/2 times
}

for (i=0;i<n;i+=2){             // O(n/2) steps
    for (j=0;j<n;j+=2){         // O(n/2) steps for each iteration of the outer loop
        //some statement        // some FIXED steps, O(n²) in total
    }
}
O(n²)
Asymptotic Time Complexity
for (i=1; i<n; i=i*2){
    // some statement           // some FIXED steps (constant time)
}
● What is the time complexity of the above statement?

O(log2n) WHY?
Asymptotic Time Complexity
for (i=1; i<n; i=i*2){
    // some statement           // some FIXED steps (constant time)
}
O(log2n) WHY?

Step | i                | i < n ?
  1  | 1 = 2⁰           | T
  2  | 2·2⁰ = 2¹        | T
  3  | 2·2¹ = 2²        | T
  4  | 2·2² = 2³        | T
  …  | …                | T
  k  | 2·2ᵏ⁻² = 2ᵏ⁻¹    | F, i.e., 2ᵏ⁻¹ ≥ n

While the loop is still running, i < n, so:
• 2ᵏ⁻¹ < n
• log₂ 2ᵏ⁻¹ < log₂ n
• k − 1 < log₂ n
• k < log₂ n + 1
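
This bound is easy to verify empirically; a small C++ sketch (the helper name countDoublingSteps
is ours, not from the slides) that counts the loop's iterations and prints log2(n) alongside:

#include <cmath>
#include <iostream>
using namespace std;

// Counts how many times the body of `for (i = 1; i < n; i = i * 2)` executes.
int countDoublingSteps(long long n) {
    int steps = 0;
    for (long long i = 1; i < n; i = i * 2)
        steps++;                       // one constant-time body execution
    return steps;
}

int main() {
    long long tests[] = {8, 1000, 1000000};
    for (long long n : tests)
        cout << "n = " << n << ": " << countDoublingSteps(n)
             << " iterations, log2(n) = " << log2((double)n) << endl;
    // e.g. n = 1000 gives 10 iterations while log2(1000) ≈ 9.97: the loop is O(log2 n)
    return 0;
}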
Asymptotic Time Complexity
for (i=0; i<n; i=i*2){
    // some statement           // some FIXED steps (constant time)
}
● What is the time complexity of the above statement?

Not O(log2n)! The loop will never stop, because i is always 0:
multiplying i by 2 leaves i at 0. BE CAREFUL with such statements.
Asymptotic Time Complexity
for (i=n; i>1; i=i/2){
    // some statement           // some FIXED steps (constant time)
}
● What is the time complexity of the above statement?

O(log2n) WHY?

Step | i        | i > 1 ?
  1  | n        | T
  2  | n/2      | T
  3  | n/2²     | T
  4  | n/2³     | T
  …  | …        | T
  k  | n/2ᵏ⁻¹   | F, i.e., n/2ᵏ⁻¹ ≤ 1

The loop stops when i ≤ 1, so:
• n/2ᵏ⁻¹ ≤ 1
• n ≤ 2ᵏ⁻¹
• k ≥ log₂ n + 1
Asymptotic Time Complexity
for (i=2; i<n; i=i*i){
    // some statement           // some FIXED steps (constant time)
}
● What is the time complexity of the above statement?

O(log2(log2n)) WHY?

Step | i              | i < n ?
  1  | 2 = 2^(2⁰)     | T
  2  | 2² = 2^(2¹)    | T
  3  | 4² = 2^(2²)    | T
  4  | 16² = 2^(2³)   | T
  …  | …              | T
  k  | 2^(2ᵏ⁻¹)       | F, i.e., 2^(2ᵏ⁻¹) ≥ n

The loop stops when i ≥ n, so:
• 2^(2ᵏ) ≥ n
• 2ᵏ ≥ log₂ n   (taking log₂ of both sides)
• k ≥ log₂(log₂ n)
Analyzing Control Statements
Example 1:
sum = a + b;                    // c
 The statement is executed once only.
 So, the execution time 𝑇(𝑛) is some constant c ≈ 𝑶(𝟏)

Example 2:
for i = 1 to n do               // c1·(n+1)
    sum = a + b;                // c2·n
 Total time is denoted as,
𝑻(𝒏) = 𝒄𝟏𝒏 + 𝒄𝟏 + 𝒄𝟐𝒏
𝑻(𝒏) = 𝒏(𝒄𝟏 + 𝒄𝟐) + 𝒄𝟏 ≈ 𝑶(𝒏)

Example 3:
for i = 1 to n do               // c1·(n+1)
    for j = 1 to n do           // c2·n(n+1)
        sum = a + b;            // c3·n·n
 Analysis
T(n) = c1(n+1) + c2·n(n+1) + c3·n·n
T(n) = n²(c2 + c3) + n(c1 + c2) + c1
𝑻(𝒏) = 𝑶(𝒏²)
Analyzing Control Statements
Example 4:
l = 0
for i = 1 to n do
    for j = 1 to i do
        for k = j to n do
            l = l + 1
𝒕(𝒏) = 𝜽(𝒏³)

Example 5:
l = 0
for i = 1 to n do
    for j = 1 to n do
        for k = 1 to n do
            l = l + 1
𝒕(𝒏) = 𝜽(𝒏³)

Example 6:
for j = 1 to n do
    for k = 1 to j do
        sum = sum + j * k       // 𝜽(𝒏²)
for l = 1 to n do
    sum = sum - l + 1           // 𝜽(𝒏)
printf("sum is now %d", sum)    // 𝜽(𝟏)
𝒕(𝒏) = 𝜽(𝒏²) + 𝜽(𝒏) + 𝜽(𝟏) = 𝜽(𝒏²)
Asymptotic Control Statements
Along the same lines as the previous slides, we can say the asymptotic time complexity of the
following algorithm is O(n·log2n).

for (i=0;i<n;i++){              // O(n) steps
    for (j=1;j<n;j=j*2){        // O(log2n) steps for each iteration of the outer loop
        //some statement        // some FIXED steps, n·log2n times
    }
}
O(nlog2n)
Asymptotic Control Statements
Along the same lines as the previous slides, we can say the asymptotic time complexity of the
following algorithm is O(n²).

for (i = 0; i < n; i++){        // O(n) steps
    for (j = 0; j <= i; j++){   // O(i) steps for each iteration of the outer loop
        //some statement        // some FIXED steps
    }
}

Total steps:
For i=0, the inner loop runs once (j=0)
For i=1, the inner loop runs two times (j=0,1)
For i=2, the inner loop runs three times (j=0,1,2)
...
For i=n-1, the inner loop runs n times (j=0,1,2,...,n-1)
Summing all the above iterations: 1+2+3+...+n = n(n+1)/2 = O(n²)
Asymptotic Control Statements
Along the same lines as the previous slides, we can say the asymptotic time complexity of the
following algorithm is O((log2n)²).

for (i = 1; i < n; i=i*2){      // O(log2n) steps for the outer loop
    for (j = 1; j < n; j=j*2){  // O(log2n) steps for the inner loop
        //some statement        // some FIXED steps
    }
}
O((log2n)²)
Analyzing Control Statements
(Note: i must start at 1, not 0, for the multiplicative loops below to terminate; see the earlier caution.)

int i = 1;
while (i < 5) {     // O(1): constant bound
    // statements
    i *= 2;
}

int i = 0;
while (i < n) {     // O(n)
    // statements
    i++;
}

int i = 1;
while (i < n) {     // O(log2n)
    // statements
    i *= 2;
}

int i = 1;
while (i < n) {     // O(log3n)
    // statements
    i *= 3;
}
Analyzing Control Statements
int i=1;
while(i<n)
{
    j=n;
    while(j>1)
    {
        j=j/2;      // inner loop: O(log2n)
    }
    i=2*i;          // outer loop runs O(log2n) times
}
// Total: O(log2n · log2n) = O((log2n)²)
Space Complexity
 Whenever a solution to a problem is written, some memory is required to complete it. For any
algorithm, memory may be used for the following:
1. Variables (this includes the constant values and temporary values)
2. Program Instructions
3. Execution
 Space complexity is the amount of memory used by the algorithm (including the input values to
the algorithm) to execute and produce the result.
 Sometimes Auxiliary Space is confused with Space Complexity. But Auxiliary Space is the extra
space or the temporary space used by the algorithm during its execution.

 Space Complexity = Auxiliary Space + Input space


Memory Usage while Execution
 While executing, an algorithm uses memory space for three reasons:
1. Instruction Space: the amount of memory used to save the compiled version of the instructions.
2. Environmental Stack: sometimes an algorithm (function) may be called inside another
algorithm (function). In such a situation, the current variables are pushed onto the system stack, where
they wait for further execution, and then the call to the inner algorithm (function) is made.
 For example, if a function A() calls function B() inside it, then all the variables of function A()
will get stored on the system stack temporarily while function B() is called and executed
inside function A().
3. Data Space:
 The amount of space used by the variables and constants.
 While calculating the space complexity of an algorithm, we usually consider only the Data Space,
neglecting the Instruction Space and the Environmental Stack.
Calculating the Space Complexity
 For calculating the space complexity, we need to know the amount of memory used by different
types of variables, which generally varies for different operating systems, but the
method for calculating the space complexity remains the same.

TYPE                  SIZE
bool, char            1 byte
short                 2 bytes
int, long, float      4 bytes
double, long double   8 bytes
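
These sizes are platform-dependent; a minimal C++ sketch for printing the actual sizes under
your own compiler (the values may differ from the table above):

#include <iostream>
using namespace std;

// Prints the size of each type on the current platform; the table above
// assumes a typical compiler, but sizes are implementation-defined.
int main() {
    cout << "bool:        " << sizeof(bool)        << " byte(s)" << endl;
    cout << "char:        " << sizeof(char)        << " byte(s)" << endl;
    cout << "short:       " << sizeof(short)       << " byte(s)" << endl;
    cout << "int:         " << sizeof(int)         << " byte(s)" << endl;
    cout << "long:        " << sizeof(long)        << " byte(s)" << endl;  // often 8 on 64-bit Linux
    cout << "float:       " << sizeof(float)       << " byte(s)" << endl;
    cout << "double:      " << sizeof(double)      << " byte(s)" << endl;
    cout << "long double: " << sizeof(long double) << " byte(s)" << endl;
    return 0;
}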
Examples-1
void basicFunction(int n) {
    int x = 10;        // O(1) space
    int y = 20;        // O(1) space
    int sum = x + y;   // O(1) space
    cout << sum;
}

Space complexity:
Total space for variables (x, y, sum): 4 + 4 + 4 = 12 bytes
Space Complexity: O(1) (constant space)
Memory Usage: 12 bytes


Examples-2: Linear Space (Array Storage)
// n is the length of array a[]
int sum(int a[], int n)
{
    int x = 0;                    // 4 bytes for x
    for(int i = 0; i < n; i++)    // 4 bytes for i
    {
        x = x + a[i];
    }
    return(x);
}

Space complexity:
Array a[]: 4 * n bytes
Other variables: 4 bytes each for x, n, i and the return value.
Space Complexity: O(n)
Memory Usage: 4n + 16 bytes
Examples-2: 2D Array (Matrix)
void matrixFunction(int n) {
    int matrix[n][n];    // 4 bytes * n * n
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            matrix[i][j] = i * j;
        }
    }
    cout << matrix[n-1][n-1];
}

Space complexity:
Matrix matrix[n][n]: 4 * n * n bytes
Other variables (i, j): 4 bytes each (constant)
Space Complexity: O(n²)
Memory Usage: 4n² bytes for the matrix
Example-3: Singly Linked List
 Node Structure: each node consists of:
 data: 4 bytes (integer)
 next: 8 bytes (pointer to the next node)
 Total size of one node: 4 + 8 = 12 bytes
 Number of Nodes: n (one node for each element in the list).
 Space Used for the Linked List: for n nodes: 12 * n bytes
 Additional Space:
 Node* head: 8 bytes
 Node* current: 8 bytes

Example for n = 5:
Number of Nodes: 5
Space for Nodes: 12 * 5 = 60 bytes
Pointers (head and current): 8 + 8 = 16 bytes
Total Memory Usage: 60 + 16 = 76 bytes

• Total Space Complexity:
• Dynamic Memory: O(n) (for storing nodes)
• Fixed Memory: O(1) (for head and current pointers)
• Total Memory Usage: 12 * n + 16 bytes
Example-3: Doubly Linked List
 Node Structure:
 data: 4 bytes
 next: 8 bytes
 prev: 8 bytes
 Total size of one node: 4 + 8 + 8 = 20 bytes
 Space Used for the Doubly Linked List: for n nodes: 20 * n bytes
 Additional Pointers:
 Node* head: 8 bytes
 Node* current: 8 bytes

Example for Doubly Linked List (n = 5):
Number of Nodes: 5
Space for Nodes: 20 * 5 = 100 bytes
Pointers (head and current): 8 + 8 = 16 bytes
Total Memory Usage: 100 + 16 = 116 bytes

• Total Space Complexity:
• Dynamic Memory: O(n)
• Fixed Memory: O(1)
• Memory Usage for Doubly Linked List: 20 * n + 16 bytes
Linear Arrays
 We already saw various algorithms to perform certain operations on arrays in our previous
lectures.
 Runtime
 – Accessing an element: O(1)
 – Inserting an element at the first index: O(n)
 – Inserting an element at ith index: O(n)
 – Searching an element (unsorted array): O(n)
 – Deleting an element at the first index: O(n)
 – Deleting an element at ith index: O(n)
Singly Linked Lists
 Runtime
 – Accessing an element: O(n)
 – Inserting an element at the first index (at Head node): O(1)
 – Inserting an element at ith index: O(n)
 – Searching an element: O(n)
 – Deleting an element at the first index (at Head node): O(1)
 – Deleting an element at ith index: O(n)
Stack
 Runtime
 – Push operation: O(1)
 ● Hence runtime to insert n items will require O(n) time.
 – Pop operation: O(1)
 – Checking whether stack is full or empty: O(1)
 ● Extra space required for above operations: O(1)
Queue
 Runtime
 – Enqueue operation: O(1)
 Hence runtime to insert n items will require O(n) time.
 – Dequeue operation: O(1)
 – Checking whether queue is full or empty: O(1)
 Extra space required for above operations: O(1)
Factorial - Iterative algorithm
factorial_iterative(n)
{
    result = 1
    for (i = 2; i <= n; i++)
        result = result * i;
    return result;
}

Time Complexity (explanation):
• Initialization: result = 1 is executed once before the loop starts → O(1).
• Loop Condition: the condition i <= n is checked about n times → O(n).
  Incrementing i is performed n−1 times → O(n).
• The key operation result = result * i is performed n−1 times → O(n).
• Return Statement: returning the value of result is a single operation → O(1).

Total Time Complexity:
Adding all the steps: O(1) + O(n) + O(n) + O(n) + O(1) = O(n)
Thus, the overall time complexity of the iterative factorial algorithm is O(n).
Factorial - Iterative algorithm
factorial_iterative(n)
{
    result = 1
    for (i = 2; i <= n; i++)
        result = result * i;
    return result;
}

Space Complexity:
Variables: n (4 bytes), result (4 bytes), i (4 bytes)
Fixed Memory Usage: total for variables: 4 + 4 + 4 = 12 bytes
Space Complexity: O(1) (constant space).
Factorial - Recursive algorithm
factorial(int n)
{
    if (n == 0 || n == 1)
        return 1;
    return n * factorial(n - 1);
}

Runtime analysis? How do we do the runtime analysis for recursive function calls?
Factorial based
Algorithms
Recursion
 Recursion is a process in which a function calls itself directly or indirectly; the corresponding
function is called a recursive function.
 Recursion involves calling the same function within itself, which leads to a call stack.
 A recursive function must have a base case or stopping criterion to avoid infinite recursion.
 The primary property of recursion is the ability to solve a problem by breaking it down into
smaller subproblems, each of which can be solved in the same way.
 Many algorithms (e.g., divide and conquer) are recursive in nature.
Recursion
 Breaking a bigger problem into smaller problems:
 2³ = 2 × 2²
    = 2 × 2 × 2
 aⁿ = a × aⁿ⁻¹
    = a × a × aⁿ⁻²
    = a × a × a × aⁿ⁻³
 ...
    = a × a × a × ... × a (n times)
Recursion
 Factorial Problem: Given an integer n, find its factorial n!
 Breaking the bigger problem into smaller problems:
 n! = n × (n-1) × (n-2) × ... × 1
Algorithm:
Fact(n):
    if (n == 0)
        return 1
    else
        return n * Fact(n-1)

Analyzing a RECURSIVE FUNCTION CALL involves two steps:
1. Deriving a recurrence relation
2. Analyzing/solving the recurrence relation
Deriving the recurrence relation
 Factorial Problem: Given an integer n, find its factorial n!
 Breaking the bigger problem into smaller problems:
 n! = n × (n-1) × (n-2) × ... × 1
Algorithm:
Fact(n):
    if (n == 0)            // base case where recursion stops: constant time O(1)
        return 1
    else
        return n * Fact(n-1)   // Fact(n-1) takes T(n-1) time; multiplication takes O(1)

Let us assume the call to this function with argument n takes T(n) time.
Total time:
T(n) = T(n-1) + O(1)   when n > 0
T(n) = 1               when n = 0
Analyzing Recurrence Relations
 It means deriving an analytical expression for a given recurrence relation.
 There are broadly three methods:

1. Iteration based method
2. Recursion-tree based method
3. Master method
Iteration based method
 Iteration based method: the steps involved:
 We iteratively insert the smaller values to reach to the base case of the recurrence relation.
 The Iteration Method (also known as the Substitution Method) is a simple and effective way to
analyze recurrence relations by expanding them step by step until a pattern emerges. Here’s
how it works:
 Steps for Iteration Method:
1.Expand the recurrence relation step by step by substituting previous terms.
2.Look for a pattern that emerges after several expansions.
3.Generalize the pattern in terms of n.
4.Solve for the base case to determine the exact closed-form expression.
Example:
void test(int n){
    if(n==0)
        return;
    cout<<n<<endl;
    test(n-1);
}

T(n) = T(n-1)+1, when n>0
T(n) = 1,        when n=0
Iteration based
 Solving the Recurrence:
T(n)= T(n-1)+O(1) when n>0
 Let's expand the recurrence step by step:
T(n)= O(1) when n=0
 T(n)=T(n−1)+O(1)
 T(n−1)=T(n−2)+O(1)
 T(n-2) = T(n-3) + O(1)
 …..
 Expanding this process until we reach the base case(n=0):
 T(n) = T(n−1)+O(1) = T(n−2)+O(1)+O(1) =⋯ = T(0) + nO(1)
 T(n) = T(0) + O(n) = O(1) + nO(1) = (1+n)O(1) = O(n)
 Thus, the asymptotic complexity is: T(n)=O(n)
Recursion Tree method
 The recursion tree method is a technique used to visualize and solve recurrence relations. It's
particularly helpful in analyzing the time complexity of recursive algorithms. The method
breaks down the recurrence relation into a tree structure, where each node represents the cost
of a subproblem, and the edges represent recursive calls.
Steps
 Construct the Recursion Tree:
 The root of the tree represents the original problem (with the given input size).
 The child nodes represent the subproblems created by each recursive call.
 The process continues until the base case is reached, where the recursion stops.

 Calculate the Cost at Each Level:


 Each node in the recursion tree represents the work done at that recursive call.
 The cost at each level depends on the number of recursive calls and the work done at each call.

 Sum the Costs Across All Levels:


 The total work done is the sum of the costs at all levels of the tree.
 Analyze how many levels the tree has and the work done at each level.

 Find the Total Work and Derive the Time Complexity:


 After calculating the cost at each level, sum the costs across all levels to find the total time complexity.
Example:
void test(int n) {
if (n == 0)
return;
test(n - 1);
printf("%d\n", n);
}
The recurrence relation for this function is:

T(n)=T(n−1)+O(1) for n>0


T(0)= O(1)
Recursion Tree method
The recursion tree is a single chain: T(n) → T(n-1) → T(n-2) → … → T(1) → T(0),
with O(1) work at each level.

Recursion depth (height of the tree) is n+1, so:
Total runtime = O(n) levels × O(1) work per level = O(n)

Total Work:
The total work is the sum of the work at all levels.
Since each level has a cost of O(1), and there are n levels (from n down to 0), the total cost is:
T(n) = O(1) + O(1) + ⋯ + O(1) (n times) = O(n)
Examples 1
 Analyze the following recurrence:
 T(n) = 2T(n/2) + n, if n > 1
 T(n) = O(1), if n = 1
 Iteration based method (unfolding method):

T(n) = n + n·log2n  →  O(n·log2n)
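
The unfolding behind this result, written out in LaTeX (a standard expansion, assuming n is a
power of 2 so the halving is exact):

% Unfolding T(n) = 2T(n/2) + n, assuming n = 2^k
\begin{align*}
T(n) &= 2\,T(n/2) + n \\
     &= 2\big(2\,T(n/4) + n/2\big) + n = 4\,T(n/4) + 2n \\
     &= 8\,T(n/8) + 3n \\
     &\;\;\vdots \\
     &= 2^k\,T(n/2^k) + k\,n .
\end{align*}
% The base case is reached when n/2^k = 1, i.e. k = \log_2 n:
\[
T(n) = n\,T(1) + n\log_2 n = n + n\log_2 n = O(n\log_2 n).
\]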


Examples 2 with recursion tree
 Now analyzing the previous example using a recursion tree:
 T(n) = 2T(n/2) + n, if n > 1
 T(n) = O(1), if n = 1
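
A sketch of the tree's level-by-level cost (again assuming n is a power of 2): level i has 2^i
subproblems of size n/2^i, so every level costs n, and there are log2(n) + 1 levels:

% Cost per level of the recursion tree for T(n) = 2T(n/2) + n
\[
\underbrace{n}_{\text{level }0}
+ \underbrace{2\cdot\tfrac{n}{2}}_{\text{level }1}
+ \underbrace{4\cdot\tfrac{n}{4}}_{\text{level }2}
+ \cdots
= \sum_{i=0}^{\log_2 n} 2^i \cdot \frac{n}{2^i}
= n\,(\log_2 n + 1) = \Theta(n\log n).
\]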
Example 3: Head Recursion
void test(int n){
    if(n==0)
        return;
    test(n-1);
    cout<<n<<endl;
}

What would be the recurrence relation?
T(n) = T(n-1)+O(1), when n>0
T(n) = 1,           when n=0
3. Iteration based
 Solving the Recurrence:
T(n)= T(n-1)+O(1) when n>0
 Let's expand the recurrence step by step:
T(n)= O(1) when n=0
 T(n)=T(n−1)+O(1)
 T(n−1)=T(n−2)+O(1)
 T(n-2) = T(n-3) + O(1)
 …..
 Expanding this process until we reach the base case(n=0):
 T(n) = T(n−1)+O(1) = T(n−2)+O(1)+O(1) =⋯ = T(0) + nO(1)
 T(n) = T(0) + O(n) = O(1) + nO(1) = (1+n)O(1) = O(n)
 Thus, the asymptotic complexity is: T(n)=O(n)
Example 4: Tail Recursion
void test(int n){
    if(n==0)
        return;
    cout<<n<<endl;
    test(n-1);
}

What would be the recurrence relation?
T(n) = T(n-1)+O(1), when n>0
T(n) = O(1),        when n=0
4. Iteration based
 Solving the Recurrence:
T(n)= T(n-1)+O(1) when n>0
 Let's expand the recurrence step by step:
T(n)= O(1) when n=0
 T(n)=T(n−1)+O(1)
 T(n−1)=T(n−2)+O(1)
 T(n-2) = T(n-3) + O(1)
 …..
 Expanding this process until we reach the base case(n=0):
 T(n) = T(n−1)+O(1) = T(n−2)+O(1)+O(1) =⋯ = T(0) + nO(1)
 T(n) = T(0) + O(n) = O(1) + nO(1) = (1+n)O(1) = O(n)
 Thus, the asymptotic complexity is: T(n)=O(n)
Example 5: Mid Recursion
void test(int n){
    if(n==0)
        return;
    cout<<n<<endl;
    test(n-1);
    cout<<n<<endl;
}

What would be the recurrence relation?
T(n) = T(n−1)+O(1)+O(1), when n>0
T(n) = 1,                when n=0
5. Iteration based
 Solving the Recurrence:
T(n)= T(n-1)+O(1) when n>0
 Let's expand the recurrence step by step:
T(n)= O(1) when n=0
 T(n)=T(n−1)+O(1)
 T(n−1)=T(n−2)+O(1)
 T(n-2) = T(n-3)+O(1)
 …..
 Expanding this process until we reach the base case(n=0):
 T(n) = T(n−1)+O(1) = T(n−2)+O(1)+O(1) =⋯ = T(0) + nO(1)
 T(n) = T(0) + O(n) = O(1) + nO(1) = (1+n)O(1) = O(n)
 Thus, the asymptotic complexity is: T(n)=O(n)
Example 6:
void test(int n){
    if(n==0)
        return;
    for(i=0;i<n;i++)
        print(n);
    test(n-1);
}
T(n) = T(n-1) + 2n+1, n>0. We can also write it in asymptotic notation:
T(n) = T(n-1) + O(n), n>0
T(n) = 1,             n=0
6. Iteration based
 Solving the Recurrence:
T(n)= T(n-1)+O(n) , when n>0
 T(n)=T(n−1)+O(n)
T(n)= O(1) , when n=0
 T(n−1)=T(n−2)+O(n-1)
 T(n-2) = T(n-3) + O(n-2)
 …..
 Expanding this process until we reach the base case:
 T(n) = T(n−k)+ O(n-(k-1))+ O(n-(k-2)) +........+O(n-1) + O(n) Now assume n-k = 0
 T(n)= T(0) + O(n)+O(n−1)+O(n−2)+⋯+O(1)
 T(n) = T(0) + O(1+2+3+.........+ (n-1) + n) = O(1) + O(n(n+1)/2)
 Thus, the asymptotic complexity is: T(n)=O(n2)
6: Iteration based
 T(n) = T(n-1) + O(n), n>0
 T(n) = O(1),          n=0

6: Recursion Tree method
The recursion tree is a chain T(n) → T(n-1) → … → T(1) → T(0); the node for T(k) does O(k)
work (the for loop runs k times: n times at the root, then n-1 times, ..., down to 1 time, and
O(1) at the base case).

The recursion tree has depth n, so:
Total runtime = O(n) + O(n-1) + ⋯ + O(1)
T(n) = O(n(n+1)/2)
T(n) = O(n²)
Example 7:
void test(int n){
    if(n==0)
        return;
    for(i=1;i<n;i=2*i)
        print(n);
    test(n-1);
}
T(n) = T(n-1) + 2·log2n + 1, n>0. We can also write:
T(n) = T(n-1) + O(log2n), n>0
T(n) = O(1),              n=0
7: Iteration based
 Expanding step by step:
 T(n) = T(n−1) + O(log n)
 T(n−1) = T(n−2) + O(log(n−1))
 T(n−2) = T(n−3) + O(log(n−2))
 Expanding further:
 T(n) = O(log n) + O(log(n−1)) + O(log(n−2)) + ⋯ + O(log 1)
 Since the sum of logarithms satisfies:
 ∑ₖ₌₁ⁿ O(log k) = O(log(n!))
Using Stirling's approximation:
log(n!) ≈ n·log n − n
Thus, the final time complexity is: T(n) = O(n·log n)
7: Iteration based
T(n) = T(n-1) + O(log2n), n>0
T(n) = O(1) , n=0
7: Recursion Tree method
The recursion tree is a chain test(4) → test(3) → test(2) → test(1) → test(0):
print(4) runs log₂(4) = 2 times; print(3) runs log₂(3) ≈ 1.58 times; print(2) runs
log₂(2) = 1 time; print(1) runs log₂(1) = 0 times; test(0) is the base case (no more calls).

Recursion Depth: O(n)
Work per level: O(log n)
Total Complexity: T(n) = O(n·log n)
Examples
 T(n)=T(n−1)+1 → O(n)
 T(n)=T(n−1)+n → O(n²)
 T(n)=T(n−1)+log n → O(n·log n)
 T(n)=T(n−2)+1 → O(n)
 T(n)=T(n−1)+n² → O(n³)
 T(n)=T(n−100)+n → O(n²)
 T(n)=2T(n−1)+1 → O(2ⁿ)
 T(n)=3T(n−1)+n → O(3ⁿ)
Master Theorem

Master Method for Solving Recurrences

(Additional Material)
Master Theorem
 T(n) = aT(n/b) + f(n),
 where,
 n = size of input
 a = number of subproblems in the recursion
 n/b = size of each subproblem. All subproblems are assumed to have the same size.
 f(n) = cost of the work done outside the recursive call,
 which includes the cost of dividing the problem and cost of merging the solutions
 Here, a ≥ 1 and b > 1 are constants, and f(n) is an asymptotically positive function.
Master Theorem
 If a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function, then the time
complexity of the recursive relation
 T(n) = a·T(n/b) + f(n)
 is given by the following asymptotic bounds:
1. If f(n) = O(n^(log_b a − ε)), then T(n) = θ(n^(log_b a))
2. If f(n) = θ(n^(log_b a)), then T(n) = θ(n^(log_b a) · log n)
3. If f(n) = Ω(n^(log_b a + ε)), then T(n) = θ(f(n))
 where ε > 0 is a constant.
Example-1

T(n)=2T(n/2)+1
Example-2

T(n)=4T(n/2)+n
Example-3

T(n)=8T(n/2)+n
Example-4

T(n)=2T(n/2)+n
Example-5

T(n)=4T(n/2)+n2
Example-6

T(n)=2T(n/2)+O(n2)
Example-7

T(n)=3T(n/4)+nlogn
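
The slides leave Examples 1-7 as exercises; the answers below are our own applications of the
three Master theorem cases above:

% Worked Master theorem answers (computed here; not given on the slides)
\begin{align*}
&\text{Ex.1: } T(n)=2T(n/2)+1      && n^{\log_2 2}=n,\; f(n)=1=O(n^{1-\epsilon}) && \Rightarrow T(n)=\Theta(n)\\
&\text{Ex.2: } T(n)=4T(n/2)+n      && n^{\log_2 4}=n^2,\; f(n)=n=O(n^{2-\epsilon}) && \Rightarrow T(n)=\Theta(n^2)\\
&\text{Ex.3: } T(n)=8T(n/2)+n      && n^{\log_2 8}=n^3,\; \text{case 1} && \Rightarrow T(n)=\Theta(n^3)\\
&\text{Ex.4: } T(n)=2T(n/2)+n      && f(n)=\Theta(n^{\log_2 2}),\; \text{case 2} && \Rightarrow T(n)=\Theta(n\log n)\\
&\text{Ex.5: } T(n)=4T(n/2)+n^2    && f(n)=\Theta(n^{\log_2 4}),\; \text{case 2} && \Rightarrow T(n)=\Theta(n^2\log n)\\
&\text{Ex.6: } T(n)=2T(n/2)+O(n^2) && f(n)=\Omega(n^{1+\epsilon}),\; \text{case 3} && \Rightarrow T(n)=\Theta(n^2)\\
&\text{Ex.7: } T(n)=3T(n/4)+n\log n && n^{\log_4 3}\approx n^{0.79},\; f(n)=\Omega(n^{0.79+\epsilon}),\; \text{case 3} && \Rightarrow T(n)=\Theta(n\log n)
\end{align*}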
Problem-1 (Tower of Hanoi)
Tower of Hanoi
 Tower of Hanoi is a mathematical puzzle where we are
given three rods and n disks.
 The objective of the puzzle is to move the entire stack of
disks to another rod,
 following the rules given below:
 1. Only one disk can be moved at a time.
 2. Each move consists of taking the upper disk from one
of the stacks and placing it on top of another stack i.e.
a disk can only be moved if it is the uppermost disk on a
stack.
 3. No disk may be placed on top of a smaller disk.
Tower of Hanoi
 Algorithm:
Input: Number of disks n, Source rod A, Destination rod C, Auxiliary rod B
Output: Step-by-step sequence of moves to transfer all disks from A to C using B
 Base Case:
 If n == 0, return (no disks to move).
 Recursive Steps:
 Move n-1 disks from A to B using C as auxiliary.
 Move the largest disk (n) from A to C.
 Move n-1 disks from B to C using A as auxiliary.
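
A minimal C++ sketch of exactly this recursive procedure (the function name hanoi and the char
rod labels are our choices):

#include <iostream>
using namespace std;

// Tower of Hanoi: move n disks from rod `from` to rod `to`, using rod `aux`.
void hanoi(int n, char from, char to, char aux) {
    if (n == 0) return;              // base case: no disks to move
    hanoi(n - 1, from, aux, to);     // move n-1 disks from A to B using C
    cout << "Move disk " << n << " from " << from << " to " << to << endl;
    hanoi(n - 1, aux, to, from);     // move n-1 disks from B to C using A
}

int main() {
    hanoi(3, 'A', 'C', 'B');         // prints the 2^3 - 1 = 7 moves
    return 0;
}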
Tower of Hanoi
If T(n) represents the number of time steps to reach the solution:
 Move n-1 disks from rod A to rod B: it takes T(n-1) time steps
 Move the largest disk from rod A to rod C: it takes 1 time step
 Move the n-1 disks from rod B to rod C to get the final outcome on rod C: it takes T(n-1) time steps
 Total time steps: summing the above intermediate time steps we get:

 T(n) = T(n-1) + 1 + T(n-1)
 T(n) = 2T(n-1) + O(1)  if n > 0
 T(n) = 1               if n = 0 (Base Case)
Iteration method
T(n) = 2T(n-1) + O(1) if n > 0
T(n) = 1              if n = 0 (Base Case)

 T(n) = 2T(n−1) + 1
 T(n−1) = 2T(n−2) + 1
 T(n) = 2(2T(n−2)+1) + 1 = 2²T(n−2) + 2 + 1
 T(n) = 2³T(n−3) + 2² + 2 + 1
If we continue, we see the pattern:
 T(n) = 2ᵏT(n−k) + (2ᵏ − 1)
For k = n, we reach T(0) = 1:
 T(n) = 2ⁿT(0) + (2ⁿ − 1)
 T(n) = 2ⁿ + 2ⁿ − 1
 T(n) = 2ⁿ⁺¹ − 1
Exponential time complexity: O(2ⁿ)
Problem 2 – Fibonacci series
Fibonacci series
int fib(int n){
    if(n==0 or n==1)
        return n;
    return fib(n-1)+fib(n-2);
}
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...

F(n) = F(n−1) + F(n−2), for n ≥ 2
F(0) = 0, base case
F(1) = 1, base case
Fibonacci series
The recursion tree for T(n) branches into T(n-1) and T(n-2), each of which branches again,
forming a binary tree of depth about n. Counting one unit of work per call, level by level:
T(n) = 1 + 2 + 4 + 8 + ⋯ + 2ⁿ⁻¹
T(n) ≈ 2ⁿ − 1
so the running time is O(2ⁿ).

 T(n) = T(n−1) + T(n−2)
 Step 1: Assume an Exponential Solution
T(n) = rⁿ, where r is a constant. Substituting into the recurrence:
rⁿ = rⁿ⁻¹ + rⁿ⁻², and dividing both sides by rⁿ⁻² (assuming r ≠ 0):

r² − r − 1 = 0

Solving this quadratic gives r = (1 ± √5)/2; the dominant root is the golden ratio
φ = (1 + √5)/2 ≈ 1.618, so the recursion grows as T(n) = O(φⁿ) ≈ O(1.618ⁿ).
Problem 3 – Staircase problem
Staircase problem
 You are given a staircase of n steps, you are
at the bottom of the staircase. At any given
time step you can either move one step or
two steps.
 Your objective is to reach to the top of the
staircase.
 How many distinct ways are there to reach
the top of the staircase?
Staircase problem
 You can reach the top most stair either from
the second last stair (n-1) or the third last stair
(n-2).
 If T(n) is the number of distinct ways to
reach the nth step, then T(n) = T(n-1) + T(n-2),
 where T(n-1) and T(n-2) are the numbers of
ways to reach the (n-1)th and (n-2)th
steps respectively.
 You add them because reaching step n
either from step n-1 or from step n-2 are two
different scenarios.
Staircase problem
int countWays(int n) {
    // Base cases: If there are 0 or 1 stairs,
    // there is only one way to reach the top.
    if (n == 0 || n == 1)
        return 1;
    return countWays(n - 1) + countWays(n - 2);
}

 T(n) = T(n−1) + T(n−2), for n ≥ 2
 T(0) = 1, n=0 base case
 T(1) = 1, n=1 base case
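
A quick sanity check of the recurrence (our enumeration, not on the slides): T(4) = T(3) + T(2)
= 3 + 2 = 5, matching the five distinct ways to climb 4 stairs:

\[
T(4) = 5: \quad 1{+}1{+}1{+}1,\;\; 1{+}1{+}2,\;\; 1{+}2{+}1,\;\; 2{+}1{+}1,\;\; 2{+}2 .
\]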
Staircase problem
 You can see that the recurrence of this problem resembles that of the
Fibonacci series recursion.
 Hence we can solve this problem using the Fibonacci program
 The time complexity will be θ(1.618ⁿ).
Bitstrings of Length
 Find the total number of possible bitstrings of length n such that there are no two consecutive
zeros.
 For example : if n=2, then 01, 10 and 11 are valid bitstrings whereas 00 is not a valid bitstring.
Total number of valid bitstrings is 3.
 If n=3 then 000, 100, 001 are not valid bitstrings; the rest are valid. The total
number of valid bitstrings is 5.
 Try to find the answer using recursion. Formulate the recurrence relation.
Bitstrings of Length
int countBitstrings(int n) {
if (n == 1) return 2;
if (n == 2) return 3;
return countBitstrings(n - 1) + countBitstrings(n - 2);
}

 T(n) = T(n−1) + T(n−2), for n ≥ 3
 T(1) = 2, n=1 base case
 T(2) = 3, n=2 base case
Bitstrings of Length
 Computing the First Few Terms: Using the recurrence relation, let's compute the first few
values:
 Base Cases: T(1)=2 and T(2)=3
 Recursive Computation:
 T(3)=T(2)+T(1)=3+2=5 and T(4)=T(3)+T(2)=5+3=8 and T(5)=T(4)+T(3)=8+5=13
 T(6)=T(5)+T(4)=13+8=21 and T(7)=T(6)+T(5)=21+13=34
 So, the sequence begins as: 2, 3, 5, 8, 13, 21, 34, …
 Since this follows a Fibonacci-like recurrence, the values grow exponentially. The general
formula for Fibonacci-like sequences involves the golden ratio ϕ ≈ 1.618, so T(n) ≈ O(ϕⁿ).
Space Complexity
#include <iostream>
using namespace std;
int fact(int n) {
    if (n == 0) return 1;        // Base case
    return n * fact(n - 1);      // Recursive call
}
int main() {
    int num = 4;
    cout << "Factorial of " << num << " is " << fact(num) << endl;
    return 0;
}

Number of function calls = highest number of activation records on the call stack.
Call stack for num = 4 (top to bottom): fact(0) = 1, fact(1), fact(2), fact(3), fact(4), main()
Space Complexity (Fibonacci Sequence)
int fib(int n) {
    if (n <= 1) return n;              // Base cases
    return fib(n - 1) + fib(n - 2);    // Recursive calls
}
int main() {
    int num = 4;
    cout << fib(num) << endl;
    return 0;
}

(Recursion tree for fib(4) = 3: it calls fib(3) = 2 and fib(2) = 1; fib(3) calls fib(2) = 1 and
fib(1) = 1; each fib(2) calls fib(1) = 1 and fib(0) = 0.)

Although the tree has exponentially many nodes, the call stack holds only one root-to-leaf path
at a time, so: Space Complexity = depth of tree = O(n)
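
That O(n) stack bound can be seen empirically; a small C++ sketch (the depth-tracking
instrumentation is ours, not part of the algorithm) that records the maximum recursion depth
reached while computing fib(n):

#include <algorithm>
#include <iostream>
using namespace std;

int depth = 0, maxDepth = 0;           // instrumentation, not part of the algorithm

int fib(int n) {
    depth++;                           // entering one more stack frame
    maxDepth = max(maxDepth, depth);
    int result = (n <= 1) ? n : fib(n - 1) + fib(n - 2);
    depth--;                           // leaving the frame
    return result;
}

int main() {
    int tests[] = {5, 10, 20};
    for (int n : tests) {
        maxDepth = 0;
        fib(n);
        cout << "fib(" << n << "): max recursion depth = " << maxDepth << endl;
        // the depth grows linearly in n, even though the number of calls is ~2^n
    }
    return 0;
}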


Iterative approach
 // Iterative approach for factorial
int factorial_iterative(int n) {
int result = 1;
for (int i = 2; i <= n; ++i) {
result *= i;
}
return result;
}
Space Complexity
Recursive Implementation:
Space Complexity:
O(n) because each recursive call uses an additional stack frame.

Iterative Implementation:
Space Complexity:
O(1) since it only uses a constant amount of space regardless of the input size.
FINISH of Chapter-2

Thank You