Analysis of Algorithms
Algorithm
An algorithm is a finite set of precise instructions for performing a computation or for solving a problem.
Characteristics:
• Should have a clear starting and ending point.
• Clear, unambiguous, finite steps with a defined input and output.
• Algorithms can be expressed in natural language, pseudocode, or programming languages.

Flowchart
A flowchart is a visual representation of an algorithm using shapes and arrows to depict the flow of control.
Characteristics:
• It uses standardized symbols (like ovals for start/end, rectangles for processes, diamonds for decisions) to show the steps and the order in which they occur.

Pseudocode
Example: find the maximum number in a list:

    FUNCTION findMax(list):
        max ← list[0]
        FOR each num IN list:
            IF num > max THEN
                max ← num
        RETURN max
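The same algorithm can also be expressed in a programming language. A minimal Java version (the method name and parameter type are assumed here, not part of the original notes):

    // Java version of findMax (names assumed for illustration)
    static int findMax(int[] list) {
        int max = list[0];          // assumes the list is non-empty
        for (int num : list) {
            if (num > max)
                max = num;          // remember the largest value seen so far
        }
        return max;
    }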
In summary:
• Algorithm: A clear set of steps to solve a problem.
• Pseudocode: A text-based version of an algorithm that is easier to read and understand.
• Flowchart: A visual representation that illustrates the steps and flow of the algorithm.

Suppose, for example, we have four algorithms to solve a problem P: A1, A2, A3, and A4, and each of them has an execution time of T1, T2, T3, and T4. Which one will you choose?
Of course, we will choose the one with the shortest execution time. So, we need to do analysis of algorithms, such that the analysis is independent of machine time, programming style, etc.

How do we compare algorithms?
1. Implement the algorithms and measure their running times experimentally?
Not good:
a) It is necessary to implement the algorithm, which may be difficult.
b) Results may not be indicative of the running time on other inputs not included in the experiment.
c) To compare two algorithms, the same hardware and software environments must be used, and the same inputs.
2. Count the number of statements executed?
Not good: the number of statements varies with the programming language as well as with the style of the individual programmer.
Instead, we determine how the running time increases as the size of the problem increases.

Time complexity
The time complexity of an algorithm is defined as the number of operations done by the algorithm to solve a problem, as a function of the problem size.

Space complexity
The space complexity of an algorithm is defined as the amount of storage used by the algorithm to solve a problem, as a function of the problem size.

Notice that the term complexity here has nothing to do with an algorithm being simple or complicated.

We consider what is called worst-case, average-case, and best-case complexity.
Worst case complexity
It is the maximum number of operations done by an algorithm to solve a problem as a function of the problem size, for all inputs of size n.

Average case complexity
It is the average number of operations done by an algorithm to solve a problem as a function of the problem size, for all inputs of size n.

Best case complexity
It is the minimum number of operations done by an algorithm to solve a problem as a function of the problem size, for all inputs of size n.

Theoretical Analysis
• Uses a high-level description of the algorithm instead of an implementation.
• Characterizes running time as a function of the input size, n.
• Considers all possible inputs.
• Allows us to evaluate the speed of an algorithm independent of the hardware/software environment.

Primitive Operations
Examples:
• Evaluating an expression
• Assigning a value to a variable
• Indexing into an array
• Calling a method
• Returning from a method
To do theoretical analysis, we consider that the primitive operations take a constant amount of time.

Counting Primitive Operations
By inspecting the code, we can determine the maximum number of primitive operations executed by an algorithm, as a function of the input size.

Computing running time

First version:                 Cost
    arr[0] = 0;                c1
    arr[1] = 0;                c1
    arr[2] = 0;                c1
    ...
    arr[N-1] = 0;              c1
    -----------
    Total: c1 + c1 + ... + c1 = c1 × N

Second version:                Cost
    for (i = 0; i < N; i++)    c2
        arr[i] = 0;            c1
    -------------
    Total: (N+1) × c2 + N × c1 = (c2 + c1) × N + c2

Both versions zero an N-element array, and both totals grow linearly in N.

Note that algorithm arrayMax executes 7n − 2 primitive operations in the worst case.
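The notes quote arrayMax's 7n − 2 count without showing the method itself; the following is one plausible reconstruction of both the code and the per-line counts (the method body and the exact breakdown are assumptions):

    static int arrayMax(int[] A, int n) {
        int currentMax = A[0];           // 2 ops (index + assign)
        for (int i = 1; i < n; i++) {    // 1 op to init i, n comparisons, 2(n-1) ops for i++
            if (A[i] > currentMax)       // 2(n-1) ops (index + compare)
                currentMax = A[i];       // 2(n-1) ops, executed every time in the worst case
        }
        return currentMax;               // 1 op
    }
    // Worst-case total: 2 + 1 + n + 2(n-1) + 2(n-1) + 2(n-1) + 1 = 7n - 2.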
The concepts of worst-case, average-case, and best-case complexity will be clarified by a simple example: sequential search of an array.

Write an algorithm to find the index of a given key in an array of n elements, and find the complexity of your algorithm.
Input:
    a: an array to search in (sorted or unsorted)
    n: the array size (input size)
    key: the value to search for
Operation to count: the comparison between the key and an array element.
Time complexity:
    The best case is 1 comparison, and it happens if the key equals the 1st array element.
    The worst case is n comparisons, and it happens if the key equals the last array element, or if the key is not in the array.
    The average case (if the key is equally likely to match any of the n positions) is about (n + 1)/2 comparisons.
Space complexity: n (the array itself).
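Only a fragment of the algorithm survives in the notes; here is a minimal Java sketch consistent with the input description above (method and variable names assumed):

    // Returns the index of key in a[0..n-1], or -1 if the key is not present.
    static int sequentialSearch(int[] a, int n, int key) {
        for (int i = 0; i < n; i++) {
            if (a[i] == key)        // the comparison we count
                return i;           // found: best case is 1 comparison
        }
        return -1;                  // not found: n comparisons were made
    }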
Motivation
Suppose algorithm A has running time g(n) and algorithm B has running time f(n), where f grows more slowly than g. It is easy to verify (numerically, graphically, or otherwise) that for n large enough (n >> 1), f(n) will be smaller than g(n). Hence, algorithm B is more efficient (faster) than algorithm A for large values of n. This is true regardless of the actual values of the constants.

Order of magnitude (asymptotic) notation: O, Θ, Ω, and o (big O, big theta, big omega, little o)

Definition O
Let f(n) and g(n) be two functions of the integer n. We say that f(n) is O(g(n)) if there exist a positive constant c and a positive integer n0 such that
    f(n) ≤ c·g(n) for all n ≥ n0.
This means that the order of the function f(n) is ≤ that of g(n).

Equivalent definition
f(n) is O(g(n)) if
    lim (n→∞) f(n)/g(n) = c ≠ ∞   (i.e., c = 0 or c is a positive constant).
• This means that the order of the function f(n) is ≤ that of g(n) (i.e., g(n) grows at least as fast as f(n)).
• This means an algorithm whose running time is f(n) is, for large n and up to constant factors, at least as fast as one whose running time is g(n).

Example
a) Show that 30n + 8 is O(n).
Take c = 31 and n0 = 8. Assume n > n0 = 8. Then cn = 31n = 30n + n > 30n + 8, so 30n + 8 < cn.
Note that 30n + 8 isn't less than n anywhere (n > 0), and it isn't even less than 31n everywhere. But it is less than 31n everywhere to the right of n = 8.
b) Show that n² + 1 is O(n²).
Take c = 2 and n0 = 1. For all n ≥ 1, n² + 1 ≤ n² + n² = 2n².
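A quick numeric check of part a), illustrative only:

    // Verify that 30n + 8 never exceeds 31n from n0 = 8 onward.
    for (int n = 8; n <= 12; n++)
        System.out.println(n + ": 30n+8 = " + (30 * n + 8) + "  <=  31n = " + (31 * n));
    // Prints 248 <= 248, 278 <= 279, 308 <= 310, ... exactly as the proof promises.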
Some useful mathematical facts
❖ For any constant c > 0: O(cf) = O(f + c) = O(f − c) = O(f).

You can easily verify that the following statements are true:
6n + 5 is O(n)
3n² + 2n is O(n²)
10⁶ n² is O(n²)
log n is O(n)
log log n is O(log n)
2ⁿ is not O(n¹⁰⁰⁰)
Example: The following are some of the common functions, listed in increasing order of magnitude:
1 (constant), log n, n, n log n, n², n³, 2ⁿ, n!

Big O Rules
➢ If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e.:
1) Drop lower-order terms.
2) Drop constant factors.
3) Use the smallest possible class of functions:
   say "2n is O(n)" instead of "2n is O(n²)".
4) Use the simplest expression of the class:
   say "3n + 5 is O(n)" instead of "3n + 5 is O(3n)".

Some examples
• We say that n⁴ + 100n² + 10n + 50 is of the order of n⁴, or O(n⁴).
Relatives of Big O

Definition Θ
• If f ∈ O(g) and g ∈ O(f), then we say "g and f are of the same order" or "f is (exactly) order g", and write f ∈ Θ(g).
This means that the order of the function f(n) = that of g(n).

Definition Ω
f(n) is Ω(g(n)) if there exist a positive constant c and a positive integer n0 such that f(n) ≥ c·g(n) for all n ≥ n0.
This means that the order of the function f(n) ≥ that of g(n).

Definition o
f(n) is o(g(n)) if lim (n→∞) f(n)/g(n) = 0.
This means that the order of the function f(n) < that of g(n).
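The three relatives can be summarized by the same limit test used in the equivalent definition of big O above (a compact sketch, assuming the limit exists):

    \[
    \lim_{n\to\infty} \frac{f(n)}{g(n)} =
    \begin{cases}
    0 & \Rightarrow\ f \in o(g) \subseteq O(g) \\
    c,\ 0 < c < \infty & \Rightarrow\ f \in \Theta(g) \\
    \infty & \Rightarrow\ f \in \Omega(g),\ f \notin O(g)
    \end{cases}
    \]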
Asymptotic Analysis of an Algorithm
The asymptotic analysis of an algorithm determines the running time in big-Oh notation. To perform the asymptotic analysis, we find the worst-case number of primitive operations executed as a function of the input size, and we express this function with big-Oh notation.
Note that algorithm arrayMax executes 7n − 2 primitive operations in the worst case; hence arrayMax runs in O(n) time.
Since constant factors and lower-order terms are eventually dropped anyhow, we can ignore them when counting primitive operations.

Running time of various statements
Example 1
    i = 0;
    while (i < N) {
        X = X + Y;              // O(1)
        result = mystery(X);    // O(N), just an example...
        i++;                    // O(1)
    }
• The body of the while loop: O(N)
• The loop is executed: N times
Total: N × O(N) = O(N²)
Example 2
    if (i < j)
        for (i = 0; i < N; i++)
            X = X + i;
    else
        X = 0;
Total: Max(O(N), O(1)) = O(N)
Example 3
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            print "hello"
Total: O(n) × O(n) = O(n²)
EX 01: Write pseudocode and draw a flowchart to find the sum of two numbers entered by the user, and do asymptotic analysis of this algorithm.
Pseudocode:
    Start
    Declare variables num1, num2 and sum.
    Read the values of num1 and num2.
    sum ← num1 + num2
    Display sum
    End
Every step is a constant number of primitive operations, so the algorithm is O(1).

EX 02: Write pseudocode and draw a flowchart to find the area and perimeter of a rectangle, and do asymptotic analysis of this algorithm.
Pseudocode:
    Start
    Read the values of L and W.
    A = L * W
    P = 2 * (L + W)
    Display A and P
    End
Again, the number of operations does not depend on the input values, so the algorithm is O(1).

EX 03: Write pseudocode and draw a flowchart to find the maximum of two numbers, and do asymptotic analysis of this algorithm.
Pseudocode:
    Start
    Declare variables num1 and num2.
    Read the values of num1 and num2.
    if num1 > num2 then
        print "num1 is maximum"
    else
        print "num2 is maximum"
    End
One comparison and one print: O(1).

EX 04: Write pseudocode and draw a flowchart that calculates the mathematical formula
    y = x²   if x ≥ 0
    y = −x   if x < 0
and do asymptotic analysis of this algorithm.
Pseudocode:
    Start
    Read the value of x.
    if x ≥ 0 then
        y ← x * x
    else
        y ← −x
    Display y
    End
One comparison and one arithmetic operation: O(1).

(The flowcharts use the standard symbols described earlier: ovals for Start/End, rectangles for processes, diamonds for decisions.)
Analysis of Recursive Algorithms
Example: Write the recurrence relation for the following fibonacci method (the method header and base case are reconstructed from the description below):

    long fibonacci (int n) {
        if (n == 1 || n == 2)
            return 1;
        else
            return fibonacci(n - 1) + fibonacci(n - 2);
    }

➢ The base case is reached when n == 1 or n == 2. The method performs one comparison and one return statement. Therefore each of T(1) and T(2) is some constant c.
➢ When n > 2, the method performs TWO recursive calls, one with the parameter n − 1, another with parameter n − 2, and some constant number of basic operations.
Hence, the recurrence relation is:
    T(n) = c                        if n = 1 or n = 2
    T(n) = T(n − 1) + T(n − 2) + b  if n > 2
Example 4: Write the recurrence relation for the following method:

    long power (long x, long n) {
        if (n == 0)
            return 1;
        else if (n == 1)
            return x;
        else if ((n % 2) == 0)
            return power(x, n/2) * power(x, n/2);
        else
            return x * power(x, n/2) * power(x, n/2);
    }

The base case is reached when n == 0 or n == 1. The method performs one comparison and one return statement. Therefore each of T(0) and T(1) is some constant c.
At every step the problem size reduces to half. When the power is an odd number, an additional multiplication is involved. To work out the time complexity, let us consider the worst case, that is, we assume that at every step the additional multiplication is needed. The method then performs two recursive calls of size n/2, plus some constant number b of basic operations (seven in the odd-power case).
Hence, the recurrence relation is:
    T(n) = c              if n = 0 or n = 1
    T(n) = 2T(n/2) + b    if n > 1
(Using the simplified Master Theorem introduced later: a = 2, b = 2, c = 0 → a > b^c, so T(n) ∈ O(n).)

There are five methods to solve recurrence relations that represent the running time of recursive methods:
➢ Iteration method (unrolling and summing)
➢ Substitution method (guess the solution and verify by induction)
➢ Recursion tree method
➢ Master theorem (Master method)
➢ Using generating functions or characteristic equations
In this course, we will use the Iteration method and a simplified Master theorem.

Solving Recurrence Relations - Iteration Method
Steps:
➢ Expand the recurrence.
➢ Express the expansion as a summation by plugging the recurrence back into itself until you see a pattern.
➢ Evaluate the summation.
In evaluating the summation, one or more of the following summation formulae may be used:
Arithmetic series:
    1 + 2 + 3 + ... + n = Σᵢ₌₁ⁿ i = n(n + 1)/2
Special cases of geometric series:
    1 + 2 + 4 + ... + 2ⁿ = Σᵢ₌₀ⁿ 2ⁱ = 2ⁿ⁺¹ − 1
    1 + x + x² + ... + xⁿ = Σᵢ₌₀ⁿ xⁱ = (xⁿ⁺¹ − 1)/(x − 1), for x ≠ 1
Analysis of Recursive Factorial Method
Example 1: Form and solve the recurrence relation for the running time of the factorial method, and hence determine its big-O complexity:

    long factorial (int n) {
        if (n == 0)
            return 1;
        else
            return n * factorial(n - 1);
    }

T(0) = c                           (1)
T(n) = b + T(n - 1)                (2)
     = b + b + T(n - 2)            by substituting T(n - 1) in (2)
     = b + b + b + T(n - 3)        by substituting T(n - 2) in (2)
     ...
     = kb + T(n - k)
The base case is reached when n - k = 0 → k = n; we then have:
T(n) = nb + T(n - n)
     = bn + T(0)
     = bn + c
Therefore the method factorial is O(n).

Analysis of Recursive Binary Search

    public int binarySearch (int target, int[] array, int low, int high) {
        if (low > high)
            return -1;
        else {
            int middle = (low + high)/2;
            if (array[middle] == target)
                return middle;
            else if (array[middle] < target)
                return binarySearch(target, array, middle + 1, high);
            else
                return binarySearch(target, array, low, middle - 1);
        }
    }

The recurrence relation for the running time of the method is:
T(1) = a             if n = 1 (one-element array)
T(n) = T(n/2) + b    if n > 1
Expanding:
T(1) = a                                        (1)
T(n) = T(n/2) + b                               (2)
     = [T(n/2²) + b] + b = T(n/2²) + 2b         by substituting T(n/2) in (2)
     = [T(n/2³) + b] + 2b = T(n/2³) + 3b        by substituting T(n/2²) in (2)
     = ...
     = T(n/2^k) + kb
The base case is reached when n/2^k = 1 → n = 2^k → k = log₂ n; we then have:
T(n) = T(1) + b log₂ n
     = a + b log₂ n
Therefore, recursive binary search is O(log n).
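A small usage sketch of the binarySearch method above (the driver code is assumed, not from the notes):

    int[] a = {3, 5, 7, 8, 9, 12, 15};
    int idx = binarySearch(9, a, 0, a.length - 1);
    // Probes: middle = 3 (a[3] = 8 < 9), middle = 5 (a[5] = 12 > 9),
    // middle = 4 (a[4] = 9): returns 4.
    // Three comparisons for n = 7 elements, consistent with O(log n).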
Analysis of Recursive Selection Sort

    private static void selectionSort(int[] x, int n) {
        int minPos;
        if (n > 0) {
            minPos = findMinPos(x, n);
            swap(x, minPos, n);
            selectionSort(x, n - 1);
        }
    }

    private static int findMinPos(int[] x, int n) {
        int k = n;
        for (int i = 0; i < n; i++)
            if (x[i] < x[k]) k = i;
        return k;
    }

    private static void swap(int[] x, int minPos, int n) {
        int temp = x[n]; x[n] = x[minPos]; x[minPos] = temp;
    }

findMinPos is O(n), and swap is O(1); therefore the recurrence relation for the running time of the selectionSort method is:
T(0) = a                           (1)
T(n) = T(n − 1) + n + c  if n > 0  (2)
     = [T(n − 2) + (n − 1) + c] + n + c = T(n − 2) + (n − 1) + n + 2c                          by substituting T(n − 1) in (2)
     = [T(n − 3) + (n − 2) + c] + (n − 1) + n + 2c = T(n − 3) + (n − 2) + (n − 1) + n + 3c     by substituting T(n − 2) in (2)
     = T(n − 4) + (n − 3) + (n − 2) + (n − 1) + n + 4c
     = ...
     = T(n − k) + (n − k + 1) + (n − k + 2) + ... + n + kc
The base case is reached when n − k = 0 → k = n; we then have:
T(n) = T(0) + 1 + 2 + 3 + ... + (n − 2) + (n − 1) + n + nc
     = a + nc + Σᵢ₌₁ⁿ i
     = a + nc + n(n + 1)/2
     = n²/2 + (c + 1/2)n + a
Therefore the method selectionSort is O(n²).

Analysis of Recursive Towers of Hanoi Algorithm
Problem Statement
Move all the disks stacked on the first tower over to the last tower, using a helper tower in the middle. While moving the disks, certain rules must be followed. These are:
1. Only one disk can be moved at a time.
2. A larger disk can not be placed on a smaller disk.

[Figure: Tower of Hanoi default setup]

We solve this problem using simple recursion. To get the three disks over to the final tower you need to:
• Take disks number 1 and 2 to tower B.
• Move disk number 3 to tower C.
• Take disks number 1 and 2 from B to C.
Of course, you can't do it in single steps like this because of the constraints. However, we can use this idea to create a function that does it recursively.

    public static void hanoi(int n, char from, char to, char temp) {
        if (n == 1)
            System.out.println(from + " --------> " + to);
        else {
            hanoi(n - 1, from, temp, to);
            System.out.println(from + " --------> " + to);
            hanoi(n - 1, temp, to, from);
        }
    }
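A minimal driver for the hanoi method above (assumed; place it in the same class):

    public static void main(String[] args) {
        hanoi(3, 'A', 'C', 'B');    // move 3 disks from tower A to tower C via B
    }
    /* Output (2^3 - 1 = 7 moves):
       A --------> C
       A --------> B
       C --------> B
       A --------> C
       B --------> A
       B --------> C
       A --------> C
    */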
The recurrence relation for the running time of the method hanoi is:
T(n) = a                if n = 1
T(n) = 2T(n − 1) + b    if n > 1
Expanding:
T(n) = 2[2T(n − 2) + b] + b = 2² T(n − 2) + 3b       by substituting T(n − 1)
     = 2²[2T(n − 3) + b] + 3b = 2³ T(n − 3) + 7b     by substituting T(n − 2)
     = 2³[2T(n − 4) + b] + 7b = 2⁴ T(n − 4) + 15b    by substituting T(n − 3)
     = ...
     = 2^k T(n − k) + (2^k − 1)b
The base case is reached when n − k = 1 → k = n − 1; we then have:
T(n) = 2^(n−1) a + (2^(n−1) − 1)b = 2^(n−1)(a + b) − b
Therefore, the method hanoi is O(2ⁿ).

Master Theorem (Master Method)
The master method provides an estimate of the growth rate of the solution for recurrences of the form:
    T(n) = aT(n/b) + f(n)
where a ≥ 1, b > 1, and the overhead function f(n) > 0.
If T(n) is interpreted as the number of steps needed to execute an algorithm for an input of size n, this recurrence corresponds to a "divide and conquer" algorithm, in which a problem of size n is divided into a subproblems of size n/b, where a and b are positive constants.

The Simplified Master Method for Solving Recurrences
• Consider recurrences of the form:
    T(1) = 1
    T(n) = aT(n/b) + kn^c
for constants a ≥ 1, b > 1, c ≥ 0, k ≥ 1. Then:
1. T(n) ∈ O(n^(log_b a))   if a > b^c
2. T(n) ∈ O(n^c log n)     if a = b^c
3. T(n) ∈ O(n^c)           if a < b^c
Note: Since k does not affect the result, it is sometimes not included in the above recurrence.

Example1: Find the big-Oh running time of the following recurrence. Use the Master Theorem:
    T(1) = 1
    T(n) = 3T(n/4) + k·n^(1/2)
Solution: a = 3, b = 4, c = 1/2 → a > b^c → Case 1
Hence T(n) ∈ O(n^(log₄ 3)).

Example2: Find the big-Oh running time of the following recurrence. Use the Master Theorem:
    T(1) = 1
    T(n) = 2T(n/2) + n
Solution: a = 2, b = 2, c = 1 → a = b^c → Case 2
Hence T(n) is O(n log n).

Example3: Find the big-Oh running time of the following recurrence. Use the Master Theorem:
    T(1) = 1
    T(n) = 4T(n/2) + kn³ + h,  where k ≥ 1 and h ≥ 1
Solution: a = 4, b = 2, c = 3 → a < b^c → Case 3
Hence T(n) is O(n³).

Example4: Find the big-Oh running time of the following recurrence. Use the Master Theorem:
    T(0) = 1              if n = 0
    T(n) = 5 + T(n − 1)   if n > 0
Here a = 1, k = 5, c = 0, b = 1.
Note that b is not > 1, so we can't use the Master Theorem. (By the iteration method, T(n) = 5n + 1, so T(n) is O(n).)
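The three cases reduce to comparing a with b^c, which is easy to mechanize. A toy Java helper (not part of the lecture; names and output format are assumptions) that classifies the four examples above:

    public class MasterTheoremDemo {
        // Applies the simplified Master Theorem to T(n) = a*T(n/b) + k*n^c.
        static String solve(double a, double b, double c) {
            if (a < 1 || b <= 1 || c < 0)
                return "simplified Master Theorem does not apply";
            double bc = Math.pow(b, c);
            if (a > bc)  return "O(n^" + (Math.log(a) / Math.log(b)) + ")"; // Case 1: O(n^(log_b a))
            if (a == bc) return "O(n^" + c + " log n)";                     // Case 2
            return "O(n^" + c + ")";                                        // Case 3
        }

        public static void main(String[] args) {
            System.out.println(solve(3, 4, 0.5)); // Example1 -> Case 1: O(n^0.79...) = O(n^(log_4 3))
            System.out.println(solve(2, 2, 1.0)); // Example2 -> Case 2: O(n^1.0 log n) = O(n log n)
            System.out.println(solve(4, 2, 3.0)); // Example3 -> Case 3: O(n^3.0)
            System.out.println(solve(1, 1, 0.0)); // Example4 -> does not apply (b is not > 1)
        }
    }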
Example5: Analysis of Recursive Binary Search, using the Master Theorem (the binarySearch method is shown above):
    T(1) = 1             if n = 1 (one-element array)
    T(n) = T(n/2) + 4    if n > 1
Solution: a = 1, b = 2, c = 0 → a = b^c → Case 2
Hence T(n) ∈ O(n^c log n) = O(log n).
Divide-and-conquer algorithm:
• divide the problem into a number of subproblems
• conquer the subproblems (solve them)
• combine the subproblem solutions to get the solution to the original problem

Example: Merge Sort
• divide the n-element sequence to be sorted into two n/2-element sequences.
• conquer the subproblems recursively using merge sort.
• combine the resulting two sorted n/2-element sequences by merging.

Another example is binary search, which finds an element in a sorted array:
1. Divide: Check the middle element.
2. Conquer: Recursively search 1 subarray.
3. Combine: Trivial.
Example 4: Find 9 in the array 3 5 7 8 9 12 15
Steps:
➢ 3 5 7 8 9 12 15    (middle element is 8; 9 > 8, so search the right subarray)
➢ 9 12 15            (middle element is 12; 9 < 12, so search the left subarray)
➢ 9                  (middle element is 9; the key is found)
The recurrence relation for the running time of the method is T(n) = T(n/2) + b, which solves to O(log n), as shown earlier.

Analysis of Recursive Merge Sort
Divide:
• [38, 27, 43, 10] is divided into [38, 27] and [43, 10].
• [38, 27] is divided into [38] and [27].
• [43, 10] is divided into [43] and [10].
Conquer:
• [38] is already sorted.
• [27] is already sorted.
• [43] is already sorted.
• [10] is already sorted.
Merge:
• Merge [38] and [27] to get [27, 38].
• Merge [43] and [10] to get [10, 43].
• Merge [27, 38] and [10, 43] to get the final sorted list [10, 27, 38, 43].
Therefore, the sorted list is [10, 27, 38, 43].
    void mergeSort(int arr[], int l, int r) {
        if (l < r) {
            int m = l + (r - l) / 2;    // Find the middle point
            mergeSort(arr, l, m);       // Sort first half
            mergeSort(arr, m + 1, r);   // Sort second half
            merge(arr, l, m, r);        // Merge the sorted halves
        }
    }

    // Merges two subarrays of arr[].
    // First subarray is arr[l..m], second subarray is arr[m+1..r].
    void merge(int arr[], int l, int m, int r) {
        // Find sizes of the two subarrays to be merged
        int n1 = m - l + 1;
        int n2 = r - m;
        // Create temp arrays
        int L[] = new int[n1];
        int R[] = new int[n2];
        // Copy data to temp arrays
        for (int i = 0; i < n1; ++i)
            L[i] = arr[l + i];
        for (int j = 0; j < n2; ++j)
            R[j] = arr[m + 1 + j];
        // Merge the temp arrays back into arr[l..r]
        int i = 0, j = 0;   // Initial indices of first and second subarrays
        int k = l;          // Initial index of merged subarray (starts at l, not 0)
        while (i < n1 && j < n2) {
            if (L[i] <= R[j]) {
                arr[k] = L[i];
                i++;
            } else {
                arr[k] = R[j];
                j++;
            }
            k++;
        }
        // Copy remaining elements of L[], if any
        while (i < n1) {
            arr[k] = L[i];
            i++; k++;
        }
        // Copy remaining elements of R[], if any
        while (j < n2) {
            arr[k] = R[j];
            j++; k++;
        }
    }
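A minimal driver (assumed; make mergeSort and merge static methods of a class) that sorts the example array from this section:

    public static void main(String[] args) {
        int[] arr = {38, 27, 43, 10};
        mergeSort(arr, 0, arr.length - 1);
        System.out.println(java.util.Arrays.toString(arr)); // prints [10, 27, 38, 43]
    }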
Recurrence Relation of Merge Sort:
The recurrence relation of merge sort is:
    T(1) = 1                 if n = 1
    T(n) = 2T(n/2) + f(n)    if n > 1
• T(n) represents the total time taken by the algorithm to sort an array of size n.
• 2T(n/2) represents the time taken by the algorithm to recursively sort the two halves of the array. Since each half has n/2 elements, we have two recursive calls with input size n/2.
• f(n) represents the time taken to merge the two sorted halves, which is done in O(n).
So the recurrence relation of merge sort is T(n) = 2T(n/2) + kn.
Using the master method: a = 2, b = 2, c = 1 → a = b^c → Case 2. Hence T(n) is O(n log n).
• Time Complexity:
    o Best Case: O(n log n), when the array is already sorted or nearly sorted.
    o Average Case: O(n log n), when the array is randomly ordered.
    o Worst Case: O(n log n), when the array is sorted in reverse order.