Analysis of Algorithms

An algorithm is a finite set of precise instructions for solving a problem, which can be expressed in various forms such as pseudocode or flowcharts. The document discusses the characteristics of algorithms, the importance of analyzing their time and space complexity, and methods for comparing their efficiency. It also covers concepts like worst-case, average-case, and best-case complexity, as well as asymptotic analysis using big O notation.

Algorithm

An algorithm is a set of rules for solving a problem or performing a task. More formally, an algorithm is a finite set of precise instructions for performing a computation or for solving a problem.

Characteristics:

• Should have a clear starting and ending point.
• Clear, unambiguous, finite steps with a defined input and output.
• Algorithms can be expressed in natural language, pseudocode, or programming languages.

Example: find the maximum number in a list.

FUNCTION findMax(list):
    max ← list[0]
    FOR each num IN list:
        IF num > max THEN
            max ← num
    RETURN max

Flowchart

A flowchart is a visual representation of an algorithm using shapes and arrows to depict the flow of control.

Characteristics:

• It uses standardized symbols (like ovals for start/end, rectangles for processes, diamonds for decisions) to show the steps and the order in which they occur.
• Flowcharts are useful for visualizing complex processes.

Example: find the maximum number in a list.

1. Start
2. Initialize max to the first element of the list.
3. For each element num in the list:
   a. If num is greater than max, update max to num.
4. End
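The same algorithm in Java (a sketch; the method name findMax and the int array type are choices made here for illustration, not part of the notes):

public static int findMax(int[] list) {
    int max = list[0];          // max <- list[0]
    for (int num : list) {      // FOR each num IN list
        if (num > max)          // IF num > max THEN
            max = num;          //     max <- num
    }
    return max;                 // RETURN max
}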

Pseudocode

Pseudocode is a simplified, high-level representation of an algorithm that uses a mix of natural language and programming syntax.

Characteristics:

• It is not bound by the syntax of any specific programming language, making it easier to read and write.
• It is easy to understand.
• It focuses on logic rather than syntax.
Summary

• Algorithm: A clear set of steps to solve a problem.
• Pseudocode: A text-based version of an algorithm that is easier to read and understand.
• Flowchart: A visual representation that illustrates the steps and flow of the algorithm.

Suppose, for example, we have four algorithms to solve a problem P: A1, A2, A3, and A4, with execution times T1, T2, T3, and T4. Which one will you choose? Of course, we will choose the one with the shortest execution time. So we need to do analysis of algorithms, such that the analysis is independent of machine time, programming style, etc.

How do we compare algorithms?

1. Compare execution times?

Experimental Studies

Write a program implementing the algorithm. Run the program with inputs of varying size and composition. Use a method like System.currentTimeMillis to get an accurate measure of the actual running time. Plot the results.
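A minimal sketch of such an experiment in Java, with a stand-in algorithm(int[] input) method for whatever algorithm is under test:

import java.util.Arrays;
import java.util.Random;

public class TimingExperiment {
    // Stand-in for the algorithm being measured.
    static void algorithm(int[] input) {
        Arrays.sort(input);
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        // Inputs of varying size; the composition here is uniformly random.
        for (int n = 100_000; n <= 1_600_000; n *= 2) {
            int[] input = rnd.ints(n).toArray();
            long start = System.currentTimeMillis();
            algorithm(input);
            long elapsed = System.currentTimeMillis() - start;
            System.out.println("n = " + n + ", time = " + elapsed + " ms");
        }
    }
}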

Not good:

a) It is necessary to implement the algorithm, which may be difficult.
b) Results may not be indicative of the running time on other inputs not included in the experiment.
c) To compare two algorithms, the same hardware and software environments and the same inputs must be used.

2. Count the number of statements executed?

Not good: the number of statements varies with the programming language as well as with the style of the individual programmer.

What is the goal of analysis of algorithms?

To compare different algorithms (for efficiency), we look at their time and space complexity. We compare algorithms mainly in terms of running time, but also in terms of other factors (e.g., memory requirements, programmer's effort, etc.).

What do we mean by running time analysis?

Determine how running time increases as the size of the problem increases.

Time complexity

The time complexity of an algorithm is defined as the number of operations done by the algorithm to solve a problem, as a function of the problem size.

Space complexity

The space complexity of an algorithm is defined as the amount of storage used by the algorithm to solve a problem, as a function of the problem size.

Notice that the term complexity here has nothing to do with an algorithm being simple or complicated.
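As an added illustration of the two measures, consider summing an array: the operation count grows with n, while the extra storage does not.

// Time complexity: n additions, so the operation count grows linearly with n.
// Space complexity: one extra variable, so the extra storage is constant.
public static long sum(int[] a) {
    long total = 0;             // O(1) extra space
    for (int i = 0; i < a.length; i++)
        total += a[i];          // executed n times
    return total;
}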
We consider what is called worst-case, average-case, and best-case complexity.

Worst-case complexity

It is the maximum number of operations done by an algorithm to solve a problem, as a function of the problem size, for all inputs of size n.

Average-case complexity

It is the average number of operations done by an algorithm to solve a problem, as a function of the problem size, for all inputs of size n.

Best-case complexity

It is the minimum number of operations done by an algorithm to solve a problem, as a function of the problem size, for all inputs of size n.

Theoretical Analysis

• Uses a high-level description of the algorithm instead of an implementation.
• Characterizes running time as a function of the input size, n.
• Considers all possible inputs.
• Theoretical analysis allows us to evaluate the speed of an algorithm independent of the hardware/software environment.

Primitive Operations

Primitive operations are the basic computations performed by an algorithm.

Examples:

• Evaluating an expression
• Assigning a value to a variable
• Indexing into an array
• Calling a method
• Returning from a method

With these, we can determine the maximum number of primitive operations executed by an algorithm as a function of the input size.

Computing running time

Algorithm 1                   Cost    Algorithm 2                 Cost
arr[0] = 0;                   c1      for (i = 0; i < N; i++)     c2
arr[1] = 0;                   c1          arr[i] = 0;             c1
arr[2] = 0;                   c1
...
arr[N-1] = 0;                 c1
-----------                           -------------
c1 + c1 + ... + c1 = c1 × N           (N+1) × c2 + N × c1
                                      = (c2 + c1) × N + c2

Example: Algorithm arrayMax

Note that algorithm arrayMax executes 7n − 2 primitive operations in the worst case.
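A sketch of algorithm arrayMax in Java; the per-line operation counts in the comments are one common accounting that totals 7n − 2 in the worst case:

public static int arrayMax(int[] A) {
    int currentMax = A[0];                // 2 ops: one index, one assignment
    for (int i = 1; i < A.length; i++) {  // 1 op to set i = 1; the guard runs n times
        if (A[i] > currentMax)            // 2 ops per iteration: index + comparison
            currentMax = A[i];            // 2 ops per iteration in the worst case
    }                                     // incrementing i costs 2 ops per iteration
    return currentMax;                    // 1 op
}

Total in the worst case: 2 + 1 + n + 6(n − 1) + 1 = 7n − 2.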

To do theoretical analysis, we consider that the primitive operations take a constant amount of time.

The concepts of worst-case, average-case, and best-case complexity will be clarified by a simple example: sequential search of an array.

Counting Primitive Operations

Write an algorithm to find the index of a given key in an array of n elements and find the complexity of your algorithm.

Input:
    a: an array to search in (sorted or unsorted)
    n: the array size (input size)
    key: the element to search for
Output:
    index: the location of the key in the array, -1 if not found

We use the well-known sequential search algorithm:

    found = false
    i = 1
    while (i <= n) and (not found)
        if key = a[i]
            found = true
        else
            i++
    if found
        index = i
    else
        index = -1

Input size: n
Operation to count: the comparison between the key and an array element.
Space complexity: n

Time complexity:

The best case is 1 comparison, and it happens if the key equals the 1st array element.

The worst case is n comparisons, and it happens if the key equals the last array element, or if the key is not found.

To find the average complexity, we consider the cases where the key is found at position 1, 2, ..., n. Assuming each position is equally likely,

    average number of comparisons = (1 + 2 + ... + n) / n = (n + 1) / 2

Remark: We usually use the worst case as a measure of the complexity of an algorithm. In worst-case analysis, we calculate an upper bound on the running time of the algorithm, so we must know the case that causes the maximum number of operations to be executed. For linear search, the worst case happens when the element to be searched for (x) is not present in the array.
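The same sequential search in Java (a sketch using 0-based indexing, so the returned index runs from 0 to n − 1 rather than 1 to n):

public static int sequentialSearch(int[] a, int key) {
    for (int i = 0; i < a.length; i++) {
        if (a[i] == key)   // the counted operation: key vs. array element
            return i;      // best case: 1 comparison (key at the first position)
    }
    return -1;             // worst case: n comparisons (key last or not found)
}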
Motivation

• Suppose program A takes f(n) = 30n + 8 microseconds to process any n records, while program B takes g(n) = n² + 1 microseconds to process the n records. Which program would you choose, knowing you'll want to support millions of users?

It is easy to verify (numerically, graphically, or otherwise) that for n large enough (n >> 1), f(n) will be smaller than g(n). Hence, program A is more efficient (faster) than program B for large values of n. This is true regardless of the actual values of the constants.

Consider the example of buying elephants and fish:

Cost: cost_of_elephants + cost_of_fish
Cost ~ cost_of_elephants (approximation)
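A quick numerical check of the claim (a sketch; it prints the first n at which program A's cost drops below program B's, which happens at n = 31):

public class Crossover {
    public static void main(String[] args) {
        for (long n = 1; n <= 40; n++) {
            long f = 30 * n + 8;   // program A
            long g = n * n + 1;    // program B
            if (f < g) {
                System.out.println("f(n) < g(n) from n = " + n + " onward");
                break;
            }
        }
    }
}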


Asymptotic Analysis of an algorithm: The asymptotic analysis of an algorithm determines the running time in big-Oh notation. To perform the asymptotic analysis, we find the worst-case number of primitive operations executed as a function of the input size, and we express this function with big-Oh notation.

Order of magnitude (asymptotic) notation

(O, Θ, Ω, and o: big O, big theta, big omega, small o)

Definition O

Let f(n) and g(n) be two functions of the integer n. We say that f(n) is O(g(n)) if there exist a positive constant c and a positive integer n0 such that:

    f(n) ≤ c·g(n)   ∀ n ≥ n0

This means that the order of the function f(n) ≤ that of g(n).
Equivalent Definition

f(n) is O(g(n)) if

    lim (n→∞) f(n)/g(n) = c ≠ ∞   (i.e., c = 0 or c = a constant)

• This means that the order of the function f(n) ≤ that of g(n) (i.e., the function g(n) grows at least as fast as the function f(n)).
• This means that, for large n, an algorithm with running time f(n) will be no slower than an algorithm with running time g(n) (up to the constant factor c).

Example

a) Show that 30n + 8 is O(n).

∃ c, n0: 30n + 8 ≤ cn, ∀ n > n0. Let c = 31, n0 = 8.
Assume n > n0 = 8. Then cn = 31n = 30n + n > 30n + 8, so 30n + 8 < cn.

Note that 30n + 8 isn't less than n anywhere (n > 0), and it isn't even less than 31n everywhere. But it is less than 31n everywhere to the right of n = 8.
b) Show that n² + 1 is O(n²).

∃ c, n0: n² + 1 ≤ cn², ∀ n > n0. Let c = 2, n0 = 1.
Assume n > 1. Then cn² = 2n² = n² + n² > n² + 1, so n² + 1 < cn².

c) Show that 2n + 10 is O(n).

∃ c, n0: 2n + 10 ≤ cn, ∀ n > n0. Let c = 3, n0 = 10.
Assume n > n0 = 10. Then cn = 3n = 2n + n > 2n + 10, so 2n + 10 < cn.

d) The function n² is not O(n).

n² ≤ cn implies n ≤ c, and this inequality cannot be satisfied for all n since c must be a constant.

Some useful mathematical facts

1. L'Hôpital's Rule: if lim (n→∞) f(n) = lim (n→∞) g(n) = ∞, then lim (n→∞) f(n)/g(n) = lim (n→∞) f′(n)/g′(n).
2. Stirling's formula: n! ≈ √(2πn) (n/e)ⁿ.

For example, log log n is O(log n), and 2ⁿ is not O(n^1000).

Example: Show that (log n)² is o(n).

Applying L'Hôpital's Rule twice: lim (n→∞) (log n)²/n = lim (n→∞) 2 log n / n = lim (n→∞) 2/n = 0. Hence, (log n)² is o(n).

Example: Show that 2ⁿ is not O(nᵏ), where k is a positive integer constant, regardless of how large k may be.

Applying L'Hôpital's Rule k times: lim (n→∞) 2ⁿ/nᵏ = ∞, so no constant c can satisfy 2ⁿ ≤ c·nᵏ for all large n.

Useful Facts about Big O

❖ ∀ c > 0, O(cf) = O(f + c) = O(f − c) = O(f)
❖ If g ∈ O(f) and h ∈ O(f), then g + h ∈ O(f).
❖ If g ∈ O(f1) and h ∈ O(f2), then g + h ∈ O(f1 + f2) = O(max(f1, f2)) (very useful!)
❖ If g ∈ O(f1) and h ∈ O(f2), then gh ∈ O(f1·f2)
❖ f ∈ O(g) ∧ g ∈ O(h) → f ∈ O(h)
❖ The big-Oh notation gives an upper bound on the growth rate of a function.
❖ The statement "f(n) is O(g(n))" means that the growth rate of f(n) is no more than the growth rate of g(n).
❖ We can use the big-Oh notation to rank functions according to their growth rate.

You can easily verify that the following statements are true:

6n + 5 is O(n)
3n² + 2n is O(n²)
10⁶·n² is O(n²)
log n is O(n)
10 n log n + 100 n is O(n²)

Example: The following are some of the common functions, listed in increasing order of magnitude:

1, log n, n, n log n, n², n³, 2ⁿ, n!

Big O Rules

➢ If f(n) is a polynomial of degree d, then f(n) is O(nᵈ); i.e.:
1) Drop lower-order terms
2) Drop constant factors
3) Use the smallest possible class of functions:
   say 2n is O(n) instead of 2n is O(n²)
4) Use the simplest expression of the class:
   say 3n + 5 is O(n) instead of 3n + 5 is O(3n)

Some examples

• We say that n⁴ + 100n² + 10n + 50 is of the order of n⁴, or O(n⁴)
• We say that 10n³ + 2n² is O(n³)
• We say that n³ − n² is O(n³)
• We say that 10 is O(1)
• We say that 1273 is O(1)
Relatives of Big O

Definition Θ

• If f ∈ O(g) and g ∈ O(f), then we say "g and f are of the same order" or "f is (exactly) order g" and write f ∈ Θ(g).
• Another equivalent definition: ∃ c1, c2, k: c1·g(x) ≤ f(x) ≤ c2·g(x), ∀ x > k.
• Equivalently, f(n) is Θ(g(n)) if lim (n→∞) f(n)/g(n) = c, with 0 < c < ∞. This means that the order of the function f(n) = that of g(n).

Definition o

f(n) is o(g(n)) if f(n) < c·g(n) ∀ n ≥ n0, for every positive constant c. Equivalently, f(n) is o(g(n)) if lim (n→∞) f(n)/g(n) = 0. This means that the order of the function f(n) < that of g(n).

Example: show that ∑ (i = 1 to n) i is Θ(n²).

    ∑ (i = 1 to n) i = 1 + 2 + 3 + ... + (n − 1) + n = n(n + 1)/2

Since n(n + 1)/2 = n²/2 + n/2 and lim (n→∞) (n(n + 1)/2)/n² = 1/2, the sum is Θ(n²).

Definition Ω

f(n) is Ω(g(n)) if f(n) ≥ c·g(n) ∀ n ≥ n0. Equivalently, f(n) is Ω(g(n)) if lim (n→∞) f(n)/g(n) = c > 0 or = ∞. This means that the order of the function f(n) ≥ that of g(n).
Asymptotic analysis of an algorithm determines the running time in big-Oh notation: we find the worst-case number of primitive operations executed as a function of the input size, and we express this function with big-Oh notation.

Example: Algorithm arrayMax

Note that algorithm arrayMax executes 7n − 2 primitive operations in the worst case. We determine that algorithm arrayMax executes at most 7n − 2 primitive operations, and we say that algorithm arrayMax runs in O(n) time.

Since constant factors and lower-order terms are eventually dropped anyhow, we can ignore them when counting primitive operations.

Running time of various statements

Example 1

i = 0;
while (i < N) {
    X = X + Y;              // O(1)
    result = mystery(X);    // O(N), just an example...
    i++;                    // O(1)
}

• The body of the while loop: O(N)
• The loop is executed: N times

N × O(N) = O(N²)

Example 2

if (i < j)
    for (i = 0; i < N; i++)
        X = X + i;
else
    X = 0;

Max(O(N), O(1)) = O(N)

Example 3

for (i = 1; i ≤ n; i++)
    for (j = 1; j ≤ n; j++)
        print "hello"

O(n) × O(n) = O(n²)

Example 4

for (i = 1; i ≤ n; i++)
    for (j = 1; j ≤ i; j++)
        print "hello"

The inner loop runs i times for each i, so the total number of prints is ∑ (i = 1 to n) i = n(n + 1)/2, which is O(n²).

Exercises: compute the worst-case running time of the following two algorithms.

EX 01: Write pseudocode and draw a flowchart to find the sum of two numbers entered by the user, and do asymptotic analysis of this algorithm.

Pseudocode:
    Start
    Declare variables num1, num2 and sum.
    Read (or Enter, or Input) the values of num1 and num2.
    Add num1 and num2 and assign the result to sum. (sum = num1 + num2)
    Display (or Print, or Output) sum.
    End

Flowchart: Start → Declare variables num1, num2 and sum → Read values num1 and num2 → sum = num1 + num2 → Display sum → End

Every step executes once, regardless of the input values, so the algorithm is O(1).

EX 02: Write pseudocode and draw a flowchart to find the area and perimeter of a rectangle, and do asymptotic analysis of this algorithm.

Pseudocode:
    Start
    Declare variables W, L, A and P.
    Read the values of W and L.
    Calculate A = L * W.
    Calculate P = 2 * (L + W).
    Print the area and perimeter.
    End

Flowchart: Start → Declare variables W, L, A and P → Read values W and L → A = L * W; P = 2 * (L + W) → Display A and P → End

Again, a constant number of steps: O(1).
EX 03: Write pseudocode and draw a flowchart to find the maximum of two numbers, and do asymptotic analysis of this algorithm.

Pseudocode:
    Start
    Declare variables num1 and num2.
    Read the values of num1 and num2.
    if num1 > num2 then
        print "num1 is maximum"
    else
        print "num2 is maximum"
    endif
    End

Flowchart: Start → Declare num1 and num2 → Read num1 and num2 → num1 > num2? → print the larger → End

One comparison and one print: O(1).

EX 04: Write pseudocode and draw a flowchart to calculate the mathematical formula

    y = x²     if x ≥ 0
    y = −x²    if x < 0

and do asymptotic analysis of this algorithm.

Algorithm:
    Step 1: Start
    Step 2: Declare variables x and y.
    Step 3: Read the value of x.
    Step 4: Check if x >= 0:
        If yes, then y = x²
        If no, then y = −x²
    Step 5: Print y
    Step 6: End
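A small Java rendering of EX 04 (a sketch): one comparison and one multiplication regardless of the input, so the running time is O(1).

public static double y(double x) {
    if (x >= 0)
        return x * x;    // y = x^2  when x >= 0
    else
        return -x * x;   // y = -x^2 when x < 0
}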
Analysis of Recursive Algorithms

Do asymptotic analysis of the following algorithm:

long factorial (int n) {
    if (n == 0)
        return 1;
    else
        return n * factorial (n - 1);
}

What is a recurrence relation?

A recurrence relation, T(n), is a recursive function of an integer variable n. Like all recursive functions, it has one or more recursive cases and one or more base cases.

➢ The portion of the definition that does not contain T is called the base case of the recurrence relation; the portion that contains T is called the recurrent or recursive case.
➢ Recurrence relations are useful for expressing the running times (i.e., the number of basic operations executed) of recursive algorithms.
➢ The specific values of the constants such as a, b, and c (in the recurrences below) are important in determining the exact solution to the recurrence. Often, however, we are only concerned with finding an asymptotic upper bound on the solution. We call such a bound an asymptotic solution to the recurrence.

Forming Recurrence Relations

➢ For a given recursive method, the base case and the recursive case of its recurrence relation correspond directly to the base case and the recursive case of the method.
➢ In general, T(n) is usually a sum of various choices of T(...), the cost of the recursive subproblems, plus the cost of the work done outside the recursive calls:

    T(n) = aT(f(n)) + bT(g(n)) + ... + c(n)

where a and b are the numbers of subproblems, f(n) and g(n) are subproblem sizes, and c(n) is the cost of the work done outside the recursive calls. [Note: c(n) may be a constant.]

Example 1: Write the recurrence relation for the following method:

public void f (int n)
{
    if (n > 0)
    {
        System.out.println(n);
        f(n - 1);
    }
}

The base case is reached when n == 0. The method performs one comparison. Thus, the number of operations when n == 0, T(0), is a constant a.

When n > 0, the method performs two basic operations (comparison and print) and then calls itself, using ONE recursive call, with a parameter n - 1.

➢ Therefore, the recurrence relation is:

    T(0) = a             for some constant a
    T(n) = b + T(n - 1)  for some constant b

Example 2: Write the recurrence relation for the following method:

public int g(int n) {
    if (n == 1)
        return 2;
    else
        return 3 * g(n / 2) + g(n / 2) + 5;
}

➢ The base case is reached when n == 1. The method performs one comparison and one return statement. Therefore, T(1) is some constant c.
➢ When n > 1, the method performs TWO recursive calls, each with the parameter n / 2, and some constant number of basic operations.

Hence, the recurrence relation is:

    T(1) = c              for some constant c
    T(n) = b + 2T(n / 2)  for some constant b
Example 3: Write the recurrence relation for the following method:

long fibonacci (int n) { // Recursively calculates Fibonacci number
    if (n == 1 || n == 2)
        return 1;
    else
        return fibonacci(n - 1) + fibonacci(n - 2);
}

➢ The base case is reached when n == 1 or n == 2. The method performs one comparison and one return statement. Therefore, each of T(1) and T(2) is some constant c.
➢ When n > 2, the method performs TWO recursive calls, one with the parameter n - 1, another with the parameter n - 2, and some constant number of basic operations.

Hence, the recurrence relation is:

    T(n) = c                        if n = 1 or n = 2
    T(n) = T(n - 1) + T(n - 2) + b  if n > 2

Example 4: Write the recurrence relation for the following method:

long power (long x, long n) {
    if (n == 0)
        return 1;
    else if (n == 1)
        return x;
    else if ((n % 2) == 0)
        return power (x, n/2) * power (x, n/2);
    else
        return x * power (x, n/2) * power (x, n/2);
}

The base case is reached when n == 0 or n == 1. The method performs one comparison and one return statement. Therefore, each of T(0) and T(1) is some constant c.

At every step the problem size reduces to half, and the method makes two recursive calls with parameter n/2. When the power is an odd number, an additional multiplication is involved. To work out the time complexity, we consider the worst case; that is, we assume that at every step the additional multiplication is needed. Thus the total number of operations T(n) reduces to twice the number of operations for n/2, that is 2T(n/2), plus a constant number of additional basic operations (the odd-power case).

Hence, the recurrence relation is:

    T(n) = c              if n = 0 or n = 1
    T(n) = 2T(n / 2) + b  if n > 1

Solving Recurrence Relations

To solve a recurrence relation T(n) we need to derive a form of T(n) that is not a recurrence relation. Such a form is called a closed form of the recurrence relation.

There are five methods to solve recurrence relations that represent the running time of recursive methods:

➢ Iteration method (unrolling and summing)
➢ Substitution method (guess the solution and verify by induction)
➢ Recursion tree method
➢ Master theorem (master method)
➢ Using generating functions or characteristic equations

In this course, we will use the iteration method and a simplified master theorem.

Solving Recurrence Relations - Iteration method

Steps:

➢ Expand the recurrence.
➢ Express the expansion as a summation by plugging the recurrence back into itself until you see a pattern.
➢ Evaluate the summation.

In evaluating the summation, one or more of the following summation formulae may be used.

Arithmetic series:

    ∑ (i = 1 to n) i = 1 + 2 + ... + n = n(n + 1)/2

Special cases of geometric series:

    ∑ (i = 0 to n) xⁱ = (x^(n+1) − 1)/(x − 1) for x ≠ 1; in particular, ∑ (i = 0 to n) 2ⁱ = 2^(n+1) − 1
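Returning to Example 4: as written, power calls power(x, n/2) twice, which is what gives T(n) = 2T(n/2) + b (this solves to O(n)). Storing the half-power once turns the recurrence into T(n) = T(n/2) + b, which is O(log n). A sketch of this improvement:

static long power(long x, long n) {
    if (n == 0)
        return 1;
    else if (n == 1)
        return x;
    long half = power(x, n / 2);   // computed once instead of twice
    if (n % 2 == 0)
        return half * half;
    else
        return x * half * half;    // odd n: one extra multiplication
}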
Analysis of Recursive Factorial Method

Example 1: Form and solve the recurrence relation for the running time of the factorial method, and hence determine its big-O complexity:

long factorial (int n) {
    if (n == 0)
        return 1;
    else
        return n * factorial (n - 1);
}

T(0) = c             (1)
T(n) = b + T(n - 1)  (2)
     = b + b + T(n - 2)      by substituting T(n - 1) in (2)
     = b + b + b + T(n - 3)  by substituting T(n - 2) in (2)
     ...
     = kb + T(n - k)

The base case is reached when n - k = 0 → k = n; we then have:

T(n) = nb + T(n - n)
     = bn + T(0)
     = bn + c

Therefore, the method factorial is O(n).

Analysis of Recursive Binary Search

public int binarySearch (int target, int[] array, int low, int high) {
    if (low > high)
        return -1;
    else {
        int middle = (low + high) / 2;
        if (array[middle] == target)
            return middle;
        else if (array[middle] < target)
            return binarySearch(target, array, middle + 1, high);
        else
            return binarySearch(target, array, low, middle - 1);
    }
}

The recurrence relation for the running time of the method is:

T(1) = a             if n = 1 (one-element array)
T(n) = T(n / 2) + b  if n > 1

Expanding:

T(1) = a             (1)
T(n) = T(n / 2) + b  (2)
     = [T(n / 2²) + b] + b = T(n / 2²) + 2b   by substituting T(n / 2) in (2)
     = [T(n / 2³) + b] + 2b = T(n / 2³) + 3b  by substituting T(n / 2²) in (2)
     = ...
     = T(n / 2ᵏ) + kb

The base case is reached when n / 2ᵏ = 1 → n = 2ᵏ → k = log₂ n; we then have:

T(n) = T(1) + b log₂ n
     = a + b log₂ n

Therefore, recursive binary search is O(log n).
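A usage sketch (assuming binarySearch is declared static so it can be called from main):

public static void main(String[] args) {
    int[] a = {3, 5, 7, 8, 9, 12, 15};
    int pos = binarySearch(9, a, 0, a.length - 1);  // search the whole array
    System.out.println(pos);                        // prints 4, after only 3 calls
}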
Analysis of Recursive Selection Sort

private static void selectionSort(int[] x, int n)
{
    int minPos;
    if (n > 0)
    {
        minPos = findMinPos(x, n);
        swap(x, minPos, n);
        selectionSort(x, n - 1);
    }
}

private static int findMinPos (int[] x, int n)
{
    int k = n;
    for (int i = 0; i < n; i++)
        if (x[i] < x[k]) k = i;
    return k;
}

private static void swap(int[] x, int minPos, int n)
{
    int temp = x[n]; x[n] = x[minPos]; x[minPos] = temp;
}

findMinPos is O(n), and swap is O(1); therefore the recurrence relation for the running time of the selectionSort method is:

T(0) = a                 (1)
T(n) = T(n - 1) + n + c  if n > 0   (2)
     = [T(n - 2) + (n - 1) + c] + n + c = T(n - 2) + (n - 1) + n + 2c                      by substituting T(n - 1) in (2)
     = [T(n - 3) + (n - 2) + c] + (n - 1) + n + 2c = T(n - 3) + (n - 2) + (n - 1) + n + 3c  by substituting T(n - 2)
     = T(n - 4) + (n - 3) + (n - 2) + (n - 1) + n + 4c
     = ... = T(n - k) + (n - k + 1) + (n - k + 2) + ... + n + kc

The base case is reached when n - k = 0 → k = n; we then have:

T(n) = T(0) + 1 + 2 + 3 + ... + (n - 2) + (n - 1) + n + nc
     = a + nc + ∑ (i = 1 to n) i
     = a + nc + n(n + 1)/2
     = n²/2 + (c + 1/2)n + a

Therefore, recursive selection sort is O(n²).

Analysis of Recursive Towers of Hanoi Algorithm

Problem statement: Move all the disks stacked on the first tower over to the last tower, using a helper tower in the middle. While moving the disks, certain rules must be followed:

1. Only one disk can be moved at a time.
2. A larger disk cannot be placed on a smaller disk.

(Figure: Tower of Hanoi default setup.)

We solve this problem using simple recursion. To get three disks over to the final tower you need to:

1. Take disks number 1 and 2 to tower B.
2. Move disk number 3 to tower C.
3. Take disks number 1 and 2 from B to C.

Of course, you can't move two disks at once because of the constraints. However, we can use this idea to create a function that does it recursively.

public static void hanoi(int n, char from, char to, char temp)
{
    if (n == 1)
        System.out.println(from + " --------> " + to);
    else
    {
        hanoi(n - 1, from, temp, to);
        System.out.println(from + " --------> " + to);
        hanoi(n - 1, temp, to, from);
    }
}
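A usage sketch: moving three disks prints 2³ − 1 = 7 moves, in line with the O(2ⁿ) bound derived next.

public static void main(String[] args) {
    hanoi(3, 'A', 'C', 'B');   // 7 moves: A→C, A→B, C→B, A→C, B→A, B→C, A→C
}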

The recurrence relation for the running time of the method hanoi is:

T(n) = a              if n = 1
T(n) = 2T(n - 1) + b  if n > 1

Expanding:

T(n) = 2[2T(n - 2) + b] + b = 2² T(n - 2) + 3b     by substituting T(n - 1)
     = 2²[2T(n - 3) + b] + 3b = 2³ T(n - 3) + 7b   by substituting T(n - 2)
     = 2³[2T(n - 4) + b] + 7b = 2⁴ T(n - 4) + 15b  by substituting T(n - 3)
     = 2ᵏ T(n - k) + (2ᵏ - 1)b

The base case is reached when n - k = 1 → k = n - 1; we then have:

T(n) = 2ⁿ⁻¹·a + (2ⁿ⁻¹ - 1)·b = 2ⁿ⁻¹(a + b) - b

Therefore, the method hanoi is O(2ⁿ).

Master Theorem (Master Method)

The master method provides an estimate of the growth rate of the solution for recurrences of the form:

    T(n) = aT(n / b) + f(n)

where a ≥ 1, b > 1 and the overhead function f(n) > 0.

If T(n) is interpreted as the number of steps needed to execute an algorithm for an input of size n, this recurrence corresponds to a "divide and conquer" algorithm, in which a problem of size n is divided into a subproblems of size n / b, where a and b are positive constants.

The Simplified Master Method for Solving Recurrences:

• Consider recurrences of the form:

    T(1) = 1
    T(n) = aT(n / b) + k·nᶜ

for constants a ≥ 1, b > 1, c ≥ 0, k ≥ 1. Then:

1. T(n) ∈ O(n^(log_b a))  if a > bᶜ
2. T(n) ∈ O(nᶜ log n)     if a = bᶜ
3. T(n) ∈ O(nᶜ)           if a < bᶜ

Note: Since k does not affect the result, it is sometimes not included in the above recurrence.

Example 1: Find the big-Oh running time of the following recurrence. Use the Master Theorem:

    T(1) = 1
    T(n) = 3T(n / 4) + k·n^(1/2)

Solution: a = 3, b = 4, c = 1/2 → a > bᶜ → Case 1. Hence T(n) ∈ O(n^(log₄ 3)).

Example 2: Find the big-Oh running time of the following recurrence. Use the Master Theorem:

    T(1) = 1
    T(n) = 2T(n / 2) + n

Solution: a = 2, b = 2, c = 1 → a = bᶜ → Case 2. Hence T(n) is O(n log n).

Example 3: Find the big-Oh running time of the following recurrence. Use the Master Theorem:

    T(1) = 1
    T(n) = 4T(n / 2) + k·n³ + h, where k ≥ 1 and h ≥ 1

Solution: a = 4, b = 2, c = 3 → a < bᶜ → Case 3. Hence T(n) is O(n³).

Example 4: Find the big-Oh running time of the following recurrence. Use the Master Theorem:

    T(0) = 1             n = 0
    T(n) = 5 + T(n - 1)  n > 0

Here a = 1, k = 5, c = 0, b = 1. Note that b is not > 1, so we cannot use the Master Theorem.
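A small sketch that mechanizes the three cases for recurrences of the form T(n) = aT(n/b) + k·nᶜ (an added illustration, not from the slides):

public static String masterBound(double a, double b, double c) {
    if (b <= 1)
        return "Master Theorem does not apply (requires b > 1)";
    double bc = Math.pow(b, c);
    if (a > bc)
        return "O(n^(log_" + b + " " + a + "))";   // Case 1: O(n^(log_b a))
    else if (a == bc)
        return "O(n^" + c + " log n)";             // Case 2
    else
        return "O(n^" + c + ")";                   // Case 3
}

For instance, masterBound(2, 2, 1) returns "O(n^1.0 log n)", matching Example 2 above.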
Example 5: Analysis of recursive binary search (the binarySearch method given earlier) using the Master Theorem:

    T(1) = 1             if n = 1 (one-element array)
    T(n) = T(n / 2) + 4  if n > 1

Solution: a = 1, b = 2, c = 0 → a = bᶜ → Case 2. Hence T(n) ∈ O(nᶜ log n) = O(log n).

Divide-and-conquer algorithm:

• divide the problem into a number of subproblems
• conquer the subproblems (solve them)
• combine the subproblem solutions to get the solution to the original problem

Example: Merge Sort

• divide the n-element sequence to be sorted into two n/2-element sequences.
• conquer the subproblems recursively using merge sort.
• combine the resulting two sorted n/2-element sequences by merging.

Binary search: find an element in a sorted array.

1. Divide: check the middle element.
2. Conquer: recursively search one subarray.
3. Combine: trivial.

Example: Find 9 in the array 3 5 7 8 9 12 15.

Steps:
➢ 3 5 7 8 [9 12 15]: compare 9 with the middle element 8; since 9 > 8, search the right subarray.
➢ [9] 12 15: compare 9 with the middle element 12; since 9 < 12, search the left subarray.
➢ [9]: compare with 9; found.

The recurrence relation for the running time of the method is T(n) = T(n / 2) + b, as solved above.

Analysis of Recursive Merge Sort

Divide:

• [38, 27, 43, 10] is divided into [38, 27] and [43, 10].
• [38, 27] is divided into [38] and [27].
• [43, 10] is divided into [43] and [10].

Conquer:

• [38] is already sorted.
• [27] is already sorted.
• [43] is already sorted.
• [10] is already sorted.

Merge:

• Merge [38] and [27] to get [27, 38].
• Merge [43] and [10] to get [10, 43].
• Merge [27, 38] and [10, 43] to get the final sorted list [10, 27, 38, 43].

Therefore, the sorted list is [10, 27, 38, 43].
void mergeSort (int arr[], int l, int r)
{
    if (l < r)
    {
        int m = l + (r - l) / 2;   // Find the middle point
        mergeSort(arr, l, m);      // Sort first and second halves
        mergeSort(arr, m + 1, r);
        merge(arr, l, m, r);       // Merge the sorted halves
    }
}

// Merges two subarrays of arr[].
// First subarray is arr[l..m], second subarray is arr[m+1..r]
void merge(int arr[], int l, int m, int r)
{
    // Find sizes of the two subarrays to be merged
    int n1 = m - l + 1;
    int n2 = r - m;
    int L[] = new int[n1];   // Create temp arrays
    int R[] = new int[n2];
    // Copy data to temp arrays
    for (int i = 0; i < n1; ++i)
        L[i] = arr[l + i];
    for (int j = 0; j < n2; ++j)
        R[j] = arr[m + 1 + j];
    // Merge the temp arrays
    int i = 0, j = 0;   // Initial indices of first and second subarrays
    int k = l;          // Initial index of merged subarray (l, not 0, so merging works for any subarray)
    while (i < n1 && j < n2)
    {
        if (L[i] <= R[j])
        {
            arr[k] = L[i];
            i++;
        }
        else
        {
            arr[k] = R[j];
            j++;
        }
        k++;
    }
    while (i < n1)   // Copy remaining elements of L[], if any
    {
        arr[k] = L[i];
        i++; k++;
    }
    while (j < n2)   // Copy remaining elements of R[], if any
    {
        arr[k] = R[j];
        j++; k++;
    }
}

Recurrence Relation of Merge Sort

The recurrence relation of merge sort is:

    T(1) = 1                 if n = 1
    T(n) = 2T(n / 2) + f(n)  if n > 1

• T(n) represents the total time taken by the algorithm to sort an array of size n.
• 2T(n/2) represents the time taken by the algorithm to recursively sort the two halves of the array. Since each half has n/2 elements, we have two recursive calls with input size (n/2).
• f(n) represents the time taken to merge the two sorted halves; merging the two sorted halves is done in O(n).

So the recurrence relation of merge sort is T(n) = 2T(n/2) + kn.

Using the master method: a = 2, b = 2, c = 1 → a = bᶜ → Case 2. Hence T(n) is O(n log n).

• Time Complexity:
  o Best Case: O(n log n), when the array is already sorted or nearly sorted.
  o Average Case: O(n log n), when the array is randomly ordered.
  o Worst Case: O(n log n), when the array is sorted in reverse order.
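A usage sketch for the code above, assuming mergeSort and merge are declared static:

public static void main(String[] args) {
    int[] arr = {38, 27, 43, 10};
    mergeSort(arr, 0, arr.length - 1);
    System.out.println(java.util.Arrays.toString(arr));  // prints [10, 27, 38, 43]
}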
