Design and Analysis of Algorithms
Module No: 1
Module Name: Introduction
Course Objectives
2 Module 1- Introduction
What do we learn in this course?
Objective 1: To provide a mathematical approach for the analysis of algorithms.
• Recursive
• Non-Recursive
Types of Algorithms:
• Divide and Conquer Approach
• Greedy Method Approach
• Dynamic Programming Approach
• Backtracking and Branch and Bound
• String Matching Algorithms
Methods of Analysis:
• Asymptotic analysis using Big-Oh, Omega, Theta notations
• The substitution method
• Recursion tree method
• Master method
Objective 2: To understand and solve problems using various algorithmic approaches.
• String Matching Algorithms
Objective 3: To analyze algorithms using various methods.
• Master method
Course Outcome
CO1 : Analyze the running time and space complexity of algorithms.
Outline-Module 1
S.No | Topic
1 | Performance analysis, space and time complexity; mathematical background for algorithm analysis
2 | Growth of functions – Big-Oh, Omega, Theta notations
Module No 1: Introduction
Lecture No: 1
Performance analysis, space and time complexity; mathematical background for algorithm analysis
Algorithm
An algorithm is a set of rules to obtain the expected output from the given input:
Input → Algorithm → Output
Google Hangout video conference call — is there an algorithm behind it?
How does Google Maps figure out how to get from one place to another?
Google Maps — is there an algorithm behind it?
Activity
Example: Delivery Truck
1 truck, 25 locations
= 24! routes
= 620,448,401,733,239,439,360,000 routes
Example: Delivery Truck
1 truck, 25 locations
…solved using the "Nearest Insertion" algorithm
Characteristics of an Algorithm
• Feasible
• Finiteness
• Language Independent
Activity
Need of Analyzing Algorithms
Introduction to Analysis of Algorithms
To analyze an algorithm means determining the amount of resources (such as time and storage) needed to execute it.
Efficiency of an algorithm can be measured in terms of:
• Execution time (time complexity): a measure of the amount of time required to execute an algorithm.
• The amount of memory required (space complexity): calculated by considering the data and their size.
Space Complexity
S(P) = C + S_P(I)
where C is the fixed part (instruction space and constants) and S_P(I) is the variable part that depends on the instance I.
Example
Algorithm ADD(x, n)
{
// Problem Description: The algorithm performs addition of all the elements in an array. The array is of floating type.
// Input: an array x, and n, the total number of elements in the array
// Output: returns Sum, which is of data type float.
Sum = 0.0              ----------1 unit of space for Sum
for i = 1 to n do      ----------1 unit of space for i, 1 unit for n
Sum = Sum + x[i]       ----------n units of space for x[]
return Sum
}
S(P) ≥ n + 3
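The same accounting can be sketched in a runnable form. This hypothetical Python version (not part of the slides) mirrors Algorithm ADD; the comments restate the space units counted above.

```python
def add(x):
    """Sum a list of floats, mirroring Algorithm ADD(x, n)."""
    total = 0.0            # 1 unit of space for the accumulator (Sum)
    for v in x:            # loop variable: 1 unit; n is implicit in len(x)
        total += v         # the array x itself occupies n units
    return total
```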
Time complexity
Example: Delivery Truck
Suppose the same program is run on two machines:
1. Python, Intel Dual Core, 6 GB RAM → time = 20 min
2. Python, Intel Quad Core, 16 GB RAM → time = 7 min
Example: Delivery Truck
Number of nodes | Nearest Insertion | Brute force
1  | 1   | 1
2  | 4   | 1
3  | 9   | 2
4  | 16  | 6
…
25 | 625 | 620,448,401,733,239,439,360,000
n  | n²  | n!
Time Complexity
T(P) = C + T_P(I)
T_P(n) = c_a·ADD(n) + c_s·SUB(n) + c_m·MUL(n) + c_d·DIV(n) + …
where n is the instance characteristic and c_a, c_s, c_m, c_d are the times for an addition, subtraction, multiplication, division, and so on.
Example
Let us suppose the system takes:
• 1 unit of time for arithmetic and logical operations
• 1 unit of time for assignment and return statements

1. Sum of 2 numbers:
Sum(a, b)
{
return a + b    // cost = 2, number of times = 1 (one arithmetic operation, one return)
}
Tsum = 2 (proportional to a constant)

2. Sum of all elements of a list:
list_Sum(A, n)
{ // A -> array, n -> number of elements in the array
total = 0             // cost = 1, number of times = 1
for i = 0 to n-1      // cost = 2, number of times = n+1 (+1 for the final false condition)
total = total + A[i]  // cost = 2, number of times = n
return total          // cost = 1, number of times = 1
}
Tsum = 1 + 2(n+1) + 2n + 1 = 4n + 4 = C1·n + C2 (proportional to n)
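The unit-cost bookkeeping above can be checked mechanically. This small sketch (not from the slides) counts the same units while summing a list and compares the count against the closed form 4n + 4.

```python
def list_sum_cost(n):
    """Closed-form unit cost for summing n elements: 1 + 2(n+1) + 2n + 1."""
    return 1 + 2 * (n + 1) + 2 * n + 1   # = 4n + 4, i.e. C1*n + C2

def list_sum(A):
    """Sum A while counting the unit costs used in the slide's model."""
    ops = 0
    total = 0; ops += 1          # initial assignment
    for v in A:
        ops += 2                 # loop test (true branch)
        total = total + v; ops += 2
    ops += 2                     # final false loop test
    ops += 1                     # return statement
    return total, ops
```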
Module No 1: Introduction
Lecture No: 2
Growth of Functions – Big-Oh, Omega, Theta notations
Asymptotic Notation( Growth of function)
Big-oh(O) Notation [study material]
Omega Notation (Ω) [study material]
Theta Notation (Θ) [study material]
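The bracketed study-material slides above cover the three notations; for reference, the standard textbook definitions are:

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 : 0 \le f(n) \le c\,g(n) \quad \forall n \ge n_0 \\
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : 0 \le c\,g(n) \le f(n) \quad \forall n \ge n_0 \\
f(n) = \Theta(g(n)) \iff \exists\, c_1, c_2 > 0,\ n_0 : c_1\,g(n) \le f(n) \le c_2\,g(n) \quad \forall n \ge n_0
```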
Example (revisited with asymptotic notation)
Assume the system takes 1 unit of time for arithmetic and logical operations, and 1 unit for assignment and return statements.

1. Sum of 2 numbers:
Sum(a, b) {
return a + b   // cost = 2, number of times = 1 (one arithmetic operation, one return)
}
Tsum = 2 = C = O(1)

2. Sum of all elements of a list:
list_Sum(A, n)
{ // A -> array, n -> number of elements in the array
total = 0             // cost = 1, number of times = 1
for i = 0 to n-1      // cost = 2, number of times = n+1 (+1 for the final false condition)
total = total + A[i]  // cost = 2, number of times = n
return total          // cost = 1, number of times = 1
}
Tsum = 1 + 2(n+1) + 2n + 1 = 4n + 4 = C1·n + C2 = O(n)
Best case, Worst case and Average case Analyses
Order of growth
Activity
Answer
1. 3n + 2 = O(n)
2. 5n⁵ + 100n + 6 = O(n⁵)
3. 10n² + 4n + 2 = O(n²)
5. log(log n) + log n = O(log n)
Example
Activity
Best case
Activity
Worst case
Activity
Average Case
Module No 1: Introduction
Lecture No:3
Analysis of Selection Sort
Selection Sort
Idea:
Find the smallest element in the array
Exchange it with the element in the first position
Find the second smallest element and exchange it with the element in the
second position
Continue until the array is sorted
Disadvantage:
Running time depends only slightly on the amount of order in the file
Example
8 4 6 9 2 3 1
1 4 6 9 2 3 8
1 2 6 9 4 3 8
1 2 3 9 4 6 8
1 2 3 4 9 6 8
1 2 3 4 6 9 8
1 2 3 4 6 8 9
1 2 3 4 6 8 9
Selection Sort
Algorithm SELECTION-SORT(A[0..n-1])
{ // Problem description: this algorithm sorts the elements using selection sort
// Input: an array A[0..n-1] that is to be sorted
// Output: the sorted array A[0..n-1]
for i ← 0 to n-2
  do smallest ← i
  for j ← i+1 to n-1
    do if A[j] < A[smallest]
      then smallest ← j
  // swap A[i] and A[smallest]
  temp = A[i]
  A[i] = A[smallest]
  A[smallest] = temp
}
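A direct Python transcription of the SELECTION-SORT pseudocode above (0-based indices, as in the slides):

```python
def selection_sort(A):
    """Sort A in place by repeatedly selecting the smallest remaining element."""
    n = len(A)
    for i in range(n - 1):            # i = 0 .. n-2
        smallest = i
        for j in range(i + 1, n):     # j = i+1 .. n-1
            if A[j] < A[smallest]:
                smallest = j
        A[i], A[smallest] = A[smallest], A[i]   # swap A[i] and A[smallest]
    return A
```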
Analysis of Selection Sort
SELECTION-SORT(A[0..n-1])          cost   times
for i ← 0 to n-2                   c1     n
  do smallest ← i                  c2     n-1
  for j ← i+1 to n-1               c3     Σ_{i=0}^{n-2} (n-i)
    do if A[j] < A[smallest]       c4     Σ_{i=0}^{n-2} (n-i-1)
      then smallest ← j            c5     Σ_{i=0}^{n-2} (n-i-1)
  // swap A[i] and A[smallest]
  temp = A[i]                      c6     n-1
  A[i] = A[smallest]               c7     n-1
  A[smallest] = temp               c8     n-1
= O(n²)
Analysis of Selection Sort
The dominant term is the number of comparisons:
Σ_{i=0}^{n-2} (n-i-1) = (n-1) + (n-2) + … + 1 = n(n-1)/2
= O(n²)
Activity
Thank You
Subject Name: Design and Analysis of
Algorithms
Module No: 1
Module Name: Introduction
Outline-Module 1
Sr. No | Topic
1 | Performance analysis, space and time complexity; mathematical background for algorithm analysis
6 | Master Theorem
Module 1 : Introduction
Lecture No: 4
Analysis of Insertion Sort
Insertion Sort
Idea:
Like sorting a hand of playing cards
Start with an empty left hand and the cards facing down on the table.
Remove one card at a time from the table, and insert it into the correct
position in the left hand.
Compare it with each of the cards already in the hand, from right to left.
The cards held in the left hand are sorted; these cards were originally the top cards of the pile on the table.
Insertion Sort
input array: 5 2 4 6 1 3   (sorted part | unsorted part)
INSERTION-SORT
Algorithm INSERTION-SORT(A)
{ // Problem description: this algorithm sorts the elements using insertion sort
// Input: an array A[1..n] that is to be sorted
// Output: the sorted array A[1..n]
for j ← 2 to n
  do key ← A[j]
  // Insert A[j] into the sorted sequence A[1..j-1]
  i ← j - 1
  while i > 0 and A[i] > key
    do A[i+1] ← A[i]
    i ← i - 1
  A[i+1] ← key
}
Insertion sort sorts the elements in place.
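A Python transcription of the INSERTION-SORT pseudocode above (using 0-based indexing instead of the slides' 1-based A[1..n]):

```python
def insertion_sort(A):
    """Sort A in place by inserting each element into the sorted prefix."""
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:   # shift larger elements one slot right
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key                 # insert key into its final position
    return A
```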
Analysis of Insertion Sort
INSERTION-SORT(A)                  cost   times
for j ← 2 to n                     c1     n
  do key ← A[j]                    c2     n-1
  i ← j - 1                        c3     n-1
  while i > 0 and A[i] > key       c4     Σ_{j=2}^{n} t_j
    do A[i+1] ← A[i]               c5     Σ_{j=2}^{n} (t_j - 1)
    i ← i - 1                      c6     Σ_{j=2}^{n} (t_j - 1)
  A[i+1] ← key                     c7     n-1

T(n) = c1·n + c2(n-1) + c3(n-1) + c4·Σ_{j=2}^{n} t_j + c5·Σ_{j=2}^{n} (t_j - 1) + c6·Σ_{j=2}^{n} (t_j - 1) + c7(n-1)
Analysis of Insertion Sort
In the worst case the while loop runs j-1 times for each j, so the dominant term is
Σ_{j=2}^{n} Σ_{i=0}^{j-2} 1 = Σ_{j=2}^{n} (j-1) = 1 + 2 + … + (n-1) = n(n-1)/2
= O(n²)
Module No 1: Introduction
Lecture No: 5
The substitution method and
Recursion tree method
Recurrence Relation
What is a recurrence? A recurrence is an equation that defines a function in terms of its value on smaller inputs, e.g.
T(n) = T(n-1) + 1
Recurrence
https://www.coursera.org/learn/simulation-algorithm-analysis-pointers/lecture/oyhDs/recursion
Example of a Recursive Program
n! = { 1            if n = 0 or n = 1
     { n × (n-1)!   if n > 1

Factorial(int n)
{
  if (n == 0 || n == 1)
    return 1;
  else
    return n * Factorial(n-1);
}

T(1) = 1
T(2) = T(1) + 1 = 2
T(3) = T(2) + 1 = 3
...
T(n) = T(n-1) + 1

T(n) = { 1            if n = 0 or n = 1
       { T(n-1) + 1   if n > 1
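The recurrence T(n) = T(n-1) + 1 can be confirmed empirically. This sketch (not from the slides) counts the recursive calls made by Factorial; the count grows linearly in n, matching T(n) = O(n).

```python
def factorial(n, counter=None):
    """Compute n! recursively; counter[0] tallies the number of calls."""
    if counter is not None:
        counter[0] += 1
    if n <= 1:            # base case: n = 0 or n = 1
        return 1
    return n * factorial(n - 1, counter)

calls = [0]
result = factorial(5, calls)   # 5 calls: n = 5, 4, 3, 2, 1
```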
Finding the Complexity of Recursive Algorithms
The recurrence of a recursive algorithm can be solved by:
• Substitution Method
• Recursion-Tree Method
• Master Method
Solving Recurrence Relations
Useful identity: 1 + x + x² + … = 1/(1 - x), for |x| < 1
Substitution Method
Repeatedly substitute the recurrence into itself: first replace n by n/2, then by n/4, and so on, until a pattern emerges.
Activity
Example
Activity
Example
Home Activity
Activity
Example
Activity
Example
Consider the recurrence relation
T(n) = 2T(n/2 + 17) + n
If no initial condition is given, take T(1) = 1.
T(n) = 2T(n/2 + 17) + n                                    ------- equation 1
T(n/2 + 17) = 2T(n/4 + 25.5) + n/2 + 17    -------putting this in equation 1 we get
T(n) = 2[2T(n/4 + 25.5) + n/2 + 17] + n
     = 2²T(n/4 + 25.5) + 2n + 34                            ------- equation 2
T(n/4 + 25.5) = 2T(n/8 + 29.75) + n/4 + 25.5    -------putting this in equation 2 we get
T(n) = 2²[2T(n/8 + 29.75) + n/4 + 25.5] + 2n + 34
     = 2³T(n/8 + 29.75) + 3n + 136                          ------- equation 3
After k steps:
T(n) = 2^k T(n/2^k + c₁) + kn + c₂                          ------- equation k
Since the additive shift becomes negligible for large n, this behaves like T(n) = 2T(n/2) + n, giving T(n) = O(n log n).
Activity
Example
T(n) = 2T(√n) + log₂ n
Let m = log₂ n, so n = 2^m:
T(2^m) = 2T(2^(m/2)) + m                                    ------- equation 1
Let S(m) = T(2^m). Then equation 1 becomes
S(m) = 2S(m/2) + m                                          ------- equation 2
S(m/2) = 2S(m/4) + m/2          -------putting this in equation 2 we get
S(m) = 2[2S(m/4) + m/2] + m = 2²S(m/2²) + 2m                ------- equation 3
S(m/4) = 2S(m/8) + m/4          -------putting this in equation 3 we get
S(m) = 2²[2S(m/8) + m/4] + 2m = 2³S(m/2³) + 3m              ------- equation 4
After k steps:
S(m) = 2^k S(m/2^k) + km                                    ------- equation k
With S(1) = 1, compare the term S(m/2^k) with S(1):
m/2^k = 1 ⇒ m = 2^k ⇒ k = log₂ m
Put the value of k into the general equation:
S(m) = m·S(1) + m·log₂ m = m + m·log₂ m; now replace S(m) with T(2^m):
T(2^m) = m + m·log₂ m; now replace m = log₂ n and 2^m = n:
T(n) = log₂ n + log₂ n · log₂ log₂ n = O(log₂ n · log₂ log₂ n)
Activity
Example
Consider the recurrence relation
T(n) = T(n-1) + n⁴
If no initial condition is given, take T(1) = 1.
T(n) = T(n-1) + n⁴                                          ------- equation 1
T(n-1) = T(n-2) + (n-1)⁴        -------putting this in equation 1 we get
T(n) = T(n-2) + (n-1)⁴ + n⁴                                  ------- equation 2
T(n-2) = T(n-3) + (n-2)⁴        -------putting this in equation 2 we get
T(n) = T(n-3) + (n-2)⁴ + (n-1)⁴ + n⁴                         ------- equation 3
After k steps:
T(n) = T(n-k) + (n-k+1)⁴ + … + (n-2)⁴ + (n-1)⁴ + n⁴          ------- equation k
Setting n-k = 1:
T(n) = 1⁴ + 2⁴ + 3⁴ + … + (n-2)⁴ + (n-1)⁴ + n⁴
     ≈ n⁵/5
     = O(n⁵)
Activity
Example
T(n) = 4T(n/3) + Cn², where C is a constant.
Expanding k times (the non-recursive terms form a geometric series that sums to O(n²)):
T(n) = 4^k T(n/3^k) + Cn²
With T(1) = 1, compare the term T(n/3^k) with T(1):
n/3^k = 1 ⇒ n = 3^k ⇒ k = log₃ n
T(n) = 4^(log₃ n) T(1) + Cn² = n^(log₃ 4) + Cn² ≈ n^1.26 + Cn² = O(n²)
Recurrence Relations to Remember
Solving Recurrence Relations
Useful identity: 1 + x + x² + … = 1/(1 - x), for |x| < 1
The Recursion-Tree Method
Steps: expand the recurrence into a tree, sum the cost at each level, then sum the costs over all levels.
Example 1: T(n) = 3T(n/4) + cn²
The root costs cn²; level i has 3^i nodes, each costing c(n/4^i)², so level i costs (3/16)^i · cn². The leaves are reached when T(n/4^i) = T(1), i.e. n/4^i = 1, i.e. i = log₄ n.
Total = cn² Σ_{i=0}^{log₄ n} (3/16)^i < cn² · 1/(1 - 3/16) = (16/13)cn²
Hence T(n) = Θ(n²).
Example 2: T(n) = T(n/4) + T(n/2) + n²
The root costs n²; the next level costs (n/4)² + (n/2)² = (5/16)n²; the level after costs (25/256)n², and so on — a geometric series with ratio 5/16, bottoming out in O(1) leaves.
Total = n² (1 + 5/16 + (5/16)² + …) < n² · 1/(1 - 5/16) = (16/11)n²
Hence T(n) = Θ(n²).
Examples 3 and 4: recursion-tree exercises of the form T(n) = aT(n/b) + cn (tree figures omitted).
Example 5: T(n) = 2T(n/2) + 4n — the root costs 4n and its two children cost 4(n/2) each (tree figure omitted).
Example 6: a recursion-tree exercise with per-node cost n² (tree figure omitted).
The Master Theorem
Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the non-negative integers by the recurrence T(n) = aT(n/b) + f(n).
1. If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
The Master Theorem (cont..)
Each of the conditions can be interpreted as follows:
• If the cost of solving the sub-problems at each level is nearly equal, the per-level cost is n^(log_b a); the total time is then that cost times the number of levels, i.e. n^(log_b a) · log n.
Master Theorem Limitations
The master theorem cannot be used if:
• a < 1, or a is not a constant
• f(n) is not a positive function
Using the master method
To use the master method, we simply determine which case of the master theorem applies and write down the answer.
Example: T(n) = 9T(n/3) + n
• Step 1: Compare the given recurrence with the standard form T(n) = aT(n/b) + f(n) to find the values of a, b and f(n).
  So we have a = 9, b = 3, f(n) = n.
• Step 2: Calculate the value of n^(log_b a) using the above values:
  n^(log_b a) = n^(log₃ 9) = n²
Using the master method (cont..)
• Step 3: Compare the function f(n) with the value of n^(log_b a).
  Case 1: If n^(log_b a) is polynomially larger than f(n), then T(n) = Θ(n^(log_b a)).
  Case 2: If the two functions are the same size, we multiply by a logarithmic factor: T(n) = Θ(n^(log_b a) · log n) = Θ(f(n) · log n).
  Case 3: If f(n) is polynomially larger than n^(log_b a), then T(n) = Θ(f(n)).
• Step 4: Here f(n) < n^(log_b a), so case 1 applies:
  T(n) = Θ(n^(log_b a)) = Θ(n²)
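The case selection in steps 3–4 can be automated when the driving function is a plain polynomial f(n) = n^k (for polynomials, the regularity condition of case 3 holds automatically). This helper is a sketch, not part of the slides:

```python
import math

def master_case(a, b, k):
    """Classify T(n) = a*T(n/b) + n^k into a master-theorem case (1, 2, or 3)."""
    e = math.log(a, b)          # the critical exponent log_b(a)
    if abs(e - k) < 1e-9:
        return 2                # T(n) = Theta(n^k * log n)
    return 1 if k < e else 3    # 1: Theta(n^e);  3: Theta(n^k)
```

For the slide's example T(n) = 9T(n/3) + n, `master_case(9, 3, 1)` returns 1, matching the conclusion T(n) = Θ(n²).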
Example 1:
Q. Solve the given recurrence relation using the Master Method: T(n) = 4T(n/2) + n².
• Step 1: Comparing with the standard form T(n) = aT(n/b) + f(n): a = 4, b = 2, f(n) = n².
• Step 2: n^(log_b a) = n^(log₂ 4) = n²
Example 1 (cont..):
• Step 3: f(n) = n² and n^(log_b a) = n², so f(n) = Θ(n^(log_b a)) and case 2 applies: multiply by a logarithmic factor.
• Step 4: T(n) = Θ(n^(log_b a) · log n) = Θ(n² log n)
Example 2:
Q. Solve the given recurrence relation using the Master Method: T(n) = 16T(n/4) + n!.
• Step 1: Comparing with the standard form: a = 16, b = 4, f(n) = n!.
• Step 2: n^(log_b a) = n^(log₄ 16) = n²
Example 2 (cont..):
• Step 3: f(n) = n! and n^(log_b a) = n²; f(n) grows polynomially faster than n^(log_b a), so case 3 applies.
• Step 4: T(n) = Θ(f(n)) = Θ(n!)
Example 3:
Q. Solve the given recurrence relation using the Master Method: T(n) = √2·T(n/2) + log n.
• Step 1: Comparing with the standard form: a = √2, b = 2, f(n) = log n.
• Step 2: n^(log_b a) = n^(log₂ √2) = n^(1/2) = √n
Example 3 (cont..):
• Step 3: f(n) = log n and n^(log_b a) = √n; n^(log_b a) is polynomially larger than f(n), so case 1 applies.
• Step 4: T(n) = Θ(n^(log_b a)) = Θ(√n)
Example 4:
Q. Solve the given recurrence relation using the Master Method: T(n) = 2ⁿ·T(n/2) + nⁿ.
• Step 1: Comparing with the standard form: a = 2ⁿ, b = 2, f(n) = nⁿ.
  As per the theorem, the value of a should be a constant, but here a changes with n. Hence, the Master Method cannot be applied.
Example 5:
Q. Solve the given recurrence relation using the Master Method: T(n) = 0.5·T(n/2) + 1/n.
• Step 1: Comparing with the standard form: a = 0.5, b = 2, f(n) = 1/n.
  The theorem requires a ≥ 1, but here a < 1. Hence, the Master Method cannot be applied.
Example 6:
Q. Solve the given recurrence relation using the Master Method: T(n) = 64T(n/8) − n² log n.
• Step 1: Comparing with the standard form: a = 64, b = 8, f(n) = −n² log n.
  The theorem requires a ≥ 1, b > 1, and f(n) to be a positive function, but here f(n) is negative. Hence, the Master Method cannot be applied.
Practice Problems
1. T(n) = 3T(n/2) + n²
7. T(n) = 4T(n/2) + cn
https://docs.google.com/forms/d/1PpTizWXcnPfJEZ17
KgMoz3M79CjW3TV7wJzxI9AfdNY/edit?usp=sharing
Lecture No 5: Master Theorem
Thank You
Design and Analysis of Algorithms
Module 2: Divide and Conquer Approach
Index
Lecture 09
▪ Algorithm efficiency
▪ Parallelism
▪ Memory access
▪ Round-off control
Binary search compares the key with the middle element and recurses on the lower half (x < mid) or the upper half (x > mid):
▪ T(n) = T(n/2) + O(1)
▪ Best case: the element we are looking for is found at the middle, so there is no further recursive call:
▪ T(n) = O(1)
▪ Combine: Merge the two sorted subsequences to produce the sorted answer
▪ The recursion "bottoms out" when the sequence to be sorted has length 1.
MERGESORT(A)
{
n <- length(A)
if (n < 2)
{
return
}
mid <- n/2
left  = array of size (mid)
right = array of size (n - mid)
for (i = 0 to mid-1)
{
left[i] <- A[i]
}
for (i = mid to n-1)
{
right[i-mid] <- A[i]
}
MERGESORT(left)
MERGESORT(right)
MERGE(left, right, A)
}
MERGE(L, R, A)
{
nL <- length(L)
nR <- length(R)
i = 0, j = 0, k = 0
while (i < nL && j < nR)
{
if (L[i] <= R[j])
{
A[k] <- L[i]; i <- i+1
}
else
{
A[k] <- R[j]; j <- j+1
}
k = k+1
}
while (i < nL)
{
A[k] = L[i]; i <- i+1; k <- k+1
}
while (j < nR)
{
A[k] = R[j]; j <- j+1; k <- k+1
}
}
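The two routines above can be combined into one runnable Python version that sorts A in place:

```python
def merge_sort(A):
    """Sort A in place: split into halves, sort each, then merge."""
    if len(A) < 2:
        return A
    mid = len(A) // 2
    left, right = A[:mid], A[mid:]
    merge_sort(left)
    merge_sort(right)
    i = j = k = 0
    while i < len(left) and j < len(right):   # take the smaller head element
        if left[i] <= right[j]:
            A[k] = left[i]; i += 1
        else:
            A[k] = right[j]; j += 1
        k += 1
    while i < len(left):                      # drain any leftovers
        A[k] = left[i]; i += 1; k += 1
    while j < len(right):
        A[k] = right[j]; j += 1; k += 1
    return A
```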
Example input (indices 0..7): 38 27 43 3 9 82 10 7
Best case, average case and worst case are the same in merge sort:
T(1) = 1 for n = 1
T(n) = T(n/2) + T(n/2) + cn for n > 1, i.e.
T(n) = { 1              if n = 1
       { 2T(n/2) + cn   if n > 1
(Figures omitted: a step-by-step merge of the sorted halves [2 4 5 7] and [1 2 3 6] using pointers p and q, and the numbered order of recursive calls for the input [5 2 4 7 1 3 2 6]; both produce the sorted array [1 2 2 3 4 5 6 7].)
Quick Sort – Method 1 (example array, indices 0..7, with sentinel: 35 50 15 25 80 20 90 45 +∞)

Quick_sort(l, h)
{
if (l < h)
{
j = partition(l, h);
Quick_sort(l, j)
Quick_sort(j+1, h)
}
}

partition(l, h)
{
pivot = A[l];
i = l; j = h;
while (i < j)
{
do { i++; } while (A[i] <= pivot);
while (A[j] > pivot) { j--; }
if (i < j) { swap(A[i], A[j]); }
}
swap(A[l], A[j]);
return j;
}
Quick Sort – Algorithm Method 2
▪ QUICKSORT(A, p, r)
1. if p < r
2.   then q ← PARTITION(A, p, r)
3.     QUICKSORT(A, p, q - 1)
4.     QUICKSORT(A, q + 1, r)
(Example: after partitioning [2 1 3 4 7 5 6 8] around the pivot at index q, quicksort is called recursively on A[p..q-1] and A[q+1..r].)
▪ Recurrence equation:
T(n) = T(q) + T(n - q) + f(n)
▪ PARTITION(A, p, r)
1. x ← A[r]
2. i ← p - 1
3. for j ← p to r - 1
4.   do if A[j] ≤ x
5.     then i ← i + 1
6.       exchange A[i] ↔ A[j]
7. exchange A[i + 1] ↔ A[r]
8. return i + 1
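A Python version of Method 2 (Lomuto partition: the pivot is the last element):

```python
def partition(A, p, r):
    """Partition A[p..r] around pivot A[r]; return the pivot's final index."""
    x = A[r]                      # pivot
    i = p - 1
    for j in range(p, r):
        if A[j] <= x:             # grow the <= pivot region
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

def quicksort(A, p=0, r=None):
    """Sort A[p..r] in place using the Lomuto partition scheme."""
    if r is None:
        r = len(A) - 1
    if p < r:
        q = partition(A, p, r)
        quicksort(A, p, q - 1)
        quicksort(A, q + 1, r)
    return A
```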
Lecture 12: Analysis of Quick Sort
Worst case (e.g. an already sorted input): each call to quicksort partitions off a single element, so quicksort on n elements recurses on n-1 elements, then n-2, and so on.
T(n) = T(n-1) + Θ(n)
By the substitution method:
T(n-1) = T(n-2) + Θ(n-1)
…
T(n) = 1 + 2 + 3 + … + (n-1) + n = n(n+1)/2
T(n) = O(n²)
Best Case
Example: 23 18 32 34 61 40 90 50 — each partition splits the array evenly, giving a call to quicksort with n/2 elements on each side, so T(n) = 2T(n/2) + Θ(n) = O(n log n).
▪ Even if each call to quicksort splits the array unevenly into a left subarray of size 9n/10 and a right subarray of size n/10, the recursion depth is still O(log n) and the running time remains O(n log n).
Introduction to the Greedy Approach
Greedy Approach
Often it is easy to find a feasible solution but difficult to find the optimal solution.
A greedy algorithm always makes the choice that seems best at that moment: it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution.
A greedy algorithm makes greedy choices at each step to ensure that the objective function is optimized.
A greedy algorithm has only one shot to compute the optimal solution: it never goes back and reverses a decision.
It is quite easy to come up with a greedy algorithm (or even multiple greedy algorithms) for a problem.
Analyzing the running time of a greedy algorithm is generally much easier than for other techniques (like divide and conquer), where it is not immediately clear whether the technique is fast or slow, because at each level of recursion the size of the sub-problems gets smaller while their number increases.
The difficult part is that for greedy algorithms you have to work much harder to understand correctness issues.
E.g. denominations in Rs. = {1, 3, 4, 5, 7}, amount = 10.
Using the greedy approach, always take the coin of the highest denomination possible:
10 - 7 = 3, 3 - 3 = 0, so the amount is paid with 2 coins.
E.g. denominations in Rs. = {1, 3, 4, 5}, amount = 7.
Greedy: 7 - 5 = 2, 2 - 1 = 1, 1 - 1 = 0.
Here the feasible solution you got uses 3 coins for the amount of Rs. 7, even though 2 coins (4 + 3) would suffice.
With a denomination set that lacks the Re. 1 coin, greedy gives 7 - 5 = 2 and then no remaining coin fits:
here you are not able to get a feasible solution, even though one exists.
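The greedy strategy described above can be sketched as follows (hypothetical helper, not from the slides). The second call reproduces the case where greedy finds a feasible but sub-optimal answer for {1, 3, 4, 5}:

```python
def greedy_change(coins, amount):
    """Always take the largest coin that fits; return coins used, or None."""
    taken = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            taken.append(c)
    return taken if amount == 0 else None   # None: greedy found no feasible solution

coins_a = greedy_change([1, 3, 4, 5, 7], 10)   # 2 coins
coins_b = greedy_change([1, 3, 4, 5], 7)       # 3 coins, though [4, 3] is optimal
```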
They are short-sighted in their approach in the sense that they take decisions
on the basis of information at hand without worrying about the effect these
decisions may have in the future.
They are easy to invent, easy to implement and most of the time quite efficient.
That is, it makes a locally optimal choice in the hope that this choice will lead to
a globally optimal solution.
Knapsack Problem
Knapsack Problem
Given a set of items, each with a weight and a value, determine the number
of each item to include in a collection so that the total weight is less than or
equal to a given limit and the total value is as large as possible.
It derives its name from the problem faced by someone who is constrained
by a fixed-size knapsack and must fill it with the most valuable items.
In Fractional Knapsack, we can break items for maximizing the total value of
knapsack.
This problem in which we can break an item is also called the fractional
knapsack problem.
The fractional knapsack can be solved using greedy/dynamic programming
approach .
In the 0-1 Knapsack problem, we are not allowed to break items. We either
take the whole item or don’t take it.
0-1 Knapsack problem, may get feasible solution with greedy approach, but
with dynamic programming approach surely gets the optimal solution.
Problem Scenario
A thief is robbing a store and can carry a maximum weight of W in his knapsack. There are n items available in the store; the weight of the i-th item is wi and its profit is pi. What items should the thief take?
Objective: maximize the total profit Σ pi·xi, subject to the constraint Σ wi·xi ≤ W.
Keep picking up items whole while Σ wi ≤ W over all items added to the knapsack.
If space remains in the knapsack, take a fraction of the next item to fill the knapsack completely.
Return x.
1) Compute the value per pound pi/wi for each item — takes linear time O(n)
2) Sort the list based on pi/wi — sorting takes O(n log n) time
3) The thief begins by taking as much as possible of the item with the greatest value per pound
4) If the supply of that item is exhausted before the knapsack is full, he takes as much as possible of the item with the next greatest value per pound
5) Repeat (3–4) until the knapsack becomes full — steps 3 and 4 take linear time
Time complexity: T(n) = ratio calculation + sorting + adding items = O(n) + O(n log n) + O(n)
Thus, by sorting the items by value per pound, the greedy algorithm runs in O(n log n) time.
ITEM  | A  | B   | C  | D
VALUE | 50 | 140 | 60 | 60
SIZE  | 5  | 20  | 10 | 12
RATIO | 10 | 7   | 6  | 5
Knapsack capacity W = 30.
Solution: all of A, all of B, and (30-25)/10 = 5/10 of C (and none of D)
Size: 5 + 20 + 10·(5/10) = 30
Value: 50 + 140 + 60·(5/10) = 190 + 30 = 220
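The worked table can be reproduced with a direct implementation of steps 1–5 above:

```python
def fractional_knapsack(items, W):
    """items: list of (value, size); returns the maximum achievable value."""
    # sort by value-per-unit ratio, greatest first  -> O(n log n)
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    total = 0.0
    for value, size in items:
        if W >= size:                    # take the whole item
            W -= size
            total += value
        else:                            # take a fraction and stop
            total += value * (W / size)
            break
    return total

# Items A, B, C, D from the table, capacity W = 30
best = fractional_knapsack([(50, 5), (140, 20), (60, 10), (60, 12)], 30)
```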
ITEM  | A   | B   | C   | D
VALUE | 12  | 9   | 9   | 5
SIZE  | 24  | 10  | 10  | 7
RATIO | 0.5 | 0.9 | 0.9 | 0.714
Activity: find which of the given items are added to the knapsack.
Kruskal's Algorithm
Kruskal's algorithm builds the spanning tree of a graph G by adding edges one by one into a growing spanning tree.
Kruskal's algorithm follows the greedy approach: in each iteration it finds the edge of least weight and adds it to the growing spanning tree.
Algorithm steps:
1. Sort the graph edges with respect to their weights.
2. Start adding edges to the MST from the edge with the smallest weight up to the edge of the largest weight.
3. Only add edges that don't form a cycle — edges that connect disconnected components.
The process continues with the next-smallest edge, BE with length 7. Many more edges are highlighted in red at this stage: BC, because it would form the loop BCE; DE, because it would form the loop DEBA; and FE, because it would form FEBAD.
Mst-Kruskal(G,w)
1. F:= ∅
2. for each v ∈ G.V
3. MAKE-SET(v)
4. sort the edges of G.E into non-decreasing order by weight w
5. for each (u, v) in G.E ordered by weight(u, v), increasing
6. if FIND-SET(u) ≠ FIND-SET(v) then
7. F:= F ∪ {(u, v)}
8. UNION(FIND-SET(u), FIND-SET(v))
9. return F
Mst-Kruskal(G, w) — with costs
1. F := ∅                                                        O(1)
2. for each v ∈ G.V
3.     MAKE-SET(v)                                               O(V)
4. sort the edges of G.E into non-decreasing order by weight w   O(E log E)
5. for each (u, v) in G.E, ordered by weight(u, v), increasing
6.     if FIND-SET(u) ≠ FIND-SET(v) then
7.         F := F ∪ {(u, v)}                                     O(E log V)
8.         UNION(FIND-SET(u), FIND-SET(v))
9. return F
(The disjoint-set operations additionally carry a very slow-growing inverse-Ackermann factor.)
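A compact Python version of MST-KRUSKAL, with a minimal union-find (path compression only) standing in for the disjoint-set forest; the small graph at the bottom is an assumed example, not from the slides:

```python
def kruskal(n, edges):
    """n vertices (0..n-1), edges as (weight, u, v); returns the MST edges."""
    parent = list(range(n))
    def find(x):                        # FIND-SET with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):       # non-decreasing order by weight
        ru, rv = find(u), find(v)
        if ru != rv:                    # no cycle: safe to add
            parent[ru] = rv             # UNION
            mst.append((w, u, v))
    return mst

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
tree = kruskal(4, edges)
```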
Greedy method: https://www.hackerearth.com/practice/algorithms/greedy/basics-of-greedy-algorithms/tutorial/
Kruskal's algorithm: https://www.hackerearth.com/practice/algorithms/graphs/minimum-spanning-tree/tutorial/
Prim's algorithm: https://www.geeksforgeeks.org/prims-minimum-spanning-tree-mst-greedy-algo-5/
Dijkstra's algorithm: https://www.geeksforgeeks.org/dijkstras-shortest-path-algorithm-greedy-algo-7/
https://www.freecodecamp.org/news/dijkstras-shortest-path-algorithm-visual-introduction/
Videos:
Kruskal complexity: https://youtu.be/3rrNH_AizMA
Dijkstra's: https://youtu.be/ba4YGd7S-TY
Prim's: https://youtu.be/eB61LXLZVqs
Step-01:
Randomly choose any vertex; it becomes the root of the Minimum Spanning Tree.
Step-02:
Find all the edges that connect the tree to new vertices.
Find the least-weight edge among those edges and include it in the existing tree.
If including that edge creates a cycle, reject it and look for the next least-weight edge.
Step-03:
Keep repeating step-02 until all the vertices are included and the Minimum Spanning Tree (MST) is obtained.
In this case, we choose node S as the root of Prim's spanning tree. This node is chosen arbitrarily, so any node can be the root. One may wonder why any node can be a root node.
The answer is: a spanning tree includes all the nodes of the graph, and because the graph is connected there must be at least one edge joining any node to the rest of the tree.
After choosing the root node S, we see that S-A and S-C are two candidate edges, with weights 7 and 8 respectively. We choose the edge S-A as it is cheaper than the other.
After this step, the tree S-7-A-3-C is formed. Now we again treat the tree as a single node and check all outgoing edges, choosing only the least-cost edge. In this case, C-3-D is the new edge, which is cheaper than the other edges of cost 8, 6, 4, etc.
After adding node D to the spanning tree, we now have two edges
going out of it having the same cost, i.e. D-2-T and D-2-B. Thus, we
can add either one. But the next step will again yield edge 2 as the
least cost. Hence, we are showing a spanning tree with both edges
included.
The biggest application of minimum cost spanning trees is connecting multiple nodes to a single
network with the smallest cost.
This can be seen with computers and a network, using wire to connect each computer, or
consider a single phone line that you want multiple phones to be connected to via physical wire.
You would want to minimize the costs to connect each computer or phone to the network.
Algorithm
1) Create a set mstSet that keeps track of vertices already included in MST.
2)Assign a key value to all vertices in the input graph. Initialize all key values as
INFINITE. Assign key value as 0 for the first vertex so that it is picked first.
3) While mstSet doesn’t include all vertices
….a) Pick a vertex u which is not there in mstSet and has minimum key value.
….b) Include u to mstSet.
….c) Update key value of all adjacent vertices of u.
To update the key values, iterate through all adjacent vertices.
For every adjacent vertex v, if weight of edge u-v is less than the previous key
value of v, update the key value as weight of u-v
The idea of using key values is to pick the minimum weight edge from cut.
The key values are used only for vertices which are not yet included in MST.
The key value for these vertices indicate the minimum weight edges connecting
them to the set of vertices included in MST.
Algorithm (with step costs)
1) Create a set mstSet that keeps track of vertices already included in
MST. // O(1)
2) Assign a key value to all vertices in the input graph. Initialize all key
values as INFINITE. Assign key value 0 to the first vertex so that it is
picked first. // O(V)
3) While mstSet doesn't include all vertices // O(V) iterations
….a) Pick a vertex u which is not in mstSet and has minimum key value. // O(V)
….b) Include u in mstSet. // O(1)
….c) Update the key values of all vertices adjacent to u: for every adjacent
vertex v, if the weight of edge u-v is less than the previous key value of v,
update the key value to the weight of u-v. // O(V)
Total: T(n) = O(1) + O(V) + O(V) · (O(V) + O(1) + O(V)) = O(V²)
If the input graph is represented using adjacency list, then the time
complexity of Prim’s algorithm can be reduced to O(E log V) with the help of
binary heap.
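As a sketch of that O(E log V) idea (an illustrative Python version, not from the slides; the adjacency-list format `{vertex: [(weight, neighbor), ...]}` is an assumption), the heap holds candidate edges and cycle-creating edges are rejected exactly as in Step-02:

```python
import heapq

def prim_mst(graph, start):
    """Prim's algorithm with a binary heap: O(E log V).
    graph: adjacency list {vertex: [(weight, neighbor), ...]}.
    Returns (total_weight, list of MST edges)."""
    visited = {start}
    edges = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(edges)
    mst, total = [], 0
    while edges and len(visited) < len(graph):
        w, u, v = heapq.heappop(edges)      # least-weight candidate edge
        if v in visited:                    # would create a cycle -> reject
            continue
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for nw, nv in graph[v]:             # edges to newly reachable vertices
            if nv not in visited:
                heapq.heappush(edges, (nw, v, nv))
    return total, mst
```

Run on a graph shaped like the walkthrough above (S-A 7, S-C 8, A-C 3, C-D 3, D-T 2, D-B 2) it builds the same tree.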
With Dijkstra's Algorithm, you can find the shortest path between nodes in a
graph.
Particularly, you can find the shortest path from a node (called the "source
node") to all other nodes in the graph, producing a shortest-path tree.
Like Prim’s MST, we generate a SPT (shortest path tree) with given source
as root. We maintain two sets, one set contains vertices included in shortest
path tree, other set includes vertices not yet included in shortest path tree.
Dijkstra's Algorithm can only work with graphs that have positive weights.
This is because, during the process, the weights of the edges have to be
added to find the shortest path.
If there is a negative weight in the graph, then the algorithm will not work
properly.
Once a node has been marked as "visited", the current path to that node is
marked as the shortest path to reach that node.
And negative weights can alter this if the total weight can be decremented
after this step has occurred.
To deal with graphs having negative-weight edges, we have the Bellman-Ford
algorithm, covered under the dynamic programming approach.
DIJKSTRA(G, w, s)
1 INITIALIZE-SINGLE-SOURCE(G, s)
2 S ← Ø
3 Q ← V[G]
4 while Q ≠ Ø do
5     u ← EXTRACT-MIN(Q)
6     S ← S ∪ {u}
7     for each vertex v adjacent to u do
8         RELAX(u, v, w)
Time complexity
T(n)=O(V2)
If the input graph is represented using adjacency list, it can be reduced to
O(E log V) with the help of binary heap.
INITIALIZE-SINGLE-SOURCE(G, s)
1 for each vertex v in V[G]
2     do d[v] ← ∞
3        π[v] ← NIL
4 d[s] ← 0
RELAX(u, v, w)
1 if d[v] > d[u] + w(u, v) then
2 d[v] ← d[u] + w(u, v)
3 π[v] ← u
Start with the unvisited vertex having minimum distance (d[i] = min and v[i] = F),
mark it visited, and update the distance of each adjacent node j if d[j] > d[i] + w[i, j].
Repeat the above steps until all connected nodes are marked visited (v[i] = T).
[Figure: example weighted graph used to trace Dijkstra's algorithm]
INITIALIZE-SINGLE-SOURCE(G, s) // O(V)
1 for each vertex v in V[G]
2     do d[v] ← ∞
3        π[v] ← NIL
4 d[s] ← 0

RELAX(u, v, w) // O(1)
1 if d[v] > d[u] + w(u, v) then
2     d[v] ← d[u] + w(u, v)
3     π[v] ← u

DIJKSTRA(G, w, s)
1 INITIALIZE-SINGLE-SOURCE(G, s) // O(V)
2 S ← Ø // O(1)
3 Q ← V[G] // O(V)
4 while Q ≠ Ø do // O(V) iterations
5     u ← EXTRACT-MIN(Q) // O(V)
6     S ← S ∪ {u} // O(1)
7     for each vertex v adjacent to u do // O(V)
8         RELAX(u, v, w) // O(1)

Time complexity
T(n) = O(V) + O(V) · (O(V) + O(1) + O(V) · O(1)) = O(V²)
If the input graph is represented using an adjacency list, this can be reduced to
O(E log V) with the help of a binary heap.
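The pseudocode above can be sketched with a binary heap standing in for EXTRACT-MIN (an illustrative Python version; the adjacency-list format `{u: [(v, w), ...]}` is an assumption, not from the slides):

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's algorithm with a binary heap: O(E log V).
    graph: {u: [(v, w), ...]} with non-negative weights.
    Returns (d, pi): shortest distances and predecessors."""
    d = {v: float('inf') for v in graph}    # INITIALIZE-SINGLE-SOURCE
    pi = {v: None for v in graph}
    d[source] = 0
    pq = [(0, source)]
    visited = set()
    while pq:
        du, u = heapq.heappop(pq)           # EXTRACT-MIN
        if u in visited:                    # skip stale heap entries
            continue
        visited.add(u)
        for v, w in graph[u]:               # RELAX each adjacent edge
            if d[v] > du + w:
                d[v] = du + w
                pi[v] = u
                heapq.heappush(pq, (d[v], v))
    return d, pi
```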
problem consists of n jobs each associated with a deadline and profit and
our objective is to earn maximum profit.
The objective is to earn maximum profit when only one job can be scheduled
or processed at any given time.
We will use a greedy approach: to earn more profit, we try to complete the job
with the highest profit first.
index 1 2 3 4 5
JOB j1 j2 j3 j4 j5
DEADLINE 2 1 4 4 1
PROFIT 60 100 20 40 20
index 1 2 3 4 5
JOB j2 j1 j4 j3 j5
DEADLINE 1 2 4 4 1
PROFIT 100 60 40 20 20
Looking at the jobs we can say the max deadline value is 4. So, dmax = 4.
As dmax = 4, we will have FOUR time slots to keep track of.
Set the time slot status to EMPTY.
time slot 1 2 3 4
status EMPTY EMPTY EMPTY EMPTY
index 1 2 3 4 5
JOB j2 j1 j4 j3 j5
DEADLINE 1 2 4 4 1
PROFIT 100 60 40 20 20
Select first j2, check slot 1 is empty or any other slot before slot 1, if yes add
to table of job completed
time slot 1 2 3 4
status j2 EMPTY EMPTY EMPTY
index 1 2 3 4 5
JOB j2 j1 j4 j3 j5
DEADLINE 1 2 4 4 1
PROFIT 100 60 40 20 20
Select j1, check slot 2 is empty or any other slot before slot 2, if yes add to
table of job completed
time slot 1 2 3 4
status j2 j1 EMPTY EMPTY
index 1 2 3 4 5
JOB j2 j1 j4 j3 j5
DEADLINE 1 2 4 4 1
PROFIT 100 60 40 20 20
Select j4, check slot 4 is empty or any other slot before slot 4, if yes add to
table of job completed, otherwise discard it
time slot 1 2 3 4
status j2 j1 EMPTY j4
index 1 2 3 4 5
JOB j2 j1 j4 j3 j5
DEADLINE 1 2 4 4 1
PROFIT 100 60 40 20 20
Select j3, check whether slot 4 or any earlier slot is empty; if yes, add it to the
table of completed jobs, otherwise discard it.
Slot 4 is occupied, but slot 3 is empty, so j3 goes into slot 3.
time slot 1 2 3 4
status j2 j1 j3 j4
index 1 2 3 4 5
JOB j2 j1 j4 j3 j5
DEADLINE 1 2 4 4 1
PROFIT 100 60 40 20 20
Select j5, check whether slot 1 is empty; if yes, add it to the table of completed
jobs, otherwise discard it.
Slot 1 is occupied, so j5 is discarded. All slots are now full, so processing stops.
time slot 1 2 3 4
status j2 j1 j3 j4
Our objective is to select jobs that will give us higher profit, without missing
the deadline
time slot 1 2 3 4
status j2 j1 j3 j4
[Table residue from a second example: tasks T1, T3, T6, T8 with deadlines and
profits; the selected jobs give a total profit of 147]
Sort as per the profit.
Complexity : O(n²)
How?
Sort the jobs in decreasing order of profit – O(n log n)
Then pick the jobs from highest profit to lowest, one by one – O(n)
For each job, find an empty slot; there can be at most n slots – O(n)
Thus the complexity is T(n) = O(n log n) + O(n) · O(n) = O(n²)
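The sort-then-place procedure above can be sketched as follows (an illustrative Python version; the `(name, deadline, profit)` tuple format is an assumption). On the example jobs it reproduces the schedule j2, j1, j3, j4:

```python
def job_sequencing(jobs):
    """Greedy job sequencing with deadlines: O(n^2) after an O(n log n) sort.
    jobs: list of (name, deadline, profit).
    Returns (slots, total_profit); slots[t] is the job done in slot t+1."""
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)   # by profit, descending
    dmax = max(d for _, d, _ in jobs)
    slots = [None] * dmax
    total = 0
    for name, deadline, profit in jobs:
        # place in the latest free slot on or before the deadline
        for t in range(min(deadline, dmax) - 1, -1, -1):
            if slots[t] is None:
                slots[t] = name
                total += profit
                break                       # job scheduled; otherwise discarded
    return slots, total
```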
Greedy method -
https://www.hackerearth.com/practice/algorithms/greedy/basics-of-greedy-
algorithms/tutorial/
Lecture 18: Introduction to Dynamic Programming Approach
Dynamic Programming Approach
Dynamic Programming is also used in optimization problems.
The core idea of Dynamic Programming is to avoid repeated work by remembering partial
results, and this concept finds its application in many real-life situations.
Dynamic Programming algorithm solves each sub-problem just once and then saves its
answer in a table, thereby avoiding the work of re-computing the answer every time.
Dynamic Programming is used when the subproblems are not independent, e.g. when they
share the same subproblems.
Dynamic Programming is a Bottom-up approach- we solve all possible small problems and
then combine to obtain solutions for bigger problems.
Dynamic Programming is a powerful technique that allows one to solve different types of
problems in time O(n2) or O(n3) for which a naive approach would take exponential time.
// Bottom-up Fibonacci: each value is computed once, so fib runs in O(n).
// fibresult[] is a global array of size at least n.
void fib (int n) {
    fibresult[0] = 1;
    fibresult[1] = 1;
    for (int i = 2; i < n; i++)
        fibresult[i] = fibresult[i-1] + fibresult[i-2];
}
Lecture 21: Introduction to Dynamic Programming Approach
Dynamic Programming Approach (cont..)
• In the recursive code, a lot of values are recalculated multiple times.
• We would do better to calculate each unique quantity only once.
• It is mainly used where the solution of one sub-problem is needed repeatedly. The
computed solutions are stored in a table, so that these don’t have to be re-computed.
• For example,
- Binary Search does not have overlapping sub-problem. Whereas Fibonacci numbers
have many overlapping sub-problems.
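The same store-and-reuse idea can be written top-down (memoization). This illustrative Python sketch is not from the slides; it mirrors the bottom-up fib above, with fib(0) = fib(1) = 1:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Top-down Fibonacci: each overlapping subproblem is solved once,
    then its answer is returned from the cache on every later call."""
    if n < 2:
        return 1            # base cases match the bottom-up table
    return fib(n - 1) + fib(n - 2)
```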
• A given problem has Optimal Substructure Property, if the optimal solution of the given
problem can be obtained using optimal solutions of its sub-problems.
• For example, the Shortest Path problem has the following optimal substructure property
If a node x lies in the shortest path from a source node u to destination node v, then the
shortest path from u to v is the combination of the shortest path from u to x, and the shortest
path from x to v.
• The standard All Pair Shortest Path algorithms like Floyd-Warshall and Bellman-Ford are
typical examples of Dynamic Programming.
Multistage Graphs
Multistage Graph
• A multistage graph is a directed graph having a number of multiple stages.
• The goal is to find the shortest distance from source to sink.
• The number on the directed edge denotes the weight/distance between the two
vertices.
• In multistage graph problem we have to find the shortest path from source to sink.
• The cost of a path from source (denoted by S) to sink (denoted by T) is the sum of the
costs of edges on the path.
• The multistage graph can be solved using forward and backward approach.
• In forward approach we will find the path from destination to source, in backward
approach we will find the path from source to destination.
• Step 1 : Starting with Source ‘S’, we have shortest edge as S-A with weight
1. Hence starting with this edge find out next shortest path to reach ‘T’.
• Step 1 : Starting with source 'S', find the distance to reach each vertex in the next stage.
d(S, A) = 1
d(S, B) = 2
d(S, C) = 5
• Step 2 : Now, find the distance from source 'S' to vertices D, E and F respectively.
d(S,D) = min{d(S,A)+d(A,D), d(S,B)+d(B,D)} = min{ 1+4, 2+9 } = 5
d(S,E) = min{d(S,A)+d(A,E), d(S,B)+d(B,E)} = min{ 1+11, 2+5 } = 7
d(S,F) = min{d(S,B)+d(B,F), d(S,C)+d(C,F)}
= min{ 2+16, 5+2 } = 7
Solution :
Shortest Distance = d(1, 7) = 14
Shortest Path = 1 → 2 → 6 → 7
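The stage-by-stage computation of d(S, D), d(S, E) and d(S, F) above can be sketched as a forward DP over the DAG (an illustrative Python version; the edges are taken from the formulas in Step 2, while the dictionary format is an assumption):

```python
def multistage_shortest(graph, order, source):
    """Forward DP on a multistage (DAG) graph.
    graph: {u: [(v, w), ...]}; order: vertices listed stage by stage.
    d[v] = min over incoming edges (u, v) of d[u] + w."""
    d = {v: float('inf') for v in order}
    d[source] = 0
    for u in order:                         # process vertices stage by stage
        for v, w in graph.get(u, []):
            d[v] = min(d[v], d[u] + w)      # relax edge u -> v
    return d
```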
• However, in the worst case, we get a complete graph, which has edges E =
n*(n-1)/2.
RELAX(u, v, w)
1. if d[v] > d[u] + w[u][v]
2.     then d[v] ← d[u] + w[u][v] // Edge Relaxation
3.          π[v] ← u
V S V1 V2 V3 V4
d[v] 0 ∞ ∞ ∞ ∞
Π[v] - / / / /
V S V1 V2 V3 V4
d[v] 0 6 7 ∞ ∞
Π[v] - S S / /
V S V1 V2 V3 V4
d[v] 0 2 7 4 -2
Π[v] - V3 S V2 V1
[Figure: graph for Example 2 with vertices S, V1–V5 and some negative edge weights]
Solution :
• Step 1 : Initialize all vertices at ∞ distance from the source vertex
and distance of source vertex =0.
V S V1 V2 V3 V4 V5
d[v] 0 ∞ ∞ ∞ ∞ ∞
Π[v] - / / / / /
Lecture 23: Single Source Shortest Path Bellman Ford Algorithm
Example 2 (cont..)
• Step 2 : Start with source vertex, update the distances to minimum value
and do relaxation procedure to find shortest path.
[Figure: graph snapshots while relaxing the edges out of S (d[V1] = 8, d[V2] = -2, d[V3] = 4)]
V S V1 V2 V3 V4 V5
d[v] 0 8 -2 4 ∞ ∞
Π[v] - S S S / /
[Figure: graph snapshots while relaxing the edges out of V2 (d[V3] = -2+3 = 1, d[V4] = -2+1 = -1)]
V S V1 V2 V3 V4 V5
d[v] 0 8 -2 1 -1 ∞
Π[v] - S S V2 V2 /
[Figure: graph snapshots for the final relaxations (d[V1] = -2+7 = 5, d[V5] = 4)]
V S V1 V2 V3 V4 V5
d[v] 0 5 -2 1 -1 4
Π[v] - V2 S V2 V2 V3
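The repeated edge-relaxation above can be sketched as Bellman-Ford (an illustrative Python version; the small test graph below is an assumption, not the slide's graph):

```python
def bellman_ford(vertices, edges, source):
    """Bellman-Ford: handles negative edge weights, O(V*E).
    edges: list of (u, v, w). Raises on a negative-weight cycle."""
    d = {v: float('inf') for v in vertices}
    pi = {v: None for v in vertices}
    d[source] = 0
    for _ in range(len(vertices) - 1):      # |V| - 1 rounds of relaxation
        for u, v, w in edges:
            if d[u] + w < d[v]:             # RELAX(u, v, w)
                d[v] = d[u] + w
                pi[v] = u
    for u, v, w in edges:                   # one extra pass detects negative cycles
        if d[u] + w < d[v]:
            raise ValueError("graph contains a negative-weight cycle")
    return d, pi
```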
[Figure: directed graph with 3 vertices; edges 1→2 (4), 1→3 (11), 2→1 (6), 2→3 (2), 3→1 (3)]

D[0] = 0 4 11        Π[0] = NIL 1 1
       6 0 2                2 NIL 2
       3 ∞ 0                3 NIL NIL
• Step 2 : Now, create a matrix D[1] using matrix D[0]. The elements in the first
column and the first row are left as they are. The remaining cells are filled in
the following way.
Let k be the intermediate vertex in the shortest path from source to
destination. D[i][j] is filled with (D[i][k] + D[k][j]) if (D[i][j] > D[i][k] + D[k][j]).
i.e. if the direct distance from the source to the destination is greater than the
path through the vertex k, then the cell is filled with D[i][k] + D[k][j].
In this step, k is vertex 1. We calculate the distance from source vertex to
destination vertex through this vertex k.
D[1] = 0 4 11        Π[1] = NIL 1 1
       6 0 2                2 NIL 2
       3 7 0                3 1 NIL
D[1][3] = 11, which can be updated through vertex 2 as
min(D[1][3], D[1][2]+D[2][3]) = min(11, 4+2) = 6
∴ D[1][3] = 6
D[3][1] = 3 and we cannot update that through vertex 2, as we don't have a path.
∴ D[3][1] = 3
The distance is updated through vertex 2, so update that in the path matrix also.

D[2] = 0 4 6         Π[2] = NIL 1 2
       6 0 2                2 NIL 2
       3 7 0                3 1 NIL
Lecture No. 24: All Pair Shortest Path
Example 1 : (cont..)
• Step 4 : Consider k=3 and calculate D[3] and Π[3]. The elements in the
3rd column and the 3rd row are left as they are.

D[2] = 0 4 6         Π[2] = NIL 1 2
       6 0 2                2 NIL 2
       3 7 0                3 1 NIL
D[1][2] = 4 and we cannot update that through vertex 3, as we don't
have any path.
∴ D[1][2] = 4
D[2][1] = 6, which can be updated through vertex 3 as
min(D[2][1], D[2][3]+D[3][1]) = min(6, 2+3) = 5
∴ D[2][1] = 5
The distance is updated through vertex 3, so update that in the path matrix also.

D[3] = 0 4 6         Π[3] = NIL 1 2
       5 0 2                3 NIL 2
       3 7 0                3 1 NIL
Example 1 : (cont..)
• Step 5 : In the given graph we have only 3 vertices, so we stop.
∴ D[3] gives the shortest path between each pair of vertices.

D[3] = 0 4 6         Π[3] = NIL 1 2
       5 0 2                3 NIL 2
       3 7 0                3 1 NIL
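The D[k] iteration above can be sketched as three nested loops (an illustrative Python version, distances only). Run on Example 1's D[0] it reproduces D[3]:

```python
INF = float('inf')

def floyd_warshall(d0):
    """Floyd-Warshall all-pairs shortest paths, O(V^3).
    d0: initial matrix with 0 on the diagonal and INF where no edge."""
    n = len(d0)
    d = [row[:] for row in d0]              # copy of D[0]
    for k in range(n):                      # after this pass, d equals D[k+1]
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```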
[Figure: directed graph with vertices V1–V5; edges V1→V2 (3), V1→V3 (8),
V1→V5 (-4), V2→V4 (1), V2→V5 (7), V3→V2 (4), V4→V1 (2), V4→V3 (-5), V5→V4 (6)]
Solution :
• Step 1 : Create the distance matrix D[0] and path matrix Π[0] of
dimension 5*5.

D[0] = 0 3 8 ∞ -4        Π[0] = NIL 1 1 NIL 1
       ∞ 0 ∞ 1 7                NIL NIL NIL 2 2
       ∞ 4 0 ∞ ∞                NIL 3 NIL NIL NIL
       2 ∞ -5 0 ∞               4 NIL 4 NIL NIL
       ∞ ∞ ∞ 6 0                NIL NIL NIL 5 NIL
• Step 2 : Create matrix D[1] using matrix D[0]. The elements in the first
column and the first row are left as they are. Now, k = 1.
(e.g. D[4][2] = min(∞, 2+3) = 5 and D[4][5] = min(∞, 2+(-4)) = -2)

D[1] = 0 3 8 ∞ -4        Π[1] = NIL 1 1 NIL 1
       ∞ 0 ∞ 1 7                NIL NIL NIL 2 2
       ∞ 4 0 ∞ ∞                NIL 3 NIL NIL NIL
       2 5 -5 0 -2              4 1 4 NIL 1
       ∞ ∞ ∞ 6 0                NIL NIL NIL 5 NIL
• Step 3 : Create matrix D[2] using matrix D[1]. The elements in the 2nd
column and the 2nd row are left as they are. Now, k = 2.
(e.g. D[1][4] = min(∞, 3+1) = 4 and D[3][4] = min(∞, 4+1) = 5)

D[2] = 0 3 8 4 -4        Π[2] = NIL 1 1 2 1
       ∞ 0 ∞ 1 7                NIL NIL NIL 2 2
       ∞ 4 0 5 11               NIL 3 NIL 2 2
       2 5 -5 0 -2              4 1 4 NIL 1
       ∞ ∞ ∞ 6 0                NIL NIL NIL 5 NIL
• Step 4 : Create matrix D[3] using matrix D[2]. The elements in the 3rd
column and the 3rd row are left as they are. Now, k = 3.
(e.g. D[4][2] = min(5, -5+4) = -1)

D[3] = 0 3 8 4 -4        Π[3] = NIL 1 1 2 1
       ∞ 0 ∞ 1 7                NIL NIL NIL 2 2
       ∞ 4 0 5 11               NIL 3 NIL 2 2
       2 -1 -5 0 -2             4 3 4 NIL 1
       ∞ ∞ ∞ 6 0                NIL NIL NIL 5 NIL
Example 2 : (cont..)
D[3] = 0 3 8 4 -4
       ∞ 0 ∞ 1 7
       ∞ 4 0 5 11
       2 -1 -5 0 -2
       ∞ ∞ ∞ 6 0
• Step 5 : Create matrix D[4] using matrix D[3]. The elements in the 4th
column and the 4th row are left as they are. Now, k = 4.
(e.g. D[1][3] = min(8, 4+(-5)) = -1 and D[5][1] = min(∞, 6+2) = 8)

D[4] = 0 3 -1 4 -4       Π[4] = NIL 1 4 2 1
       3 0 -4 1 -1              4 NIL 4 2 1
       7 4 0 5 3                4 3 NIL 2 1
       2 -1 -5 0 -2             4 3 4 NIL 1
       8 5 1 6 0                4 3 4 5 NIL
Example 2 : (cont..)
D[4] = 0 3 -1 4 -4
       3 0 -4 1 -1
       7 4 0 5 3
       2 -1 -5 0 -2
       8 5 1 6 0
• Step 6 : Create matrix D[5] using matrix D[4]. The elements in the 5th
column and the 5th row are left as they are. Now, k = 5.
(e.g. D[1][2] = min(3, -4+5) = 1, D[1][3] = min(-1, -4+1) = -3 and
D[1][4] = min(4, -4+6) = 2)

D[5] = 0 1 -3 2 -4       Π[5] = NIL 3 4 5 1
       3 0 -4 1 -1              4 NIL 4 2 1
       7 4 0 5 3                4 3 NIL 2 1
       2 -1 -5 0 -2             4 3 4 NIL 1
       8 5 1 6 0                4 3 4 5 NIL
Example 2 : (cont..)
• Step 7 : In the given graph we have only 5 vertices, so we stop.
∴ D[5] gives the shortest path between each pair of vertices.

D[5] = 0 1 -3 2 -4       Π[5] = NIL 3 4 5 1
       3 0 -4 1 -1              4 NIL 4 2 1
       7 4 0 5 3                4 3 NIL 2 1
       2 -1 -5 0 -2             4 3 4 NIL 1
       8 5 1 6 0                4 3 4 5 NIL
Solution :
D[4] = 0 4 8 15          Π[4] = NIL 4 1 2
       17 0 5 12                4 NIL 2 2
       12 16 0 7                4 1 NIL 3
       5 9 13 0                 4 1 1 NIL
Lecture 23
Find the shortest possible route such that the salesman visits each city exactly
once and returns to the origin city.
Solution :
Step 1 : Find the distance matrix for the given graph.
1 2 3 4
1 0 10 15 20
2 5 0 9 10
3 6 13 0 12
4 8 8 9 0
Lecture 37 : Travelling Salesman Problem
Example 1 : (cont..)
Step 2 : We will consider start vertex as 1. Let S = Φ then we have,
Cost(2,Φ,1) = d(2, 1) = 5
Cost(3,Φ,1) = d(3, 1) = 6
Cost(4,Φ,1) = d(4, 1) = 8
In TSP we have to complete the tour where we have started, i.e. the start
and end vertex should be the same. In this case the start and end vertex is 1.
The minimum-cost tour is 1 → 2 → 4 → 3 → 1, with cost 10 + 10 + 9 + 6 = 35.
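The Cost(i, S, 1) recurrence above can be sketched with bitmask memoization (Held-Karp; an illustrative Python version using 0-indexed cities, not the slides' notation). On the Example 1 matrix it returns 35:

```python
from functools import lru_cache

def tsp(dist):
    """Held-Karp dynamic programming for TSP: O(n^2 * 2^n).
    dist: n x n matrix; the tour starts and ends at city 0."""
    n = len(dist)

    @lru_cache(maxsize=None)
    def cost(i, remaining):
        # cheapest way to start at i, visit every city in the bitmask
        # 'remaining', and finally return to city 0
        if remaining == 0:
            return dist[i][0]               # Cost(i, Ø, 1) = d(i, 1)
        return min(dist[i][j] + cost(j, remaining & ~(1 << j))
                   for j in range(n) if remaining & (1 << j))

    return cost(0, (1 << n) - 2)            # all cities except 0 still to visit
```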
Applications: Planning, Scheduling
Case 2:
String A: "ABCDE", String B: "AEBDF"
LCS("ABCDE", "AEBDF") = Max(LCS("ABCDE", "AEBD"), LCS("ABCD",
"AEBDF"))
c[i, j] = 0                         if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1           if i, j > 0 and xi = yj
c[i, j] = max(c[i, j-1], c[i-1, j]) if i, j > 0 and xi ≠ yj
Lecture No 26: Longest common subsequence
Longest Common Subsequence Algorithm
//Computing the Length of LCS
Y 0 0 0 0 0 0 0
B 0 0 1 1 1 1 1 1
D 0
C 0
A 0
B 0
A 0
Y 0 0 0 0 0 0 0
B 0 0 1 1 1 1 1 1
D 0 0 1 1 1 2 2 2
C 0
A 0
B 0
A 0
Y 0 0 0 0 0 0 0
B 0 0 1 1 1 1 1 1
D 0 0 1 1 1 2 2 2
C 0 0 1 2 2 2 2 2
A 0
B 0
A 0
Y 0 0 0 0 0 0 0
B 0 0 1 1 1 1 1 1
D 0 0 1 1 1 2 2 2
C 0 0 1 2 2 2 2 2
A 0 1 1 2 2 2 3 3
B 0
A 0
Y 0 0 0 0 0 0 0
B 0 0 1 1 1 1 1 1
D 0 0 1 1 1 2 2 2
C 0 0 1 2 2 2 2 2
A 0 1 1 2 2 2 3 3
B 0 1 2 2 3 3 3 4
A 0
Y 0 0 0 0 0 0 0
B 0 0 1 1 1 1 1 1
D 0 0 1 1 1 2 2 2
C 0 0 1 2 2 2 2 2
A 0 1 1 2 2 2 3 3
B 0 1 2 2 3 3 3 4
A 0 1 2 2 3 3 4 4
Y 0 0 0 0 0 0 0
B 0 0 1 1 1 1 1 1
D 0 0 1 1 1 2 2 2
C 0 0 1 2 2 2 2 2
A 0 1 1 2 2 2 3 3
B 0 1 2 2 3 3 3 4
A 0 1 2 2 3 3 4 4
Y 0 0 0 0 0 0 0
B 0 0 1 1 1 1 1 1
D 0 0 1 1 1 2 2 2
C 0 0 1 2 2 2 2 2
A 0 1 1 2 2 2 3 3
B 0 1 2 2 3 3 3 4
A 0 1 2 2 3 3 4 4 ← trace back starting here
Longest Common Subsequence = BDAB
Example 2 :
Q. Find the longest subsequence for given string.
X[ ] = "AGGTAB"
Y[ ] = "GXTXAYB"
Solution : G X T X A Y B
0 0 0 0 0 0 0 0
A 0 0 0 0 0 1 1 1
G 0 1 1 1 1 1 1 1
G 0 1 1 1 1 1 1 1
T 0 1 1 2 2 2 2 2
A 0 1 1 2 2 3 3 3
B 0 1 1 2 2 3 3 4
LCS Length = 4
LCS = GTAB
Example 3 :
Q. Find the longest subsequence for string ABCDA and ACBDEA.
Solution :
LCS Length = 4
LCS = ACDA
Analysis of Longest Common Subsequence Algorithm
• The method of dynamic programming reduces the number of
function calls.
• It stores the result of each function call so that it can be used in
future calls without the need for redundant calls.
• In the dynamic algorithm, the results obtained from each
comparison between elements of X and the elements of Y are
stored in a table so that they can be used in future computations.
• So, the time taken by the dynamic programming approach is the time taken to fill
the table, i.e. O(mn), whereas the plain recursive algorithm has a worst-case
complexity of O(2^max(m, n)).
Thank You
Analysis of Algorithms
Unit 4: Backtracking and Branch and Bound
Lecture No: 25
Introduction to Backtracking
Backtracking
• Suppose you have to make a series of decisions, among various
choices, where
You don’t have enough information to know what to choose
Each decision leads to a new set of choices
Some sequence of choices (possibly more than one) may be a
solution to your problem
• Backtracking is a methodical way of trying out various sequences
of decisions, until you find one that “works”.
• Backtracking technique can be considered as an organized
exhaustive search that often avoids searching all possibilities.
Lecture No: 26
N Queen Problem
The N Queen Problem
• N-Queens Problem: Given a chess board having N×N cells, we need
to place N queens in such a way that no queen is attacked by any other
queen.
• A queen can attack horizontally, vertically and diagonally.
• Consider the problem of trying to place 8 queens on a chess board such that
no queen can attack another queen.
– What are the "choices"?
– How do we "make" or "un-make" a choice?
– How do we know when to stop?
[Figure: one arrangement of 8 mutually non-attacking queens]
• Still we don’t have any valid cells to place 4th Queen so backtrack.
• Remove 3rd Queen from chessboard and backtrack.
• Remove 2nd Queen from chessboard and backtrack.
• Now, update the position of 1st Queen. At the end we get solution as,
• Place (k, i) returns a Boolean value that is true if the kth queen can be
placed in column i.
• It tests both whether i is distinct from all previous values x1, x2, ..., xk-1 and
whether there is no other queen on the same diagonal.
• Using Place, we give a precise solution to the n-queens problem.
• x[] is a global array whose first k - 1 values have been set.
• Abs(r) returns the absolute value of r.
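Place and the backtracking search can be sketched as follows (an illustrative Python version; the 1-indexed x[] array follows the description above):

```python
def place(x, k, i):
    """True if queen k can go in column i, given x[1..k-1] (1-indexed)."""
    for j in range(1, k):
        # same column, or same diagonal (equal row/column offsets)
        if x[j] == i or abs(x[j] - i) == abs(j - k):
            return False
    return True

def n_queens(n):
    """Backtracking search; returns one solution x[1..n] or None."""
    x = [0] * (n + 1)

    def solve(k):
        for i in range(1, n + 1):
            if place(x, k, i):
                x[k] = i
                if k == n or solve(k + 1):
                    return True
                x[k] = 0                    # un-make the choice and backtrack
        return False

    return x[1:] if solve(1) else None
```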
Lecture No: 27
Sum of Subsets
Sum of Subset Problem
• In this problem, there is a given set with some integer elements, and a sum
value is also provided. We have to find a subset of the given set whose
sum is the same as the given sum value.
• For example:
We have a set of numbers, and a sum value.
Set S = {10, 7, 5, 18, 12, 20, 15}
Sum Value = 35
All possible subsets of the given set, where sum of each element for
every subsets is same as the given sum value.
{10, 7, 18} {10, 5, 20} {5, 18, 12} {20, 15}
• The running time is of order O(2n.n) since there are 2n subsets and to check
each subset we need to sum at most n elements.
• Here backtracking approach is used for trying to select a valid subset when
an item is not valid, we will backtrack to get the previous subset and add
another element to get the solution.
3. If the subset is having sum M, then stop with that subset as solution.
4. If the subset is not feasible or if we have reached the end of the set, then
backtrack through the subset until we find the most suitable value.
6. If we have visited all the elements without finding a suitable subset and if
no backtracking is possible then stop without solution.
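The include/exclude search with backtracking and pruning can be sketched as follows (an illustrative Python version; sorting the set first makes the overshoot test a valid prune):

```python
def sum_of_subsets(s, target):
    """Backtracking subset-sum: collect all subsets that sum to target."""
    s = sorted(s)
    result = []

    def backtrack(i, chosen, total):
        if total == target:                 # feasible subset found
            result.append(list(chosen))
            return
        # prune: no elements left, or every remaining element overshoots
        # (s is sorted, so s[i] is the smallest element still available)
        if i == len(s) or total + s[i] > target:
            return
        chosen.append(s[i])                 # branch: include s[i]
        backtrack(i + 1, chosen, total + s[i])
        chosen.pop()                        # backtrack: exclude s[i]
        backtrack(i + 1, chosen, total)

    backtrack(0, [], 0)
    return result
```

On the example set S = {10, 7, 5, 18, 12, 20, 15} with sum 35, it finds the four subsets listed earlier.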
[Figure: state-space tree for a set containing 10, 12 and 13, showing "with"/"without"
branches; nodes whose sums exceed the target are marked "Not a solution" and pruned]
Module 4:- Backtracking and Branch and Bound
Example 2: (cont..)
Possible sum of subsets are :
{ 2, 7, 8 } { 2, 15 }
The state space tree is shown as below in figure. {2, 7, 8, 15}.
[Figure: state-space tree for the set {2, 7, 8, 15} with "with"/"without" branches; the
subsets {2, 7, 8} and {2, 15} reach the target sum, and all other branches are pruned]
Lecture 29– Introduction to Branch and Bound Concept and 15 Puzzle Problem
Lecture No: 28
Graph Coloring
Graph Coloring Problem
• Graph coloring problem involves assigning colors to certain elements of a
graph subject to certain restrictions and constraints.
• This has found applications in numerous fields in computer science. For
example:
• Geographical maps: No two adjacent cities/states can be assigned the
same color in the map of a country or its states. In this case, only four
colors are sufficient to color any such map.
• Vertex coloring is the most commonly encountered graph coloring
problem.
• The problem states that given m colors, determine a way of coloring the
vertices of a graph such that no two adjacent vertices are assigned same
color.
• The smallest number of colors needed to color a graph G is called
its chromatic number.
• In this approach, we color a single vertex and then move to its adjacent
(connected) vertex to color it with different color.
• In case we find a vertex that has all adjacent vertices colored and no
color is left to give it a different color, we backtrack, change the color of
the last colored vertex, and proceed again.
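The colour-and-backtrack procedure can be sketched as follows (an illustrative Python version; the adjacency-matrix format and colours 1..m are assumptions, with 0 meaning "not yet coloured"):

```python
def graph_coloring(adj, m):
    """Backtracking m-coloring. adj: adjacency matrix (list of lists).
    Returns a list color[v] in 1..m, or None if no valid coloring exists."""
    n = len(adj)
    color = [0] * n                         # 0 means "not yet coloured"

    def is_safe(v, c):
        # c is safe for v if no adjacent vertex already has colour c
        return all(not adj[v][u] or color[u] != c for u in range(n))

    def solve(v):
        if v == n:
            return True                     # every vertex coloured
        for c in range(1, m + 1):
            if is_safe(v, c):
                color[v] = c
                if solve(v + 1):
                    return True
                color[v] = 0                # backtrack: un-colour v
        return False

    return color if solve(0) else None
```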
[Figure: backtracking trace for graph coloring; colour 0 means the vertex is not yet
coloured, and isSafe() is called with each candidate colour, e.g. c = 2]
Example : (cont..)
15 Puzzle Problem
• This problem was invented by Sam Loyd in 1878.
• There are 15 numbered tiles placed on a square frame with
capacity of 16 tiles.
• Objective of this problem is : Given an initial arrangement
and objective to transform any initial arrangement to a goal
arrangement through a series of legal moves.
[Figure: an initial arrangement of the 15 numbered tiles with one empty cell, and the
goal arrangement 1 2 3 4 / 5 6 7 8 / 9 10 11 12 / 13 14 15 _]
- Path length from root to node 10 is 2. ∴ f(10) = 2
- The shaded tiles indicate that they are not in their goal
positions. There is one such tile. ∴ ĝ(10) = 1
- ĉ(10) = f(10) + ĝ(10) = 2 + 1
∴ ĉ(10) = 3
Solution to 15 Puzzle Problem (cont..)
• Step 3: (cont..)
- Calculating cost for node 11 using formula ĉ(x) = f(x) + ĝ(x)
- Consider the tile positions for node 11, where the empty slot (ES)
is moved in the Down direction.
[Figure: board before and after moving ES down]
- Path length from root to node 11 is 2. ∴ f(11) = 2
- The shaded tiles indicate that they are not in their goal
positions. There are 3 such tiles. ∴ ĝ(11) = 3
- ĉ(11) = f(11) + ĝ(11) = 2 + 3
∴ ĉ(11) = 5
Solution to 15 Puzzle Problem (cont..)
• Step 3: (cont..)
- Calculating cost for node 12 using formula ĉ(x) = f(x) + ĝ(x)
- Consider the tile positions for node 12, where the empty slot (ES)
is moved in the Left direction.
[Figure: board before and after moving ES left]
- Path length from root to node 12 is 2. ∴ f(12) = 2
- The shaded tiles indicate that they are not in their goal
positions. There are 3 such tiles. ∴ ĝ(12) = 3
- ĉ(12) = f(12) + ĝ(12) = 2 + 3
∴ ĉ(12) = 5
Solution to 15 Puzzle Problem (cont..)
• Step 3: (cont..)
- From node 10, 11 and 12 node 10 is having minimum cost i.e.
ĉ(10) = 3. Hence node 10 becomes E node.
- Now, ES can be moved in only 2 directions i.e. Down and left
as most of the tiles are properly
placed.
- Therefore, expand node 10 to generate new nodes 22 and 23.
[Figure: boards for the newly generated nodes 22 and 23]
- Path length from root to node 22 is 3. ∴ f(22) = 3
- The shaded tiles indicate that they are not in their goal
positions. There are 2 such tiles. ∴ ĝ(22) = 2
- ĉ(22) = f(22) + ĝ(22) = 3 + 2
∴ ĉ(22) = 5
Solution to 15 Puzzle Problem (cont..)
• Step 4: (cont..)
- Calculating cost for node 23 using formula ĉ(x) = f(x) + ĝ(x)
- Consider the tile positions for node 23, where the empty slot (ES)
is moved in the Down direction.
[Figure: board before and after moving ES down; all tiles reach their goal positions]
- Path length from root to node 23 is 3. ∴ f(23) = 3
- All tiles are in their goal positions. ∴ ĝ(23) = 0
- ĉ(23) = f(23) + ĝ(23) = 3 + 0
∴ ĉ(23) = 3
Hence we achieve the goal state with minimum cost = 3.
State Space Tree of 15 Puzzle Problem
Initial
State
Goal
State
Thank You
Design and Analysis of Algorithms
Unit 5: String Matching Algorithms
Lecture 31
NAÏVE-STRING-MATCHER (T, P)
n ← length[T]
m ← length[P]
for s ← 0 to n - m
    if P[1..m] = T[(s+1)..(s+m)]
        print "pattern occurs with shift" s
T = Text    1 0 1 1 1 0 1 1 1 0
S=0
P = Pattern 1 1 1
1 0 1 1 1 0 1 1 1 0
S=1
1 1 1
1 0 1 1 1 0 1 1 1 0
S=2
1 1 1
S=3
1 1 1
1 0 1 1 1 0 1 1 1 0
S=4
1 1 1
1 0 1 1 1 0 1 1 1 0
S=5
1 1 1
1 0 1 1 1 0 1 1 1 0
S=6
1 1 1
S=7
1 1 1
Example
Suppose P = aab and T = acaabc. There are four passes:
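Those passes can be sketched directly (an illustrative Python version with 0-indexed shifts):

```python
def naive_string_matcher(t, p):
    """Naive matcher: try every shift s, O((n-m+1)*m) comparisons."""
    n, m = len(t), len(p)
    shifts = []
    for s in range(n - m + 1):              # n - m + 1 passes
        if t[s:s + m] == p:                 # compare P with T[s..s+m-1]
            shifts.append(s)
    return shifts
```

For P = aab and T = acaabc there are 6 - 3 + 1 = 4 passes and one match, at shift 2.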
Knuth-Morris-Pratt Algorithm
The Knuth-Morris-Pratt Algorithm
Compute-Prefix-Function (p)
1  m ← length[p]        // 'p' is the pattern to be matched
2  Π[1] ← 0
3  k ← 0
4  for q ← 2 to m
5      do while k > 0 and p[k+1] ≠ p[q]
6          do k ← Π[k]
7      if p[k+1] = p[q]
8          then k ← k + 1
9      Π[q] ← k
10 return Π
q 1 2 3 4 5 6 7
Step 2: q = 3, k = 0,
Π[3] = 1 p a b a b a c a
Π 0 0 1
q 1 2 3 4 5 6 7
Step 3: q = 4, k = 1
Π[4] = 2 p a b a b a c A
Π 0 0 1 2
q 1 2 3 4 5 6 7
Step 5: q = 6; the while loop drops k from 3 to 0, since no prefix of p ends in 'c'.
Π[6] = 0
p a b a b a c a
Π 0 0 1 2 3 0
q 1 2 3 4 5 6 7
Step 6: q = 7, k = 0; p[1] = p[7] = 'a', so k = 1.
Π[7] = 1
p a b a b a c a
Π 0 0 1 2 3 0 1
q 1 2 3 4 5 6 7
p a b a b a c a
Π 0 0 1 2 3 0 1
After iterating 6 times, the prefix function computation is complete.
Note: KMP finds every occurrence of a ‘p’ in ‘S’. That is why KMP does not terminate in step 12,
rather it searches remainder of ‘S’ for any more occurrences of ‘p’.
S
b a c b a b a b a b a c a c a
p a b a b a c a
Let us execute the KMP algorithm to find whether ‘p’
occurs in ‘S’.
For ‘p’ the prefix function, Π was computed previously and is as follows:
q 1 2 3 4 5 6 7
p a b a b a c a
Π 0 0 1 2 3 0 1
String Matching Algorithm
Initially: n = size of S = 15;
m = size of p = 7
Step 1: i = 1, q = 0
comparing p[1] with S[1]
S b a c b a b a b a b a c a a b
p a b a b a c a
P[1] does not match with S[1]. ‘p’ will be shifted one position to the right.
Step 2: i = 2, q = 0
comparing p[1] with S[2]
S b a c b a b a b a b a c a a b
p a b a b a c a
P[1] matches S[2]. Since there is a match, p is not shifted.
S b a c b a b a b a b a c a a b
p a b a b a c a
Backtracking on p, comparing p[1] and S[3]
Step 4: i = 4, q = 0
comparing p[1] with S[4] p[1] does not match with S[4]
S b a c b a b a b a b a c a a b
p a b a b a c a
Step 5: i = 5, q = 0
comparing p[1] with S[5] p[1] matches with S[5]
S b a c b a b a b a b a c a a b
p a b a b a c a
String Matching Algorithm
Step 6: i = 6, q = 1
Comparing p[2] with S[6] p[2] matches with S[6]
S b a c b a b a b a b a c a a b
p a b a b a c a
Step 7: i = 7, q = 2
Comparing p[3] with S[7] p[3] matches with S[7]
S b a c b a b a b a b a c a a b
p a b a b a c a
Step 8: i = 8, q = 3
Comparing p[4] with S[8] p[4] matches with S[8]
S b a c b a b a b a b a c a a b
p a b a b a c a
String Matching Algorithm
Step 9: i = 9, q = 4
Comparing p[5] with S[9] p[5] matches with S[9]
S b a c b a b a b a b a c a a b
p a b a b a c a
Step 10: i = 10, q = 5
Comparing p[6] with S[10] p[6] doesn’t match with S[10]
S b a c b a b a b a b a c a a b
p a b a b a c a
Backtracking on p, comparing p[4] with S[10] because after mismatch q = Π[5] = 3
S b a c b a b a b a b a c a a b
S b a c b a b a b a b a c a a b
p a b a b a c a
S b a c b a b a b a b a c a a b
p a b a b a c a
Pattern ‘p’ has been found to completely occur in string ‘S’. The shift at which
the match was found is i – m = 13 – 7 = 6.
In the above pseudocode for computing the prefix function, the for loop from step
4 to step 10 runs ‘m’ times, and steps 1 to 3 take constant time. Hence the
running time of Compute-Prefix-Function is Θ(m).
In the matcher, the for loop beginning in step 5 runs ‘n’ times, i.e., as long as the
length of the string ‘S’. Since steps 1 to 4 take constant time, the running time is
dominated by this for loop. Thus the running time of the matching function is Θ(n).
By Dr. Vanita Mane, 04/04/2025
Basic concepts of P, NP, NP-Hard and NP-Complete problems
Deterministic Algorithms
Non-deterministic Polynomial Time (NP Class)
• NP is the class of decision problems whose solutions can be verified in
polynomial time. The hardest of these, the NP-complete and NP-hard
problems, have no known polynomial-time algorithms, although a proposed
solution can still be checked in polynomial time.
• NP, for non-deterministic polynomial time, is one of the best-
known complexity classes in theoretical computer science.
• Known exact algorithms for such problems typically require Exponential
Time (O(2ⁿ), O(kⁿ)), Factorial Time (O(n!)), or Super-Exponential Time
(e.g., O(nⁿ)).
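As an illustration of polynomial-time verification, a sketch of a certificate checker for the Hamiltonian-cycle problem (the function and variable names are ours; `cert` is the candidate certificate, a vertex ordering):

```python
def verify_ham_cycle(n, edges, cert):
    """Check in polynomial time whether `cert` lists every vertex exactly once
    and consecutive vertices (cyclically) are joined by edges of the graph."""
    E = {frozenset(e) for e in edges}
    if sorted(cert) != list(range(n)):           # every vertex exactly once
        return False
    # every consecutive pair in the cycle must be an edge
    return all(frozenset((cert[i], cert[(i + 1) % n])) in E for i in range(n))

cycle_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # the 4-cycle C4
print(verify_ham_cycle(4, cycle_edges, [0, 1, 2, 3]))  # → True
print(verify_ham_cycle(4, cycle_edges, [0, 2, 1, 3]))  # → False
```

Finding such a cycle may take exponential time, but checking a given one takes only O(n) set lookups, which is exactly what membership in NP requires.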
Various problems in NP
Optimization Problems:
» An optimization problem is one which asks, “What is the optimal solution
to problem X?”
» Examples:
o 0-1 Knapsack
o Fractional Knapsack
o Minimum Spanning Tree
Decision Problems:
» A decision problem is one which asks, “Is there a solution to problem X
with property Y?”
» Examples:
o Does a graph G have a MST of weight ≤ W?
Various problems in NP (cont...)
• An optimization problem tries to find an optimal solution
• A decision problem tries to answer a yes/no question
• Many problems will have decision and optimization versions.
» Eg: Traveling salesman problem
• optimization: find a Hamiltonian cycle of minimum weight
• decision: is there a Hamiltonian cycle of weight ≤ k?
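The two versions can be contrasted in a brute-force sketch (exponential time, usable only for tiny inputs; the function names and the example distance matrix are ours):

```python
from itertools import permutations

def tsp_optimize(dist):
    """Optimization version: weight of the minimum-weight Hamiltonian cycle."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):   # fix vertex 0 as the start
        tour = (0,) + perm + (0,)
        best = min(best, sum(dist[tour[i]][tour[i + 1]] for i in range(n)))
    return best

def tsp_decide(dist, k):
    """Decision version: is there a Hamiltonian cycle of weight <= k?"""
    return tsp_optimize(dist) <= k

# A small symmetric 4-city instance (illustrative data).
dist = [[0, 1, 4, 2],
        [1, 0, 3, 5],
        [4, 3, 0, 6],
        [2, 5, 6, 0]]
print(tsp_optimize(dist))      # → 12
print(tsp_decide(dist, 11))    # → False
```

Note that an optimization solver immediately answers every decision question, which is why hardness results are usually stated for the decision version.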
Tractability
• NP-Complete problems are problems that live in both the NP and NP-Hard
classes. This means that NP-Complete problems can be verified in
polynomial time and that any NP problem can be reduced to this problem in
polynomial time.
• The group of problems which are both in NP and NP-hard is known as
the class of NP-Complete problems.
• Now suppose we have an NP-Complete problem R that is reducible to Q.
Then Q is at least as hard as R, and since R is NP-hard, Q will also be
at least NP-hard; it may be NP-complete as well.
Example
Reduction
A problem P can be reduced to another problem Q if any instance of P can be
rephrased to an instance of Q, the solution to which provides a solution to the
instance of P
» This rephrasing is called a transformation
Intuitively: If P reduces in polynomial time to Q, P is “no harder to solve” than Q
Lecture No: 39 NP-Completeness
Proving the Travelling Salesman Problem is NP-Complete
1. TSP is in NP: given a proposed tour and a bound k, we can verify in
polynomial time that the tour visits every city exactly once and has total
cost at most k.
2. TSP is NP-hard
To prove this, one way is to show that Hamiltonian cycle ≤p TSP (as we know that the
Hamiltonian cycle problem is NP-complete).
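The reduction just mentioned can be sketched as follows (our own helper names; an edge of G gets weight 1 and a non-edge gets weight 2, so G has a Hamiltonian cycle iff the TSP instance has a tour of weight ≤ n). The brute-force decider is only there to demonstrate the equivalence on tiny graphs:

```python
from itertools import permutations

def ham_to_tsp(n, edges):
    """Transform a Hamiltonian-cycle instance into a TSP decision instance."""
    E = {frozenset(e) for e in edges}
    dist = [[1 if frozenset((u, v)) in E else 2 for v in range(n)]
            for u in range(n)]
    return dist, n                         # TSP instance and the bound k = n

def tsp_decide(dist, k):
    """Brute-force TSP decision: any Hamiltonian cycle of weight <= k?"""
    n = len(dist)
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        if sum(dist[tour[i]][tour[i + 1]] for i in range(n)) <= k:
            return True
    return False

# C4 has a Hamiltonian cycle; the 4-vertex path does not.
print(tsp_decide(*ham_to_tsp(4, [(0, 1), (1, 2), (2, 3), (3, 0)])))  # → True
print(tsp_decide(*ham_to_tsp(4, [(0, 1), (1, 2), (2, 3)])))          # → False
```

The transformation itself runs in polynomial time (O(n²)), which is what the ≤p notation requires; only the demonstration decider is exponential.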
https://www.geeksforgeeks.org/proof-that-vertex-cover-is-np-complete/
Thank You