
UNIT II

Divide and Conquer: General method, Binary search, Merge sort, Quick sort, Strassen’s
matrix multiplication.

INTRODUCTION TO DIVIDE AND CONQUER:


A divide and conquer algorithm works by recursively breaking a problem down into two or
more subproblems of the same or a related type, until these subproblems become simple enough to be solved
directly. The solutions to the subproblems are then combined to give a solution to the original
problem.
Control Abstraction:
Algorithm DAndC(P)
{
    if Small(P) then
        return S(P);
    else
    {
        Divide P into smaller instances P1, P2, ...., Pk, k >= 1;
        Apply DAndC to each of these subproblems;
        return Combine(DAndC(P1), DAndC(P2), ...., DAndC(Pk));
    }
}

The computing time is:
T(n) = g(n)                                       if n is small
     = T(n1) + T(n2) + .... + T(nk) + f(n)        otherwise
Where,
T(n) - time for DAndC on any input of size n
g(n) - time to compute the answer directly for small inputs
f(n) - time for dividing P and combining the solutions of the subproblems
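
As a concrete illustration of this control abstraction, here is a minimal Python sketch that instantiates it for one of the applications listed below, finding the maximum of a list (the function name d_and_c_max is ours, not from the text):

def d_and_c_max(a, low, high):
    # Control abstraction instantiated for "find the maximum":
    # Small(P) = one element; divide = split in half; combine = take the larger.
    if low == high:                        # Small(P): solve directly, S(P)
        return a[low]
    mid = (low + high) // 2                # divide P into two smaller instances
    left = d_and_c_max(a, low, mid)        # DAndC(P1)
    right = d_and_c_max(a, mid + 1, high)  # DAndC(P2)
    return max(left, right)                # combine the two sub-solutions

print(d_and_c_max([3, 9, 2, 7], 0, 3))     # prints 9

Here g(n) is the constant-time base case and f(n) is the constant-time comparison of two values, so T(n) = 2T(n/2) + O(1).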

Applications of Divide and Conquer:


1. Defective Chess Board
2. Binary Search
3. Finding Maximum and Minimum
4. Quick Sort
5. Merge Sort
6. Strassen’s Matrix Multiplication, etc.

BINARY SEARCH:
Let ai, 1 <= i <= n, be a list of elements sorted in nondecreasing order. Consider the
problem of determining whether a given element 'x' is present in the list or not. If 'x' is present,
we must determine an index j such that aj = x; otherwise j = 0.
Problem P = (n, ai, ...., al, x);
Where,
n - number of elements in the list
ai, ...., al - the list of elements
x - the element to be searched for
Divide and conquer technique is used to solve this problem.
(1) If Small(P) i.e. n=1, then it will take one unit of time to determine whether the element
‘x’ is present or not in the list.
(2) If P has more than one element, then it can be divided into two sub problems. Pick an
index ‘q’ in the range [i, l] and compare ‘x’ with aq. There are three possibilities,
a. If a[q] = x, then problem is solved, return q.
b. If x < a[q], then the element can be searched in only the sub list ai, ai+1, ....aq-1.
c. If x > a[q], then the element can be searched in only the sub list aq+1, aq+2, ..al.
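
A minimal Python sketch of this search (an iterative version; indices are 1-based to match the text, and 0 is returned when x is absent):

def binary_search(a, x):
    # Binary search on a list sorted in nondecreasing order.
    # Returns a 1-based index j with a[j] == x, or 0 if x is not present.
    low, high = 1, len(a)
    while low <= high:
        mid = (low + high) // 2
        if a[mid - 1] == x:        # a is 0-indexed in Python, hence mid - 1
            return mid
        elif x < a[mid - 1]:
            high = mid - 1         # search the left sublist
        else:
            low = mid + 1          # search the right sublist
    return 0

a = [-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151]
print(binary_search(a, 151))       # 14
print(binary_search(a, 12))        # 0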
Ex:
-15 -6 0 7 9 23 54 82 101 112 125 131 142 151 n = 14
(i) x = 151
Low   High   Mid   a[mid]   x ? a[mid]
 1     14     7      54     151 > 54
 8     14    11     125     151 > 125
12     14    13     142     151 > 142
14     14    14     151     151 = 151 (found)
(ii) x = 12
Low   High   Mid   a[mid]   x ? a[mid]
 1     14     7      54      12 < 54
 1      6     3       0      12 > 0
 4      6     5       9      12 > 9
 6      6     6      23      12 < 23
Now low (6) > high (5), so the search terminates: not found.

Analysis:
The time complexity of binary search depends on the number of elements in the array.

Successful Search:
Best Case - O(1)
Average Case - O(log n)
Worst Case - O(log n)

Unsuccessful Search: O(log n)


Proof: Each comparison either finds x or reduces the problem to one of size n/2, so the
recurrence relation is:
T(n) = T(n/2) + 1
     = [T(n/4) + 1] + 1
     = [T(n/8) + 1] + 2
     = T(n/2^3) + 3
     .....
     = T(n/2^k) + k
     = T(1) + log2 n          (let 2^k = n, so k = log2 n)
Therefore T(n) = O(log n).

Advantages:
- The element can be found efficiently, in O(log n) comparisons.
Disadvantages:
- The list must already be sorted.

Binary Decision Tree:


A Binary Decision Diagram (BDD) is a data structure used to represent a Boolean function. A
binary decision tree (BDT) is a way of describing the sequence of values of mid produced by the binary
search algorithm. In the BDT, the value in each node is the value of mid.
Ex: n= 14 List = 1,2,3,4,5,6,7,8,9,10,11,12,13,14.

1. The first comparison is x with a[7].


2. If x < a[7] then the next comparison is with a[3].
3. If x > a[7] then the next comparison is with a[11].
4. Each path through the tree represents a sequence of comparisons in the binary search
method.
5. If x is present, the algorithm will end at one of the circular nodes, which gives the index into
the array where x was found.
6. If x is not found, the algorithm will terminate at one of the square nodes.

Circular nodes are called internal nodes and square nodes are referred to as external
nodes.
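
A small Python sketch (the helper name mid_tree is ours) that prints, with indentation showing the levels, the mid values examined for n = 14 — exactly the tree described above, with 7 at the root and 3 and 11 as its children:

def mid_tree(low, high, depth=0):
    # Print the value of mid for every sublist binary search can visit.
    if low > high:
        return
    mid = (low + high) // 2
    print("    " * depth + str(mid))
    mid_tree(low, mid - 1, depth + 1)    # left subtree: x < a[mid]
    mid_tree(mid + 1, high, depth + 1)   # right subtree: x > a[mid]

mid_tree(1, 14)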

Exercise:
1. Consider the list of elements as -40 11 33 37 42 55 99 100 and search the element 99
and also draw the binary decision tree.
2. 10 20 30 40 50 60 70 and x = 60 and also draw the binary decision tree.
3. Draw the binary decision tree for the following set.
3 6 9 12 15 18 21 24 27 30 33 36 39 42 45 47

Consider the list of elements as -40 11 33 37 42 55 99 100 and search the element 99 and also
draw the binary decision tree.
Low   High   Mid   a[mid]   x ? a[mid]
 1      8     4      37      99 > 37
 5      8     6      55      99 > 55
 7      8     7      99      99 = 99
Found

Element found at 7th position.


BDT:

10 20 30 40 50 60 70 and x = 60 and also draw the binary decision tree.

Low   High   Mid   a[mid]   x ? a[mid]
 1      7     4      40      60 > 40
 5      7     6      60      60 = 60
Found
Element found at 6th position.

BDT:

Draw the binary decision tree for the following set.


3 6 9 12 15 18 21 24 27 30 33 36 39 42 45 47

MERGE SORT:
Merge sort is a sorting algorithm that uses the divide and conquer strategy. In this method,
the division is based on the positions of the elements: the array is always split at the middle. We need an extra temporary array of the same size as the
input array for merging.
1. Merge sort is a divide-and-conquer algorithm that aims to sort an array by dividing it
into smaller subarrays, sorting them individually, and then merging them back together.
2. The basic idea of merge sort is to repeatedly divide the array in half until each subarray
contains only one element, as a single element is considered sorted.
3. After dividing the array into single-element subarrays, the merge operation is
performed. This operation involves merging two sorted subarrays into a single sorted
array.
4. During the merge operation, two pointers are used to compare elements from the two
subarrays and place them in the correct order in the merged array.
5. The merge operation continues until all the subarrays have been merged back into a
single sorted array.
6. Merge sort has a time complexity of O(n log n) in all cases (worst, average, and best),
where n is the number of elements in the array. This makes it one of the most efficient
sorting algorithms.
7. Merge sort is a stable sorting algorithm, meaning it preserves the relative order of
elements with equal values. This can be important in certain applications.
8. It is worth noting that merge sort requires additional space proportional to the size of
the input array. This additional space is used for the temporary storage of subarrays
during the merging process.
9. Merge sort is widely used in practice due to its efficiency and stability, especially for
sorting large datasets or linked lists.
Steps:
1. Divide: Partition the array into two subarrays s1 and s2 with n/2 elements each.
2. Conquer: Recursively sort s1 and s2.
3. Combine: Merge s1 and s2 into a single sorted array.

Algorithm:
1. If the array has fewer than two elements, it is already considered sorted. Return the
array.
2. Divide the array into two halves. This can be done by finding the middle index of the
array.
3. Recursively call merge sort on the first half of the array.
4. Recursively call merge sort on the second half of the array.
5. Merge the two sorted subarrays created from the previous steps.
- Create an empty temporary array to store the merged subarrays.
- Initialize two pointers, one for each subarray, pointing to the first element.
- Compare the elements at the pointers of the two subarrays and place the smaller one
(for ascending order) into the temporary array.
- Move the pointer of the subarray from which the element was taken.
- Repeat the previous two steps until one of the subarrays is fully traversed.
- Copy the remaining elements from the non-empty subarray into the temporary
array.
- Copy the elements from the temporary array back into the original array,
replacing the corresponding elements.
6. Return the sorted array.
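
A minimal Python sketch of these steps (ascending order; it returns a new sorted list rather than sorting in place):

def merge_sort(a):
    if len(a) < 2:                        # step 1: already sorted
        return a
    mid = len(a) // 2                     # step 2: divide at the middle
    left = merge_sort(a[:mid])            # step 3: sort the first half
    right = merge_sort(a[mid:])           # step 4: sort the second half
    merged, i, j = [], 0, 0               # step 5: merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])               # copy any remaining elements
    merged.extend(right[j:])
    return merged                         # step 6

print(merge_sort([54, 26, 93, 17, 77, 31, 44, 55, 20]))
# [17, 20, 26, 31, 44, 54, 55, 77, 93]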
Ex: 54 26 93 17 77 31 44 55 20
(i) Splitting the list: Merge sort is a recursive algorithm that continually splits a list in
half. If the list is empty or has one element, it is sorted. If the list contains more
than one item, we split the list and recursively invoke a merge sort on both halves.

(ii) Merging the list:

Sort the elements using merge sort. (2 4 1 6 8 5 3 7)


1. Split the array using recursion until each list contains a single element.

2. Merge the lists, sorting the elements as they are combined.

Draw the tree of recursive calls made by the merge sort.

Exercise:
Draw the tree of calls of merge sort for the following set of elements.
20 30 10 45 5 60 90 45 35 25 15 55

Draw the tree of calls of merge sort for the following set of elements.
35 25 15 10 45 75 85 65 55 5 20 18

Performance Analysis:
Time Complexity:
In the merge sort algorithm two recursive calls are made, each on n/2 elements of the list. After the
two recursive calls, one merge step combines the two sorted sublists. Hence the
recurrence relation is:
T(n) = T(n/2) + T(n/2) + cn        for n > 1
T(n) = 2T(n/2) + cn
     = 4T(n/4) + 2cn
     = 8T(n/8) + 3cn
     .....
     = 2^k T(n/2^k) + k.cn         (n/2^k = 1 => n = 2^k => k = log2 n)
     = n.T(1) + cn.log2 n
     = O(n log n)

Space Complexity:
Merge sort is not an in-place algorithm: it needs a temporary array of size n for merging, in
addition to the recursion stack of depth O(log n). So the space complexity is O(n).

QUICK SORT:
Quick Sort is one of the sorting algorithms that uses the divide and conquer strategy. In this
method division is dynamically carried out.
Steps:
1. Divide:
Split the array into two subarrays such that each element in the left subarray is less than or
equal to the middle element and each element in the right subarray is greater than the
middle element. This middle element is called the pivot element.
2. Conquer:
Recursively sort two sub arrays.
3. Combine:
Combine all the sorted elements in a group to form a list of sorted elements.

In merge sort the division of array is based on the positions of array elements, but in
quick sort the division is based on the actual value of the element.

- Quicksort is a divide-and-conquer algorithm that aims to sort an array by partitioning
it into smaller subarrays, sorting them individually, and then combining them.
- The basic idea of quicksort is to select a pivot element from the array and partition the
other elements into two subarrays, according to whether they are less than or greater
than the pivot.
- After partitioning, the pivot is in its final sorted position, and the elements to the left of
the pivot are smaller, while the elements to the right are greater.
- The process is then repeated recursively on the subarrays until the entire array is sorted.
- The choice of the pivot can significantly affect the performance of the algorithm. A
good pivot choice leads to more balanced partitions, improving the efficiency of the
sorting process.
- Common strategies for choosing the pivot include selecting the first element, the last
element, the middle element, or a randomly chosen element.
- Quicksort has an average and best-case time complexity of O(n log n), where n is the
number of elements in the array. However, in the worst case, where the pivot
is consistently poorly chosen, the time complexity degrades to O(n^2).
- Quicksort is an in-place sorting algorithm, meaning it does not require additional space
proportional to the input size. It rearranges the elements within the original array.
- Quicksort is widely used in practice due to its efficiency and low space requirements.
- It is worth noting that quicksort is not a stable sorting algorithm, meaning it may change
the relative order of elements with equal values.

Algorithm:
1. If the array has fewer than two elements, it is already considered sorted. Return the
array.
2. Select a pivot element from the array. The choice of the pivot can vary (e.g., first
element, last element, middle element, or random element).
3. Partition the array by rearranging the elements around the pivot. Move all elements
smaller than the pivot to its left and all elements greater than the pivot to its right.
4. Recursively apply quicksort on the subarrays to the left and right of the pivot.
5. Combine the sorted subarrays along with the pivot element to obtain the final sorted
array.
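
A minimal Python sketch of this scheme, using the first element as pivot and the two-pointer partition traced in the example below (the helper names partition and quick_sort are ours):

def partition(a, low, high):
    # Pivot is the first element; i scans right past elements <= pivot,
    # j scans left past elements > pivot; swap until the pointers cross.
    pivot = a[low]
    i, j = low + 1, high
    while True:
        while i <= high and a[i] <= pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i < j:
            a[i], a[j] = a[j], a[i]
        else:
            break
    a[low], a[j] = a[j], a[low]       # place the pivot in its final position
    return j

def quick_sort(a, low, high):
    if low < high:                    # more than one element
        p = partition(a, low, high)
        quick_sort(a, low, p - 1)     # sort the left subarray
        quick_sort(a, p + 1, high)    # sort the right subarray

a = [50, 30, 10, 90, 80, 20, 40, 70]
quick_sort(a, 0, len(a) - 1)
print(a)                              # [10, 20, 30, 40, 50, 70, 80, 90]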
Ex:
Pivot
50-i, 30 10 90 80 20 40 70-j
We will now split the array into two parts. The left sublist will contain the elements
less than the pivot, and the right sublist will contain the elements greater than the pivot.
50 30-i 10 90 80 20 40 70-j
We will increment i. While A[i] <= pivot we continue to increment it, until the
element pointed to by i is greater than the pivot.
50 30 10-i 90 80 20 40 70-j
Increment i as A[i] <= pivot.
50 30 10 90-i 80 20 40 70-j
As A[i] > pivot we will stop incrementing i.
50 30 10 90-i 80 20 40 70-j

As A[i] > pivot, we will now decrement j. We continue to decrement j until the element
pointed to by j is less than the pivot.
50 30 10 90-i 80 20 40-j 70
We stop decrementing j because A[j] < pivot. Hence we swap A[i] and A[j].
50 30 10 40-i 80 20 90-j 70
As A[i] < pivot and A[j] > pivot, we continue incrementing i and
decrementing j until these conditions fail.
50 30 10 40 80-i 20 90-j 70
We will stop incrementing i because A[i]>pivot.
50 30 10 40 80-i 20-j 90 70
We will stop decrementing j because A[j] < pivot. Hence we will swap A[i] and A[j].
50 30 10 40 20-i 80-j 90 70
As A[i] < pivot and A[j] > pivot, we continue incrementing i and
decrementing j until these conditions fail.
50 30 10 40 20-j 80-i 90 70
As A[j] < pivot and j has crossed i (i.e. j < i), we swap the pivot and A[j].
20 30 10 40 50 -P 80 90 70
Now 20, 30, 10, 40 are less than the pivot and 80,90, 70 are greater than the pivot. Now
sort the left sublist and right sub list recursively.
20-P,i 30 10 40-j 50 80 90 70
20 30-i 10 40-j
20 30-i 10-j 40
20 10-i 30-j 40
20 10 30-i,j 40
20 10-j 30-i 40
10 20-P 30 40
80-P,i 90 70-j
80 70-i 90-j
80 70 90-i,j
80 70-j 90-i
70 80-P 90
Now combine the two sorted sublists to obtain the sorted list.
10 20 30 40 50 70 80 90

Time Complexity:
The running time of quick sort depends on whether partition is balanced or not. A very good
partition splits an array into two equal sized arrays. A bad partition splits an array into two
arrays of very different sizes. If the partition is balanced the quick sort runs faster otherwise it
runs very slowly.
Best Case:
The best case of quick sort occurs when each partitioning stage divides the array exactly in half.

Thus we have k = n/2 and n − k = n/2 for the original array of size n. Consider, therefore, the
recurrence:
T(n) = T(n/2) + T(n/2) + c.n       if n > 1
     = 2T(n/2) + c.n
     = 2[2T(n/4) + c.n/2] + c.n
     = 4T(n/4) + 2c.n
     = 8T(n/8) + 3c.n
     .....
     = 2^k T(n/2^k) + k.c.n
     = n.T(1) + c.n.log2 n          (let n/2^k = 1 => k = log2 n)
i.e. T(n) = O(n log n)
Worst Case:
The worst case occurs when the partition is completely unbalanced: one subarray has n − 1
elements and the other is empty. The recurrence is then
T(n) = T(n-1) + c.n
     = T(n-2) + 2c.n - c
     = T(n-3) + 3c.n - 3c
     = T(n-4) + 4c.n - 6c
     .....
     = T(n-k) + k.c.n - (1 + 2 + 3 + ..... + (k-1)).c
     = T(n-k) + k.c.n - ([k(k-1)]/2).c
     = T(1) + (n-1).c.n - ([(n-1)(n-2)]/2).c        (n - k = 1 => k = n - 1)
     = O(n^2)

Worst case of the quick sort is eliminated by using randomized quick sort.
The worst case for quick sort depends upon the selection of pivot element. The worst case
occurs if we always select the first or last elements as the pivot.
Average Case: O(n log n)
The average case occurs when the pivot splits the array into parts that are neither perfectly
balanced nor completely unbalanced; averaged over all input orderings, quick sort takes O(n log n) time.
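
The worst case can be made very unlikely by choosing the pivot at random. A sketch of this, reusing the partition function from the earlier sketch (this snippet assumes that function is in scope):

import random

def randomized_partition(a, low, high):
    # Swap a randomly chosen element into the first position,
    # then partition around it exactly as before.
    r = random.randint(low, high)
    a[low], a[r] = a[r], a[low]
    return partition(a, low, high)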

Apply quick sort to sort the list E X A M P L E in alphabetical order. Draw
the tree of recursive calls made.
0 1 2 3 4 5 6
E X-i A M P L E-j
E E A-j M-i P L X
A E E M P L X
A E – i,j
A–j E-i
A E
E
M P–i L X–j
M P–i L-j X
M L –i P-j X
M L –j P-i X
L M P X
L
P X – i, j
P X
X

Exercise:
Show how quick sort sorts the following sequences of keys in ascending order.
(i) 22 55 33 11 99 77 55 66 54 21 32
(ii) 8 2 5 13 4 19 12 6 3 11 10 7 9
(iii) 65 70 75 80 85 60 55 50 45
(iv) 44 75 23 43 55 12 64 77 33

STRASSEN’S MATRIX MULTIPLICATION:
Strassen's algorithm for matrix multiplication is an efficient algorithm for multiplying two
matrices together. It was devised by Volker Strassen in 1969 and significantly reduces the
number of arithmetic operations needed compared to the standard matrix multiplication
algorithm, especially for large matrices.
The standard matrix multiplication algorithm has a time complexity of O(n^3), where n is the
size of the matrices. Strassen's algorithm reduces this to approximately O(n^(log2 7)) ≈ O(n^2.81), making it
faster for large matrices.
The algorithm works by recursively breaking down the matrices into smaller submatrices and
performing matrix multiplication using a set of seven multiplications instead of the usual eight.
Although Strassen's algorithm is faster for large matrices, it has higher overhead due to the
recursive decomposition, and it becomes less efficient for small matrices due to the constant
factors involved.

Strassen’s matrix multiplication follows the divide and conquer strategy and works only on square
matrices of order n x n, where n is a power of 2.
- Divide: In this step each large matrix is divided into smaller square submatrices.
- Conquer: Solve each of the smaller matrix products.
- Merge: Merge all the results of the conquer step to get the final result.
In the traditional method the time complexity of matrix multiplication is O(n^3). This is
reduced by using Strassen’s matrix multiplication method.
Algorithm:
1. Divide the given matrices into submatrices of size n/2 x n/2, until n = 2.
2. Using additions and subtractions, compute the required combinations of the submatrices of size n/2 x n/2.
3. Recursively compute the seven matrix products.
4. Compute the resultant matrix by using matrix products.

In the traditional method C is calculated as:
C11 = A11 B11 + A12 B21
C12 = A11 B12 + A12 B22
C21 = A21 B11 + A22 B21
C22 = A21 B12 + A22 B22
If n = 2, the traditional method requires 8 multiplications and 4 additions. In this case the time
complexity is O(n^3), and matrix multiplications are more expensive than matrix additions. In
Strassen’s matrix multiplication there are only 7 multiplications and 18 additions or subtractions.
The following are the operations required for Strassen’s matrix multiplication.
P = (A11 + A22) (B11 + B22)
Q = (A21 + A22) B11
R = A11 (B12 – B22)
S = A22 (B21 – B11)
T = (A11 + A12) B22
U = (A21 – A11) (B11 + B12)
V = (A12 – A22) (B21 + B22)

C11 = P+S-T+V
C12 = R+T
C21 = Q+S
C22 = P+R-Q+U
The resulting recurrence relation for T(n) is
T(n) = b                      if n <= 2
     = 7T(n/2) + a.n^2        if n > 2
where a and b are constants.
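As a rough sketch of how this recurrence is solved (taking n to be a power of 2):
T(n) = 7T(n/2) + a.n^2
     = 7[7T(n/4) + a.(n/2)^2] + a.n^2
     = 7^2 T(n/2^2) + a.n^2 (1 + 7/4)
     .....
     = 7^k T(1) + a.n^2 [1 + 7/4 + (7/4)^2 + ..... + (7/4)^(k-1)]      (k = log2 n)
     = O(7^(log2 n)) = O(n^(log2 7)) ≈ O(n^2.81)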

Ex: Apply DAndC on a 4 x 4 matrix.

A =  A11 A12 A13 A14
     A21 A22 A23 A24
     A31 A32 A33 A34
     A41 A42 A43 A44

B =  B11 B12 B13 B14
     B21 B22 B23 B24
     B31 B32 B33 B34
     B41 B42 B43 B44

C =  C11 C12 C13 C14
     C21 C22 C23 C24
     C31 C32 C33 C34
     C41 C42 C43 C44

Calculate C11 using A11 and B11 as follows;

Calculate:

P = (A11 + A22) (B11 + B22)
Q = (A21 + A22) B11
R = A11 (B12 – B22)
S = A22 (B21 – B11)
T = (A11 + A12) B22
U = (A21 – A11) (B11 + B12)
V = (A12 – A22) (B21 + B22)

C11 = P+S-T+V
C12 = R+T
C21 = Q+S
C22 = P+R-Q+U

Recursively solve C12, C21 and C22 as per C11.

Ex:
A = 1 2        B = 2 2
    3 4            3 1
Calculate C = AB.
Solution:
A11 = 1   A12 = 2   A21 = 3   A22 = 4
B11 = 2   B12 = 2   B21 = 3   B22 = 1

P = (A11 + A22) (B11 + B22)
  = (1 + 4) (2 + 1) = 15
Q = (A21 + A22) B11
  = (3 + 4) (2) = 14
R = A11 (B12 – B22)
  = 1 (2 – 1) = 1
S = A22 (B21 – B11)
  = 4 (3 – 2) = 4
T = (A11 + A12) B22
  = (1 + 2) (1) = 3
U = (A21 – A11) (B11 + B12)
  = (3 – 1) (2 + 2) = 8
V = (A12 – A22) (B21 + B22)
  = (2 – 4) (3 + 1) = -8

C11 = P+S-T+V = 15+4-3-8 = 8


C12 = R+T = 1+3 = 4
C21 = Q+S = 14+4 = 18
C22 = P+R-Q+U = 15+1-14+8 = 10

C= 8 4
18 10
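
For reference, a minimal recursive Python sketch of Strassen's scheme, assuming n is a power of 2 and matrices stored as nested lists (the helper names add, sub and strassen are ours). Running it on the 2 x 2 example above reproduces the result just computed:

def add(X, Y):
    # Elementwise addition of two square matrices.
    n = len(X)
    return [[X[i][j] + Y[i][j] for j in range(n)] for i in range(n)]

def sub(X, Y):
    # Elementwise subtraction of two square matrices.
    n = len(X)
    return [[X[i][j] - Y[i][j] for j in range(n)] for i in range(n)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    # Divide each matrix into four m x m submatrices.
    A11 = [row[:m] for row in A[:m]]; A12 = [row[m:] for row in A[:m]]
    A21 = [row[:m] for row in A[m:]]; A22 = [row[m:] for row in A[m:]]
    B11 = [row[:m] for row in B[:m]]; B12 = [row[m:] for row in B[:m]]
    B21 = [row[:m] for row in B[m:]]; B22 = [row[m:] for row in B[m:]]
    # The seven recursive products.
    P = strassen(add(A11, A22), add(B11, B22))
    Q = strassen(add(A21, A22), B11)
    R = strassen(A11, sub(B12, B22))
    S = strassen(A22, sub(B21, B11))
    T = strassen(add(A11, A12), B22)
    U = strassen(sub(A21, A11), add(B11, B12))
    V = strassen(sub(A12, A22), add(B21, B22))
    # Combine into the four blocks of C.
    C11 = add(sub(add(P, S), T), V)
    C12 = add(R, T)
    C21 = add(Q, S)
    C22 = add(sub(add(P, R), Q), U)
    # Join the blocks into one n x n matrix.
    return [C11[i] + C12[i] for i in range(m)] + [C21[i] + C22[i] for i in range(m)]

print(strassen([[1, 2], [3, 4]], [[2, 2], [3, 1]]))   # [[8, 4], [18, 10]]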

Important Questions

1. Explain Divide and Conquer General method and analyze time complexity of
recurrence relation.
2. Analyze quick sort algorithm with the help of an example and derive time complexity.
3. Show how the Quick sort sorts the following sequences of keys in ascending order.
22, 55, 33, 11, 99, 77, 55, 66, 54, 21, 32
4. Explain merge sort with recursive algorithm and derive its time complexity.
5. Draw the tree of calls of merge sort for the following set of elements.
(20, 30, 10, 40, 5, 60, 90, 45, 35, 25, 15, 55)
6. Derive the time complexity of Strassen’s matrix multiplication using recurrence
relation.
7. Explain Binary search algorithm and derive its time complexity.
8. Search the element x using Binary Search Technique.
-15 -6 0 7 9 23 54 82 101 112 125 131 142 151 n = 14 x = 151

