
MVJ22CS41 – ANALYSIS AND DESIGN OF ALGORITHMS

Module 2

Semester – IV Section - A
Academic Year : 2024-2025(EVEN)
By
NIKITHA G S

Brute force : Selection sort
Selection sort is an in-place comparison sorting algorithm that uses brute force to sort an array.

It is called a "brute force" algorithm because it uses the simplest, and often least efficient, way of computing the solution. It makes up for this with its straightforward implementation.

The algorithm divides the array into two subarrays:

A sorted subarray
An unsorted subarray

The sorted subarray is empty in the beginning. In every iteration, the smallest element of the unsorted subarray is appended to the end of the sorted subarray by swapping. In this way, the sorted subarray eventually contains all the elements of the original array.
Selection sort selects the smallest element from the unsorted part of the list in each iteration and places it at the beginning of the unsorted part.

Selection sort starts by scanning the entire list to find its smallest element and exchanging it with the first element, thus putting the smallest element in its final position in the sorted list.

We then scan the list, starting with the second element, to find the smallest among the last n − 1 elements and exchange it with the second element, putting the second smallest element in its final position. Generally, on the ith pass through the list, which we number from 0 to n − 2, the algorithm searches for the smallest item among the last n − i elements and swaps it with A[i]. After n − 1 passes, the list is sorted.
Working of Selection Sort

1. Set the first element as minimum.

2. Compare minimum with the second element. If the second element is smaller than minimum, assign the second element as minimum.

3. Compare minimum with the third element. Again, if the third element is smaller, assign minimum to the third element; otherwise do nothing. The process goes on until the last element.

4. After each iteration, minimum is placed at the front of the unsorted list.

5. For each iteration, indexing starts from the first unsorted element. Steps 1 to 3 are repeated until all the elements are placed at their correct positions (a C sketch of these steps is given below).
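The following is a minimal C sketch of selection sort following the steps above; the array contents and function name are illustrative, not taken from the slides.

#include <stdio.h>

/* Sort a[0..n-1] in ascending order using selection sort. */
void selection_sort(int a[], int n) {
    for (int i = 0; i < n - 1; i++) {         /* n-1 passes */
        int min = i;                          /* assume first unsorted element is the minimum */
        for (int j = i + 1; j < n; j++) {     /* basic operation: comparison */
            if (a[j] < a[min])
                min = j;
        }
        int tmp = a[i];                       /* swap the minimum into position i */
        a[i] = a[min];
        a[min] = tmp;
    }
}

int main(void) {
    int a[] = {64, 25, 12, 22, 11};
    selection_sort(a, 5);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);   /* prints 11 12 22 25 64 */
    printf("\n");
    return 0;
}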
Example 2: worked trace of selection sort (shown as a figure in the slides).
How long does it take?

How many times do we swap?
    n − 1 times

How many times do we find the minimum?
    n − 1 times

What is the basic operation?
    if A[j] < A[min]

How long does it take to find the minimum?
    (n − 1) + (n − 2) + (n − 3) + ... + 1 comparisons
Time Complexity:

Time complexity is measured by the number of times the basic operation is executed.

To find the minimum element of an array of N elements, N − 1 comparisons are required. After putting the minimum element in its proper position, the size of the unsorted array reduces to N − 1, and then N − 2 comparisons are required to find the minimum in the unsorted part.

(n − 1) + (n − 2) + (n − 3) + ... + 1 = n(n − 1)/2 comparisons.

Thus, selection sort is a Θ(n²) algorithm on all inputs, because it always performs the same number of comparisons irrespective of the order of the given input elements.
Brute Force Approach - Bubble sort
Bubble sort is one of the simplest sorting algorithms and is also popularly known as a brute force approach.

The logic of the algorithm is very simple: it works by repeatedly iterating through a list of elements, comparing two adjacent elements at a time and swapping them if necessary, until all the elements are in order.

For example, if we have a list of 10 elements, bubble sort starts by comparing the first two elements in the list. If the second element is smaller than the first element, it exchanges them. Then it compares the current second element with the third element in the list.

This continues until the second-last and the last elements are compared, which completes one iteration through the list. By the time it completes the first iteration, the largest element in the list has moved to the rightmost position.

The algorithm gets its name because we start from the lowest position and "bubble up" the larger elements towards the highest position in the list.

We can also follow the opposite approach, where we start from the highest position and "bubble down" the smallest elements in the list.

In other words, this brute-force application to the sorting problem compares adjacent elements of the list and exchanges them if they are out of order.

By doing this repeatedly, we end up "bubbling up" the largest element to the last position in the list. The next pass bubbles up the second largest element, and so on, until after n − 1 passes the list is sorted. Pass i (0 ≤ i ≤ n − 2) of bubble sort can be represented by the following diagram:
Graphical Example (figure not reproduced)
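A minimal C sketch of bubble sort as described above (array contents and function name are illustrative, not from the slides):

#include <stdio.h>

/* Sort a[0..n-1] in ascending order by repeatedly bubbling the largest
   remaining element to the right end. */
void bubble_sort(int a[], int n) {
    for (int i = 0; i < n - 1; i++) {              /* n-1 passes */
        for (int j = 0; j < n - 1 - i; j++) {      /* compare adjacent pairs */
            if (a[j] > a[j + 1]) {                 /* out of order: exchange */
                int tmp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
        }
    }
}

int main(void) {
    int a[] = {5, 1, 4, 2, 8};
    bubble_sort(a, 5);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);   /* prints 1 2 4 5 8 */
    printf("\n");
    return 0;
}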
Sequential Search

Brute force algorithms are exactly what they sound like: straightforward methods of solving a problem that rely on sheer computing power and trying every possibility, rather than on advanced techniques for improving efficiency.

Sequential search simply compares successive elements of a given list with a given search key until either a match is encountered (successful search) or the list is exhausted without finding a match (unsuccessful search).

A simple extra trick is often employed in implementing sequential search: if we append the search key to the end of the list, the search for the key will always be successful, and therefore we can eliminate the end-of-list check altogether.
How long does it take?

How many times do we compare?
    n + 1 times (n times for the n elements of the array, plus 1 for the extra sentinel element appended at the end)

n + 1 comparisons: unsuccessful search
n comparisons: successful search (worst case)

Worst case analysis: O(n)
Best case analysis: Ω(1)
Average case analysis: Θ(n)
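A minimal C sketch of sequential search with the sentinel trick described above (names are illustrative); note that the array must have room for one extra element:

#include <stdio.h>

/* Search for key in a[0..n-1]; a must have capacity for n+1 elements.
   Returns the index of the first match, or -1 if the key is absent. */
int sequential_search(int a[], int n, int key) {
    a[n] = key;            /* sentinel: guarantees the loop terminates */
    int i = 0;
    while (a[i] != key)    /* no separate end-of-list check needed */
        i++;
    return (i < n) ? i : -1;
}

int main(void) {
    int a[6] = {31, 7, 19, 4, 25};                  /* 5 elements + room for sentinel */
    printf("%d\n", sequential_search(a, 5, 19));    /* prints 2 */
    printf("%d\n", sequential_search(a, 5, 99));    /* prints -1 */
    return 0;
}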


Brute-Force String Matching
String matching is crucial for database development and text-processing software.

Fortunately, every modern programming language and library is full of functions for string processing that help us in our everyday work.

String algorithms can typically be divided into several categories. One of these categories is string matching.

When it comes to string matching, the most basic approach is what is known as brute force, which simply means checking every single character of the text against the pattern.

Applications: text editors, search engines, biological research.
Applications of String Matching Algorithms:

Brute force is applicable to a wide variety of problems, particularly when the input size is large.

Plagiarism detection: The documents to be compared are decomposed into string tokens and compared using string matching algorithms. These algorithms are used to detect similarities between the documents and decide whether the work is plagiarized or original.

Bioinformatics and DNA sequencing: Bioinformatics involves applying information technology and computer science to problems involving genetic sequences, for example to find DNA patterns. String matching algorithms and DNA analysis are used together for finding occurrences of a pattern set.

Spelling checker: A trie is built from a predefined set of patterns and then used for string matching. The text is taken as input, and if any such pattern occurs, it is reported by reaching an acceptance state.

Search engines and content search in large databases: To categorize and organize data efficiently, string matching algorithms are used. Categorization is done based on the search keywords. Thus, string matching algorithms make it easier to find the information one is searching for.
Given a string of n characters called the text and a string of m characters (m ≤ n) called the pattern, find a substring of the text that matches the pattern. To put it more precisely, we want to find i, the index of the leftmost character of the first matching substring in the text.

If matches other than the first one need to be found, a string-matching algorithm can simply continue working until the entire text is exhausted.

Align the pattern against the first m characters of the text and start matching the corresponding pairs of characters from left to right until either all m pairs of characters match (then the algorithm can stop) or a mismatching pair is encountered.

In the latter case, shift the pattern one position to the right and resume the character comparisons, starting again with the first character of the pattern and its counterpart in the text.

Note that the last position in the text that can still be the beginning of a matching substring is n − m (provided the text positions are indexed from 0 to n − 1).

Beyond that position, there are not enough characters to match the entire pattern; hence, the algorithm need not make any comparisons there.
Example 1:
    Text    T = a b a c a   (n = 5)
    Pattern P = a b a       (m = 3)

Example 2:
    Text    T = b d a c a   (n = 5)
    Pattern P = a c a       (m = 3)
Assignment 1, Question 3:
    Text    T = a t e n a   (n = 5)
    Pattern P = t e n       (m = 3)
The brute-force algorithm consists of checking, at every position in the text between 0 and n − m, whether an occurrence of the pattern starts there. After each attempt, it shifts the pattern by exactly one position to the right.

The time complexity of brute-force string matching is O(mn), sometimes written as O(n*m). So, searching for a pattern of m characters in a text of n characters using brute force takes on the order of n*m character comparisons in the worst case.

Worst case:
    Outer loop: (n − m + 1) ≈ n iterations
    Inner loop: m comparisons
    => O(n*m)
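A minimal C sketch of brute-force string matching (function and variable names are illustrative, not from the slides):

#include <stdio.h>
#include <string.h>

/* Return the index of the first occurrence of pattern p (length m)
   in text t (length n), or -1 if there is no match. */
int brute_force_match(const char *t, int n, const char *p, int m) {
    for (int i = 0; i <= n - m; i++) {          /* last possible start is n - m */
        int j = 0;
        while (j < m && t[i + j] == p[j])       /* compare pattern left to right */
            j++;
        if (j == m)                             /* all m characters matched */
            return i;
    }
    return -1;
}

int main(void) {
    const char *t = "abaca";
    const char *p = "aba";
    printf("%d\n", brute_force_match(t, (int)strlen(t), p, (int)strlen(p)));  /* prints 0 */
    return 0;
}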
Exhaustive Search
Exhaustive search is simply a brute-force approach to combinatorial problems: it systematically generates every candidate solution of the given problem and checks each one.

Exhaustive search can be applied to three important problems: the traveling salesman problem, the knapsack problem, and the assignment problem.

Traveling Salesman Problem: The traveling salesman problem (TSP) asks to find the shortest tour through a given set of n cities that visits each city exactly once before returning to the city where it started.

The traveling salesman problem consists of a salesman and a set of cities. The salesman has to visit each one of the cities, starting from a certain one (e.g. the hometown) and returning to the same city. The challenge of the problem is that the traveling salesman wants to minimize the total length of the trip.
The problem can be conveniently modeled by a weighted graph, with the graph's vertices representing the cities and the edge weights specifying the distances. Then the problem can be stated as the problem of finding the shortest Hamiltonian circuit of the graph.

A Hamiltonian circuit is defined as a cycle that passes through all the vertices of the graph exactly once.

It is easy to see that a Hamiltonian circuit can also be defined as a sequence of n + 1 adjacent vertices v_i0, v_i1, ..., v_i(n−1), v_i0, where the first vertex of the sequence is the same as the last one and all the other n − 1 vertices are distinct.

Thus, we can get all the tours by generating all the permutations of the n − 1 intermediate cities, computing the tour lengths, and finding the shortest among them (a sketch follows after the example below).
Example (4 cities; the graph figure is not reproduced, but its edge weights, recovered from the tour lengths below, are w(1,2) = 2, w(1,3) = 2, w(1,4) = 1, w(2,3) = 5, w(2,4) = 4, w(3,4) = 3):

    1 -> 2 -> 3 -> 4 -> 1    2 + 5 + 3 + 1 = 11    Optimal
    1 -> 2 -> 4 -> 3 -> 1    2 + 4 + 3 + 2 = 11    Optimal
    1 -> 3 -> 2 -> 4 -> 1    2 + 5 + 4 + 1 = 12
    1 -> 3 -> 4 -> 2 -> 1    2 + 3 + 4 + 2 = 11    Optimal
    1 -> 4 -> 2 -> 3 -> 1    1 + 4 + 5 + 2 = 12
    1 -> 4 -> 3 -> 2 -> 1    1 + 3 + 5 + 2 = 11    Optimal
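A minimal C sketch of the exhaustive-search approach to the TSP: it fixes one city as the start, generates all permutations of the remaining cities recursively, and keeps the shortest tour. The distance matrix below encodes the 4-city example above, with cities 1..4 renumbered as indices 0..3 (an illustrative encoding, not from the slides).

#include <stdio.h>
#include <limits.h>

#define N 4

/* Distance matrix for the example above (cities 1..4 become indices 0..3). */
int d[N][N] = {
    {0, 2, 2, 1},
    {2, 0, 5, 4},
    {2, 5, 0, 3},
    {1, 4, 3, 0}
};

int best = INT_MAX;

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Generate all permutations of tour[k..N-1]; tour[0] is fixed as the start city. */
void tsp(int tour[], int k) {
    if (k == N) {                          /* a complete tour: compute its length */
        int len = 0;
        for (int i = 0; i < N; i++)
            len += d[tour[i]][tour[(i + 1) % N]];   /* wrap back to the start */
        if (len < best) best = len;
        return;
    }
    for (int i = k; i < N; i++) {
        swap(&tour[k], &tour[i]);
        tsp(tour, k + 1);
        swap(&tour[k], &tour[i]);          /* backtrack */
    }
}

int main(void) {
    int tour[N] = {0, 1, 2, 3};
    tsp(tour, 1);                          /* permute positions 1..N-1 only */
    printf("shortest tour length = %d\n", best);   /* prints 11 */
    return 0;
}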
Assignment 1, Question 4: Traveling Salesman Problem on a weighted graph (graph figure not reproduced).
KNAPSACK PROBLEM

The problem states that: given n items of known weights w1, w2, ..., wn and values v1, v2, ..., vn and a knapsack of capacity W, find the most valuable subset of the items that fits into the knapsack.

E.g. items in a shopping basket.

Problem statement: fill up the sack with items in such a way that we get the maximum profit out of it.
The figure accompanying this slide (not reproduced here) presents a small instance of the knapsack problem in which 65 is the maximum value. Taking that as the optimal solution:

Exhaustive search considers all the subsets of the given set of n items, computing the total weight of each subset in order to identify the feasible subsets, and then finding the feasible subset with the largest total value, which is an optimal solution (a sketch of this subset enumeration is given below).

Since the number of subsets of an n-element set is 2^n, the exhaustive search leads to an Ω(2^n) algorithm, no matter how efficiently individual subsets are generated.
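A minimal C sketch of exhaustive search for the knapsack problem, enumerating all 2^n subsets with a bitmask; the item weights, values, and capacity below are illustrative, not the instance from the slides.

#include <stdio.h>

/* Exhaustive search over all 2^n subsets of n items.
   Returns the best achievable total value within capacity W. */
int knapsack_exhaustive(int n, const int w[], const int v[], int W) {
    int best = 0;
    for (unsigned mask = 0; mask < (1u << n); mask++) {   /* each mask encodes one subset */
        int weight = 0, value = 0;
        for (int i = 0; i < n; i++) {
            if (mask & (1u << i)) {        /* item i is in this subset */
                weight += w[i];
                value  += v[i];
            }
        }
        if (weight <= W && value > best)   /* feasible and better than the best so far */
            best = value;
    }
    return best;
}

int main(void) {
    int w[] = {7, 3, 4, 5};        /* illustrative weights */
    int v[] = {42, 12, 40, 25};    /* illustrative values */
    printf("%d\n", knapsack_exhaustive(4, w, v, 10));   /* prints 65 for this instance */
    return 0;
}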



Thus, for both the TSP and the knapsack problem, exhaustive search leads to algorithms that are extremely inefficient on every input.

• These two problems are the best-known examples of NP-hard problems.

• No polynomial-time algorithm is known for any NP-hard problem.

• The two methods backtracking and branch-and-bound often enable us to solve instances of such problems in far less time than full exhaustive search.
ASSIGNMENT PROBLEM

The problem is: given n people who need to be assigned to execute n jobs, one person per job (that is, each person is assigned to exactly one job and each job is assigned to exactly one person).

The cost of assigning the ith person to the jth job is a known quantity C[i,j] for each pair i, j = 1, 2, ..., n. The problem is to find an assignment with the minimum total cost.
It is easy to see that an instance of the assignment problem is completely specified by
its cost matrix C.

In terms of this matrix, the problem is to select one element in each row of the matrix
so that all selected elements are in different columns and the total sum of the selected
elements is the smallest possible.

Note that no obvious strategy for finding a solution works here. For example, we cannot simply select the smallest element in each row, because the smallest elements may happen to be in the same column.

In fact, the smallest element in the entire matrix need not be a component of an optimal solution. Thus, we opt for exhaustive search.
We can describe feasible solutions to the assignment problem as n-tuples ⟨j1, ..., jn⟩ in which the ith component, i = 1, ..., n, indicates the column of the element selected in the ith row (i.e., the job number assigned to the ith person).

For example, for the cost matrix shown in the slides, ⟨2, 3, 4, 1⟩ indicates the assignment of Person 1 to Job 2, Person 2 to Job 3, Person 3 to Job 4, and Person 4 to Job 1.

The requirements of the assignment problem imply that there is a one-to-one correspondence between feasible assignments and permutations of the first n integers.

Therefore, the exhaustive-search approach to the assignment problem requires generating all the permutations of the integers 1, 2, ..., n, computing the total cost of each assignment by summing up the corresponding elements of the cost matrix, and finally selecting the one with the smallest sum. A sketch of this approach follows.
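A minimal C sketch of exhaustive search for the assignment problem: it generates all permutations of job indices and keeps the cheapest assignment. The 4×4 cost matrix below is illustrative, not the one from the slides.

#include <stdio.h>
#include <limits.h>

#define N 4

/* Illustrative cost matrix: cost[i][j] is the cost of person i doing job j. */
int cost[N][N] = {
    {9, 2, 7, 8},
    {6, 4, 3, 7},
    {5, 8, 1, 8},
    {7, 6, 9, 4}
};

int best = INT_MAX;

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Permute jobs[k..N-1]; when a full permutation is built, evaluate its cost. */
void assign(int jobs[], int k) {
    if (k == N) {
        int total = 0;
        for (int i = 0; i < N; i++)
            total += cost[i][jobs[i]];   /* person i gets job jobs[i] */
        if (total < best) best = total;
        return;
    }
    for (int i = k; i < N; i++) {
        swap(&jobs[k], &jobs[i]);
        assign(jobs, k + 1);
        swap(&jobs[k], &jobs[i]);        /* backtrack */
    }
}

int main(void) {
    int jobs[N] = {0, 1, 2, 3};
    assign(jobs, 0);
    printf("minimum total cost = %d\n", best);   /* prints 13 for this matrix */
    return 0;
}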
Divide and Conquer
Divide-and-conquer is probably the best-known general algorithm design technique.

Divide-and-conquer algorithms work according to the following general plan:

1. Divide: Divide the problem into a number of smaller sub-problems, ideally of about the same size.

2. Conquer: Solve the smaller sub-problems, typically recursively. If the sub-problem sizes are small enough, just solve the sub-problems in a straightforward manner.

3. Combine: If necessary, combine the solutions obtained for the smaller sub-problems to get the solution to the original problem.
As an example, let us consider the problem of computing the sum of n numbers a0, ..., a(n−1). If n > 1, we can divide the problem into two instances of the same problem: compute the sum of the first ⌊n/2⌋ numbers and compute the sum of the remaining ⌈n/2⌉ numbers.

Once each of these two sums is computed by applying the same method recursively, we can add their values to get the sum in question.

Example:
    Arr[0] Arr[1] Arr[2] Arr[3] Arr[4]
      2      3      4      5      1
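A minimal C sketch of this recursive summation (function and array names are illustrative):

#include <stdio.h>

/* Sum a[lo..hi] by splitting the range in half and recursing. */
int dc_sum(const int a[], int lo, int hi) {
    if (lo == hi)                 /* one element: the sum is the element itself */
        return a[lo];
    int mid = (lo + hi) / 2;      /* divide */
    return dc_sum(a, lo, mid) + dc_sum(a, mid + 1, hi);   /* conquer and combine */
}

int main(void) {
    int arr[] = {2, 3, 4, 5, 1};
    printf("%d\n", dc_sum(arr, 0, 4));   /* prints 15 */
    return 0;
}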
Divide-and-conquer technique :
GENERAL METHOD

Suppose we consider the divide-and-conquer strategy when it splits the input into two sub-problems of the same kind as the original problem.

A control abstraction is a procedure whose flow of control is clear but whose primary operations are specified by other procedures whose precise meanings are left undefined.

The control abstraction for the divide-and-conquer technique is DAndC(P), where P is the problem to be solved.
Control Abstraction for Divide and Conquer

Algorithm DAndC(P)
{
    if Small(P) then return S(P);
    else
    {
        divide P into smaller instances P1, P2, ..., Pk, where k > 1;
        apply DAndC to each of these sub-problems;
        return Combine(DAndC(P1), DAndC(P2), ..., DAndC(Pk));
    }
}
Algorithm: control abstraction for divide-and-conquer. DAndC(P) is the divide-and-conquer procedure, where P is the problem to be solved.

Small(P) is a Boolean-valued function (i.e., either true or false) that determines whether the input size is small enough that the answer can be computed without splitting. If so, the function S is invoked. Otherwise the problem P is divided into smaller sub-problems. These sub-problems P1, P2, ..., Pk are solved by recursive applications of DAndC.

Combine is a function that combines the solutions of the k sub-problems to get the solution for the original problem P. If the size of P is n, the sizes of the k sub-problems are n1, n2, ..., nk.
The computing time of DAndC is described by the recurrence relation

    T(n) = g(n)                                   if n is small
    T(n) = T(n1) + T(n2) + ... + T(nk) + f(n)     otherwise

where T(n) is the time for DAndC on any input of size n and g(n) is the time to compute the answer directly for small inputs. The function f(n) is the time for dividing P and combining the solutions to the sub-problems.

For divide-and-conquer algorithms that produce sub-problems of the same type as the original problem, the complexity can be given by

    T(n) = T(1)               for n = 1
    T(n) = a T(n/b) + f(n)    for n > 1

where a and b are known constants.
BINARY SEARCH
Binary search is an efficient searching technique that works only with sorted lists. So the list must be sorted before using the binary search method. Binary search is based on the divide-and-conquer technique.

The process of binary search is as follows: The method starts with looking at the
middle element of the list. If it matches with the key element, then search is
complete. Otherwise, the key element may be in the first half or second half of
the list. If the key element is less than the middle element, then the search
continues with the first half of the list.

If the key element is greater than the middle element, then the search continues
with the second half of the list. This process continues until the key element is
found or the search fails indicating that the key is not there in the list
Consider the list of elements: -4, -1, 0, 5, 10, 18, 32, 33, 98, 147, 154, 198,
250, 500. Trace the binary search algorithm searching for the element -1.
Recursive Binary Search:
Iterative Binary Search:
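The slides' code for these two versions is not reproduced here; the following is a minimal C sketch of both (array, variable, and function names are illustrative). It can be traced on the list from the exercise above.

#include <stdio.h>

/* Recursive binary search: returns the index of key in a[lo..hi], or -1. */
int bsearch_rec(const int a[], int lo, int hi, int key) {
    if (lo > hi) return -1;              /* empty range: unsuccessful search */
    int mid = lo + (hi - lo) / 2;        /* middle element */
    if (a[mid] == key) return mid;
    if (key < a[mid])
        return bsearch_rec(a, lo, mid - 1, key);   /* search the first half */
    return bsearch_rec(a, mid + 1, hi, key);       /* search the second half */
}

/* Iterative binary search over a[0..n-1]. */
int bsearch_iter(const int a[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return mid;
        else if (key < a[mid]) hi = mid - 1;
        else lo = mid + 1;
    }
    return -1;
}

int main(void) {
    int a[] = {-4, -1, 0, 5, 10, 18, 32, 33, 98, 147, 154, 198, 250, 500};
    int n = 14;
    printf("%d\n", bsearch_rec(a, 0, n - 1, -1));   /* prints 1 */
    printf("%d\n", bsearch_iter(a, n, 32));         /* prints 6 */
    return 0;
}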
Advantages of binary search: The main advantage of binary search is that it is faster than sequential (linear) search, because it takes fewer comparisons to determine whether the given key is in the list than the linear search method does.

Disadvantages of binary search: The disadvantage of binary search is that it can be applied only to a sorted list of elements. Binary search may fail if the list is unsorted.

Efficiency of binary search: To evaluate binary search, count the number of comparisons in the best case, average case, and worst case.

Best case: The best case occurs if the middle element happens to be the key element. Then only one comparison is needed to find it. Thus the best-case efficiency of binary search is Ω(1).

Ex: Let the given list be 1, 5, 10, 11, 12 and let key = 10. Since the key is the middle element, it is found on the first attempt.

Worst case: Assume that in the worst case the key element is not in the list. The process of dividing the list in half then continues until there is only one item left to check. For a list of size 16, there are 4 comparisons to reach a list of size one, given that there is one comparison for each division and each division halves the list size.

In general, if n is the size of the list and C is the number of comparisons, then C = log2 n.

Efficiency in the worst case = O(log n).

Average case: In binary search, the average-case efficiency is close to the worst-case efficiency, so the average-case efficiency is taken as O(log n).

Efficiency in the average case = O(log n).

The time complexity of binary search is O(log n).


FINDING THE MAXIMUM AND MINIMUM

Problem statement: Given a list of n elements, the problem is to find the maximum and minimum items.

Straightforward Maximum & Minimum

Example:
    a[1] a[2] a[3] a[4] a[5]
     2    3    4    5    1
Divide & Conquer Method:

1. Divide the list into small groups.
2. Then find the max and min of each group.
3. The overall max/min must be one of the maxes and mins of the groups.

Example: recursively finding the Maximum & Minimum.
Base cases: a single-element list (e.g. a[1] = 10) needs no comparison; a two-element list (e.g. a[1] = 10, a[2] = 20) needs one comparison.
Analysis of the MaxMin Algorithm

Analyzing the time complexity of the algorithm concentrates on the number of element comparisons.

If only one element is present, then no comparison is required:
    T(n) = 0,  n = 1

If there are two elements in the list, then we require 1 comparison:
    T(n) = 1,  n = 2

If there are more than two elements in the given list:
    T(n) = T(n/2) + T(n/2) + 2,  n > 2

Solving this recurrence for n a power of 2 gives T(n) = 3n/2 − 2 comparisons. A C sketch of the algorithm follows.

Merge sort

Merge Sort is one of the most popular sorting algorithms that is based on the
principle of Divide and Conquer Algorithm.

It divides the input array into two halves, calls itself for the two halves, and then
merges the two sorted halves.

It sorts a given array A[0..n − 1] by dividing it into two halves A[0..n/2 − 1] and
A[n/2..n − 1], sorting each of them recursively, and then merging the two smaller
sorted arrays into a single sorted one.
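A minimal C sketch of merge sort as described above, using a temporary buffer for the merge step (names are illustrative):

#include <stdio.h>
#include <string.h>

/* Merge the two sorted halves a[lo..mid] and a[mid+1..hi] using buffer b. */
static void merge(int a[], int b[], int lo, int mid, int hi) {
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi)
        b[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) b[k++] = a[i++];     /* copy any leftovers from the left half */
    while (j <= hi)  b[k++] = a[j++];     /* copy any leftovers from the right half */
    memcpy(a + lo, b + lo, (hi - lo + 1) * sizeof(int));
}

/* Sort a[lo..hi] by recursively sorting the two halves and merging them. */
static void merge_sort(int a[], int b[], int lo, int hi) {
    if (lo >= hi) return;                 /* zero or one element: already sorted */
    int mid = (lo + hi) / 2;
    merge_sort(a, b, lo, mid);
    merge_sort(a, b, mid + 1, hi);
    merge(a, b, lo, mid, hi);
}

int main(void) {
    int a[] = {8, 3, 2, 9, 7, 1, 5, 4};
    int b[8];
    merge_sort(a, b, 0, 7);
    for (int i = 0; i < 8; i++) printf("%d ", a[i]);   /* prints 1 2 3 4 5 7 8 9 */
    printf("\n");
    return 0;
}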
Mergesort Example 1: merging two sorted halves

    Left half  a: 1 4 6 8        Right half a: 2 3 5 7

    Merged b[1..8]: 1 2 3 4 5 6 7 8
Whenever we divide the array in half at every step, the number of halving steps can be represented using a logarithmic function, log n, and the number of levels is at most log n + 1.

Also, finding the middle of any subarray is a single-step operation, i.e. O(1).

And merging the subarrays made by dividing the original array of n elements requires O(n) time per level.

Hence the total time for the mergeSort function becomes n(log n + 1), which gives a time complexity of O(n log n).

Worst case time complexity [Big-O]: O(n log n)
Best case time complexity [Big-Omega]: Ω(n log n)
Average time complexity [Big-Theta]: Θ(n log n)
Space complexity: O(n)

Time complexity of merge sort: the recurrence T(n) = 2T(n/2) + cn with T(1) = 0 solves to Θ(n log n).
Best case complexity: The merge sort algorithm has a best-case time complexity of O(n log n), which occurs for an already sorted array.

Average case complexity: The average-case time complexity for the merge sort algorithm is O(n log n); this applies when the elements are jumbled, i.e. neither in ascending nor in descending order.

Worst case complexity: The worst-case time complexity is also O(n log n), which occurs, for example, when we sort an array given in descending order into ascending order.
QUICKSORT

The divide-and-conquer approach can be used to arrive at an efficient sorting method different from merge sort.

In merge sort, the array a[1:n] was divided at its midpoint into subarrays which were independently sorted and later merged. In quicksort, the division into two subarrays is made so that the sorted subarrays do not need to be merged later.

• Quick sort is a famous sorting algorithm.
• It sorts the given data items in ascending order.
• It uses the idea of the divide-and-conquer approach.
• It follows a recursive algorithm.
How Does Quick Sort Work?

• Quick sort follows a recursive algorithm.
• It divides the given array into two sections using a partitioning element called the pivot.

The division performed is such that:
• All the elements to the left side of the pivot are smaller than the pivot.
• All the elements to the right side of the pivot are greater than the pivot.

After dividing the array into two sections, the pivot is set at its correct position. Then the subarrays are sorted separately by applying the quick sort algorithm recursively.
Partitioning example (array of 9 elements; the pivot is the first element):

    index:  0   1   2   3   4   5   6   7   8
    a:     10  16   8  12  15   6   3   9   5

    Pivot = 10; i scans from just after the pivot towards the right, and j scans
    from the right end towards the left.
Here,

i searches (moving right) for an element greater than 10, i.e. the pivot;
j searches (moving left) for an element less than 10, i.e. the pivot;
so that the two elements can be exchanged.

At most, i stops at the sentinel (+infinity, assumed past the last element) and j stops at the pivot.
Partitioning Procedure: STEP - 1
    index:  0   1   2   3   4   5   6   7   8
    a:     10  16   8  12  15   6   3   9   5
                i                           j
    Pivot = 10

Increment i until we find an element greater than 10 (the pivot): i stops at 16.
Decrement j until we find an element less than 10 (the pivot): j stops at 5.
Exchange 5 and 16:

    index:  0   1   2   3   4   5   6   7   8
    a:     10   5   8  12  15   6   3   9  16
                i                           j
    Pivot = 10
Partitioning Procedure: STEP - 2
    index:  0   1   2   3   4   5   6   7   8
    a:     10   5   8  12  15   6   3   9  16
                i                           j
    Pivot = 10

Increment i: 8 is not greater than 10; increment i again: 12 is greater than 10, so i stops at 12.
Decrement j until we find an element less than 10: 9 is less than 10, so j stops at 9.
Exchange 12 and 9:

    index:  0   1   2   3   4   5   6   7   8
    a:     10   5   8   9  15   6   3  12  16
                        i               j
    Pivot = 10
Partitioning Procedure: STEP - 3
    index:  0   1   2   3   4   5   6   7   8
    a:     10   5   8   9  15   6   3  12  16
                        i               j
    Pivot = 10

Increment i: 15 is greater than 10, so i stops at 15.
Decrement j: 3 is less than 10, so j stops at 3.
Exchange 15 and 3:

    index:  0   1   2   3   4   5   6   7   8
    a:     10   5   8   9   3   6  15  12  16
                            i       j
    Pivot = 10
Partitioning Procedure: STEP - 4

    index:  0   1   2   3   4   5   6   7   8
    a:     10   5   8   9   3   6  15  12  16
                            i       j
    Pivot = 10

Increment i: 6 is not greater than 10; increment i again: 15 is greater than 10, so i stops at 15.
Decrement j: 6 is less than 10, so j stops at 6.
Here i > j (the scans have crossed), so we do not interchange a[i] and a[j]; instead we interchange the pivot and a[j].
Interchange 10 and 6:

    index:  0   1   2   3   4   5   6   7   8
    a:      6   5   8   9   3  10  15  12  16
                                j   i
Partitioning Procedure: result

    index:  0   1   2   3   4   5   6   7   8
    a:      6   5   8   9   3  10  15  12  16
                                j   i

Here all the elements on the left-hand side of 10 are less than 10 and all the elements on the right-hand side are greater than 10, but neither side is sorted yet. j (the pivot's final position) is the partitioning position. Quick sort is now performed recursively on each side; for the left subarray [6 5 8 9 3] the new pivot is 6, and the right subarray is [15 12 16]. A C sketch of this partitioning procedure is given below.
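A minimal C sketch of quick sort with this first-element-as-pivot partitioning scheme: it scans with i and j and swaps the pivot into position j, as traced above (names are illustrative).

#include <stdio.h>

static void swap(int a[], int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

/* Partition a[lo..hi] around the pivot a[lo]; return the pivot's final position. */
int partition(int a[], int lo, int hi) {
    int pivot = a[lo];
    int i = lo, j = hi + 1;
    while (1) {
        do { i++; } while (i <= hi && a[i] < pivot);   /* i stops at an element >= pivot */
        do { j--; } while (a[j] > pivot);              /* j stops at an element <= pivot */
        if (i >= j) break;                             /* scans have crossed */
        swap(a, i, j);
    }
    swap(a, lo, j);    /* put the pivot into its final position j */
    return j;
}

void quick_sort(int a[], int lo, int hi) {
    if (lo < hi) {
        int p = partition(a, lo, hi);
        quick_sort(a, lo, p - 1);     /* left side: elements <= pivot */
        quick_sort(a, p + 1, hi);     /* right side: elements >= pivot */
    }
}

int main(void) {
    int a[] = {10, 16, 8, 12, 15, 6, 3, 9, 5};
    quick_sort(a, 0, 8);
    for (int i = 0; i < 9; i++) printf("%d ", a[i]);   /* prints 3 5 6 8 9 10 12 15 16 */
    printf("\n");
    return 0;
}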
Worst Case:

• Quick sort is sensitive to the order of the input data.
• It gives its worst performance when the elements are already in ascending order.
• It then divides the array into sections of 1 and (n − 1) elements in each call.
• There are then (n − 1) such divisions in all.
• Therefore, the total number of comparisons required is f(n) = n × (n − 1) = O(n²).
Worst case analysis: the pivot is the smallest element every time, so one side of the partition is always empty. By the definition of Big-O (with C = 1, n0 = 0, g(n) = n²), T(n) = O(n²).

Average case analysis: on average the pivot splits the array into two parts of roughly equal size, giving T(n) = O(n log n).
Matrix multiplication: Strassen's algorithm

For two 2×2 matrices, the product can be computed by 4 equations (the brute-force formulas):
    c11 = a11*b11 + a12*b21
    c12 = a11*b12 + a12*b22
    c21 = a21*b11 + a22*b21
    c22 = a21*b12 + a22*b22
void multiply(int A[][N], int B[][N], int C[][N])
{
    for (int i = 0; i < N; i++)
    {
        for (int j = 0; j < N; j++)
        {
            C[i][j] = 0;
            for (int k = 0; k < N; k++)
            {
                C[i][j] += A[i][k] * B[k][j];   /* three nested loops */
            }
        }
    }
}
/* Time Complexity = O(n^3) */
Consider two matrices A and B with 4x4 dimension each as shown below,

The matrix multiplication of the above two matrices A and B is Matrix C,

where,
c11=a11∗b11+a12∗b21+a13∗b31+a14∗b41
c12=a11∗b12+a12∗b22+a13∗b32+a14∗b42
c21=a21∗b11+a22∗b21+a23∗b31+a24∗b41
c22=a21∗b12+a22∗b22+a23∗b32+a24∗b42
.
.
and so on.
The divide-and-conquer paradigm is an important general technique for designing algorithms.

In general, it follows the steps:

- divide the problem into sub problems

- recursively solve the sub problems

- combine solutions to sub problems to get solution to original problem


Now, let's look at the Divide and Conquer approach to multiply two matrices.
Partition each of the matrices A and B into four blocks, where the block matrices Aij are of size n/2 × n/2 (and similarly for the block entries of B and C). Trivially, we may apply the definition of block-matrix multiplication to write down a formula for the block entries of C, as used in the algorithm below.
Algorithm MM(A, B, n)
{
    if (n <= 2)
    {
        // compute the product directly with the 4 formulas:
        // c11 = a11*b11 + a12*b21
        // c12 = a11*b12 + a12*b22
        // c21 = a21*b11 + a22*b21
        // c22 = a21*b12 + a22*b22
    }
    else
    {
        mid = n/2;
        C11 = MM(A11, B11, n/2) + MM(A12, B21, n/2);
        C12 = MM(A11, B12, n/2) + MM(A12, B22, n/2);
        C21 = MM(A21, B11, n/2) + MM(A22, B21, n/2);
        C22 = MM(A21, B12, n/2) + MM(A22, B22, n/2);
    }
}
For n > 2, the elements of C can be computed using matrix multiplication and addition operations applied to matrices of size n/2 × n/2. Since n is a power of 2, these matrix products can be recursively computed by the same algorithm we are using for the n × n case. The algorithm keeps applying itself to smaller submatrices until n becomes suitably small (n = 2), at which point the product is computed directly.

To compute AB, we need to perform eight multiplications of n/2 × n/2 matrices and four additions of n/2 × n/2 matrices.

Since two n/2 × n/2 matrices can be added in time cn² for some constant c, the overall computing time T(n) of the resulting divide-and-conquer algorithm is given by the recurrence

    T(n) = 8 T(n/2) + cn²

Here a = 8, b = 2, f(n) = n², so n^k = n² and log2 8 = 3 > 2; hence

    Time Complexity = Θ(n³)


Strassen showed in 1969 how the product of two 2×2 matrices, normally found by the brute-force algorithm, can be computed differently.

By using the divide-and-conquer technique with his formulas, the overall complexity of multiplying two matrices is reduced. This happens by decreasing the total number of multiplications performed, at the expense of a slight increase in the number of additions.

For multiplying two 2×2 matrices, Strassen used formulas involving seven multiplications and eighteen additions/subtractions, whereas the brute-force algorithm uses eight multiplications and four additions.

Strassen's algorithm is an algorithm for matrix multiplication that is asymptotically faster than the naive matrix multiplication algorithm. The idea behind Strassen's algorithm is the formulation of matrix multiplication as a recursive problem.
Pseudocode

1. Divide matrices A and B into 4 submatrices of size N/2 × N/2 each (diagram not reproduced).
2. Calculate the 7 matrix products recursively.
3. Compute the submatrices of C from these products.
4. Combine these submatrices into the new matrix C.
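For reference, here are the seven products and the block combinations used in Strassen's classical formulation; these standard formulas are added for completeness and are not reproduced from the slides:

    M1 = (A11 + A22)(B11 + B22)
    M2 = (A21 + A22) B11
    M3 = A11 (B12 − B22)
    M4 = A22 (B21 − B11)
    M5 = (A11 + A12) B22
    M6 = (A21 − A11)(B11 + B12)
    M7 = (A12 − A22)(B21 + B22)

    C11 = M1 + M4 − M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 − M2 + M3 + M6

With seven half-size multiplications, the recurrence becomes T(n) = 7 T(n/2) + cn², which solves to Θ(n^(log2 7)) ≈ Θ(n^2.81).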
Advantages of Divide and Conquer

• Divide and conquer is a powerful tool for solving conceptually difficult problems; for example, it yields an elegant solution to the Tower of Hanoi, a classic mathematical puzzle.

• It is challenging to solve complicated problems for which you have no basic idea, but the divide and conquer approach lessens the effort, since it divides the main problem into sub-problems and then solves them recursively. The resulting algorithms are often much faster than naive alternatives.

• It uses cache memory efficiently, without occupying much space, because it solves small sub-problems within the cache memory instead of repeatedly accessing the slower main memory.

• It is more proficient than its counterpart, the brute force technique.

• Since these algorithms exhibit parallelism, they can be executed on multi-processor machines with little modification.

• It results in efficient algorithms and in algorithms that use the memory cache efficiently.
Disadvantages of Divide and Conquer

• Since most of its algorithms are designed using recursion, it necessitates careful memory management.

• An explicit or implicit recursion stack may use a lot of space, and the program may even crash if the recursion goes deeper than the stack space available.

• Recursion can be slow.

• Another concern is that it can sometimes become more complicated than a basic iterative approach, especially for large n.
In other words, if someone wanted to add a large list of numbers, a simple loop that adds them one by one is a much simpler approach than dividing the numbers into two groups, adding each group recursively, and then adding the sums of the two groups together.

Another downside is that once the problem is broken down into sub-problems, the same sub-problem can occur many times.
