1 Module 5

The document discusses various searching and sorting algorithms, including Linear Search, Binary Search, and Bubble Sort, along with their implementations and complexities. Linear Search is a simple algorithm for finding an element in an unordered list, while Binary Search is more efficient for sorted lists. Bubble Sort is an elementary sorting technique that repeatedly swaps adjacent elements, but it is not suitable for large datasets due to its poor performance.


Searching and Sorting

Module V- Searching & sorting


- Searching: Sequential Search, Binary Search
- Sorting: Insertion sort, Selection sort, Merge sort, Quick sort
and Radix sort. Analysis of all sorting techniques

Mrs. Surekha Mali, Assistant Professor, IT Dept, SLRTCE


Sequential Search or Linear Search
• Searching is the process of finding a particular element in a list. If the element is present in the list, the search is called successful and returns the location of that element; otherwise, the search is called unsuccessful.
• Two popular search methods are Linear Search and Binary Search. Here we will discuss the Linear Search algorithm.
• Linear search is also called the sequential search algorithm. It is the simplest searching algorithm. In linear search, we simply traverse the list completely and match each element of the list with the item whose location is to be found. If a match is found, the location of the item is returned; otherwise, the algorithm reports that the item was not found (e.g. by returning -1).
• It is widely used to search for an element in an unordered list, i.e., a list in which the items are not sorted. The worst-case time complexity of linear search is O(n).
Sequential Search or Linear search
• The steps used in the implementation of Linear Search are listed as follows -
• First, traverse the array elements using a for loop.
• In each iteration of the for loop, compare the search element with the current array element:
• If the elements match, return the index of the corresponding array element.
• If they do not match, move to the next element.
• If there is no match, i.e., the search element is not present in the given array, return -1.
• Now, let's see how the linear search algorithm works.
Sequential Search or Linear search
• How Does Linear Search Algorithm Work?
• In Linear Search Algorithm,
• Every element is considered as a potential match for the key and checked
for the same.
• If any element is found equal to the key, the search is successful and the
index of that element is returned.
• If no element is found equal to the key, the search yields “No match found”.
• For example: Consider the array arr[] = {10, 50, 30, 70, 80, 20, 90,
40} and key = 30
Example of Sequential Search or Linear Search

#include <stdio.h>

int main()
{
    int a[10] = {10, 23, 40, 1, 2, 0, 14, 13, 50, 9};
    int item, i, flag = 0;

    printf("\nEnter item which is to be searched\n");
    scanf("%d", &item);

    for (i = 0; i < 10; i++)
    {
        if (a[i] == item)
        {
            flag = i + 1;    /* store the 1-based position */
            break;
        }
    }

    if (flag != 0)
        printf("\nItem found at location %d\n", flag);
    else
        printf("\nItem not found\n");

    return 0;
}
Binary Search
• Searching is the process of finding some particular element in the list. If the element is
present in the list, then the process is called successful, and the process returns the
location of that element. Otherwise, the search is called unsuccessful.
• Linear Search and Binary Search are the two popular searching techniques. Here we
will discuss the Binary Search Algorithm.
• Binary search is a search technique that works efficiently on sorted lists. Hence, to search for an element using the binary search technique, we must first ensure that the list is sorted.
• Binary search follows the divide and conquer approach in which the list is divided into
two halves, and the item is compared with the middle element of the list. If the match
is found then, the location of the middle element is returned. Otherwise, we search
into either of the halves depending upon the result produced through the match.
Working of Binary search

Now, let's see the working of the Binary Search Algorithm.

To understand the working of the Binary search algorithm, let's take a sorted array. It will be easy to understand the working of Binary search with an example.
There are two methods to implement the binary search algorithm -
•Iterative method
•Recursive method
The recursive method of binary search follows the divide and conquer approach.
Let the elements of the array be -

Let the element to search be K = 56.

We use the following formula to calculate the mid of the array -
    mid = (beg + end) / 2
Binary Search Example

#include <stdio.h>

int binarySearch(int[], int, int, int);

int main()
{
    int arr[10] = {16, 19, 20, 23, 45, 56, 78, 90, 96, 100};
    int item, location = -1;

    printf("Enter the item which you want to search ");
    scanf("%d", &item);

    location = binarySearch(arr, 0, 9, item);

    if (location != -1)
        printf("Item found at location %d", location);
    else
        printf("Item not found");

    return 0;
}

int binarySearch(int a[], int beg, int end, int item)
{
    int mid;
    if (end >= beg)
    {
        mid = (beg + end) / 2;
        if (a[mid] == item)
            return mid + 1;    /* 1-based location */
        else if (a[mid] < item)
            return binarySearch(a, mid + 1, end, item);
        else
            return binarySearch(a, beg, mid - 1, item);
    }
    return -1;
}
Advantages of using binary search Algorithm
• When it comes to searching large data, it is quite efficient, as it eliminates half of the remaining elements with each comparison.
• It makes far fewer comparisons than linear search and thus has a better time complexity, O(log n).
• As it breaks the array in half at each step, it is considered an improvement over linear search.
Bubble Sort
• The working procedure of bubble sort is the simplest. This topic is important for students, as bubble sort often appears as an examination question.
• Bubble sort works by repeatedly swapping adjacent elements until they are in the intended order. It is called bubble sort because the movement of array elements resembles the movement of air bubbles in water: bubbles rise to the surface, and similarly the largest remaining element moves to the end of the array in each iteration.
• Although it is simple to use, it is primarily used as an educational tool because its performance is poor in the real world. It is not suitable for large data sets. The average and worst-case complexity of bubble sort is O(n²), where n is the number of items.
Bubble sort is majorly used where -
• complexity does not matter
• simple and short code is preferred

Algorithm

1.begin BubbleSort(arr, n)
2.   for i = 0 to n-2           // one pass per iteration
3.      for j = 0 to n-2-i      // compare adjacent pairs
4.         if arr[j] > arr[j+1]
5.            swap(arr[j], arr[j+1])
6.         end if
7.      end for
8.   end for
9.   return arr
10.end BubbleSort
Working of Bubble sort Algorithm

Now, let's see the working of Bubble sort Algorithm.


To understand the working of the bubble sort algorithm, let's take an unsorted array. We are taking a short array, as we know the complexity of bubble sort is O(n²).
Let the elements of the array be -

First Pass

Sorting will start from the initial two elements. Let us compare them to check which is greater.

Here, 32 is greater than 13 (32 > 13), so they are already in order. Now, compare 32 with 26.
Here, 26 is smaller than 32, so swapping is required. After swapping, the new array will look like -

Now, compare 32 and 35.
Here, 35 is greater than 32, so no swapping is required as they are already in order.
Now, the comparison will be between 35 and 10.
Here, 10 is smaller than 35, so they are not in order and swapping is required. Now, we reach the end of the array. After the first pass, the array will be -

Now, move to the second iteration.
Second Pass

The same process will be followed for the second iteration.

Here, 10 is smaller than 32. So, swapping is required. After swapping, the array will be -

Now, move to the third iteration.


Third Pass
The same process will be followed for the third iteration.

Here, 10 is smaller than 26. So, swapping is required. After swapping, the array will be -

Now, move to the fourth iteration.

Fourth Pass

Similarly, after the fourth iteration, the array will be -

Hence, there is no swapping required, so the array is completely sorted.
Bubble Sort Example
Bubble sort complexity
Now, let's see the time complexity of bubble sort in the best case, average case, and worst case. We will also see the space complexity of bubble sort.

1. Time Complexity

•Best Case Complexity - It occurs when no sorting is required, i.e. the array is already sorted. The best-case time complexity of bubble sort is O(n).

•Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly ascending nor properly descending. The average case time complexity of bubble sort is O(n²).

•Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order. That means, suppose you have to sort the array elements in ascending order, but its elements are in descending order. The worst-case time complexity of bubble sort is O(n²).

Case            Time Complexity

Best Case       O(n)

Average Case    O(n²)

Worst Case      O(n²)

2. Space Complexity

•The space complexity of bubble sort is O(1), because only one extra variable is required for swapping.
•The space complexity of optimized bubble sort is also O(1); the two extra variables it uses are still a constant amount of space.
Now, let's discuss the optimized bubble sort algorithm.

Space Complexity O(1)

Stable YES
#include <stdio.h>

void print(int a[], int n)    /* function to print array elements */
{
    int i;
    for (i = 0; i < n; i++)
    {
        printf("%d ", a[i]);
    }
}

void bubble(int a[], int n)   /* function to implement bubble sort */
{
    int i, j, temp;
    for (i = 0; i < n - 1; i++)
    {
        for (j = 0; j < n - 1 - i; j++)
        {
            if (a[j] > a[j + 1])   /* swap adjacent elements that are out of order */
            {
                temp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = temp;
            }
        }
    }
}

int main()
{
    int a[5] = { 10, 35, 32, 13, 26 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    print(a, n);
    bubble(a, n);
    printf("\nAfter sorting array elements are - \n");
    print(a, n);
    return 0;
}
Insertion Sort
• The working procedure of insertion sort is also simple. This topic is important for students, as insertion sort often appears as an examination question.
• Insertion sort works similar to sorting playing cards in your hands. It is assumed that the first card is already sorted; then we select an unsorted card. If the selected card is greater than the first card, it is placed to the right; otherwise, it is placed to the left. Similarly, all unsorted cards are taken and put in their exact place.
• The same approach is applied in insertion sort: take one element at a time and insert it at its correct position in the sorted part of the array. Although it is simple to use, it is not appropriate for large data sets, as the time complexity of insertion sort in the average and worst case is O(n²), where n is the number of items. Insertion sort is less efficient than other sorting algorithms like heap sort, quick sort, merge sort, etc.
Insertion sort has various advantages, such as -
•Simple implementation
•Efficient for small data sets
•Adaptive, i.e., it is appropriate for data sets that are already substantially sorted.

Algorithm
The simple steps of achieving the insertion sort are listed as follows -

Step 1 - If the element is the first element, assume that it is already sorted. Return 1.

Step 2 - Pick the next element, and store it separately in a key.

Step 3 - Now, compare the key with all elements in the sorted array.

Step 4 - If the element in the sorted array is smaller than the current element, then move to the next element. Else, shift greater elements in the array towards the right.

Step 5 - Insert the value.

Step 6 - Repeat until the array is sorted.
Working of Insertion sort Algorithm
Now, let's see the working of the insertion sort Algorithm.
To understand the working of the insertion sort algorithm, let's take an unsorted array. It will be easier to understand the insertion sort via an example.
Let the elements of the array be -

Initially, the first two elements are compared in insertion sort.

Here, 31 is greater than 12. That means both elements are already in ascending order. So, for
now, 12 is stored in a sorted sub-array.

Now, move to the next two elements and compare them.


Here, 25 is smaller than 31. So, 31 is not at correct position. Now, swap 31 with 25. Along with
swapping, insertion sort will also check it with all elements in the sorted array.
For now, the sorted array has only one element, i.e. 12. So, 25 is greater than 12. Hence, the
sorted array remains sorted after swapping.

Now, two elements in the sorted array are 12 and 25. Move forward to the next elements that are 31
and 8.

Both 31 and 8 are not sorted. So, swap them.

After swapping, elements 25 and 8 are unsorted.


So, swap them.

Now, elements 12 and 8 are unsorted.

So, swap them too.

Now, the sorted array has three items that are 8, 12 and 25. Move to the next items that are 31 and
32.

Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.

Move to the next elements that are 32 and 17.


17 is smaller than 32. So, swap them

Swapping makes 31 and 17 unsorted. So, swap them too.

Now, swapping makes 25 and 17 unsorted. So, perform swapping again.

Now, the array is completely sorted.


Insertion Sort Example
#include <stdio.h>

void insert(int a[], int n)   /* function to sort an array with insertion sort */
{
    int i, j, temp;
    for (i = 1; i < n; i++)
    {
        temp = a[i];
        j = i - 1;
        while (j >= 0 && temp <= a[j])   /* move the elements greater than temp
                                            one position ahead of their current position */
        {
            a[j + 1] = a[j];
            j = j - 1;
        }
        a[j + 1] = temp;
    }
}

void printArr(int a[], int n)   /* function to print the array */
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
}

int main()
{
    int a[] = { 12, 31, 25, 8, 32, 17 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArr(a, n);
    insert(a, n);
    printf("\nAfter sorting array elements are - \n");
    printArr(a, n);
    return 0;
}
Insertion sort complexity
Now, let's see the time complexity of insertion sort in the best case, average case, and worst case. We will also see the space complexity of insertion sort.

1. Time Complexity

Case            Time Complexity

Best Case       O(n)
Average Case    O(n²)
Worst Case      O(n²)

•Best Case Complexity - It occurs when no sorting is required, i.e. the array is already sorted. The best-case time complexity of insertion sort is O(n).

•Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly ascending nor properly descending. The average case time complexity of insertion sort is O(n²).

•Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order. That means, suppose you have to sort the array elements in ascending order, but its elements are in descending order. The worst-case time complexity of insertion sort is O(n²).

2. Space Complexity
•The space complexity of insertion sort is O(1), because only one extra variable is required for swapping.

Space Complexity  O(1)

Stable            YES
Selection Sort
• The working procedure of selection sort is also simple. This topic is important for students, as selection sort often appears as an examination question.
• In selection sort, the smallest value among the unsorted elements of the array is selected in every pass and inserted at its appropriate position in the array. It is an in-place comparison sorting algorithm and one of the simplest. In this algorithm, the array is divided into two parts: the sorted part and the unsorted part. Initially, the sorted part of the array is empty, and the unsorted part is the given array. The sorted part is placed at the left, while the unsorted part is placed at the right.
• In selection sort, the smallest element is selected from the unsorted array and placed at the first position. After that, the second smallest element is selected and placed in the second position. The process continues until the array is entirely sorted.
• The average and worst-case complexity of selection sort is O(n²), where n is the number of items. Due to this, it is not suitable for large data sets.
Algorithm
Selection sort is generally used when -
•A small array is to be sorted
•Swapping cost doesn't matter
•It is compulsory to check all elements

Now, let's see the algorithm of selection sort.

1.SELECTION SORT(arr, n)
2.
3.Step 1: Repeat Steps 2 and 3 for i = 0 to n-1
4.Step 2: CALL SMALLEST(arr, i, n, pos)
5.Step 3: SWAP arr[i] with arr[pos]
6.[END OF LOOP]
7.Step 4: EXIT
8.
9.SMALLEST (arr, i, n, pos)
10.Step 1: [INITIALIZE] SET SMALL = arr[i]
11.Step 2: [INITIALIZE] SET pos = i
12.Step 3: Repeat for j = i+1 to n-1
13.    if (SMALL > arr[j])
14.        SET SMALL = arr[j]
15.        SET pos = j
16.    [END OF if]
17.[END OF LOOP]
18.Step 4: RETURN pos
Working of Selection sort Algorithm

Now, let's see the working of the Selection sort Algorithm.


To understand the working of the Selection sort algorithm, let's take an unsorted array. It will be
easier to understand the Selection sort via an example.
Let the elements of the array be -

For the first position in the sorted array, the whole array is scanned sequentially. At present, 12 is stored at the first position; after searching the entire array, it is found that 8 is the smallest value.

So, swap 12 with 8. After the first iteration, 8 will appear at the first position in the sorted array.

For the second position, where 29 is currently stored, we again sequentially scan the rest of the items of the unsorted array. After scanning, we find that 12 is the second lowest element in the array, which should appear at the second position.
Now, swap 29 with 12. After the second iteration, 12 will appear at the second position in the
sorted array. So, after two iterations, the two smallest values are placed at the beginning in a
sorted way.

The same process is applied to the rest of the


array elements. Now, we are showing a
pictorial representation of the entire sorting
process.

Now, the array is completely sorted.


Selection Sort Example
Selection sort complexity
Now, let's see the time complexity of selection sort in the best case, average case, and worst case. We will also see the space complexity of the selection sort.

1. Time Complexity

Case            Time Complexity

Best Case       O(n²)
Average Case    O(n²)
Worst Case      O(n²)

•Best Case Complexity - It occurs when no sorting is required, i.e. the array is already sorted. Even then, every pass must scan the unsorted part, so the best-case time complexity of selection sort is O(n²).

•Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly ascending nor properly descending. The average case time complexity of selection sort is O(n²).

•Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order. That means, suppose you have to sort the array elements in ascending order, but its elements are in descending order. The worst-case time complexity of selection sort is O(n²).

2. Space Complexity

The space complexity of selection sort is O(1), because only one extra variable is required for swapping.

Space Complexity  O(1)

Stable            NO (the long-distance swap can reorder equal elements)
Program: Write a program to implement selection sort in C language.

#include <stdio.h>

void selection(int arr[], int n)
{
    int i, j, small;

    for (i = 0; i < n; i++)   /* one by one move the boundary of the unsorted subarray */
    {
        small = i;            /* index of the minimum element in the unsorted part */

        for (j = i + 1; j < n; j++)
            if (arr[j] < arr[small])
                small = j;

        /* swap the minimum element with the first element of the unsorted part */
        int temp = arr[small];
        arr[small] = arr[i];
        arr[i] = temp;
    }
}

void printArr(int a[], int n)   /* function to print the array */
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
}

int main()
{
    int a[] = { 12, 31, 25, 8, 32, 17 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArr(a, n);
    selection(a, n);
    printf("\nAfter sorting array elements are - \n");
    printArr(a, n);
    return 0;
}
Quick Sort
The working procedure of Quicksort is also simple. This topic is important for students, as quicksort often appears as an examination question.

Sorting is a way of arranging items in a systematic manner. Quicksort is the widely used
sorting algorithm that makes n log n comparisons in average case for sorting an array of n
elements. It is a faster and highly efficient sorting algorithm. This algorithm follows the
divide and conquer approach. Divide and conquer is a technique of breaking down the
algorithms into subproblems, then solving the subproblems, and combining the results
back together to solve the original problem.

Divide: In Divide, first pick a pivot element. After that, partition or rearrange the array into
two sub-arrays such that each element in the left sub-array is less than or equal to the
pivot element and each element in the right sub-array is larger than the pivot element.

Conquer: Recursively, sort two subarrays with Quicksort.


Combine: Combine the already sorted array.
Quicksort picks an element as pivot, and then it partitions the given array around the picked pivot element. In
quick sort, a large array
is divided into two arrays in which one holds values that are smaller than the specified value (Pivot), and another
array holds the values that are greater than the pivot.
After that, left and right sub-arrays are also partitioned using the same approach. It will continue until the single
element remains in the sub-array.

Choosing the pivot

Picking a good pivot is necessary for a fast implementation of quicksort. However, it is difficult to determine a good pivot in general. Some of the ways of choosing a pivot are as follows -
•Pivot can be random, i.e. select a random pivot from the given array.
•Pivot can be either the rightmost element or the leftmost element of the given array.
•Select the median as the pivot element.
Quick Sort Example
#include <stdio.h>

void quicksort(int number[25], int first, int last)
{
    int i, j, pivot, temp;

    if (first < last)
    {
        pivot = first;
        i = first;
        j = last;

        while (i < j)
        {
            while (number[i] <= number[pivot] && i < last)
                i++;
            while (number[j] > number[pivot])
                j--;
            if (i < j)
            {
                temp = number[i];
                number[i] = number[j];
                number[j] = temp;
            }
        }

        temp = number[pivot];
        number[pivot] = number[j];
        number[j] = temp;

        quicksort(number, first, j - 1);
        quicksort(number, j + 1, last);
    }
}

int main()
{
    int i, count, number[25];

    printf("How many elements are you going to enter?: ");
    scanf("%d", &count);

    printf("Enter %d elements: ", count);
    for (i = 0; i < count; i++)
        scanf("%d", &number[i]);

    quicksort(number, 0, count - 1);

    printf("Order of sorted elements: ");
    for (i = 0; i < count; i++)
        printf(" %d", number[i]);

    return 0;
}
Quicksort complexity
Now, let's see the time complexity of quicksort in best case, average case, and in worst case. We will also see
the space complexity of quicksort.
1. Time Complexity

Case Time Complexity

Best Case O(n*logn)

Average Case O(n*logn)

Worst Case O(n²)

•Best Case Complexity - In Quicksort, the best-case occurs when the pivot element is the
middle element or near to the middle element. The best-case time complexity of quicksort
is O(n*logn).
•Average Case Complexity - It occurs when the array elements are in jumbled order that is
not properly ascending and not properly descending. The average case time complexity of
quicksort is O(n*logn).
•Worst Case Complexity - In quick sort, the worst case occurs when the pivot element is either the greatest or the smallest element. Suppose the pivot element is always the last element of the array; then the worst case occurs when the given array is already sorted in ascending or descending order. The worst-case time complexity of quicksort is O(n²).
Though the worst-case complexity of quicksort is more than other sorting algorithms such
as Merge sort and Heap sort, still it is faster in practice. Worst case in quick sort rarely
occurs because by changing the choice of pivot, it can be implemented in different ways. Worst
case in quicksort can be avoided by choosing the right pivot element.

2. Space Complexity
•The space complexity of quicksort comes from the recursion stack: O(log n) on average (with balanced partitions) and O(n) in the worst case.

Space Complexity  O(log n) average, O(n) worst

Stable NO
Merge Sort
Merge sort is a sorting technique that follows the divide and conquer approach. This topic is important for students, as merge sort is widely asked about in examinations and in coding or technical interviews for software engineers.
Merge sort is similar to the quick sort algorithm in that it uses the divide and conquer approach to sort the elements. It is one of the most popular and efficient sorting algorithms. It divides the given list into two equal halves, calls itself for the two halves, and then merges the two sorted halves. We have to define the merge() function to perform the merging.
The sub-lists are divided again and again into halves until each list cannot be divided further. Then we combine pairs of one-element lists into two-element lists, sorting them in the process. The sorted two-element pairs are merged into four-element lists, and so on, until we get the fully sorted list.
Now, let's see the algorithm of merge sort.
Algorithm
In the following algorithm, arr is the given array, beg is the starting element, and end is the
last element of the array.
1.MERGE_SORT(arr, beg, end)
2.
3.if beg < end
4.set mid = (beg + end)/2
5.MERGE_SORT(arr, beg, mid)
6.MERGE_SORT(arr, mid + 1, end)
7.MERGE (arr, beg, mid, end)
8.end of if
9.
10.END MERGE_SORT
The important part of the merge sort is the MERGE function. This function performs the
merging of two sorted sub-arrays that are A[beg…mid] and A[mid+1…end], to build one
sorted array A[beg…end]. So, the inputs of the MERGE function are A[], beg, mid, and end.
The implementation of the MERGE function is given as follows -

/* Function to merge the subarrays of a[] */
void merge(int a[], int beg, int mid, int end)
{
    int i, j, k;
    int n1 = mid - beg + 1;
    int n2 = end - mid;

    int LeftArray[n1], RightArray[n2];   /* temporary arrays */

    /* copy data to temp arrays */
    for (int i = 0; i < n1; i++)
        LeftArray[i] = a[beg + i];
    for (int j = 0; j < n2; j++)
        RightArray[j] = a[mid + 1 + j];

    i = 0;     /* initial index of first sub-array */
    j = 0;     /* initial index of second sub-array */
    k = beg;   /* initial index of merged sub-array */

    while (i < n1 && j < n2)
    {
        if (LeftArray[i] <= RightArray[j])
        {
            a[k] = LeftArray[i];
            i++;
        }
        else
        {
            a[k] = RightArray[j];
            j++;
        }
        k++;
    }

    while (i < n1)
    {
        a[k] = LeftArray[i];
        i++;
        k++;
    }

    while (j < n2)
    {
        a[k] = RightArray[j];
        j++;
        k++;
    }
}
Working of Merge sort Algorithm
Now, let's see the working of merge sort Algorithm.
To understand the working of the merge sort algorithm, let's take an unsorted
array. It will be easier to understand the merge sort via an example.
Let the elements of the array be -

According to the merge sort, first divide the given array into two equal halves.
Merge sort keeps dividing the list into equal parts until it cannot be further
divided.
As there are eight elements in the given array, so it is divided into two arrays of
size 4.

Now, again divide these two arrays into halves. As they are of size 4, divide them into new arrays of size 2.

Now, again divide these arrays to get the atomic value that cannot be further
divided.

Now, combine them in the same manner they were broken.


In combining, first compare the element of each array and then combine them into
another array in sorted order.
So, first compare 12 and 31, both are in sorted positions. Then compare 25 and 8,
and in the list of two values, put 8 first followed by 25. Then compare 32 and 17,
sort them and put 17 first followed by 32. After that, compare 40 and 42, and place
them sequentially.

In the next iteration of combining, now compare the arrays with two data values and merge them into arrays of four values in sorted order.

Now, there is a final merging of the arrays. After the final merging of
above arrays, the array will look like -

Now, the array is completely sorted.


Merge sort complexity
Now, let's see the time complexity of merge sort in best case, average case, and in
worst case. We will also see the space complexity of the merge sort.
1. Time Complexity

Case Time Complexity

Best Case O(n*logn)


Average Case O(n*logn)
Worst Case O(n*logn)

•Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already
sorted. The best-case time complexity of merge sort is O(n*logn).

•Average Case Complexity - It occurs when the array elements are in jumbled order that is
not properly ascending and not properly descending. The average case time complexity of
merge sort is O(n*logn).

•Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order. That means, suppose you have to sort the array elements in ascending order, but its elements are in descending order. The worst-case time complexity of merge sort is O(n*logn).
Merge Sort
Radix Sort
Heap Sort
