Notes 1

The document discusses various sorting algorithms, including algorithms based on the divide-and-conquer method such as Merge Sort, QuickSort, and Heap Sort, detailing their processes and time complexities. It also covers non-comparison-based algorithms like Counting Sort and Radix Sort, explaining their mechanisms and efficiency. Additionally, it includes a question bank related to algorithm characteristics, stability, and complexity analysis.


Some Topics Of Unit-1

Algorithm:
An algorithm is a finite sequence of instructions, each of which has a clear meaning and can be
performed with a finite amount of effort in a finite length of time. No matter what the input values may
be, an algorithm terminates after executing a finite number of instructions. In addition, every algorithm
must satisfy the following criteria:
Input: there are zero or more quantities, which are externally supplied;
Output: at least one quantity is produced;
Definiteness: each instruction must be clear and unambiguous;
Finiteness: if we trace out the instructions of an algorithm, then for all cases the algorithm will
terminate after a finite number of steps;
Effectiveness: every instruction must be sufficiently basic that it can in principle be carried out by a
person using only pencil and paper. It is not enough that each operation be definite; it must also be
feasible.

The divide-and-conquer method

Many useful algorithms are recursive in structure: to solve a given problem, they
recurse (call themselves) one or more times to handle closely related subproblems.
These algorithms typically follow the divide-and-conquer method: they break the
problem into several subproblems that are similar to the original problem but
smaller in size, solve the subproblems recursively, and then combine these
solutions to create a solution to the original problem.

In the divide-and-conquer method, if the problem is small enough (the base case),
we can solve it directly without recursing. Otherwise (the recursive case), we
perform three characteristic steps:

Divide the problem into one or more subproblems that are smaller instances of the
same problem.

Conquer the subproblems by solving them recursively.

Combine the subproblem solutions to form a solution to the original problem.


The merge sort algorithm closely follows the divide-and-conquer method. In
each step, it sorts a subarray A[p:r], starting with the entire array A[1:n] and
recursing down to smaller and smaller subarrays. Here is how merge sort operates:

Divide the subarray A[p:r] to be sorted into two adjacent subarrays, each of half
the size. To do so, compute the midpoint q of A[p:r] (taking the average of p
and r), and divide A[p:r] into subarrays A[p:q] and A[q+1:r].

Conquer by sorting each of the two subarrays A[p:q] and A[q+1:r]
recursively using merge sort.

Combine by merging the two sorted subarrays A[p:q] and A[q+1:r] back into
A[p:r], producing the sorted answer.

The recursion 'bottoms out' (it reaches the base case) when the subarray A[p:r]
to be sorted has just 1 element, that is, when p equals r. As we noted in the
initialization argument for INSERTION-SORT's loop invariant, a subarray comprising
just a single element is always sorted.


The time complexity of merge sort is O(n log n) in all cases, including the best,
average, and worst cases. This is because the algorithm always splits an
array into two halves and takes linear time to merge them.
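The steps above can be sketched in Python (a minimal sketch using 0-based, half-open subarrays a[p:r], unlike the 1-based A[p:r] notation of the text):

```python
def merge_sort(a, p=0, r=None):
    """Sort a[p:r] in place by divide and conquer (0-based, r exclusive)."""
    if r is None:
        r = len(a)
    if r - p <= 1:              # base case: 0 or 1 elements, already sorted
        return a
    q = (p + r) // 2            # Divide: compute the midpoint
    merge_sort(a, p, q)         # Conquer: sort the left half
    merge_sort(a, q, r)         # Conquer: sort the right half
    merged = []                 # Combine: merge the two sorted halves
    i, j = p, q
    while i < q and j < r:
        if a[i] <= a[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(a[j]); j += 1
    merged.extend(a[i:q])
    merged.extend(a[j:r])
    a[p:r] = merged
    return a
```

The merge step copies at most r − p elements, which is where the linear cost per level comes from.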
QuickSort
QuickSort is a sorting algorithm based on the divide-and-conquer method. It picks
an element as a pivot and partitions the given array around the picked pivot,
placing the pivot at its correct position in the sorted array.

The time complexity of the QuickSort algorithm is:


• Best case: O(n log n)
• Average case: O(n log n)
• Worst case: O(n²)
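A minimal Python sketch of this idea (Lomuto partition with the last element as pivot; other pivot choices are also common):

```python
def partition(a, lo, hi):
    """Place a[hi] (the pivot) at its final sorted index and return that index."""
    pivot = a[hi]
    i = lo - 1
    for j in range(lo, hi):
        if a[j] <= pivot:           # grow the "<= pivot" region
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[hi] = a[hi], a[i + 1]
    return i + 1

def quicksort(a, lo=0, hi=None):
    """In-place quicksort: partition, then recurse on both sides of the pivot."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)
        quicksort(a, lo, p - 1)
        quicksort(a, p + 1, hi)
    return a
```

The O(n²) worst case occurs when every partition is maximally unbalanced, e.g. on an already-sorted array with this pivot rule.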

Heap Sorting

Heapsort's running time is O(n lg n). Like insertion sort, but unlike merge sort,
heapsort sorts in place: only a constant number of array elements are stored outside
the input array at any time. Thus, heapsort combines the better attributes of the two
sorting algorithms we have already discussed.

Heapsort also introduces another algorithm design technique: the use of a data
structure, in this case one we call a "heap," to manage information during the
execution of the algorithm. Not only is the heap data structure useful for heapsort, it
also makes an efficient priority queue.
HEAPSORT(A)
1 BUILD-HEAP(A)
2 for i ← length[A] downto 2
3     do exchange A[1] ↔ A[i]
4        heap-size[A] ← heap-size[A] − 1
5        HEAPIFY(A, 1)

BUILD-HEAP(A)
1 heap-size[A] ← length[A]
2 for i ← ⌊length[A]/2⌋ downto 1
3     do HEAPIFY(A, i)

HEAPIFY(A, i)
1 l ← LEFT(i)
2 r ← RIGHT(i)
3 if l ≤ heap-size[A] and A[l] > A[i]
4    then largest ← l
5    else largest ← i
6 if r ≤ heap-size[A] and A[r] > A[largest]
7    then largest ← r
8 if largest ≠ i
9    then exchange A[i] ↔ A[largest]
10        HEAPIFY(A, largest)
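The pseudocode translates to Python roughly as follows (0-based indices, so the children of node i are 2i+1 and 2i+2 rather than 2i and 2i+1):

```python
def heapify(a, i, heap_size):
    """Sift a[i] down until the subtree rooted at i is a max-heap."""
    l, r = 2 * i + 1, 2 * i + 2        # LEFT(i), RIGHT(i) in 0-based form
    largest = i
    if l < heap_size and a[l] > a[largest]:
        largest = l
    if r < heap_size and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        heapify(a, largest, heap_size)

def build_heap(a):
    """Build a max-heap bottom-up, starting from the last internal node."""
    for i in range(len(a) // 2 - 1, -1, -1):
        heapify(a, i, len(a))

def heapsort(a):
    build_heap(a)
    for end in range(len(a) - 1, 0, -1):
        a[0], a[end] = a[end], a[0]    # move the current max to its final slot
        heapify(a, 0, end)             # restore the heap on the shrunken prefix
    return a
```

Only a constant number of variables live outside the array, so the sort is in place.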
What is Counting Sort?
Counting Sort is a non-comparison-based sorting algorithm. It is
particularly efficient when the range of input values is small compared to the
number of elements to be sorted. The basic idea behind Counting Sort is to
count the frequency of each distinct element in the input array and use that
information to place the elements in their correct sorted positions. The time
complexity of the counting sort algorithm is O(n + k), where n is the number of
items being sorted and k is the number of possible values.

COUNTING-SORT(A, B, k):
    let C[0..k] be a new array
    for i = 0 to k
        C[i] = 0
    for j = 1 to A.length
        C[A[j]] = C[A[j]] + 1
    // C[i] now contains the number of elements equal to i
    for i = 1 to k
        C[i] = C[i] + C[i - 1]
    // C[i] now contains the number of elements less than or equal to i
    for j = A.length downto 1
        B[C[A[j]]] = A[j]
        C[A[j]] = C[A[j]] - 1

How does Counting Sort Algorithm work?


Step 1:
• Find out the maximum element from the given array.
Step 2:
• Initialize a countArray[] of length max+1 with all elements as 0. This
array will be used for storing the occurrences of the elements of the input
array.

Step 3:
• In the countArray[], store the count of each unique element of the input
array at their respective indices.
• For Example: The count of element 2 in the input array is 2. So,
store 2 at index 2 in the countArray[]. Similarly, the count of element 5 in
the input array is 1, hence store 1 at index 5 in the countArray[].

Step 4:
• Store the cumulative sum or prefix sum of the elements of
the countArray[] by doing countArray[i] = countArray[i – 1] +
countArray[i]. This will help in placing the elements of the input array at
the correct index in the output array.

Step 5:
• Iterate from the end of the input array, because traversing the input array
from the end preserves the order of equal elements, which makes this
sorting algorithm stable.
• Update outputArray[ countArray[ inputArray[i] ] – 1 ] = inputArray[i].
• Also, update countArray[ inputArray[i] ] = countArray[ inputArray[i] ] – 1.

Steps 6-12: For i = 6, 5, 4, 3, 2, 1, 0 (in that order), apply the two updates of Step 5:
• outputArray[ countArray[ inputArray[i] ] – 1 ] = inputArray[i]
• countArray[ inputArray[i] ] = countArray[ inputArray[i] ] – 1

Counting Sort Algorithm:


• Declare an auxiliary array countArray[] of size max(inputArray[]) + 1 and
initialize it with 0.
• Traverse array inputArray[] and map each element of inputArray[] as an index
of the countArray[] array, i.e., execute countArray[inputArray[i]]++ for 0 <= i <
N.
• Calculate the prefix sum at every index of array countArray[].
• Create an array outputArray[] of size N.
• Traverse array inputArray[] from the end and update outputArray[ countArray[
inputArray[i] ] – 1 ] = inputArray[i]. Also, update countArray[ inputArray[i]
] = countArray[ inputArray[i] ] – 1.

Q.Sort the following array by counting sort A={2,5,3,0,2,3,0,3}
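A sketch of COUNTING-SORT in Python, applied to the array in the question (0-based arrays, so the output index is C[A[j]] − 1 rather than the 1-based C[A[j]]):

```python
def counting_sort(a):
    """Stable counting sort for non-negative integers, O(n + k)."""
    k = max(a)
    count = [0] * (k + 1)
    for x in a:                    # count occurrences of each value
        count[x] += 1
    for i in range(1, k + 1):      # prefix sums: count[i] = # elements <= i
        count[i] += count[i - 1]
    out = [0] * len(a)
    for x in reversed(a):          # right-to-left pass keeps equal keys stable
        count[x] -= 1
        out[count[x]] = x
    return out
```

For A = {2, 5, 3, 0, 2, 3, 0, 3} this yields {0, 0, 2, 2, 3, 3, 3, 5}.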

Radix Sort
Radix sort is the algorithm used by the card-sorting machines you now find only in
computer museums.

RADIX-SORT(A,n,d)
1 for i=1 to d
2 use a stable sort to sort array A[1…n] on digit i
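A minimal Python sketch of this pseudocode for non-negative integers, using the language's built-in stable sort for each digit pass (decimal digits assumed):

```python
def radix_sort(a):
    """LSD radix sort: d stable passes, one per decimal digit."""
    if not a:
        return a
    digits = len(str(max(a)))                       # d = number of digits
    for i in range(digits):                         # least significant first
        # sorted() is stable, matching the pseudocode's requirement
        a = sorted(a, key=lambda x: (x // 10 ** i) % 10)
    return a
```

Stability of each pass is essential: it preserves the order established by the earlier, less significant digits.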

Q.Using Figure 8.3 as a model, illustrate the operation of RADIX-SORT on the following
list of English words: COW, DOG, SEA, RUG, ROW, MOB, BOX, TAB, BAR, EAR,
TAR, DIG, BIG, TEA, NOW, FOX.

Starting with the unsorted words in the left column, and stably sorting by
progressively more significant letter positions (third, second, then first letter):

COW   SEA   TAB   BAR
DOG   TEA   BAR   BIG
SEA   MOB   EAR   BOX
RUG   TAB   TAR   COW
ROW   RUG   SEA   DIG
MOB   DOG   TEA   DOG
BOX   DIG   DIG   EAR
TAB   BIG   BIG   FOX
BAR   BAR   MOB   MOB
EAR   EAR   DOG   NOW
TAR   TAR   COW   ROW
DIG   COW   ROW   RUG
BIG   ROW   NOW   SEA
TEA   NOW   BOX   TAB
NOW   BOX   FOX   TAR
FOX   FOX   RUG   TEA

Bucket Sort

Bucket sort is a sorting technique that involves dividing elements into
various groups, or buckets. These buckets are formed by uniformly
distributing the elements. Once the elements are divided into buckets, they
can be sorted using any other sorting algorithm. Finally, the sorted elements
are gathered together in an ordered fashion.

Worst-case time complexity: O(n²). The worst case happens when one
bucket gets all the elements. In this case, we will be running insertion sort on
all items, which makes the time complexity O(n²). We can reduce the
worst-case time complexity to O(n log n) by using an O(n log n) algorithm
like merge sort or heap sort to sort the individual buckets, but that will
worsen the running time for cases when buckets have a small number of
items, as insertion sort works better for small arrays.
Best-case time complexity: O(n + k). The best case happens when every
bucket gets an equal number of elements. In this case, every call to insertion
sort takes constant time, as the number of items in every bucket is
constant (assuming that k is linearly proportional to n).
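A minimal sketch for the textbook setting, where inputs are assumed to be uniformly distributed in [0, 1):

```python
def bucket_sort(a, num_buckets=10):
    """Bucket sort for floats in [0, 1): scatter, sort each bucket, gather."""
    buckets = [[] for _ in range(num_buckets)]
    for x in a:
        buckets[int(x * num_buckets)].append(x)   # scatter into buckets
    out = []
    for b in buckets:
        out.extend(sorted(b))   # textbooks use insertion sort per bucket
    out_is_sorted = out
    return out_is_sorted
```

With uniformly distributed input each bucket holds O(1) elements on average, which is what gives the O(n + k) best case.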

Question Bank-Unit-1

Q1.Which of the following sorting algorithms are stable: insertion sort, merge sort,
heap sort, and quick sort? Give a simple scheme that makes any comparison sort
stable. How much additional time and space does your scheme entail?

Insertion sort and merge sort are stable. Heapsort and quicksort are not. To
make any sorting algorithm stable we can preprocess, replacing each element of
the array with an ordered pair: the first entry is the value of the element,
and the second is the index of the element. For example, the array
[2, 1, 1, 3, 4, 4, 4] would become [(2, 1), (1, 2), (1, 3), (3, 4), (4, 5), (4, 6), (4, 7)]. We
now interpret (i, j) < (k, m) if i < k, or i = k and j < m. Under this definition
of less-than, the algorithm is guaranteed to be stable, because each of the new
elements is distinct and the index comparison ensures that if a repeated element
appeared later in the original array, it must appear later in the sorted array.
This doubles the space requirement, but the running time is asymptotically
unchanged.
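The scheme can be sketched in Python, with a heap-based sort standing in for any unstable comparison sort (function names here are illustrative):

```python
import heapq

def decorate(a):
    """Pair each element with its 1-based index; all pairs are distinct,
    so any comparison sort on the pairs yields a stable order on the values."""
    return [(v, i) for i, v in enumerate(a, start=1)]

def undecorate(pairs):
    """Drop the indices again after sorting."""
    return [v for v, _ in pairs]

def heap_sort_pairs(pairs):
    """An unstable heap-based sort; on decorated pairs it behaves stably,
    because tuple comparison breaks value ties by original index."""
    heapq.heapify(pairs)
    return [heapq.heappop(pairs) for _ in range(len(pairs))]
```

Equal values come out in increasing index order, i.e. in their original relative order, which is exactly the definition of stability.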

Q.What is stable sorting algorithm? Which of the sorting algorithms we have seen
are stable and which are unstable? Give name with explanation.

Q.Write an algorithm of merge sort and prove its worst time complexity.

Q.What do you mean by algorithm? Write the characteristic of algorithm.


Q.Show that equation are correct : 10n^2+9 = O(n^2)

Q.Sort the following array by counting sort A={2,5,3,0,2,3,0,3}

Q.Solve the following recurrence using Master method:


T(n) = 4T(n/3) + n²

Q.Name the sorting algorithm that is most practically used and also write its Time Complexity.

Q Compare Time Complexity with Space Complexity.

Q.Solve the following By Recursion Tree Method


T(n) = n + T(n/5) + T(4n/5)

ANS. Draw a recursion tree based on the given recurrence relation.

The given recurrence relation shows-


• A problem of size n will get divided into 2 sub-problems: one of size n/5 and another
of size 4n/5.
• Then, the sub-problem of size n/5 will get divided into 2 sub-problems: one of size
n/5^2 and another of size 4n/5^2.
• On the other side, the sub-problem of size 4n/5 will get divided into 2 sub-problems:
one of size 4n/5^2 and another of size 4^2·n/5^2, and so on.
• At the bottom-most layer, the size of the sub-problems will reduce to 1.

The given recurrence relation shows-


• The cost of dividing a problem of size n into its 2 sub-problems and then combining
its solution is n.
• The cost of dividing a problem of size n/5 into its 2 sub-problems and then combining
its solution is n/5.
• The cost of dividing a problem of size 4n/5 into its 2 sub-problems and then
combining its solution is 4n/5 and so on.
Step-02:

Determine cost of each level-


• Cost of level-0 = n
• Cost of level-1 = n/5 + 4n/5 = n
• Cost of level-2 = n/5^2 + 4n/5^2 + 4n/5^2 + 4^2·n/5^2 = 25n/25 = n

Step-03:

Determine the total number of levels in the recursion tree. We will consider the rightmost
subtree, as it goes down to the deepest level-
• Size of sub-problem at level-0 = (4/5)^0·n
• Size of sub-problem at level-1 = (4/5)^1·n
• Size of sub-problem at level-2 = (4/5)^2·n

Continuing in a similar manner, we have-

Size of sub-problem at level-i = (4/5)^i·n
Suppose at level-x (the last level), the size of the sub-problem becomes 1. Then-
(4/5)^x·n = 1
(4/5)^x = 1/n
Taking log on both sides, we get-
x·log(4/5) = log(1/n)
x = log_{5/4} n

∴ Total number of levels in the recursion tree = log_{5/4} n + 1

Step-04:

Determine the number of nodes at each level-


• Level-0 has 2^0 = 1 node
• Level-1 has 2^1 = 2 nodes
• Level-2 has 2^2 = 4 nodes

Not every branch survives to the deepest level, however: the sub-problem sizes on any
level sum to at most n, and every leaf corresponds to a sub-problem of size 1, so the
tree has at most n leaves in total.

Step-05:

Determine the cost of the last level-


Cost of last level ≤ n × T(1) = θ(n)

Step-06:

Add the costs of all the levels of the recursion tree and simplify the expression so obtained in
terms of asymptotic notation-

T(n) ≤ n × (log_{5/4} n + 1) + θ(n) = O(n log n)

The first log_5 n levels are complete and each costs exactly n, which gives the matching
lower bound T(n) ≥ n·log_5 n = Ω(n log n). Hence T(n) = θ(n log n).

Q.Explain HEAP-SORT on the array. Illustrate the operation of HEAP-SORT on the array
A= {6, 14, 3, 25, 2, 10, 20, 7, 6}

Q.Solve the recurrence relation T(n) = 2T(n/2) + n² + 2n + 1

Q.Explain Binary Search Tree using Example.

Q.Explain Bucket Sort Using Example.

Q.What is Master Theorem to solve Recurrence Relation?

Q.Derive the time complexity of Merge Sort.

Q.Discuss the basic steps in the complete development of an algorithm.

Q.Explain counting sort using example.


Q.Explain Insertion sort using Example.

Q.Explain Quick Sort using Example.

Q. Explain Omega, Big O and theta notation .
