
ANALYSIS AND DESIGN

OF ALGORITHMS
(UNIT 2)
OUTLINE
> The efficient algorithm
> Average, Best and worst case analysis
> Amortized analysis & Asymptotic Notations
> Analyzing control statement
> Loop invariant and the correctness of the algorithm
> Sorting Algorithms and analysis :
Bubble sort, Selection sort, Insertion sort, Shell sort, Heap sort
> Sorting in linear time :
Bucket sort, Radix sort and Counting sort
What is Analysis of an Algorithm?

• Analyzing an algorithm means calculating/predicting the resources that the algorithm requires.
• Analysis provides theoretical estimation for the required resources of an algorithm to solve a specific computational
problem.
• Two most important resources are computing time (time complexity) and storage space (space complexity).

Why Analysis is required?


• By analyzing some of the candidate algorithms for a problem, the most efficient one can be easily identified.

> The efficient algorithm


• Analysis of algorithm is required to measure the efficiency of algorithm. Only after determining the efficiency
of various algorithms, you will be able to make a well informed decision for selecting the best algorithm to
solve a particular problem.
• We will compare algorithms based on their execution time. Efficiency of an algorithm means how fast it runs.
• If we want to measure the amount of storage that an algorithm uses as a function of the size of the instances,
there is a natural unit available: the bit.
• On the other hand, if we want to measure the efficiency of an algorithm in terms of the time it takes to arrive
at a result, there is no obvious choice.
Characteristics of an Efficient Algorithm :
1. Correctness – always gives the right answer.
2. Optimal Time Complexity – completes in the minimum number of steps.
3. Low Space Usage – uses minimal memory.
4. Scalability – performs well even as input size grows.
Efficiency is typically measured using :
1. Time Complexity – how the computation time grows with input size.
2. Space Complexity – how memory usage grows with input size.
Linear Search :
• Suppose, you are given a jar containing some business cards.
• You are asked to determine whether the name “Bill Gates" is in the jar.
• To do this, you decide to simply go through all the cards one by one.
• How long this takes?
• Can be determined by how many cards are in the jar, i.e., Size of Input.
• Linear search is a method for finding a particular value from the given list.
• The algorithm checks each element, one at a time and in sequence, until the desired element is found.
• Linear search is the simplest search algorithm. It is a special case of brute-force search.
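The sequential scan described above can be written as a short C sketch (the function name and return convention are illustrative, not from the slides):

```c
#include <stddef.h>

/* Linear search: scan the array left to right, one element at a time,
 * and return the index of the first element equal to key, or -1 if the
 * key is absent (unsuccessful search). */
int linear_search(const int *a, size_t n, int key) {
    for (size_t i = 0; i < n; i++)
        if (a[i] == key)
            return (int)i;      /* successful search */
    return -1;                  /* every element was compared */
}
```

The number of comparisons performed is exactly the quantity analyzed in the best/worst/average case discussion that follows.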

Class Task : Write down about binary search


> Average, Best and worst case analysis
1) Worst Case Analysis
• In worst case analysis, we calculate an upper bound on the running time of an algorithm.
• We must know the case that causes the maximum number of operations to be executed.
• In the linear search problem, the worst case occurs when x is at the last location or not present at all;
the maximum number of comparisons is required.

Average Case Analysis


• In average case analysis, we take all possible inputs and calculate the computing time for each of
them.
• Sum all the calculated values and divide the sum by the total number of inputs. We must know
(or predict) the distribution of cases.
• The average number of comparisons is required.
Best Case Analysis
• In the best case analysis, we calculate a lower bound on the running time of an algorithm.
• We must know the case that causes the minimum number of operations to be executed.
• In the linear search problem, the best case occurs when x is present at the first location.
• The minimum number of comparisons is required.
> Asymptotic Notations
• Asymptotic notation is used to describe the running time of an algorithm.
• It shows order of growth of function.
• Using Asymptotic analysis, we can very well define the best case, average case, and
worst case scenario of an algorithm.
1. Time Complexity : how long an algorithm takes to run.
2. Space Complexity : how much memory is used (https://www.geeksforgeeks.org/g-fact-86/)

Real World Example : Video at normal speed - less time – less memory
Video in slow motion - long time – more memory
Linear Search : A = 8 6 12 5 9 7 4 3 16 18
(indices)          0 1 2  3 4 5 6 7 8  9

Searching Key = 7 : Total Comparisons 6
Searching Key = 16 : Total Comparisons 9
Searching Key = 20 : Total Comparisons 10, but “20” is not found
“A search may be successful or may be unsuccessful.”

1. Best Case : If the key element is present at the 1st index, it is the best case, because the best case time
is constant and the number of comparisons is smallest.
O-notation (upper bound) :
Best Case Time B(n) = 1 (one comparison for n elements)
You can also write : B(n) = O(1) - Big Oh, B(n) = Ω(1) - Omega, B(n) = θ(1) - Theta

f(n) ≤ c·g(n) : we say that g(n) is an asymptotic upper bound for f(n).

Class Task : Search and write down about analyzing control statement
2. Worst Case : If the key element is at the last index (or absent), it is called the worst case.
Ω-notation (lower bound) :
Worst Case Time W(n) = n (number of comparisons for n elements)
You can also write : W(n) = O(n) - Big Oh, W(n) = Ω(n) - Omega, W(n) = θ(n) - Theta

c·g(n) ≤ f(n) : we say that g(n) is an asymptotic lower bound for f(n).

3. Average Case : The average case is defined as the sum of the times for all possible cases divided by the
number of cases. (Note : average case analysis may not always be possible.)
Θ-notation (same order) :
Avg. time = (1 + 2 + 3 + … + n) / n = [n(n+1)/2] / n = (n+1)/2, so A(n) = (n+1)/2

BC ≤ AC ≤ WC : time complexity relation

c1·g(n) ≤ f(n) ≤ c2·g(n) : we say that g(n) is an asymptotically tight bound for f(n).
> Concept of Frequency Count
• The time complexity of an algorithm can be computed using the frequency count.
• The frequency count denotes how many times a particular statement is executed.

• In the program below, each executable statement runs exactly once, so each has a frequency count of 1;
the declarations contribute nothing.

#include <stdio.h>

void Display()
{

    int a, b, c;        /* declaration part */

    a = 10;
    b = 10;
    c = a + b;          /* executable part */
    printf("%d", c);

}
> Sorting Algorithms and analysis
Sorting is any process of arranging items systematically or arranging items in a sequence ordered by some criterion.
Applications of Sorting :

1) Phone Bill: the calls made are date wise sorted.


2) Bank statement or Credit card Bill: transactions made are date wise sorted.
3) Filling forms online: “select country” drop down box will have the name of countries sorted in alphabetical
order.
4) Online shopping: the items can be sorted price wise, date wise or relevance wise.
5) Files or folders on your desktop are sorted date wise.

1) Bubble Sort :
• Bubble sort is a sorting algorithm that compares two adjacent elements and swaps them until they are in
the intended order.
• As the largest element moves to the highest index position, the element is said to have “bubbled up” from its
position.
• The largest element is moved to the highest index.
• In the next pass, second largest element bubbles up, then third largest element, and so on.
• After n-1 passes the list gets sorted.
Algorithm:
Step 1 : Begin
Step 2 : Define an array
Step 3 : Compare the 1st element with the 2nd element
* if the 1st element is greater swap them
* move to the next pair of elements
Step 4 : Continue comparing and swapping adjacent elements until the largest element “bubbles up” to the
end of the array in the 1st pass
Step 5 : Repeat the same process for the remaining unsorted array elements
Step 6 : Exit

Ex : 45 34 56 23 12
Ex : 8 7 5 6 2

Pass 1 : 8 7 5 6 2 → 7 8 5 6 2 → 7 5 8 6 2 → 7 5 6 8 2 → 7 5 6 2 8
Pass 2 : 7 5 6 2 → 5 7 6 2 → 5 6 7 2 → 5 6 2 7
Pass 3 : 5 6 2 → 5 2 6
Pass 4 : 5 2 → 2 5

Result : 2, 5, 6, 7, 8

Class Task : 1) 5, 2, 4, 6, 8
2) 6, 5, 7, 4, 5, 2
3) 1, 3, 2, 4, 5
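The passes above can be sketched in C (a minimal sketch; the inner loop shrinks by one each pass because the largest remaining element has already bubbled to the end):

```c
#include <stddef.h>

/* Bubble sort: compare adjacent elements and swap them until in order.
 * After pass k, the k largest elements occupy the last k positions,
 * so each pass scans one element fewer. n-1 passes sort the list. */
void bubble_sort(int *a, size_t n) {
    for (size_t pass = 0; pass + 1 < n; pass++) {
        for (size_t i = 0; i + 1 < n - pass; i++) {
            if (a[i] > a[i + 1]) {      /* adjacent pair out of order */
                int tmp = a[i];
                a[i] = a[i + 1];
                a[i + 1] = tmp;
            }
        }
    }
}
```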
2) Insertion Sort :
• Each element is inserted at its appropriate place; hence the name insertion sort.
• The process starts with the second element.
Algorithm:
Step 1 : Begin
Step 2 : Start with the 2nd element of the array
Step 3 : Compare the current element with the sorted elements before it, shifting each larger element one
position to the right
Step 4 : Place the current element in its correct position within the shifted portion
Step 5 : Move to the next unsorted element and repeat steps 3 & 4 until all elements are sorted
Step 6 : Exit

Ex : 7 6 3 4 1
7 6 3 4 1 → 6 7 3 4 1 → 3 6 7 4 1 → 3 4 6 7 1 → 1 3 4 6 7

Ex : 7 6 5 8
7 6 5 8 → 6 7 5 8 → 5 6 7 8 → 5 6 7 8

Class Task : 1) 4, 2, 6, 1
2) 8, 5, 6, 7, 3
3) 1, 2, 8, 3, 5
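The shift-and-insert steps above can be sketched in C (a minimal sketch; larger elements are shifted right, then the current element is dropped into the gap):

```c
#include <stddef.h>

/* Insertion sort: starting with the 2nd element, shift each larger
 * sorted element one position right, then insert the current element
 * at its correct position. */
void insertion_sort(int *a, size_t n) {
    for (size_t j = 1; j < n; j++) {
        int key = a[j];                   /* current unsorted element */
        size_t i = j;
        while (i > 0 && a[i - 1] > key) { /* shift larger elements right */
            a[i] = a[i - 1];
            i--;
        }
        a[i] = key;                       /* insert into the gap */
    }
}
```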
3) Selection Sort :
• Scan the array to find its smallest element and swap it with the first element.
• Then, starting with the second element, scan the remaining list to find the smallest element and swap it with
the second element.
• Continuing in this fashion, the entire list is sorted.

Ex : 7 6 3 4 1
7 6 3 4 1 → 1 6 3 4 7 → 1 3 6 4 7 → 1 3 4 6 7

Algorithm:
Step 1 : Begin
Step 2 : Find the smallest element of the array
Step 3 : Swap the smallest element with the 1st element in the list
Step 4 : Repeat for the rest of the list, swapping each newly found smallest element into place
Step 5 : Keep repeating until the list is sorted
Step 6 : Exit
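The scan-and-swap procedure above can be sketched in C (a minimal sketch; each outer iteration selects the smallest remaining element and swaps it to the front):

```c
#include <stddef.h>

/* Selection sort: find the smallest element of the unsorted suffix
 * a[i..n-1] and swap it with a[i]; repeat for each position. */
void selection_sort(int *a, size_t n) {
    for (size_t i = 0; i + 1 < n; i++) {
        size_t min = i;                    /* index of smallest so far */
        for (size_t j = i + 1; j < n; j++)
            if (a[j] < a[min])
                min = j;
        int tmp = a[i];                    /* swap smallest to front */
        a[i] = a[min];
        a[min] = tmp;
    }
}
```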
4) Shell Sort :
• An improvement over insertion sort.
• Elements at a fixed distance (the gap) are compared.
Algorithm
Step 1 : Initialize the value of N (the number of elements).
Step 2 : Calculate the value of the gap (divide the list into sub-lists of the gap) : Gap = N / 2
Step 3 : Sort the sub-lists (using insertion sort).
Step 4 : Repeat steps 2 and 3 until the elements are sorted.
Example : 10, 23, 95, 45, 78, 63, 12, 20
• N = 8, gap = N / 2 = 8 / 2 = 4, so compare elements a gap of 4 apart.
• For an odd number of elements : gap = (N – 1) / 2
Example : 10, 23, 95, 45, 78 → gap = (5 – 1) / 2 = 2
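The gapped passes above can be sketched in C (a minimal sketch using the N/2, N/4, …, 1 gap sequence from the slide; each pass is an insertion sort over elements a fixed gap apart):

```c
#include <stddef.h>

/* Shell sort: gap-insertion-sort the array with gap = n/2, then halve
 * the gap and repeat until gap = 1 (an ordinary insertion sort pass). */
void shell_sort(int *a, size_t n) {
    for (size_t gap = n / 2; gap > 0; gap /= 2) {
        for (size_t j = gap; j < n; j++) {
            int key = a[j];
            size_t i = j;
            while (i >= gap && a[i - gap] > key) { /* shift by gap */
                a[i] = a[i - gap];
                i -= gap;
            }
            a[i] = key;
        }
    }
}
```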
5) Heap Sort :
• A heap is a nearly-complete binary tree where the parent node is either the minimum or the maximum.
(Complete binary tree : all levels except possibly the last are completely filled.)
• A heap whose root holds the minimum is called a min-heap; a heap whose root holds the maximum is called a max-heap.
• Max-heap:
Parent >= child

• Min-heap:
Parent <= child

• Purpose: when the smallest or highest value is needed instantly.


• It follows two main operations in this procedure:

• Heap construction: First construct a heap for given numbers.


• Deletion of maximum key: Delete the root key and repeat until all the input elements are processed. The root
element is swapped with the last leaf node and removed from the heap; then the heapify method is applied to
rearrange the remaining elements accordingly.
• Structural property: Filled from left to right (complete binary tree)
• Heap order property:
1) The parent node is always greater than the child node (max heap)
2) The parent node is always less than the child node (min-heap)

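The two operations named above (heap construction, then repeated deletion of the maximum key) can be sketched in C over an array-based max-heap (a minimal sketch; helper names are illustrative):

```c
#include <stddef.h>

static void swap_int(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Sift a[i] down until the max-heap property (parent >= child) holds
 * within the first n elements. Children of i are 2i+1 and 2i+2. */
static void heapify(int *a, size_t n, size_t i) {
    for (;;) {
        size_t largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && a[l] > a[largest]) largest = l;
        if (r < n && a[r] > a[largest]) largest = r;
        if (largest == i) break;
        swap_int(&a[i], &a[largest]);
        i = largest;
    }
}

void heap_sort(int *a, size_t n) {
    if (n < 2) return;
    /* Heap construction: build a max-heap bottom-up. */
    for (size_t i = n / 2; i-- > 0;)
        heapify(a, n, i);
    /* Deletion of maximum key: swap the root with the last leaf,
     * shrink the heap by one, and re-heapify the root. */
    for (size_t end = n - 1; end > 0; end--) {
        swap_int(&a[0], &a[end]);
        heapify(a, end, 0);
    }
}
```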
6) Bucket Sort :
• Set up an array of initially empty buckets.
• Put each element into its corresponding bucket.
• Sort each non-empty bucket.
• Visit the buckets in order, put all the elements back into a
sequence, and print them.
Example : 0.78 , 0.17 , 0.39 , 0.26 , 0.72 , 0.94 , 0.21 , 0.12 , 0.23 , 0.68
Total element : 10

Bucket   Contents (insertion order)   After sorting each bucket
0        –                            –
1        0.17  0.12                   0.12  0.17
2        0.26  0.21  0.23             0.21  0.23  0.26
3        0.39                         0.39
4        –                            –
5        –                            –
6        0.68                         0.68
7        0.78  0.72                   0.72  0.78
8        –                            –
9        0.94                         0.94

Result = 0.12 0.17 0.21 0.23 0.26 0.39 0.68 0.72 0.78 0.94
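The bucket table above can be sketched in C for values in [0, 1) (a minimal sketch; the fixed bucket count and per-bucket capacity are assumptions for small inputs, not part of the algorithm):

```c
#include <stddef.h>

/* Bucket sort for doubles in [0,1): element x goes to bucket
 * (int)(x * 10); each bucket is kept sorted by insertion as elements
 * arrive; finally the buckets are visited in order and concatenated. */
void bucket_sort(double *a, size_t n) {
    double buckets[10][16];        /* assumes <= 16 values per bucket */
    size_t count[10] = {0};

    for (size_t i = 0; i < n; i++) {
        size_t b = (size_t)(a[i] * 10);   /* pick bucket by value */
        size_t j = count[b]++;
        while (j > 0 && buckets[b][j - 1] > a[i]) { /* insert sorted */
            buckets[b][j] = buckets[b][j - 1];
            j--;
        }
        buckets[b][j] = a[i];
    }
    size_t k = 0;                  /* visit buckets in order */
    for (size_t b = 0; b < 10; b++)
        for (size_t j = 0; j < count[b]; j++)
            a[k++] = buckets[b][j];
}
```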
7) Radix Sort :
• It sorts the elements by first grouping the individual digits of the same place value, then ordering the elements
by that digit.
• Suppose we have an array of 8 elements. First, we sort the elements based on the value of the units place. Then
we sort the elements based on the value of the tens place (digit by digit).
• This process goes on until the most significant place.
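The digit-by-digit process above can be sketched in C as a least-significant-digit radix sort for non-negative integers (a minimal sketch; the fixed output-buffer size is an assumption for small inputs):

```c
#include <stddef.h>

/* One stable counting-sort pass on the digit selected by `exp`
 * (1 = units place, 10 = tens place, ...). */
static void counting_pass(int *a, size_t n, int exp) {
    int out[64];                      /* assumes n <= 64 in this sketch */
    int count[10] = {0};
    for (size_t i = 0; i < n; i++)
        count[(a[i] / exp) % 10]++;
    for (int d = 1; d < 10; d++)      /* prefix sums give positions */
        count[d] += count[d - 1];
    for (size_t i = n; i-- > 0;)      /* right-to-left keeps it stable */
        out[--count[(a[i] / exp) % 10]] = a[i];
    for (size_t i = 0; i < n; i++)
        a[i] = out[i];
}

/* Radix sort: sort by the units place, then tens, then hundreds, ...
 * up to the most significant digit of the maximum element. */
void radix_sort(int *a, size_t n) {
    int max = a[0];
    for (size_t i = 1; i < n; i++)
        if (a[i] > max) max = a[i];
    for (int exp = 1; max / exp > 0; exp *= 10)
        counting_pass(a, n, exp);
}
```

Stability of each digit pass is what lets later (more significant) passes preserve the order established by earlier ones.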
8) Counting Sort :
• It sorts the elements of an array by counting the number of occurrences of each unique element in the array.
• Three arrays:
A[1...n]: the given elements
C[0...k]: counts over the range of values in A
B[1...n]: the sorted output

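The three-array scheme above (A for input, C for counts, B for output) can be sketched in C (a minimal sketch using 0-based arrays; the fixed size of C is an assumption on the value range k):

```c
#include <stddef.h>

/* Counting sort: C[v] first counts occurrences of value v in A, then
 * is turned into a prefix sum so C[v] = number of elements <= v.
 * Scanning A from the right places each element stably into B. */
void counting_sort(const int *A, int *B, size_t n, int k) {
    int C[256] = {0};                  /* assumes 0 <= A[i] <= k < 256 */
    for (size_t i = 0; i < n; i++)
        C[A[i]]++;                     /* count occurrences */
    for (int v = 1; v <= k; v++)
        C[v] += C[v - 1];              /* C[v] = #elements <= v */
    for (size_t i = n; i-- > 0;)
        B[--C[A[i]]] = A[i];           /* place stably from the right */
}
```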
> Amortized Analysis
• Amortized analysis finds the average running time per operation over a worst-case sequence of operations.
• This analysis is used when the occasional operation is very slow, but most of the frequently executed
operations are fast.
• In a hash table, the searching time complexity is mostly O(1), but sometimes an operation takes O(n).
• Example: the cost of one operation is 1 and of another is 500.
• Commonly three techniques used:
1) Aggregate analysis
2) Accounting method
3) Potential method
1) Aggregate analysis :
Suppose the total time for a sequence of n operations is T(n) = O(n log n). The aggregate (average) cost
per operation is
T(n) / n = O(n log n) / n = O(log n)
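A standard illustration of aggregate analysis (a sketch, not from the slides) is a dynamic array that doubles its capacity when full: one append occasionally costs O(n) copies, but n appends cost O(n) copies in total, so the amortized cost per append is O(1):

```c
#include <stdlib.h>

/* Dynamic array with capacity doubling. `copies` tallies the total
 * number of element copies, the quantity aggregate analysis bounds. */
typedef struct {
    int *data;
    size_t len, cap;
    size_t copies;
} Vec;

void vec_push(Vec *v, int x) {
    if (v->len == v->cap) {                    /* occasional slow case */
        size_t new_cap = v->cap ? 2 * v->cap : 1;
        int *p = malloc(new_cap * sizeof *p);
        for (size_t i = 0; i < v->len; i++) {  /* O(n) copy on growth */
            p[i] = v->data[i];
            v->copies++;
        }
        free(v->data);
        v->data = p;
        v->cap = new_cap;
    }
    v->data[v->len++] = x;                     /* common O(1) case */
}
```

Over n pushes the copies sum to 1 + 2 + 4 + … < 2n, which is the aggregate bound T(n) = O(n), i.e. O(1) amortized per push.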
THANK YOU
Prof. Khushi Pandya
