
1.6 Analysis Framework
There are two kinds of efficiency to analyze for any algorithm:
• Time efficiency, indicating how fast the algorithm runs, and
• Space efficiency, indicating how much extra memory it uses.

The algorithm analysis framework consists of the following:


• Measuring an Input’s Size
• Units for Measuring Running Time
• Orders of Growth
• Worst-Case, Best-Case, and Average-Case Efficiencies

(i) Measuring an Input’s Size


• An algorithm’s efficiency is defined as a function of some parameter n indicating the
algorithm’s input size. In most cases, selecting such a parameter is quite straightforward; for
example, it is the size of the list for problems such as sorting and searching.
• For the problem of evaluating a polynomial p(x) = a_n x^n + . . . + a_0 of degree n, the size
parameter will be the polynomial’s degree or the number of its coefficients, which is larger
by 1 than the degree.
• In computing the product of two n × n matrices, the choice of a parameter indicating an input
size does matter: we can use either the matrix order n or the total number of elements in the matrices.
• Consider a spell-checking algorithm. If the algorithm examines individual characters of its
input, then the size is measured by the number of characters.
• In measuring input size for algorithms solving problems such as checking the primality of a
positive integer n, the input is just one number.
• Here the input size is measured by the number b of bits in n’s binary representation:
b = ⌊log2 n⌋ + 1.
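For instance (a minimal Python sketch, not part of the original text; the value of n is arbitrary), the bit count b can be computed directly:

import math

def input_size_in_bits(n):
    # Number of bits b in n's binary representation: b = floor(log2 n) + 1
    return n.bit_length()

n = 1000003                              # hypothetical input for illustration
print(input_size_in_bits(n))             # 20
print(math.floor(math.log2(n)) + 1)      # 20, the same value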

(ii) Units for Measuring Running Time


One approach is to use a standard unit of time measurement, such as a second or a millisecond,
to measure the running time of a program implementing the algorithm.
Drawbacks
• Dependence on the speed of a particular computer.
• Dependence on the quality of a program implementing the algorithm.
• The compiler used in generating the machine code.
• The difficulty of clocking the actual running time of the program.
So we need a metric for measuring an algorithm’s efficiency that does not depend on these
extraneous factors.
One possible approach is to count the number of times each of the algorithm’s operations
is executed. This approach, however, is excessively difficult.
Instead, we identify the most important operation of the algorithm, called the basic operation
(typically the most time-consuming operation in the innermost loop, such as a comparison or an
arithmetic operation +, -, *, /). Computing the number of times the basic operation is executed
is easy, and the total running time is determined by this count: if cop is the execution time of
the basic operation and C(n) is the number of times it is executed on inputs of size n, the
running time can be estimated as T(n) ≈ cop · C(n).
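As an illustration (a minimal Python sketch, not part of the original text), an algorithm can be instrumented to count its basic operation; here, for finding the largest element of a list, the basic operation is the comparison A[i] > max_val:

def max_element(A):
    # Basic operation: the comparison inside the loop.
    comparisons = 0
    max_val = A[0]
    for i in range(1, len(A)):
        comparisons += 1
        if A[i] > max_val:
            max_val = A[i]
    return max_val, comparisons

# For a list of size n the comparison count is C(n) = n - 1,
# independent of machine speed, compiler, or clock resolution.
_, count = max_element([3, 1, 4, 1, 5, 9, 2, 6])   # n = 8
print(count)                                        # prints 7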
(iii) Orders of Growth
• A difference in running times on small inputs is not what really distinguishes efficient
algorithms from inefficient ones.
• For example, when computing the greatest common divisor of two small numbers, it is not
immediately clear how much more efficient Euclid’s algorithm is compared to other algorithms;
the difference in efficiency becomes clear only for larger numbers.
• For large values of n, it is the function’s order of growth that counts, as illustrated in
Table 1.1, which contains values of a few functions particularly important for the analysis of
algorithms.

TABLE 1.1 Values (approximate) of several functions important for analysis of algorithms

n        √n         log2 n   n        n log2 n   n^2        n^3        2^n         n!
1        1          0        1        0          1          1          2           1
2        1.4        1        2        2          4          8          4           2
4        2          2        4        8          16         64         16          24
8        2.8        3        8        2.4·10^1   64         5.1·10^2   2.6·10^2    4.0·10^4
10       3.2        3.3      10       3.3·10^1   10^2       10^3       10^3        3.6·10^6
16       4          4        16       6.4·10^1   2.6·10^2   4.1·10^3   6.5·10^4    2.1·10^13
10^2     10         6.6      10^2     6.6·10^2   10^4       10^6       1.3·10^30   9.3·10^157
10^3     31         10       10^3     1.0·10^4   10^6       10^9
10^4     10^2       13       10^4     1.3·10^5   10^8       10^12
10^5     3.2·10^2   17       10^5     1.7·10^6   10^10      10^15
10^6     10^3       20       10^6     2.0·10^7   10^12      10^18

(For n ≥ 10^3, the values of 2^n and n! are omitted: they require a very big computation and are
far too large to be of practical use.)
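The entries can be checked with a short Python sketch (an illustration, not part of the original text):

import math

def growth_values(n):
    # The functions from Table 1.1 evaluated at a given n.
    return (math.sqrt(n), math.log2(n), n, n * math.log2(n),
            n**2, n**3, 2**n, math.factorial(n))

for n in (10, 100):
    print(n, ["%.2g" % v for v in growth_values(n)])
# For n = 100: 2^n ≈ 1.3e+30 and n! ≈ 9.3e+157, matching the table.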

(iv) Worst-Case, Best-Case, and Average-Case Efficiencies


Consider the sequential search algorithm, which searches for a given key K in a list of n elements.

ALGORITHM SequentialSearch(A[0..n - 1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n - 1] and a search key K
//Output: The index of the first element in A that matches K or -1 if there are no
//        matching elements
i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n return i
else return -1
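A direct Python translation of this pseudocode (a minimal sketch for illustration; the function name is arbitrary):

def sequential_search(A, K):
    # Returns the index of the first element equal to K, or -1 if there is none.
    i = 0
    while i < len(A) and A[i] != K:
        i += 1
    return i if i < len(A) else -1

print(sequential_search([5, 2, 9, 2], 9))   # 2  (first match at index 2)
print(sequential_search([5, 2, 9, 2], 7))   # -1 (no matching element)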
Clearly, the running time of this algorithm can be quite different for the same list size n.

In the worst case, there are no matching elements or the first matching element is the last one
on the list. In the best case, the first element of the list matches the key.

Worst-case efficiency
• The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n,
that is, an input for which the algorithm runs the longest among all possible inputs of that size.
• For sequential search, the worst-case input of size n gives the running time Cworst(n) = n.
Best-case efficiency
• The best-case efficiency of an algorithm is its efficiency for the best-case input of size n,
that is, an input for which the algorithm runs the fastest among all possible inputs of that size.
• In sequential search, if the first element of the list equals the search key, then the running
time is Cbest(n) = 1.

Average-case efficiency


• The average-case efficiency lies between the best case and the worst case.
• To analyze the algorithm’s average-case efficiency, we must make some assumptions about
possible inputs of size n.
• The standard assumptions are that
o the probability of a successful search is equal to p (0 ≤ p ≤ 1), and
o the probability of the first match occurring in the ith position of the list is the same
for every i.
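Under these standard assumptions, the expected number of key comparisons is Cavg(n) = p(n + 1)/2 + n(1 - p). A short Python sketch (an illustration, not part of the original text; the values of n and p are arbitrary) checks this against a direct computation:

def avg_comparisons_formula(n, p):
    # Standard result under the assumptions above: Cavg(n) = p*(n + 1)/2 + n*(1 - p)
    return p * (n + 1) / 2 + n * (1 - p)

def avg_comparisons_direct(n, p):
    # Successful search: match at position i (1..n) with probability p/n, costing i comparisons;
    # unsuccessful search (probability 1 - p) costs n comparisons.
    successful = sum(i * (p / n) for i in range(1, n + 1))
    return successful + n * (1 - p)

n, p = 10, 0.5                              # hypothetical values for illustration
print(avg_comparisons_formula(n, p))        # 7.75
print(avg_comparisons_direct(n, p))         # 7.75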

Yet another type of efficiency is called amortized efficiency. It applies not to a single run of
an algorithm but rather to a sequence of operations performed on the same data structure.
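As a brief illustration (a hypothetical example, not from the original text): appending n items to a dynamic array that doubles its capacity when full occasionally copies all elements, yet the total work stays proportional to n, so the amortized cost per append is constant. A minimal Python sketch of this cost model:

def append_cost_simulation(n):
    # Simulates n appends to a dynamic array that doubles its capacity when full.
    # Cost model: 1 unit per append, plus `size` units whenever a resize copies elements.
    capacity, size, total_cost = 1, 0, 0
    for _ in range(n):
        if size == capacity:          # array is full: double capacity and copy
            total_cost += size
            capacity *= 2
        total_cost += 1               # the append itself
        size += 1
    return total_cost / n             # amortized cost per operation

print(append_cost_simulation(10**6))  # about 2, a small constant: amortized O(1) per append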
