Analysis of Algorithms
© 2013 Goodrich, Tamassia, Goldwasser

[Figure: Input → Algorithm → Output]
Running Time

- Most algorithms transform input objects into output objects.
- The running time of an algorithm typically grows with the input size.
- Average-case time is often difficult to determine.
- We focus on the worst-case running time.
  - Easier to analyze
  - Crucial to applications such as games, finance, and robotics

[Figure: best-, average-, and worst-case running time (ms) versus input size]
Experimental Studies

- Write a program implementing the algorithm.
- Run the program with inputs of varying size and composition, noting the time needed.
- Plot the results.

[Figure: measured running time (ms) versus input size]
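As a minimal sketch of such an experiment (the timed function and the input sizes are illustrative, not from the slides), one could time a Python function with time.perf_counter:

    import time
    from random import randint

    def example_algorithm(data):
        # Placeholder for the algorithm under study.
        return sorted(data)

    for n in (1000, 2000, 4000, 8000):            # inputs of varying size
        data = [randint(0, n) for _ in range(n)]  # ... and composition
        start = time.perf_counter()
        example_algorithm(data)
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(n, round(elapsed_ms, 3))            # pairs to plot: (size, time)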
Limitations of Experiments

- It is necessary to implement the algorithm, which may be difficult.
- Results may not be indicative of the running time on inputs not included in the experiment.
- In order to compare two algorithms, the same hardware and software environments must be used.
Theoretical Analysis

- Uses a high-level description of the algorithm instead of an implementation
- Characterizes running time as a function of the input size, n
- Takes into account all possible inputs
- Allows us to evaluate the speed of an algorithm independent of the hardware/software environment
Pseudocode

- High-level description of an algorithm
- More structured than English prose
- Less detailed than a program
- Preferred notation for describing algorithms
- Hides program design issues
Pseudocode Details
Control flow Method call
if … then … [else …] method (arg [, arg…])
while … do … Return value
repeat … until … return expression
for … do … Expressions:
Indentation replaces Assignment
braces
Equality testing
Method declaration
Algorithm method (arg [, arg…]) n2 Superscripts and
Input … other mathematical
formatting allowed
Output …
The Random Access Machine (RAM) Model

- A CPU
- A potentially unbounded bank of memory cells, each of which can hold an arbitrary number or character
- Memory cells are numbered, and accessing any cell in memory takes unit time.

[Figure: a CPU connected to memory cells numbered 0, 1, 2, …]
Seven Important Functions

- Seven functions that often appear in algorithm analysis:
  - Constant ≈ 1
  - Logarithmic ≈ log n
  - Linear ≈ n
  - N-Log-N ≈ n log n
  - Quadratic ≈ n^2
  - Cubic ≈ n^3
  - Exponential ≈ 2^n
- In a log-log chart, the slope of the line corresponds to the growth rate.

[Figure: log-log chart of T(n) versus n for the seven functions]
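To make the gaps between these functions concrete, here is a small, purely illustrative Python table of their values for a few input sizes:

    import math

    # Tabulate the common growth functions; 2^n is shown as a power of 10.
    for n in (10, 100, 1000):
        print(f"n={n}: log n={math.log2(n):.1f}, n log n={n * math.log2(n):.0f}, "
              f"n^2={n**2}, n^3={n**3}, 2^n≈10^{n * math.log10(2):.0f}")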
Functions Graphed Using "Normal" Scale
(Slide by Matt Stallmann, included with permission.)

[Figure: the seven functions on a linear scale: g(n) = 1, lg n, n, n lg n, n^2, n^3, 2^n]
Primitive Operations

- Basic computations performed by an algorithm
- Identifiable in pseudocode
- Largely independent from the programming language
- Exact definition not important (we will see why later)
- Assumed to take a constant amount of time in the RAM model
- Examples:
  - Evaluating an expression
  - Assigning a value to a variable
  - Indexing into an array
  - Calling a method
  - Returning from a method
Counting Primitive Operations

- By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm, as a function of the input size. Consider the sketch below.
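The slide's code figure for find_max is not reproduced in this text; the following Python sketch is reconstructed from the surrounding discussion, with one plausible (assumed) mapping of the step numbers used in the counts that follow:

    def find_max(data):               # step 1: 2 ops
        """Return the maximum element from a nonempty list."""
        biggest = data[0]             # step 3: 2 ops (index, assign)
        for val in data:              # step 4: 2n ops (fetch, loop bookkeeping)
            if val > biggest:         # step 5: 2n ops (fetch, compare)
                biggest = val         # step 6: 0 to n ops (only on a new max)
        return biggest                # step 7: 1 op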
- Step 1: 2 ops; step 3: 2 ops; step 4: 2n ops; step 5: 2n ops; step 6: 0 to n ops; step 7: 1 op.
Estimating Running Time

- Algorithm find_max executes 5n + 5 primitive operations in the worst case, 4n + 5 in the best case. Define:
  a = time taken by the fastest primitive operation
  b = time taken by the slowest primitive operation
- Let T(n) be the worst-case time of find_max. Then
  a(4n + 5) ≤ T(n) ≤ b(5n + 5)
- Hence, the running time T(n) is bounded by two linear functions.
Growth Rate of Running Time

- Changing the hardware/software environment
  - affects T(n) by a constant factor, but
  - does not alter the growth rate of T(n).
- The linear growth rate of the running time T(n) is an intrinsic property of algorithm find_max.
Why Growth Rate Matters
(Slide by Matt Stallmann, included with permission.)
    if runtime is…   time for n + 1       time for 2n          time for 4n
    c lg n           c lg(n + 1)          c(lg n + 1)          c(lg n + 2)
    c n              c(n + 1)             2c n                 4c n
    c n lg n         ~ c n lg n + c n     2c n lg n + 2c n     4c n lg n + 4c n
    c n^2            ~ c n^2 + 2c n       4c n^2               16c n^2
    c n^3            ~ c n^3 + 3c n^2     8c n^3               64c n^3
    c 2^n            c 2^(n+1)            c 2^(2n)             c 2^(4n)

For example, a quadratic runtime quadruples when the problem size doubles.
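A quick numeric check of that doubling behavior (illustrative only):

    # Ratio T(2n)/T(n) for several growth functions at n = 1000.
    n = 1000
    funcs = {
        "n": lambda n: n,
        "n^2": lambda n: n**2,
        "n^3": lambda n: n**3,
    }
    for name, f in funcs.items():
        print(name, f(2 * n) / f(n))   # prints 2.0, 4.0, 8.0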
Comparison of Two Algorithms
(Slide by Matt Stallmann, included with permission.)

- Insertion sort is n^2 / 4; merge sort is 2 n lg n.
- Sort a million items? Insertion sort takes roughly 70 hours, while merge sort takes roughly 40 seconds.
- This is a slow machine, but even if it were 100x as fast, it would be 40 minutes versus less than 0.5 seconds.
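Those figures follow directly from the operation counts; here is the arithmetic as a sketch, assuming (as the timings imply) roughly one million operations per second:

    from math import log2

    n = 10**6                    # one million items
    ops_per_sec = 10**6          # assumed speed of the "slow machine"
    insertion_ops = n**2 / 4     # 2.5e11 operations
    merge_ops = 2 * n * log2(n)  # ~4.0e7 operations
    print(insertion_ops / ops_per_sec / 3600)  # ~69.4 hours
    print(merge_ops / ops_per_sec)             # ~39.9 seconds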
Constant Factors

- The growth rate is not affected by
  - constant factors or
  - lower-order terms.
- Examples:
  - 10^2 n + 10^5 is a linear function.
  - 10^5 n^2 + 10^8 n is a quadratic function.

[Figure: log-log chart showing each example function tracking its linear or quadratic reference line]
Big-Oh Notation

- Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n0 such that f(n) ≤ c·g(n) for n ≥ n0.
- Example: 2n + 10 is O(n)
  - 2n + 10 ≤ cn
  - (c − 2)n ≥ 10
  - n ≥ 10/(c − 2)
  - Pick c = 3 and n0 = 10.

[Figure: log-log plot of 3n, 2n + 10, and n]
Big-Oh Example

- Example: the function n^2 is not O(n)
  - n^2 ≤ cn
  - n ≤ c
  - The above inequality cannot be satisfied, since c must be a constant.

[Figure: log-log plot of n^2, 100n, 10n, and n]
More Big-Oh Examples

- 7n − 2 is O(n)
  - Need c > 0 and n0 ≥ 1 such that 7n − 2 ≤ c·n for n ≥ n0.
  - This is true for c = 7 and n0 = 1.
- 3n^3 + 20n^2 + 5 is O(n^3)
  - Need c > 0 and n0 ≥ 1 such that 3n^3 + 20n^2 + 5 ≤ c·n^3 for n ≥ n0.
  - This is true for c = 4 and n0 = 21.
- 3 log n + 5 is O(log n)
  - Need c > 0 and n0 ≥ 1 such that 3 log n + 5 ≤ c·log n for n ≥ n0.
  - This is true for c = 8 and n0 = 2.
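A small, illustrative script that spot-checks these constants numerically (assuming, as the book does by default, that log means log base 2):

    from math import log2

    # Each tuple: (f, g, c, n0), asserting f(n) <= c * g(n) for n >= n0.
    cases = [
        (lambda n: 7 * n - 2,                lambda n: n,       7, 1),
        (lambda n: 3 * n**3 + 20 * n**2 + 5, lambda n: n**3,    4, 21),
        (lambda n: 3 * log2(n) + 5,          lambda n: log2(n), 8, 2),
    ]
    for f, g, c, n0 in cases:
        assert all(f(n) <= c * g(n) for n in range(n0, 10000))
    print("all bounds hold on the tested range")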
Big-Oh and Growth Rate

- The big-Oh notation gives an upper bound on the growth rate of a function.
- The statement "f(n) is O(g(n))" means that the growth rate of f(n) is no more than the growth rate of g(n).
- We can use the big-Oh notation to rank functions according to their growth rate.

                        f(n) is O(g(n))    g(n) is O(f(n))
    g(n) grows more     Yes                No
    f(n) grows more     No                 Yes
    Same growth         Yes                Yes
Big-Oh Rules

- If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e.,
  - drop lower-order terms,
  - drop constant factors.
- Use the smallest possible class of functions:
  - say "2n is O(n)" instead of "2n is O(n^2)".
- Use the simplest expression of the class:
  - say "3n + 5 is O(n)" instead of "3n + 5 is O(3n)".
Asymptotic Algorithm Analysis

- The asymptotic analysis of an algorithm determines the running time in big-Oh notation.
- To perform the asymptotic analysis:
  - We find the worst-case number of primitive operations executed as a function of the input size.
  - We express this function with big-Oh notation.
- Example: we say that algorithm find_max "runs in O(n) time."
- Since constant factors and lower-order terms are eventually dropped anyhow, we can disregard them when counting primitive operations.
Computing Prefix Averages

- We further illustrate asymptotic analysis with two algorithms for prefix averages.
- The i-th prefix average of an array X is the average of the first (i + 1) elements of X:
  A[i] = (X[0] + X[1] + … + X[i]) / (i + 1)
- Computing the array A of prefix averages of another array X has applications to financial analysis.

[Figure: bar chart of an array X and its prefix averages A for indices 1 through 7]
Prefix Averages (Quadratic)

- The following algorithm computes prefix averages in quadratic time by applying the definition.
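The slide's code figure is omitted here; a Python sketch consistent with the description (reconstructed, not copied from the slides):

    def prefixAverage1(S):
        """Return a list A such that A[j] is the average of S[0], ..., S[j]."""
        n = len(S)
        A = [0] * n
        for j in range(n):
            total = 0                  # recompute the sum from scratch...
            for i in range(j + 1):     # ...over j + 1 elements each time
                total += S[i]
            A[j] = total / (j + 1)
        return A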
Arithmetic Progression

- The running time of prefixAverage1 is O(1 + 2 + … + n).
- The sum of the first n integers is n(n + 1)/2.
  - There is a simple visual proof of this fact.
- Thus, algorithm prefixAverage1 runs in O(n^2) time.

[Figure: visual proof that 1 + 2 + … + n = n(n + 1)/2, built from stacked unit squares]
Prefix Averages 2 (Looks Better)

- The following algorithm uses an internal Python function to simplify the code.
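The code figure is omitted; a sketch of what such an algorithm looks like, assuming the built-in sum is the internal function meant:

    def prefixAverage2(S):
        """Return a list A such that A[j] is the average of S[0], ..., S[j]."""
        n = len(S)
        A = [0] * n
        for j in range(n):
            A[j] = sum(S[0:j + 1]) / (j + 1)   # sum() still takes O(j + 1) time
        return A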
- Algorithm prefixAverage2 still runs in O(n^2) time!
Prefix Averages 3 (Linear Time)

- The following algorithm computes prefix averages in linear time by keeping a running sum.
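Again the code figure is omitted; a minimal sketch of the running-sum approach:

    def prefixAverage3(S):
        """Return a list A such that A[j] is the average of S[0], ..., S[j]."""
        n = len(S)
        A = [0] * n
        total = 0
        for j in range(n):
            total += S[j]              # maintain the running sum: O(1) per step
            A[j] = total / (j + 1)
        return A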
- Algorithm prefixAverage3 runs in O(n) time.
Math You Need to Review

- Summations
- Logarithms and exponents
  - Properties of logarithms:
    log_b(xy) = log_b x + log_b y
    log_b(x/y) = log_b x − log_b y
    log_b x^a = a log_b x
    log_b a = log_x a / log_x b
  - Properties of exponentials:
    a^(b+c) = a^b · a^c
    a^(bc) = (a^b)^c
    a^b / a^c = a^(b−c)
    b = a^(log_a b)
    b^c = a^(c · log_a b)
- Proof techniques
- Basic probability
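A throwaway numeric sanity check of a few of these identities (values are arbitrary, purely illustrative):

    import math

    x, y, b, a, c = 8.0, 32.0, 2.0, 10.0, 3.0
    # log_b(xy) = log_b x + log_b y
    assert math.isclose(math.log(x * y, b), math.log(x, b) + math.log(y, b))
    # log_b a = log_x a / log_x b  (change of base)
    assert math.isclose(math.log(a, b), math.log(a, x) / math.log(b, x))
    # b^c = a^(c * log_a b)
    assert math.isclose(b ** c, a ** (c * math.log(b, a)))
    print("identities check out")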
Relatives of Big-Oh

- big-Omega
  - f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n0.
- big-Theta
  - f(n) is Θ(g(n)) if there are constants c′ > 0 and c′′ > 0 and an integer constant n0 ≥ 1 such that c′·g(n) ≤ f(n) ≤ c′′·g(n) for n ≥ n0.
Intuition for Asymptotic Notation

- big-Oh: f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n).
- big-Omega: f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n).
- big-Theta: f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n).
Example Uses of the Relatives of Big-Oh

- 5n^2 is Ω(n^2)
  - f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n0.
  - Let c = 5 and n0 = 1.
- 5n^2 is Ω(n)
  - f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n0.
  - Let c = 1 and n0 = 1.
- 5n^2 is Θ(n^2)
  - f(n) is Θ(g(n)) if it is Ω(n^2) and O(n^2). We have already seen the former; for the latter, recall that f(n) is O(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≤ c·g(n) for n ≥ n0.
  - Let c = 5 and n0 = 1.