Module 3 - Dynamic Programming
• Principle of optimality
• 0/1 Knapsack
• Largest Common Subsequence
• Travelling Salesperson Problem
• Multistage Graph problem
(using Forward computation)
Comparison with Greedy Approach
• Greedy and Dynamic Programming are methods for solving
optimization problems.
• However, often you need to use dynamic programming since
the optimal solution cannot be guaranteed by a greedy
algorithm.
• Dynamic Programming provides efficient solutions for some
problems for which a brute force approach would be very slow.
• To use Dynamic Programming we need only show that the
principle of optimality applies to the problem.
Elements of Dynamic Programming …
• Principle of optimality
In an optimal sequence of decisions or choices, each subsequence
must also be optimal.
• Memoization (for overlapping sub-problems)
• avoid calculating the same thing twice,
• usually by keeping a table of known results that fills up as sub-instances
are solved.
Example: Fibonacci numbers
Fibonacci numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34
Computing the nth fibonacci number using bottom-up
iteration:
• f(0) = 0
• f(1) = 1
• f(2) = 0+1 = 1
• f(3) = 1+1 = 2
• f(4) = 1+2 = 3
• f(5) = 2+3 = 5
• …
• f(n-2) = f(n-3)+f(n-4)
• f(n-1) = f(n-2)+f(n-3)
• f(n) = f(n-1) + f(n-2)
• Recall definition of Fibonacci numbers:
f(0) = 0
f(1) = 1
f(n) = f(n-1) + f(n-2) for n ≥ 2
• Computing the nth Fibonacci number recursively (top-down):
f(n) expands into f(n-1) + f(n-2), each of which expands into two further
calls, so the same sub-problems are recomputed many times.
[Figure: recursive call tree for fib, and fib using dynamic programming]
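The bottom-up iteration shown above can be sketched in Python (the function name fib_bottom_up is my own):

```python
def fib_bottom_up(n):
    """Compute the nth Fibonacci number by iterating from f(0) upward."""
    if n < 2:
        return n
    prev, curr = 0, 1                     # f(0), f(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr    # f(k) = f(k-1) + f(k-2)
    return curr
```

Each value is computed exactly once, so this runs in O(n) time, unlike the naive top-down recursion.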
Knapsack 0-1 Problem
• The difference between this problem and the fractional knapsack one
is that you CANNOT take a fraction of an item.
• Basically, the solution to the optimization problem for Sk+1 might NOT contain the
optimal solution from problem Sk.
Knapsack 0-1 Problem
• Let’s illustrate that point with an example:
Item Weight Value
I0 3 10
I1 8 4
I2 9 9
I3 8 11
• With knapsack capacity W = 20, the best set of items from {I0, I1, I2} is {I0, I1, I2}
• BUT the best set of items from {I0, I1, I2, I3} is {I0, I2, I3}.
• In this example, note that this optimal solution, {I0, I2, I3},
does NOT build upon the previous optimal solution, {I0, I1, I2}.
• (Instead it builds upon the solution {I0, I2}, which is really the optimal subset of {I0, I1, I2}
with weight 12 or less.)
Knapsack 0-1 problem
• So now we must re-work the way we build upon previous sub-
problems…
• Let B[k, w] represent the maximum total value achievable by choosing from the
first k items (the set Sk) with total weight at most w.
• Our goal is to find B[n, W], where n is the total number of items and W
is the maximal weight the knapsack can carry.
• This means that the best subset of Sk that has total weight w is either:
1) The best subset of Sk-1 that has total weight w, or
2) The best subset of Sk-1 that has total weight w-wk, plus item k.
Knapsack 0-1 Problem – Recursive Formula
• The best subset of Sk that has total weight w either contains item k or it does not.
• First case: wk > w
• Then item k cannot be part of the solution: if it were, the total weight would
exceed w. So B[k, w] = B[k-1, w].
• Second case: wk ≤ w
• Then item k can be in the solution, and we choose the case with greater
value: B[k, w] = max{ B[k-1, w], vk + B[k-1, w-wk] }.
Knapsack 0-1 Algorithm
for w = 0 to W { // Initialize 1st row to 0’s
B[0,w] = 0
}
for i = 1 to n { // Initialize 1st column to 0’s
B[i,0] = 0
}
for i = 1 to n {
for w = 0 to W {
if wi <= w { //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
}
else B[i,w] = B[i-1,w] // wi > w
}
}
Knapsack 0-1 Problem
• Let’s run our algorithm on the following data:
• n = 4 (# of elements)
• W = 5 (max weight)
• Elements (weight, value):
(2,3), (3,4), (4,5), (5,6)
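The algorithm above transcribes directly into Python (the function name knapsack_01 and the list-index convention items[i-1] = (wi, vi) are my own):

```python
def knapsack_01(items, W):
    """items: list of (weight, value) pairs; returns the DP table B,
    where B[i][w] is the best value using the first i items at weight w."""
    n = len(items)
    B = [[0] * (W + 1) for _ in range(n + 1)]   # row 0 and column 0 stay 0
    for i in range(1, n + 1):
        wi, vi = items[i - 1]
        for w in range(W + 1):
            if wi <= w and vi + B[i - 1][w - wi] > B[i - 1][w]:
                B[i][w] = vi + B[i - 1][w - wi]   # item i is in the solution
            else:
                B[i][w] = B[i - 1][w]             # item i is left out
    return B

B = knapsack_01([(2, 3), (3, 4), (4, 5), (5, 6)], 5)
print(B[4][5])   # → 7
```

Running it on this instance reproduces the table filled in on the next slides.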
Knapsack 0-1 Example
Items: 1: (2,3)  2: (3,4)  3: (4,5)  4: (5,6)

First, row 0 and column 0 of the table are initialized to 0:

i/w  0  1  2  3  4  5
 0   0  0  0  0  0  0
 1   0
 2   0
 3   0
 4   0

Then the table is filled row by row. For example, with i = 1 (vi = 3, wi = 2)
and w = 1, we get w - wi = -1, so item 1 does not fit and B[1,1] = B[0,1] = 0.
The completed table:

i/w  0  1  2  3  4  5
 0   0  0  0  0  0  0
 1   0  0  3  3  3  3
 2   0  0  3  4  4  7
 3   0  0  3  4  5  7
 4   0  0  3  4  5  7

We're DONE!!
The max possible value that can be carried in this knapsack is $7
Knapsack 0-1 Algorithm
• This algorithm only finds the max possible value that can be carried
in the knapsack
• The value in B[n,W]
• To know the items that make this maximum value, we need to trace
back through the table.
Knapsack 0-1 Algorithm
Finding the Items
• Let i = n and k = W
while i > 0 and k > 0
  if B[i, k] ≠ B[i-1, k] then
    mark the ith item as in the knapsack
    i = i-1, k = k-wi
  else
    i = i-1 // the ith item is not in the knapsack
Knapsack 0-1 Algorithm – Finding the Items
Items: 1: (2,3)  2: (3,4)  3: (4,5)  4: (5,6)

i/w  0  1  2  3  4  5
 0   0  0  0  0  0  0
 1   0  0  3  3  3  3
 2   0  0  3  4  4  7
 3   0  0  3  4  5  7
 4   0  0  3  4  5  7

Tracing back from B[4,5]:
• i = 4, k = 5: B[4,5] = 7 = B[3,5], so item 4 is NOT in the knapsack; i = 3.
• i = 3, k = 5: B[3,5] = 7 = B[2,5], so item 3 is NOT in the knapsack; i = 2.
• i = 2, k = 5: B[2,5] = 7 ≠ B[1,5] = 3, so item 2 IS in the knapsack;
  i = 1, k = 5 - 3 = 2.
• i = 1, k = 2: B[1,2] = 3 ≠ B[0,2] = 0, so item 1 IS in the knapsack;
  i = 0, k = 2 - 2 = 0.
k = 0, so we're DONE!
Knapsack: items 1 and 2, with total value 7.
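The traceback can be sketched in Python, rebuilding the table first so the block is self-contained (variable names are my own):

```python
items = [(2, 3), (3, 4), (4, 5), (5, 6)]   # (weight, value) from the example
W = 5
n = len(items)

# Build the DP table B[i][w] as in the previous algorithm.
B = [[0] * (W + 1) for _ in range(n + 1)]
for i in range(1, n + 1):
    wi, vi = items[i - 1]
    for w in range(W + 1):
        take = vi + B[i - 1][w - wi] if wi <= w else 0
        B[i][w] = max(B[i - 1][w], take)

# Trace back: if the value changed between rows i-1 and i, item i was taken.
chosen = []
i, k = n, W
while i > 0 and k > 0:
    if B[i][k] != B[i - 1][k]:       # item i is in the knapsack
        chosen.append(i)
        k -= items[i - 1][0]
    i -= 1
print(sorted(chosen))   # → [1, 2]
```

On this instance the traceback recovers items 2 and 1, matching the $7 total found above.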
Brute-force search or exhaustive search, also known as generate and test, is a
very general problem-solving technique and algorithmic paradigm that consists
of systematically enumerating all possible candidates for the solution and
checking whether each candidate satisfies the problem's statement.
• Consider a problem instance with the following weights and profits:
• Weights: {3, 4, 6, 5}
• Profits: {2, 3, 1, 4}
• The weight of the knapsack is 8 kg
• The number of items is 4
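The generate-and-test approach for this instance can be sketched by enumerating every subset of the items (variable names are my own):

```python
from itertools import combinations

weights = [3, 4, 6, 5]
profits = [2, 3, 1, 4]
W = 8

best = 0
n = len(weights)
# Generate and test: enumerate all 2^n subsets and keep the best feasible one.
for r in range(n + 1):
    for subset in combinations(range(n), r):
        if sum(weights[i] for i in subset) <= W:
            best = max(best, sum(profits[i] for i in subset))
print(best)   # → 6
```

Here the best feasible subset takes the items of weight 3 and 5 (profits 2 + 4 = 6); the 2^n enumeration is exactly what makes brute force very slow for large n.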
The 0-1 knapsack problem
• A solution to the knapsack problem may be obtained by making a sequence
of decisions on the variables x1, x2, ..., xn. A decision on variable xi involves
deciding which of the values 0 or 1 is to be assigned to it.
• Let fj(X) be the value of an optimal solution to KNAP(1, j, X). Since the
principle of optimality holds, we obtain
fj(X) = max{ fj-1(X), fj-1(X - wj) + pj }
The 0-1 knapsack problem (Ref. Horowitz Sahni, page no-305)
• Consider the knapsack instance n = 3, (w1, w2,w3) =
(2, 3, 4), (p1, p2, p3) = (1, 2, 5) and M = 6.
Initially compute
S0 ={(0,0)}
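A minimal sketch of how the sets Si are built for this instance: merge Si-1 with a copy of Si-1 shifted by item i, then purge dominated pairs (all variable names are my own):

```python
# Each state is a (profit, weight) pair; S0 = {(0, 0)}.
items = [(2, 1), (3, 2), (4, 5)]   # (w_i, p_i) for this instance
M = 6

S = [(0, 0)]
for w, p in items:
    # Shifted copy of S: states reachable by also taking item i.
    shifted = [(pp + p, ww + w) for pp, ww in S if ww + w <= M]
    merged = sorted(set(S + shifted), key=lambda t: (t[1], -t[0]))
    # Purge: keep a pair only if it improves on every lighter pair kept so far.
    S = []
    for pp, ww in merged:
        if not S or pp > S[-1][0]:
            S.append((pp, ww))
print(max(pp for pp, ww in S))   # → 6
```

For this instance the final set contains (6, 6), i.e. profit 6 at weight 6, achieved by taking items 1 and 3.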
Largest/Longest Common Subsequence (LCS)
(Ref. Parag Dave, page no-285)
A subsequence is a sequence that appears in the same relative order, but is not
necessarily contiguous. For example, "abc", "abg", "bdf", "aeg", "acefg", etc. are
subsequences of "abcdefg".
Problem:
Given two sequences X = <x1,x2,…xm> and Y = <y1,y2,…yn> , Find the longest sub-
sequence Z = <z1,….zk> that is common to X and Y.
For example:
If X = < A,B,C,B,D,A,B> and Y = <B,D,C,A,B,A>
then some common sub-sequences are:
{A} {B} {C} {D} {A,A} {B,B} {B,C,A} {B,C,B,A} {B,D,A,B}
PRINT-LCS(b, X, i, j)
  if i = 0 or j = 0
    then return
  if b[i, j] = '↖'
    then PRINT-LCS(b, X, i-1, j-1)
         print xi
  else if b[i, j] = '↑'
    then PRINT-LCS(b, X, i-1, j)
  else PRINT-LCS(b, X, i, j-1)
Largest/Longest Common Subsequence (LCS)
Example:
Given two strings are X = BACDB and Y = BDCB
find the longest common subsequence.
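The length table c[i][j] for this example can be sketched in Python (the function name lcs_length is my own; the recurrence is the standard LCS one):

```python
def lcs_length(X, Y):
    """c[i][j] = length of the LCS of the prefixes X[:i] and Y[:j]."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1      # diagonal: characters match
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]

print(lcs_length("BACDB", "BDCB"))   # → 3
```

For X = BACDB and Y = BDCB the LCS length is 3 (e.g. BCB or BDB).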
Travelling Salesman Problem
(Ref. Horowitz Sahni, page no-319)
In the traveling salesman problem, a map of cities is given to the salesman and
he has to visit all the cities only once and return to his starting point to complete
the tour in such a way that the length of the tour is the shortest among all
possible tours for this map.
Clearly, starting from a given city, the salesman will have a total of (n-1)! different
sequences:
If n = 2 (cities A and B), there is no choice.
If n = 3, i.e. he wants to visit three cities
inclusive of the starting point, he has 2! possible routes, and so on.
The Dynamic Programming proceeds as follows:-
Step-1
Consider the given travelling salesman problem in which he wants to find that route which has shortest
distance.
Step-2
Consider sets of 0 elements, such that
g(2, Φ ) = c21 g(3, Φ ) = c31 g(4, Φ ) = c41
Step-3
After completion of step-2, consider sets of 1 elements, such that
Set {2}: g(3,{2}) = c32 + g(2, Φ ) = c32 + c21
g(4,{2}) = c42 + g(2, Φ ) = c42 + c21
Set {3}: g(2,{3}) = c23 + g(3, Φ ) = c23 + c31
g(4,{3}) = c43 + g(3, Φ) = c43 + c31
Set {4}: g(2,{4}) = c24 + g(4, Φ ) = c24 + c41
g(3,{4}) = c34 + g(4, Φ ) = c34 + c41
Step-4
After completion of step-3, consider sets of 2 elements, such that
Set {2,3}: g(4,{2,3}) = min {c42 + g(2,{3}), c43 + g(3,{2})}
Set {2,4}: g(3,{2,4}) = min {c32 + g(2,{4}), c34 + g(4,{2})}
Set {3,4}: g(2,{3,4}) = min {c23 + g(3,{4}), c24 + g(4,{3})}
Step-5
After completion of step-4, find the length of an optimal tour:
f = g(1,{2,3,4}) = min { c12 + g(2,{3,4}), c13 + g(3,{2,4}), c14 + g(4,{2,3}) }
Step-6
After completion of step-5, find the optimal TSP tour by following the
choices that attained the minima.
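The steps above are the Held-Karp recurrence, which can be sketched as follows. The slide's own distance matrix is in a figure, so the matrix below is borrowed from the Horowitz–Sahni worked example (optimal tour length 35); the function name tsp_dp is my own:

```python
from functools import lru_cache

def tsp_dp(c):
    """g(i, S) = shortest path starting at city i, visiting every city
    in S exactly once, and ending at city 0 (city 1 in the slides)."""
    n = len(c)

    @lru_cache(maxsize=None)
    def g(i, S):
        if not S:
            return c[i][0]                       # g(i, Φ) = c_i1
        return min(c[i][j] + g(j, S - {j}) for j in S)

    rest = frozenset(range(1, n))
    return min(c[0][j] + g(j, rest - {j}) for j in rest)

c = [[0, 10, 15, 20],
     [5, 0, 9, 10],
     [6, 13, 0, 12],
     [8, 8, 9, 0]]
print(tsp_dp(c))   # → 35
```

Memoizing g(i, S) with lru_cache is exactly the table of g-values built in steps 2-5; the final min over j mirrors step-5's f = g(1, {2,3,4}).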
Solve the TSP problem for the given distance matrix.
g(2, Φ ) = c21 = 1
g(3, Φ ) = c31 = 15
g(4, Φ ) = c41 = 6
The shortest path in multistage graphs
• e.g.
■ d(A,T) = min{4+d(D,T),
11+d(E,T)}
= min{4+18, 11+13} = 22.
• d(B, T) = min{9+d(D, T), 5+d(E, T), 16+d(F, T)}
= min{9+18, 5+13, 16+2} = 18.
Algorithm Fgraph(G, k, n, p)
// Input: a k-stage graph G = (V, E) with n vertices
// indexed in order of stages; c[i][j] is the cost of edge <i, j>.
// p[1..k] is a minimum-cost path.
{
  fcost[n] = 0.0;
  for j = n-1 to 1 step -1 do
  { // compute fcost[j]
    Let r be a vertex such that <j, r> is an edge
    of G and c[j][r] + fcost[r] is minimum;
    fcost[j] = c[j][r] + fcost[r];
    d[j] = r;
  }
  p[1] = 1; p[k] = n;
  for j = 2 to k-1 do p[j] = d[p[j-1]];
}
Complexity
• G is represented using adjacency lists
• Vertex r can be found in time proportional to the degree of vertex j.
• If G has |E| edges, the total time required is Θ(|V| + |E|)
• Additional space is required for fcost[], d[], and p[]
Backward approach
(forward reasoning)
• d(S, A) = 1
d(S, B) = 2
d(S, C) = 5
• d(S,D)=min{d(S,A)+d(A,D), d(S,B)+d(B,D)}
= min{ 1+4, 2+9 } = 5
d(S,E)=min{d(S,A)+d(A,E), d(S,B)+d(B,E)}
= min{ 1+11, 2+5 } = 7
d(S,F)=min{d(S,B)+d(B,F), d(S,C)+d(C,F)}
= min{ 2+16, 5+2 } = 7
• d(S,T) = min{d(S, D)+d(D, T), d(S,E)+
d(E,T), d(S, F)+d(F, T)}
= min{ 5+18, 7+13, 7+2 }
=9
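The backward-approach computation above can be sketched in Python, with the edge costs read off from the d(·) equations (the graph's figure is not in the text, so the edge dictionary below is reconstructed from those equations; variable names are my own):

```python
# Edge costs recovered from the worked example above.
edges = {
    'S': {'A': 1, 'B': 2, 'C': 5},
    'A': {'D': 4, 'E': 11},
    'B': {'D': 9, 'E': 5, 'F': 16},
    'C': {'F': 2},
    'D': {'T': 18}, 'E': {'T': 13}, 'F': {'T': 2},
}
stages = [['S'], ['A', 'B', 'C'], ['D', 'E', 'F'], ['T']]

d = {'S': 0}   # d[v] = d(S, v), the shortest distance from S to v
for stage in stages[1:]:
    for v in stage:
        # All predecessors of v lie in earlier stages, so d[u] is known.
        d[v] = min(d[u] + edges[u][v]
                   for u in list(d) if v in edges.get(u, {}))
print(d['T'])   # → 9
```

Processing stage by stage guarantees every d[u] needed on the right-hand side is already computed, which is exactly why the principle of optimality applies here.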
Algorithm Bgraph(G, k, n, p)
// Input: a k-stage graph G = (V, E) with n vertices
// indexed in order of stages; c[i][j] is the cost of edge <i, j>.
// p[1..k] is a minimum-cost path.
{
  bcost[1] = 0.0;
  for j = 2 to n do
  { // compute bcost[j]
    Let r be a vertex such that <r, j> is an edge
    of G and bcost[r] + c[r][j] is minimum;
    bcost[j] = bcost[r] + c[r][j];
    d[j] = r;
  }
  p[1] = 1; p[k] = n;
  for j = k-1 to 2 do p[j] = d[p[j+1]];
}
Principle of optimality
• Principle of optimality: Suppose that in solving a
problem, we have to make a sequence of decisions
D1, D2, …, Dn. If this sequence is optimal, then the
last k decisions, 1 ≤ k ≤ n, must also be optimal.
• e.g. the shortest path problem
If i, i1, i2, …, j is a shortest path from i to j, then i1, i2,
…, j must be a shortest path from i1 to j
• In summary, if a problem can be described by a
multistage graph, then it can be solved by dynamic
programming.
The End