Module 3 - Dynamic Programming

The document discusses dynamic programming as an algorithm design method for optimization problems, emphasizing its bottom-up approach compared to divide-and-conquer and greedy methods. Key concepts include the principle of optimality, memoization, and specific problems such as the 0/1 Knapsack problem and the Fibonacci sequence. It provides detailed explanations and examples of how to implement dynamic programming algorithms effectively.

Uploaded by

shriharsh
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PPTX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
10 views66 pages

Module 3 - Dynamic Programming

The document discusses dynamic programming as an algorithm design method for optimization problems, emphasizing its bottom-up approach compared to divide-and-conquer and greedy methods. Key concepts include the principle of optimality, memorization, and specific problems like the 0/1 Knapsack problem and Fibonacci sequence. It provides detailed explanations and examples of how to implement dynamic programming algorithms effectively.

Uploaded by

shriharsh
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PPTX, PDF, TXT or read online on Scribd
You are on page 1/ 66

Dynamic Programming

• Principle of optimality
• 0/1 Knapsack
• Largest Common Subsequence
• Travelling Salesperson Problem
• Multistage Graph problem
(using Forward computation)

(Ref. Horowitz Sahni and Parag Dave)
Dynamic Programming

• Dynamic programming is typically applied to optimization problems.


• Dynamic Programming is an algorithm design method that can be
used when the solution to a problem may be viewed as the result of
a sequence of decisions
Why Dynamic Programming?
• Divide-and-conquer: a top-down approach;
partitions a problem into independent subproblems.

• Greedy method: works only with local information.

• Dynamic programming: a bottom-up approach.


Solutions for smaller instances are stored in a table for later use.
Comparison with divide-and-conquer

• Divide-and-conquer algorithms split a problem into separate
subproblems, solve the subproblems, and combine the
results for a solution to the original problem
• Example: Quicksort
• Example: Mergesort
• Example: Binary search
• Divide-and-conquer algorithms can be thought of as top-down algorithms
• In contrast, a dynamic programming algorithm proceeds by
solving small problems, then combining them to find the
solution to larger problems
• Dynamic programming can be thought of as bottom-up

Comparison with Greedy Approach
• Greedy and Dynamic Programming are methods for solving
optimization problems.
• However, often you need to use dynamic programming since
the optimal solution cannot be guaranteed by a greedy
algorithm.
• Dynamic Programming provides efficient solutions for some
problems for which a brute force approach would be very slow.
• To use Dynamic Programming we need only show that the
principle of optimality applies to the problem.
Elements of Dynamic Programming …

• Principle of optimality
In an optimal sequence of decisions or choices, each subsequence
must also be optimal.
• Memoization (for overlapping sub-problems)
• avoid calculating the same thing twice,
• usually by keeping a table of known results that fills up as sub-instances
are solved.
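A minimal sketch of memoization (in Python, used here purely for illustration; the function name is ours):

```python
def fib(n, memo=None):
    # Keep a table of known results; each f(n) is computed only once
    # and looked up afterwards, instead of being recomputed.
    if memo is None:
        memo = {0: 0, 1: 1}
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```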
Example: Fibonacci numbers
Fibonacci numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34
Computing the nth fibonacci number using bottom-up
iteration:
• f(0) = 0
• f(1) = 1
• f(2) = 0+1 = 1
• f(3) = 1+1 = 2
• f(4) = 1+2 = 3
• f(5) = 2+3 = 5



• f(n-2) = f(n-3)+f(n-4)
• f(n-1) = f(n-2)+f(n-3)
• f(n) = f(n-1) + f(n-2)
• Recall definition of Fibonacci numbers:
f(0) = 0
f(1) = 1
f(n) = f(n-1) + f(n-2) for n ≥ 2
• Computing the nth Fibonacci number recursively (top-down):

f(n)
= f(n-1) + f(n-2)
= [f(n-2) + f(n-3)] + [f(n-3) + f(n-4)]
= ...
(Figure: recursive calls for fib)
(Figure: fib using dynamic programming)
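The bottom-up iteration shown above can be sketched in Python (illustrative code; the function name is ours):

```python
def fib_bottom_up(n):
    # Iterate upward from the base cases f(0)=0, f(1)=1, keeping
    # only the last two values instead of the whole table.
    a, b = 0, 1  # a = f(0), b = f(1)
    for _ in range(n):
        a, b = b, a + b
    return a
```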
Knapsack 0-1 Problem

• The difference between this problem and the fractional knapsack one
is that you CANNOT take a fraction of an item.

• You can either take it or not.


• Hence the name Knapsack 0-1 problem.
Knapsack 0-1 Problem

• As we did before, we are going to solve the problem in terms of sub-problems.
• So let’s try to do that…

• Our first attempt might be to characterize a sub-problem as follows:


• Let Sk be the optimal subset of elements from {I0, I1, …,
Ik}.
• What we find is that the optimal subset from the elements {I0, I1, …, Ik+1} may not correspond
to the optimal subset of elements from {I0, I1, …, Ik} in any regular pattern.

• Basically, the solution to the optimization problem for Sk+1 might NOT contain the
optimal solution from problem Sk.
Knapsack 0-1 Problem
• Let’s illustrate that point with an example:
Item Weight Value
I0 3 10
I1 8 4
I2 9 9
I3 8 11

• The maximum weight the knapsack can hold is 20.

• The best set of items from {I0, I1, I2} is {I0, I1, I2}
• BUT the best set of items from {I0, I1, I2, I3} is {I0, I2, I3}.
• In this example, note that this optimal solution, {I0, I2, I3}, does NOT build upon the previous optimal solution, {I0, I1, I2}.
• (Instead it builds upon the solution {I0, I2}, which is really the optimal subset of {I0, I1, I2} with weight 12 or less.)
Knapsack 0-1 problem
• So now we must re-work the way we build upon previous sub-
problems…
• Let B[k, w] represent the maximum total value of a subset Sk with weight
w.
• Our goal is to find B[n, W], where n is the total number of items and W
is the maximal weight the knapsack can carry.

• So our recursive formula for subproblems:


B[k, w] = B[k - 1,w], if wk > w
= max { B[k - 1,w], B[k - 1,w - wk] + vk}, otherwise

• this means that the best subset of Sk that has total weight w is:
1)The best subset of Sk-1 that has total weight w, or
2)The best subset of Sk-1 that has total weight w-wk plus the item k
Knapsack 0-1 Problem –
Recursive Formula

• The best subset of Sk that has total weight w either contains item k or it does not.

• First case: wk > w


• Item k can’t be part of the solution! If it was the total weight would be > w,
which is unacceptable.

• Second case: wk ≤ w
• Then the item k can be in the solution, and we choose the case with greater
value.
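The two cases above translate directly into a top-down, memoized recursion. Below is a sketch in Python (illustrative, not the textbook's code; the function name is ours), checked against the earlier example with weights (3, 8, 9, 8), values (10, 4, 9, 11) and capacity 20, whose optimum {I0, I2, I3} has value 30:

```python
from functools import lru_cache

def knapsack_topdown(weights, values, W):
    # B(k, w): best value using the first k items with capacity w,
    # following the recursive formula above.
    @lru_cache(maxsize=None)
    def B(k, w):
        if k == 0:
            return 0
        if weights[k - 1] > w:              # first case: item k can't fit
            return B(k - 1, w)
        return max(B(k - 1, w),             # second case: skip item k ...
                   B(k - 1, w - weights[k - 1]) + values[k - 1])  # ... or take it
    return B(len(weights), W)
```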
Knapsack 0-1 Algorithm
for w = 0 to W { // Initialize 1st row to 0’s
B[0,w] = 0
}
for i = 1 to n { // Initialize 1st column to 0’s
B[i,0] = 0
}
for i = 1 to n {
for w = 0 to W {
if wi <= w { //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
}
else B[i,w] = B[i-1,w] // wi > w
}
}
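As a sketch, the pseudocode above translates into Python as follows (variable names follow the slides; this is an illustration, not the textbook's code):

```python
def knapsack_01(w, v, W):
    # B[i][j]: max value achievable using the first i items
    # with a knapsack of capacity j.
    n = len(w)
    B = [[0] * (W + 1) for _ in range(n + 1)]   # rows/columns start at 0
    for i in range(1, n + 1):
        for j in range(W + 1):
            if w[i - 1] <= j and v[i - 1] + B[i - 1][j - w[i - 1]] > B[i - 1][j]:
                B[i][j] = v[i - 1] + B[i - 1][j - w[i - 1]]   # take item i
            else:
                B[i][j] = B[i - 1][j]                          # skip item i
    return B
```

Running it on the instance used below, `knapsack_01([2, 3, 4, 5], [3, 4, 5, 6], 5)`, reproduces the table filled in on the following slides, with the answer in B[4][5].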
Knapsack 0-1 Problem
• Let’s run our algorithm on the following data:
• n = 4 (# of elements)
• W = 5 (max weight)
• Elements (weight, value):
(2,3), (3,4), (4,5), (5,6)
Knapsack 0-1 Example

i/w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0
2 0
3 0
4 0

// Initialize the base cases


for w = 0 to W
B[0,w] = 0

for i = 1 to n
B[i,0] = 0
Items:
Knapsack 0-1 Example 1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=1
1 0 0 vi = 3
2 0 wi = 2
3 0 w=1
4 0 w-wi = -1

if wi <= w //item i can be in the solution


if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
Knapsack 0-1 Example 1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=1
1 0 0 3 vi = 3
2 0 wi = 2
3 0 w=2
4 0 w-wi = 0

if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
Knapsack 0-1 Example 1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=1
1 0 0 3 3 vi = 3
2 0 wi = 2
3 0 w=3
4 0 w-wi = 1

if wi <= w //item i can be in the solution


if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
Knapsack 0-1 Example 1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=1
1 0 0 3 3 3 vi = 3
2 0 wi = 2
3 0 w=4
4 0 w-wi = 2

if wi <= w //item i can be in the solution


if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
Knapsack 0-1 Example 1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=1
1 0 0 3 3 3 3 vi = 3
2 0 wi = 2
3 0 w=5
4 0 w-wi = 3

if wi <= w //item i can be in the solution


if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
Knapsack 0-1 Example 1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=2
1 0 0 3 3 3 3 vi = 4
2 0 0 wi = 3
3 0 w=1
4 0 w-wi = -2

if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
Knapsack 0-1 Example 1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=2
1 0 0 3 3 3 3 vi = 4
2 0 0 3 wi = 3
3 0 w=2
4 0 w-wi = -1

if wi <= w //item i can be in the solution


if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
Knapsack 0-1 Example 1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=2
1 0 0 3 3 3 3 vi = 4
2 0 0 3 4 wi = 3
3 0 w=3
4 0 w-wi = 0

if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
Knapsack 0-1 Example 1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=2
1 0 0 3 3 3 3 vi = 4
2 0 0 3 4 4 wi = 3
3 0 w=4
4 0 w-wi = 1

if wi <= w //item i can be in the solution


if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
Knapsack 0-1 Example 1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=2
1 0 0 3 3 3 3 vi = 4
2 0 0 3 4 4 7 wi = 3
3 0 w=5
4 0 w-wi = 2

if wi <= w //item i can be in the solution


if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
Knapsack 0-1 Example 1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=3
1 0 0 3 3 3 3 vi = 5
2 0 0 3 4 4 7 wi = 4
3 0 0 3 4 w = 1..3
4 0 w-wi = -3..-1

if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
Knapsack 0-1 Example 1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=3
1 0 0 3 3 3 3 vi = 5
2 0 0 3 4 4 7 wi = 4
3 0 0 3 4 5 w=4
4 0 w-wi = 0

if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
Knapsack 0-1 Example 1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=3
1 0 0 3 3 3 3 vi = 5
2 0 0 3 4 4 7 wi = 4
3 0 0 3 4 5 7 w=5
4 0 w-wi = 1

if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
Knapsack 0-1 Example 1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=4
1 0 0 3 3 3 3 vi = 6
2 0 0 3 4 4 7 wi = 5
3 0 0 3 4 5 7 w = 1..4
4 0 0 3 4 5 w-wi = -4..-1

if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
Knapsack 0-1 Example 1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=4
1 0 0 3 3 3 3 vi = 6
2 0 0 3 4 4 7 wi = 5
3 0 0 3 4 5 7 w=5
4 0 0 3 4 5 7 w-wi = 0

if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
Knapsack 0-1 Example 1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5 7
4 0 0 3 4 5 7

We’re DONE!!
The max possible value that can be carried in this knapsack is 7
Knapsack 0-1 Algorithm

• This algorithm only finds the max possible value that can be carried
in the knapsack
• The value in B[n,W]

• To know the items that make this maximum value, we need to trace
back through the table.
Knapsack 0-1 Algorithm
Finding the Items
• Let i = n and k = W
if B[i, k] ≠ B[i-1, k] then
mark the ith item as in the knapsack
i = i-1, k = k-wi
else
i = i-1 // Assume the ith item is not in the knapsack
// Could it be in the optimally packed knapsack?
Items: Knapsack:
Knapsack 0-1 Algorithm 1: (2,3)
Finding the Items 2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=4
1 0 0 3 3 3 3 k=5
2 0 0 3 4 4 7 vi = 6
3 0 0 3 4 5 7 wi = 5
4 0 0 3 4 5 7 B[i,k] = 7
B[i-1,k] = 7

i=n,k=W
while i, k > 0
if B[i, k] ≠ B[i-1, k] then
mark the ith item as in the knapsack
i = i-1, k = k-wi
else
i = i-1
Items: Knapsack:
Knapsack 0-1 Algorithm 1: (2,3)
Finding the Items 2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=3
1 0 0 3 3 3 3 k=5
2 0 0 3 4 4 7 vi = 5
3 0 0 3 4 5 7 wi = 4
4 0 0 3 4 5 7 B[i,k] = 7
B[i-1,k] = 7

i=n,k=W
while i, k > 0
if B[i, k] ≠ B[i-1, k] then
mark the ith item as in the knapsack
i = i-1, k = k-wi
else
i = i-1
Items: Knapsack:
Knapsack 0-1 Algorithm 1: (2,3) Item 2
Finding the Items 2: (3,4)
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=2
1 0 0 3 3 3 3 k=5
2 0 0 3 4 4 7 vi = 4
3 0 0 3 4 5 7 wi = 3
4 0 0 3 4 5 7 B[i,k] = 7
B[i-1,k] = 3
k – wi = 2
i=n,k=W
while i, k > 0
if B[i, k] ≠ B[i-1, k] then
mark the ith item as in the knapsack
i = i-1, k = k-wi
else
i = i-1
Items: Knapsack:
Knapsack 0-1 Algorithm 1: (2,3) Item 2
Finding the Items 2: (3,4) Item 1
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=1
1 0 0 3 3 3 3 k=2
2 0 0 3 4 4 7 vi = 3
3 0 0 3 4 5 7 wi = 2
4 0 0 3 4 5 7 B[i,k] = 3
B[i-1,k] = 0
k – wi = 0
i=n,k=W
while i, k > 0
if B[i, k] ≠ B[i-1, k] then
mark the ith item as in the knapsack
i = i-1, k = k-wi
else
i = i-1
Items: Knapsack:
Knapsack 0-1 Algorithm 1: (2,3) Item 2
Finding the Items 2: (3,4) Item 1
3: (4,5)
4: (5,6)
i/w 0 1 2 3 4 5
0 0 0 0 0 0 0 i=1
1 0 0 3 3 3 3 k=2
2 0 0 3 4 4 7 vi = 3
3 0 0 3 4 5 7 wi = 2
4 0 0 3 4 5 7 B[i,k] = 3
B[i-1,k] = 0
k – wi = 0
k = 0, so we’re DONE!

The optimal knapsack should contain:


Item 1 and Item 2
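The table-filling and trace-back steps can be combined into one sketch in Python (illustrative code; the function name is ours):

```python
def knapsack_items(w, v, W):
    # Fill the table, then trace back from B[n][W] to find which
    # items produced the optimum, as in the slides above.
    n = len(w)
    B = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            B[i][j] = B[i - 1][j]
            if w[i - 1] <= j:
                B[i][j] = max(B[i][j], v[i - 1] + B[i - 1][j - w[i - 1]])
    # Trace back: if B[i][k] differs from B[i-1][k], item i was taken.
    items, i, k = [], n, W
    while i > 0 and k > 0:
        if B[i][k] != B[i - 1][k]:
            items.append(i)
            k -= w[i - 1]
        i -= 1
    return B[n][W], sorted(items)
```

On the example instance this returns value 7 with items 1 and 2, matching the trace above.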
Knapsack 0-1 Problem – Run Time
for w = 0 to W
B[0,w] = 0 O(W)
for i = 1 to n
B[i,0] = 0 O(n)
for i = 1 to n Repeat n times
for w = 0 to W
< the rest of the code > O(W)

What is the running time of this algorithm?


O(n*W)

Remember that the brute-force algorithm takes: O(2^n)

Brute-force search (exhaustive search, also known as generate and test) is a very
general problem-solving technique and algorithmic paradigm that consists of
systematically enumerating all possible candidates for the solution and checking
whether each candidate satisfies the problem's statement.
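For contrast with the O(n*W) table, a brute-force enumeration of all 2^n subsets can be sketched as follows (illustrative Python; the function name is ours). On the slides' instance it agrees with the dynamic programming answer:

```python
from itertools import combinations

def knapsack_bruteforce(w, v, W):
    # Enumerate every subset of the n items (2^n candidates) and
    # keep the best one whose total weight fits in the knapsack.
    n, best = len(w), 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(w[i] for i in subset) <= W:
                best = max(best, sum(v[i] for i in subset))
    return best
```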
• Consider the problem having weights and profits are:
• Weights: {3, 4, 6, 5}
• Profits: {2, 3, 1, 4}
• The weight of the knapsack is 8 kg
• The number of items is 4

The 0-1 knapsack problem
• A solution to the knapsack problem may be obtained by making a sequence
of decisions on the variables x1, x2, …, xn. A decision on variable xi involves
deciding which of the values 0 or 1 is to be assigned to it.
• Let fj(X) be the value of an optimal solution to KNAP(1, j, X). Since the
principle of optimality holds, we obtain

fn(M) = max{ fn-1(M), fn-1(M - wn) + pn }    …(1)

• For arbitrary fj(X), j > 0, equation (1) generalizes to

fj(X) = max{ fj-1(X), fj-1(X - wj) + pj }    …(2)

• Equation (2) may be solved by beginning with the knowledge f0(X) = 0
for all X ≥ 0 (and fj(X) = -∞ for X < 0); f1, f2, …, fn may then be successively
computed using equation (2).
The 0-1 knapsack problem (Ref. Horowitz Sahni, page no. 305)
• Consider the knapsack instance n = 3, (w1, w2, w3) =
(2, 3, 4), (p1, p2, p3) = (1, 2, 5) and M = 6.

Initially compute
S0 = {(0, 0)}

Si may now be obtained by merging together Si-1 and S1i-1, where S1i-1 is
formed by adding (pi, wi) to every pair in Si-1.

Purging rule: if Si+1 contains two pairs (Pj, Wj) and (Pk, Wk) such that
Pj <= Pk and Wj >= Wk, then (Pj, Wj) can be eliminated. This purging rule is
also called the dominance rule. In short, remove the pair with less profit
and more weight.
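The merge-and-purge computation of the Si sets can be sketched in Python (illustrative code; the function name is ours), storing pairs as (profit, weight):

```python
def knapsack_sets(weights, profits, M):
    # S holds the nondominated (profit, weight) pairs, sorted by weight.
    S = [(0, 0)]  # S0 = {(0, 0)}
    for p, w in zip(profits, weights):
        # S1: every pair of S shifted by (p, w); drop overweight pairs.
        S1 = [(P + p, W + w) for (P, W) in S if W + w <= M]
        # Merge, then purge dominated pairs (less profit, more weight).
        merged = sorted(S + S1, key=lambda t: (t[1], -t[0]))
        S = []
        for P, W in merged:
            if not S or P > S[-1][0]:
                S.append((P, W))
    return max(P for P, _ in S)
```

On the instance above (weights (2, 3, 4), profits (1, 2, 5), M = 6), the purging rule eliminates (3, 5) because (5, 4) has more profit and less weight, and the optimal profit is 6.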
Largest/Longest Common Subsequence (LCS)
(Ref. Parag Dave, page no-285)

A subsequence is a sequence that appears in the same relative order, but not necessarily
contiguous. For example, “abc”, “abg”, “bdf”, “aeg”, ‘”acefg”, .. etc are subsequences of
“abcdefg”.

Problem:
Given two sequences X = <x1,x2,…xm> and Y = <y1,y2,…yn> , Find the longest sub-
sequence Z = <z1,….zk> that is common to X and Y.

For example:
If X = <A,B,C,B,D,A,B> and Y = <B,D,C,A,B,A>
then some common sub-sequences are:
{A} {B} {C} {D} {A,A} {B,B} {B,C,A} {B,C,B,A} {B,D,A,B}

From which {B,C,B,A} and {B,D,A,B} are the Longest Common sub-sequences.

C[i, j] = length of an LCS of X[1..i] and Y[1..j]:
C[i, j] = 0, if i = 0 or j = 0
C[i, j] = C[i-1, j-1] + 1, if i, j > 0 and xi = yj
C[i, j] = max{ C[i-1, j], C[i, j-1] }, if i, j > 0 and xi ≠ yj
PRINT-LCS (b, X, i, j)
1. if i = 0 or j = 0
2.   then return
3. if b[i,j] = ' ↖ '
4.   then PRINT-LCS (b, X, i-1, j-1)
5.        print xi
6. else if b[i,j] = ' ↑ '
7.   then PRINT-LCS (b, X, i-1, j)
8. else PRINT-LCS (b, X, i, j-1)

Largest/Longest Common Subsequence (LCS)

Example:
Given two strings are X = BACDB and Y = BDCB
find the longest common subsequence.
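This example can be worked with a short Python sketch (illustrative code; the function name is ours) that fills the C table and then traces back, mirroring PRINT-LCS:

```python
def lcs(X, Y):
    m, n = len(X), len(Y)
    # C[i][j]: length of an LCS of X[:i] and Y[:j]
    C = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                C[i][j] = C[i - 1][j - 1] + 1
            else:
                C[i][j] = max(C[i - 1][j], C[i][j - 1])
    # Trace back through the table to recover one LCS.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif C[i - 1][j] >= C[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))
```

For X = BACDB and Y = BDCB this yields the length-3 subsequence BCB.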

Travelling Salesman Problem
(Ref. Horowitz Sahni, page no. 319)

In the travelling salesman problem, a map of cities is given to the salesman and
he has to visit all the cities only once and return to his starting point to complete
the tour in such a way that the length of the tour is the shortest among all
possible tours for this map.
Clearly, starting from a given city, the salesman has a total of (n-1)! different
sequences:
If n = 2, A and B, there is no choice.
If n = 3, i.e. he wants to visit three cities
inclusive of the starting point, he has 2! possible routes, and so on.
Travelling Salesman Problem
(Ref. Horowitz Sahni, page no. 319)
The Dynamic Programming proceeds as follows:

Step-1
Consider the given travelling salesman problem in which he wants to find the route with the shortest
distance.
Step-2
Consider sets of 0 elements, such that
g(2, Φ ) = c21 g(3, Φ ) = c31 g(4, Φ ) = c41
Step-3
After completion of step-2, consider sets of 1 elements, such that
Set {2}: g(3,{2}) = c32 + g(2, Φ ) = c32 + c21
g(4,{2}) = c42 + g(2, Φ ) = c42 + c21
Set {3}: g(2,{3}) = c23 + g(3, Φ ) = c23 + c31
g(4,{3}) = c43 + g(3, Φ) = c43 + c31
Set {4}: g(2,{4}) = c24 + g(4, Φ ) = c24 + c41
g(3,{4}) = c34 + g(4, Φ ) = c34 + c41
Step-4
After completion of step-3, consider sets of 2 elements, such that
Set {2,3}: g(4,{2,3}) = min {c42 + g(2,{3}), c43 + g(3,{2})}
Set {2,4}: g(3,{2,4}) = min {c32 + g(2,{4}), c34 + g(4,{2})}
Set {3,4}: g(2,{3,4}) = min {c23 + g(3,{4}), c24 + g(4,{3})}
Step-5
After completion of step-4, Find the length of an optimal tour:
f = g(1,{2,3,4}) = min { c12 + g(2,{3,4}), c13 + g(3,{2,4}), c14 + g(4,{2,3}) }
Step-6
After completion of step-5, find the optimal TSP tour.
Travelling Salesman Problem
(Ref. Horowitz Sahni, page no. 319)
Solve the TSP problem for the given distance matrix (the matrix figure is not
reproduced here; the cij values used below are read from it).
g(2, Φ ) = c21 = 1
g(3, Φ ) = c31 = 15
g(4, Φ ) = c41 = 6

k = 1, consider sets of 1 element:


Set {2}: g(3,{2}) = c32 + g(2, Φ ) = c32 + c21 = 7 + 1 = 8
g(4,{2}) = c42 + g(2, Φ ) = c42 + c21 = 3 + 1 = 4
Set {3}: g(2,{3}) = c23 + g(3, Φ ) = c23 + c31 = 6 + 15 = 21
g(4,{3}) = c43 + g(3, Φ) = c43 + c31 = 12 + 15 = 27
Set {4}: g(2,{4}) = c24 + g(4, Φ ) = c24 + c41 = 4 + 6 = 10
g(3,{4}) = c34 + g(4, Φ ) = c34 + c41 = 8 + 6 = 14

k = 2, consider sets of 2 elements:


Set {2,3}: g(4,{2,3}) = min {c42 + g(2,{3}), c43 + g(3,{2})} = min {3+21, 12+8}= min {24, 20}= 20
Set {2,4}: g(3,{2,4}) = min {c32 + g(2,{4}), c34 + g(4,{2})} = min {7+10, 8+4}= min {17, 12} = 12
Set {3,4}: g(2,{3,4}) = min {c23 + g(3,{4}), c24 + g(4,{3})} = min {6+14, 4+27}= min {20, 31}= 20

Length of an optimal tour:


f = g(1,{2,3,4}) = min { c12 + g(2,{3,4}), c13 + g(3,{2,4}), c14 + g(4,{2,3}) }
= min {2 + 20, 9 + 12, 10 + 20}
= min {22, 21, 30} = 21
Successor of node 1: the minimum is achieved by c13 + g(3,{2,4}), so the successor is node 3.
Successor of node 3: the minimum is achieved by c34 + g(4,{2}), so the successor is node 4.
Successor of node 4: g(4,{2}) = c42 + g(2, Φ ), so the successor is node 2.
Successor of node 2: back to starting node 1.
Optimal TSP tour: 1 →3 → 4 →2→1 with minimum cost= 21
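The g-value computation above (the Held-Karp recurrence) can be sketched in Python. The cost matrix below is reconstructed from the cij values used in the slides' computations, since the original matrix figure is not reproduced, so treat it as an assumption; the function name is ours:

```python
from itertools import combinations

# c[i][j] = cost of edge <i+1, j+1> (cities are 0-indexed here).
c = [[0,  2,  9, 10],
     [1,  0,  6,  4],
     [15, 7,  0,  8],
     [6,  3, 12,  0]]

def tsp(c):
    n = len(c)
    # g[(j, S)]: length of a shortest path starting at city j, passing
    # through every city in S exactly once, and ending at city 0.
    g = {(j, frozenset()): c[j][0] for j in range(1, n)}   # g(j, Φ) = cj1
    for size in range(1, n - 1):
        for S in combinations(range(1, n), size):
            Sf = frozenset(S)
            for j in range(1, n):
                if j not in Sf:
                    g[(j, Sf)] = min(c[j][k] + g[(k, Sf - {k})] for k in Sf)
    full = frozenset(range(1, n))
    # Length of an optimal tour, as in step-5.
    return min(c[0][j] + g[(j, full - {j})] for j in full)
```

Running it reproduces the tour length 21 found above.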
The shortest path in multistage graphs

• e.g. (the example multistage graph figure is not reproduced here)

• The greedy method cannot be applied to this case:
(S, A, D, T): 1+4+18 = 23.
• The real shortest path is:
(S, C, F, T): 5+2+2 = 9.
Dynamic programming approach
• Dynamic programming approach (forward approach):

• d(S, T) = min{1+d(A, T), 2+d(B, T), 5+d(C, T)}

• d(A, T) = min{4+d(D, T), 11+d(E, T)}
= min{4+18, 11+13} = 22.
• d(B, T) = min{9+d(D, T), 5+d(E, T), 16+d(F, T)}
= min{9+18, 5+13, 16+2} = 18.

• d(C, T) = min{ 2+d(F, T) } = 2+2 = 4


• d(S, T) = min{1+d(A, T), 2+d(B, T), 5+d(C, T)}
= min{1+22, 2+18, 5+4} = 9.
• The above way of reasoning is called
backward reasoning.

Algorithm Fgraph(G,k,n,p)
// input is a k-stage graph G=(V,E) with n vertices
// indexed in order of stages
// E set of edges, c[i][j] is cost of <i,j>
// p[1..k] is a minimum cost path
{
fcost[n] = 0.0;
for j = n-1 to 1 step -1 do
{ // compute fcost[j]
Let r be a vertex such that <j, r> is an edge
of G and c[j][r] + fcost[r] is minimum;
fcost[j] = c[j][r] + fcost[r];
d[j] = r;
}
p[1] = 1; p[k] = n;
for j = 2 to k-1 do p[j] = d[p[j-1]];
}
Complexity
• G is represented using adjacency lists.
• Vertex r can be found in time proportional to the degree of vertex j.
• If G has |E| edges, the total time required is Θ(|V| + |E|).
• Additional space is required for fcost[], d[], and p[].

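The forward computation can be sketched in Python on the example graph. The edge costs below are read off the worked d() values in the slides (the original graph figure is not reproduced, so this reconstruction is an assumption; the function name is ours):

```python
# Stages: S -> {A, B, C} -> {D, E, F} -> T, with the costs used above.
edges = {
    'S': {'A': 1, 'B': 2, 'C': 5},
    'A': {'D': 4, 'E': 11},
    'B': {'D': 9, 'E': 5, 'F': 16},
    'C': {'F': 2},
    'D': {'T': 18},
    'E': {'T': 13},
    'F': {'T': 2},
}

def fgraph(edges, order, src='S', dst='T'):
    # Forward computation as in Fgraph: process vertices in reverse
    # topological order; d[j] = cost of a cheapest path from j to dst.
    d, succ = {dst: 0}, {}
    for j in reversed(order):
        if j == dst:
            continue
        succ[j], d[j] = min(((k, c + d[k]) for k, c in edges[j].items()),
                            key=lambda t: t[1])
    # Recover the path by following the recorded successors.
    path, v = [src], src
    while v != dst:
        v = succ[v]
        path.append(v)
    return d[src], path
```

`fgraph(edges, ['S','A','B','C','D','E','F','T'])` returns cost 9 with path S, C, F, T, matching the example.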
Backward approach
(forward reasoning)

• d(S, A) = 1
d(S, B) = 2
d(S, C) = 5
• d(S,D)=min{d(S,A)+d(A,D), d(S,B)+d(B,D)}
= min{ 1+4, 2+9 } = 5
d(S,E)=min{d(S,A)+d(A,E), d(S,B)+d(B,E)}
= min{ 1+11, 2+5 } = 7
d(S,F)=min{d(S,B)+d(B,F), d(S,C)+d(C,F)}
= min{ 2+16, 5+2 } = 7
• d(S,T) = min{d(S, D)+d(D, T), d(S,E)+
d(E,T), d(S, F)+d(F, T)}
= min{ 5+18, 7+13, 7+2 }
=9

Algorithm Bgraph(G,k,n,p)
//input is k stage graph G=(V,E) with n vertices
//indexed in order of stages
// E set of edges, c[i][j] is cost of <i,j>
// p[1..k] is a minimum cost path
{
bcost[1] = 0.0;
For j=2 to n do
{ // compute bcost[j]
Let r be the vertex such that <r, j> is an edge
of G and bcost[r] + c[r][j] is minimum;
bcost[j] = bcost[r] + c[r][j];
d[j]=r ;
}
p[1]=1; p[k] = n;
for j= k-1 to 2 do p[j] = d[p[j+1]];
}
Principle of optimality
• Principle of optimality: Suppose that in solving a
problem, we have to make a sequence of decisions
D1, D2, …, Dn. If this sequence is optimal, then the
last k decisions, 1 ≤ k ≤ n, must be optimal.
• e.g. the shortest path problem
If i, i1, i2, …, j is a shortest path from i to j, then i1, i2,
…, j must be a shortest path from i1 to j
• In summary, if a problem can be described by a
multistage graph, then it can be solved by dynamic
programming.

Dynamic programming

• Forward approach and backward approach:


• Note that if the recurrence relations are formulated
using the forward approach, then the relations are solved
backwards, i.e., beginning with the last decision.
• On the other hand, if the relations are formulated using
the backward approach, they are solved forwards.
• To solve a problem by using dynamic programming:
• Find out the recurrence relations.
• Represent the problem by a multistage graph.

The End
