Module 4: Dynamic Programming Approach
General Method: Dynamic Programming (DP)
Dynamic programming is a technique that breaks a problem into sub-problems and
saves their results so that they do not have to be computed again.
The property that an optimal solution to the overall problem is built from optimal
solutions to its sub-problems is known as the optimal substructure property.
The main use of dynamic programming is to solve optimization problems, that is,
problems where we are trying to find the minimum or the maximum solution of a
problem.
Types of Dynamic Programming Approach:
1. Multistage problem
2. All Pair Shortest Path: Floyd Warshall Algorithm
3. 0/1 Knapsack Problem
4. Assembly-line Scheduling Problem
5. Travelling Salesman Problem
6. Longest Common Subsequence
7. Bellman-Ford Algorithm
Multistage Graphs
A multistage graph is a directed, weighted graph in which all vertices are divided into
stages, such that every edge goes from a vertex of the current stage to a vertex of the
next stage (i.e. there are no edges between vertices of the same stage or back to a
vertex of a previous stage).
The first and last stages each contain exactly one vertex, called the source and the
destination (sink) respectively.
Algorithm:
Step 1: Number the nodes from 1 to n such that
Source = 1, Destination = n
Stage 1 → Stage 2 → … → Stage k
Step 2: Process vertices in reverse order (from n-1 to 1)
For each vertex i, check all outgoing edges (i → j).
Update cost[i] as:
cost[i] = min(cost[i], weight (i, j) + cost[j])
Step 3: Continue updating for all vertices from second last to the first stage.
Step 4: The final answer (minimum cost from source to destination) is stored in cost[1].
Example:
(Figure: a five-stage graph, Stage 1 through Stage 5, with source vertex 1 and sink vertex 12; the graph drawing is not reproduced here.)
Draw a table containing Vertices (V), Cost (C) and Distance (D).
We work backward from the destination, so first find the cost and distance of vertex 12.
For V (12) and Stage 5
Cost (5, 12) = 0
Distance is 12 because the destination of vertex 12 is 12 itself.
For V (9, 10, 11) and Stage 4
Cost (4, 9) = 4
Cost (4, 10) = 2
Cost (4, 11) = 6
Distance is 12 because the destination of all three vertices is 12.
For V (6, 7, 8) and Stage 3
V (6)
Cost (3, 6) = C (6, 9) + Cost (4, 9) = 6+4 = 10
Cost (3, 6) = C (6, 10) + Cost (4, 10) = 5+2 = 7
Cost (3, 6) = Min [10, 7] = 7
Distance is 10 because vertex 10 gives the minimum cost.
V (7)
Cost (3, 7) = C (7, 9) + Cost (4, 9) = 4+4 = 8
Cost (3, 7) = C (7, 10) + Cost (4, 10) = 3+2 = 5
Cost (3, 7) = Min [8, 5] = 5
Distance is 10 because vertex 10 gives the minimum cost.
V (8)
Cost (3, 8) = C (8, 10) + Cost (4, 10) = 5+2 = 7
Cost (3, 8) = C (8, 11) + Cost (4, 11) = 6+6 = 12
Cost (3, 8) = Min [7, 12] = 7
Distance is 10 because vertex 10 gives the minimum cost.
For V (2, 3, 4, 5) and Stage 2
V (2)
Cost (2, 2) = C (2, 6) + Cost (3, 6) = 4+7 = 11
Cost (2, 2) = C (2, 7) + Cost (3, 7) = 2+5 = 7
Cost (2, 2) = C (2, 8) + Cost (3, 8) = 1+7 = 8
Cost (2, 2) = Min [11, 7, 8] = 7
Distance is 7 because vertex 7 gives the minimum cost.
V (3)
Cost (2, 3) = C (3, 6) + Cost (3, 6) = 2+7 = 9
Cost (2, 3) = C (3, 7) + Cost (3, 7) = 7+5 = 12
Cost (2, 3) = Min [9, 12] = 9
Distance is 6 because vertex 6 gives the minimum cost.
V (4)
Cost (2, 4) = C (4, 8) + Cost (3, 8) = 11+7 = 18
Cost (2, 4) = 18
Distance is 8 because vertex 8 is the only successor of vertex 4.
V (5)
Cost (2, 5) = C (5, 7) + Cost (3, 7) = 11+5 = 16
Cost (2, 5) = C (5, 8) + Cost (3, 8) = 8+7 = 15
Cost (2, 5) = Min [16, 15] = 15
Distance is 8 because vertex 8 gives the minimum cost.
V (1)
Cost (1, 1) = C (1, 2) + Cost (2, 2) = 9+7 = 16
Cost (1, 1) = C (1, 3) + Cost (2, 3) = 7+9 = 16
Cost (1, 1) = C (1, 4) + Cost (2, 4) = 3+18 = 21
Cost (1, 1) = C (1, 5) + Cost (2, 5) = 2+15 = 17
Cost (1, 1) = Min [16, 16, 21, 17] = 16
The distance is both 2 and 3 because both vertices give the minimum cost.
Here, vertex 1 obtained the same cost through vertices 2 and 3, so first take vertex 2:
d(1, 1)= 2
d(2, 2)= 7
d(3, 7)= 10
d(4, 10) = 12
So, the first shortest path is 1 -> 2 -> 7 -> 10 -> 12
Now, take vertex 3 for path
d(1, 1)= 3
d(2, 3)= 6
d(3, 6)= 10
d(4, 10)= 12
So, the second shortest path is 1 -> 3 -> 6 -> 10 -> 12
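The backward computation above can be sketched in code. This is a minimal sketch; the edge weights are reconstructed from the C(i, j) values used in the worked example, and the variable names (`edges`, `cost`, `succ`) are my own.

```python
# Backward DP for the multistage-graph example above.
# Edge weights are taken from the C(i, j) costs in the worked example.
edges = {
    1: {2: 9, 3: 7, 4: 3, 5: 2},
    2: {6: 4, 7: 2, 8: 1},
    3: {6: 2, 7: 7},
    4: {8: 11},
    5: {7: 11, 8: 8},
    6: {9: 6, 10: 5},
    7: {9: 4, 10: 3},
    8: {10: 5, 11: 6},
    9: {12: 4},
    10: {12: 2},
    11: {12: 6},
}

n = 12
cost = {n: 0}          # cost[v] = minimum cost from v to the sink
succ = {}              # succ[v] = next vertex on a cheapest path
for v in range(n - 1, 0, -1):    # process vertices in reverse order
    cost[v], succ[v] = min(
        (w + cost[j], j) for j, w in edges[v].items()
    )

# Recover one shortest path by following the successor links.
path = [1]
while path[-1] != n:
    path.append(succ[path[-1]])

print(cost[1])   # 16
print(path)      # [1, 2, 7, 10, 12]
```

This reproduces the table result: minimum cost 16 along 1 -> 2 -> 7 -> 10 -> 12 (ties are broken toward the lower-numbered vertex, so the path through vertex 3 is the other optimum).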
Time Complexity: O(V + E)
Applications:
Project planning
Multi-step decision processes
Routing in layered networks
All-Pairs Shortest Path: Floyd-Warshall Algorithm
The Floyd–Warshall algorithm works by maintaining a two-dimensional array that
represents the distances between nodes. Initially, this array is filled using only the direct
edges between nodes. Then, the algorithm gradually updates these distances by
checking if shorter paths exist through intermediate nodes.
This algorithm works for both directed and undirected weighted graphs and can
handle graphs with both positive and negative edge weights, as long as there are no
negative-weight cycles.
Formula:
dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
Algorithm:
Step 1: Construct an adjacency matrix A with all the costs of edges present in the graph.
If there is no path between two vertices, mark the value as ∞.
Step 2: Derive another adjacency matrix A1 from A, keeping the first row and first
column of A intact in A1 (in this step k = 1, the first vertex acting as pivot). For each
remaining entry A1[i, j]: if A[i, j] > A[i, k] + A[k, j], replace A1[i, j] with A[i, k] + A[k, j];
otherwise keep the value unchanged.
Step 3: Repeat Step 2 for all the vertices in the graph by changing the k value for every
pivot vertex until the final matrix is achieved.
Step 4: The final adjacency matrix obtained is the final solution with all the shortest paths.
Example:
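The example graph is not reproduced here, so the following is a minimal sketch of the update formula on a small assumed 4-vertex directed graph (the matrix data is not from the notes):

```python
INF = float('inf')

def floyd_warshall(dist):
    """In-place all-pairs shortest paths on an n x n distance matrix."""
    n = len(dist)
    for k in range(n):                      # pivot (intermediate) vertex
        for i in range(n):
            for j in range(n):
                # dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Assumed 4-vertex graph; INF marks "no direct edge".
A = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
floyd_warshall(A)
print(A[0][2])   # 5, via the path 0 -> 1 -> 2
```

Each pass with pivot k asks, for every pair (i, j), whether routing through k is cheaper than the best route found so far, exactly as in Steps 2 and 3 above.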
Time Complexity: O(V³)
Applications:
Network routing
Map systems
Game AI for all-pairs travel
0/1 Knapsack Problem
The 0/1 knapsack problem is a classic optimization problem where you have a knapsack
with a limited capacity and a set of items, each with a weight and value.
The goal is to select a subset of items to put in the knapsack such that the total weight
does not exceed the capacity and the total value is maximized.
The "0/1" aspect means you can either take an entire item or leave it behind; you
cannot take a fraction of an item.
Algorithm:
Example:
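Since the algorithm and example are not written out above, here is a minimal bottom-up sketch with assumed sample weights and values (the data and function name are my own, not from the notes):

```python
def knapsack_01(weights, values, W):
    """Bottom-up 0/1 knapsack: dp[w] = best value achievable with capacity w."""
    dp = [0] * (W + 1)
    for wt, val in zip(weights, values):
        # Iterate capacity downward so each item is taken at most once (the 0/1 rule).
        for w in range(W, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[W]

# Assumed sample instance: capacity 7, four items.
best = knapsack_01([1, 3, 4, 5], [1, 4, 5, 7], 7)
print(best)   # 9, by taking the items of weight 3 and 4
```

The downward capacity loop is what distinguishes 0/1 from the unbounded knapsack: it prevents an item's updated entry from being reused within the same pass.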
Time Complexity: O(nW)
Applications:
Budget allocation
Cargo loading
Investment decisions
Assembly Line Scheduling Problem
The assembly line scheduling problem aims to find the fastest way to move a product
through a series of stations on two parallel assembly lines.
It's a classic dynamic programming problem.
The core idea is to build up the solution from smaller subproblems, storing intermediate
results to avoid redundant calculations.
Example:
Time Complexity: O(n)
Applications:
o Car manufacturing
o Robotic line automation
o Workflow optimization
DP Formulas:
Let f1[j] and f2[j] be the minimum times to reach station j on lines 1 and 2 (f1[1] and
f2[1] are the entry times plus the first station times). For j > 1:
f1[j] = min(f1[j-1] + a1[j], f2[j-1] + t2[j-1] + a1[j])
f2[j] = min(f2[j-1] + a2[j], f1[j-1] + t1[j-1] + a2[j])
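The recurrences above can be sketched as follows. The entry times `e`, exit times `x`, and sample numbers are assumptions (they follow the classic textbook instance, not data from these notes):

```python
def assembly_line(a1, a2, t1, t2, e, x):
    """Fastest total time through two parallel assembly lines.
    a1, a2: station times per line; t1, t2: transfer times after station j;
    e: entry times for each line; x: exit times for each line."""
    n = len(a1)
    f1 = e[0] + a1[0]                       # best time to finish station 1, line 1
    f2 = e[1] + a2[0]                       # best time to finish station 1, line 2
    for j in range(1, n):
        # Either stay on the same line, or transfer from the other line.
        f1, f2 = (min(f1 + a1[j], f2 + t2[j - 1] + a1[j]),
                  min(f2 + a2[j], f1 + t1[j - 1] + a2[j]))
    return min(f1 + x[0], f2 + x[1])

# Assumed sample data (6 stations per line).
a1 = [7, 9, 3, 4, 8, 4]; a2 = [8, 5, 6, 4, 5, 7]
t1 = [2, 3, 1, 3, 4];    t2 = [2, 1, 2, 2, 1]
print(assembly_line(a1, a2, t1, t2, e=[2, 4], x=[3, 2]))   # 38
```

Only the previous pair (f1, f2) is needed at each step, so the O(n) time also runs in O(1) extra space.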
Travelling Salesperson Problem (TSP)
The Traveling Salesperson Problem (TSP) asks for the shortest possible route that visits
each city exactly once and returns to the origin city.
It's a classic optimization problem with a wide range of applications.
Given a set of cities and distances, find the minimum cost cycle that visits every city
once and returns to the start.
The rule is that the tour starts at the source and ends at the same node. For example,
if the traversal starts from node 1, it must come back to node 1 at the end.
Time Complexity: O(n² * 2ⁿ)
Applications:
o Logistics & delivery
o Vehicle routing
o Circuit design
Example:
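As the example is not reproduced here, the following is a sketch of the Held-Karp dynamic-programming formulation that gives the O(n² · 2ⁿ) bound; the 4-city distance matrix is assumed, not from the notes:

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp DP for TSP in O(n^2 * 2^n). dist is an n x n matrix."""
    n = len(dist)
    # dp[(mask, j)] = cheapest way to leave city 0, visit exactly the
    # cities in `mask`, and currently stand at city j.
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            mask = sum(1 << j for j in subset)
            for j in subset:
                prev = mask ^ (1 << j)      # same set without j
                dp[(mask, j)] = min(dp[(prev, k)] + dist[k][j]
                                    for k in subset if k != j)
    full = (1 << n) - 2                     # every city except the start 0
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

# Assumed symmetric 4-city instance.
D = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
print(held_karp(D))   # 80, e.g. the tour 1 -> 2 -> 4 -> 3 -> 1
```

The subsets play the role of stages: a tour over a set of cities is built from optimal tours over its subsets, which is exactly the optimal substructure property from the start of this module.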
Longest Common Subsequence (LCS)
The Longest Common Subsequence (LCS) problem aims to find the longest sequence of
characters that are present in the same order within two given sequences (strings).
It's a classic dynamic programming problem.
Algorithm:
Step 1: Input
o A sequence seq1 of length m.
o A sequence seq2 of length n.
Step 2: Create a 2D Table
o Create a 2D table dp with dimensions (m+1) x (n+1).
o Initialize all elements of dp to 0.
Step 3: Fill the Table
o Loop through each character in seq1 and seq2.
o If characters match, update the dp table with the value from the
diagonal cell plus one.
o If characters do not match, update the dp table with the maximum
value from the cell to the left or the cell above.
Step 4: Retrieve the Result: The value in dp[m][n] contains the length of the LCS.
Example:
seq1 = “ABCD” and seq2 = “ACDB”
Create and initialize the table
Rules:
o If the characters match, the cell contains a North-West arrow ("↖") and
its value is the value of the North-West (diagonal) cell plus 1.
o If the characters do not match, the cell takes the maximum of the cell
above and the cell to the left; an up arrow ("↑") points to the cell above
and a left arrow ("←") points to the cell to the left.
For i=1 (character A in seq1):
For i=2 (character B in seq1):
For i=3 (character C in seq1):
For i=4 (character D in seq1):
Final answer using backtracking:
The Longest Common Subsequence is ACD
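The table-filling and backtracking steps above can be sketched in code; this follows the algorithm stated earlier, with the function name `lcs` being my own:

```python
def lcs(seq1, seq2):
    """Fill the DP table, then backtrack to recover one LCS."""
    m, n = len(seq1), len(seq2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if seq1[i - 1] == seq2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1      # the "↖" rule
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack from dp[m][n], retracing the arrows.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if seq1[i - 1] == seq2[j - 1]:
            out.append(seq1[i - 1]); i -= 1; j -= 1  # follow "↖"
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1                                   # follow "↑"
        else:
            j -= 1                                   # follow "←"
    return ''.join(reversed(out))

print(lcs("ABCD", "ACDB"))   # ACD
```

Running it on the example strings reproduces the answer ACD of length 3.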
Time Complexity: O(m * n)
Applications:
o DNA sequence analysis
o File comparison
o Spell checkers
Single Source Shortest Path: Bellman-Ford Algorithm
The Bellman-Ford algorithm finds the shortest paths from a single source vertex to all
other vertices in a weighted directed graph, even when the graph contains negative
edge weights.
The algorithm works by iteratively relaxing the edges of the graph, updating distance
estimates until they converge to the shortest path distances.
Time Complexity: O(V * E)
Applications:
o Currency arbitrage
o Network routing with penalties
o Time-dependent pathfinding
Example:
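The example is not reproduced here, so below is a minimal sketch of the edge-relaxation loop, run on an assumed 5-vertex graph with negative edges (the data is not from the notes):

```python
def bellman_ford(n, edges, src):
    """edges: list of (u, v, w) triples.
    Returns the distance list, or None if a negative cycle is reachable."""
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):                  # relax every edge n-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                   # one extra pass detects negative cycles
        if dist[u] + w < dist[v]:
            return None
    return dist

# Assumed 5-vertex directed graph with negative edges but no negative cycle.
E = [(0, 1, 6), (0, 2, 7), (1, 2, 8), (1, 3, 5), (1, 4, -4),
     (2, 3, -3), (2, 4, 9), (3, 1, -2), (4, 0, 2), (4, 3, 7)]
print(bellman_ford(5, E, 0))   # [0, 2, 7, 4, -2]
```

Because every shortest path uses at most n-1 edges, n-1 full relaxation passes are guaranteed to converge; any improvement found on the extra pass can only come from a negative-weight cycle.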
Summary of Real-Life Applications
Algorithm | Real-Life Applications
Multistage Graph | Project scheduling, stage-based workflows
Floyd-Warshall | All-pairs routing, network latency maps
0/1 Knapsack | Resource allocation, cargo loading
Assembly Line Scheduling | Industrial automation, car manufacturing
TSP | Delivery routing, sales route optimization
LCS | Bioinformatics (DNA), plagiarism detection, diff tools
Bellman-Ford | Routing with negative costs, currency arbitrage