ADSunit 3

The document explains the Greedy Method, an algorithmic strategy for solving optimization problems by making locally optimal choices at each step, which can lead to a globally optimal solution for certain problems. It details the Job Sequencing with Deadlines problem as an application of the Greedy Method, emphasizing the importance of sorting jobs by profit and scheduling them within their deadlines to maximize total profit. Additionally, it introduces the Single Source Shortest Path problem and Dijkstra's Algorithm, which also follows the Greedy approach, and contrasts it with Dynamic Programming, highlighting their respective strategies and applications.


1. GREEDY METHOD — General Method


🔹 Introduction
The Greedy Method is an algorithmic strategy used to solve optimization problems —
problems where we aim to find the best (maximum or minimum) solution out of many
possible solutions.

In this method, the solution is built step-by-step, choosing the locally optimal choice at each
step, with the hope that these local choices lead to a globally optimal solution.

📘 Detailed Explanation (Long Notes Style)


The greedy method works by making a sequence of choices, each of which looks best at the
moment. The main idea is simple — choose the best option available right now without
reconsidering previous choices.

It does not always guarantee an optimal solution for every problem, but for certain types of
problems (like Minimum Spanning Tree, Huffman Coding, Job Sequencing, etc.), it gives the
best solution.

A problem can be solved using the greedy method only if it has the following properties:

1. Greedy Choice Property:


A global optimum can be reached by choosing a local optimum (best choice) at each step.
2. Optimal Substructure:
A problem exhibits optimal substructure if an optimal solution to the problem contains
optimal solutions to its subproblems.

🧩 General Steps in Greedy Method


1. Define the Objective Function — what we need to maximize or minimize (e.g., profit, cost,
weight).
2. Make a Greedy Choice — at each step, pick the best possible option available.
3. Check Feasibility — ensure the selected option doesn’t violate constraints (like deadlines,
capacity, etc.).
4. Repeat — continue until all elements are considered or constraints are satisfied.
5. Find the Optimal Solution — the collection of all chosen items forms the solution.

🧾 Algorithmic Outline (Generic Pseudocode)


Greedy(problem)
{
    solution = Ø
    while (solution is not complete)
    {
        x = feasible choice that gives the best benefit
        add x to solution
    }
    return solution
}

⚡ Examples of Greedy Method Applications


Job Sequencing with Deadlines
Fractional Knapsack Problem
Prim’s Algorithm (Minimum Spanning Tree)
Kruskal’s Algorithm (Minimum Spanning Tree)
Dijkstra’s Algorithm (Shortest Path)
Huffman Coding (Data Compression)
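As a concrete instance of the generic template, here is a minimal Python sketch of the Fractional Knapsack problem from the list above. The function name, item values, and capacity are illustrative, not from the notes; the greedy choice is the highest profit-to-weight ratio first.

```python
def fractional_knapsack(items, capacity):
    """Greedy fractional knapsack: items is a list of (profit, weight) pairs."""
    # Greedy choice: sort by profit/weight ratio, best ratio first
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    total = 0.0
    for profit, weight in items:
        if capacity == 0:
            break
        take = min(weight, capacity)          # feasibility: take only what fits
        total += profit * (take / weight)     # fractions are allowed here
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # ≈ 240.0
```

Note that this greedy ratio rule is optimal only because fractions are allowed; the 0/1 variant covered later needs Dynamic Programming.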

🧩 Short Key Points (For Quick Revision)


Greedy = make best local choice → hope for best global result
Works for problems with Greedy Choice Property + Optimal Substructure
Does not always give the best solution for all problems
Used in: MST, Huffman, Dijkstra, Job Sequencing, Knapsack (fractional)
Steps: define → choose → check → repeat → finish

💼 2. JOB SEQUENCING WITH DEADLINES


🔹 Problem Definition
You are given a set of n jobs, each with:

A deadline (time by which it should be completed)


A profit (if completed before or on its deadline)

Each job takes 1 unit of time, and only one job can be scheduled at a time.
The goal is to maximize the total profit by scheduling jobs within their deadlines.

📘 Detailed Explanation (Long Notes Style)


The Job Sequencing with Deadlines problem is an application of the greedy method.
Here, we try to schedule jobs in such a way that:

The jobs are completed within their respective deadlines, and


The total profit obtained is maximum.
To do this efficiently, we sort the jobs in decreasing order of profit — i.e., select the most
profitable job first.
Then we assign each job to the latest possible time slot before its deadline (if that slot is
free).
This ensures that more profitable jobs get higher priority while still respecting deadlines.

🧮 Example
Job Deadline Profit

J1 2 60

J2 1 100

J3 3 20

J4 2 40

J5 1 20

Step 1: Sort jobs by profit (descending):


J2 (100), J1 (60), J4 (40), J3 (20), J5 (20)

Step 2: Schedule each job in the latest slot before its deadline:

Time Slot Job Scheduled

1 J2

2 J1

3 J3

✅ Total Profit = 100 + 60 + 20 = 180


🧾 Algorithm (Pseudocode)

JobSequencing(Jobs[], n)
{
    sort Jobs by profit in descending order
    for each job in Jobs[]
    {
        find a free slot (from job.deadline down to 1)
        if slot found
            assign job to slot
    }
}
⚡ Time Complexity
Sorting: O(n log n)
Scheduling: O(n²) (can be optimized using Disjoint Set Union)
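The pseudocode above can be sketched as runnable Python; this version uses the example jobs J1–J5 from the table and reproduces the total profit of 180. The function name job_sequencing is illustrative.

```python
def job_sequencing(jobs):
    """Greedy job sequencing: jobs is a list of (name, deadline, profit)."""
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)  # sort by profit, descending
    max_deadline = max(d for _, d, _ in jobs)
    slots = [None] * (max_deadline + 1)  # slots[1..max_deadline]; index 0 unused
    total = 0
    for name, deadline, profit in jobs:
        # assign the job to the latest free slot on or before its deadline
        for t in range(deadline, 0, -1):
            if slots[t] is None:
                slots[t] = name
                total += profit
                break
    return total, [s for s in slots[1:] if s is not None]

jobs = [("J1", 2, 60), ("J2", 1, 100), ("J3", 3, 20), ("J4", 2, 40), ("J5", 1, 20)]
profit, schedule = job_sequencing(jobs)
print(profit, schedule)  # 180 ['J2', 'J1', 'J3']
```

The inner slot search is what makes the scheduling step O(n²); a Disjoint Set Union over slots would find the latest free slot in near-constant time.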

🧩 Short Key Points


Each job → 1 unit time, has deadline & profit
Objective → Maximize total profit within deadlines
Sort jobs by profit (descending)
Assign job to latest available slot before deadline
Example applications: manufacturing, task scheduling, CPU scheduling


Single Source Shortest Path (SSSP)


🔹 1. Introduction
The Single Source Shortest Path problem is to find the shortest path from a single source
vertex to all other vertices in a weighted graph.

Here, “shortest” means the path with the minimum total weight (or cost) among all possible
paths.

This problem is very common in network routing, GPS navigation, transportation planning,
etc.

🧠 2. Problem Definition
Given a weighted, connected graph G(V, E) and a source vertex ‘s’,
find the shortest path from ‘s’ to every other vertex in the graph.

Each edge (u, v) has a weight w(u, v) which represents the cost or distance between u and v.

🧩 3. Algorithms for SSSP


Algorithm                 Works On                                 Approach
Dijkstra’s Algorithm      Graphs with non-negative edge weights    Greedy Method ✅
Bellman–Ford Algorithm    Graphs with negative edge weights        Dynamic Programming

Since we’re under the Greedy Method, we’ll study Dijkstra’s Algorithm.

⚙️ 4. Dijkstra’s Algorithm (Greedy Approach)


🔹 Concept
Dijkstra’s algorithm is used to find the shortest distance from a source node to all other
nodes in a weighted graph (with non-negative weights).

The main idea is:

At every step, choose the vertex with the minimum distance from the source that has not
been processed yet, and update the distances of its neighboring vertices.

This follows the Greedy strategy, since we always pick the vertex with the current minimum
distance (local optimum) hoping it leads to the global optimum.

🧮 5. Steps of the Algorithm


1. Initialize distances:
Distance of source vertex = 0
All other vertices = ∞ (infinity)
2. Mark all vertices as unvisited.
3. Pick the vertex with the minimum distance (from unvisited ones).
4. Update the distances of all adjacent vertices:
If dist[u] + weight(u, v) < dist[v], update dist[v].
5. Mark the vertex as visited.
6. Repeat until all vertices are visited.
🧾 6. Pseudocode

Dijkstra(Graph, source):
    for each vertex v in Graph:
        dist[v] = ∞
        visited[v] = false
    dist[source] = 0
    for i = 1 to number of vertices - 1:
        u = vertex with minimum dist[u] and visited[u] = false
        visited[u] = true
        for each neighbor v of u:
            if visited[v] == false and dist[u] + weight(u, v) < dist[v]:
                dist[v] = dist[u] + weight(u, v)
    return dist[]

📘 7. Example
Graph:
Edge Weight

A→B 4

A→C 2

B→C 5

B→D 10

C→E 3

E→D 4

D→F 11

Source = A

Step-by-step working:

Step   Chosen Vertex   Distances (A, B, C, D, E, F)
Init   —               0, ∞, ∞, ∞, ∞, ∞
1      A               0, 4, 2, ∞, ∞, ∞
2      C               0, 4, 2, ∞, 5, ∞
3      B               0, 4, 2, 14, 5, ∞
4      E               0, 4, 2, 9, 5, ∞
5      D               0, 4, 2, 9, 5, 20
✅ Shortest Distances from A:
A(0), B(4), C(2), D(9), E(5), F(20)

⏱️ 8. Time Complexity
Implementation Time Complexity

Using Adjacency Matrix O(V²)

Using Min Heap + Adjacency List O((V + E) log V)

💡 9. Important Points
Works only for non-negative edge weights
Follows the Greedy Method (always picks the minimum distance vertex next)
Produces shortest path tree (SPT) from the source vertex
Used in network routing protocols (like OSPF)

🧩 10. Short Notes (Quick Revision)


Problem: Find shortest path from one source to all vertices
Approach: Greedy
Algorithm: Dijkstra’s Algorithm
Key Operation: Pick vertex with smallest distance
Condition: No negative edges
Applications: GPS, routing, road maps
Time Complexity: O((V + E) log V) using priority queue

🛣️ 11. Real-Life Analogy


Imagine Google Maps showing you the shortest driving routes from your current location
(source) to all nearby places —
that’s exactly what Dijkstra’s Algorithm does in a graph of cities and roads!


🧠 Dynamic Programming (DP) — General Method


🔹 1. Introduction
Dynamic Programming (DP) is a powerful algorithmic technique used to solve optimization
problems — problems that require finding the maximum or minimum value of something, like
cost, profit, time, or distance.

It is based on the idea of breaking a complex problem into smaller overlapping subproblems,
solving each subproblem only once, and storing the results (to avoid repeated computation).

The word “Dynamic” refers to the idea that we build up the solution step by step, using the
results of previously solved subproblems.

DP is mainly used when the problem has Optimal Substructure and Overlapping
Subproblems properties.

📘 2. Key Idea
Dynamic Programming = Divide the problem into smaller subproblems → Solve each
subproblem once → Store its result → Reuse it to build the final solution.

Unlike the Greedy Method (which makes local best choices at every step), DP explores all
possible subproblem combinations and ensures the global optimum.

🔹 3. Properties Required for DP


A problem can be solved using DP only if it satisfies these two conditions:

1. Optimal Substructure Property:


The optimal solution to the entire problem can be constructed using optimal solutions of
its subproblems.
Example:
In the shortest path problem, the shortest path from A to C via B = shortest path from A
to B + shortest path from B to C.
2. Overlapping Subproblems Property:
The problem can be broken into subproblems that are reused multiple times.
Example:
Fibonacci numbers — Fib(5) depends on Fib(4) and Fib(3), and Fib(4) again depends on
Fib(3) and Fib(2) (so the same subproblems repeat).

🧩 4. Approaches in Dynamic Programming


There are two main approaches to implement DP:

🟢 (a) Top-Down Approach (Memoization)


The problem is solved recursively.
Each time a subproblem is solved, its result is stored in a table (memo).
If the same subproblem appears again, the stored result is reused instead of
recomputing.
✅ Example: Recursive Fibonacci with memoization.
🔵 (b) Bottom-Up Approach (Tabulation)
The problem is solved iteratively.
We start from the smallest subproblems and build up to the final solution.
Results are stored in a table (usually an array).

✅ Example: Iterative Fibonacci using an array.
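Both approaches can be sketched in Python using the Fibonacci example mentioned above; the function names are illustrative.

```python
from functools import lru_cache

# Top-down (memoization): recursion plus a cache of already-solved subproblems
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): fill a table from the smallest subproblem upward
def fib_tab(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(10), fib_tab(10))  # 55 55
```

Either way, each subproblem Fib(k) is computed once, reducing the plain recursion's exponential time to O(n).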


⚙️ 5. General Steps in DP Algorithm
1. Define the structure of subproblems
(Identify what smaller problems combine to form the bigger one)
2. Write the recursive formula (recurrence relation)
(Express the main problem in terms of subproblems)
3. Compute and store results of subproblems
(In a table, array, or matrix)
4. Build up the solution to the main problem
(From bottom to top)

📄 6. General Pseudocode

DynamicProgramming(problem):
    define subproblems
    initialize DP_table
    for each subproblem in order:
        compute result using previous subproblems
        store result in DP_table
    return solution from DP_table

💡 7. Difference Between Greedy and DP


Feature      Greedy Method                  Dynamic Programming
Strategy     Local best choice              Explores all subproblems
Works for    Simple optimization problems   Complex overlapping problems
Optimality   May not give global optimum    Always gives global optimum
Example      Job sequencing, Dijkstra       0/1 Knapsack, Floyd–Warshall, Matrix Chain Multiplication

🧠 8. Examples of DP Problems
0/1 Knapsack Problem
Fibonacci Sequence
Matrix Chain Multiplication
Longest Common Subsequence (LCS)
Optimal Binary Search Tree (OBST)
All-Pairs Shortest Path (Floyd–Warshall)
Travelling Salesman Problem (TSP)

🧾 9. Advantages and Limitations


Advantages                             Limitations
Avoids recomputation → faster          Needs extra memory
Guarantees optimal result              Hard to identify the DP relation for some problems
Applicable to many complex problems    Works only if subproblems overlap

🧩 10. Short Notes (Quick Revision)


DP = “remembering past results to solve future subproblems efficiently.”
Key Properties → Optimal Substructure + Overlapping Subproblems
Approaches → Top-Down (Memoization), Bottom-Up (Tabulation)
Used in optimization problems
Greedy ≠ DP → Greedy is local; DP is global
Common examples: Knapsack, LCS, Matrix Chain, OBST, TSP
Time complexity often reduced from exponential → polynomial

🧮 11. Real-Life Analogy


Imagine you’re climbing stairs and you can take 1 or 2 steps at a time.
Instead of recalculating ways to reach each step again and again,
you remember (store) the number of ways for each previous step and use it —
That’s Dynamic Programming in action!

🚦 All Pairs Shortest Path (APSP)


(Using Floyd–Warshall Algorithm)

🔹 1. Introduction
In the Single Source Shortest Path (SSSP) problem (like Dijkstra’s algorithm), we find the
shortest path from one source vertex to all others.
But in the All Pairs Shortest Path (APSP) problem, we find the shortest paths between every
pair of vertices in a weighted graph.

This means that for a graph with n vertices, we want to find the shortest distance
dist(i, j) for every pair of vertices (i, j).

🧠 2. Problem Definition
Given:

A weighted, directed graph G(V, E)


Each edge (u, v) has a weight w(u, v) (which may be positive or negative, but no negative
cycles allowed)

Goal:
Find the shortest path distance between every pair of vertices (i, j).

🔹 3. Why Dynamic Programming?


The Floyd–Warshall Algorithm is based on the Dynamic Programming approach, because:
The shortest path between two vertices can be constructed using the shortest paths
between intermediate vertices.
It reuses results of smaller subproblems (paths using fewer vertices) to build up the final
shortest paths.

🧩 4. Principle Used — Optimal Substructure


Let dist[i][j][k] be the shortest distance from vertex i to vertex j
using only the first k vertices (1 to k) as intermediate nodes.

Then:

dist[i][j][k] = min( dist[i][j][k-1],  dist[i][k][k-1] + dist[k][j][k-1] )

Meaning:
The shortest path from i to j using the first k vertices is either:

1. The same as the shortest path using the first k–1 vertices, or
2. A path that goes through vertex k in between i and j.

⚙️ 5. Floyd–Warshall Algorithm (Simplified Form)


We can implement it using a 2D distance matrix, where
dist[i][j] = weight of the edge from i to j.
If there is no direct edge, set dist[i][j] = ∞ (infinity).
Also, dist[i][i] = 0 for all i.

🧾 6. Pseudocode

FloydWarshall(dist[][], n):
    for k = 1 to n:
        for i = 1 to n:
            for j = 1 to n:
                if dist[i][j] > dist[i][k] + dist[k][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]

✅ After completion, dist[i][j] will hold the shortest distance from i to j.


🧮 7. Example
Let’s take a simple weighted directed graph with 4 vertices:

From → To 1 2 3 4

1 0 3 ∞ 7

2 8 0 2 ∞

3 5 ∞ 0 1

4 2 ∞ ∞ 0

Step 1: Initially, dist[i][j] = weights from table.


Step 2: Iteratively update using the formula
dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
for each vertex k = 1 to 4.

After completing all iterations:

From → To 1 2 3 4

1 0 3 5 6

2 5 0 2 3

3 3 6 0 1

4 2 5 7 0

✅ So, the shortest distance between every pair of vertices is stored in this matrix.
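The triple loop can be sketched directly in Python and checked against the 4-vertex example above (vertices 1..4 mapped to indices 0..3); the function name is illustrative.

```python
INF = float("inf")

def floyd_warshall(dist):
    """In-place all-pairs shortest paths on an n x n matrix (no negative cycles)."""
    n = len(dist)
    for k in range(n):                 # allow vertex k as an intermediate node
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Initial matrix from the example above
d = [
    [0, 3, INF, 7],
    [8, 0, 2, INF],
    [5, INF, 0, 1],
    [2, INF, INF, 0],
]
floyd_warshall(d)
print(d)  # [[0, 3, 5, 6], [5, 0, 2, 3], [3, 6, 0, 1], [2, 5, 7, 0]]
```

Only one 2D matrix is needed: updating in place is safe because row k and column k are unchanged during iteration k.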
⏱️ 8. Time and Space Complexity
Operation Complexity

Time O(V³)

Space O(V²)

💡 9. Advantages
Works for positive and negative weights (but not negative cycles).
Simple and easy to implement.
Gives shortest paths between all pairs in a single run.

⚠️ 10. Limitations
Cannot handle negative weight cycles (distances keep decreasing infinitely).
For large graphs (many vertices), O(V³) becomes expensive.

🧩 11. Applications
Routing algorithms (like in computer networks)
Airline/transportation systems for shortest travel routes
Urban traffic systems
Game development (pathfinding among all possible nodes)

🧾 12. Short Notes (Quick Revision)


Goal: Shortest paths between every pair of vertices.
Algorithm: Floyd–Warshall.
Approach: Dynamic Programming.
Recurrence Relation:
[
dist[i][j] = \min(dist[i][j],\ dist[i][k] + dist[k][j])
]
Time Complexity: O(V³)
Space Complexity: O(V²)
Works for negative edges but no negative cycles.
Output: Distance matrix showing shortest paths between all pairs.

🚗 13. Real-Life Analogy


Imagine a map of cities where you want to know the shortest distance between every pair of
cities — not just from one source.
You compute step by step whether traveling via another city gives a shorter route — that’s
exactly what Floyd–Warshall does!

🚦 Single Source Shortest Path — General Weights (Bellman–Ford Algorithm)

🔹 1. Introduction
The Single Source Shortest Path (SSSP) problem finds the shortest distance from a source
vertex to all other vertices in a weighted graph.

Unlike Dijkstra’s algorithm, Bellman-Ford works with graphs containing negative weight
edges, as long as there are no negative weight cycles.

It is based on Dynamic Programming rather than purely greedy methods.

🧠 2. Problem Definition
Given:

A weighted directed graph G(V, E)


Source vertex s
Edge weights may be positive or negative

Goal:
Find the shortest distance dist[v] from s to every vertex v.
Also, detect whether the graph contains a negative weight cycle (because shortest paths
don’t exist in that case).

🔹 3. Principle / Idea
Bellman-Ford repeatedly relaxes all edges in the graph.

Relaxation:
For an edge (u, v) with weight w(u, v), if

    dist[v] > dist[u] + w(u, v)

then update:

    dist[v] = dist[u] + w(u, v)

Key idea:
The shortest path can have at most V-1 edges.
By relaxing all edges V-1 times, we guarantee correct shortest distances.

Optional step:

Check one more iteration for negative cycles.


If any distance can still be reduced, a negative cycle exists.

⚙️ 4. Algorithm (Pseudocode)

BellmanFord(G, source):
    for each vertex v in G:
        dist[v] = ∞
    dist[source] = 0
    for i = 1 to V-1:
        for each edge (u, v) in G:
            if dist[u] + weight(u, v) < dist[v]:
                dist[v] = dist[u] + weight(u, v)
    // Check for negative weight cycles
    for each edge (u, v) in G:
        if dist[u] + weight(u, v) < dist[v]:
            report "Negative weight cycle exists"

🧮 5. Example

Graph with 5 vertices and edges:

Edge   Weight
1→2    6
1→3    7
2→3    8
2→4    5
2→5    -4
3→4    -3
4→2    -2
5→4    7

Source = 1

Step 1: Initialize distances: dist[1] = 0, dist[2..5] = ∞
Step 2: Relax all edges V-1 = 4 times. After completing the relaxations:

Vertex   Distance from 1
1        0
2        2
3        7
4        4
5        -2

Step 3: Check for negative cycles → none in this graph.

✅ Result: Shortest distances from vertex 1 are [0, 2, 7, 4, -2].


⏱️ 6. Time and Space Complexity
Operation Complexity

Time O(V × E)

Space O(V)
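The pseudocode above can be sketched in Python over a simple edge list; this reproduces the distances of the 5-vertex example (source 1). The function name and 1-based vertex numbering are illustrative.

```python
def bellman_ford(n, edges, source):
    """n vertices numbered 1..n, edges: list of (u, v, w).
    Returns distances from source, or None if a negative cycle is detected."""
    INF = float("inf")
    dist = [INF] * (n + 1)           # index 0 unused
    dist[source] = 0
    for _ in range(n - 1):           # relax every edge V-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:            # one extra pass: negative cycle check
        if dist[u] + w < dist[v]:
            return None
    return dist[1:]

edges = [(1, 2, 6), (1, 3, 7), (2, 3, 8), (2, 4, 5), (2, 5, -4),
         (3, 4, -3), (4, 2, -2), (5, 4, 7)]
print(bellman_ford(5, edges, 1))  # [0, 2, 7, 4, -2]
```

Note the contrast with Dijkstra: no priority queue and no greedy vertex selection, just V-1 sweeps over all edges, which is what tolerates negative weights.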

💡 7. Advantages
Handles negative weight edges
Detects negative weight cycles
Simple to implement using edge list

⚠️ 8. Limitations
Slower than Dijkstra’s Algorithm for graphs with only non-negative weights (O(V × E) vs
O((V + E) log V))
Cannot handle graphs with negative cycles for shortest paths

🧩 9. Applications
Graphs with negative weights (like financial networks)
Network routing algorithms
Finding arbitrage opportunities (detecting negative cycles)

🧾 10. Short Notes (Quick Revision)


Problem: Single Source Shortest Path with negative weights
Algorithm: Bellman-Ford (Dynamic Programming)
Steps:
1. Initialize distances
2. Relax all edges V-1 times
3. Check for negative cycles
Relaxation Formula: dist[v] = min(dist[v], dist[u] + w(u,v))
Time Complexity: O(V × E)
Can detect: Negative weight cycles

🚗 11. Real-Life Analogy


Imagine traveling in a city where some roads give you discounts (negative costs) and some
charge extra (positive costs).
Bellman-Ford finds the cheapest routes from your source location to all other locations, while
also checking if there’s a loop where you keep gaining money infinitely (negative cycle).

🌳 Optimal Binary Search Trees (OBST)


🔹 1. Introduction
A Binary Search Tree (BST) is a data structure where:

The left child of a node contains keys smaller than the node’s key.
The right child contains keys larger than the node’s key.

An Optimal BST (OBST) is a BST constructed in such a way that the expected cost of
searching a key is minimized, based on the frequency (probability) of access of each key.

This problem arises in searching applications, compilers, and database indexing, where
frequently accessed keys should be near the root to reduce search time.

🔹 2. Problem Definition
Given:

A set of n keys: K1 < K2 < ... < Kn
Probabilities of searching each key: P = {p1, p2, ..., pn}
Probabilities of unsuccessful searches between keys: Q = {q0, q1, ..., qn}

Goal:
Construct a BST that minimizes the expected search cost:

E[Cost] = Σ (i = 1..n) (depth(Ki) + 1)·pi  +  Σ (i = 0..n) (depth(di) + 1)·qi

where the di are dummy keys representing failed searches.

🔹 3. Approach — Dynamic Programming


OBST uses DP because:

1. The problem exhibits Optimal Substructure:


If K_r is the root of the BST for keys K_i…K_j,
then the left subtree must be the optimal BST for K_i…K_{r-1}
and the right subtree must be the optimal BST for K_{r+1}…K_j.
2. The problem has overlapping subproblems:
The cost of subtrees for certain key ranges is calculated multiple times.

⚙️ 4. Steps to Construct OBST


1. Define the subproblem:
   Let cost[i][j] = minimum cost of a BST containing keys Ki to Kj.
2. Recursive formula:

   cost[i][j] = min over r = i..j of ( cost[i][r-1] + cost[r+1][j] + w(i, j) )

   Here:
   r = root of the BST for keys Ki…Kj
   w(i, j) = sum of p[i..j] + q[i-1..j] = total probability of the keys in this range
3. Base case:

   cost[i][i-1] = q[i-1]   (empty tree cost)

4. Build the DP table from smaller subtrees to larger subtrees.
5. Select the root that gives minimum cost for each range.

🧾 5. Algorithm (Pseudocode)

OptimalBST(P, Q, n):
    for i = 1 to n+1:
        cost[i][i-1] = Q[i-1]              // Base case
    for length = 1 to n:
        for i = 1 to n - length + 1:
            j = i + length - 1
            cost[i][j] = ∞
            for r = i to j:
                temp = cost[i][r-1] + cost[r+1][j] + sum(P[i..j] + Q[i-1..j])
                if temp < cost[i][j]:
                    cost[i][j] = temp
                    root[i][j] = r
    return cost[1][n]

🧮 6. Example
Let’s say we have 3 keys: K1, K2, K3

Probabilities: p = [0.2, 0.5, 0.3]


Dummy probabilities: q = [0.1, 0.05, 0.05, 0.1]

Step 1: Compute costs of single keys (base case)
Step 2: Compute costs of ranges of 2 keys
Step 3: Compute the cost for all 3 keys using the formula
Step 4: Choose the roots that minimize the total cost

✅ Result: minimum expected cost = 2.4, with K2 as the root of the optimal tree.


⏱️ 7. Time and Space Complexity

Operation   Complexity
Time        O(n³) (can be reduced to O(n²) with optimization)
Space       O(n²)
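The pseudocode can be sketched in Python; running it on the 3-key example above gives, per the recurrence, a minimum expected cost of 2.4 with K2 as the root. The indexing (keys 1..n, dummy keys 0..n) follows the notes; the function name is illustrative.

```python
def optimal_bst(p, q):
    """p[1..n]: key search probabilities (p[0] unused), q[0..n]: dummy probabilities.
    Returns (minimum expected cost, root table)."""
    n = len(p) - 1
    cost = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        cost[i][i - 1] = q[i - 1]          # base case: empty tree
        w[i][i - 1] = q[i - 1]
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            w[i][j] = w[i][j - 1] + p[j] + q[j]   # running probability sum
            cost[i][j] = float("inf")
            for r in range(i, j + 1):             # try every key as the root
                c = cost[i][r - 1] + cost[r + 1][j] + w[i][j]
                if c < cost[i][j]:
                    cost[i][j] = c
                    root[i][j] = r
    return cost[1][n], root

p = [0, 0.2, 0.5, 0.3]        # keys K1..K3
q = [0.1, 0.05, 0.05, 0.1]    # dummy keys d0..d3
c, root = optimal_bst(p, q)
print(round(c, 2), root[1][3])  # 2.4 2
```

Keeping the prefix sums in the w table avoids recomputing sum(P[i..j] + Q[i-1..j]) inside the innermost loop.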

💡 8. Applications
Database indexing
Compiler symbol tables
Searching in dictionaries and text processing

🧩 9. Key Points (Short Notes)


OBST = BST with minimum expected search cost
Uses Dynamic Programming
Optimal Substructure + Overlapping Subproblems
Recursive Formula: cost[i][j] = min(cost[i][r-1] + cost[r+1][j] + sum(P[i..j]+Q[i-1..j]))
Base Case: cost[i][i-1] = q[i-1]
Time Complexity: O(n³), Space: O(n²)
Output: Cost Table + Root Table for construction

🌟 10. Real-Life Analogy


Imagine a library where some books are borrowed frequently and some rarely.
Frequently borrowed books are placed near the top of the shelf (root).
Rarely borrowed books are placed deeper.
This arrangement minimizes the average search time — exactly like OBST.


🎒 0/1 Knapsack Problem (Dynamic Programming)


🔹 1. Introduction
The 0/1 Knapsack Problem is a classic optimization problem in which:

You have n items, each with:


Weight Wi
Profit Pi
A knapsack with maximum weight capacity W

Goal:
Select a subset of items such that:

1. The total weight ≤ W


2. The total profit is maximized

Constraint:

You cannot take a fraction of an item → either take it (1) or leave it (0).

This makes the problem non-fractional, unlike the Fractional Knapsack (which can use greedy
method).

🔹 2. Dynamic Programming Approach


The key idea: Solve the problem using subproblems, storing results in a table, and building up
the solution.

🔹 3. Subproblem Definition
Let:

dp[i][w] = maximum profit using the first i items with knapsack capacity w

i = number of items considered


w = current capacity being considered

🔹 4. Recurrence Relation

dp[i][w] = dp[i-1][w]                                  if Wi > w
dp[i][w] = max( dp[i-1][w], Pi + dp[i-1][w-Wi] )       if Wi ≤ w

Explanation:

1. If the item doesn’t fit (weight > current capacity) → we skip it


2. If the item fits → we take the maximum of:
Not taking the item (dp[i-1][w])
Taking the item (Pi + dp[i-1][w-Wi])

🔹 5. Base Case

dp[0][w] = 0 for all w   (zero items → profit is zero)
dp[i][0] = 0 for all i   (zero capacity → profit is zero)

🔹 6. DP Table Construction
1. Create a 2D array dp[n+1][W+1]
2. Fill rows = items, columns = capacities 0 to W
3. Use the recurrence formula to fill the table

Example:

Items:
Item Weight Profit

1 1 1

2 3 4

3 4 5

4 5 7

Knapsack capacity = 7

Step 1: Initialize dp[0][w] = 0

Step 2: Fill the table row by row using the recurrence formula

Step 3: dp[n][W] gives the maximum profit = 9

Items included can be determined by backtracking from dp[n][W].

🔹 7. Time & Space Complexity


Feature Complexity

Time O(n × W)

Space O(n × W)

Can be optimized to O(W) using a single 1D array if needed.
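The 2D table from the example can be sketched in Python; it reproduces the maximum profit of 9 for capacity 7. The backtracking step that recovers the chosen items (here items 2 and 3) is an addition not spelled out in the notes; the function name is illustrative.

```python
def knapsack_01(weights, profits, W):
    """dp[i][w] = best profit using the first i items with capacity w."""
    n = len(weights)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            dp[i][w] = dp[i - 1][w]                 # skip item i
            if weights[i - 1] <= w:                 # or take it, if it fits
                dp[i][w] = max(dp[i][w],
                               profits[i - 1] + dp[i - 1][w - weights[i - 1]])
    # backtrack from dp[n][W] to recover the chosen items (1-based indices)
    chosen, w = [], W
    for i in range(n, 0, -1):
        if dp[i][w] != dp[i - 1][w]:
            chosen.append(i)
            w -= weights[i - 1]
    return dp[n][W], sorted(chosen)

print(knapsack_01([1, 3, 4, 5], [1, 4, 5, 7], 7))  # (9, [2, 3])
```

The O(W) space optimization mentioned above replaces the 2D table with one row updated from w = W down to 0, at the cost of losing the easy backtracking.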

🔹 8. Advantages
Always gives optimal solution
Simple to implement using DP table

🔹 9. Short Notes (Quick Revision)


Problem: Maximize profit without exceeding knapsack weight
Constraint: 0/1 → cannot take fractions
DP Table: dp[i][w] = max profit using first i items for capacity w
Recurrence: dp[i][w] = max( dp[i-1][w], Pi + dp[i-1][w-Wi] ) if Wi ≤ w
Base case: dp[0][w] = dp[i][0] = 0
Time Complexity: O(n × W), Space Complexity: O(n × W)

🔹 10. Real-Life Analogy


Imagine you’re packing a bag for a trip:

You have several items (clothes, gadgets, food)


Each has a weight and value
Your bag has a weight limit
You want to maximize total value without exceeding the weight → that’s exactly 0/1
Knapsack!

🌟 String Editing Using Dynamic Programming
🧠 Definition
The String Editing problem (also called Edit Distance) is to find the minimum number of
operations needed to convert one string into another.

The allowed operations are:


1️⃣ Insertion
2️⃣ Deletion
3️⃣ Replacement

Each operation costs 1 unit.

🎯 Goal
Given two strings X and Y, find the minimum cost (edit distance) to convert X → Y.

⚙️ Example
Let

X = "sunday"

Y = "saturday"
We need to find the minimum number of edits to convert sunday → saturday.

🪄 Dynamic Programming Approach


We create a 2D table dp[m+1][n+1], where:

m = length of X
n = length of Y
dp[i][j] = minimum number of edits to convert first i characters of X → first j characters of
Y

📋 Base Conditions
Case      Meaning                                     Value
dp[0][j]  X is empty → insert all j characters of Y   dp[0][j] = j
dp[i][0]  Y is empty → delete all i characters of X   dp[i][0] = i

🧮 Recurrence Relation

If the last characters are the same:

    dp[i][j] = dp[i-1][j-1]

If the last characters are different:

    dp[i][j] = 1 + min( dp[i-1][j], dp[i][j-1], dp[i-1][j-1] )
where

dp[i-1][j] → delete
dp[i][j-1] → insert
dp[i-1][j-1] → replace

🧩 Example Table for “sunday” → “saturday”

X = "sunday" (m = 6)
Y = "saturday" (n = 8)

Filling the dp table row by row:

       ""  s  a  t  u  r  d  a  y
  ""    0  1  2  3  4  5  6  7  8
  s     1  0  1  2  3  4  5  6  7
  u     2  1  1  2  2  3  4  5  6
  n     3  2  2  2  3  3  4  5  6
  d     4  3  3  3  3  4  3  4  5
  a     5  4  3  4  4  4  4  3  4
  y     6  5  4  4  5  5  5  4  3

✅ Minimum Edit Distance = 3


(We need 3 operations to convert sunday → saturday.)

🧩 Visualization Diagram
+-----------+-----------+-----------+

| DELETE | REPLACE | INSERT |

+-----------+-----------+-----------+

↘ ↓ ↙

dp[i-1][j] dp[i-1][j-1] dp[i][j-1]

At every cell (i, j),


we pick the minimum cost path among Insert, Delete, or Replace.

✅ Final Answer
dp[m][n] gives the minimum number of edits required.

📘 Time and Space Complexity


Time Complexity: O(m × n)
Space Complexity: O(m × n)
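The table-filling procedure above can be sketched in Python; it returns 3 for sunday → saturday, matching the worked table. The function name is illustrative.

```python
def edit_distance(x, y):
    """Minimum insertions, deletions, and replacements to turn x into y."""
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                     # y empty: delete all i characters of x
    for j in range(n + 1):
        dp[0][j] = j                     # x empty: insert all j characters of y
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]          # characters match: no cost
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],     # delete
                                   dp[i][j - 1],     # insert
                                   dp[i - 1][j - 1]) # replace
    return dp[m][n]

print(edit_distance("sunday", "saturday"))  # 3
```

Since each row depends only on the previous one, the space can be cut to O(n) by keeping two rows, if only the distance (not the edit sequence) is needed.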

🚗 Travelling Salesperson Problem (TSP)
🧠 Definition
The Travelling Salesperson Problem (TSP) is:

Given a list of cities and the distances between each pair of cities, find the shortest possible
route that visits every city exactly once and returns to the starting city.

🎯 Goal
Minimize the total travel cost (distance)
✅ while visiting each city once
✅ and returning to the starting city
🏙️ Example
Suppose there are 4 cities: A, B, C, D

The cost matrix (distance between cities) is:

From / To A B C D

A 0 10 15 20

B 10 0 35 25

C 15 35 0 30

D 20 25 30 0

We must find the minimum cost path that visits all cities exactly once and returns to A.

🪄 Dynamic Programming Approach (Held–Karp Algorithm)


⚙️ Idea
There are many possible paths, and checking all by brute force would take O(n!) time (very
large).

So, we use Dynamic Programming to reduce it to O(n² × 2ⁿ).

📘 Representation
Let:

n = number of cities
C[i][j] = cost (distance) from city i → city j
S = set of cities visited so far

Define a DP function:

dp[i][S] = minimum cost to start at city i, visit all cities in set S, and return to the start.

🧮 Recurrence Relation

dp[i][S] = min over j ∈ S of ( C[i][j] + dp[j][S − {j}] )

Meaning:

Go from city i to city j


Then visit remaining cities (S - {j})
Add travel cost C[i][j]
Choose the minimum among all such j

📋 Base Case
When no unvisited cities remain, return directly to the start city:

dp[i][∅] = C[i][0]

🔁 Algorithm Steps
1. Start from city 0 (say A).
2. Use bitmask to represent visited cities.
3. Apply the recurrence relation recursively with memoization.
4. Find the minimum cost tour visiting all cities once and returning to the start.
All possible routes:

A → B → C → D → A = 10 + 35 + 30 + 20 = 95

A → B → D → C → A = 10 + 25 + 30 + 15 = 80

A → C → B → D → A = 15 + 35 + 25 + 20 = 95

A → C → D → B → A = 15 + 30 + 25 + 10 = 80

A → D → B → C → A = 20 + 25 + 35 + 15 = 95

A → D → C → B → A = 20 + 30 + 35 + 10 = 95

✅ Minimum cost = 80
✅ Path: A → B → D → C → A or A → C → D → B → A
⏱️ Complexity
Type Complexity

Time O(n² × 2ⁿ)

Space O(n × 2ⁿ)
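The Held–Karp recurrence with a bitmask of visited cities can be sketched as memoized Python; it reproduces the minimum tour cost of 80 for the 4-city example. The function name is illustrative.

```python
from functools import lru_cache

def tsp(cost):
    """Held–Karp: cost[i][j] = distance from city i to j; tour starts/ends at city 0."""
    n = len(cost)

    @lru_cache(maxsize=None)
    def dp(i, visited):
        # visited: bitmask of cities already on the tour (includes i)
        if visited == (1 << n) - 1:          # every city visited
            return cost[i][0]                # close the tour back to the start
        best = float("inf")
        for j in range(n):
            if not visited & (1 << j):       # try each unvisited city next
                best = min(best, cost[i][j] + dp(j, visited | (1 << j)))
        return best

    return dp(0, 1)                          # start at city 0, only city 0 visited

cost = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp(cost))  # 80
```

The cache holds at most n × 2ⁿ states and each state scans n successors, giving the O(n² × 2ⁿ) time bound from the table above.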

🧩 Key Points for Exam


Concept                    Explanation
Problem Type               Optimization
Technique Used             Dynamic Programming (Held–Karp)
Subproblems                Each subpath of unvisited cities
Overlapping Subproblems    Yes
Example                    Finding the shortest tour visiting all cities

