
3 - Algorithm Design Techniques

The document outlines various algorithmic techniques including Brute-Force, Greedy Algorithms, Recursion, Divide and Conquer, and Dynamic Programming, detailing their definitions, applications, and time complexities. It provides specific examples like the Traveling Salesman Problem and Naive String Matching, illustrating the brute-force approach and its limitations. Additionally, it discusses the characteristics of Greedy Algorithms and presents problems such as Activity Selection and Huffman Coding, highlighting when greedy solutions are applicable.

Uploaded by

Ahmed Jaha

🔍 1. Brute-Force
 While not discussed under a specific "brute-force" heading, this technique is
implicitly covered in many basic algorithm examples (e.g., exhaustive search,
naive string matching).
 Relevant Section:
o Chapter 32: String Matching — Naive string-matching algorithm
o Pages: Approx. 980–985

🧮 2. Greedy Algorithms
 Chapter 16: Greedy Algorithms
 Pages: 417–450
 Topics Covered:
o Activity-selection problem
o Greedy-choice property and optimal substructure
o Huffman coding

3. Recursion
 Recursion is a foundational concept discussed throughout the book, especially
in:
o Chapter 2: Getting Started (recursive insertion sort)
o Chapter 4: Divide-and-Conquer (recursive recurrence solving)
o Appendix A — Recursive mathematical techniques
 Pages:
o Chapter 2: Pages 15–30
o Chapter 4: Pages 97–126
o Appendix A: Pages A1–A17

4. Divide and Conquer


 Chapter 4: Divide-and-Conquer
 Pages: 97–126
 Topics Covered:
o Merge Sort
o Recurrence Relations (Master Theorem)
o Binary Search
o Strassen’s matrix multiplication

5. Dynamic Programming
 Chapter 15: Dynamic Programming
 Pages: 374–416
 Topics Covered:
o Rod Cutting
o Matrix Chain Multiplication
o Longest Common Subsequence
o Optimal BSTs

✅ 1. Brute-Force Technique
Definition:

Brute-force (also called exhaustive search) solves problems by trying all possible
solutions and selecting the best.

🔍 Example Problem 1:

Problem: Given an array of integers, find all pairs that sum to a target value k.

Input: arr = [2, 4, 3, 5], k = 7


Output: (2, 5), (4, 3)

Brute-force Approach:

Check every possible pair.

for i in range(len(arr)):
    for j in range(i + 1, len(arr)):
        if arr[i] + arr[j] == k:
            print((arr[i], arr[j]))
Time Complexity: O(n²)

✅ Practice Question 1:

Q: Count how many substrings of "abc" are palindromes using brute-force.

Answer:
Substrings = ["a", "b", "c", "ab", "bc", "abc"]
Palindromes = "a", "b", "c"
✅ Count = 3
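The count above can be reproduced with a short brute-force script (the function name `count_palindromic_substrings` is chosen here for illustration):

```python
def count_palindromic_substrings(s):
    # Enumerate every substring and test it by comparing with its reverse.
    count = 0
    for i in range(len(s)):
        for j in range(i + 1, len(s) + 1):
            sub = s[i:j]
            if sub == sub[::-1]:
                count += 1
    return count

print(count_palindromic_substrings("abc"))  # → 3
```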

🧮 Brute-Force Problem Set (Advanced)

Problem 2: Traveling Salesman Problem (TSP)

Problem:
Given a list of cities and the distances between each pair, find the shortest possible
route that visits each city once and returns to the origin.

Cities = A, B, C, D
Distance matrix (not necessarily symmetric):

A B C D
A 0 2 9 10
B 1 0 6 4
C 15 7 0 8
D 6 3 12 0

Brute-force Solution:

1. Generate all permutations of the cities (excluding the starting city A).
2. Compute total round-trip distance for each permutation.
3. Select the minimum.

from itertools import permutations

cities = ['A', 'B', 'C', 'D']

dist = {
    ('A', 'B'): 2, ('A', 'C'): 9, ('A', 'D'): 10,
    ('B', 'A'): 1, ('B', 'C'): 6, ('B', 'D'): 4,
    ('C', 'A'): 15, ('C', 'B'): 7, ('C', 'D'): 8,
    ('D', 'A'): 6, ('D', 'B'): 3, ('D', 'C'): 12
}

min_cost = float('inf')
best_path = []

for perm in permutations(['B', 'C', 'D']):
    cost = dist[('A', perm[0])]
    for i in range(len(perm) - 1):
        cost += dist[(perm[i], perm[i + 1])]
    cost += dist[(perm[-1], 'A')]  # Return to start
    if cost < min_cost:
        min_cost = cost
        best_path = ['A'] + list(perm) + ['A']

print("Shortest path:", best_path)
print("Minimum cost:", min_cost)

Time Complexity: O(n!) – infeasible for large n

🔍 Problem 3: Subset Sum Problem

Problem:
Given a set of integers, determine if there exists a subset whose sum is exactly a target
value T.

Set = [3, 34, 4, 12, 5, 2], Target = 9

Brute-force Approach:

 Generate all 2^n subsets

 Check if any subset sums to T

Time Complexity: O(2^n · n)

✅ Answer: Yes, subset [4, 5] sums to 9
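A minimal brute-force sketch using `itertools.combinations` to enumerate every subset (the function name is illustrative):

```python
from itertools import combinations

def subset_sum_exists(nums, target):
    # Try every subset size r; 2^n subsets in total.
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo  # first witness found
    return None

print(subset_sum_exists([3, 34, 4, 12, 5, 2], 9))  # → (4, 5)
```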

Problem 4: String Matching (Naive)

Q: Does the pattern abc exist in the text abcabcabc?

Solution (Brute-force):

1. Slide the pattern from index 0 to (n-m)


2. At each position, check if pattern matches text substring

✅ Output: Yes, at index 0, 3, 6


Time: O((n − m + 1) · m)
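The sliding check can be sketched in Python (the helper name `naive_match` is an assumption; positions here are 0-based):

```python
def naive_match(text, pattern):
    # Slide the pattern across the text; compare m characters at each shift.
    n, m = len(text), len(pattern)
    return [i for i in range(n - m + 1) if text[i:i + m] == pattern]

print(naive_match("abcabcabc", "abc"))  # → [0, 3, 6]
```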

✅ Summary: When to Use Brute-Force

Feature Comments
🔍 Simple Implementation Easy to code and reason about
Inefficient for Large Input Exponential or quadratic time
✅ Guarantees Correctness Always finds the optimal answer
Often First Step Used as baseline for improvement

Problem 2: Traveling Salesman Problem (TSP)


Problem:

Given n cities and a matrix of distances between each pair, find the shortest round-trip that visits each city exactly once and returns to the starting point.

Let’s walk through this for n = 4:

Cities: A, B, C, D

Step-by-step:

1. Fix the starting city (say A).


2. List all permutations of the remaining cities:
B, C, D → there are (n − 1)! = 3! = 6 permutations:

BCD
BDC
CBD
CDB
DBC
DCB

3. For each permutation:


o Calculate path cost like:
A→B→C→D→A
Sum all edge distances.
4. Choose the one with minimum cost.

Time Complexity: O(n!)

 You fix 1 city → permute the rest → (n − 1)! permutations

 For each permutation, you compute the path cost in O(n)

Thus:

T(n) = (n − 1)! · O(n) = O(n!)

Problem 3: Subset Sum


Problem:

Given a set of n integers and a target sum T, determine whether there exists a subset
that sums to T.

Example:
Set = [3, 34, 4, 12, 5, 2], T = 9

Step-by-step:

1. For each element, you have two choices:


o Include it
o Exclude it
2. This leads to all possible subsets:
o Number of subsets = 2^n
3. For each subset:
o Sum its elements
o Check if sum equals T

Example:

Subsets of [3, 4, 5]
→ [], [3], [4], [5], [3,4], [3,5], [4,5], [3,4,5]
→ Total: 2^3 = 8 subsets
Each subset requires up to O(n) time to sum.

Time Complexity: O(2^n · n)

 2^n subsets
 Each subset summed in O(n)

Problem 4: Naive String Matching


Problem:

Given text of length n, and pattern of length m, determine if the pattern exists in the
text.

Example:
Text = abcabcabc, Pattern = abc
n = 9, m = 3

Step-by-step:

1. Slide the pattern from position i = 0 to n - m = 6:


o Check:
T[0:3] = abc → ✅
T[1:4] = bca → ❌
T[2:5] = cab → ❌
T[3:6] = abc → ✅
T[4:7] = bca → ❌
T[5:8] = cab → ❌
T[6:9] = abc → ✅
2. At each of these n − m + 1 positions:
o You compare m characters

Time Complexity: O((n − m + 1) · m) = O(n · m)

 n − m + 1 pattern shifts
 O(m) comparisons at each

Summary of Time Complexities


Problem                Steps/Structure                 Time Complexity
TSP                    Permutations of cities          O(n!)
Subset Sum             All subset combinations         O(2^n · n)
Naive String Matching  Shifts × character comparison   O((n − m + 1) · m) ≈ O(n · m)

Python vs MATLAB Indexing


Feature Python MATLAB
Indexing Starts At 0 At 1
Slice Syntax T[start : end] (end excluded) T(start : end) (inclusive)
T[0:3] in Python Gives elements at 0, 1, 2 Not valid in MATLAB
T(1:3) in MATLAB Gives elements at 1, 2, 3 Equivalent to Python's T[0:3]

✅ MATLAB Code:

function naive_string_match(T, P)
    n = length(T);  % length of text
    m = length(P);  % length of pattern

    fprintf('Matches found at positions: ');

    for i = 1:(n - m + 1)
        % Extract substring of length m
        window = T(i : i + m - 1);

        % Compare with pattern
        if strcmp(window, P)
            fprintf('%d ', i);  % Match found at position i
        end
    end

    fprintf('\n');
end
🧮 Example Usage:

T = 'abcabcabc';
P = 'abc';
naive_string_match(T, P);

Output:

Matches found at positions: 1 4 7

Let’s explore why and how:

O((n − m + 1) · m) ≈ O(n · m)

This is a standard simplification in asymptotic analysis, and understanding it is important for accurate algorithm complexity comparisons.

🔍 Step-by-Step Explanation
Recall the Naive String Matching Time:

You slide the pattern over the text:

 You do this for (n − m + 1) positions

 At each position, you compare up to m characters

So the time complexity is:

T(n) = (n − m + 1) · m

Asymptotic Approximation
We now want to simplify this expression for Big-O notation.

Case 1: When m ≪ n (common in practice)

Let’s say:
 n = 1,000,000
 m = 10

Then:

(n − m + 1) · m = 999,991 · 10 = 9,999,910 ≈ n · m = 1,000,000 · 10 = 10,000,000

So the difference is negligible at large scale.

Case 2: When m ≈ n

Let’s say m = n/2

Then:

(n − m + 1) · m = (n − n/2 + 1) · n/2 = (n/2 + 1) · n/2 ≈ n²/4

Which is still within the same growth rate as O(n · m)

✅ Final Takeaway
In asymptotic notation, constants and small differences like +1 or -m are ignored:

O((n − m + 1) · m) = O(n · m)

Because:

 − +1≤
 And we always keep the dominant terms in Big-O

✅ Topic 2: Greedy Algorithms

Definition:

A Greedy Algorithm makes a series of locally optimal choices in the hope that this leads
to a globally optimal solution.
It doesn’t backtrack or revise previous decisions.

Key Characteristics:

Feature                 Description
Greedy-Choice Property  A global optimum can be reached by choosing the best option at each step.
Optimal Substructure    The optimal solution of a problem contains optimal solutions to subproblems.

🔍 Example 1: Activity Selection Problem


Problem:

Given n activities with start and finish times, select the maximum number of activities
that don’t overlap.

Activities:
A1: (1, 4)
A2: (3, 5)
A3: (0, 6)
A4: (5, 7)
A5: (8, 9)
A6: (5, 9)

✅ Greedy Strategy:

1. Sort activities by their finish time


2. Always pick the earliest finishing non-overlapping activity

Step-by-Step:

Activity Start Finish


A1 1 4
A2 3 5
A3 0 6
A4 5 7
A6 5 9
A5 8 9
1. Pick A1 (1–4)
2. Next is A4 (5–7)
3. Then A5 (8–9)

✅ Output: A1, A4, A5

Time Complexity:

 Sorting: O(n log n)
 Selection: O(n)
Total: O(n log n)
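The two-step strategy above can be sketched in Python (the function name and `(name, start, finish)` tuple layout are assumptions for illustration):

```python
def select_activities(activities):
    # Sort by finish time, then greedily take each compatible activity.
    chosen = []
    last_finish = float('-inf')
    for name, start, finish in sorted(activities, key=lambda a: a[2]):
        if start >= last_finish:
            chosen.append(name)
            last_finish = finish
    return chosen

activities = [("A1", 1, 4), ("A2", 3, 5), ("A3", 0, 6),
              ("A4", 5, 7), ("A5", 8, 9), ("A6", 5, 9)]
print(select_activities(activities))  # → ['A1', 'A4', 'A5']
```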

🔍 Example 2: Huffman Coding


Used in data compression.

Problem:

Given characters with frequencies, build a binary prefix code minimizing total weighted
path length.

Greedy Strategy:

 Always combine the two least frequent nodes first.

Example:

Char Freq
a 5
b 9
c 12
d 13
e 16
f 45

Huffman Tree gives codes like:


 f: 0
 c: 100
 d: 101
 a: 1100
 b: 1101
 e: 111

✅ Prefix code, no ambiguity, minimum cost
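A minimal Huffman sketch using `heapq`, with a counter to break frequency ties. The exact bit patterns depend on tie-breaking and may differ from the table above, but the code lengths agree:

```python
import heapq

def huffman_codes(freqs):
    # Each heap entry: (frequency, tiebreak counter, {char: partial code}).
    heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        # Merge the two least frequent subtrees.
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in left.items()}
        merged.update({ch: "1" + code for ch, code in right.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

codes = huffman_codes({'a': 5, 'b': 9, 'c': 12, 'd': 13, 'e': 16, 'f': 45})
print(codes)  # code lengths: f=1, c=d=e=3, a=b=4
```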

🧮 Practice Problem 1:
Q: You have coins of denominations [1, 5, 10, 25].
You want to make 63¢ using the least number of coins.
What coins will the greedy algorithm pick?

✅ Answer (Greedy Strategy):

 Pick largest coin ≤ remaining amount

Steps:

 25 → pick 2 (50)
 10 → pick 1 (60)
 1 → pick 3 (63)

✅ Greedy solution: [25, 25, 10, 1, 1, 1]


Total coins = 6

Note: Greedy works only if coin system is canonical (not always optimal for arbitrary
denominations)
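A sketch of the greedy coin picker described above (function name is illustrative; it assumes a canonical coin system):

```python
def greedy_change(coins, amount):
    # Always take the largest coin that still fits in the remaining amount.
    picked = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            picked.append(coin)
            amount -= coin
    return picked

print(greedy_change([1, 5, 10, 25], 63))  # → [25, 25, 10, 1, 1, 1]
```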

✅ Summary Table
Problem Greedy Solution Exists?
Activity Selection ✅ Yes
Huffman Coding ✅ Yes
Coin Change (US system) ✅ Yes
Coin Change (non-canonical) ❌ No

🧮 Greedy Algorithm Practice Problems (with Answers)

Problem 1: Fractional Knapsack

Problem:
You have a knapsack of capacity W = 50, and items:

Item Value Weight


A 60 10
B 100 20
C 120 30

Note: Each item is usually available once and items can be divided into fractions.

Select items (fractions allowed) to maximize value.

✅ Greedy Strategy:
Choose items with highest value/weight ratio.

Item Value/Weight
A 6
B 5
C 4

🧮 Steps:

 Take A → full (weight 10) → value 60 → remaining: 40


 Take B → full (weight 20) → value 100 → remaining: 20
 Take 20/30 of C → value = 120 × (20/30) = 80

✅ Total value = 60 + 100 + 80 = 240

Time: O(n log n) (sort by ratio)
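The ratio-based selection can be sketched as follows (item tuples `(value, weight)` are an assumed layout):

```python
def fractional_knapsack(items, capacity):
    # Sort by value/weight ratio descending, then take greedily.
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        take = min(weight, capacity)       # take a fraction if needed
        total += value * take / weight
        capacity -= take
        if capacity == 0:
            break
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # → 240.0
```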


Problem 2: Job Sequencing with Deadlines

Problem:
You’re given n jobs with deadlines and profits.
Each job takes 1 unit of time. Maximize profit if only one job can be scheduled at a time.

Job Profit Deadline


A 100 2
B 19 1
C 27 2
D 25 1
E 15 3

✅ Greedy Strategy:

1. Sort jobs by profit (desc)


2. For each job, schedule it in the latest available slot before deadline.

🧮 Steps:

1. A → deadline 2 → slot 2
2. C → deadline 2 → slot 1
3. D → deadline 1 → slot full
4. B → deadline 1 → slot full
5. E → deadline 3 → slot 3

✅ Jobs: A (2), C (1), E (3)


Max Profit = 100 + 27 + 15 = 142
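The strategy above can be sketched in Python using the naïve O(n²) slot scan (names and tuple layout are illustrative):

```python
def job_sequencing(jobs):
    # jobs: list of (name, profit, deadline); each job takes 1 unit of time.
    max_deadline = max(d for _, _, d in jobs)
    slots = [None] * (max_deadline + 1)  # slots[1..max_deadline]
    for name, profit, deadline in sorted(jobs, key=lambda j: -j[1]):
        # Place the job in the latest free slot at or before its deadline.
        for t in range(deadline, 0, -1):
            if slots[t] is None:
                slots[t] = (name, profit)
                break
    scheduled = [s for s in slots[1:] if s is not None]
    return [n for n, _ in scheduled], sum(p for _, p in scheduled)

jobs = [("A", 100, 2), ("B", 19, 1), ("C", 27, 2), ("D", 25, 1), ("E", 15, 3)]
print(job_sequencing(jobs))  # → (['C', 'A', 'E'], 142)
```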

❌ Problem 3: Greedy Fails for Coin Change

Problem:
Coin denominations = [1, 3, 4]
Target = 6

Greedy Picks:
 4 → 1 coin
 1 → 2 coins
Total = 3 coins

Optimal:

 3 + 3 → 2 coins

Why does greedy fail?


Greedy doesn't reconsider better global combinations — lacks backtracking.

Problem 4: Minimum Platforms (Interval Cover)

Problem:
Given train arrival/departure times, find minimum platforms needed.

Arrival Departure
9:00 9:40
9:10 12:00
9:50 11:20
11:00 11:30
15:00 19:00
18:00 20:00

✅ Greedy Strategy:

1. Sort arrivals and departures separately


2. Use two pointers to simulate platform usage

🧮 Steps:

 Track platforms needed at every time


 Max platforms needed: 3

Time: O(n log n)
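The two-pointer simulation can be sketched in Python, assuming times are encoded as HHMM integers (an assumption made here for simplicity):

```python
def min_platforms(arrivals, departures):
    # A train arriving before the earliest pending departure needs a new
    # platform; otherwise one platform is freed first.
    arrivals, departures = sorted(arrivals), sorted(departures)
    platforms = max_needed = 0
    i = j = 0
    while i < len(arrivals):
        if arrivals[i] < departures[j]:
            platforms += 1
            max_needed = max(max_needed, platforms)
            i += 1
        else:
            platforms -= 1
            j += 1
    return max_needed

arr = [900, 910, 950, 1100, 1500, 1800]
dep = [940, 1200, 1120, 1130, 1900, 2000]
print(min_platforms(arr, dep))  # → 3
```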

🧮 Graduate-Level Insight:
Problem Type Greedy Works? Notes
Minimum Spanning Tree ✅ Kruskal’s, Prim’s algorithms
Dijkstra (non-negative) ✅ Greedy with priority queue
Bellman-Ford (neg weights) ❌ Needs dynamic programming
Longest Path in DAG ❌ Needs DP / Topo sort

Let’s now take a detailed step-by-step walkthrough of Problem 2: Job Sequencing with
Deadlines, including how the greedy algorithm schedules each job into a slot.

✅ Problem Recap: Job Sequencing


Each job takes 1 unit of time, and only one job can run at a time.
We aim to maximize total profit while respecting each job’s deadline.

🧮 Input Table

Job Profit Deadline


A 100 2
B 19 1
C 27 2
D 25 1
E 15 3

🧮 Step 1: Sort jobs by profit in descending order


Sorted jobs:
A (100), C (27), D (25), B (19), E (15)

🧮 Step 2: Allocate slots


We will use an array of size = max deadline = 3, indexed as:
Slot: 1 2 3
Status: [ - , - , - ]

Each job will try to go into the latest free slot before its deadline.

Step-by-Step Job Allocation


1. Job A → Profit = 100, Deadline = 2

 Try slot 2 → ✅ free → assign A

Slot: 1 2 3
Status: [ - , A , - ]

2. Job C → Profit = 27, Deadline = 2

 Slot 2 → ❌ taken by A
 Try slot 1 → ✅ free → assign C

Slot: 1 2 3
Status: [ C , A , - ]

3. Job D → Profit = 25, Deadline = 1

 Slot 1 → ❌ already has C


 Cannot schedule D

4. Job B → Profit = 19, Deadline = 1

 Slot 1 → ❌ full
 Cannot schedule B

5. Job E → Profit = 15, Deadline = 3

 Try slot 3 → ✅ free → assign E


Slot: 1 2 3
Status: [ C , A , E ]

✅ Final Scheduled Jobs and Profit


Slot Job
1 C
2 A
3 E

Selected Jobs: A, C, E
Total Profit: 100 + 27 + 15 = 142

Time Complexity:
 Sorting: O(n log n)
 Slot allocation (naïve): O(n²)
 With disjoint-set (Union-Find): O(n log n)

Topic 3: Recursion Algorithms

What is Recursion?

A recursive algorithm solves a problem by solving smaller instances of the same problem.

It consists of:

 Base case(s): where the recursion stops.


 Recursive case(s): where the function calls itself on a smaller input.
🧮 How It Works
Example: Factorial

n! = n · (n − 1) · (n − 2) ⋯ 1

Recursive definition:

factorial(n) = 1                        if n = 0
               n · factorial(n − 1)     otherwise

✅ Pseudocode:

function factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

Key Concepts in Recursive Algorithms


Term Description
Base Case Terminates recursion (no more calls)
Recursive Case Function calls itself with smaller input
Stack Frame Each call adds a new layer to the call stack
Stack Overflow Occurs if base case is never reached

Example 1: Fibonacci Sequence


F(n) = F(n − 1) + F(n − 2),  F(0) = 0,  F(1) = 1

❌ Naive Recursive Pseudocode:

function fib(n):
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)
🧮 Time Complexity:

T(n) = T(n − 1) + T(n − 2) ⇒ O(2^n)

Very inefficient due to overlapping subproblems

✅ Example 2: Recursive Binary Search


Efficient way to search sorted arrays.

Pseudocode:

function binarySearch(A, low, high, x):
    if low > high:
        return -1
    mid = (low + high) // 2
    if A[mid] == x:
        return mid
    else if A[mid] > x:
        return binarySearch(A, low, mid - 1, x)
    else:
        return binarySearch(A, mid + 1, high, x)

Time Complexity: O(log n)

🧮 Where Recursion Appears in Algorithms


Problem Type Examples
Divide & Conquer Merge Sort, Quick Sort, Binary Search
Backtracking N-Queens, Sudoku, Maze Solving
Tree/Graph Traversal DFS, Tree Recursion
Dynamic Programming Fibonacci (with memoization)

Practice Problem 1: Tower of Hanoi


Move n disks from source to target, using an auxiliary pole.

Recursive Solution:
function hanoi(n, source, target, auxiliary):
    if n == 1:
        print "Move disk from", source, "to", target
    else:
        hanoi(n-1, source, auxiliary, target)
        print "Move disk from", source, "to", target
        hanoi(n-1, auxiliary, target, source)

Time Complexity: O(2^n)

Practice Problem 2: Sum of Digits


function sumDigits(n):
    if n == 0:
        return 0
    else:
        return (n mod 10) + sumDigits(n div 10)

Example:

sumDigits(1234) = 4 + 3 + 2 + 1 = 10

Let's now explore more recursion practice problems, ranging from foundational to
advanced — each with a clear recursive formulation, pseudocode, and time complexity.

🧮 Recursion Practice Problems (with Step-by-Step


Breakdown)

Problem 1: Reverse a String

Task: Reverse a string using recursion.

✨ Recursive Insight:

 Base Case: If string length is 1 → return string


 Recursive Case: Return last character + reverse(rest)

✅ Pseudocode:

function reverse(S):
    if length(S) == 1:
        return S
    return reverse(S[2..end]) + S[1]

Example:

reverse("ABCD") →
reverse("BCD") + "A" →
(reverse("CD") + "B") + "A" →
((reverse("D") + "C") + "B") + "A" →
"D" + "C" + "B" + "A" = "DCBA"

Time: O(n), where n is length of string

🧮 Problem 2: Count Ways to Climb Stairs

Task: You can climb 1 or 2 steps at a time. How many ways to reach the top of n
stairs?

✨ Recursive Insight:

 Base Case:
o f(0) = 1, f(1) = 1
 Recursive Case:
o f(n) = f(n − 1) + f(n − 2)

✅ Pseudocode:

function climbStairs(n):
    if n == 0 or n == 1:
        return 1
    return climbStairs(n-1) + climbStairs(n-2)

Time: O(2^n) naïve
Use memoization to reduce to O(n)
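A memoized sketch using `functools.lru_cache`, which brings the naïve exponential recursion down to linear time:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def climb_stairs(n):
    # f(n) = f(n-1) + f(n-2); caching each result makes this O(n).
    if n <= 1:
        return 1
    return climb_stairs(n - 1) + climb_stairs(n - 2)

print(climb_stairs(10))  # → 89
```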
Problem 3: Check Palindrome (Recursive)

Task: Check if a string is a palindrome using recursion.

✨ Recursive Insight:

 Base Case: If start ≥ end → return true


 Recursive Case: Compare S[start] == S[end] and recurse on substring

✅ Pseudocode:

function isPalindrome(S, start, end):
    if start >= end:
        return true
    if S[start] ≠ S[end]:
        return false
    return isPalindrome(S, start+1, end-1)

Time: O(n)

Problem 4: Count Nodes in a Binary Tree

Task: Count number of nodes in a binary tree recursively.

✨ Recursive Insight:

 Base Case: If node is null → return 0


 Recursive Case:
o Total = 1 (current node) + left subtree count + right subtree count

✅ Pseudocode:
function countNodes(node):
    if node == null:
        return 0
    return 1 + countNodes(node.left) + countNodes(node.right)

Time: O(n), where n is number of nodes


🧮 Problem 5: Permutations of a String

Task: Print all permutations of a string using recursion.

✨ Recursive Insight:

 Fix each character in turn, recurse on the rest.

✅ Pseudocode:

function permute(S, l, r):
    if l == r:
        print S
    else:
        for i = l to r:
            swap(S[l], S[i])
            permute(S, l+1, r)
            swap(S[l], S[i])  // backtrack

Time: O(n!)

Topic 4: Divide and Conquer

What is Divide and Conquer?

Divide and Conquer is a technique that solves a problem by:

1. Divide: Breaking it into smaller subproblems of the same type.


2. Conquer: Solving each subproblem recursively.
3. Combine: Merging the solutions of subproblems to form the final solution.

Key Properties:

Step Description
Divide Break input into smaller parts
Conquer Solve each part independently
Combine Merge results into final answer

Classic Examples

Example 1: Merge Sort

Sort an array by:

 Dividing it into halves,


 Sorting each half recursively,
 Merging the sorted halves.

✅ Pseudocode:
function mergeSort(A, low, high):
    if low < high:
        mid = (low + high) // 2
        mergeSort(A, low, mid)
        mergeSort(A, mid+1, high)
        merge(A, low, mid, high)

Time Complexity:

T(n) = 2T(n/2) + O(n) ⇒ O(n log n)

⚡ Example 2: Binary Search

Search in a sorted array by dividing it in half each time.

✅ Pseudocode:

function binarySearch(A, low, high, x):
    if low > high:
        return -1
    mid = (low + high) // 2
    if A[mid] == x:
        return mid
    else if A[mid] > x:
        return binarySearch(A, low, mid - 1, x)
    else:
        return binarySearch(A, mid + 1, high, x)

Time: O(log n)

Example 3: Maximum Subarray (Kadane’s via Divide & Conquer)

Find subarray with maximum sum.

Let the array be A[low..high]. Divide at midpoint:

 Max in left half


 Max in right half
 Max crossing the midpoint

Total time:

T(n) = 2T(n/2) + O(n) ⇒ O(n log n)

Example 4: Matrix Multiplication (Strassen’s Algorithm)

Multiply two × matrices using 7 recursive multiplications instead of 8.

T(n) = 7T(n/2) + O(n²) ⇒ O(n^(log₂ 7)) ≈ O(n^2.81)

🧮 Example 5: Closest Pair of Points (2D)

Given n points in 2D space, find the closest pair.

 Sort by x and y
 Recursively divide
 Combine using strip in O(n)

Time: O(n log n)

Summary Table
Problem                 Time Complexity  Divide/Combine Cost
Merge Sort              O(n log n)       Merge: O(n)
Binary Search           O(log n)         Constant
Maximum Subarray (DC)   O(n log n)       Merge: O(n)
Matrix Multiplication   O(n^2.81)        Divide: 7 subcalls
Closest Pair of Points  O(n log n)       Strip merge O(n)

🧮 Practice Problem
Problem: Count Inversions in an Array

Definition: Inversion = pair (i, j) where i < j and A[i] > A[j]

Use modified Merge Sort to count inversions during merge.

Time: O(n log n)

Here's a set of practice problems that apply the Divide and Conquer paradigm — each
with step-by-step solutions, illustrations of the recursive process, and time complexity
analysis.

🧮 Divide and Conquer Practice Problems (with Solutions)

Problem 1: Find the Maximum and Minimum in an Array

Goal: Find both the min and max using the minimum number of comparisons.

✅ Strategy (Divide and Conquer):

 Divide the array into two halves


 Recursively find min and max in each half
 Compare results
Pseudocode:

function minMax(arr, low, high):
    if low == high:
        return (arr[low], arr[low])  // single element
    if high == low + 1:
        return (min(arr[low], arr[high]), max(arr[low], arr[high]))

    mid = (low + high) // 2
    (min1, max1) = minMax(arr, low, mid)
    (min2, max2) = minMax(arr, mid+1, high)
    return (min(min1, min2), max(max1, max2))

Time Complexity:

T(n) = 2T(n/2) + 2 ⇒ O(n)

✅ Better than naive 2n − 2 comparisons

Problem 2: Count Inversions in an Array

Definition: An inversion is a pair (i, j) such that:

i < j  and  A[i] > A[j]

✅ Strategy: Modify Merge Sort

 While merging, count how many elements from the right array jump ahead of
left ones.

Pseudocode:

function countInversions(arr, low, high):
    if low >= high:
        return 0
    mid = (low + high) // 2
    count = countInversions(arr, low, mid)
          + countInversions(arr, mid+1, high)
          + mergeAndCount(arr, low, mid, high)
    return count

function mergeAndCount(arr, low, mid, high):
    i = low, j = mid+1, k = 0, inv_count = 0
    temp = []

    while i <= mid and j <= high:
        if arr[i] <= arr[j]:
            temp[k++] = arr[i++]
        else:
            temp[k++] = arr[j++]
            inv_count += (mid - i + 1)  // all remaining in left

    // Copy remaining elements
    ...
    return inv_count

Time: O(n log n)

Problem 3: Majority Element (appears > n/2 times)

Find the element that appears more than n/2 times in an array.

✅ Strategy (Divide and Conquer):

1. Divide array in half


2. Recursively find majority in each half
3. Count if any of them is the majority in the whole array

🧮 Time Complexity:

T(n) = 2T(n/2) + O(n) ⇒ O(n log n)

🧮 Problem 4: Find the k-th Smallest Element (Order Statistics)

Given unsorted array, find the k-th smallest element.

✅ Strategy: QuickSelect (based on QuickSort)

function quickSelect(arr, low, high, k):
    pivotIndex = partition(arr, low, high)
    if pivotIndex == k:
        return arr[pivotIndex]
    elif k < pivotIndex:
        return quickSelect(arr, low, pivotIndex - 1, k)
    else:
        return quickSelect(arr, pivotIndex + 1, high, k)

Average Time: O(n)
Worst-case: O(n²)

Problem 5: Closest Pair of Points (2D Geometry)

Given n points in 2D space, find pair with minimum Euclidean distance.

✅ Strategy:

1. Sort points by x and y


2. Recursively divide into halves
3. Merge by checking "strip" near dividing line

Time: O(n log n)

🧮 Summary Table
Problem Name            Approach            Time Complexity
Min & Max               Divide pairs        O(n)
Count Inversions        Merge Sort Based    O(n log n)
Majority Element        Recursive Checking  O(n log n)
k-th Smallest (Select)  QuickSelect         Avg. O(n)
Closest Pair of Points  Geometric Divide    O(n log n)

🧮 Topic 5: Dynamic Programming (DP)


What is Dynamic Programming?

Dynamic Programming solves complex problems by breaking them down into simpler
overlapping subproblems, and solving each only once, storing the results.

Key Concepts

Concept                  Description
Optimal Substructure     An optimal solution to the problem contains optimal solutions to subproblems
Overlapping Subproblems  Subproblems are solved multiple times
Memoization (Top-Down)   Recursive + caching results
Tabulation (Bottom-Up)   Iterative, build table from smallest cases

Classic DP Example: Fibonacci Numbers


Recursive (inefficient):

function fib(n):
    if n <= 1: return n
    return fib(n-1) + fib(n-2)

Time: O(2^n)

✅ DP with Memoization:

function fib(n, memo):
    if n in memo: return memo[n]
    if n <= 1: return n
    memo[n] = fib(n-1, memo) + fib(n-2, memo)
    return memo[n]

Time: O(n)

✅ DP with Tabulation:
function fib(n):
    dp = array of size n+1
    dp[0] = 0
    dp[1] = 1
    for i = 2 to n:
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

Most Common Dynamic Programming Problems


Let’s explore each one in detail:

1. 0/1 Knapsack Problem

 Each item: weight and value


 Take or leave each item to maximize value within weight capacity

DP Recurrence:

dp[i][w] = max(dp[i−1][w], v[i] + dp[i−1][w − w[i]])

Time: O(n · W)

🧮 2. Longest Common Subsequence (LCS)

Given strings A and B, find the longest sequence common to both.

Recurrence:

dp[i][j] = 0                               if i = 0 or j = 0
           dp[i−1][j−1] + 1                if A[i] = B[j]
           max(dp[i−1][j], dp[i][j−1])     otherwise

Time: O(m · n)
3. Matrix Chain Multiplication

Find minimum cost to multiply a chain of matrices.

Recurrence:

dp[i][j] = min over i ≤ k < j of (dp[i][k] + dp[k+1][j] + p[i−1] · p[k] · p[j])

Time: O(n³)

4. Edit Distance (Levenshtein)

Minimum number of insert, delete, and replace to convert string A to B.

Recurrence:

dp[i][j] = dp[i−1][j−1]                                   if A[i] = B[j]
           1 + min(dp[i−1][j], dp[i][j−1], dp[i−1][j−1])  otherwise

Time: O(m · n)

5. Longest Increasing Subsequence (LIS)

Find length of the longest subsequence such that A[i1] < A[i2] < ⋯ < A[ik]

Basic DP:

dp[i] = max(dp[j] + 1)   for all j < i with A[j] < A[i]

Time: O(n²), or O(n log n) with binary search

Summary Table
Problem                      Time Complexity  Approach
0/1 Knapsack                 O(n · W)         DP table
Longest Common Subsequence   O(m · n)         2D table
Matrix Chain Multiplication  O(n³)            DP on intervals
Edit Distance                O(m · n)         2D table
Longest Increasing Subseq.   O(n²)            1D DP array

Here's a set of Dynamic Programming (DP) practice problems — one for each classic DP
type — with step-by-step solutions and time complexities.

🧮 Dynamic Programming Practice Problems (with


Solutions)

🧮 Problem 1: 0/1 Knapsack

Given n items, each with value v[i] and weight w[i], and a knapsack capacity W, find
the maximum total value of items that fit in the knapsack.

Example:

 Values: v = [60, 100, 120]


 Weights: w = [10, 20, 30]
 Capacity: W = 50

Solution:

Let dp[i][j] = max value using first i items and total weight j

Initialize dp[0][*] = 0

dp[i][j] = max(dp[i-1][j], dp[i-1][j - w[i]] + v[i]) if w[i] ≤ j

Final Answer:

Maximum value = 220

Take items 2 and 3 (weights 20 + 30, values 100 + 120)

Time: O(n · W)
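A sketch of the recurrence above, using the common 1-D space optimization (iterating weights downward so each item is taken at most once; names are illustrative):

```python
def knapsack_01(values, weights, capacity):
    # dp[j] = best value achievable with total weight at most j.
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Go downward so dp[j - w] still refers to the previous item row.
        for j in range(capacity, w - 1, -1):
            dp[j] = max(dp[j], dp[j - w] + v)
    return dp[capacity]

print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # → 220
```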

🧮 Problem 2: Longest Common Subsequence (LCS)

Given two strings A and B, find the length of the longest subsequence common to
both.

Example:

 A = "ABCBDAB"
 B = "BDCABA"

Solution:

Let dp[i][j] be LCS of A[0..i] and B[0..j]

if A[i-1] == B[j-1]:
    dp[i][j] = dp[i-1][j-1] + 1
else:
    dp[i][j] = max(dp[i-1][j], dp[i][j-1])

Fill 2D table, answer is dp[m][n]

Final Answer:

LCS = "BCBA" or "BDAB" → Length = 4

Time: O(m · n)
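The 2-D table fill described above can be sketched as:

```python
def lcs_length(a, b):
    # dp[i][j] = LCS length of a[:i] and b[:j].
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # → 4
```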

🧮 Problem 3: Matrix Chain Multiplication


Given dimensions [10, 20, 30, 40, 30], find the minimum number of scalar
multiplications to compute the matrix product.

Solution:

Use DP table dp[i][j] = min multiplications for matrices A_i..A_j

dp[i][j] = min over k of:
    dp[i][k] + dp[k+1][j] + p[i-1] * p[k] * p[j]

Final Answer:

Minimum multiplications = 30000

Time: O(n³)
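A sketch of the interval DP for the dimensions above (1-based matrix indices, as in the recurrence):

```python
def matrix_chain(p):
    # p holds dimensions: matrix i is p[i-1] x p[i], for i = 1..n.
    n = len(p) - 1
    dp = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            dp[i][j] = min(dp[i][k] + dp[k + 1][j] + p[i - 1] * p[k] * p[j]
                           for k in range(i, j))
    return dp[1][n]

print(matrix_chain([10, 20, 30, 40, 30]))  # → 30000
```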

Problem 4: Edit Distance

Convert "intention" to "execution" using minimum number of insert/delete/replace


operations.

Solution:

Use dp[i][j] = min operations to convert A[0..i] to B[0..j]

if A[i-1] == B[j-1]: dp[i][j] = dp[i-1][j-1]
else: dp[i][j] = 1 + min(dp[i-1][j], dp[i][j-1], dp[i-1][j-1])

Final Answer:

Minimum operations = 5

Time: O(m · n)
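The table fill, with the standard first-row/first-column initialization, can be sketched as:

```python
def edit_distance(a, b):
    # dp[i][j] = min operations to convert a[:i] into b[:j].
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j          # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[m][n]

print(edit_distance("intention", "execution"))  # → 5
```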
Problem 5: Longest Increasing Subsequence (LIS)

Find the LIS length in [10, 22, 9, 33, 21, 50, 41, 60]

Solution:

Let dp[i] = length of LIS ending at index i

dp[i] = max(dp[j] + 1 for j < i and A[j] < A[i])

Final Answer:

LIS = [10, 22, 33, 50, 60] → Length = 5

Time: O(n²), or O(n log n) with binary search
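The 1-D DP described above can be sketched as:

```python
def lis_length(a):
    # dp[i] = length of the longest increasing subsequence ending at index i.
    dp = [1] * len(a)
    for i in range(len(a)):
        for j in range(i):
            if a[j] < a[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)

print(lis_length([10, 22, 9, 33, 21, 50, 41, 60]))  # → 5
```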

🧮 Summary Table

Problem                      Solution Type         Final Answer  Time Complexity
0/1 Knapsack                 DP Table              220           O(n · W)
Longest Common Subsequence   2D Table              4             O(m · n)
Matrix Chain Multiplication  Interval DP           30000         O(n³)
Edit Distance                2D Table              5             O(m · n)
Longest Increasing Subseq.   1D Array (Bottom-Up)  5             O(n²) or O(n log n)

Let's now explore a set of Advanced Graduate-Level Dynamic Programming (DP) problems. These problems are designed to test deep understanding of DP formulation, state transitions, space optimization, and sometimes combining DP with bitmasking, graphs, or greedy insights.
Advanced Graduate-Level Dynamic Programming
Problems (with Hints & Complexity)

1. DP + Bitmask: Traveling Salesman Problem (TSP)

Given n cities and a distance matrix dist[i][j], find the shortest path that visits all cities
exactly once and returns to the start.

🧮 DP State:

dp[mask][i] = min cost to visit cities in ’mask’ and end at city ’i’

 mask: bitmask representing visited cities


 i: last visited city

Transition:

dp[mask][i] = min over j ∈ mask, j ≠ i of (dp[mask ∖ {i}][j] + dist[j][i])

Time: O(n² · 2^n)

✅ Used in:

 Combinatorial optimization
 State compression

🧮 2. DP + Greedy Insight: Minimum Number of Refueling Stops

A car travels from point A to B with a certain fuel tank capacity. Given fuel stations on
the path, return the minimum number of stops to reach the destination.

DP Idea:
 Let dp[i] = max distance reachable with i refuels

Loop through stations and update dp from high to low.

Time: O(n²)

✅ Combines DP with greedy updates

🧮 3. Palindrome Partitioning

Given string S, partition it into fewest palindromic substrings.

DP State:

dp[i] = min{ 1 + dp[j] | j < i and S[j+1 … i] is a palindrome }

Precompute all palindromes using dynamic programming.

Time: O(n²)

✅ Applications: Natural language processing, bioinformatics

🧮 4. DP on Trees: Maximum Weight Independent Set

Given a tree where each node has a value, select a subset of non-adjacent nodes
with the maximum sum.

DP States:

dp[u][0] = max weight if u is not selected
dp[u][1] = max weight if u is selected
Use DFS:

 If selected, can't include children


 If not selected, choose best of selecting or not selecting each child

Time: ( )

✅ Used in: Tree algorithms, compiler design, bioinformatics

🧮 5. Digit DP: Count Numbers with Given Properties

Count how many numbers ≤ N have no two adjacent digits equal, or satisfy some
other digit-level condition.

DP State:

dp[pos][tight][prev_digit] = number of valid numbers from this position onward

 tight: whether we are still constrained by the prefix of N
 prev_digit: the digit placed just before the current position

Time: O(d · 10 · 2), where d is the number of digits

✅ Used in number theory, combinatorics, competitive programming

🧮 Summary Table
Problem                               Technique            Time Complexity
Traveling Salesman Problem (TSP)      Bitmask DP           O(n² · 2ⁿ)
Minimum Refueling Stops               DP + Greedy          O(n²)
Palindrome Partitioning               2D DP                O(n²)
Max Weight Independent Set (Tree)     Tree DP (2 states)   O(n)
Digit DP                              DP on digits         O(d · 10)

Advanced Dynamic Programming Problem Solutions

1. Traveling Salesman Problem (TSP) using Bitmask DP

Problem: Given an n x n distance matrix dist, find the minimum cost to visit all cities
exactly once and return to the starting city.

Approach: Use bitmasking to represent subsets of visited cities.

DP State:

 dp[mask][i] = minimum cost to visit cities in mask and end at city i

Transition:

for mask in range(1 << n):
    for u in range(n):
        if (mask >> u) & 1:                  # u is in the visited set
            for v in range(n):
                if not (mask >> v) & 1:      # v not yet visited
                    next_mask = mask | (1 << v)
                    dp[next_mask][v] = min(dp[next_mask][v], dp[mask][u] + dist[u][v])

Final Answer:

min(dp[(1 << n) - 1][i] + dist[i][0] for i in range(n))

Time Complexity: O(n^2 * 2^n)
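The transition loop above assumes dp has already been initialized (dp[1][0] = 0, everything else infinity). A self-contained Held-Karp sketch putting initialization, transition, and the final answer together (the function name tsp_bitmask is illustrative):

```python
def tsp_bitmask(dist):
    """Minimum cost tour starting and ending at city 0, visiting every city once."""
    n = len(dist)
    INF = float('inf')
    # dp[mask][i] = min cost to start at 0, visit exactly the cities in mask, end at i
    dp = [[INF] * n for _ in range(1 << n)]
    dp[1][0] = 0  # only city 0 visited, currently at city 0
    for mask in range(1 << n):
        for u in range(n):
            if dp[mask][u] == INF:
                continue
            for v in range(n):
                if not (mask >> v) & 1:          # extend the tour to unvisited city v
                    nxt = mask | (1 << v)
                    cand = dp[mask][u] + dist[u][v]
                    if cand < dp[nxt][v]:
                        dp[nxt][v] = cand
    full = (1 << n) - 1
    return min(dp[full][i] + dist[i][0] for i in range(1, n))
```

On the classic 4-city instance [[0,10,15,20],[10,0,35,25],[15,35,0,30],[20,25,30,0]] this returns 80 (the tour 0 → 1 → 3 → 2 → 0).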

🧮 2. Minimum Number of Refueling Stops

Problem: Given target distance target, initial fuel, and gas stations, find the minimum
number of refueling stops.

Approach: Let dp[i] = max distance reachable with i refuels


Transition: For each station:

for i in reversed(range(num_refuels + 1)):
    if dp[i] >= station_pos:
        dp[i + 1] = max(dp[i + 1], dp[i] + fuel_at_station)

Answer: Smallest i such that dp[i] >= target

Time Complexity: O(n^2)
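Combining the DP idea with the high-to-low update loop, a minimal runnable sketch (the function name min_refuel_stops and the (position, fuel) station format are assumptions for illustration):

```python
def min_refuel_stops(target, start_fuel, stations):
    """dp[i] = farthest position reachable with exactly i refueling stops.

    stations: list of (position, fuel) pairs, sorted by position.
    Returns -1 if the target cannot be reached.
    """
    n = len(stations)
    dp = [start_fuel] + [0] * n
    for k, (pos, fuel) in enumerate(stations):
        # update from high to low so each station is counted at most once
        for i in range(k, -1, -1):
            if dp[i] >= pos:
                dp[i + 1] = max(dp[i + 1], dp[i] + fuel)
    for i, reach in enumerate(dp):
        if reach >= target:
            return i
    return -1
```

For example, min_refuel_stops(100, 10, [(10, 60), (20, 30), (30, 30), (60, 40)]) returns 2: stop at the first and last stations.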

🧮 3. Palindrome Partitioning

Problem: Partition a string into the fewest number of palindromic substrings.

Approach: Precompute palindrome substrings. Then:

DP State: dp[i] = min cuts needed for s[0..i]

Transition:

for i in range(n):
    for j in range(i + 1):
        if is_palindrome[j][i]:
            # j == 0 means s[0..i] itself is a palindrome: no cut needed
            dp[i] = 0 if j == 0 else min(dp[i], 1 + dp[j - 1])

Time Complexity: O(n^2)
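A self-contained sketch combining the palindrome precomputation with the cut recurrence (the function name is illustrative):

```python
def min_palindrome_cuts(s):
    """Minimum number of cuts so every resulting piece of s is a palindrome."""
    n = len(s)
    # is_pal[j][i]: s[j..i] is a palindrome; filled in increasing i so that
    # the inner substring is_pal[j+1][i-1] is already known
    is_pal = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            if s[j] == s[i] and (i - j < 2 or is_pal[j + 1][i - 1]):
                is_pal[j][i] = True
    dp = [0] * n
    for i in range(n):
        if is_pal[0][i]:
            dp[i] = 0          # whole prefix already a palindrome
        else:
            dp[i] = min(1 + dp[j - 1] for j in range(1, i + 1) if is_pal[j][i])
    return dp[n - 1]
```

For example, "aab" needs 1 cut ("aa" | "b") and "racecar" needs 0.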

🧮 4. Maximum Weight Independent Set in a Tree

Problem: Given a tree with node weights, find the maximum sum of non-adjacent
nodes.

Approach: Tree DP with two states per node:

 dp[u][0]: Max sum excluding node u


 dp[u][1]: Max sum including node u

Transition:

def dfs(u, parent):
    dp[u][1] = weight[u]
    for v in tree[u]:
        if v != parent:
            dfs(v, u)
            dp[u][0] += max(dp[v][0], dp[v][1])
            dp[u][1] += dp[v][0]

Final Answer: max(dp[root][0], dp[root][1])

Time Complexity: O(n)
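Wrapping the DFS in a self-contained function (an adjacency-list tree and root node 0 are assumed for illustration):

```python
def max_weight_independent_set(tree, weight, root=0):
    """tree: adjacency list; weight: per-node weights. Returns the best sum."""
    dp = [[0, 0] for _ in range(len(weight))]  # dp[u] = [u excluded, u included]

    def dfs(u, parent):
        dp[u][1] = weight[u]
        for v in tree[u]:
            if v != parent:
                dfs(v, u)
                dp[u][0] += max(dp[v][0], dp[v][1])  # child free to be in or out
                dp[u][1] += dp[v][0]                 # u taken, so children excluded

    dfs(root, -1)
    return max(dp[root][0], dp[root][1])
```

On the path 0–1–2 with weights [10, 5, 10], the answer is 20 (take the two endpoints).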

🧮 5. Digit DP

Problem: Count numbers <= N where no two adjacent digits are equal.

Approach: Use digit-by-digit recursion with memoization.

DP State: dp[pos][tight][prev_digit]

Transition:

def dfs(pos, tight, prev_digit):
    if pos == len(digits):
        return 1
    if memo[pos][tight][prev_digit] is not None:
        return memo[pos][tight][prev_digit]

    res = 0
    max_digit = digits[pos] if tight else 9
    for d in range(max_digit + 1):
        if d != prev_digit:
            res += dfs(pos + 1, tight and d == max_digit, d)
    memo[pos][tight][prev_digit] = res
    return res

Time Complexity: O(d * 10 * 2) = O(d)
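The recursion above reads digits and memo from an enclosing scope and, as written, treats leading zeros as real digits (so a short number like 5, seen as 005, would be rejected). A self-contained variant using functools.lru_cache and an explicit started flag (an addition not shown above) might look like:

```python
from functools import lru_cache

def count_no_adjacent_equal(N):
    """Count integers in [1, N] whose decimal digits never repeat adjacently."""
    digits = [int(c) for c in str(N)]

    @lru_cache(maxsize=None)
    def dfs(pos, tight, prev, started):
        if pos == len(digits):
            return 1 if started else 0      # skip the all-zeros "number"
        limit = digits[pos] if tight else 9
        total = 0
        for d in range(limit + 1):
            if started and d == prev:
                continue                     # adjacent equal digits are forbidden
            now_started = started or d != 0
            total += dfs(pos + 1, tight and d == limit,
                         d if now_started else -1, now_started)
        return total

    return dfs(0, True, -1, False)
```

For N = 100 this gives 90: only 11, 22, …, 99 and 100 (whose last two digits repeat) are excluded.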

Here’s a comprehensive set of True/False and Multiple Choice Questions (MCQs) covering all five algorithm design techniques:

✅ True / False Questions


(Correct answer is provided after each question)

1. Brute-force algorithms always produce optimal solutions.


→ ❌ False (They guarantee correctness but may not be optimal in time.)
2. Greedy algorithms make decisions by solving subproblems optimally, which
guarantees a globally optimal solution for all problems.
→ ❌ False (Only works for problems with greedy-choice property and optimal
substructure.)
3. Recursion can solve problems that exhibit overlapping subproblems and optimal
substructure.
→ ✅ True
4. Dynamic Programming is more efficient than naive recursion when subproblems
overlap.
→ ✅ True
5. Divide and Conquer splits problems into subproblems and combines their
solutions.
→ ✅ True
6. Memoization is a bottom-up approach to DP.
→ ❌ False (It is a top-down approach.)
7. Fractional Knapsack problem can be solved optimally by a greedy algorithm.
→ ✅ True
8. The TSP problem can be solved efficiently in polynomial time using dynamic
programming.
→ ❌ False (TSP is NP-hard; DP gives exponential solution with better structure.)

Multiple Choice Questions (MCQs)

Brute Force

1. What is the time complexity of generating all permutations of an array of size n?


o A) O(n)
o B) O(n log n)
o C) O(n²)
o D) O(n!) ✅
2. Brute-force algorithms are typically used when:
o A) Input size is large
o B) No optimal solution exists
o C) Input size is small ✅
o D) Problem is recursive only

Greedy Algorithms

3. Which of the following problems can be solved optimally using a greedy


strategy?
o A) 0/1 Knapsack
o B) Longest Increasing Subsequence
o C) Fractional Knapsack ✅
o D) Edit Distance
4. In Job Sequencing Problem, what is the key step in the greedy strategy?
o A) Sort by deadline
o B) Sort by profit (descending) ✅
o C) Sort by weight
o D) Sort by value/weight

Recursion

5. Recursion is efficient only if:


o A) There is no recomputation ✅
o B) It is used with greedy
o C) Depth is fixed
o D) It avoids DP
6. What data structure is used by the system to manage recursive function calls?
o A) Queue
o B) Heap
o C) Stack ✅
o D) Array

Divide and Conquer

7. Which of the following uses divide and conquer?


o A) Insertion Sort
o B) Merge Sort ✅
o C) Knapsack
o D) BFS
8. What is the recurrence relation of Merge Sort?
o A) T(n) = T(n-1) + O(n)
o B) T(n) = 2T(n/2) + O(n) ✅
o C) T(n) = T(n/2) + O(1)
o D) T(n) = T(n) + T(n-1)

Dynamic Programming

9. Which of the following is a necessary condition for dynamic programming?


o A) Optimal complexity
o B) Sorted input
o C) Overlapping subproblems ✅
o D) Exponential solution
10. What is the typical time complexity of solving LCS using DP?
o A) O(n)
o B) O(n log n)
o C) O(n × m) ✅
o D) O(2^n)
11. In DP, memoization is:
o A) Bottom-up
o B) Randomized
o C) Top-down ✅
o D) Greedy-like

✅ True / False Questions (Set 2)


12. Dynamic programming is a method to avoid recomputation in recursive
solutions.
→ ✅ True
13. In divide and conquer, subproblems are always overlapping.
→ ❌ False (They are independent; overlapping subproblems occur in DP.)
14. Memoization stores results before solving subproblems.
→ ❌ False (Memoization stores results after solving subproblems.)
15. Every problem that can be solved by divide and conquer can also be solved by
DP.
→ ❌ False (Only if the problem has overlapping subproblems and optimal
substructure.)
16. Greedy algorithms make decisions that are locally optimal, hoping to find a
global optimum.
→ ✅ True
17. Recursion must always be converted to DP to be efficient.
→ ❌ False (Not always necessary; depends on overlap.)
18. Merge Sort and Quick Sort both use the Divide and Conquer paradigm.
→ ✅ True
19. DP bottom-up approach typically uses less memory than top-down memoization.
→ ✅ True
20. Dynamic programming can solve NP-complete problems in polynomial time.
→ ❌ False (It can only help in pseudo-polynomial time for some cases.)

Multiple Choice Questions (MCQs) – Advanced Set

Brute Force

21. What is the worst-case time complexity of the brute-force string matching
algorithm?

 A) O(log n)
 B) O((n - m + 1) * m) ✅
 C) O(n²)
 D) O(m log n)

22. Brute-force is guaranteed to find:

 A) A solution quickly
 B) A solution using least memory
 C) All possible solutions (if designed that way) ✅
 D) An approximate solution

Greedy Algorithms

23. Which of the following is not typically solved using a greedy strategy?

 A) Huffman coding
 B) Fractional knapsack
 C) 0/1 knapsack ✅
 D) Activity selection
24. What must be true about a problem for greedy to give the optimal result?

 A) Sorted input
 B) No overlapping subproblems
 C) Greedy-choice property and optimal substructure ✅
 D) Strictly increasing input

25. Huffman coding is based on:

 A) Brute-force
 B) Divide and Conquer
 C) Greedy ✅
 D) Dynamic Programming

Recursion

26. The base case in recursion is important because:

 A) It allows sorting
 B) It stops infinite calls ✅
 C) It speeds up recursion
 D) It enables greedy approach

27. Which of the following problems is most naturally solved by recursion?

 A) Linear search
 B) Tower of Hanoi ✅
 C) Matrix multiplication
 D) DFS in tree

Divide and Conquer

28. In QuickSort, what is the average-case time complexity?

 A) O(n log n) ✅
 B) O(n²)
 C) O(n)
 D) O(log n)

29. Which algorithm does not follow divide-and-conquer?


 A) Merge Sort
 B) Binary Search
 C) Bellman-Ford ✅
 D) QuickSort

30. In divide-and-conquer, combining subproblem results typically takes:

 A) Constant time
 B) Linear time (depends on problem) ✅
 C) Exponential time
 D) Logarithmic time

Dynamic Programming

31. What is the main difference between divide and conquer and dynamic
programming?

 A) Recursion depth
 B) Use of arrays
 C) Overlapping subproblems in DP ✅
 D) Time complexity

32. Which problem is not typically solved using dynamic programming?

 A) Longest common subsequence


 B) Matrix chain multiplication
 C) Merge sort ✅
 D) Edit distance

33. Which of the following is a key characteristic of DP?

 A) Local optimal choice


 B) Divide and merge
 C) Storing subproblem solutions ✅
 D) Recursion only

34. The time complexity of bottom-up DP for 0/1 knapsack is:

 A) O(n)
 B) O(2^n)
 C) O(n log W)
 D) O(n * W) ✅
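The O(n × W) bound in question 34 comes from filling one table cell per (item, capacity) pair; a minimal bottom-up sketch using the standard 1-D rolling-array space optimization (function name is illustrative):

```python
def knapsack_01(weights, values, W):
    """Bottom-up 0/1 knapsack: O(n * W) time, O(W) space."""
    dp = [0] * (W + 1)                       # dp[cap] = best value within capacity cap
    for w, v in zip(weights, values):
        # iterate capacity downward so each item is used at most once
        for cap in range(W, w - 1, -1):
            dp[cap] = max(dp[cap], dp[cap - w] + v)
    return dp[W]
```

For weights [1, 3, 4, 5], values [1, 4, 5, 7], and W = 7, the best value is 9 (items of weight 3 and 4).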
Here's a comprehensive Graduate-Level Advanced Quiz Set on algorithm design
techniques, including challenging True/False and Multiple Choice Questions (MCQs).
Each question includes the correct answer and often explanation, suitable for graduate-
level understanding.

Advanced True / False Questions


1. The Master Theorem applies only to divide-and-conquer recurrences where all
subproblems are of equal size.
→ ✅ True
(Example: T(n) = aT(n/b) + f(n), where each subproblem is size n/b.)
2. Greedy algorithms always produce optimal solutions for problems with
optimal substructure.
→ ❌ False
(Optimal substructure is necessary but not sufficient. You also need the greedy-
choice property.)
3. Dynamic programming can be used to solve NP-complete problems in
polynomial time.
→ ❌ False
(DP gives pseudo-polynomial time solutions for some NP-complete problems like
0/1 Knapsack.)
4. The recursive tree method for solving recurrences assumes constant work at
each level.
→ ❌ False
(The work may vary; e.g., T(n) = 2T(n/2) + n does O(n) at each level.)
5. All problems solvable using dynamic programming can also be solved with
divide-and-conquer.
→ ❌ False
(DP relies on overlapping subproblems; divide-and-conquer does not reuse
them.)
6. In digit DP, the ‘tight’ parameter ensures the generated number does not
exceed the original number.
→ ✅ True
7. Bitmasking is often used to compress the state space in DP problems involving
subsets.
→ ✅ True
🧮 Advanced Multiple Choice Questions (MCQs)

1. In the Traveling Salesman Problem (TSP), what does the bitmask dp[mask][i]
represent?

A) The total number of cities visited


B) The minimum cost of visiting all cities
C) The minimum cost to reach city i having visited cities in mask ✅
D) The number of ways to reach city i

2. Which of the following is NOT typically solved using dynamic


programming?

A) Longest Increasing Subsequence


B) Topological Sort ✅
C) Matrix Chain Multiplication
D) Edit Distance

3. Which recurrence best represents the time complexity of Merge Sort?

A) T(n) = T(n-1) + n
B) T(n) = 2T(n/2) + O(n) ✅
C) T(n) = T(n/2) + T(n-1)
D) T(n) = T(n) + T(1)

4. For which class of problems is the greedy algorithm guaranteed to find an


optimal solution?

A) All optimization problems


B) NP-complete problems
C) Problems with greedy-choice property and optimal substructure ✅
D) Divide-and-conquer problems
5. Which of the following properties is necessary for dynamic programming
to be applicable?

A) Input sorted in order


B) Local optimality
C) Overlapping subproblems ✅
D) Independent subproblems

6. You are solving a problem with n stages and at each stage k decisions.
What is the time complexity using top-down DP with memoization
(assuming constant cost per subproblem)?

A) O(nk) ✅
B) O(n + k)
C) O(n log k)
D) O(n²k²)

7. Which is the best strategy for reducing space complexity in a bottom-up


dynamic programming solution?

A) Use recursive calls


B) Store only required states per iteration ✅
C) Use global variables
D) Store full solution tree

8. Which problem is best suited to be solved using digit DP?

A) Counting numbers <= N that satisfy digit-based constraints ✅


B) Shortest path in graph
C) Sequence alignment
D) All-pairs minimum spanning tree

9. Which strategy would best suit solving the problem: 'Given a tree, find the
maximum-weight independent set'?
A) Brute-force
B) Greedy
C) Tree-based dynamic programming (DFS) ✅
D) Bitmask DP

10. Strassen’s algorithm for matrix multiplication is an example of:

A) Greedy
B) DP
C) Divide and Conquer ✅
D) Backtracking

✅ Advanced True/False – Solutions & Explanations


1. ✅ True
The Master Theorem applies to divide-and-conquer recurrences of the form:

T(n) = aT(n/b) + f(n)

where subproblems are equal in size.

2. ❌ False
Even with optimal substructure, greedy algorithms require the greedy-choice
property to guarantee correctness. Without it, they may fail to find the optimal
solution.
3. ❌ False
DP can help in pseudo-polynomial time (like in 0/1 Knapsack) but cannot solve
NP-complete problems in polynomial time unless P = NP.
4. ❌ False
The recursive tree method accounts for varying work at each level. For example,
merge sort does O(n) work at each level even though the subproblem size halves.
5. ❌ False
Problems like Longest Common Subsequence benefit from DP due to
overlapping subproblems. Divide-and-conquer won't reuse solutions, making it
inefficient.
6. ✅ True
In digit DP, tight ensures that we don’t generate numbers greater than the input
number, thus constraining recursion.
7. ✅ True
Bitmask DP is ideal for problems with subset enumeration, like TSP or set cover.
Bitmasking helps reduce memory and improve clarity.

🧮 Advanced MCQs – Worked-Out Answers


1. TSP with Bitmask

✅ Correct Answer: C
In dp[mask][i], mask encodes the cities visited, and i is the last city. This allows computing
the minimum cost path ending at city i with a known set of visited cities.

2. DP Inapplicable

✅ Correct Answer: B – Topological Sort


Topological sorting is a linear-time algorithm (DFS-based) and does not involve
overlapping subproblems or optimization, so DP is not suitable.

3. Merge Sort Recurrence

✅ Correct Answer: B
Merge sort splits into 2 halves (T(n/2)) and merges in linear time (O(n)), giving:

T(n) = 2T(n/2) + O(n)

4. Greedy Strategy Success Criteria

✅ Correct Answer: C
The greedy method works optimally only when the problem has:

 Greedy-choice property (local optimum leads to global optimum)


 Optimal substructure

5. DP Prerequisites
✅ Correct Answer: C
Dynamic programming is useful when:

 Subproblems overlap
 Solutions can be reused

6. Top-down DP Time

✅ Correct Answer: A – O(nk)


With n stages and k options per stage, and constant work per subproblem, memoization
avoids recomputation, leading to O(nk).

7. Space Optimization

✅ Correct Answer: B
In bottom-up DP, if only the last row/state is needed (e.g., Fibonacci), we can reduce
space from O(n) to O(1) or from O(n×k) to O(k).
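The Fibonacci case mentioned here, keeping only the two most recent states instead of a full dp[0..n] table, might look like:

```python
def fib(n):
    # rolling pair of states: O(n) time, O(1) space
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

fib(10) returns 55, exactly as the full-table version would.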

8. Digit DP Use Case

✅ Correct Answer: A
Digit DP is used for counting numbers under a certain limit that satisfy digit-level
conditions, such as “no adjacent equal digits”.

9. Tree Problem

✅ Correct Answer: C
To solve max-weight independent set on a tree, use DP with DFS:

 Include node → skip children


 Exclude node → choose best of children

10. Strassen’s Algorithm


✅ Correct Answer: C – Divide and Conquer
Strassen reduces matrix multiplication from 8 subproblems to 7 with additional
combination logic. This is a classic divide-and-conquer approach.
