3 - Algorithm Design Techniques
1. Brute-Force
While not discussed under a specific "brute-force" heading, this technique is
implicitly covered in many basic algorithm examples (e.g., exhaustive search,
naive string matching).
Relevant Section:
o Chapter 32: String Matching — Naive string-matching algorithm
o Pages: Approx. 980–985
🧮 2. Greedy Algorithms
Chapter 16: Greedy Algorithms
Pages: 417–450
Topics Covered:
o Activity-selection problem
o Greedy-choice property and optimal substructure
o Huffman coding
3. Recursion
Recursion is a foundational concept discussed throughout the book, especially
in:
o Chapter 2: Getting Started (recursive insertion sort)
o Chapter 4: Divide-and-Conquer (recursive recurrence solving)
o Appendix A — Recursive mathematical techniques
Pages:
o Chapter 2: Pages 15–30
o Chapter 4: Pages 97–126
o Appendix A: Pages A1–A17
4. Divide-and-Conquer
Chapter 4: Divide-and-Conquer
Pages: 97–126
5. Dynamic Programming
Chapter 15: Dynamic Programming
Pages: 374–416
Topics Covered:
o Rod Cutting
o Matrix Chain Multiplication
o Longest Common Subsequence
o Optimal BSTs
✅ 1. Brute-Force Technique
Definition:
Brute-force (also called exhaustive search) solves problems by trying all possible
solutions and selecting the best.
🔍 Example Problem 1:
Problem: Given an array of integers, find all pairs that sum to a target value k.
Brute-force Approach:
for i in range(len(arr)):
    for j in range(i + 1, len(arr)):
        if arr[i] + arr[j] == k:
            print((arr[i], arr[j]))
Time Complexity: O(n²)
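Wrapped up as a runnable function (the name `pairs_with_sum` and the list return value are illustrative choices, not from the text):

```python
def pairs_with_sum(arr, k):
    # Brute force: try every unordered pair, O(n^2) comparisons.
    result = []
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            if arr[i] + arr[j] == k:
                result.append((arr[i], arr[j]))
    return result
```

For example, `pairs_with_sum([1, 2, 3, 4, 5], 6)` yields `[(1, 5), (2, 4)]`.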
✅ Practice Question 1:
Q: How many of the non-empty subsequences of "abc" are palindromes?
Answer:
Subsequences = ["a", "b", "c", "ab", "ac", "bc", "abc"]
Palindromes = "a", "b", "c"
✅ Count = 3
Problem:
Given a list of cities and the distances between each pair, find the shortest possible
route that visits each city once and returns to the origin.
Cities = A, B, C, D
Distance matrix (note: not symmetric in this example):
     A   B   C   D
A    0   2   9  10
B    1   0   6   4
C   15   7   0   8
D    6   3  12   0
Brute-force Solution:
1. Generate all permutations of the cities (excluding the starting city A).
2. Compute total round-trip distance for each permutation.
3. Select the minimum.
min_cost = float('inf')
best_path = []
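The three steps above can be sketched in full (a minimal brute-force implementation over the example matrix; city A is index 0):

```python
from itertools import permutations

# Distance matrix from the example above (rows/columns in order A, B, C, D)
dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
n = len(dist)

min_cost = float('inf')
best_path = []
for perm in permutations(range(1, n)):          # fix A (index 0) as the start
    route = (0,) + perm + (0,)                  # round trip back to A
    cost = sum(dist[route[i]][route[i + 1]] for i in range(n))
    if cost < min_cost:
        min_cost, best_path = cost, list(route)
```

On this matrix the best round trip is A → C → D → B → A with cost 21.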
Problem:
Given a set of integers, determine if there exists a subset whose sum is exactly a target
value T.
Brute-force Approach: enumerate all 2^n subsets and test whether any of them sums to T.
Time Complexity: O(2^n · n)
Solution (Brute-force):
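A minimal brute-force sketch (the helper name `subset_sum` is illustrative); it enumerates subsets by size using `itertools.combinations`:

```python
from itertools import combinations

def subset_sum(nums, target):
    # O(2^n) subsets, each summed in O(n) time.
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)      # a witness subset
    return None                         # no subset sums to target
```

For the running example, `subset_sum([3, 34, 4, 12, 5, 2], 9)` returns `[4, 5]`.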
Feature: Comment
🔍 Simple implementation: easy to code and reason about
Inefficient for large input: exponential or quadratic time
✅ Guarantees correctness: always finds the optimal answer
Often a first step: used as a baseline for improvement
Given n cities, and a symmetric matrix of distances between each pair, find the shortest
round-trip that visits each city exactly once and returns to the starting point.
Cities: A, B, C, D
Step-by-step:
BCD
BDC
CBD
CDB
DBC
DCB
Time Complexity: O(n!)
Thus:
T(n) = (n − 1)! · O(n) = O(n!)
Given a set of n integers and a target sum T, determine whether there exists a subset
that sums to T.
Example:
Set = [3, 34, 4, 12, 5, 2], T = 9
Step-by-step:
Example:
Subsets of [3, 4, 5]
→ [], [3], [4], [5], [3,4], [3,5], [4,5], [3,4,5]
→ Total: 2^3 = 8 subsets
There are 2^n subsets, and each subset requires up to O(n) time to sum.
Time Complexity: O(2^n · n)
Given text of length n, and pattern of length m, determine if the pattern exists in the
text.
Example:
Text = abcabcabc, Pattern = abc
n = 9, m = 3
Step-by-step:
✅ MATLAB Code:
function naive_string_match(T, P)
    n = length(T);   % length of text
    m = length(P);   % length of pattern
    for i = 1:(n - m + 1)
        % Extract substring of length m and compare it to the pattern
        window = T(i : i + m - 1);
        if strcmp(window, P)
            fprintf('Pattern found at shift %d', i);
            fprintf('\n');
        end
    end
end
🧮 Example Usage:
T = 'abcabcabc';
P = 'abc';
naive_string_match(T, P);
Output: the pattern is found at shifts 1, 4, and 7.
Time Complexity: O((n − m + 1) · m) ≈ O(n · m)
🔍 Step-by-Step Explanation
Recall the Naive String Matching Time:
T(n) = (n − m + 1) · m
Asymptotic Approximation
We now want to simplify this expression for Big-O notation.
Case 1: When m ≪ n
Let’s say:
n = 1,000,000
m = 10
Then:
(n − m + 1) · m = (1,000,000 − 10 + 1) · 10 ≈ n · m
Case 2: When m ≈ n/2
Then:
(n − m + 1) · m = (n − n/2 + 1) · n/2 = (n/2 + 1) · n/2 ≈ n²/4
✅ Final Takeaway
In asymptotic notation, constants and small differences like +1 or -m are ignored:
O((n − m + 1) · m) = O(n · m)
Because:
n − m + 1 ≤ n
And we always keep the dominant terms in Big-O
Definition:
A Greedy Algorithm makes a series of locally optimal choices in the hope that this leads
to a globally optimal solution.
It doesn’t backtrack or revise previous decisions.
Key Characteristics:
Feature: Description
Greedy-Choice Property: a global optimum can be reached by choosing the best option at each step.
Optimal Substructure: the optimal solution of a problem contains optimal solutions to subproblems.
Given n activities with start and finish times, select the maximum number of activities
that don’t overlap.
Activities:
A1: (1, 4)
A2: (3, 5)
A3: (0, 6)
A4: (5, 7)
A5: (8, 9)
A6: (5, 9)
✅ Greedy Strategy: sort activities by finish time, then repeatedly pick the first activity whose start time is at or after the finish time of the last one selected.
Step-by-Step:
Time Complexity:
Sorting: O(n log n)
Selection: O(n)
Total: O(n log n)
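The greedy strategy above, as a short Python sketch (activity tuples are `(name, start, finish)`; the function name is illustrative):

```python
def select_activities(activities):
    # Sort by finish time, then take every activity that starts
    # at or after the finish time of the last one selected.
    chosen, last_finish = [], float('-inf')
    for name, start, finish in sorted(activities, key=lambda a: a[2]):
        if start >= last_finish:
            chosen.append(name)
            last_finish = finish
    return chosen
```

On the six activities above it selects A1, A4, and A5.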
Problem:
Given characters with frequencies, build a binary prefix code minimizing total weighted
path length.
Greedy Strategy: repeatedly merge the two lowest-frequency nodes into a single node until one tree remains.
Example:
Char Freq
a 5
b 9
c 12
d 13
e 16
f 45
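A compact sketch of the greedy merge using a min-heap; rather than building the tree explicitly, it tracks each character's code length (its depth), which is what the weighted path length depends on. The function name is illustrative.

```python
import heapq

def huffman_code_lengths(freqs):
    # Greedy: repeatedly merge the two lowest-frequency subtrees.
    heap = [(f, [c]) for c, f in freqs.items()]
    heapq.heapify(heap)
    depth = {c: 0 for c in freqs}
    while len(heap) > 1:
        f1, chars1 = heapq.heappop(heap)
        f2, chars2 = heapq.heappop(heap)
        for c in chars1 + chars2:
            depth[c] += 1               # merged characters move one level down
        heapq.heappush(heap, (f1 + f2, chars1 + chars2))
    return depth
```

For the table above this yields code lengths f = 1, c = d = e = 3, a = b = 4, matching the classic Huffman tree for these frequencies.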
🧮 Practice Problem 1:
Q: You have coins of denominations [1, 5, 10, 25].
You want to make 63¢ using the least number of coins.
What coins will the greedy algorithm pick?
Steps:
25 → pick 2 (50)
10 → pick 1 (60)
1 → pick 3 (63)
Note: Greedy works only if the coin system is canonical (it is not always optimal for arbitrary denominations).
✅ Summary Table
Problem Greedy Solution Exists?
Activity Selection ✅ Yes
Huffman Coding ✅ Yes
Coin Change (US system) ✅ Yes
Coin Change (non-canonical) ❌ No
Problem:
You have a knapsack of capacity W = 50, and items:
Note: In the fractional knapsack, each item is available once but may be divided into fractions.
✅ Greedy Strategy:
Choose items with highest value/weight ratio.
Item Value/Weight
A 6
B 5
C 4
🧮 Steps:
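The item values and weights are not shown above; assuming the classic data set that matches the listed ratios (A: value 60, weight 10; B: value 100, weight 20; C: value 120, weight 30), the greedy strategy can be sketched as:

```python
def fractional_knapsack(items, capacity):
    # items: (name, value, weight); take the highest value/weight ratio
    # first, splitting the last item fractionally if it does not fully fit.
    total = 0.0
    for name, value, weight in sorted(items, key=lambda x: x[1] / x[2], reverse=True):
        take = min(weight, capacity)
        total += value * take / weight
        capacity -= take
        if capacity == 0:
            break
    return total
```

With W = 50 this takes all of A and B and 2/3 of C, for a total value of 240.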
Problem:
You’re given n jobs with deadlines and profits.
Each job takes 1 unit of time. Maximize profit if only one job can be scheduled at a time.
✅ Greedy Strategy: sort jobs by profit in descending order, then place each job in the latest free time slot at or before its deadline.
🧮 Steps:
1. A → deadline 2 → slot 2
2. C → deadline 2 → slot 1
3. D → deadline 1 → slot full
4. B → deadline 1 → slot full
5. E → deadline 3 → slot 3
Problem:
Coin denominations = [1, 3, 4]
Target = 6
Greedy Picks:
4 → 1 coin
1 → 2 coins
Total = 3 coins
Optimal:
3 + 3 → 2 coins
Problem:
Given train arrival/departure times, find minimum platforms needed.
Arrival Departure
9:00 9:40
9:10 12:00
9:50 11:20
11:00 11:30
15:00 19:00
18:00 20:00
✅ Greedy Strategy: sort arrivals and departures separately; sweep through the events in time order, incrementing the platform count on each arrival and decrementing it on each departure. The maximum count reached is the answer.
🧮 Steps:
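The sweep described above, as a minimal sketch (times in minutes since midnight; the function name is illustrative):

```python
def min_platforms(arrivals, departures):
    # Sort both event lists; before each arrival, first release every
    # platform whose train has already departed.
    arrivals, departures = sorted(arrivals), sorted(departures)
    platforms = best = j = 0
    for t in arrivals:
        while j < len(departures) and departures[j] <= t:
            platforms -= 1              # a train left before this arrival
            j += 1
        platforms += 1                  # the arriving train needs a platform
        best = max(best, platforms)
    return best
```

For the timetable above the answer is 3 platforms (needed around 11:00).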
🧮 Graduate-Level Insight:
Problem Type: Greedy Works? (Notes)
Minimum Spanning Tree: ✅ (Kruskal’s, Prim’s algorithms)
Dijkstra (non-negative weights): ✅ (greedy with priority queue)
Bellman-Ford (negative weights): ❌ (needs dynamic programming)
Longest Path in a DAG: ❌ (needs DP / topological sort)
Let’s now take a detailed step-by-step walkthrough of Problem 2: Job Sequencing with Deadlines, including how the greedy algorithm schedules each job into a slot.
🧮 Input Table
Each job will try to go into the latest free slot before its deadline.
Slot: 1 2 3
Status: [ - , A , - ]
Slot 2 → ❌ taken by A
Try slot 1 → ✅ free → assign C
Slot: 1 2 3
Status: [ C , A , - ]
Slot 1 → ❌ full
Cannot schedule B
Selected Jobs: A, C, E
Total Profit: 100 + 27 + 15 = 142
Time Complexity:
Sorting: O(n log n)
Slot allocation (naïve): O(n²)
With disjoint-set (Union-Find): O(n log n)
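A sketch of the full greedy scheduler. The walkthrough gives the deadlines and the profits of A (100), C (27), and E (15); the profits of B and D are not stated, so the values 20 and 25 below are assumptions chosen only to reproduce the processing order A, C, D, B, E.

```python
def job_sequencing(jobs):
    # jobs: (name, deadline, profit); greedy by profit, each job placed
    # in the latest free slot at or before its deadline.
    max_deadline = max(d for _, d, _ in jobs)
    slots = [None] * (max_deadline + 1)         # slots[1..max_deadline]
    total = 0
    for name, deadline, profit in sorted(jobs, key=lambda j: -j[2]):
        for t in range(deadline, 0, -1):
            if slots[t] is None:
                slots[t] = name
                total += profit
                break
    return [s for s in slots[1:] if s is not None], total

schedule, profit = job_sequencing(
    [("A", 2, 100), ("B", 1, 20), ("C", 2, 27), ("D", 1, 25), ("E", 3, 15)])
```

This reproduces the walkthrough: jobs A, C, E are scheduled (slots 1–3 hold C, A, E) for a total profit of 142.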
What is Recursion?
Recursion is a technique in which a function calls itself on smaller instances of the same problem. It consists of:
o A base case that stops the recursion
o A recursive case that reduces the problem toward the base case
Example: the factorial function
n! = n · (n − 1) · (n − 2) ⋯ 1
Recursive definition:
factorial(n) = 1 if n = 0, otherwise n · factorial(n − 1)
✅ Pseudocode:
function factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)
function fib(n):
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)
🧮 Time Complexity:
T(n) = T(n − 1) + T(n − 2) ⇒ O(2^n)
Pseudocode:
Recursive Solution:
function hanoi(n, source, target, auxiliary):
    if n == 1:
        print "Move disk from", source, "to", target
    else:
        hanoi(n-1, source, auxiliary, target)
        print "Move disk from", source, "to", target
        hanoi(n-1, auxiliary, target, source)

Time Complexity: O(2^n)
Example:
digitSum(1234) = 4 + 3 + 2 + 1 = 10
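The digit-sum recursion can be written directly (a minimal sketch):

```python
def digit_sum(n):
    # Base case: a single digit sums to itself; otherwise peel off
    # the last digit and recurse on the rest.
    if n < 10:
        return n
    return n % 10 + digit_sum(n // 10)
```

For example, `digit_sum(1234)` is 10.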
Let's now explore more recursion practice problems, ranging from foundational to
advanced — each with a clear recursive formulation, pseudocode, and time complexity.
✨ Recursive Insight:
✅ Pseudocode:
function reverse(S):
    if length(S) == 1:
        return S
    return reverse(S[2..end]) + S[1]
Example:
reverse("ABCD") →
reverse("BCD") + "A" →
(reverse("CD") + "B") + "A" →
((reverse("D") + "C") + "B") + "A" →
"D" + "C" + "B" + "A" = "DCBA"
Task: You can climb 1 or 2 steps at a time. How many ways to reach the top of n
stairs?
✨ Recursive Insight:
Base Case:
o f(0) = 1, f(1) = 1
Recursive Case:
o f(n) = f(n − 1) + f(n − 2)
✅ Pseudocode:
function climbStairs(n):
    if n == 0 or n == 1:
        return 1
    return climbStairs(n-1) + climbStairs(n-2)
Time: O(2^n) naïve
Use memoization to reduce to O(n)
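The memoized version, sketched with `functools.lru_cache` as the cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def climb_stairs(n):
    # f(n) = f(n-1) + f(n-2); caching makes each n computed once, so O(n).
    if n <= 1:
        return 1
    return climb_stairs(n - 1) + climb_stairs(n - 2)
```

For example, `climb_stairs(5)` is 8.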
Problem 3: Check Palindrome (Recursive)
✨ Recursive Insight:
✅ Pseudocode:
Time: O(n)
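The pseudocode is not written out above; a minimal recursive sketch: compare the outer characters, then recurse on the inner substring.

```python
def is_palindrome(s):
    # Base case: empty or single-character strings are palindromes.
    if len(s) <= 1:
        return True
    return s[0] == s[-1] and is_palindrome(s[1:-1])
```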
✨ Recursive Insight:
✅ Pseudocode:
function countNodes(node):
    if node == null:
        return 0
    return 1 + countNodes(node.left) + countNodes(node.right)
✨ Recursive Insight:
✅ Pseudocode:
Time: O(n!)
Key Properties:
Step Description
Divide Break input into smaller parts
Conquer Solve each part independently
Combine Merge results into final answer
Classic Examples
✅ Pseudocode:
function mergeSort(A, low, high):
    if low < high:
        mid = (low + high) // 2
        mergeSort(A, low, mid)
        mergeSort(A, mid+1, high)
        merge(A, low, mid, high)
Time Complexity:
T(n) = 2T(n/2) + O(n) ⇒ O(n log n)
✅ Pseudocode:
Time: O(log n)
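The pseudocode is omitted above; a minimal recursive sketch of binary search on a sorted array:

```python
def binary_search(arr, target, lo=0, hi=None):
    # Each call halves the search range: O(log n).
    if hi is None:
        hi = len(arr) - 1
    if lo > hi:
        return -1                       # not found
    mid = (lo + hi) // 2
    if arr[mid] == target:
        return mid
    if arr[mid] < target:
        return binary_search(arr, target, mid + 1, hi)
    return binary_search(arr, target, lo, mid - 1)
```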
Total time:
T(n) = 2T(n/2) + O(n) ⇒ O(n log n)
Strassen’s matrix multiplication uses 7 recursive subcalls:
T(n) = 7T(n/2) + O(n²) ⇒ O(n^(log₂ 7)) ≈ O(n^2.81)
Sort by x and y
Recursively divide
Combine using a strip check in O(n)
Time: O(n log n)
Summary Table
Problem: Time Complexity (Divide/Combine Cost)
Merge Sort: O(n log n) (merge: O(n))
Binary Search: O(log n) (constant)
Maximum Subarray (D&C): O(n log n) (merge: O(n))
Matrix Multiplication (Strassen): O(n^2.81) (divide: 7 subcalls)
Closest Pair of Points: O(n log n) (strip merge: O(n))
🧮 Practice Problem
Problem: Count Inversions in an Array
Time: O(n log n)
Here's a set of practice problems that apply the Divide and Conquer paradigm — each
with step-by-step solutions, illustrations of the recursive process, and time complexity
analysis.
Goal: Find both the min and max using the minimum number of comparisons.
Time Complexity:
T(n) = 2T(n/2) + 2 ⇒ O(n)
While merging, count how many elements from the right array jump ahead of
left ones.
Pseudocode:
Time: O(n log n)
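The merge-based counting described above, sketched in Python (returns the sorted array together with the inversion count):

```python
def count_inversions(arr):
    # Merge sort; when a right element jumps ahead of the remaining left
    # elements, each of those left elements forms one inversion with it.
    if len(arr) <= 1:
        return arr, 0
    mid = len(arr) // 2
    left, inv_left = count_inversions(arr[:mid])
    right, inv_right = count_inversions(arr[mid:])
    merged, inv = [], inv_left + inv_right
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
            inv += len(left) - i        # right element jumped ahead
    merged += left[i:] + right[j:]
    return merged, inv
```

For example, `count_inversions([2, 4, 1, 3, 5])` reports 3 inversions: (2,1), (4,1), (4,3).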
Find the element that appears more than n/2 times in an array.
🧮 Time Complexity:
T(n) = 2T(n/2) + O(n) ⇒ O(n log n)
Average Time: O(n)
Worst-case: O(n²)
✅ Strategy:
Time: O(n log n)
🧮 Summary Table
Problem Name: Approach (Time Complexity)
Min & Max: divide into pairs (O(n))
Count Inversions: merge-sort based (O(n log n))
Majority Element: recursive checking (O(n log n))
k-th Smallest (Select): QuickSelect (avg. O(n))
Closest Pair of Points: geometric divide (O(n log n))
Dynamic Programming solves complex problems by breaking them down into simpler
overlapping subproblems, and solving each only once, storing the results.
Key Concepts
Concept: Description
Optimal Substructure: an optimal solution to the problem contains optimal solutions to subproblems
Overlapping Subproblems: subproblems are solved multiple times
Memoization (Top-Down): recursion plus caching of results
Tabulation (Bottom-Up): iterative; build a table from the smallest cases
function fib(n):
    if n <= 1: return n
    return fib(n-1) + fib(n-2)

Time: O(2^n)
✅ DP with Memoization:
Time: O(n)
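The memoized variant is not written out above; a minimal sketch:

```python
def fib(n, memo=None):
    # Top-down DP: each subproblem is computed once and cached.
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```

Thanks to the cache, even `fib(50)` returns instantly instead of taking exponential time.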
✅ DP with Tabulation:
function fib(n):
    dp = array of size n+1
    dp[0] = 0
    dp[1] = 1
    for i = 2 to n:
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]
DP Recurrence (0/1 Knapsack):
dp[i][w] = dp[i−1][w] if w[i] > w
dp[i][w] = max(dp[i−1][w], dp[i−1][w − w[i]] + v[i]) otherwise
Time: O(n · W)
Recurrence:
c[i][j] = 0 if i = 0 or j = 0
c[i][j] = c[i−1][j−1] + 1 if A[i] = B[j]
c[i][j] = max(c[i−1][j], c[i][j−1]) otherwise
Time: O(m · n)
3. Matrix Chain Multiplication
Recurrence:
m[i][j] = min over i ≤ k < j of (m[i][k] + m[k+1][j] + p[i−1] · p[k] · p[j])
Time: O(n³)
Recurrence:
d[i][j] = d[i−1][j−1] if A[i] = B[j]
d[i][j] = 1 + min(d[i−1][j], d[i][j−1], d[i−1][j−1]) otherwise
Time: O(m · n)
Basic DP:
dp[i] = 1 + max(dp[j] for all j < i with a[j] < a[i])
Time: O(n²), or O(n log n) with binary search
Summary Table
Problem: Time Complexity (Approach)
0/1 Knapsack: O(n · W) (DP table)
Longest Common Subsequence: O(m · n) (2D table)
Matrix Chain Multiplication: O(n³) (DP on intervals)
Edit Distance: O(m · n) (2D table)
Longest Increasing Subseq.: O(n²) (1D DP array)
Here's a set of Dynamic Programming (DP) practice problems — one for each classic DP
type — with step-by-step solutions and time complexities.
Given n items, each with value v[i] and weight w[i], and a knapsack capacity W, find
the maximum total value of items that fit in the knapsack.
Example:
Solution:
Let dp[i][j] = max value using first i items and total weight j
Initialize dp[0][*] = 0
Final Answer:
Time: O(n · W)
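A sketch of the table fill, using the space-optimized 1D form (capacities iterated downward so each item is used at most once). The item data below is an assumed example, since the problem's numbers are not shown.

```python
def knapsack(values, weights, W):
    # dp[j] = best value achievable with capacity j.
    dp = [0] * (W + 1)
    for v, w in zip(values, weights):
        for j in range(W, w - 1, -1):   # descending: 0/1, not unbounded
            dp[j] = max(dp[j], dp[j - w] + v)
    return dp[W]
```

With values [60, 100, 120], weights [10, 20, 30], and W = 50, the answer is 220 (take the last two items).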
Given two strings A and B, find the length of the longest subsequence common to
both.
Example:
A = "ABCBDAB"
B = "BDCABA"
Solution:
if A[i-1] == B[j-1]:
    dp[i][j] = dp[i-1][j-1] + 1
else:
    dp[i][j] = max(dp[i-1][j], dp[i][j-1])
Final Answer: LCS length = 4 (one longest common subsequence is "BCBA")
Time: O(m · n)
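The full table fill for this example, as a runnable sketch:

```python
def lcs_length(A, B):
    # dp[i][j] = length of the LCS of A[:i] and B[:j].
    m, n = len(A), len(B)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if A[i - 1] == B[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

For example, `lcs_length("ABCBDAB", "BDCABA")` is 4.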
Solution:
Final Answer:
Time: O(n³)
Solution:
Final Answer:
Minimum operations = 5
Time: O(m · n)
Problem 5: Longest Increasing Subsequence (LIS)
Find the LIS length in [10, 22, 9, 33, 21, 50, 41, 60]
Solution:
Final Answer: LIS length = 5 (e.g., 10, 22, 33, 50, 60)
Time: O(n²), or O(n log n) with binary search
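The quadratic DP, as a sketch:

```python
def lis_length(a):
    # dp[i] = length of the longest increasing subsequence ending at a[i].
    dp = [1] * len(a)
    for i in range(len(a)):
        for j in range(i):
            if a[j] < a[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp, default=0)
```

For example, `lis_length([10, 22, 9, 33, 21, 50, 41, 60])` is 5.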
🧮 Summary Table
Given n cities and a distance matrix dist[i][j], find the shortest path that visits all cities
exactly once and returns to the start.
🧮 DP State:
dp[S][i] = length of the shortest path that starts at the origin, visits every city in the set S, and ends at city i
Transition:
dp[S][i] = min over j ∈ S, j ≠ i of (dp[S ∖ {i}][j] + dist[j][i])
Time: O(n² · 2^n)
✅ Used in:
Combinatorial optimization
State compression
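The state and transition above are the Held–Karp algorithm; a compact sketch over the 4-city matrix used earlier (city 0 is the start):

```python
def tsp(dist):
    # dp[mask][i] = min cost of a path that starts at city 0, visits
    # exactly the cities in mask, and ends at city i.
    n = len(dist)
    INF = float('inf')
    dp = [[INF] * n for _ in range(1 << n)]
    dp[1][0] = 0                                # only city 0 visited
    for mask in range(1 << n):
        for i in range(n):
            if dp[mask][i] == INF or not (mask >> i) & 1:
                continue
            for j in range(n):
                if (mask >> j) & 1:
                    continue                    # j already visited
                nmask = mask | (1 << j)
                dp[nmask][j] = min(dp[nmask][j], dp[mask][i] + dist[i][j])
    full = (1 << n) - 1
    return min(dp[full][i] + dist[i][0] for i in range(1, n))
```

On the matrix from the brute-force section this returns 21, agreeing with the exhaustive search.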
A car travels from point A to B with a certain fuel tank capacity. Given fuel stations on
the path, return the minimum number of stops to reach the destination.
DP Idea:
Let dp[i] = max distance reachable with i refuels
Time: O(n²)
🧮 3. Palindrome Partitioning
DP State: dp[i] = minimum number of cuts needed to partition the prefix s[1..i] into palindromes
Time: O(n²)
Given a tree where each node has a value, select a subset of non-adjacent nodes
with the maximum sum.
DP States: for each node v, dp[v][0] = best sum in v’s subtree with v excluded, and dp[v][1] = best sum with v included
Time: O(n)
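A sketch of the two-state tree DP (the tree here is given as a parent-to-children dictionary; the names and input format are illustrative):

```python
def max_independent_set(values, children, root=0):
    # dp(v) returns (best excluding v, best including v) for v's subtree.
    def dp(v):
        exclude, include = 0, values[v]
        for c in children.get(v, []):
            ex_c, in_c = dp(c)
            exclude += max(ex_c, in_c)  # child is free to go either way
            include += ex_c             # v taken: children must be excluded
        return exclude, include
    return max(dp(root))
```

For values [10, 5, 8, 12] with edges 0→{1, 2} and 1→{3}, the best sum is 22 (nodes 0 and 3).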
Count how many numbers ≤ N have no two adjacent digits equal, or satisfy some other digit-level condition.
DP State: dp[pos][tight][prev_digit]
🧮 Summary Table
Problem: Technique (Time Complexity)
Traveling Salesman Problem (TSP): Bitmask DP (O(n² · 2^n))
Minimum Refueling Stops: DP + Greedy (O(n²))
Palindrome Partitioning: 2D DP (O(n²))
Max Weight Independent Set (Tree): Tree DP, 2 states (O(n))
Digit DP: DP on digits (O(d · 10))
Problem: Given an n x n distance matrix dist, find the minimum cost to visit all cities
exactly once and return to the starting city.
DP State: dp[mask][i] = minimum cost to start at city 0, visit exactly the cities in mask, and end at city i
Transition: dp[mask | (1 << j)][j] = min(dp[mask | (1 << j)][j], dp[mask][i] + dist[i][j]) for each unvisited j
Final Answer: min over i of dp[full_mask][i] + dist[i][0]
Problem: Given target distance target, initial fuel, and gas stations, find the minimum
number of refueling stops.
🧮 3. Palindrome Partitioning
Transition: dp[i] = min over j < i of (dp[j] + 1) whenever s[j+1..i] is a palindrome
Problem: Given a tree with node weights, find the maximum sum of non-adjacent
nodes.
Transition: include(v) = value[v] + Σ exclude(c) over children c; exclude(v) = Σ max(include(c), exclude(c))
🧮 5. Digit DP
Problem: Count numbers <= N where no two adjacent digits are equal.
DP State: dp[pos][tight][prev_digit]
Transition:
res = 0
max_digit = digits[pos] if tight else 9
for d in range(0, max_digit + 1):
    if d != prev_digit:
        res += dfs(pos + 1, tight and d == max_digit, d)
memo[pos][tight][prev_digit] = res
return res
Brute Force
Greedy Algorithms
Recursion
Dynamic Programming
Brute Force
21. What is the worst-case time complexity of the brute-force string matching
algorithm?
A) O(log n)
B) O((n - m + 1) * m) ✅
C) O(n²)
D) O(m log n)
22. What is a brute-force algorithm guaranteed to examine?
A) A solution quickly
B) A solution using least memory
C) All possible solutions (if designed that way) ✅
D) An approximate solution
Greedy Algorithms
23. Which of the following is not typically solved using a greedy strategy?
A) Huffman coding
B) Fractional knapsack
C) 0/1 knapsack ✅
D) Activity selection
24. What must be true about a problem for greedy to give the optimal result?
A) Sorted input
B) No overlapping subproblems
C) Greedy-choice property and optimal substructure ✅
D) Strictly increasing input
A) Brute-force
B) Divide and Conquer
C) Greedy ✅
D) Dynamic Programming
Recursion
A) It allows sorting
B) It stops infinite calls ✅
C) It speeds up recursion
D) It enables greedy approach
A) Linear search
B) Tower of Hanoi ✅
C) Matrix multiplication
D) DFS in tree
A) O(n log n) ✅
B) O(n²)
C) O(n)
D) O(log n)
A) Constant time
B) Linear time (depends on problem) ✅
C) Exponential time
D) Logarithmic time
Dynamic Programming
31. What is the main difference between divide and conquer and dynamic
programming?
A) Recursion depth
B) Use of arrays
C) Overlapping subproblems in DP ✅
D) Time complexity
32. What is the time complexity of the standard DP solution to 0/1 knapsack?
A) O(n)
B) O(2^n)
C) O(n log W)
D) O(n * W) ✅
Here's a comprehensive Graduate-Level Advanced Quiz Set on algorithm design techniques, including challenging True/False and Multiple Choice Questions (MCQs). Each question includes the correct answer and, often, an explanation suitable for graduate-level understanding.
1. In the Traveling Salesman Problem (TSP), what does the bitmask dp[mask][i]
represent?
A) T(n) = T(n-1) + n
B) T(n) = 2T(n/2) + O(n) ✅
C) T(n) = T(n/2) + T(n-1)
D) T(n) = T(n) + T(1)
6. You are solving a problem with n stages and at each stage k decisions.
What is the time complexity using top-down DP with memoization
(assuming constant cost per subproblem)?
A) O(nk) ✅
B) O(n + k)
C) O(n log k)
D) O(n²k²)
9. Which strategy would best suit solving the problem: 'Given a tree, find the
maximum-weight independent set'?
A) Brute-force
B) Greedy
C) Tree-based dynamic programming (DFS) ✅
D) Bitmask DP
A) Greedy
B) DP
C) Divide and Conquer ✅
D) Backtracking
T(n) = aT(n/b) + f(n)
2. ❌ False
Even with optimal substructure, greedy algorithms require the greedy-choice
property to guarantee correctness. Without it, they may fail to find the optimal
solution.
3. ❌ False
DP can help in pseudo-polynomial time (like in 0/1 Knapsack) but cannot solve
NP-complete problems in polynomial time unless P = NP.
4. ❌ False
The recursive tree method accounts for varying work at each level. For example,
merge sort does O(n) work at each level even though the subproblem size halves.
5. ❌ False
Problems like Longest Common Subsequence benefit from DP due to
overlapping subproblems. Divide-and-conquer won't reuse solutions, making it
inefficient.
6. ✅ True
In digit DP, tight ensures that we don’t generate numbers greater than the input
number, thus constraining recursion.
7. ✅ True
Bitmask DP is ideal for problems with subset enumeration, like TSP or set cover.
Bitmasking helps reduce memory and improve clarity.
✅ Correct Answer: C
In dp[mask][i], mask encodes the cities visited, and i is the last city. This allows computing
the minimum cost path ending at city i with a known set of visited cities.
2. DP Inapplicable
✅ Correct Answer: B
Merge sort splits into 2 halves (T(n/2)) and merges in linear time (O(n)), giving:
( ) = 2 ( /2) + ( )
✅ Correct Answer: C
The greedy method works optimally only when the problem has the greedy-choice property and optimal substructure.
5. DP Prerequisites
✅ Correct Answer: C
Dynamic programming is useful when:
Subproblems overlap
Solutions can be reused
6. Top-down DP Time
7. Space Optimization
✅ Correct Answer: B
In bottom-up DP, if only the last row/state is needed (e.g., Fibonacci), we can reduce
space from O(n) to O(1) or from O(n×k) to O(k).
✅ Correct Answer: A
Digit DP is used for counting numbers under a certain limit that satisfy digit-level
conditions, such as “no adjacent equal digits”.
9. Tree Problem
✅ Correct Answer: C
To solve max-weight independent set on a tree, use DP with DFS: