Algorithms and Complexity: Unit I–V Solutions
Unit I
1. Euclid’s Algorithm Pseudocode
Euclid’s algorithm computes the greatest common divisor (gcd) of two integers by repeated remainder
operations. A standard pseudocode (from Wikipedia) is:
function gcd(a, b):
    while b ≠ 0:
        t := b
        b := a mod b
        a := t
    return a
This returns the gcd of a and b [1]. In each iteration the larger argument is replaced by its remainder modulo the smaller, so the pair of numbers shrinks until one becomes zero; the value remaining in a is then the gcd. (Euclid's algorithm runs in roughly $O(\log \min(a,b))$ iterations [1].)
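For concreteness, here is a minimal runnable Python version, a direct transcription of the pseudocode above:

def gcd(a, b):
    # Iterative Euclid: repeatedly replace (a, b) with (b, a mod b).
    while b != 0:
        a, b = b, a % b   # the pair shrinks until the remainder hits zero
    return a

print(gcd(252, 198))   # 18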
2. Asymptotic Notation for Algorithm Complexity
Asymptotic notations such as Big-O, Big-Theta, and Big-Omega provide a concise way to express how an algorithm's time or space requirements grow with input size [2, 3]. In this framework one ignores constant factors and lower-order terms, focusing on the leading growth rate. Big-O notation, $O(f(n))$, denotes an upper bound on running time (worst-case complexity) [3]. For example, saying an algorithm is $O(n^2)$ means its running time grows no faster than a constant times $n^2$ for large $n$. Big-Omega notation, $\Omega(f(n))$, denotes a lower bound (best-case growth) [4], and Big-Theta, $\Theta(f(n))$, denotes a tight bound (the running time is bounded both above and below by constant multiples of $f(n)$) [5]. In practice, Big-O is most commonly used to classify algorithms by worst-case growth. For instance, insertion sort has worst-case time $O(n^2)$ [3], meaning its running time grows at most quadratically as the input grows, even though its best case is linear ($\Omega(n)$). These notations let us compare algorithms abstractly by their large-$n$ behavior [2, 3].
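For reference, the standard formal definitions behind these notations are:

$f(n) = O(g(n))$ iff there exist constants $c > 0$ and $n_0$ such that $0 \le f(n) \le c\,g(n)$ for all $n \ge n_0$;
$f(n) = \Omega(g(n))$ iff there exist constants $c > 0$ and $n_0$ such that $f(n) \ge c\,g(n)$ for all $n \ge n_0$;
$f(n) = \Theta(g(n))$ iff both $f(n) = O(g(n))$ and $f(n) = \Omega(g(n))$ hold.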
Unit II
3. Brute-Force String-Matching Algorithm
The brute-force pattern-matching method checks the pattern P of length m at each possible position in the
text T of length n. A typical pseudocode is:
function bruteForceMatch(T, P):
    n := length(T), m := length(P)
    for i from 0 to n - m:
        j := 0
        while j < m and T[i+j] == P[j]:
            j := j + 1
        if j == m:
            report a match at position i
This tries to align P starting at each index i of T and checks character by character [6]. For example, matching "ABA" in "ABABABA" checks the alignments at positions 0, 1, 2, and so on. In the worst case (e.g. T = "AAAA…A", P = "AAA…B"), the inner loop compares almost all m characters before each mismatch, and the alignment advances by only one position, leading to $O(m\,(n-m+1)) = O(nm)$ comparisons [7, 8]. Thus the worst-case time complexity of brute-force matching is $O(nm)$ [7]. In average "random" text, mismatches tend to occur early, so the expected cost is about linear, $O(n+m)$ [8].
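A direct Python rendering of this pseudocode (a sketch; indices are 0-based):

def brute_force_match(text, pattern):
    # Try every alignment i of the pattern and compare character by character.
    n, m = len(text), len(pattern)
    matches = []
    for i in range(n - m + 1):
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1
        if j == m:
            matches.append(i)   # the full pattern matched at shift i
    return matches

print(brute_force_match("ABABABA", "ABA"))   # [0, 2, 4]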
4. Traveling Salesperson via Exhaustive Search
The Traveling Salesperson Problem (TSP) asks for the shortest Hamiltonian circuit through n cities with
given inter-city distances. An exhaustive search (brute-force) approach generates all possible tours and
selects the one of minimum total length. Concretely, one fixes a starting city and considers all permutations
of the remaining cities as the visit order, computing each tour's total distance and choosing the smallest [9]. For example, consider four cities {A, B, C, D} with symmetric pairwise distances given by:
A–B=4, A–C=2, A–D=1, B–C=13, B–D=9, C–D=8.
We list all possible tours starting and ending at A:
- A→B→C→D→A: cost = 4 + 13 + 8 + 1 = 26
- A→B→D→C→A: cost = 4 + 9 + 8 + 2 = 23
- A→C→B→D→A: cost = 2 + 13 + 9 + 1 = 25
- A→C→D→B→A: cost = 2 + 8 + 9 + 4 = 23
- A→D→B→C→A: cost = 1 + 9 + 13 + 2 = 25
- A→D→C→B→A: cost = 1 + 8 + 13 + 4 = 26.
By computing each, we see the minimum tour length is 23, achieved by A–B–D–C–A (and equivalently by its reverse A–C–D–B–A) [10]. In general, brute-force TSP examines all $(n-1)!/2$ distinct tours (for symmetric distances) and picks the shortest [9]. The example above illustrates that exhaustive search finds the optimal tour (length 23, tour A–B–D–C–A) by checking all possibilities [10].
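A brute-force sketch in Python using itertools.permutations; the dict encoding of the distance table is our own illustration:

from itertools import permutations

def tsp_brute_force(dist, start):
    # Exhaustive search: fix the start city and try every ordering of the rest.
    cities = sorted({x for pair in dist for x in pair} - {start})
    d = lambda u, v: dist[(u, v)] if (u, v) in dist else dist[(v, u)]
    best_cost, best_tour = float("inf"), None
    for perm in permutations(cities):
        tour = (start,) + perm + (start,)
        cost = sum(d(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
        if cost < best_cost:                    # keep the cheapest tour seen
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

dist = {('A', 'B'): 4, ('A', 'C'): 2, ('A', 'D'): 1,
        ('B', 'C'): 13, ('B', 'D'): 9, ('C', 'D'): 8}
print(tsp_brute_force(dist, 'A'))   # (23, ('A', 'B', 'D', 'C', 'A'))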
Unit III
5. Coin-Change by Dynamic Programming
The coin-change problem (e.g. count the ways or minimize the number of coins) can be solved by dynamic programming by breaking it into subproblems. For example, to count the number of ways to make amount N using given coin denominations, one builds a DP array of size N+1. Let dp[j] be the number of ways to make sum j. We initialize dp[0] = 1 (one way to make sum 0) and dp[j] = 0 for j > 0. Then for each coin value c, and for each sum j from c to N, we add dp[j-c] to dp[j]. In pseudocode (following Simplilearn):
• Initialize array dp[0..N] with dp[0] = 1 and all other entries 0.
• For each coin value c:
    for j from c to N:
        dp[j] = dp[j] + dp[j-c]
• Return dp[N] [11].
This way, dp[j] accumulates all ways to form j using the available coins. Equivalently, one can fill a 2D table whose rows index coins and whose columns index sums. The recurrence is: if the current coin's value is larger than the sum, we inherit the value from the row above; otherwise dp[i][j] = dp[i-1][j] + dp[i][j - coinValue] [12]. For example, with coins = {1, 2, 3} and target 4, the DP computes dp[4] = 4, corresponding to the combinations [1+1+1+1], [1+1+2], [1+3], and [2+2]. The DP solution runs in $O(N\,k)$ time for k denominations and correctly counts the ways by combining smaller subproblems [11, 12].
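A minimal Python sketch of the 1D version:

def count_ways(coins, target):
    # dp[j] = number of ways to form sum j with the coins seen so far.
    # Iterating coins in the outer loop counts combinations, not permutations.
    dp = [0] * (target + 1)
    dp[0] = 1                      # one way to make 0: use no coins
    for c in coins:
        for j in range(c, target + 1):
            dp[j] += dp[j - c]     # extend each way of making j - c with coin c
    return dp[target]

print(count_ways([1, 2, 3], 4))    # 4: [1+1+1+1], [1+1+2], [1+3], [2+2]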
6. Dijkstra’s Shortest-Path Algorithm (Greedy)
Dijkstra’s algorithm finds shortest paths from a source in a weighted graph with nonnegative edges using a
greedy strategy. The pseudocode (from Wikipedia) is:
function Dijkstra(Graph, source):
    // Initialization
    for each vertex v in Graph:
        dist[v] ← ∞
        prev[v] ← UNDEFINED
        add v to the set Q of unvisited vertices
    dist[source] ← 0
    // Main loop
    while Q is not empty:
        u ← vertex in Q with minimum dist[u]
        remove u from Q
        for each neighbor v of u:
            alt ← dist[u] + weight(u, v)
            if alt < dist[v]:
                dist[v] ← alt
                prev[v] ← u
    return dist[], prev[]
At each step the algorithm picks the unvisited vertex u with the smallest current distance from the source (the greedy choice) [13]. It then relaxes each edge (u, v): if going source→…→u→v is shorter than the known dist[v], it updates dist[v] = dist[u] + w(u, v) [14, 15]. This process repeats until all vertices are visited. The dist[] array then holds the shortest-path distances from the source, and the prev[] pointers trace the actual paths. (For example, on a sample graph with source A, Dijkstra sets dist[A] = 0 initially and then settles vertices in increasing order of distance until all are visited.) Dijkstra's method is greedy because it permanently fixes the next-closest vertex at each iteration [13, 14].
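A compact Python sketch using a binary heap (heapq) in place of a linear scan over Q; the sample graph g is our own illustration:

import heapq

def dijkstra(graph, source):
    # graph maps u -> list of (v, weight) pairs with nonnegative weights.
    dist = {v: float("inf") for v in graph}
    prev = {v: None for v in graph}
    dist[source] = 0
    pq = [(0, source)]                  # priority queue of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                    # stale entry: u was already settled
        for v, w in graph[u]:
            alt = d + w                 # relax edge (u, v)
            if alt < dist[v]:
                dist[v], prev[v] = alt, u
                heapq.heappush(pq, (alt, v))
    return dist, prev

g = {'A': [('B', 4), ('C', 1)], 'B': [('D', 1)],
     'C': [('B', 2), ('D', 5)], 'D': []}
print(dijkstra(g, 'A')[0])   # {'A': 0, 'B': 3, 'C': 1, 'D': 4}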
Unit IV
7. Steps of the Simplex Method
The simplex method solves a linear programming (LP) problem by iterating over corner solutions of the
feasible region. The basic steps are:
1. Convert to standard form. Rewrite the LP as a maximization with all constraints as equations by adding slack variables [16]. For example, each "≤" constraint $a_1x_1 + \dots + a_nx_n \le b$ becomes $a_1x_1 + \dots + a_nx_n + s = b$ with $s \ge 0$ [16].
2. Set up the initial tableau. Construct the simplex tableau with one row per constraint (including slack variables) and a bottom row for the (negated) objective function. This represents the initial basic feasible solution [17].
3. Identify the pivot column (entering variable). Look at the bottom (objective) row: if all coefficients of nonbasic variables are nonnegative, the current solution is optimal and we stop. Otherwise, choose a column with a negative coefficient (commonly the most negative); this is the variable that will enter the basis [18].
4. Identify the pivot row (leaving variable). For each row with a positive entry in the pivot column, compute the ratio of the right-hand side to that entry. The smallest ratio identifies the pivot row [19]. The variable currently basic in that row leaves the basis.
5. Pivot. Perform row operations to make the pivot element equal to 1 and to eliminate all other entries in the pivot column (i.e. divide the pivot row by the pivot element and use it to zero out the rest of that column). This yields a new basic feasible solution with one variable entering and one leaving [19].
6. Repeat. Return to step 3 with the new tableau and repeat until the objective row has no negative
entries (for maximization). At that point, the optimal solution has been reached.
These are the standard simplex iterations: convert to slack form, pick the most negative column in the objective row (pivot column) [18], find the limiting row by the minimum ratio test [19], pivot, and repeat. The method terminates with an optimal corner solution or determines that the LP is unbounded. A compact implementation of these steps is sketched below.
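The following Python sketch (using NumPy) implements the tableau iterations for a maximization problem in standard form with $b \ge 0$; it omits anti-cycling safeguards, so treat it as illustrative rather than production code:

import numpy as np

def simplex_max(c, A, b):
    # Maximize c@x subject to A@x <= b, x >= 0 (all entries of b nonnegative).
    m, n = A.shape
    # Tableau: [A | I | b] with the objective row [-c | 0 | 0] at the bottom.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -np.asarray(c, dtype=float)
    basis = list(range(n, n + m))          # slack variables start in the basis
    while True:
        col = int(np.argmin(T[-1, :-1]))   # entering column: most negative cost
        if T[-1, col] >= -1e-9:
            break                          # no negative entry left: optimal
        ratios = [T[i, -1] / T[i, col] if T[i, col] > 1e-9 else np.inf
                  for i in range(m)]
        row = int(np.argmin(ratios))       # leaving row: minimum ratio test
        if ratios[row] == np.inf:
            raise ValueError("LP is unbounded")
        T[row] /= T[row, col]              # scale so the pivot element is 1
        for i in range(m + 1):
            if i != row:
                T[i] -= T[i, col] * T[row]  # zero out the rest of the column
        basis[row] = col
    x = np.zeros(n)
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i, -1]
    return x, T[-1, -1]                    # optimal point and objective value

# Example: maximize 3x + 2y subject to x + y <= 4 and x + 3y <= 6.
x, z = simplex_max([3, 2], np.array([[1.0, 1.0], [1.0, 3.0]]), [4.0, 6.0])
print(x, z)   # [4. 0.] 12.0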
8. Ford–Fulkerson (Max Flow / Min Cut) Algorithm
The Ford–Fulkerson method (in its shortest-augmenting-path variant, e.g. Edmonds–Karp) computes a
maximum flow in a network by repeatedly finding augmenting paths in the residual graph. In outline:
• Initialize the flow to 0 on all edges.
• Construct the residual graph $G_f$ (residual capacity = capacity minus current flow; initially the same as the original capacities).
• Augmenting-path loop: as long as there is a path from source s to sink t in the residual graph $G_f$:
  • Find one augmenting path $p$ (e.g. by BFS for the shortest path).
  • Compute the bottleneck capacity $\mathrm{res\_cap}(p) = \min_{(u,v)\in p} \mathrm{res\_cap}(u,v)$.
  • Augment the flow: increase the flow along path $p$ by $\mathrm{res\_cap}(p)$ (push flow on forward edges and subtract it on reverse edges).
  • Update the residual capacities accordingly (reduce forward residuals by the bottleneck and increase reverse residuals).
In pseudocode form (from [Link]) this is:
flow = 0
for each edge (u,v): flow(u,v) = 0
while there exists a path p from s to t in G_f:
    r = min{ residual_capacity(u,v) : (u,v) in p }
    flow = flow + r
    for each edge (u,v) in p:
        if (u,v) is a forward edge: flow(u,v) += r
        else: flow(v,u) -= r
return flow
The loop continues until no augmenting path exists, at which point the flow is maximum by the Max-Flow Min-Cut Theorem [20, 21]. The set of vertices reachable from s in the final residual graph defines the minimum s–t cut.
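A runnable Edmonds–Karp sketch in Python; the example network at the end is our own illustration:

from collections import deque

def max_flow(capacity, s, t):
    # capacity: dict of dicts of edge capacities. Build residual capacities,
    # including zero-capacity reverse edges for pushing flow back.
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path s -> t in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, cap in res[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow             # no augmenting path: the flow is maximum
        # Find the bottleneck residual capacity along the path.
        bottleneck, v = float("inf"), t
        while parent[v] is not None:
            bottleneck = min(bottleneck, res[parent[v]][v])
            v = parent[v]
        # Augment: reduce forward residuals, increase reverse residuals.
        v = t
        while parent[v] is not None:
            u = parent[v]
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
            v = u
        flow += bottleneck

cap = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}}
print(max_flow(cap, 's', 't'))   # 5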
Unit V
9. P, NP, and NP-Complete Problems
Complexity classes P, NP, and NP-Complete are central to theoretical computer science. Class P consists of all decision problems solvable in polynomial time by a deterministic algorithm; formally, an algorithm runs in polynomial time if its runtime is bounded by $O(n^k)$ for some constant $k$ [22]. Class NP consists of all decision problems whose solutions can be verified in polynomial time (equivalently, solved in polynomial time by a nondeterministic machine) [23]. Clearly $P \subseteq NP$: every problem with an efficient solution also has efficiently verifiable solutions.
An NP-hard problem is at least as "hard" as every problem in NP (informally, every NP problem can be reduced to it); it need not itself be in NP. An NP-complete problem is a problem in NP that is also NP-hard; such problems are among the hardest in NP [24, 25]. Equivalently, a problem is NP-complete if it is in NP and every NP problem can be transformed into it via a polynomial-time reduction [24]. A famous example is the Boolean satisfiability problem (SAT): the Cook–Levin theorem shows SAT is NP-complete, meaning any NP problem can be converted into a SAT instance [26]. If any NP-complete problem (such as SAT or 3-SAT) were solved in polynomial time, then $P = NP$. However, no polynomial-time algorithms are known for NP-complete problems, and the conjecture $P \neq NP$ (that some problems are verifiable much faster than they are solvable) remains a major open question [22, 24].
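To make "verifiable in polynomial time" concrete, here is a sketch of a SAT verifier: given a CNF formula and a candidate assignment (the certificate), it checks every clause in time linear in the formula size. The signed-integer encoding of literals is a common convention, used here purely for illustration:

def verify_sat(clauses, assignment):
    # Literal k stands for variable |k|, negated when k < 0. A clause is
    # satisfied if at least one literal matches the assignment; the formula
    # is satisfied if every clause is.
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)

# (x1 or not x2) and (x2 or x3), certificate x1=True, x2=False, x3=True.
cnf = [[1, -2], [2, 3]]
print(verify_sat(cnf, {1: True, 2: False, 3: True}))   # True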
10. N-Queen Problem via Backtracking
The N-Queens problem asks how to place $N$ queens on an $N \times N$ chessboard so that no two queens attack each other (no two share the same row, column, or diagonal). A classic solution uses backtracking. We place queens one row at a time and check for conflicts; if a placement leads to no safe move in a later row, we backtrack. In more detail [27]: start in row 1 and try placing a queen in each column; for each safe placement (no other queen in the same column or diagonal), mark it and recurse to the next row. If a row has no valid column, backtrack to the previous row and try the next column there. This exhaustive search prunes illegal branches early.
For example, when $N = 4$, the algorithm finds two solutions. One solution is the array [2, 4, 1, 3], meaning queens at positions (row 1, col 2), (row 2, col 4), (row 3, col 1), (row 4, col 3) [28]. The backtracking algorithm systematically finds these arrangements.
Figure: one of the solutions for 4-queens (columns [3, 1, 4, 2]); no two queens share a column or diagonal.
In summary, the backtracking approach explores each row in turn, placing a queen in a safe column and recursing, and uses backtracking to undo placements when a dead end is reached [27]. It eventually finds all valid configurations (for $N = 4$, these are [2, 4, 1, 3] and [3, 1, 4, 2] [28]). A Python sketch follows.
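A backtracking sketch in Python (one queen per row; columns are reported 1-based to match the arrays above):

def solve_n_queens(n):
    # cols[r] is the 0-based column of the queen already placed in row r.
    solutions, cols = [], []

    def safe(col):
        row = len(cols)
        # Conflict if some earlier queen shares the column or a diagonal.
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(row):
        if row == n:
            solutions.append([c + 1 for c in cols])   # record 1-based columns
            return
        for col in range(n):
            if safe(col):
                cols.append(col)
                place(row + 1)
                cols.pop()        # backtrack: undo the placement, try next col

    place(0)
    return solutions

print(solve_n_queens(4))   # [[2, 4, 1, 3], [3, 1, 4, 2]]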
Sources: Authoritative algorithm textbooks and references were used for definitions, pseudocode, and
complexity results (see citations above for each answer).
[1] Euclidean algorithm, Wikipedia. [Link]
[2]–[5] Types of Asymptotic Notations in Complexity Analysis of Algorithms, GeeksforGeeks. [Link]
[6] Brute force Pattern Matching, Data Structures Tutorial, Study Glance. [Link]
[7] [Link]
[8] algorithm - brute force string pattern matching average analysis, Stack Overflow. [Link]
[9], [10] Traveling Salesman Problem. [Link]
[11], [12] Coin Change Problem with Dynamic Programming: A Complete Guide, Simplilearn. [Link]
[13]–[15] Dijkstra's algorithm, Wikipedia. [Link]
[16]–[19] 4.2: Maximization By The Simplex Method, Mathematics LibreTexts. [Link]
[20], [21] Ford-Fulkerson Algorithm, Brilliant Math & Science Wiki. [Link]
[22]–[26] P versus NP problem, Wikipedia. [Link]
[27], [28] N Queen Problem, GeeksforGeeks. [Link]