MCS-211: Design and Analysis of Algorithms

Solved Assignment (Concise answers and workings)

Q1.
a) Design an efficient algorithm to list the primes in the range 501..2000.
Answer: Use the Sieve of Eratosthenes: create a boolean array A[0..2000], mark every
multiple of each prime as composite, then output the indices in 501..2000 that remain
unmarked. Time complexity: O(n log log n) with n = 2000. Space: O(n).
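The sieve described above can be sketched in Python as follows (function and variable
names are illustrative, not from the assignment):

```python
# Sieve of Eratosthenes, then filter to the requested range.
def primes_in_range(lo, hi):
    is_prime = [True] * (hi + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(hi ** 0.5) + 1):
        if is_prime[i]:
            # Mark every multiple of i (starting at i*i) as composite.
            for j in range(i * i, hi + 1, i):
                is_prime[j] = False
    return [n for n in range(lo, hi + 1) if is_prime[n]]

primes = primes_in_range(501, 2000)   # first prime is 503, last is 1999
```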

b) Differentiate between cubic-time and factorial-time algorithms with examples.


Cubic-time: running time Θ(n^3). Example: naive multiplication of two n x n matrices
using three nested loops.
Factorial-time: running time Θ(n!). Example: brute-force traveling salesman that
enumerates and checks all n! tours.
Cubic-time grows polynomially; factorial-time grows super-exponentially and is infeasible
even for modest n.

c) Algorithm to multiply two n x n square matrices (standard).


Pseudo:
for i in 1..n:
    for j in 1..n:
        C[i][j] = 0
        for k in 1..n:
            C[i][j] += A[i][k] * B[k][j]
Time complexity: Θ(n^3).
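A runnable version of the pseudocode (a sketch using 0-based Python lists):

```python
# Standard Θ(n^3) matrix multiplication of two n x n matrices.
def mat_mul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

C = mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]])   # [[19, 22], [43, 50]]
```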

d) Asymptotic bounds and Big-O/Θ example.


Given f(n) = 100 n^4 + 1000 n^3 + 100000.
As n→∞, dominant term is 100 n^4.
So: f(n) = Θ(n^4), f(n) = O(n^4), and f(n) = Ω(n^4).

e) Left-to-right binary exponentiation (Exponentiation by squaring).


Algorithm to compute x^e:
result = 1
for each bit of e from most-significant to least:
    result = result * result
    if current bit == 1:
        result = result * x
Example: compute 3^9.
Binary of 9 = 1001.
Start result=1.
bit1 (1): result=1*1=1; bit=1 => result *=3 -> 3
bit2 (0): result=3*3=9; bit=0 => unchanged
bit3 (0): result=9*9=81; bit=0 => unchanged
bit4 (1): result=81*81=6561; bit=1 => result *=3 -> 19683
So 3^9 = 19683. Complexity: O(log e) multiplications.
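The same left-to-right scheme as Python code (a minimal sketch; `bin(e)[2:]` yields the
bits of e from most significant to least):

```python
# Left-to-right binary exponentiation: square, then conditionally multiply.
def power(x, e):
    result = 1
    for bit in bin(e)[2:]:          # bits of e, most significant first
        result = result * result    # square
        if bit == '1':
            result = result * x     # multiply when the bit is set
    return result

power(3, 9)   # 19683, as in the worked example
```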

f) Bubble sort explanation.


Repeatedly compare adjacent pairs and swap if out of order. Best-case (already sorted):
Θ(n) if optimized with early-exit. Worst-case: Θ(n^2).
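A sketch of the early-exit variant described above:

```python
# Bubble sort: Θ(n^2) worst case, Θ(n) best case with the early-exit flag.
def bubble_sort(a):
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):          # last i items are already in place
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                     # no swaps: array already sorted
            break
    return a

bubble_sort([5, 1, 4, 2, 8])   # [1, 2, 4, 5, 8]
```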

g) Recurrence relations – Master’s method.


i) T(n) = 4 T(n/4) + n^1.
Here a=4, b=4 => n^{log_b a} = n^{log_4 4} = n^1.
Since f(n) = Θ(n^{log_b a} * log^k n) with k=0, T(n) = Θ(n^1 log n) = Θ(n log n).

ii) T(n) = 4 T(3n/4) + n^1.


This fits the Master method with a = 4 and b = 4/3 (subproblem size 3n/4 means b = 4/3).
Equivalently, by Akra–Bazzi, find p such that a*(c)^p = 1 with a = 4, c = 3/4:
4*(3/4)^p = 1 => (3/4)^p = 1/4 => p = ln(1/4)/ln(3/4) = ln4 / ln(4/3) ≈ 4.82.
Since f(n) = n^1 = O(n^{p - ε}) (because p > 1), the solution is T(n) = Θ(n^p) where
p = ln4 / ln(4/3) ≈ 4.82.
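The exponent p = ln4 / ln(4/3) can be checked numerically. Since every subproblem at a
given level of T(n) = 4 T(3n/4) + n has the same size, T(n) can be summed level by level
(a sketch; the estimate over a wide range of n smooths out rounding at the leaves):

```python
import math

def T(n):
    # Level-by-level sum of the recursion tree of T(n) = 4 T(3n/4) + n.
    total, size, count = 0.0, float(n), 1
    while size >= 1:
        total += count * size   # total work at this level
        size *= 0.75
        count *= 4
    return total + count        # constant work at each leaf

p_exact = math.log(4) / math.log(4 / 3)            # ≈ 4.8188
p_est = math.log(T(1e7) / T(1e3)) / math.log(1e4)  # empirical growth exponent
```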

Q2.
a) What is an Optimisation Problem? Greedy approach & fractional knapsack.
An optimisation problem seeks the best solution according to an objective under
constraints.
Greedy approach: make locally optimal choice hoping global optimality follows (works for
e.g., fractional knapsack, activity selection).

Fractional knapsack example (given):


Profits p = [30,16,18,20,10,7]
Weights w = [5,4,6,4,5,7]
Capacity = 20

Compute profit/weight ratios:


Item1: 30/5 = 6.0
Item2: 16/4 = 4.0
Item3: 18/6 = 3.0
Item4: 20/4 = 5.0
Item5: 10/5 = 2.0
Item6: 7/7 = 1.0

Order by ratio: Item1, Item4, Item2, Item3, Item5, Item6.


Select greedily:
Take Item1 (w=5,p=30) rem cap 15
Take Item4 (w=4,p=20) rem cap 11
Take Item2 (w=4,p=16) rem cap 7
Take Item3 (w=6,p=18) rem cap 1
Take 1/5 of Item5 (w=1,p=2)
Capacity filled exactly 20.
Total profit = 30+20+16+18+2 = 86 (optimal for fractional knapsack).
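The greedy selection above can be sketched as:

```python
# Greedy fractional knapsack: sort by profit/weight ratio, take items (or
# fractions) in that order until the capacity is filled.
def fractional_knapsack(profits, weights, capacity):
    items = sorted(zip(profits, weights),
                   key=lambda pw: pw[0] / pw[1], reverse=True)
    total = 0.0
    for p, w in items:
        if capacity == 0:
            break
        take = min(w, capacity)      # whole item, or the fraction that fits
        total += p * take / w
        capacity -= take
    return total

best = fractional_knapsack([30, 16, 18, 20, 10, 7], [5, 4, 6, 4, 5, 7], 20)
# best == 86.0, matching the hand computation
```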

b) Huffman coding (brief).


Given frequencies: a:5, b:25, c:10, d:15, e:8, f:7, g:30 (total 100).
Build the Huffman tree by repeatedly merging the two lowest frequencies. (The full tree
drawing is omitted here; the individual codes depend on tie-breaking, but the total
weighted code length does not.)
Result: a prefix-free optimal code with total weighted code length 257, i.e. an average
of 2.57 bits per symbol.
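The total weighted code length equals the sum of the internal-node weights created during
merging, which a min-heap computes directly (a sketch for the frequencies above):

```python
import heapq

def huffman_cost(freqs):
    # Each merge creates an internal node of weight a + b; every leaf's
    # frequency is counted once per level above it, i.e. once per code bit.
    heap = list(freqs)
    heapq.heapify(heap)
    cost = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        cost += a + b
        heapq.heappush(heap, a + b)
    return cost

cost = huffman_cost([5, 25, 10, 15, 8, 7, 30])   # a..g; cost == 257
```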

c) Merge procedure and complexity.


Merge two sorted halves into one sorted array in Θ(n) time.
Given array [19,18,16,12,11,10,9,8], recursive merge sort splits and sorts; final sorted
ascending: [8,9,10,11,12,16,18,19]. Merge sort complexity Θ(n log n).
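The merge procedure and the recursive sort can be sketched as:

```python
# Merge two sorted lists in Θ(n); merge sort is then Θ(n log n) overall.
def merge(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]   # append whichever half remains

def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

merge_sort([19, 18, 16, 12, 11, 10, 9, 8])
# [8, 9, 10, 11, 12, 16, 18, 19]
```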

d) Divide and conquer multiplication & binary search.


Karatsuba multiplication: splits numbers into halves, uses 3 recursive multiplications;
time Θ(n^{log_2 3}) ≈ Θ(n^{1.585}).
Binary search: Θ(log n) time.
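A sketch of Karatsuba's three-multiplication scheme for non-negative integers (splitting
on decimal digits for readability; production code would split on bits):

```python
# Karatsuba: x*y via three recursive products z0, z1, z2 instead of four.
def karatsuba(x, y):
    if x < 10 or y < 10:
        return x * y                       # base case: single-digit factor
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    # (low+high)(low+high) - z0 - z2 gives the cross terms with one multiply.
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0
```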

e) Topological sorting & strongly connected components.


Topological sort: a linear ordering of the vertices of a DAG such that for every directed
edge u->v, u comes before v. It can be computed from DFS finishing times or with Kahn's
algorithm. SCCs: use Kosaraju's or Tarjan's algorithm (O(V+E)).
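Kahn's algorithm can be sketched as follows (the adjacency-list graph is an illustrative
example, not from the assignment):

```python
from collections import deque

# Kahn's algorithm: repeatedly remove a vertex of in-degree 0.  O(V + E).
def topo_sort(graph):
    indeg = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indeg[v] += 1
    queue = deque(u for u in graph if indeg[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # If some vertex never reached in-degree 0, the graph has a cycle.
    return order if len(order) == len(graph) else None

order = topo_sort({'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []})
```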

Q3.
Graph algorithms (Prim's, Kruskal's, Dijkstra's) — explanations and steps:
a) Prim's algorithm (summary): start from any vertex and grow the MST by repeatedly
adding the minimum-weight edge connecting the tree to a new vertex; complexity O(E log V)
with a binary heap, or O(E + V log V) with a Fibonacci heap.
Kruskal's algorithm: sort edges by weight and add each edge that does not form a cycle
(use union-find). Complexity O(E log E) = O(E log V).
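Kruskal's algorithm with union-find can be sketched as (the small edge list is an
illustrative example; vertices are numbered 0..n-1):

```python
# Kruskal's MST: sort edges, accept an edge iff its endpoints are in
# different union-find components.
def kruskal(n, edges):                      # edges: (weight, u, v) tuples
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    mst, total = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                        # accepting this edge adds no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return total, mst

total, mst = kruskal(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3), (5, 1, 3)])
# total == 6 with the 3 edges 0-1, 2-3, 1-2
```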

b) Dijkstra’s shortest path (summary): maintain tentative distances, repeatedly select
the unvisited vertex with the smallest tentative distance, and relax its outgoing edges.
Complexity O(E log V) with a binary heap (O(E + V log V) with a Fibonacci heap).
(For the given sample graph, apply algorithms step-by-step on paper — omitted here for
brevity but straightforward to follow.)
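The select-and-relax loop can be sketched with a binary heap (the small graph is an
illustrative example, not the assignment's sample graph):

```python
import heapq

# Dijkstra's algorithm with a binary heap; graph: {u: [(v, weight), ...]}.
def dijkstra(graph, source):
    dist = {u: float('inf') for u in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:                # stale heap entry, skip
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:        # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

g = {'s': [('a', 2), ('b', 5)], 'a': [('b', 1), ('t', 4)], 'b': [('t', 1)], 't': []}
dist = dijkstra(g, 's')   # shortest s->t path is s-a-b-t with cost 4
```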

c) Optimal Binary Search Tree (OBST) dynamic programming:


Given probabilities:
i: 0..4
p = [0.10, 0.15, 0.20, 0.10] (these are p1..p4; p0 is not used)
q = [0.05,0.10,0.10,0.10,0.10] (dummy probs q0..q4)
Use standard DP to compute expected search cost and structure. (Omitted full table due to
space; method: compute e[i,j], w[i,j], root[i,j].)
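The standard e/w/root DP (CLRS-style indexing: keys 1..n, dummies 0..n) can be sketched
and run on the probabilities above:

```python
# OBST dynamic programming: e[i][j] is the expected search cost of an
# optimal subtree over keys k_i..k_j; w[i][j] is its probability mass.
def optimal_bst(p, q):
    n = len(p) - 1                     # p[0] is unused; keys are 1..n
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        e[i][i - 1] = w[i][i - 1] = q[i - 1]   # empty subtree: dummy only
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            e[i][j] = float('inf')
            for r in range(i, j + 1):          # try each key as the root
                cost = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if cost < e[i][j]:
                    e[i][j], root[i][j] = cost, r
    return e[1][n], root[1][n]

cost, r = optimal_bst([0, 0.10, 0.15, 0.20, 0.10],
                      [0.05, 0.10, 0.10, 0.10, 0.10])
# cost == 2.50 with k3 at the root
```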

Q4.
a) Decision problem definition and examples.
Decision problem: yes/no question about an input (e.g., "Is there a Hamiltonian cycle?").
Optimization variant asks for best value (e.g., shortest Hamiltonian tour length).
(i) Travelling Salesman Problem — optimization: find minimal tour cost. Decision version:
is there a tour ≤ K? (Yes/No)
(ii) Graph Colouring — decision: can vertices be colored with k colors? (Yes/No)
(iii) 0-1 Knapsack — optimization: maximize profit subject to weight; decision: is there
subset with profit ≥ K?

b) P and NP classes.
P: problems solvable in polynomial time by deterministic Turing machine.
NP: problems whose solutions can be verified in polynomial time (nondeterministic poly-
time). Examples P: sorting, shortest path (Dijkstra). Examples NP: Hamiltonian cycle
(decision), subset-sum (decision).
Relationship: P ⊆ NP; open whether P = NP.

c) NP-Hard and NP-Complete definitions.


NP-Hard: at least as hard as every problem in NP (not necessarily in NP).
NP-Complete: both in NP and NP-Hard. Use polynomial-time reductions to show NP-
completeness. Polynomial-time reduction example: 3-SAT ≤_p CLIQUE, etc.
d) Definitions of problems:
(i) SAT Problem: satisfiability of boolean formula (is there an assignment making formula
true?) — NP-Complete (Cook-Levin).
(ii) Clique problem: given graph G and k, is there a clique of size ≥ k? (Decision) — NP-
Complete.
(iii) Hamiltonian Cycle: does a graph contain a cycle visiting every vertex exactly once?
— NP-Complete.
(iv) Subset Sum: given set of integers and target S, is there subset summing to S? — NP-
Complete (decision version).

--- End of concise solutions. ---
