Algorithm BinomialCoefficient

The document outlines several sorting algorithms including Selection Sort, Merge Sort, Quick Sort, Insertion Sort, and their complexities. It also covers long integer multiplication using Karatsuba's method and Strassen's matrix multiplication, detailing their recursive structures and time complexities. Additionally, it describes graph traversal algorithms like Depth-First Search (DFS) and Breadth-First Search (BFS), along with the computation of the binomial coefficient using dynamic programming.

ALGORITHM SelectionSort(A[0..n−1])
//Sorts a given array by selection sort
//Input: An array A[0..n−1] of orderable elements
//Output: Array A[0..n−1] sorted in nondecreasing order
for i ← 0 to n−2 do
    min ← i
    for j ← i+1 to n−1 do
        if A[j] < A[min] min ← j
    swap A[i] and A[min]
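
For concreteness, here is a runnable Python rendering of the pseudocode above (a sketch; the function name is illustrative, not from the original):

def selection_sort(a):
    """Sorts list a in place in nondecreasing order."""
    n = len(a)
    for i in range(n - 1):
        m = i                      # index of the smallest element seen so far
        for j in range(i + 1, n):
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]    # swap A[i] and A[min]
    return a

# Example: selection_sort([89, 45, 68, 90, 29]) -> [29, 45, 68, 89, 90]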

MergeSort:
ALGORITHM Mergesort(A[0..n−1])
//Sorts array A[0..n−1] by recursive mergesort
//Input: An array A[0..n−1] of orderable elements
//Output: Array A[0..n−1] sorted in nondecreasing order
if n > 1
    copy A[0..⌊n/2⌋−1] to B[0..⌊n/2⌋−1] //to make partitions
    copy A[⌊n/2⌋..n−1] to C[0..⌈n/2⌉−1]
    Mergesort(B[0..⌊n/2⌋−1])
    Mergesort(C[0..⌈n/2⌉−1])
    Merge(B, C, A) //see below

ALGORITHM Merge(B[0..p−1], C[0..q−1], A[0..p+q−1])
//Merges two sorted arrays into one sorted array
//Input: Arrays B[0..p−1] and C[0..q−1], both sorted
//Output: Sorted array A[0..p+q−1] of the elements of B and C
i ← 0; j ← 0; k ← 0
while i < p and j < q do
    if B[i] ≤ C[j]
        A[k] ← B[i]; i ← i+1
    else
        A[k] ← C[j]; j ← j+1
    k ← k+1
if i = p
    copy C[j..q−1] to A[k..p+q−1]
else copy B[i..p−1] to A[k..p+q−1]
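
A compact Python sketch of Mergesort and Merge combined (it returns a new sorted list rather than copying into B and C; the name is illustrative):

def merge_sort(a):
    """Returns a new list with the elements of a in nondecreasing order."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    b = merge_sort(a[:mid])            # sort the left half
    c = merge_sort(a[mid:])            # sort the right half
    merged, i, j = [], 0, 0
    while i < len(b) and j < len(c):   # merge the two sorted halves
        if b[i] <= c[j]:
            merged.append(b[i]); i += 1
        else:
            merged.append(c[j]); j += 1
    merged.extend(b[i:])               # append whichever half has leftovers
    merged.extend(c[j:])
    return merged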

1. Input size: n – number of elements to be sorted. (Assume for simplicity that n is a power of 2.)
2. Basic operation: comparison
3. No separate worst, average, and best cases
4. Let T(n) = number of times the basic operation is executed:
   T(n) = 2T(n/2) + T_divide_merge(n) for n > 1, T(1) = 0

Solve using back substitution, taking T_divide_merge(n) ≈ cn for some constant c:

First expansion:
T(n) = 2T(n/2) + cn
Second expansion:
= 4T(n/4) + cn + cn
= 4T(n/4) + 2cn
Third expansion:
= 8T(n/8) + 3cn
After k steps:
T(n) = 2^k * T(n / 2^k) + k * cn
Base Case:
We stop when n / 2^k = 1
→ 2^k = n
→ k = log₂(n)
Final Substitution:
T(n) = n * T(1) + cn * log₂(n)
Since T(1) = 0:
T(n) = cn * log₂(n)
So, T(n) = Θ(n log n)

Quick Sort:
Algorithm Quicksort(A, left, right)
// Sorts a set of elements recursively using quicksort
// Input: An array A, with indices from left to right
// Output: A sorted array A in ascending order

if left < right
    s ← Partition(A, left, right) // Determine the pivot's correct position
    Quicksort(A, left, s−1) // Sort the left subarray
    Quicksort(A, s+1, right) // Sort the right subarray

Algorithm Partition(A, left, right)
// Partitions a subarray A indexed from left to right
pivot ← A[left]
i ← left + 1
j ← right
while True
    while pivot ≥ A[i] and i ≤ right do
        increment i
    while pivot < A[j] do
        decrement j
    if i < j
        swap(A[i], A[j])
    else
        swap(A[left], A[j])
        return j
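
A Python sketch of quicksort with the Hoare-style partition above (in place; a bounds check is added so the left-to-right scan cannot run off the array):

def partition(a, left, right):
    pivot = a[left]
    i, j = left + 1, right
    while True:
        while i <= right and a[i] <= pivot:   # scan right past small elements
            i += 1
        while a[j] > pivot:                   # scan left past large elements
            j -= 1
        if i < j:
            a[i], a[j] = a[j], a[i]
        else:
            a[left], a[j] = a[j], a[left]     # place pivot in its final slot
            return j

def quicksort(a, left=0, right=None):
    if right is None:
        right = len(a) - 1
    if left < right:
        s = partition(a, left, right)
        quicksort(a, left, s - 1)
        quicksort(a, s + 1, right)
    return a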

1. Input size:
Let n be the number of elements to be sorted.
(Assume n is a power of 2 for simplicity.)
2. Basic operation:
Comparison (during the partitioning process).
3. Case distinctions:
Quicksort’s performance depends on how balanced the partitions are:
o Best case: pivot splits array equally
o Worst case: pivot is always smallest or largest
o Average case: partitions are reasonably balanced
4. Recurrence relation (for number of comparisons):
Let T(n) be the number of comparisons (basic operations).

Back substitution:
T(n) = 2T(n/2) + cn
= 2[2T(n/4) + c(n/2)] + cn
= 4T(n/4) + 2cn
= 8T(n/8) + 3cn
...
= 2^k * T(n / 2^k) + k * cn
Stopping condition:
n / 2^k = 1 → 2^k = n → k = log₂n
Final step:
T(n) = n * T(1) + cn * log₂n
T(n) = 0 + cn log n
T(n) = O(n log n)

For the worst case (the pivot is always the smallest or largest element), one subarray is empty and the recurrence becomes T(n) = T(n−1) + cn.

Back substitution:
T(n) = T(n−1) + cn
= [T(n−2) + c(n−1)] + cn
= T(n−2) + c(n−1 + n)
= T(n−3) + c(n−2 + n−1 + n)
...
= T(1) + c(2 + 3 + ... + n)
= 0 + c * (n(n+1)/2 − 1)
= O(n²)

Long integer multiplication


// Multiplies two n-digit integers a and b using Karatsuba’s divide and conquer method
// Input: a and b as n-digit integers
// Output: Product c = a × b
Multiply(a, b)
if n = 1
return a × b // Base case: single-digit multiplication
m ← ⌊n / 2⌋ // Mid-point to split the digits
// Step 1: Divide a and b into two halves
a1 ← higher-order m digits of a
a0 ← lower-order m digits of a
b1 ← higher-order m digits of b
b0 ← lower-order m digits of b
// Step 2: Recursive multiplications
c0 ← Multiply(a0, b0) // Low part
c2 ← Multiply(a1, b1) // High part
c1 ← Multiply(a0 + a1, b0 + b1) − c0 − c2 // Cross terms
// Step 3: Combine the results
c ← c2 × 10^(2m) + c1 × 10^m + c0
return c
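
The same method in runnable Python, operating on Python integers and splitting by powers of 10 as in the pseudocode (a sketch for nonnegative inputs):

def multiply(a, b):
    if a < 10 or b < 10:                       # base case: a single-digit factor
        return a * b
    m = max(len(str(a)), len(str(b))) // 2     # mid-point to split the digits
    a1, a0 = divmod(a, 10 ** m)                # higher / lower-order digits of a
    b1, b0 = divmod(b, 10 ** m)                # higher / lower-order digits of b
    c0 = multiply(a0, b0)                      # low part
    c2 = multiply(a1, b1)                      # high part
    c1 = multiply(a0 + a1, b0 + b1) - c0 - c2  # cross terms
    return c2 * 10 ** (2 * m) + c1 * 10 ** m + c0

# Example: multiply(1234, 5678) -> 7006652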

Analysis of Long Integer Multiplication Using Divide & Conquer

1. Input size:
n – number of digits in integers a and b.
(Assume for simplicity that n is a power of 2.)
2. Basic operation:
Single-digit multiplication
3. No best, average, or worst case distinctions
The number of operations is deterministic regardless of input.
4. Let M(n) = number of single-digit multiplications executed.

M(n)=3M(n/2)+O(n), M(1)= 1

Solve using back substitution:

M(n) = 3M(n/2) + cn
     = 3[3M(n/4) + c(n/2)] + cn
     = 9M(n/4) + (3/2)cn + cn
     = 27M(n/8) + (9/4)cn + (3/2)cn + cn
     = ...
General Form:
M(n) = 3^k × M(n / 2^k) + cn[(3/2)^(k−1) + ... + (3/2) + 1]
     = 3^k × M(n / 2^k) + 2cn[(3/2)^k − 1]
Stopping Condition:
When n / 2^k = 1, then k = log₂n

Final Complexity:
M(n) = 3^(log₂n) + 2cn[(3/2)^(log₂n) − 1]
     = n^(log₂3) + 2c(n^(log₂3) − n)
M(n) = O(n^(log₂3)) ≈ O(n^1.585)

Strassen’s Matrix Multiplication
// Multiplies two n × n matrices a and b using Strassen’s divide and conquer method
// Input: a and b are square matrices of size n × n (n is a power of 2)
// Output: Matrix c = a × b

Multiply(a, b):
if n = 1
return a × b

// Step 1: Divide a and b into 4 submatrices each


a1, a2, a3, a4 ← submatrices of a
b1, b2, b3, b4 ← submatrices of b

// Step 2: Compute 7 intermediate matrices


d ← Multiply(a1, b2 − b4)
e ← Multiply(a4, b3 − b1)
f ← Multiply(a3 + a4, b1)
g ← Multiply(a1 + a2, b4)
h ← Multiply(a3 − a1, b1 + b2)
i ← Multiply(a2 − a4, b3 + b4)
j ← Multiply(a1 + a4, b1 + b4)

// Step 3: Compute submatrices of result matrix c


c1 ← e + i + j − g
c2 ← d + g
c3 ← e + f
c4 ← d + h + j − f

return c = combine(c1, c2, c3, c4)
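
A NumPy sketch of the same scheme, assuming NumPy is available and n is a power of 2 (the names d..j mirror the seven products above):

import numpy as np

def strassen(A, B):
    n = A.shape[0]
    if n == 1:
        return A * B                            # 1×1 base case
    m = n // 2                                  # split into four m×m quadrants
    a1, a2, a3, a4 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    b1, b2, b3, b4 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    d = strassen(a1, b2 - b4)
    e = strassen(a4, b3 - b1)
    f = strassen(a3 + a4, b1)
    g = strassen(a1 + a2, b4)
    h = strassen(a3 - a1, b1 + b2)
    i = strassen(a2 - a4, b3 + b4)
    j = strassen(a1 + a4, b1 + b4)
    c1, c2 = e + i + j - g, d + g               # top-left, top-right
    c3, c4 = e + f, d + h + j - f               # bottom-left, bottom-right
    return np.vstack((np.hstack((c1, c2)), np.hstack((c3, c4))))

# Example: strassen(np.array([[1, 2], [3, 4]]), np.array([[5, 6], [7, 8]]))
# -> [[19, 22], [43, 50]]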

Time Complexity (with Back Substitution):


• 7 recursive calls on n/2 size
• Additional matrix addition/subtraction is O(n²)

Recurrence:
T(n) = 7T(n/2) + cn²
Solving (back substitution):
T(n) = 7^k T(n / 2^k) + cn²[(7/4)^(k−1) + ... + (7/4) + 1]
     = 7^k T(n / 2^k) + (4/3)cn²[(7/4)^k − 1]
When n / 2^k = 1 → k = log₂n
So,
T(n) = n^(log₂7) + (4/3)c(n^(log₂7) − n²)
⇒ T(n) = O(n^(log₂7)) ≈ O(n^2.807)
InsertionSort
ALGORITHM InsertionSort(A[0..n−1])
//Sorts a given array by insertion sort
//Input: An array A[0..n−1] of n orderable elements
//Output: Array A[0..n−1] sorted in nondecreasing order
for i ← 1 to n−1 do
    v ← A[i]
    j ← i−1
    while j ≥ 0 and A[j] > v do
        A[j+1] ← A[j]
        j ← j−1
    A[j+1] ← v
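
A direct Python rendering of the pseudocode above (a sketch; the name is illustrative):

def insertion_sort(a):
    """Sorts list a in place in nondecreasing order."""
    for i in range(1, len(a)):
        v = a[i]                     # element to insert into sorted prefix a[0..i-1]
        j = i - 1
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]          # shift larger elements one slot right
            j -= 1
        a[j + 1] = v
    return a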

• Input size:
n – number of elements in the array.

• Basic operation:
Comparison between elements (A[j] > v)

• Best, Worst, and Average Cases:

Case                    Comparisons   Shifts       Time Complexity
Best (sorted array)     n−1           0            O(n)
Worst (reverse order)   n(n−1)/2      n(n−1)/2     O(n²)
Average                 ≈ n²/4        ≈ n²/4       O(n²)


DFS
// Explores all vertices reachable from vertex v in graph G using DFS
// Input: Graph G = (V, E) represented as an adjacency list or matrix
//        v – the starting vertex
// Output: visited[] array marking the reachable vertices
DFS(G, v):
    // visited[] is initialized to all-unvisited before the first call
    mark v as visited
    for each vertex u adjacent to v do
        if u is not visited then
            DFS(G, u)
Explanation of Variables
• G = graph with vertices V and edges E
• v = current vertex being explored
• visited[] = Boolean array or set to keep track of explored vertices
• adjacent to v = all neighbors connected directly to v

How It Works:
• Start from a node v
• Visit it and mark it as visited
• Recursively visit each unvisited neighbor
• Backtrack when there are no more unvisited neighbors
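
As a minimal Python sketch of this recursive traversal (the adjacency-dict format and names are illustrative):

def dfs(graph, v, visited=None):
    """Recursive DFS; graph is a dict mapping each vertex to its neighbors."""
    if visited is None:
        visited = set()
    visited.add(v)                  # mark v as visited
    for u in graph[v]:              # every vertex adjacent to v
        if u not in visited:
            dfs(graph, u, visited)
    return visited

# Example: dfs({'a': ['b', 'c'], 'b': ['a'], 'c': ['a']}, 'a') -> {'a', 'b', 'c'}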

Time Complexity:
Let:
• V = number of vertices
• E = number of edges
Then:
• For adjacency list representation:
T(n)=O(V+E)
(Each vertex and each edge is visited once)
• For adjacency matrix representation: O(V²)

BFS
// Traverses a graph G starting from vertex v using Breadth-First Search
// Input: Graph G = (V, E) as adjacency list or matrix
// v – starting vertex
// Output: Visited[] array showing BFS traversal order

BFS(G, v):
    // visited[] is initialized to all-unvisited before traversal
    create an empty queue Q
    mark v as visited
    enqueue v into Q

    while Q is not empty do
        u ← dequeue Q
        for each vertex w adjacent to u do
            if w is not visited then
                mark w as visited
                enqueue w into Q
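
The same traversal as a runnable Python sketch (adjacency-dict format assumed, as in the DFS example above):

from collections import deque

def bfs(graph, v):
    """BFS from v; returns vertices in the order they were visited."""
    visited = {v}                   # mark the start vertex
    order = []
    q = deque([v])                  # the queue Q
    while q:
        u = q.popleft()             # dequeue
        order.append(u)
        for w in graph[u]:          # every vertex adjacent to u
            if w not in visited:
                visited.add(w)      # mark before enqueueing
                q.append(w)
    return order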

Time Complexity:

Let:

• V = number of vertices
• E = number of edges

If using adjacency list:

T(n)=O(V+E)

(Each vertex and edge is visited once)

Adjacency Matrix O(V²)

Feature                   DFS (Depth-First Search)                         BFS (Breadth-First Search)
Traversal order           Depth-wise (go deep first, then backtrack)       Level-wise (explore all neighbors first)
Data structure used       Stack (implicit via recursion)                   Queue
Visited order             May go deep without visiting all nearby nodes    Visits all immediate neighbors first
Implementation            Recursive or explicit stack                      Iterative using a queue
Time complexity           O(V + E) with adjacency list, O(V²) with matrix  Same: O(V + E) or O(V²)
Space complexity          O(V) (visited + recursion stack)                 O(V) (visited + queue)
Use case                  Path finding, topological sort                   Shortest path in unweighted graphs
Shortest path guarantee?  No (not guaranteed)                              Yes (in unweighted graphs)
Suitable for              Game trees, puzzles, connected components        Network routing, level-order traversal

Binomial Coefficient
Algorithm BinomialCoefficient(n, r)
// Computes the binomial coefficient C(n, r) using dynamic programming
// Input: Two non-negative integers n and r (0 ≤ r ≤ n)
// Output: Value of C(n, r)

1. Create a 2D array C[0..n][0..r]


2. for i ← 0 to n do
3. for j ← 0 to min(i, r) do
4. if j = 0 or j = i then
5. C[i][j] ← 1
6. else
7. C[i][j] ← C[i-1][j-1] + C[i-1][j]
8. return C[n][r]
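
A runnable Python version of the steps above (a sketch; the name is illustrative):

def binomial_coefficient(n, r):
    """C(n, r) by bottom-up DP over a portion of Pascal's triangle."""
    c = [[0] * (r + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, r) + 1):
            if j == 0 or j == i:
                c[i][j] = 1                          # base cases C(i,0) = C(i,i) = 1
            else:
                c[i][j] = c[i - 1][j - 1] + c[i - 1][j]
    return c[n][r]

# Example: binomial_coefficient(5, 2) -> 10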
Analysis
• Time Complexity: O(n×r)
Two nested loops: the outer runs n+1 times, the inner runs up to r+1 times.
• Space Complexity: O(n×r)
Stores a 2D table of size (n+1)×(r+1).

Conclusion

• The dynamic programming approach to computing the binomial coefficient is efficient and avoids the exponential time complexity of the naive recursive approach. It is particularly useful for large values of n and r.

How This Algorithm Differs from Other Approaches
• Versus Recursive Approach:
• The recursive formula is C(n, r) = C(n−1, r−1) + C(n−1, r), but direct recursion recalculates the same values many times, leading to exponential time complexity.
• The dynamic programming approach stores intermediate results, ensuring each subproblem is solved only once.
• Versus Factorial Formula:
• The formula C(n, r) = n! / (r!(n−r)!) is efficient for small numbers, but can cause integer overflow for large n.
• The DP approach avoids large intermediate values and is more numerically stable for programming.
• Versus Pascal’s Triangle:
• The DP table is essentially a portion of Pascal’s Triangle, built up to the required n and r.

Algorithm FloydsAlgorithm(W)
// Computes shortest paths between all pairs of vertices using dynamic programming
// Input: n x n matrix W (edge weights or ∞)
// Output: n x n matrix D (shortest path weights)

1. D[1..n, 1..n] ← W[1..n, 1..n] // Initialize D with the edge-weight matrix (D ← W)

2. for k ← 1 to n do // Consider each vertex as an intermediate
3.     for i ← 1 to n do // For each source vertex
4.         for j ← 1 to n do // For each destination vertex
5.             D[i, j] ← min{D[i, j], D[i, k] + D[k, j]}

6. return D
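
A Python sketch of the same triple loop (float('inf') stands in for ∞; the matrix is copied so the input W is not mutated):

def floyd(w):
    """All-pairs shortest paths; w is an n×n matrix with float('inf') for no edge."""
    n = len(w)
    d = [row[:] for row in w]           # D ← W
    for k in range(n):                  # each vertex as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d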

Analysis
• Time Complexity: O(n³) due to the three nested loops iterating over all vertices.
• Space Complexity: O(n²) for storing the distance matrix D.
Conclusion
Floyd’s Algorithm provides a simple and elegant solution to the all-pairs shortest path problem,
capable of handling graphs with negative edge weights (except negative cycles). It is especially
suitable for dense graphs or when shortest paths between all pairs are required.

Algorithm Warshall(A[1..n, 1..n])


// Implements Warshall's algorithm for computing the transitive closure
// Input: Adjacency matrix A of a digraph with n vertices
// Output: Transitive closure matrix R(n)

1. R(0) ← A // Initialize R(0) as the adjacency matrix


2. for k ← 1 to n do // For each intermediate vertex k
3. for i ← 1 to n do // For each source vertex i
4. for j ← 1 to n do // For each destination vertex j
5. R(k)[i, j] ← R(k-1)[i, j] OR (R(k-1)[i, k] AND R(k-1)[k, j])
// Update: path from i to j exists if it already did,
// or if there is a path from i to k and from k to j

6. return R(n) // R(n) is the transitive closure matrix
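
A Python sketch of Warshall's update, storing the closure in a single matrix that is overwritten in place (a 0/1 adjacency matrix is assumed):

def warshall(a):
    """Transitive closure of a digraph; a is a 0/1 adjacency matrix."""
    n = len(a)
    r = [row[:] for row in a]           # R(0) ← A
    for k in range(n):                  # each intermediate vertex
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r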

Analysis
• Time Complexity: O(n³) (three nested loops over n vertices)
• Space Complexity: O(n²) (for the reachability matrix).
Conclusion
Warshall’s Algorithm is an efficient and systematic method to compute the transitive
closure of a directed graph. It determines, for every pair of vertices, whether a path
exists, enabling fast answers to reachability queries.

Algorithm Knapsack01(v[1..n], w[1..n], n, W)


// Solves the 0/1 Knapsack Problem using dynamic programming
// Input: Arrays v (values), w (weights), n (number of items), W (capacity)
// Output: Maximum value that fits in the knapsack

1. Create a 2D array m[0..n, 0..W]


2. for j ← 0 to W do
3. m[0, j] ← 0 // No items, zero value
4. for i ← 1 to n do
5. m[i, 0] ← 0 // Zero capacity, zero value
6. for j ← 1 to W do
7. if w[i] > j then
8. m[i, j] ← m[i-1, j] // Cannot include item i
9. else
10. m[i, j] ← max(m[i-1, j], m[i-1, j-w[i]] + v[i]) // Include or exclude item i
11. return m[n, W] // Maximum value achievable
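
A runnable Python version of the tabulation above (0-based lists; names are illustrative):

def knapsack01(values, weights, capacity):
    """0/1 knapsack by tabulation; values and weights are parallel lists."""
    n = len(values)
    m = [[0] * (capacity + 1) for _ in range(n + 1)]   # row 0 / column 0 stay 0
    for i in range(1, n + 1):
        for j in range(1, capacity + 1):
            if weights[i - 1] > j:                     # item i cannot fit
                m[i][j] = m[i - 1][j]
            else:                                      # better of exclude / include
                m[i][j] = max(m[i - 1][j],
                              m[i - 1][j - weights[i - 1]] + values[i - 1])
    return m[n][capacity]

# Example: knapsack01([12, 10, 20, 15], [2, 1, 3, 2], 5) -> 37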

Analysis
• Time Complexity: O(nW)
Two nested loops: one over items, one over capacities.
• Space Complexity: O(nW)
For the DP table. (Can be optimized to O(W) if only the last row is needed.)
Conclusion

• The dynamic programming approach to the 0/1 Knapsack Problem is efficient and avoids the exponential time of brute-force solutions. It is widely used in resource allocation, budgeting, and optimization problems.

Algorithm MFKnapsack(i, j)
// Implements the memory function method for the 0/1 Knapsack problem
// Input: i (number of items considered), j (knapsack capacity)
// Output: Value of an optimal feasible subset of the first i items
// Row 0 and column 0 of table F are initialized with 0’s; all other entries with −1
1. if F[i, j] < 0 then
2.     if j < Weights[i] then
3.         value ← MFKnapsack(i-1, j) // Cannot include item i
4.     else
5.         value ← max( MFKnapsack(i-1, j),
6.                      Values[i] + MFKnapsack(i-1, j - Weights[i]) )
7.     F[i, j] ← value // Store the result in the table
8. return F[i, j] // Return the memoized value

Analysis
• Time Complexity: O(nW)
Each subproblem F[i, j] is solved at most once.
• Space Complexity: O(nW)
For the memoization table and recursion stack.

Conclusion
The memory function (memoization) approach for the 0/1 Knapsack problem is efficient and
elegant. It combines the clarity of recursion with the efficiency of dynamic programming,
making it suitable for problems where only a subset of subproblems is needed.

How This Differs from Other Approaches

Approach                Method            Fills All Subproblems?  Time Complexity  Space Complexity
Brute-force recursion   Top-down          No                      Exponential      O(n) stack
DP (Tabulation)         Bottom-up         Yes                     O(nW)            O(nW)
Memory function (Memo)  Top-down + Memo   Only needed ones        O(nW)            O(nW)

• Memory function: Only solves and stores subproblems that are actually needed.
• Tabulation: Fills the entire DP table regardless of need.


Algorithm: MFKnapsack (Memory Function Method for 0/1 Knapsack)


Purpose
To solve the 0/1 Knapsack problem efficiently using memory functions (memoization),
combining recursion with dynamic programming to avoid redundant computations.

Input
• A nonnegative integer i: number of the first items being considered.
• A nonnegative integer j: the current knapsack capacity.
• Global variables:
• Arrays Weights[1..n] and Values[1..n] for item weights and values.
• Table F[0..n, 0..W] for memoization.
Output
• The value of an optimal feasible subset of the first i items with capacity j.

Algorithm Steps
Algorithm MFKnapsack(i, j)
// Implements the memory function method for the 0/1 Knapsack problem
// Input: i (number of items considered), j (knapsack capacity)
// Output: Value of an optimal feasible subset of the first i items

// --- Initialization ---


1. For s ← 0 to n do
2. F[s, 0] ← 0
3. For t ← 0 to W do
4. F[0, t] ← 0
5. For s ← 1 to n do
6. For t ← 1 to W do
7. F[s, t] ← -1

// --- Recursive Memory Function ---


8. Function MFKnapsack(i, j)
9. if F[i, j] < 0 then
10. if j < Weights[i] then
11. value ← MFKnapsack(i-1, j) // Cannot include item i
12. else
13. value ← max( MFKnapsack(i-1, j),
14. Values[i] + MFKnapsack(i-1, j - Weights[i]) )
15. F[i, j] ← value // Store the result in the table
16. return F[i, j] // Return the memoized value

// --- To solve the problem, call MFKnapsack(n, W) ---

Comments
• Initialization:
• All entries in row 0 and column 0 of F are set to 0, representing zero items or
zero capacity.
• All other entries are set to -1, indicating unsolved subproblems.
• The function MFKnapsack(i, j) computes the optimal value recursively and stores results
in F.
• If a subproblem has already been solved (F[i, j] ≠ -1), its value is returned immediately.

Analysis
• Time Complexity: O(nW)
Each subproblem F[i, j] is solved at most once.
• Space Complexity: O(nW)
For the memoization table and recursion stack.

Conclusion
The memory function (memoization) approach for the 0/1 Knapsack problem is efficient and
elegant. It combines the clarity of recursion with the efficiency of dynamic programming, and
initialization ensures correct base cases for the recursion.
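
A self-contained Python sketch combining the initialization and the memory function (0-based lists; names are illustrative):

def mf_knapsack(values, weights, capacity):
    """Memory-function 0/1 knapsack: only needed subproblems are computed."""
    n = len(values)
    f = [[-1] * (capacity + 1) for _ in range(n + 1)]   # -1 = unsolved
    for i in range(n + 1):
        f[i][0] = 0                     # zero capacity -> zero value
    for j in range(capacity + 1):
        f[0][j] = 0                     # zero items -> zero value

    def solve(i, j):
        if f[i][j] < 0:                 # subproblem not yet solved
            if weights[i - 1] > j:      # item i cannot fit
                f[i][j] = solve(i - 1, j)
            else:
                f[i][j] = max(solve(i - 1, j),
                              values[i - 1] + solve(i - 1, j - weights[i - 1]))
        return f[i][j]

    return solve(n, capacity)

# Example: mf_knapsack([12, 10, 20, 15], [2, 1, 3, 2], 5) -> 37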
Algorithm Prim(G)
// Constructs a minimum spanning tree using Prim's greedy method
// Input: A weighted connected graph G = (V, E)
// Output: ET, the set of edges composing a minimum spanning tree of G

1. VT ← {v0} // The set of tree vertices, can be initialized with any vertex v0
2. ET ← ∅ // The set of edges in the minimum spanning tree

3. for i ← 1 to |V| − 1 do
4. Find a minimum-weight edge e* = (v*, u*) among all edges (v, u)
5. such that v is in VT and u is in V − VT
6. VT ← VT ∪ {u*} // Add the selected vertex to the tree
7. ET ← ET ∪ {e*} // Add the selected edge to the tree

8. return ET // Return the edges of the minimum spanning tree
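
A lazy Python sketch of the same process, where a heap stands in for "find a minimum-weight edge (v*, u*)" (the adjacency format is illustrative):

import heapq

def prim(graph, start):
    """Prim's MST; graph maps each vertex to a list of (weight, neighbor) pairs."""
    in_tree = {start}                              # VT
    mst = []                                       # ET
    heap = [(w, start, u) for w, u in graph[start]]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(graph):
        w, v, u = heapq.heappop(heap)              # minimum-weight candidate edge
        if u in in_tree:
            continue                               # both endpoints already in VT
        in_tree.add(u)
        mst.append((v, u, w))
        for w2, x in graph[u]:                     # new candidate edges from u
            if x not in in_tree:
                heapq.heappush(heap, (w2, u, x))
    return mst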

Comments
• The algorithm starts with any vertex and grows the tree by always adding the minimum-
weight edge connecting a vertex in the tree to a vertex outside the tree.
• The process repeats until all vertices are included in the tree and the MST
has ∣V∣−1 edges
Conclusion
Prim’s Algorithm is a greedy method that always chooses the locally optimal (minimum-weight)
edge to expand the MST, ensuring a globally optimal solution. It is especially efficient for
dense graphs.

How It Differs from Other MST Algorithms


• Prim’s grows the MST from one vertex, always connecting the nearest outside vertex.
• Kruskal’s adds the next lightest edge that doesn’t form a cycle, possibly connecting
different trees.
• Prim’s is typically more efficient for dense graphs, while Kruskal’s is often preferred for
sparse graphs.

Algorithm: Dijkstra(G, s)
// Dijkstra’s algorithm for single-source shortest paths
// Input: A weighted connected graph G = (V, E) with nonnegative edge weights and a
// source vertex s
// Output: For every vertex v in V, the length d[v] of a shortest path from s to v and its
// penultimate vertex P[v]

// Step 1: Initialization
for each vertex v in V do
d[v] ← ∞ // Tentative distance
P[v] ← NIL // Predecessor

d[s] ← 0 // Distance from source to itself


Soln ← ∅ // Set of visited/solved vertices
W←V // Unvisited vertices

// Step 2: Main Loop


while W ≠ ∅ do
u ← extract-min(W) // Select vertex with min d[u]
Soln ← Soln ∪ {u}
W ← W - {u}

for each vertex v ∈ V adjacent to u do


if d[v] > d[u] + C(u, v) then
d[v] ← d[u] + C(u, v) // Relaxation step
P[v] ← u // Update path
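
A binary-heap Python sketch of the algorithm (graph format and names are illustrative; stale heap entries are skipped instead of using a decrease-key operation):

import heapq

def dijkstra(graph, s):
    """graph maps u to a list of (v, weight) pairs; returns distances and predecessors."""
    d = {v: float('inf') for v in graph}    # tentative distances
    p = {v: None for v in graph}            # predecessors
    d[s] = 0
    heap = [(0, s)]
    solved = set()                          # Soln
    while heap:
        du, u = heapq.heappop(heap)         # extract-min
        if u in solved:
            continue                        # skip stale heap entries
        solved.add(u)
        for v, w in graph[u]:
            if d[v] > du + w:               # relaxation step
                d[v] = du + w
                p[v] = u
                heapq.heappush(heap, (d[v], v))
    return d, p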

Analysis
• Time Complexity:
• O(|V|²) with a simple array-based priority queue
• O((|V| + |E|) log |V|) with a binary heap
• Space Complexity: O(|V|) for distance, predecessor, and queue structures

To find all subsets of a given set whose sum equals a specified value d, using backtracking with
pruning for efficiency.

Input
• A sorted array of positive numbers w[1..n] (in non-decreasing order)
• A target sum d
Output
• All subsets of w[1..n] whose elements sum to d

Algorithm SumOfSubsets(S, k, r)
// S: Current sum
// k: Current index (1-based)
// r: Remaining sum (sum of w[k] to w[n])
// x[1..n]: Solution vector

1. x[k] ← 1 // Include current element w[k]

2. if S + w[k] == d then
3. write x[1..n] // Found a valid subset; output the solution

4. else if (S + w[k] + w[k+1]) ≤ d then


5. SumOfSubsets(S + w[k], k + 1, r - w[k])
// Explore including next element

6. if (S + r - w[k]) ≥ d and (S + w[k+1]) ≤ d then


7. x[k] ← 0 // Exclude current element w[k]
8. SumOfSubsets(S, k + 1, r - w[k])
// Explore excluding current element

// To start: call SumOfSubsets(0, 1, sum of all w[1..n])
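
A 0-based Python sketch of the same backtracking scheme (bounds checks added where the pseudocode reads w[k+1] past the last element):

def sum_of_subsets(w, d):
    """Prints every subset of the sorted list w whose elements sum to d."""
    n = len(w)
    x = [0] * n                                    # solution vector

    def solve(s, k, r):
        # s: current sum, k: current index, r: sum of w[k..n-1]
        x[k] = 1                                   # include w[k]
        if s + w[k] == d:
            print([w[i] for i in range(k + 1) if x[i] == 1])
        elif k + 1 < n and s + w[k] + w[k + 1] <= d:
            solve(s + w[k], k + 1, r - w[k])       # extend with w[k] included
        # prune: the rest must still be able to reach d without overshooting
        if s + r - w[k] >= d and k + 1 < n and s + w[k + 1] <= d:
            x[k] = 0                               # exclude w[k]
            solve(s, k + 1, r - w[k])

    solve(0, 0, sum(w))

# Example: sum_of_subsets([1, 2, 3], 3) prints [1, 2] and [3]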

Analysis
• Time Complexity: O(2ⁿ) in the worst case, but pruning often makes it much faster in
practice.
• Space Complexity: O(n) for the recursion stack and solution vector.

Conclusion
This backtracking algorithm efficiently finds all subsets that sum to a target value using pruning
to avoid unnecessary work. It is much more efficient than brute force, especially when the input
set is sorted.

How This Differs from Other Approaches


• Brute Force: Checks all 2ⁿ subsets, no pruning.
• Dynamic Programming: Finds one solution or counts solutions, but does not list all
subsets.
• Backtracking (this algorithm): Lists all solutions and uses pruning for efficiency.
Algorithm: NQueens(k, n)
// Implements the backtracking solution to the N-Queens problem
// Input:
// - k: current row number
// - n: total number of queens (board size)
// Output:
// - Prints all valid placements of n queens on an n × n chessboard

Algorithm NQueens(k, n)
// Places queens on an n×n chessboard using backtracking
for i ← 1 to n do // Try placing queen in each column of row k
if place(k, i) then // Check if position (k, i) is safe
x[k] ← i // Place queen at column i in row k
if k == n then
print x[1..n] // A valid solution is found
else
NQueens(k + 1, n) // Recur for next row

Helper Function: place(k, i)
// Checks if it is safe to place a queen at (row k, column i)
Function place(k, i)
    for j ← 1 to (k − 1) do // Check all previous rows
        if (x[j] == i) OR // Same column
           (abs(x[j] − i) == abs(j − k)) then // Same diagonal
            return false // Conflict found
    return true // Safe to place
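
A 0-based Python sketch combining NQueens and place (names are illustrative):

def n_queens(n):
    """Prints all solutions; x[k] is the column of the queen in row k."""
    x = [0] * n

    def place(k, i):
        # safe iff no earlier queen shares column i or a diagonal with (k, i)
        return all(x[j] != i and abs(x[j] - i) != abs(j - k) for j in range(k))

    def solve(k):
        for i in range(n):                  # try each column in row k
            if place(k, i):
                x[k] = i
                if k == n - 1:
                    print(x)                # a valid placement of all n queens
                else:
                    solve(k + 1)

    solve(0)

# Example: n_queens(4) prints [1, 3, 0, 2] and [2, 0, 3, 1]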

Analysis
• Time Complexity: worst case O(n!) (due to pruning, much faster than brute force in practice).
• Space Complexity: O(n) for the solution vector and recursion stack.
The N-Queens algorithm uses backtracking to efficiently explore all valid arrangements of
queens on an n×n board, printing each solution as it is found. The place function prunes invalid
placements early, making the algorithm practical even for moderate values of n.
