Algorithms - Last Yr Ans Key PDF

The document is an examination question paper for a B.E./B.Tech. course on Algorithms, covering various topics such as algorithm analysis, minimum spanning trees, greedy strategies, dynamic programming, NP-hard problems, and specific algorithms like KMP and Bellman-Ford. It includes both theoretical questions and practical applications, requiring students to demonstrate their understanding of algorithmic concepts and problem-solving techniques. The paper is structured into two parts, with Part A focusing on short-answer questions and Part B on detailed explanations of algorithms.


JCT COLLEGE OF ENGINEERING AND TECHNOLOGY

(AUTONOMOUS)
Question Paper Code: 2CS2021D404

B.E./B.Tech. DEGREE EXAMINATIONS, NOVEMBER/DECEMBER 2024

Fourth Semester

B.E./B.Tech. (CSE)

CS3401 – ALGORITHMS

(Regulations 2021)

Part-A:
Answer the following (10 × 2 = 20 marks)
1. Write the difference between worst case, best case, and average case analysis.

Worst Case: The maximum time or space an algorithm takes for the most difficult input
of size n. It provides an upper bound on performance.
Example: Searching for an element not in the array using linear search.

Best Case: The minimum time or space an algorithm requires under the most favorable
conditions.
Example: Searching for the first element in an array using linear search.

Average Case: The expected time or space for typical inputs, assuming a probability
distribution of inputs.

2. Give the significance of lower bounds in algorithm analysis.

Lower bounds represent the minimum amount of time or space an algorithm must take to
solve a problem, regardless of the approach used. They are significant because:

1. Theoretical Benchmark: Lower bounds help establish the best possible
efficiency any algorithm can achieve for a problem.
2. Optimization Goal: They guide the development of optimal algorithms by
showing the minimum computational resources required.

For example, comparison-based sorting has a lower bound of Ω(n log n) comparisons.
3. Specify the purpose of Minimum Spanning Tree (MST).

The purpose of a Minimum Spanning Tree (MST) is to connect all the vertices in a
weighted, connected, and undirected graph with:

1. Minimum Total Weight: The sum of edge weights in the MST is minimized.
2. No Cycles: The MST forms a tree, ensuring no cycles are present while still
connecting all vertices.

4. Infer the properties of strong connectivity in a directed graph.

Bidirectional Reachability
Strongly Connected Components (SCCs)
Irreducibility
Transitivity

5.Mention the key elements of the Greedy Strategy.

Greedy Choice Property
Optimal Substructure
Iterative Approach
No Backtracking

6. Describe the steps to find the maximum and minimum of a set of numbers using
divide and conquer.

Divide: Split the array into two halves.

Conquer:

 Recursively find the maximum and minimum of each half.
 If a subarray has one element, it is both the maximum and minimum.
 If a subarray has two elements, compare them to find the maximum and
minimum.

Combine: Compare the maxima of the two halves to find the overall maximum and the
minima to find the overall minimum.
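The divide-and-conquer steps above can be sketched in Python (a minimal illustration; the function name max_min is not from the original paper):

```python
def max_min(arr, lo, hi):
    """Return (maximum, minimum) of arr[lo..hi] by divide and conquer."""
    if lo == hi:                            # one element: it is both max and min
        return arr[lo], arr[lo]
    if hi == lo + 1:                        # two elements: a single comparison
        return (arr[hi], arr[lo]) if arr[lo] < arr[hi] else (arr[lo], arr[hi])
    mid = (lo + hi) // 2                    # divide
    max1, min1 = max_min(arr, lo, mid)      # conquer left half
    max2, min2 = max_min(arr, mid + 1, hi)  # conquer right half
    return max(max1, max2), min(min1, min2) # combine

print(max_min([8, 2, 5, 1, 7, 6, 3, 4], 0, 7))  # (8, 1)
```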
7. List out the applications of N-Queens problem.

Algorithm Design and Analysis: Used to study backtracking and optimization techniques.
Constraint Satisfaction Problems: Models problems with placement and conflict
constraints.
Artificial Intelligence: Demonstrates search strategies and state-space exploration.
Parallel Computing: Helps in testing parallel algorithms for large search spaces.
Robotics and Scheduling: Simulates task allocation and path planning under constraints.
Chess Programming: Forms the basis for algorithms in chess-related problems.

8. Outline the characteristics of dynamic programming.

Optimal Substructure
Overlapping Subproblems
Memoization or Tabulation
Bottom-Up Approach
State and Transition
Time and Space Efficiency

9. Give examples of NP-hard problems.

Knapsack Problem
Travelling Salesman Problem (TSP)
Graph Coloring
Hamiltonian Path Problem
Subset Sum Problem
Bin Packing Problem

10. What is an intractable problem?

An intractable problem is a problem for which no efficient solution algorithm is known.
Specifically, it refers to problems that cannot be solved in a reasonable amount of time as
the size of the input grows. These problems generally require exponential time or space,
making them impractical to solve for large instances.
Part-B:

11a. Explain the concept of asymptotic analysis in algorithm design. Discuss the
significance of Big O, Omega, and Theta notations with examples. (13 marks)

Concept of Asymptotic Analysis in Algorithm Design (3 Marks)

Asymptotic analysis is a method used to describe the behavior of an algorithm as the
input size grows. It helps in evaluating the efficiency of an algorithm in terms of time and
space complexity for large inputs, ignoring constant factors and low-order terms. The
goal is to express how the algorithm's performance scales with increasing input size.

Significance of Big O, Omega, and Theta Notations (7 Marks)

1. Big O Notation (O):


o Definition: Big O describes the upper bound of an algorithm's time or
space complexity. It represents the worst-case scenario or the maximum
time the algorithm will take.
o Significance: It helps to understand the maximum time an algorithm could
take, ensuring that it doesn't grow too fast as the input size increases.
o Example:
 For bubble sort, the worst-case time complexity is O(n²); for an
array of size n, bubble sort performs on the order of n²
comparisons in the worst case.
 Insertion sort: The worst-case time complexity is also O(n²), but it
behaves better on partially sorted data.
2. Omega Notation (Ω):
o Definition: Omega notation represents the lower bound of an algorithm's
time or space complexity. It describes the best-case scenario, or the
minimum time the algorithm will take.
o Significance: It provides insight into the least amount of work the algorithm
will perform, helping to gauge its performance in ideal conditions.
o Example:
 For merge sort, the best case is Ω(n log n), as it always divides the
problem into halves and merges them efficiently.
 For linear search, the best-case time complexity is Ω(1),
occurring when the element to be searched is the first element in
the list.
3. Theta Notation (Θ):
o Definition: Theta notation provides the tight bound, meaning it describes
both the upper and lower bounds. It represents the exact asymptotic
behavior of an algorithm.
o Significance: Theta notation gives a more precise understanding of an
algorithm’s growth rate, showing that it grows at the same rate in both the
best and worst cases.
o Example:
 Merge Sort: Its time complexity is Θ(n log n) because in both the
best and worst cases, merge sort divides the array into halves and
merges them efficiently.
 Binary Search: Its worst-case time complexity is Θ(log n), as
each step halves the search space.

Summary of Notations (3 Marks)


Notation   | Meaning                                  | Description                                        | Example
Big O (O)  | Upper Bound (Worst case)                 | Describes the maximum time an algorithm can take   | Merge Sort: O(n log n)
Omega (Ω)  | Lower Bound (Best case)                  | Describes the minimum time the algorithm will take | Linear Search: Ω(1)
Theta (Θ)  | Tight Bound (Exact asymptotic behavior)  | Describes the exact growth rate of the algorithm   | Merge Sort: Θ(n log n)

11b. Describe the Knuth Morris Pratt (KMP) algorithm for string matching. How
does it improve upon the Naïve approach in terms of time complexity? (13 marks)

Knuth-Morris-Pratt (KMP) Algorithm for String Matching (2 Marks)

The Knuth-Morris-Pratt (KMP) algorithm is an efficient string matching algorithm
used to find occurrences of a "pattern" (substring) within a "text" (string). The primary
advantage of the KMP algorithm is its ability to avoid unnecessary re-checks of
characters in the text that have already been matched with the pattern, thus improving the
efficiency compared to the naive approach.

Key Concepts (6 Marks)

1. Prefix Function (LPS Array):


KMP uses a preprocessing step to construct an array called the Longest Prefix
Suffix (LPS) array. The LPS array helps track the longest proper prefix of the
pattern that is also a suffix, enabling the algorithm to skip re-evaluating
previously matched characters when a mismatch occurs.
2. Matching Process:
Once the LPS array is computed, the actual matching of the pattern in the text is
done in a way that minimizes redundant comparisons, leading to a faster search.

Steps of the KMP Algorithm (5 Marks)

1. Preprocessing Phase:
o Compute the LPS array for the pattern. This array stores the length of the
longest proper prefix of the pattern that is also a suffix for each prefix of the
pattern.
o Example:
For pattern ABAB, the LPS array would be [0, 0, 1, 2].
2. Pattern Matching Phase:
o Start matching the pattern with the text.
o If a mismatch occurs, use the LPS array to shift the pattern, skipping over
characters that have already been matched, instead of starting from the
beginning of the pattern.
o The shift is determined by the value in the LPS array, reducing unnecessary
re-checks of characters.

Improvement over the Naïve Approach:
The naive approach may re-examine text characters after every mismatch, giving a
worst-case time of O(n·m) for text length n and pattern length m. KMP never moves
backwards in the text, so preprocessing plus matching runs in O(m + n) total time.
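A compact Python sketch of the two phases (LPS preprocessing and matching); the identifiers are illustrative, not from the original paper:

```python
def compute_lps(pattern):
    """Build the Longest Prefix Suffix (LPS) array for the pattern."""
    lps = [0] * len(pattern)
    length = 0                        # length of previous longest prefix-suffix
    i = 1
    while i < len(pattern):
        if pattern[i] == pattern[length]:
            length += 1
            lps[i] = length
            i += 1
        elif length > 0:
            length = lps[length - 1]  # fall back without advancing i
        else:
            lps[i] = 0
            i += 1
    return lps

def kmp_search(text, pattern):
    """Return the start indices of all occurrences of pattern in text."""
    lps, hits = compute_lps(pattern), []
    j = 0                             # characters of pattern matched so far
    for i, ch in enumerate(text):
        while j > 0 and ch != pattern[j]:
            j = lps[j - 1]            # shift via LPS instead of restarting
        if ch == pattern[j]:
            j += 1
        if j == len(pattern):         # full match found
            hits.append(i - j + 1)
            j = lps[j - 1]
    return hits

print(compute_lps("ABAB"))            # [0, 0, 1, 2]
print(kmp_search("ABABABC", "ABAB"))  # [0, 2]
```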

12a. Explicate the Breadth First Search (BFS) algorithm for graph traversal. How
does BFS work in unweighted graphs and its applications. (13 marks)

Breadth First Search (BFS) Algorithm for Graph Traversal (2 Marks)

Breadth First Search (BFS) is an algorithm used for traversing or searching through a
graph. It explores the graph level by level, visiting all nodes at the present depth level
before moving on to nodes at the next depth level. BFS can be applied to both directed
and undirected graphs.

BFS Algorithm Steps: (5 Marks)

1. Initialization:
o Start at a source node and mark it as visited.
o Use a queue to manage the nodes to be explored. The queue ensures that
nodes are processed in the order they are discovered (FIFO: First In, First
Out).
2. Exploration:
o Dequeue a node from the front of the queue.
o Visit all unvisited neighbors of this node, mark them as visited, and enqueue
them.
o Repeat until the queue is empty.
3. Termination:
o BFS terminates when all reachable nodes have been visited.
Time and Space Complexity:

 Time Complexity: O(V + E), where V is the number of vertices and E is the
number of edges. This is because each vertex is processed once, and each edge is
examined once.
 Space Complexity: O(V) for storing the visited nodes and the queue.

Applications of BFS: (6 Marks)

1. Shortest Path in Unweighted Graphs: BFS is often used to find the shortest
path in unweighted graphs, as it explores nodes in order of their distance from the
source.
2. Finding Connected Components: BFS can be used to find all nodes in a
connected component of an undirected graph. Starting from a node, BFS will
explore all reachable nodes and mark them as part of the same component.
3. Cycle Detection: BFS can detect cycles in an undirected graph. If BFS
encounters a node that has already been visited and is not the immediate parent of
the current node, a cycle is present.
4. Level Order Traversal of Trees: BFS is used in tree traversal to explore nodes
level by level, often used in algorithms related to tree structures, such as printing
nodes by levels or finding the height of a tree.
5. Web Crawling: BFS is used in web crawlers to explore web pages. It starts from
an initial page and explores all the linked pages, level by level.
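The BFS steps above can be sketched in Python (adjacency-list representation assumed; the sample graph is illustrative):

```python
from collections import deque

def bfs(graph, source):
    """Traverse graph (adjacency-list dict) level by level from source;
    return the order in which vertices are visited."""
    visited = {source}
    order = []
    queue = deque([source])          # FIFO queue drives the traversal
    while queue:
        node = queue.popleft()
        order.append(node)
        for nb in graph[node]:       # enqueue unvisited neighbours
            if nb not in visited:
                visited.add(nb)
                queue.append(nb)
    return order

g = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
print(bfs(g, 'A'))                   # ['A', 'B', 'C', 'D']
```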

12b. Illustrate the Bellman Ford algorithm for finding the shortest path in a
weighted graph. Analyze its time complexity and provide an example where it can
handle negative weight edges. (13 Marks)

Bellman-Ford Algorithm for Finding the Shortest Path in a Weighted Graph (3 Marks)

The Bellman-Ford algorithm is used to find the shortest path from a single source
vertex to all other vertices in a weighted graph, and it works for graphs with negative
edge weights. It is more general than Dijkstra’s algorithm because it can handle graphs
with negative weight edges, though it cannot handle graphs with negative weight cycles.

Key Features:

 Works for graphs with negative weight edges.
 Can detect negative weight cycles.
 Typically applied to directed graphs; in an undirected graph, any negative-weight
edge already forms a negative cycle.
Algorithm Overview (5 Marks)

1. Initialization:
o Set the distance to the source node as 0: distance[source] = 0.
o Set the distance to all other nodes as infinity: distance[v] = ∞ for all v ≠
source.
2. Relaxation (Main Step):
o For each edge in the graph, attempt to relax it. Relaxing an edge means
updating the shortest distance to a vertex if a shorter path is found.
o Relaxing an edge (u, v) with weight w means updating the distance to v if
distance[u] + w < distance[v].

Repeat the relaxation step V-1 times, where V is the number of vertices. The idea
is that in the worst case, a shortest path can have at most V-1 edges, so repeating
the relaxation for V-1 times ensures all shortest paths are found.

3. Negative Weight Cycle Detection:

o After V-1 iterations, check if any distance can still be updated. If yes, the
graph contains a negative weight cycle.

Time Complexity: O(V·E), since each of the V-1 relaxation rounds scans all E edges.
Space complexity is O(V) for the distance table.

Step-by-Step Execution: (5 Marks)

1. Initialization:

distance[A] = 0
distance[B] = ∞
distance[C] = ∞
distance[D] = ∞
distance[E] = ∞
distance[F] = ∞

2. Relaxation (1st iteration):


o Relax edge A -> B: distance[B] = min(∞, 0 + 4) = 4
o Relax edge B -> C: distance[C] = min(∞, 4 + (-5)) = -1
o Relax edge A -> D: distance[D] = min(∞, 0 + 2) = 2
o Relax edge D -> E: distance[E] = min(∞, 2 + 1) = 3
o Relax edge E -> F: distance[F] = min(∞, 3 + 2) = 5
o Relax edge C -> F: distance[F] = min(5, -1 + 3) = 2

After 1st iteration:

distance[A] = 0
distance[B] = 4
distance[C] = -1
distance[D] = 2
distance[E] = 3
distance[F] = 2

3. Relaxation (2nd iteration) and subsequent iterations: The algorithm continues


relaxing edges for a total of V-1 iterations (5 times for this graph). The distances
stabilize, and no further updates are made.
4. Final Distances: After the algorithm completes all iterations:

distance[A] = 0
distance[B] = 4
distance[C] = -1
distance[D] = 2
distance[E] = 3
distance[F] = 2

5. Negative Weight Cycle Detection: No further changes occur after V-1 iterations,
so there is no negative weight cycle detected in this example.
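A minimal Python sketch of the algorithm, using the edge weights from the walkthrough above (function and variable names are illustrative):

```python
def bellman_ford(vertices, edges, source):
    """Single-source shortest paths; edges are (u, v, w) triples.
    Returns (distance dict, has_negative_cycle flag)."""
    INF = float('inf')
    dist = {v: INF for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):     # relax all edges V-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # one more pass: any further improvement means a negative cycle
    has_neg_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_neg_cycle

edges = [('A', 'B', 4), ('B', 'C', -5), ('A', 'D', 2),
         ('D', 'E', 1), ('E', 'F', 2), ('C', 'F', 3)]
dist, neg = bellman_ford('ABCDEF', edges, 'A')
print(dist)   # {'A': 0, 'B': 4, 'C': -1, 'D': 2, 'E': 3, 'F': 2}
print(neg)    # False
```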

13a. Explain how quick sort works with the following example array [8,2,5,1,7,6,3,4]
(13 marks)

Quick Sort Algorithm (2 Marks)

Quick Sort is a divide-and-conquer sorting algorithm that works by selecting a pivot


element from the array and partitioning the other elements into two sub-arrays: one with
elements smaller than the pivot, and the other with elements greater than the pivot. This
process is recursively repeated on the sub-arrays.

Steps for Quick Sort: (5 Marks)

1. Choose a Pivot: Choose an element from the array (typically, the first, last, middle,
or random element).
2. Partition the Array: Reorganize the array so that elements less than the pivot are
on the left, and elements greater than the pivot are on the right. The pivot will be in
its final sorted position after this step.
3. Recursively Apply: Apply the same process to the left and right sub-arrays (the
elements before and after the pivot) until the base case is reached (sub-arrays of
size 1 or 0).
Example: Quick Sort on the Array [8, 2, 5, 1, 7, 6, 3, 4]

Step-by-Step Execution (3 Marks)

1. Initial Array: [8, 2, 5, 1, 7, 6, 3, 4]
2. Choose a Pivot: Let's choose the last element as the pivot. So, the pivot is 4.
3. Partitioning:
o Start with two pointers: one marking the boundary of the smaller region (i
= -1) and one scanning the array (j = 0 to 7).
o Traverse the array from left to right (with j), and whenever an element
smaller than the pivot is found, increment i and swap that element with the
element at index i.
o When the scan finishes, swap the pivot into position i + 1; for this array
the pivot 4 lands at index 3, leaving [2, 1, 3, 4, 7, 6, 5, 8]. Quick sort is
then applied recursively to [2, 1, 3] and [7, 6, 5, 8].
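The partitioning scheme described above (last element as pivot) can be sketched in Python; this is an illustrative implementation, not code from the original paper:

```python
def quick_sort(arr, lo, hi):
    """In-place quick sort using the last element as pivot (Lomuto partition)."""
    if lo < hi:
        p = partition(arr, lo, hi)      # pivot lands in its final position p
        quick_sort(arr, lo, p - 1)      # sort elements left of the pivot
        quick_sort(arr, p + 1, hi)      # sort elements right of the pivot

def partition(arr, lo, hi):
    pivot = arr[hi]
    i = lo - 1                          # boundary of the "smaller than pivot" region
    for j in range(lo, hi):
        if arr[j] < pivot:              # grow the smaller region
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[hi] = arr[hi], arr[i + 1]   # place the pivot
    return i + 1

a = [8, 2, 5, 1, 7, 6, 3, 4]
quick_sort(a, 0, len(a) - 1)
print(a)   # [1, 2, 3, 4, 5, 6, 7, 8]
```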

Time Complexity: (3 Marks)

 Best and Average Case: O(n log n), where n is the number of elements. This
occurs when the pivot divides the array into two nearly equal parts at each step.
 Worst Case: O(n²), which happens when the pivot is always the smallest or
largest element (e.g., if the array is already sorted or nearly sorted). The
partitioning step then produces very unbalanced sub-arrays.
 Space Complexity: O(log n) due to the recursion stack in the best case
(balanced partitioning).

13b. Explore the Matrix Chain Multiplication problem in dynamic programming.


(13 Marks)

Matrix Chain Multiplication Problem (2 Marks)

The Matrix Chain Multiplication problem is a classic optimization problem in dynamic


programming. The goal is to determine the most efficient way to multiply a given
sequence of matrices, such that the total number of scalar multiplications is minimized.

Problem Statement: (5 Marks)

Given a sequence of matrices, A1, A2, ..., An, where matrix Ai has dimensions
p[i-1] x p[i], we are tasked with finding the optimal parenthesization of the matrix
chain multiplication that minimizes the total number of scalar multiplications.

Matrix Multiplication Rules:

 The matrix multiplication of two matrices A (with dimensions a x b) and B (with


dimensions b x c) results in a matrix C with dimensions a x c.
 The number of scalar multiplications required to multiply A and B is a * b * c.
Example:

Given matrices:

A1 = 10x20, A2 = 20x30, A3 = 30x40, A4 = 40x30

The goal is to find the optimal way to parenthesize the matrix chain multiplication to
minimize the scalar multiplications.

Key Observations: (2 Marks)

1. Naive Approach: In a naive approach, we would multiply matrices sequentially, but


this can lead to inefficient calculations.
2. Optimal Parenthesization: We need to choose the parenthesization order that
minimizes the scalar multiplications.

Dynamic Programming Solution: (4 Marks)

To solve the Matrix Chain Multiplication problem efficiently, we use dynamic


programming to store intermediate results and avoid redundant computations.

Steps in Dynamic Programming:

1. Define the State:


o Let m[i][j] represent the minimum number of scalar multiplications
required to multiply matrices Ai to Aj.
o s[i][j] stores the index k at which the optimal split occurs in the
subproblem of multiplying matrices Ai to Aj.
2. Recurrence Relation:
o For m[i][j], we need to consider all possible ways to split the product of
matrices Ai * Ai+1 * ... * Aj into two parts:
 Compute the cost of multiplying matrices Ai...Ak and Ak+1...Aj
for each possible k where i ≤ k < j.
 The formula for m[i][j] is:

m[i][j] = min{ m[i][k] + m[k+1][j] + p[i-1] * p[k] * p[j] } for all k with i ≤ k < j

 Here, p[i-1] * p[k] * p[j] represents the cost of multiplying
the two resulting matrices.
3. Base Case:
o When i = j, there is only one matrix, so m[i][i] = 0 (no multiplication
needed).
4. Fill the Table:
o Fill the table m using increasing chain lengths (subproblem sizes) from 2 to
n.
o For each subproblem, compute the cost and store it in the table.
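The tabulation described above can be sketched in Python, using the example dimensions A1=10x20, A2=20x30, A3=30x40, A4=40x30 (function name is illustrative):

```python
def matrix_chain(p):
    """p[i-1] x p[i] are the dimensions of matrix Ai.
    Returns the minimum number of scalar multiplications."""
    n = len(p) - 1                       # number of matrices in the chain
    INF = float('inf')
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][i] = 0 (base case)
    for length in range(2, n + 1):       # increasing chain lengths
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = INF
            for k in range(i, j):        # try every split point
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j] = cost
    return m[1][n]

# dimensions for A1=10x20, A2=20x30, A3=30x40, A4=40x30
print(matrix_chain([10, 20, 30, 40, 30]))   # 30000
```

For this chain the optimal parenthesization is ((A1·A2)·A3)·A4, costing 6000 + 12000 + 12000 = 30000 scalar multiplications.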

14a. State and explain N queens problem and graph coloring problem with suitable
example. (13 Marks)

N-Queens problem (2 Marks)

The N-Queens problem is a classical problem in computer science and combinatorics. It
asks how to place N queens on an N x N chessboard such that no two queens can attack
each other. In chess, a queen can attack any piece that is placed in the same row, column,
or diagonal as itself. The goal is to find all possible ways to place N queens on the board
such that no two queens threaten each other.

Problem Statement: (7 Marks)

 You are given an N x N chessboard.


 The task is to place N queens on the chessboard such that no two queens share the
same row, column, or diagonal.

Example:

For N = 4, the board is a 4x4 grid. The goal is to place 4 queens on this board in such a
way that no two queens can attack each other. Here's a solution:

. Q . .
. . . Q
Q . . .
. . Q .

In this arrangement:

 Each queen is placed in a different row and column.


 No two queens share a diagonal.

Solution Approach:

 Backtracking: The N-Queens problem is typically solved using a backtracking


approach. In this method, the algorithm places a queen in a valid position in one row
and then moves to the next row, trying to place a queen in a valid position. If the
algorithm encounters a conflict (i.e., two queens attack each other), it backtracks
and moves the queen to another position.

Steps (2 Marks)

1. Start placing queens in the first row.


2. For each row, try placing the queen in different columns one by one.
3. For each placement, check whether it is safe (i.e., no two queens threaten each
other).
4. If a valid placement is found, move to the next row and repeat the process.
5. If a solution is found for all rows, return the solution. If no solution is possible,
backtrack.
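The backtracking steps above can be sketched in Python (illustrative code; each solution is reported as one queen column index per row):

```python
def solve_n_queens(n):
    """Return all solutions; each solution lists the queen's column per row."""
    solutions = []

    def safe(cols, col):
        row = len(cols)                    # row we are trying to fill
        for r, c in enumerate(cols):
            if c == col or abs(c - col) == row - r:   # same column or diagonal
                return False
        return True

    def place(cols):
        if len(cols) == n:                 # a queen in every row: a solution
            solutions.append(cols[:])
            return
        for col in range(n):               # try each column in this row
            if safe(cols, col):
                cols.append(col)
                place(cols)                # move on to the next row
                cols.pop()                 # backtrack

    place([])
    return solutions

print(solve_n_queens(4))   # [[1, 3, 0, 2], [2, 0, 3, 1]]
```

The second solution, [2, 0, 3, 1], matches the 4x4 board drawn above.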

Time Complexity: (2 Marks)

 O(N!): The time complexity of this algorithm is factorial because there are N!
possible ways to place queens on the board.

Graph Coloring Problem:

The graph coloring problem asks for an assignment of colors to the vertices of a graph
such that no two adjacent vertices share a color, ideally using as few colors as possible
(the minimum is called the chromatic number). Example: a triangle (three mutually
adjacent vertices) needs 3 colors, while any even-length cycle needs only 2. Like
N-Queens, it is typically solved with backtracking, and its decision version is
NP-Complete.

14b. Describe Knapsack problem and travelling salesman problem with suitable
examples. (13 Marks)

Traveling Salesman Problem (TSP) (7 Marks)

The Traveling Salesman Problem (TSP) is a classic optimization problem where the
goal is to find the shortest possible route that visits each city exactly once and returns to
the origin city. It is a problem in graph theory, where cities are nodes, and the distances
between them are edges.

Problem Statement:

Given a set of n cities and the distances between every pair of cities, find the shortest
possible route that visits each city exactly once and returns to the starting city.

Example:

Suppose we have 4 cities labeled A, B, C, and D, and the distances between them are as
follows:

From/To A B C D

A 0 10 15 20

B 10 0 35 25

C 15 35 0 30

D 20 25 30 0

The task is to find the shortest possible route that visits all the cities exactly once and
returns to the starting city.

Knapsack Problem: (6 Marks)

 Goal: Maximize the total value of items placed in a knapsack without exceeding the
weight limit.
 Example: Given items with weights and values, select the most valuable
combination without exceeding the knapsack's capacity.
 Solution: Solved using Dynamic Programming with time complexity O(n * W) for
the 0/1 knapsack.

Traveling Salesman Problem (TSP):

 Goal: Find the shortest possible route that visits each city exactly once and returns
to the origin city.
 Example: Given a set of cities and distances between them, determine the shortest
path visiting each city once and returning to the starting point.
 Solution: Can be solved using Brute Force (O(n!)) or Dynamic Programming
(Held-Karp) with time complexity O(n^2 * 2^n).

Both problems are NP-hard, meaning there is no known polynomial-time algorithm to
solve them in general. These problems are fundamental in optimization and have wide
applications in logistics, resource management, and network design.
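A small Python sketch of the 0/1 knapsack DP mentioned above; the weights, values, and capacity are hypothetical example data:

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack via bottom-up DP; returns the maximum total value."""
    n = len(weights)
    # dp[w] = best value achievable with capacity w using items seen so far
    dp = [0] * (capacity + 1)
    for i in range(n):
        # iterate capacities in reverse so each item is used at most once
        for w in range(capacity, weights[i] - 1, -1):
            dp[w] = max(dp[w], dp[w - weights[i]] + values[i])
    return dp[capacity]

# hypothetical example: items of weight 1,3,4,5 and value 1,4,5,7, capacity 7
print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))   # 9
```

Here the optimum picks the items of weight 3 and 4 (values 4 + 5 = 9), exactly filling the capacity. The table has O(n * W) entries, matching the complexity stated above.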

15a. Illuminate the process of finding the kth smallest number in an unsorted array.
Discuss the different algorithms available, such as Quickselect, and compare their
time complexities. (13 Marks)
Kth smallest number: (2 Marks)

Finding the kth smallest number in an unsorted array is a common problem in computer
science. Instead of fully sorting the array to get the kth smallest element, more efficient
algorithms can be used to solve the problem faster, especially for large datasets. Below are
different algorithms that can be employed to find the kth smallest number and their
associated time complexities.

Problem Definition: (5 Marks)

Given an unsorted array arr[] of n elements, and an integer k, we need to find the kth
smallest element in the array. For example, if arr = [7, 10, 4, 3, 20, 15] and k =
4, the 4th smallest element is 10 after sorting the array.

Approaches to Solve the Problem:

1. Naive Approach (Sorting)

 Algorithm:
o Sort the array in non-decreasing order and return the element at the index
k-1 (0-based index).
o Sorting can be done using algorithms like QuickSort or MergeSort.
 Steps:
1. Sort the array.
2. Return the element at position k-1.
 Time Complexity:
o O(n log n) for sorting (using comparison-based sorting algorithms like
MergeSort or QuickSort).
o Space Complexity: O(1) if in-place sorting is used (like QuickSort), or O(n)
for algorithms that require extra space (like MergeSort).
 Pros:
o Simple to implement.
o The sorted array can be useful for other purposes as well.
 Cons:
o Sorting the entire array is inefficient if you only need the kth smallest
element.
o It is not optimal for large arrays.

2. Quickselect Algorithm

 Algorithm:
o Quickselect is a variation of QuickSort that selects a pivot and partitions the
array into two subarrays—one with elements smaller than the pivot and the
other with elements greater than the pivot. The process continues until the
kth smallest element is found.
 Steps:
1. Choose a pivot element from the array.
2. Partition the array around the pivot, so that elements less than the pivot are on the
left and greater are on the right.
3. If the pivot’s position is k-1, return the pivot as the kth smallest element.
4. Otherwise, repeat the process recursively on the left or right subarray based on the
position of k.

Time Complexity: (6 Marks)

o Average case: O(n). On average, Quickselect will partition the array into
two roughly equal halves and recurse into one of the subarrays, making it
linear in time.
o Worst case: O(n^2). If the pivot always happens to be the smallest or
largest element, the partitioning step will not effectively reduce the problem
size, leading to quadratic time complexity.

Space Complexity: O(1) (in-place operation with no additional space needed, apart from
recursive stack space).

Pros:

o Average case time complexity of O(n) is efficient for large datasets.


o In-place algorithm with low space overhead.

Cons:

o The worst-case time complexity is O(n^2), but this is rare if a good pivot is
chosen.
o The algorithm may need to be optimized with a better pivot selection
strategy (e.g., random pivot).
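A Python sketch of Quickselect with the random pivot strategy mentioned above (identifiers are illustrative):

```python
import random

def quickselect(arr, k):
    """Return the kth smallest element (k is 1-based) of an unsorted list."""
    arr = list(arr)                          # work on a copy
    lo, hi = 0, len(arr) - 1
    target = k - 1                           # 0-based position in sorted order
    while True:
        pivot_idx = random.randint(lo, hi)   # random pivot avoids the worst case
        arr[pivot_idx], arr[hi] = arr[hi], arr[pivot_idx]
        pivot, i = arr[hi], lo - 1
        for j in range(lo, hi):              # Lomuto partition around the pivot
            if arr[j] < pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[hi] = arr[hi], arr[i + 1]
        p = i + 1                            # pivot's final sorted position
        if p == target:
            return arr[p]
        elif p < target:
            lo = p + 1                       # kth smallest is in the right part
        else:
            hi = p - 1                       # kth smallest is in the left part

print(quickselect([7, 10, 4, 3, 20, 15], 4))   # 10
```

On the example from the problem definition, the 4th smallest of [7, 10, 4, 3, 20, 15] is 10, found without fully sorting the array.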

15b. Explain P, NP, NP-Complete and NP-Hard problems with suitable examples.
(13 Marks)

Complexity Classes: P, NP, NP-Complete, and NP-Hard (10 Marks)

In computational complexity theory, problems are classified based on their difficulty. The
P, NP, NP-Complete, and NP-Hard classes are central to understanding the complexity
of algorithmic problems. Below is an explanation of each class with examples.
1. P (Polynomial Time)

Definition:

 P is the class of decision problems (problems with a yes/no answer) that can be
solved in polynomial time by a deterministic Turing machine.
 A problem belongs to P if there exists an algorithm that solves the problem in time
proportional to a polynomial function of the size of the input.
 Examples: sorting an array, searching a sorted array, and finding shortest paths
with Dijkstra's algorithm.

2. NP (Nondeterministic Polynomial Time)

Definition:

 NP is the class of decision problems for which a proposed solution can be verified
in polynomial time by a deterministic Turing machine.
 In other words, even if we don’t know how to solve the problem efficiently, we can
verify a candidate solution in polynomial time.
 Examples: Subset Sum and Boolean satisfiability, where a proposed subset or
assignment can be checked quickly.

3. NP-Complete

Definition:

 NP-Complete is the class of decision problems that are both:


1. In NP (i.e., solutions can be verified in polynomial time).
2. NP-hard, meaning that every problem in NP can be reduced to it in
polynomial time (i.e., it is at least as hard as every other problem in NP).
 If an NP-Complete problem can be solved in polynomial time, then all problems in
NP can be solved in polynomial time, implying that P = NP (a major open
question in computer science).
 Examples: SAT, 3-SAT, Graph Coloring (decision version), and the decision
version of TSP.

4. NP-Hard

Definition:

 NP-Hard is the class of problems that are at least as hard as the hardest problems in
NP.
 A problem is NP-Hard if every problem in NP can be reduced to it in polynomial
time. However, an NP-Hard problem does not necessarily belong to NP, meaning its
solution might not be verifiable in polynomial time.
 Examples: the optimization version of TSP and the Halting Problem.

Characteristics: (3 Marks)

 NP-Hard problems are the hardest problems in the sense that every problem in NP
can be reduced to them in polynomial time. However, unlike NP-Complete
problems, NP-Hard problems do not need to have a solution that can be verified in
polynomial time.
PART-C

16a i. Find all possible routes using Hamiltonian circuit for the following graph
consider the starting vertex as ‘a’.

(Figure: a graph on vertices a, b, c, d, e, f, g; the edge structure is not recoverable
from the original figure.)

Hamiltonian Circuit: (3 Marks)

A Hamiltonian circuit is a route that visits every vertex exactly once and returns to the
starting vertex. Since the starting vertex is given as a, we need to explore all possible
routes that visit all other vertices and then return to a.

Steps to Find Hamiltonian Circuits: (10 Marks)

1. Start at vertex 'a': We will consider a as the starting point.
2. Visit each vertex exactly once: After starting at a, visit each other vertex once and
only once, and then return to a at the end.

Explanation:

 a → b → d → a: starting at a, go to b, then to d, and finally return to a.
 a → d → b → a: the same cycle traversed in the opposite direction.

Summary:

The Hamiltonian circuits found, considering a as the starting vertex, are:

1. a → b → d → a
2. a → d → b → a

Note that these routes cover only the vertices a, b, and d; a Hamiltonian circuit of the
full graph must also pass through c, e, f, and g exactly once. These are the circuits
obtainable under the assumed connectivity; if the graph's edge structure is different, the
Hamiltonian circuits would change accordingly.

16.b. The company wants to find the most efficient route for a delivery truck that
needs to visit several cities. The goal is to minimize the total distance travelled while
visiting each exactly once and returning to the origin city.

The problem you're describing is the Traveling Salesman Problem (TSP), which is a
classic optimization problem in computer science and operations research. The goal is to
find the most efficient route for a delivery truck to visit several cities, each exactly once,
and return to the starting city while minimizing the total distance traveled. Here's a
step-by-step breakdown of how to approach this problem:

Problem Definition: (7 Marks)

Given:

 A set of cities.
 The distance between each pair of cities.
 The truck must start at a designated city (often called the origin city), visit all other
cities exactly once, and return to the origin city.

The objective is to minimize the total distance traveled by the truck.

Steps to Approach the Problem:

1. Representation of the Problem

2. Constraints

3. Objective

Methods to Solve the TSP:

1. Brute Force (Exhaustive Search)

2. Dynamic Programming (Held-Karp Algorithm)

3. Greedy Algorithm (Approximation)

4. Heuristic and Metaheuristic Algorithms


5. Linear Programming (LP) and Integer Linear Programming (ILP)

Example Walkthrough (TSP with 4 cities): (6 Marks)

Let’s assume you have 4 cities: A, B, C, D, and the distances between the cities are
given as follows:

City Pair Distance

A→B 10

A→C 15

A→D 20

B→C 35

B→D 25

C→D 30

Using Brute Force (since it's a small example), we can list all the possible routes starting
at A and calculate the total distance for each route:

1. A → B → C → D → A: 10 + 35 + 30 + 20 = 95
2. A → B → D → C → A: 10 + 25 + 30 + 15 = 80
3. A → C → B → D → A: 15 + 35 + 25 + 20 = 95
4. A → C → D → B → A: 15 + 30 + 25 + 10 = 80
5. A → D → B → C → A: 20 + 25 + 35 + 15 = 95
6. A → D → C → B → A: 20 + 30 + 35 + 10 = 95

From these, the optimal solution is the route A → B → D → C → A (equivalently its
reverse, A → C → D → B → A) with a total distance of 80.
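The brute-force enumeration above can be sketched in Python using the same distance table (illustrative code, not from the original paper):

```python
from itertools import permutations

dist = {('A', 'B'): 10, ('A', 'C'): 15, ('A', 'D'): 20,
        ('B', 'C'): 35, ('B', 'D'): 25, ('C', 'D'): 30}

def d(u, v):
    """Symmetric lookup into the distance table."""
    return dist.get((u, v)) or dist[(v, u)]

def tsp_brute_force(cities, start):
    """Try every ordering of the remaining cities; O(n!) routes in total."""
    best_route, best_cost = None, float('inf')
    others = [c for c in cities if c != start]
    for perm in permutations(others):
        route = (start,) + perm + (start,)      # close the tour
        cost = sum(d(route[i], route[i + 1]) for i in range(len(route) - 1))
        if cost < best_cost:
            best_route, best_cost = route, cost
    return best_route, best_cost

route, cost = tsp_brute_force('ABCD', 'A')
print(route, cost)   # ('A', 'B', 'D', 'C', 'A') 80
```

This reproduces the optimal tour of total distance 80 found in the enumeration; for larger instances the Held-Karp dynamic program (O(n² · 2ⁿ)) or heuristics would replace this exhaustive search.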
