Question Paper code:642301
B.E/B.TECH DEGREE EXAMINATIONS, MAY 2025
Third Semester
CD3291-DATA STRUCTURES AND ALGORITHMS
(Common to B.E Computer Science and Engineering (Cyber Security)/
B.Tech Information Technology)
END SEMESTER EXAMINATION APR/MAY-25
PART-A-(10X2=20 Marks)
1.Define an algorithm and its properties.
Algorithm:
An algorithm is a finite set of well-defined instructions to solve a specific problem or perform a task.
Properties:
1. Finiteness – It must terminate after a limited number of steps.
2. Definiteness – Each step must be clear and unambiguous.
3. Input – Takes zero or more inputs.
4. Output – Produces at least one output.
5. Effectiveness – All steps must be simple and executable.
2.Define abstract data type. What does it focus on?
Abstract Data Type (ADT):
An abstract data type is a model for data structures that defines the data and the operations that can be
performed on it, without specifying how the operations are implemented.
Focus:
It focuses on the behaviour and functionality of the data type rather than the details of its
implementation.
3.Identify the data structures to represent stack.
Data Structures to Represent a Stack:
1. Array – A fixed-size structure where elements are added or removed from the top.
2. Linked List – A dynamic structure where nodes are linked, allowing flexible stack operations.
4.What is a deque? Mention two real-world applications of deque.
Deque (Double-Ended Queue):
A deque is a linear data structure that allows insertion and deletion of elements from both ends — front
and rear.
Real-World Applications:
1. Task scheduling in operating systems.
2. Palindrome checking in strings.
5.Define Collision in hashing.
Collision in Hashing:
A collision occurs in hashing when two different keys are hashed to the same index or location in the
hash table.
Example:
If key1 and key2 produce the same hash value, a collision happens.
6.Identify the key differences between internal and external sorting.
Internal Sorting:
Performed when the entire dataset fits into the main memory (RAM). All sorting operations are done in-
memory, making it faster and suitable for smaller datasets.
External Sorting:
Used when the dataset is too large to fit into main memory. It involves storing and accessing data from
external storage (like hard disks), and is suitable for handling large-scale data efficiently.
7.Construct an AVL tree by inserting the following elements 10,20,30.
Insert 10:
10
Insert 20:
10
\
20
Insert 30: Causes imbalance (Right-Right case) at 10, so perform left rotation.
Balanced AVL tree after rotation:
20
/ \
10 30
8.Define tree traversal and its types.
Tree Traversal:
Tree traversal is the process of visiting all the nodes in a tree data structure in a systematic way.
Types of Tree Traversal:
1. Inorder (Left, Root, Right)
2. Preorder (Root, Left, Right)
3. Postorder (Left, Right, Root)
9.List the three operations that can be performed on a graph.
Three operations that can be performed on a graph:
1. Insertion of vertices or edges
2. Deletion of vertices or edges
3. Traversal of the graph (e.g., DFS, BFS)
10.Identify the complexity of classes P and NP.
Complexity of Classes P and NP:
P (Polynomial time): Class of problems that can be solved efficiently by a deterministic
algorithm in polynomial time.
NP (Nondeterministic Polynomial time): Class of problems for which a given solution can be
verified in polynomial time by a deterministic algorithm.
PART B-(5*13=65)
11 a) Explain the features and characteristics of object-oriented programming with suitable
examples.
1. Features of Object-Oriented Programming (7 marks)
a) Encapsulation (2 marks)
Bundling data (attributes) and methods (functions) that operate on the data into a single unit
called a class.
Controls access to data by using access specifiers (private, public, protected).
Example:
class Car:
    def __init__(self, speed):
        self.__speed = speed   # Private attribute
    def get_speed(self):
        return self.__speed
b) Inheritance (2 marks)
Mechanism to create a new class (child/subclass) from an existing class (parent/superclass).
Allows code reuse and establishes a relationship between classes.
Example:
class Animal:
    def sound(self):
        print("Animal sound")

class Dog(Animal):
    def sound(self):
        print("Bark")
c) Polymorphism (2 marks)
Ability of different classes to be treated as instances of the same class through a common
interface.
The same method name behaves differently in different classes (method overriding).
Example:
class Bird:
    def sound(self):
        print("Chirp")

class Cat:
    def sound(self):
        print("Meow")

def make_sound(animal):
    animal.sound()

make_sound(Bird())
make_sound(Cat())
d) Abstraction (1 mark)
Hiding complex implementation details and showing only essential features to the user.
Achieved using abstract classes and interfaces.
Example:
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self):
        pass
2. Characteristics of Object-Oriented Programming (6 marks)
a) Objects and Classes (2 marks)
Objects are instances of classes. Classes define properties (attributes) and behaviors (methods).
Objects represent real-world entities with state and behavior.
Example:
class Student:
    def __init__(self, name):
        self.name = name

s1 = Student("Alice")
b) Message Passing (1 mark)
Objects communicate with each other by sending messages (calling methods).
This interaction is essential for collaboration between objects.
Example:
class Printer:
    def print_message(self, message):
        print(message)

p = Printer()
p.print_message("Hello")
c) Dynamic Binding (1 mark)
Method calls are resolved at runtime rather than at compile time, allowing flexibility and polymorphism.
The overriding method that gets executed depends on the actual type of the object, as in the sketch below.
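A minimal illustrative sketch of dynamic binding (the class and function names here are assumptions for illustration, not part of the prescribed answer):
class Figure:
    def area(self):
        return 0                      # default behaviour

class Square(Figure):
    def __init__(self, side):
        self.side = side
    def area(self):                   # overrides Figure.area
        return self.side * self.side

def report(fig):
    # area() is bound at runtime to the method of fig's actual class.
    print(fig.area())

report(Figure())    # prints 0
report(Square(3))   # prints 9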
d) Reusability and Modularity (2 marks)
Code can be reused through inheritance and composition.
Programs are divided into modules (classes), making it easier to manage and update.
(OR)
11 b) Explain the various asymptotic notations with examples.
1. Introduction to Asymptotic Notations (2 marks)
Asymptotic notations are mathematical tools used to describe the growth rate of an algorithm's running
time or space requirements as the input size approaches infinity. They help in analyzing the efficiency of
algorithms independent of machine-specific constants.
2. Types of Asymptotic Notations (11 marks)
a) Big O Notation (O) — Upper Bound
Definition: Represents the worst-case upper bound on the running time of an algorithm.
It provides the maximum time an algorithm can take for any input of size n.
Expresses the upper limit on growth rate.
Mathematical Definition:
f(n) = O(g(n)) if there exist positive constants c and n0 such that
0 ≤ f(n) ≤ c·g(n) for all n ≥ n0.
Example:
If f(n) = 3n + 2, then f(n) = O(n) because the linear term dominates for large n.
(For instance, 3n + 2 ≤ 4n for all n ≥ 2, so c = 4 and n0 = 2 satisfy the definition.)
b) Omega Notation (Ω) — Lower Bound
Definition: Represents the best-case lower bound on the running time of an algorithm.
It describes the minimum time an algorithm will take for any input of size n.
Mathematical Definition:
f(n) = Ω(g(n)) if there exist positive constants c and n0 such that
0 ≤ c·g(n) ≤ f(n) for all n ≥ n0.
Example:
If f(n) = 2n + 5, then f(n) = Ω(n).
c) Theta Notation (Θ) — Tight Bound
Definition: Represents the tight bound or exact asymptotic behavior of an algorithm.
When an algorithm's running time is bounded both above and below by the same function g(n), it
is Θ(g(n)).
Mathematical Definition:
f(n) = Θ(g(n)) if there exist positive constants c1, c2, and n0 such that
c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
Example:
For f(n) = 5n + 10, f(n) = Θ(n) because it grows linearly with n.
12 a) Select the appropriate data structure(array-based or linked list) for managing a stack of
books in a library system, considering the operations that need to be supported.
1. Introduction (2 marks)
A stack is a linear data structure that follows the Last In, First Out (LIFO) principle. In a library system,
a stack of books allows you to add (push) a new book on top and remove (pop) the topmost book.
To implement this stack, we can choose between two common data structures:
Array-based implementation
Linked list-based implementation
2. Stack Operations in Library Context (2 marks)
The main stack operations required:
Push: Add a new book to the top of the stack
Pop: Remove the topmost book from the stack
Peek/Top: View the book on top without removing it
Other considerations:
Memory usage
Flexibility of stack size
Frequency of operations
3. Comparison: Array vs Linked List (6 marks)
Feature                   | Array-Based Stack                   | Linked List-Based Stack
Memory allocation         | Fixed size (static allocation)      | Dynamic size (allocates as needed)
Space efficiency          | May waste memory if array is large  | Efficient memory usage
Insertion/deletion at top | O(1) – fast                         | O(1) – fast
Resizing complexity       | Expensive (array must be copied)    | Not required (grows dynamically)
Implementation simplicity | Easier to implement                 | Slightly more complex (pointers)
Overflow                  | Possible if size limit is reached   | No overflow unless memory is full
4. Recommendation and Justification (3 marks)
Recommended: Linked List-Based Stack
Justification:
In a library system, the number of books in the stack may not be known in advance.
A linked list allows dynamic memory allocation, so books can be added/removed flexibly
without worrying about exceeding a fixed size.
Stack operations (push/pop) are efficient in both approaches, but linked lists avoid overflow
issues.
5. Example (Optional - for better understanding)
# Linked List Node class
class BookNode:
    def __init__(self, title):
        self.title = title
        self.next = None

# Stack class using Linked List
class BookStack:
    def __init__(self):
        self.top = None

    def push(self, title):
        new_book = BookNode(title)
        new_book.next = self.top
        self.top = new_book

    def pop(self):
        if self.top is None:
            return "Stack is empty"
        removed_book = self.top.title
        self.top = self.top.next
        return removed_book
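A brief usage sketch of the BookStack class above (the book titles are illustrative):
shelf = BookStack()
shelf.push("Data Structures")
shelf.push("Algorithms")
print(shelf.pop())   # Algorithms (last in, first out)
print(shelf.pop())   # Data Structures
print(shelf.pop())   # Stack is empty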
Final Conclusion:
For managing a stack of books in a library system, a linked list-based stack is more appropriate due to
dynamic memory management, no size limitation, and efficient stack operations.
(OR)
12 b) Identify the stack operations with necessary steps for evaluating a postfix expression.
1. Introduction to Postfix Expression (2 Marks)
A postfix expression (also called Reverse Polish Notation) is a mathematical expression where
operators follow their operands.
Example: 5 6 + instead of 5 + 6.
It does not require parentheses, and is evaluated using a stack.
2. Stack Operations Used (2 Marks)
The stack is a LIFO (Last In First Out) data structure.
Operations used:
o Push(): Insert an operand onto the stack.
o Pop(): Remove the top operand from the stack.
o Apply operator to the operands popped from the stack.
3. Algorithm Steps for Evaluation (4 Marks)
1. Scan the postfix expression from left to right.
2. If the token is an operand (number), push it onto the stack.
3. If the token is an operator:
o Pop the top two elements from the stack.
o Apply the operator as operand1 operator operand2, where operand2 is the first value popped (the top of the stack) and operand1 is the second.
o Push the result back to the stack.
4. Repeat until the end of the expression.
5. The final result will be at the top of the stack.
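The steps above can be sketched in Python as follows (a minimal version for space-separated tokens and the operators +, -, *, /; the function name is illustrative):
def evaluate_postfix(expression):
    stack = []
    for token in expression.split():
        if token in "+-*/":
            op2 = stack.pop()          # first pop = second operand
            op1 = stack.pop()          # second pop = first operand
            if token == "+":
                stack.append(op1 + op2)
            elif token == "-":
                stack.append(op1 - op2)
            elif token == "*":
                stack.append(op1 * op2)
            else:
                stack.append(op1 / op2)
        else:
            stack.append(int(token))   # operand: push onto the stack
    return stack.pop()                 # final result is on top

print(evaluate_postfix("5 3 2 * +"))   # 11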
4. Example with Stack Steps (4 Marks)
Evaluate the postfix expression: 5 3 2 * +
Step | Token | Stack   | Action
1    | 5     | 5       | Push operand
2    | 3     | 5, 3    | Push operand
3    | 2     | 5, 3, 2 | Push operand
4    | *     | 5, 6    | Pop 2 and 3 → 3 * 2 = 6 → push 6
5    | +     | 11      | Pop 6 and 5 → 5 + 6 = 11 → push 11
Result = 11
5. Conclusion (1 Mark)
Postfix expression evaluation is simple and efficient using a stack.
It avoids ambiguity and is widely used in compilers and calculators.
13 a) Construct the merge sort algorithm to sort the following data:10,15,0,17,20,30,70,6
Merge Sort Algorithm Explanation (2 marks)
Definition:
Merge Sort is a divide and conquer algorithm that:
1. Divides the input array into two halves.
2. Recursively sorts each half.
3. Merges the two sorted halves into one sorted array.
Time Complexity: O(n log n)
Algorithm Steps (3 marks)
1. Divide the array into halves until each subarray has one element.
2. Conquer (sort): Recursively sort each subarray.
3. Combine (merge): Merge subarrays to produce sorted arrays.
Given Input:
[10, 15, 0, 17, 20, 30, 70, 6]
Step 1: [10, 15, 0, 17, 20, 30, 70, 6]
Split into: [10, 15, 0, 17] and [20, 30, 70, 6]
Step 2: [10, 15, 0, 17] → [10, 15] and [0, 17]
[20, 30, 70, 6] → [20, 30] and [70, 6]
Step 3: [10, 15] → [10] and [15]
[0, 17] → [0] and [17]
[20, 30] → [20] and [30]
[70, 6] → [70] and [6]
Step 4: [10] and [15] → [10, 15]
[0] and [17] → [0, 17]
[10, 15] and [0, 17] → [0, 10, 15, 17]
[20] and [30] → [20, 30]
[70] and [6] → [6, 70]
[20, 30] and [6, 70] → [6, 20, 30, 70]
Step 5: Merge [0, 10, 15, 17] and [6, 20, 30, 70]
Final answer: [0, 6, 10, 15, 17, 20, 30, 70]
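A compact Python sketch of the merge sort traced above (function names are illustrative):
def merge_sort(arr):
    if len(arr) <= 1:                      # base case: single element
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])           # sort left half
    right = merge_sort(arr[mid:])          # sort right half
    return merge(left, right)              # combine sorted halves

def merge(left, right):
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])                # append remaining elements
    result.extend(right[j:])
    return result

print(merge_sort([10, 15, 0, 17, 20, 30, 70, 6]))
# [0, 6, 10, 15, 17, 20, 30, 70]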
(OR)
13 b) Solve the problem of collision handling in hashing by applying the load factor technique with
examples.
1. Definition of Hashing and Collision (2 marks)
Hashing: A technique to map data (keys) to fixed-size values (hash codes), typically using a
hash function.
Collision: Occurs when two different keys produce the same hash index.
2. What is Load Factor? (2 marks)
Load Factor (α) is a measure that indicates how full the hash table is.
Load Factor (α) = n / m
Where:
o n = number of elements stored
o m = size of the hash table
A high load factor → more collisions
A low load factor → underutilization of memory
3. Collision Handling with Load Factor-Based Rehashing (2 marks)
When load factor exceeds a certain threshold (e.g., 0.7), rehashing is triggered:
Create a larger hash table (usually double the current size, preferably a prime number).
Re-insert all existing keys into the new table using the new hash function or updated modulus.
4. Techniques Used (3 marks)
1. Open Addressing (Linear Probing, Quadratic Probing)
o Search next empty slot when collision occurs.
o Rehash when load factor crosses threshold (e.g., 0.7)
2. Separate Chaining
o Each index stores a linked list of entries.
o Rehash when average chain length becomes too long (based on load factor).
5. Example (3 marks)
Given:
Hash table size m = 5
Elements to insert: 10, 20, 15, 7, 8
Step 1: Hash Function
h(k)=k mod 5
Key Hash Index
10 0
20 0 (collision with 10)
15 0 (collision)
7 2
8 3
Load factor after inserting 5 elements:
α = 5 / 5 = 1.0 ⇒ trigger rehash
Step 2: Rehash
Increase table size to 11 (next prime after 10)
New hash function: h(k) = k mod 11
Key New Index (mod 11)
10 10
20 9
15 4
7 7
8 8
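A minimal Python sketch of load-factor-triggered rehashing with separate chaining (the class, threshold value, and growth rule are illustrative assumptions, not a prescribed implementation):
class HashTable:
    def __init__(self, size=5, threshold=0.7):
        self.size = size
        self.threshold = threshold
        self.count = 0
        self.buckets = [[] for _ in range(size)]       # separate chaining

    def insert(self, key):
        self.buckets[key % self.size].append(key)
        self.count += 1
        if self.count / self.size > self.threshold:    # load factor check
            self._rehash()

    def _rehash(self):
        old_keys = [k for bucket in self.buckets for k in bucket]
        self.size = self.size * 2 + 1                  # grow the table (5 -> 11)
        self.buckets = [[] for _ in range(self.size)]
        for k in old_keys:                             # re-insert every key
            self.buckets[k % self.size].append(k)

table = HashTable()
for key in (10, 20, 15, 7, 8):
    table.insert(key)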
6. Conclusion (1 mark)
Using load factor-based rehashing, we:
Minimize collisions
Optimize space and performance
Ensure efficient searching, insertion, and deletion in hash tables
14 a) Construct a binary search tree by inserting the following data elements.
40,35,53,15,73,31,5,85,51,63,79.
Step-by-Step BST Insertion: (13 mark)
We'll insert the elements one by one while maintaining BST rules:
Left child < Parent node
Right child > Parent node
Insert 1st element:
40 → becomes the root node.
40
Insert 2nd element:
35 < 40 → insert to left of 40.
40
/
35
Insert 3rd element:
53 > 40 → insert to right of 40.
40
/ \
35 53
Insert 4th element:
15 < 40 → go left
15 < 35 → insert to left of 35.
40
/ \
35 53
/
15
Insert 5th element:
73 > 40 → go right
73 > 53 → insert to right of 53.
40
/ \
35 53
/ \
15 73
Insert 6th element:
31 < 40 → left
31 < 35 → left
31 > 15 → insert to right of 15.
40
/ \
35 53
/ \
15 73
\
31
Insert 7th element:
5 < 40 → left
5 < 35 → left
5 < 15 → insert to left of 15.
40
/ \
35 53
/ \
15 73
/ \
5 31
Insert 8th element:
85 > 40 → right
85 > 53 → right
85 > 73 → insert to right of 73.
40
/ \
35 53
/ \
15 73
/ \ \
5 31 85
Insert 9th element:
51 > 40 → right
51 < 53 → insert to left of 53.
40
/ \
35 53
/ / \
15 51 73
/ \ \
5 31 85
Insert 10th element:
63 > 40 → right
63 > 53 → right
63 < 73 → insert to left of 73.
40
/ \
35 53
/ / \
15 51 73
/ \ / \
5 31 63 85
Insert 11th element:
79 > 40 → right
79 > 53 → right
79 > 73 → right
79 < 85 → insert to left of 85.
40
/ \
35 53
/ / \
15 51 73
/ \ / \
5 31 63 85
/
79
Final Binary Search Tree Diagram
40
/ \
35 53
/ / \
15 51 73
/ \ / \
5 31 63 85
/
79
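A short Python sketch that builds the same BST by repeated insertion (the Node class and function names are illustrative):
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    if root is None:                 # empty position found: place the key here
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

root = None
for key in (40, 35, 53, 15, 73, 31, 5, 85, 51, 63, 79):
    root = insert(root, key)

def inorder(node):                   # prints keys in sorted order
    if node:
        inorder(node.left)
        print(node.key, end=" ")
        inorder(node.right)

inorder(root)   # 5 15 31 35 40 51 53 63 73 79 85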
(OR)
14 b)Develop an algorithm used to perform single and double rotation on AVL tree.
Definition: (2 marks)
An AVL Tree is a self-balancing binary search tree where the difference between heights of left and
right subtrees (balance factor) for any node is -1, 0, or 1.
If the balance factor goes outside this range after insertion or deletion, rotations are applied to restore
balance.
Types of Rotations in AVL Tree(2 marks)
1. Single Rotations
Used when the tree is unbalanced in one direction.
Types:
o Left Rotation (LL Case)
o Right Rotation (RR Case)
2. Double Rotations
Used when the unbalanced node's child leans in the opposite direction.
Types:
o Left-Right Rotation (LR Case)
o Right-Left Rotation (RL Case)
1. Single Right Rotation (LL Case) (3 Marks)
Condition:
A node is inserted in the left subtree of the left child (heavy left-left)
Algorithm:
RightRotate(y):
    x = y.left
    T2 = x.right
    x.right = y
    y.left = T2
    // Update heights
    y.height = max(height(y.left), height(y.right)) + 1
    x.height = max(height(x.left), height(x.right)) + 1
    return x   // new root after rotation
2. Single Left Rotation (RR Case) (2 marks)
Condition:
A node is inserted in the right subtree of the right child
Algorithm:
LeftRotate(x):
    y = x.right
    T2 = y.left
    y.left = x
    x.right = T2
    // Update heights
    x.height = max(height(x.left), height(x.right)) + 1
    y.height = max(height(y.left), height(y.right)) + 1
    return y   // new root after rotation
3. Left-Right Rotation (LR Case) (2 mark)
Condition:
A node is inserted in the right subtree of the left child
Algorithm:
LeftRightRotate(z):
    z.left = LeftRotate(z.left)
    return RightRotate(z)
4. Right-Left Rotation (RL Case) (2 mark)
Condition:
A node is inserted in the left subtree of the right child
Algorithm:
RightLeftRotate(z):
    z.right = RightRotate(z.right)
    return LeftRotate(z)
Example :
Insertions: 30 → 20 → 10
This triggers LL Case, perform Right Rotation
Before:
30
/
20
/
10
After Right Rotation on 30:
20
/ \
10 30
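A runnable Python version of the rotations above (the AVLNode class and height helper are illustrative assumptions; the double rotations compose the single ones exactly as in the pseudocode):
class AVLNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.height = 1

def height(node):
    return node.height if node else 0

def right_rotate(y):                  # LL case
    x = y.left
    t2 = x.right
    x.right = y                       # y becomes the right child of x
    y.left = t2
    y.height = max(height(y.left), height(y.right)) + 1
    x.height = max(height(x.left), height(x.right)) + 1
    return x                          # new subtree root

def left_rotate(x):                   # RR case
    y = x.right
    t2 = y.left
    y.left = x
    x.right = t2
    x.height = max(height(x.left), height(x.right)) + 1
    y.height = max(height(y.left), height(y.right)) + 1
    return y

def left_right_rotate(z):             # LR case
    z.left = left_rotate(z.left)
    return right_rotate(z)

def right_left_rotate(z):             # RL case
    z.right = right_rotate(z.right)
    return left_rotate(z)

# Rebalancing the example 30 -> 20 -> 10 (LL case):
z = AVLNode(30); z.left = AVLNode(20); z.left.left = AVLNode(10)
z.left.height = 2; z.height = 3
root = right_rotate(z)
print(root.key, root.left.key, root.right.key)   # 20 10 30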
15 a) Explain the various representations of graph with example in detail.
1. Introduction to Graphs (1 Mark)
A graph is a non-linear data structure consisting of vertices (nodes) and edges (links).
Can be directed/undirected, weighted/unweighted.
Used in modeling networks, maps, social media, etc.
2. Types of Graph Representations (1 Mark)
The three major representations are:
Adjacency Matrix
Adjacency List
Incidence Matrix
3. Adjacency Matrix (3 Marks)
Definition:
A 2D matrix of size V × V, where V = number of vertices.
matrix[i][j] = 1 if there's an edge from i to j (unweighted), or the edge weight (weighted).
Example:
Graph:
Vertices: A, B, C
Edges: A-B, B-C
  A B C
A 0 1 0
B 1 0 1
C 0 1 0
Pros:
Fast edge lookup: O(1)
Easy to implement
Cons:
Uses O(V²) space, not ideal for sparse graphs
4. Adjacency List (3 Marks)
Definition:
Each vertex stores a list of connected vertices.
Suitable for sparse graphs.
Example:
Same graph:
A: B
B: A, C
C: B
Pros:
Space-efficient: O(V + E)
Easy to iterate over neighbours
Cons:
Checking edge existence takes O(V) in worst case
5. Incidence Matrix (3 Marks)
Definition:
A matrix of size V × E, where E = number of edges.
Undirected: 1 if vertex is incident to the edge
Directed: -1 (source), +1 (destination)
Example:
Graph:
Vertices: A, B, C
Edges: e1: A-B, e2: B-C
  e1 e2
A 1  0
B 1  1
C 0  1
Pros:
Good for mathematical analysis
Useful in matrix operations
Cons:
Not commonly used in programming
Space complexity: O(V × E)
6. Comparison & Summary Table (2 Marks)
Representation   | Space Complexity | Edge Lookup | Use Case
Adjacency Matrix | O(V²)            | O(1)        | Dense graphs
Adjacency List   | O(V + E)         | O(V)        | Sparse graphs
Incidence Matrix | O(V × E)         | O(E)        | Theoretical analysis
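A small Python sketch that builds both the adjacency matrix and the adjacency list for the example graph with edges A-B and B-C (variable names are illustrative):
vertices = ["A", "B", "C"]
edges = [("A", "B"), ("B", "C")]
index = {v: i for i, v in enumerate(vertices)}

# Adjacency matrix: O(V^2) space, O(1) edge lookup
matrix = [[0] * len(vertices) for _ in vertices]
for u, v in edges:                        # undirected: mark both directions
    matrix[index[u]][index[v]] = 1
    matrix[index[v]][index[u]] = 1

# Adjacency list: O(V + E) space
adj_list = {v: [] for v in vertices}
for u, v in edges:
    adj_list[u].append(v)
    adj_list[v].append(u)

print(matrix)     # [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(adj_list)   # {'A': ['B'], 'B': ['A', 'C'], 'C': ['B']}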
(OR)
15 b) Explain Dijkstra’s algorithm for finding the shortest path in a graph, and analyze its time
complexity.
1. Introduction (1 Mark)
Dijkstra’s algorithm is a greedy algorithm used to find the shortest path from a source vertex to
all other vertices in a weighted graph.
Works only for non-negative edge weights.
2. Assumptions & Requirements (1 Mark)
Graph is connected and weighted
Edge weights must be non-negative
Data structures: Priority queue / Min-heap (for optimal time)
3. Algorithm Steps (4 Marks)
Given a graph G = (V, E) and a source node s:
Initialize:
Set dist[v] = ∞ for all vertices v ≠ s
Set dist[s] = 0 (distance to itself is zero)
Use a priority queue to store vertices by shortest discovered distance
Steps:
1. Insert the source vertex s into the priority queue.
2. Repeat until the queue is empty:
o Extract the node u with the minimum distance from the queue.
o For each neighbor v of u:
If dist[u] + weight(u, v) < dist[v], update:
dist[v] = dist[u] + weight(u, v)
Add v to the queue with updated distance.
3. After processing all vertices, dist[] contains the shortest distances from s to every other node.
4. Example (4 Marks)
Graph:
Vertices: A, B, C, D, E
Edges:
A → B (4), A → C (1)
C → B (2), B → E (4), C → D (4), D → E (1)
Step-by-Step:
Vertex Distance from A Previous
A 0 -
C 1 (A→C) A
B 3 (A→C→B) C
D 5 (A→C→D) C
E 6 (A→C→D→E) D
Shortest paths:
A → C: 1
A → C → B: 3
A → C → D: 5
A → C → D → E: 6
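A compact Python sketch of Dijkstra's algorithm on the example above, using heapq as the priority queue (the graph dictionary and function name are illustrative):
import heapq

def dijkstra(graph, source):
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    pq = [(0, source)]                     # (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)           # vertex with the smallest distance
        if d > dist[u]:
            continue                       # stale queue entry, skip it
        for v, w in graph[u]:
            if d + w < dist[v]:            # relaxation step
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {
    "A": [("B", 4), ("C", 1)],
    "B": [("E", 4)],
    "C": [("B", 2), ("D", 4)],
    "D": [("E", 1)],
    "E": [],
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 3, 'C': 1, 'D': 5, 'E': 6}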
5. Time Complexity Analysis (2 Marks)
Let:
V = number of vertices
E = number of edges
Using Min-Heap + Adjacency List:
Insertion/Extraction from heap: O(log V)
Each vertex and edge processed once
Total time complexity:
O((V+E) log V)
Using Adjacency Matrix:
O(V²) — slower, used when graph is dense
6. Applications / Limitations (1 Mark)
Applications:
GPS routing
Network routing protocols
AI pathfinding (games, robots)
Limitations:
Fails with negative weight edges
Slower than other algorithms (like Bellman-Ford) for dense graphs
PART C- (1*15=15 Marks)
16 a) Construct an expression tree for the expression a+(b*c)+d*(e+f). Give the outputs when you
apply preorder, inorder, and postorder traversals.
1. Expression Parsing (2 Marks)
Given infix expression:
a + (b * c) + d * (e + f)
➤ Operator Precedence:
* and / have higher precedence than + and -
Parentheses override precedence
➤ Fully parenthesized:
((a + (b * c)) + (d * (e + f)))
2. Building the Expression Tree (4 Marks)
We'll construct the binary expression tree as per the above structure.
Final Expression Tree:
              +
            /   \
          +       *
         / \     / \
        a   *   d   +
           / \     / \
          b   c   e   f
3. Preorder Traversal (Prefix) (2 Marks)
Preorder: Root → Left → Right
++a*bc*d+ef
Answer: ++a*bc*d+ef
4. Inorder Traversal (Infix) (2 Marks)
Inorder: Left → Root → Right
(For readability, add parentheses)
((a + (b * c)) + (d * (e + f)))
Answer: ((a+(b*c))+(d*(e+f)))
5. Postorder Traversal (Postfix) (2 Marks)
Postorder: Left → Right → Root
abc*+def+*+
Answer: abc*+def+*+
6. Final (1 Mark)
Traversal Type Expression
Preorder ++a*bc*d+ef
Inorder ((a+(b*c))+(d*(e+f)))
Postorder abc*+def+*+
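A brief Python sketch that builds the expression tree above and prints the three traversals (the Node class is an illustrative assumption):
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

# ((a + (b * c)) + (d * (e + f)))
tree = Node("+",
            Node("+", Node("a"), Node("*", Node("b"), Node("c"))),
            Node("*", Node("d"), Node("+", Node("e"), Node("f"))))

def preorder(n):
    return "" if n is None else n.value + preorder(n.left) + preorder(n.right)

def inorder(n):
    return "" if n is None else inorder(n.left) + n.value + inorder(n.right)

def postorder(n):
    return "" if n is None else postorder(n.left) + postorder(n.right) + n.value

print(preorder(tree))    # ++a*bc*d+ef
print(inorder(tree))     # a+b*c+d*e+f (infix without parentheses)
print(postorder(tree))   # abc*+def+*+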
(OR)
16 b) Construct the minimum spanning tree (MST) for the given graph using prim’s algorithm.
Find the cost of minimum spanning tree.
1. Prim's Algorithm (3 Marks)
Prim's algorithm is a greedy algorithm that finds an MST for a weighted undirected graph. It starts with
an arbitrary vertex and iteratively adds the cheapest edge that connects a vertex in the MST to a vertex
not yet in the MST, until all vertices are included.
Key Concepts:
Spanning Tree: A subgraph that is a tree and connects all the vertices in the graph.
Minimum Spanning Tree (MST): A spanning tree with the lowest possible total edge weight.
Greedy Approach: At each step, it makes a locally optimal choice in the hope of finding a
global optimum.
2. Applying Prim's Algorithm Step-by-Step (8 Marks)
Let's apply Prim's algorithm to the given graph. We can start from any vertex. Let's choose vertex 'a' as
our starting point.
Graph Edges and Weights:
(a, b): 4
(a, h): 8
(b, c): 8
(b, h): 11
(c, d): 7
(c, i): 2
(c, f): 4
(d, e): 9
(d, f): 14
(e, f): 10
(i, h): 7
(i, g): 6
(g, h): 1
(g, f): 2
Initialization:
MST Set (S) = { }
Edges considered = { }
Current Vertex = 'a'
Step-by-step construction:
Step | MST vertices (S)         | Candidate edges crossing from S to V-S                        | Chosen edge | Weight | Cumulative cost
1    | {a}                      | (a,b): 4, (a,h): 8                                            | (a,b)       | 4      | 4
2    | {a, b}                   | (a,h): 8, (b,c): 8, (b,h): 11                                 | (a,h)       | 8      | 4 + 8 = 12
3    | {a, b, h}                | (b,c): 8, (i,h): 7, (g,h): 1                                  | (g,h)       | 1      | 12 + 1 = 13
4    | {a, b, h, g}             | (b,c): 8, (i,h): 7, (i,g): 6, (g,f): 2                        | (g,f)       | 2      | 13 + 2 = 15
5    | {a, b, h, g, f}          | (b,c): 8, (i,h): 7, (i,g): 6, (c,f): 4, (d,f): 14, (e,f): 10  | (c,f)       | 4      | 15 + 4 = 19
6    | {a, b, h, g, f, c}       | (i,h): 7, (i,g): 6, (c,i): 2, (c,d): 7, (d,f): 14, (e,f): 10  | (c,i)       | 2      | 19 + 2 = 21
7    | {a, b, h, g, f, c, i}    | (c,d): 7, (d,f): 14, (e,f): 10                                | (c,d)       | 7      | 21 + 7 = 28
8    | {a, b, h, g, f, c, i, d} | (e,f): 10, (d,e): 9                                           | (d,e)       | 9      | 28 + 9 = 37
All vertices are now included.
The edges in the Minimum Spanning Tree are:
(a,b), (a,h), (g,h), (g,f), (c,f), (c,i), (c,d), (d,e)
3. Cost of the Minimum Spanning Tree (2 Marks)
To find the cost, sum the weights of all the edges included in the MST:
Cost = 4 (a,b) + 8 (a,h) + 1 (g,h) + 2 (g,f) + 4 (c,f) + 2 (c,i) + 7 (c,d) + 9 (d,e)
Cost = 4 + 8 + 1 + 2 + 4 + 2 + 7 + 9
Cost = 37
The Minimum Spanning Tree constructed using Prim's algorithm consists of the following edges:
(a,b), (a,h), (g,h), (g,f), (c,f), (c,i), (c,d), and (d,e).
The total cost of this Minimum Spanning Tree is 37.
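A short Python sketch of Prim's algorithm on the same edge list, using heapq to pick the cheapest crossing edge (function and variable names are illustrative):
import heapq

def prim(edges, start):
    # Build an undirected adjacency list from the weighted edge list
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((w, u, v))
        adj.setdefault(v, []).append((w, v, u))
    visited = {start}
    mst, total = [], 0
    heap = list(adj[start])
    heapq.heapify(heap)
    while heap and len(visited) < len(adj):
        w, u, v = heapq.heappop(heap)      # cheapest edge leaving the tree
        if v in visited:
            continue
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for edge in adj[v]:                # new crossing edges from v
            if edge[2] not in visited:
                heapq.heappush(heap, edge)
    return mst, total

edges = [("a","b",4), ("a","h",8), ("b","c",8), ("b","h",11),
         ("c","d",7), ("c","i",2), ("c","f",4), ("d","e",9),
         ("d","f",14), ("e","f",10), ("i","h",7), ("i","g",6),
         ("g","h",1), ("g","f",2)]
mst, cost = prim(edges, "a")
print(cost)   # 37
print(mst)    # the eight MST edges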
Faculty in charge Dean/SoC