Data Structure

The document provides an overview of fundamental data structures and algorithms, including definitions and examples of data types, arrays, linked lists, stacks, queues, searching, hashing, and sorting algorithms. It covers concepts such as time and space complexity, asymptotic notations, and the efficiency of algorithms. Additionally, it discusses the differences between iteration and recursion, as well as various implementations of data structures in C.


✅ 1. Introduction

Basic Terminology

 Data: Raw facts and figures.

 Information: Processed data.

 Data Structure: A way of organizing and storing data so it can be accessed and modified efficiently.

 Algorithm: A step-by-step procedure to solve a problem.

Elementary Data Organization

 Data can be organized into:

o Primitive Types: int, float, char, etc.

o Non-Primitive Types: Arrays, Lists, Trees, Graphs

o Linear Structures: Arrays, Linked Lists, Stacks, Queues

o Non-Linear Structures: Trees, Graphs

✅ 2. Built-in Data Types in C

Data Type   Size (Typical)   Format Specifier
int         4 bytes          %d
float       4 bytes          %f
char        1 byte           %c
double      8 bytes          %lf
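The sizes above are typical, not guaranteed by the language; a quick way to check them on a given compiler:

#include <stdio.h>

int main(void) {
    /* Sizes are implementation-defined; these prints show the actual values on this platform. */
    printf("int: %zu bytes\n", sizeof(int));
    printf("float: %zu bytes\n", sizeof(float));
    printf("char: %zu byte\n", sizeof(char));
    printf("double: %zu bytes\n", sizeof(double));
    return 0;
}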

✅ 3. Algorithm and Efficiency

Efficiency of an Algorithm

 Measured in terms of time and space required.

 Helps compare and choose optimal solutions.

Time Complexity
 Amount of time an algorithm takes relative to input size.

Space Complexity

 Amount of memory used by an algorithm.

✅ 4. Asymptotic Notations

Used to describe the growth behavior of an algorithm:

Notation        Meaning                      Represents
Big-O (O)       Worst-case                   Upper bound
Big-Ω (Omega)   Best-case                    Lower bound
Big-Θ (Theta)   Average-case (tight bound)   Exact bound

✅ 5. Time-Space Trade-off

 More time → less memory OR more memory → less time.

 Example: Using extra space (hash table) to reduce search time.

✅ 6. Abstract Data Types (ADT)

 ADT: A model of data structure that defines the data and operations
but not the implementation.

 Examples: Stack, Queue, List, Tree.

📘 Arrays

✅ 1. Definition

 A linear collection of homogeneous data elements.


 Fixed size, stored in contiguous memory locations.

✅ 2. Types of Arrays

 1D Array: Single row of elements.

#include <iostream>

int main() {
    int ArrayName[5];
    for (int i = 0; i < 5; i++) {    // read all 5 elements (indices 0 to 4)
        std::cin >> ArrayName[i];
    }
    for (int i = 0; i < 5; i++) {    // print them back
        std::cout << ArrayName[i] << " ";
    }
    return 0;
}

 2D Array: Matrix (rows and columns).

 3D and nD Arrays: Multi-level arrays.

✅ 3. Representation in Memory

Row-Major Order

 Elements of rows are stored sequentially.

Column-Major Order

 Elements of columns are stored sequentially.

✅ 4. Index Formulae

Let:

 A = base address

 W = size of one element (in bytes)

 L = lower bound of the index (the smallest legal index, often 0)

1-D Array

LOC(A[i]) = A + W × (i - L)

2-D Array

For row-major:

LOC(A[i][j]) = A + W × [N × (i - Lr) + (j - Lc)]

For column-major:

LOC(A[i][j]) = A + W × [M × (j - Lc) + (i - Lr)]

 M = number of rows

 N = number of columns

 Lr, Lc = Lower bounds

(Similar extensions exist for 3D and nD arrays.)
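As a quick illustration, a minimal sketch that computes the row-major byte offset, assuming 0-based indices (Lr = Lc = 0); rowMajorOffset is an illustrative name, not from the notes:

#include <stdio.h>

/* Byte offset of A[i][j] in a row-major M x N array with 0-based indices. */
size_t rowMajorOffset(size_t i, size_t j, size_t N, size_t W) {
    return W * (N * i + j);
}

int main(void) {
    /* For a 3 x 4 int array, element [2][1] sits 9 elements past the base. */
    printf("%zu\n", rowMajorOffset(2, 1, 4, sizeof(int)));   /* prints 36 with 4-byte int */
    return 0;
}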

✅ 5. Applications of Arrays

 Matrix operations

 Searching & sorting

 Storing polynomials

 Hashing

 Lookup tables

✅ 6. Sparse Matrices

Definition: Matrices with most elements as zero.

Representation Methods:

 Triplet (3-tuple) form

 Array of structures

 Linked list

Example of Triplet:

Row   Col   Value
0     1     5
1     2     8
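A minimal sketch of the triplet (3-tuple) form in C, storing only the non-zero entries from the table above as an array of structures (struct Triplet is an illustrative name):

struct Triplet {
    int row;
    int col;
    int value;
};

/* The two non-zero entries from the table above. */
struct Triplet sparse[] = {
    {0, 1, 5},
    {1, 2, 8}
};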

🔗 Linked Lists

✅ 1. Types of Linked Lists

Type                         Description
Singly Linked List (SLL)     Each node points to the next node
Doubly Linked List (DLL)     Each node has prev & next pointers
Circular Linked List (CLL)   The last node points back to the first

✅ 2. Array vs Pointer Implementation

Array Implementation

 Uses static memory (fixed size).

 Example:

struct Node {

int data;

int next; // index of next node

};

Pointer Implementation

 Dynamic memory allocation using pointers.

struct Node {

int data;

struct Node* next;

};
✅ 3. Operations on Linked Lists

Operation   Description
Insertion   At the beginning, end, or at a given position
Deletion    Remove a node by position or value
Traversal   Visit each node and process its data

✅ 4. Polynomial Representation Using Linked List

Single-variable Polynomial

3x^2 + 2x + 5

Represent each term as:

struct PolyNode {

int coeff;

int power;

struct PolyNode* next;

};

Operations

 Addition: Add coefficients of like powers (see the sketch below).

 Subtraction: Subtract coefficients of like powers.

 Multiplication: Multiply each term of one polynomial with every term of the other.
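A minimal sketch of the addition operation on the PolyNode list above, assuming terms are stored in decreasing order of power (addPoly and the dummy-head technique are illustrative, not from the notes):

#include <stdlib.h>

struct PolyNode* addPoly(struct PolyNode* a, struct PolyNode* b) {
    struct PolyNode dummy = {0, 0, NULL};    /* dummy head simplifies appending */
    struct PolyNode* tail = &dummy;
    while (a != NULL && b != NULL) {
        struct PolyNode* node = malloc(sizeof(struct PolyNode));
        if (a->power == b->power) {          /* like powers: add the coefficients */
            node->coeff = a->coeff + b->coeff;
            node->power = a->power;
            a = a->next;
            b = b->next;
        } else if (a->power > b->power) {    /* copy the higher-power term */
            node->coeff = a->coeff;
            node->power = a->power;
            a = a->next;
        } else {
            node->coeff = b->coeff;
            node->power = b->power;
            b = b->next;
        }
        node->next = NULL;
        tail->next = node;
        tail = node;
    }
    tail->next = (a != NULL) ? a : b;        /* append the leftover terms (shared, not copied) */
    return dummy.next;
}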

Two-variable Polynomial

 Each node contains two powers (e.g., x^2y^3).

struct PolyNode {
int coeff;

int powerX;

int powerY;

struct PolyNode* next;

};
Unit 2

🧱 1. STACKS

✅ Abstract Data Type (ADT) – Stack

 A linear data structure that follows the LIFO (Last In First Out)
principle.

 Operations are done at one end: the top of the stack.

✅ Primitive Stack Operations

Operation         Description
push()            Add an element to the top
pop()             Remove the top element
peek() or top()   View the top element
isEmpty()         Check if the stack is empty
isFull()          Check if the stack is full (array implementation)

✅ Array Implementation of Stack in C

#include <stdio.h>

#define SIZE 100

int stack[SIZE], top = -1;

void push(int val) {
    if (top == SIZE - 1)
        printf("Stack Overflow\n");
    else
        stack[++top] = val;
}

int pop() {
    if (top == -1) {
        printf("Stack Underflow\n");
        return -1;                 /* sentinel value on underflow */
    }
    return stack[top--];
}

✅ Linked List Implementation of Stack

#include <stdlib.h>

struct Node {
    int data;
    struct Node* next;
};

struct Node* top = NULL;

void push(int val) {
    struct Node* newNode = malloc(sizeof(struct Node));
    newNode->data = val;
    newNode->next = top;
    top = newNode;
}

int pop() {
    if (top == NULL)
        return -1;                 /* underflow sentinel */
    int val = top->data;
    struct Node* temp = top;
    top = top->next;
    free(temp);
    return val;
}

✅ Applications of Stack

 Reversing data

 Expression evaluation

 Function call management (recursion)

 Balancing symbols

✅ Prefix and Postfix Expressions

 Infix: A + B

 Prefix: +AB

 Postfix: AB+

✅ Evaluation of Postfix Expression

Algorithm:

1. Read from left to right.

2. Push operands onto stack.

3. When operator is found, pop two operands, apply the operator, push
result.

Example:
Postfix: 23*54*+9-
Result: ((2*3) + (5*4)) - 9 = 6 + 20 - 9 = 17
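A minimal sketch of this algorithm for single-digit operands, using a small array stack (evalPostfix is an illustrative name):

#include <stdio.h>
#include <ctype.h>

/* Evaluate a postfix expression with single-digit operands, e.g. "23*54*+9-". */
int evalPostfix(const char *expr) {
    int st[100], sp = -1;                     /* operand stack */
    for (int i = 0; expr[i] != '\0'; i++) {
        char c = expr[i];
        if (isdigit((unsigned char)c)) {
            st[++sp] = c - '0';               /* push operand */
        } else {
            int b = st[sp--], a = st[sp--];   /* pop two operands */
            switch (c) {
                case '+': st[++sp] = a + b; break;
                case '-': st[++sp] = a - b; break;
                case '*': st[++sp] = a * b; break;
                case '/': st[++sp] = a / b; break;
            }
        }
    }
    return st[sp];                            /* final result stays on top */
}

int main(void) {
    printf("%d\n", evalPostfix("23*54*+9-"));  /* prints 17 */
    return 0;
}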

🔁 2. Iteration and Recursion


✅ Principles of Recursion

 A function calls itself with a base case and a recursive case.

void recursiveFunc() {
    if (base_condition) return;   /* base case: stop recursing */
    recursiveFunc();              /* recursive case */
}

✅ Tail Recursion

 Recursive call is the last operation in the function.

 Can be optimized to iteration.
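As an illustration (not from the notes), a tail-recursive factorial in which an accumulator carries the partial result, so the recursive call is the last thing the function does:

/* Tail-recursive factorial: the recursive call is the final operation,
   so a compiler can turn it into a loop. */
int factTail(int n, int acc) {
    if (n <= 1) return acc;
    return factTail(n - 1, acc * n);   /* nothing left to do after this call */
}

/* factTail(5, 1) == 120 */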

✅ Examples

✅ 1. Binary Search (Recursion)

int binarySearch(int arr[], int low, int high, int key) {
    if (low > high) return -1;
    int mid = (low + high) / 2;
    if (arr[mid] == key) return mid;
    else if (arr[mid] > key)
        return binarySearch(arr, low, mid - 1, key);
    else
        return binarySearch(arr, mid + 1, high, key);
}

✅ 2. Fibonacci Numbers

 Recursive:

int fib(int n) {

if (n <= 1) return n;

return fib(n - 1) + fib(n - 2);


}

 Iterative:

int fibIter(int n) {
    if (n <= 1) return n;          /* fib(0) = 0, fib(1) = 1 */
    int a = 0, b = 1, c;
    for (int i = 2; i <= n; i++) {
        c = a + b;
        a = b;
        b = c;
    }
    return b;
}

✅ 3. Tower of Hanoi

#include <stdio.h>

void hanoi(int n, char from, char to, char aux) {
    if (n == 1) {
        printf("Move disk 1 from %c to %c\n", from, to);
        return;
    }
    hanoi(n - 1, from, aux, to);
    printf("Move disk %d from %c to %c\n", n, from, to);
    hanoi(n - 1, aux, to, from);
}

✅ Iteration vs Recursion – Tradeoffs

Feature       Iteration                        Recursion
Speed         Faster                           Slower (function-call overhead)
Memory        Uses loops only                  Uses the call stack
Readability   Less elegant for some problems   Cleaner for complex problems
Use Cases     Simple, repetitive tasks         Divide-and-conquer problems

📦 3. QUEUES

✅ Queue Basics

 FIFO (First In First Out)

 Elements are inserted at rear and deleted from front

✅ Operations on Queue

Operation   Description
create()    Initialize the queue
enqueue()   Add an element at the rear
dequeue()   Remove an element from the front
isFull()    Check if the queue is full
isEmpty()   Check if the queue is empty

✅ Array Implementation of Queue

#include <stdio.h>

#define SIZE 100

int queue[SIZE], front = -1, rear = -1;

void enqueue(int val) {
    if (rear == SIZE - 1)
        printf("Queue Overflow\n");
    else {
        if (front == -1) front = 0;
        queue[++rear] = val;
    }
}

int dequeue() {
    if (front == -1 || front > rear) {
        printf("Queue Underflow\n");
        return -1;                 /* sentinel value on underflow */
    }
    return queue[front++];
}

✅ Circular Queue

 Avoids unused space in array queue by wrapping around.

#include <stdio.h>

#define SIZE 5

int queue[SIZE], front = -1, rear = -1;

void enqueue(int val) {
    if ((rear + 1) % SIZE == front)
        printf("Queue Full\n");
    else {
        if (front == -1) front = 0;
        rear = (rear + 1) % SIZE;
        queue[rear] = val;
    }
}
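The matching dequeue is not shown in the notes; a minimal sketch that wraps front the same way and resets the queue when the last element leaves:

int dequeue() {
    if (front == -1) {
        printf("Queue Empty\n");
        return -1;                       /* sentinel on underflow */
    }
    int val = queue[front];
    if (front == rear)                   /* last element removed: reset the queue */
        front = rear = -1;
    else
        front = (front + 1) % SIZE;      /* wrap around */
    return val;
}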

✅ Linked List Implementation of Queue

#include <stdlib.h>

struct Node {
    int data;
    struct Node* next;
};

struct Node *front = NULL, *rear = NULL;

void enqueue(int val) {
    struct Node* newNode = malloc(sizeof(struct Node));
    newNode->data = val;
    newNode->next = NULL;
    if (rear == NULL)
        front = rear = newNode;
    else {
        rear->next = newNode;
        rear = newNode;
    }
}

int dequeue() {
    if (front == NULL) return -1;        /* underflow sentinel */
    int val = front->data;
    struct Node* temp = front;
    front = front->next;
    if (front == NULL) rear = NULL;      /* queue is now empty */
    free(temp);
    return val;
}

✅ Deque (Double-Ended Queue)

 Insertion and deletion allowed at both front and rear.

 Types:

o Input-restricted: insertion at one end only

o Output-restricted: deletion at one end only

✅ Priority Queue

 Each element has a priority.

 Elements with higher priority are dequeued before lower ones.

 Can be implemented using:

o Array

o Heap

o Linked list (sorted)
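A minimal sketch of the sorted-linked-list variant, where a larger priority value is served first (the PQNode names are illustrative, not from the notes):

#include <stdlib.h>

struct PQNode {
    int data;
    int priority;
    struct PQNode* next;
};

struct PQNode* pqHead = NULL;

/* Insert so the list stays ordered by descending priority. */
void pqEnqueue(int data, int priority) {
    struct PQNode* node = malloc(sizeof(struct PQNode));
    node->data = data;
    node->priority = priority;
    if (pqHead == NULL || priority > pqHead->priority) {
        node->next = pqHead;
        pqHead = node;
    } else {
        struct PQNode* cur = pqHead;
        while (cur->next != NULL && cur->next->priority >= priority)
            cur = cur->next;
        node->next = cur->next;
        cur->next = node;
    }
}

/* Dequeue is O(1): the highest-priority element is always at the head. */
int pqDequeue(void) {
    if (pqHead == NULL) return -1;       /* empty-queue sentinel */
    struct PQNode* temp = pqHead;
    int val = temp->data;
    pqHead = pqHead->next;
    free(temp);
    return val;
}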

📌 Summary

 Stacks: LIFO, useful in recursion, expression evaluation.

 Queues: FIFO, essential in scheduling, buffering.

 Recursion simplifies logic; iteration optimizes performance.

 Choosing between iteration and recursion depends on the problem’s needs (readability vs. performance).
Unit 3

🔍 1. Searching

✅ Concept of Searching

 Searching refers to the process of finding a desired element (key) from a collection of elements.

 Efficiency depends on:

o Size of data

o Type of data structure

o Method of search

✅ 1.1 Sequential Search (Linear Search)

 Checks every element one-by-one until the key is found.

int linearSearch(int arr[], int n, int key) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == key)
            return i;       /* found: return the index */
    }
    return -1;              /* not found */
}

 Time Complexity: O(n)

 Best Case: O(1)

 Worst Case: O(n)

✅ 1.2 Index Sequential Search

 Combines indexing with sequential search.

 Steps:
1. Divide array into blocks.

2. Create an index table for block headers.

3. Use index to find correct block.

4. Perform linear search within the block.

 More efficient than linear search for large data.
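A minimal sketch, assuming the array is sorted and an index table already stores the first key and start position of each block (struct IndexEntry, blockSize, and indexSequentialSearch are illustrative names):

/* Index table entry: smallest key in the block and where the block starts. */
struct IndexEntry {
    int firstKey;
    int start;
};

int indexSequentialSearch(int arr[], int n,
                          struct IndexEntry idx[], int blocks,
                          int blockSize, int key) {
    /* 1. Scan the (small) index to find the block that could hold the key. */
    int b = 0;
    while (b < blocks - 1 && idx[b + 1].firstKey <= key)
        b++;
    /* 2. Linear search only inside that block. */
    int end = idx[b].start + blockSize;
    if (end > n) end = n;
    for (int i = idx[b].start; i < end; i++) {
        if (arr[i] == key)
            return i;
    }
    return -1;
}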

✅ 1.3 Binary Search

 Works on sorted arrays only.

 Repeatedly divides the search interval in half.

int binarySearch(int arr[], int low, int high, int key) {
    while (low <= high) {
        int mid = (low + high) / 2;
        if (arr[mid] == key)
            return mid;
        else if (arr[mid] < key)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return -1;
}

 Time Complexity: O(log n)

 Best Case: O(1)

 Worst Case: O(log n)

🔐 2. Hashing
✅ Concept of Hashing

 Technique to map a key to an index in a table using a hash function.

index = hash(key)

 Ideal hashing distributes keys uniformly.

✅ Common Hash Functions

 h(k) = k % table_size

 h(k) = (a * k + b) % p (universal hashing)

✅ Collision in Hashing

 When two keys hash to the same index.

✅ Collision Resolution Techniques

Technique           Description
Chaining            Use a linked list at each index to store multiple elements
Linear Probing      Search the next available slot sequentially
Quadratic Probing   Jump in quadratic intervals: (h + i²) % table_size
Double Hashing      Use a second hash function: (h1(k) + i * h2(k)) % table_size
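A minimal sketch of separate chaining with the division hash h(k) = k % table_size (TABLE_SIZE, insertKey, and searchKey are illustrative names):

#include <stdlib.h>

#define TABLE_SIZE 10

struct HashNode {
    int key;
    struct HashNode* next;
};

struct HashNode* table[TABLE_SIZE];          /* each slot heads a chain */

int hash(int key) {
    return key % TABLE_SIZE;                 /* h(k) = k % table_size */
}

void insertKey(int key) {
    int idx = hash(key);
    struct HashNode* node = malloc(sizeof(struct HashNode));
    node->key = key;
    node->next = table[idx];                 /* push onto the chain at that index */
    table[idx] = node;
}

int searchKey(int key) {
    for (struct HashNode* cur = table[hash(key)]; cur != NULL; cur = cur->next)
        if (cur->key == key)
            return 1;                        /* found */
    return 0;                                /* not found */
}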

🔃 3. Sorting Algorithms

✅ 3.1 Bubble Sort

 Repeatedly swaps adjacent elements if they are in the wrong order.

for (int i = 0; i < n-1; i++)

for (int j = 0; j < n-i-1; j++)


if (arr[j] > arr[j+1])

swap(arr[j], arr[j+1]);

 Time Complexity: O(n²)

 Stable: Yes
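In these snippets swap(a, b) is shorthand for exchanging two array elements; a complete minimal bubble sort with the exchange written out (bubbleSort is an illustrative name):

void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {       /* adjacent pair out of order */
                int tmp = arr[j];            /* exchange via a temporary */
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
            }
        }
    }
}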

✅ 3.2 Selection Sort

 Selects the smallest element and swaps with current index.

for (int i = 0; i < n-1; i++) {
    int min = i;
    for (int j = i+1; j < n; j++)
        if (arr[j] < arr[min])
            min = j;
    swap(arr[i], arr[min]);   /* place the smallest remaining element at position i */
}

 Time Complexity: O(n²)

 Stable: No (unless modified)

✅ 3.3 Insertion Sort

 Builds the sorted array one element at a time.

for (int i = 1; i < n; i++) {
    int key = arr[i];
    int j = i - 1;
    while (j >= 0 && arr[j] > key)    /* shift larger elements one slot right */
        arr[j + 1] = arr[j--];
    arr[j + 1] = key;                 /* insert key into its slot */
}

 Time Complexity: O(n²)


 Stable: Yes

 Efficient for small or nearly sorted arrays

✅ 3.4 Quick Sort

 Divide-and-conquer approach.

 Choose a pivot, partition the array, and recursively sort subarrays.

int partition(int arr[], int low, int high) {
    int pivot = arr[high], i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] <= pivot)
            swap(arr[++i], arr[j]);     /* move elements <= pivot to the front */
    }
    swap(arr[i+1], arr[high]);          /* put the pivot in its final position */
    return i+1;
}

void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

 Time Complexity:

o Best/Average: O(n log n)

o Worst: O(n²) (when array is already sorted)

 Not Stable, but fast in practice.


✅ 3.5 Merge Sort

 Divides the array into halves, sorts, and merges them.

void merge(int arr[], int l, int m, int r) {
    /* Merge logic: combine the sorted halves arr[l..m] and arr[m+1..r]
       (a full sketch of this step is given below). */
}

void mergeSort(int arr[], int l, int r) {
    if (l < r) {
        int m = (l + r) / 2;
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);
        merge(arr, l, m, r);
    }
}

 Time Complexity: O(n log n)

 Stable: Yes

 Uses extra space
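The merge step left as a comment above could be filled in as follows; a minimal sketch using a C99 variable-length temporary buffer, which is where the extra O(n) space goes:

void merge(int arr[], int l, int m, int r) {
    int temp[r - l + 1];                 /* temporary buffer for the merged run */
    int i = l, j = m + 1, k = 0;
    while (i <= m && j <= r)             /* take the smaller head each time */
        temp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= m) temp[k++] = arr[i++]; /* drain the left half */
    while (j <= r) temp[k++] = arr[j++]; /* drain the right half */
    for (k = 0; k < r - l + 1; k++)      /* copy back into the original array */
        arr[l + k] = temp[k];
}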

✅ 3.6 Heap Sort

 Builds a heap (max-heap or min-heap), repeatedly removes root to


sort.

void heapify(int arr[], int n, int i) {
    int largest = i;
    int l = 2*i + 1, r = 2*i + 2;
    if (l < n && arr[l] > arr[largest]) largest = l;
    if (r < n && arr[r] > arr[largest]) largest = r;
    if (largest != i) {
        swap(arr[i], arr[largest]);
        heapify(arr, n, largest);         /* sift the displaced value down */
    }
}

void heapSort(int arr[], int n) {
    for (int i = n / 2 - 1; i >= 0; i--)  /* build the max-heap */
        heapify(arr, n, i);
    for (int i = n - 1; i > 0; i--) {     /* move the root to the end, shrink the heap */
        swap(arr[0], arr[i]);
        heapify(arr, i, 0);
    }
}

 Time Complexity: O(n log n)

 Not Stable

 In-place

✅ 3.7 Radix Sort

 Non-comparative sort.

 Works by sorting digits from the least significant to the most significant using Counting Sort.

void countingSort(int arr[], int n, int exp) {
    /* Stable sort of arr[] keyed on the digit at the exp place
       (a full sketch of this function is given below). */
}

void radixSort(int arr[], int n) {
    int max = getMax(arr, n);               /* getMax(): largest value in arr[] */
    for (int exp = 1; max / exp > 0; exp *= 10)
        countingSort(arr, n, exp);          /* one stable pass per digit place */
}

 Time Complexity: O(nk), where k = number of digits

 Stable: Yes

 Works only on integers
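A minimal sketch of the per-digit counting sort referenced above, assuming non-negative integers, base 10, and a C99 variable-length array for the output buffer:

void countingSort(int arr[], int n, int exp) {
    int output[n];                         /* merged output for this pass */
    int count[10] = {0};
    for (int i = 0; i < n; i++)            /* count occurrences of each digit */
        count[(arr[i] / exp) % 10]++;
    for (int d = 1; d < 10; d++)           /* prefix sums give final positions */
        count[d] += count[d - 1];
    for (int i = n - 1; i >= 0; i--)       /* walk backwards to keep the sort stable */
        output[--count[(arr[i] / exp) % 10]] = arr[i];
    for (int i = 0; i < n; i++)
        arr[i] = output[i];
}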

📝 Sorting Algorithm Comparison Table

Algorithm        Time Complexity (Best / Avg / Worst)   Space      Stable
Bubble Sort      O(n) / O(n²) / O(n²)                   O(1)       Yes
Selection Sort   O(n²) / O(n²) / O(n²)                  O(1)       No
Insertion Sort   O(n) / O(n²) / O(n²)                   O(1)       Yes
Quick Sort       O(n log n) / O(n log n) / O(n²)        O(log n)   No
Merge Sort       O(n log n) / O(n log n) / O(n log n)   O(n)       Yes
Heap Sort        O(n log n) / O(n log n) / O(n log n)   O(1)       No
Radix Sort       O(nk) / O(nk) / O(nk)                  O(n + k)   Yes

✅ Summary

 Searching: Linear for unsorted, Binary for sorted arrays.

 Hashing: Offers constant-time search (ideal case); needs good collision resolution.

 Sorting: Choose based on data size, stability requirements, and time/space trade-offs.

o Quick Sort is fast but unstable.


o Merge Sort is stable and preferred in external sorting.

o Heap Sort is good when memory is constrained.


Unit 4

Trees in Data Structures

✅ 1. Basic Terminology of Trees

Term                   Description
Node                   Each element in a tree
Root                   Topmost node of the tree
Parent                 Node having child nodes
Child                  Node derived from a parent
Leaf (Terminal Node)   Node with no children
Internal Node          Any non-leaf node
Subtree                Tree formed by a node and its descendants
Level                  Distance from the root (root is at level 0)
Height/Depth           Maximum level in the tree
Degree                 Number of children a node has

✅ 2. Binary Tree

 A tree where each node has at most two children.

 Children are referred to as left and right.

✅ 3. Types of Binary Trees

Type                   Description
Full Binary Tree       Every node has 0 or 2 children
Strictly Binary Tree   Same as a Full Binary Tree
Complete Binary Tree   All levels filled except possibly the last, which is filled left to right
Perfect Binary Tree    All internal nodes have two children and all leaves are at the same level
Extended Binary Tree   Also called a 2-tree: every internal node has exactly 2 children; leaves are NULL (external) nodes

✅ 4. Binary Tree Representations

📌 4.1 Array Representation

 Root at index 0 (or 1)

 Left child of i: 2i + 1 (or 2i if starting at 1)

 Right child of i: 2i + 2 (or 2i + 1)

 Used in complete binary trees

📌 4.2 Pointer (Linked List) Representation

struct Node {

int data;

struct Node *left, *right;

};

✅ 5. Binary Tree Traversals

Traversal Type   Order
Inorder          Left → Root → Right
Preorder         Root → Left → Right
Postorder        Left → Right → Root

Example:

        A
       / \
      B   C
     / \ / \
    D  E F  G

Traversal   Output
Inorder     D B E A F C G
Preorder    A B D E C F G
Postorder   D E B F G C A
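A minimal sketch of the three traversals on the pointer (linked) representation shown earlier:

#include <stdio.h>

void inorder(struct Node* root) {
    if (root == NULL) return;
    inorder(root->left);                /* Left */
    printf("%d ", root->data);          /* Root */
    inorder(root->right);               /* Right */
}

void preorder(struct Node* root) {
    if (root == NULL) return;
    printf("%d ", root->data);          /* Root */
    preorder(root->left);               /* Left */
    preorder(root->right);              /* Right */
}

void postorder(struct Node* root) {
    if (root == NULL) return;
    postorder(root->left);              /* Left */
    postorder(root->right);             /* Right */
    printf("%d ", root->data);          /* Root */
}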

✅ 6. Constructing Tree from Traversals

 Inorder + Preorder → Unique Tree

 Inorder + Postorder → Unique Tree

Inorder helps locate root positions for left and right subtree division.

✅ 7. Binary Search Tree (BST)

 For each node:

o Left subtree < Root

o Right subtree > Root

📌 Operations
✅ Insertion

struct Node* insert(struct Node* root, int key) {
    if (root == NULL) return createNode(key);   /* createNode() allocates a new leaf */
    if (key < root->data)
        root->left = insert(root->left, key);
    else
        root->right = insert(root->right, key);
    return root;
}

✅ Deletion (3 Cases)

1. Node is a leaf

2. Node has one child

3. Node has two children → Replace with the inorder successor (or predecessor)

✅ Searching

 Traverse left/right based on comparison.
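For example, a minimal recursive search on the struct Node defined earlier, following that comparison rule:

struct Node* search(struct Node* root, int key) {
    if (root == NULL || root->data == key)
        return root;                       /* found, or reached an empty subtree */
    if (key < root->data)
        return search(root->left, key);    /* smaller keys are in the left subtree */
    return search(root->right, key);       /* larger keys are in the right subtree */
}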

✅ Modification

 Similar to search, update the value when found.

✅ 8. Threaded Binary Trees

 Null pointers are replaced with threads pointing to the inorder predecessor/successor.

📌 Types

 Single Threaded: Either left or right null pointer is threaded.

 Double Threaded: Both null pointers are threaded.

📌 Advantages

 Faster inorder traversal

 No need for stack or recursion

✅ 9. Traversing Threaded Binary Tree


Node* inorderSuccessor(Node* ptr) {
    if (ptr->rThread) return ptr->right;   /* a right thread points directly to the successor */
    ptr = ptr->right;                      /* otherwise, take the leftmost node of the right subtree */
    while (!ptr->lThread)
        ptr = ptr->left;
    return ptr;
}

✅ 10. Huffman Coding Using Binary Tree

📌 Huffman Tree

 Used for lossless data compression

 Characters with higher frequencies get shorter codes

📌 Steps:

1. Create a leaf node for each character.

2. Build min-heap of all nodes.

3. While heap has more than one node:

o Extract two minimum freq nodes.

o Create a new node with freq = sum of both.

o Insert new node back into heap.

4. Resulting tree gives Huffman codes.

✅ 11. AVL Tree (Adelson-Velsky and Landis Tree)

 Self-balancing BST

 Balance Factor (BF) = Height(left subtree) - Height(right subtree)

BF Value      Node Type
-1, 0, 1      Balanced
< -1 or > 1   Unbalanced

📌 Rotations Used for Balancing

 LL Rotation: Right Rotation

 RR Rotation: Left Rotation

 LR Rotation: Left → Right

 RL Rotation: Right → Left
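A minimal sketch of the single right rotation used in the LL case, on the struct Node defined earlier (height bookkeeping omitted; a real AVL node also stores and updates a height field):

/* Right rotation: y is the unbalanced node, x its left child. */
struct Node* rotateRight(struct Node* y) {
    struct Node* x = y->left;
    struct Node* T2 = x->right;
    x->right = y;        /* x becomes the new subtree root */
    y->left = T2;        /* x's old right subtree moves under y */
    return x;
}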

✅ 12. B-Trees

 Balanced search tree used in databases and file systems

 Generalization of BST where each node can have more than two
children

 Maintains sorted order and allows logarithmic search

📌 Properties

 All leaves at same level

 Node with k children contains k-1 keys

 B-Trees reduce disk I/O by storing multiple keys per node

✅ 13. Binary Heap

 A complete binary tree satisfying the heap property

📌 Types

 Max Heap: Parent ≥ children

 Min Heap: Parent ≤ children

📌 Applications

 Priority Queues

 Heap Sort
 Shortest path algorithms

✅ Summary Table

Tree Type         Key Feature
Binary Tree       At most 2 children per node
Strict Binary     0 or 2 children
Complete Binary   All levels filled left to right
BST               Left < Root < Right
Threaded Tree     NULLs replaced by threads
AVL Tree          Self-balancing BST
B-Tree            Multi-way tree, used in DBMS
Binary Heap       Complete tree + heap property
Huffman Tree      Used in compression


Unit 5

📘 Graphs: Terminology and Representations

✅ Basic Terminology

 Graph (G): A pair (V, E) where V is a set of vertices (nodes) and E is a set of edges (connections between nodes).

 Directed Graph (Digraph): Edges have direction (u → v).

 Undirected Graph: Edges do not have direction (u — v).

 Weighted Graph: Each edge has a weight (cost).

 Unweighted Graph: Edges have no weights.

 Degree: Number of edges connected to a node.

o In-degree: Number of incoming edges (for directed graphs).

o Out-degree: Number of outgoing edges.

📊 Data Structures for Graph Representations

1. Adjacency Matrix

 A 2D array A[V][V] where:

o A[i][j] = 1 (or weight) if edge exists from i to j.

o A[i][j] = 0 if no edge exists.

 Space Complexity: O(V²)

 Efficient for dense graphs.

2. Adjacency List

 Array of lists. Each vertex has a list of adjacent vertices.

 Space Complexity: O(V + E)

 Efficient for sparse graphs.

3. Incidence Matrix (less common)

 A matrix where rows = vertices and columns = edges.

 Shows which vertices are incident to which edges.
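A minimal sketch of the two common representations for an unweighted directed graph (MAX_V, addEdgeMatrix, and addEdgeList are illustrative names, not from the notes):

#include <stdlib.h>

#define MAX_V 100

/* Adjacency matrix: adjMatrix[i][j] = 1 if there is an edge i -> j. O(V²) space. */
int adjMatrix[MAX_V][MAX_V];

void addEdgeMatrix(int u, int v) {
    adjMatrix[u][v] = 1;          /* for an undirected graph, also set adjMatrix[v][u] = 1 */
}

/* Adjacency list: one linked list of neighbours per vertex. O(V + E) space. */
struct AdjNode {
    int vertex;
    struct AdjNode* next;
};

struct AdjNode* adjList[MAX_V];

void addEdgeList(int u, int v) {
    struct AdjNode* node = malloc(sizeof(struct AdjNode));
    node->vertex = v;
    node->next = adjList[u];      /* push v onto u's neighbour list */
    adjList[u] = node;
}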


🔍 Graph Traversal

1. Depth First Search (DFS)

 Explore as far as possible before backtracking.

 Uses stack (implicitly via recursion or explicitly).

 Time Complexity: O(V + E)

2. Breadth First Search (BFS)

 Explore all neighbors before going deeper.

 Uses queue.

 Time Complexity: O(V + E)
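A minimal recursive DFS over the adjacency matrix sketched above (visited[] and V are assumed globals; illustrative, not from the notes):

#include <stdio.h>

int visited[MAX_V];
int V;                                  /* number of vertices */

void dfs(int u) {
    visited[u] = 1;                     /* mark before exploring */
    printf("%d ", u);
    for (int v = 0; v < V; v++)
        if (adjMatrix[u][v] && !visited[v])
            dfs(v);                     /* go as deep as possible before backtracking */
}

Over an adjacency matrix each vertex scans all V columns, so this version runs in O(V²); the O(V + E) bound applies when the same idea runs over an adjacency list. BFS follows the same pattern but replaces the implicit recursion stack with an explicit queue.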

🔗 Connected Components

 A connected component is a subgraph in which any two vertices are connected by paths.

 In undirected graphs:

o Use DFS or BFS to find all components.

 In directed graphs, use Strongly Connected Components (SCC) algorithms like Kosaraju’s.

🌲 Spanning Trees

 A spanning tree of a graph is a tree that includes all vertices of the graph and has V-1 edges.

 For a connected graph, many spanning trees can exist.

🔽 Minimum Cost Spanning Tree (MST)

A spanning tree with the minimum total edge weight.

🔹 Prim’s Algorithm

 Greedy algorithm.
 Start from any vertex, grow the tree by adding the minimum weight
edge that connects a vertex in the tree to a vertex outside.

 Uses priority queue (min-heap).

 Time Complexity:

o With Min-Heap + Adjacency List: O(E log V)

🔹 Kruskal’s Algorithm

 Greedy algorithm.

 Sort all edges by weight, add edges to the tree unless it forms a cycle.

 Uses Disjoint Set Union (DSU) or Union-Find.

 Time Complexity: O(E log E)
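A minimal sketch of the Union-Find (DSU) core that Kruskal’s cycle check relies on, with path compression (union by rank omitted; parent[] reuses the MAX_V constant from the earlier sketch):

int parent[MAX_V];

void makeSet(int n) {
    for (int i = 0; i < n; i++)
        parent[i] = i;               /* every vertex starts in its own set */
}

int find(int x) {
    if (parent[x] != x)
        parent[x] = find(parent[x]); /* path compression */
    return parent[x];
}

/* Returns 1 if the edge (u, v) can be added (no cycle), 0 if it would close a cycle. */
int unionSets(int u, int v) {
    int ru = find(u), rv = find(v);
    if (ru == rv) return 0;
    parent[ru] = rv;
    return 1;
}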

🔁 Transitive Closure

 For a graph G, the transitive closure is a graph G* such that if there’s a path from i to j in G, there’s an edge from i to j in G*.

 Use Warshall’s Algorithm.

🛣️Shortest Path Algorithms

1. Warshall’s Algorithm

 Computes transitive closure of a graph.

 Can also be modified to solve All-Pairs Shortest Path (Floyd-Warshall).

 Time Complexity: O(V³)
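A minimal sketch of Warshall’s triple loop, where reach[][] starts as the adjacency matrix (with reach[i][i] = 1) and ends as the transitive closure:

void warshall(int reach[MAX_V][MAX_V], int n) {
    for (int k = 0; k < n; k++)            /* allow vertex k as an intermediate */
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                reach[i][j] = reach[i][j] || (reach[i][k] && reach[k][j]);
}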

2. Dijkstra’s Algorithm

 Solves the Single Source Shortest Path problem for graphs with non-negative weights.

 Uses priority queue (min-heap).

 Time Complexity: O((V + E) log V)
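A minimal O(V²) array-based sketch over adjacency-matrix weights (w[u][v] = 0 meaning no edge); the min-heap variant is what achieves the O((V + E) log V) bound quoted above:

#include <limits.h>

void dijkstra(int w[MAX_V][MAX_V], int n, int src, int dist[MAX_V]) {
    int done[MAX_V] = {0};
    for (int i = 0; i < n; i++) dist[i] = INT_MAX;
    dist[src] = 0;
    for (int iter = 0; iter < n; iter++) {
        int u = -1;
        for (int v = 0; v < n; v++)               /* pick the closest unfinished vertex */
            if (!done[v] && (u == -1 || dist[v] < dist[u]))
                u = v;
        if (u == -1 || dist[u] == INT_MAX) break; /* remaining vertices unreachable */
        done[u] = 1;
        for (int v = 0; v < n; v++)               /* relax edges leaving u */
            if (w[u][v] && dist[u] + w[u][v] < dist[v])
                dist[v] = dist[u] + w[u][v];
    }
}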


📝 Summary Table

Concept                    Method/Structure        Time Complexity
Adjacency Matrix           2D array                O(V²)
Adjacency List             Array of linked lists   O(V + E)
DFS                        Recursion / stack       O(V + E)
BFS                        Queue                   O(V + E)
Prim’s MST                 Min-heap                O(E log V)
Kruskal’s MST              DSU                     O(E log E)
Warshall (transitive)      Matrix operations       O(V³)
Dijkstra (shortest path)   Min-heap                O((V + E) log V)
