Sure Shivam — here’s a detailed explanation of your topic “Programming and Data
Structure”, covering every concept in depth, with examples and clarity.
1. Data
Definition:
Data are raw, unprocessed facts, figures, or symbols which by themselves have no
meaning.
They can be numbers, text, images, sounds, etc.
Example:
75, Delhi, A+, "Shivam"
These are just pieces of data — not yet meaningful.
In short:
Data = Raw facts.
2. Entity
Definition:
An entity is a real-world object or concept that can be identified and stored in a database.
Each entity has attributes (properties).
Example:
Entity: Student
Attributes: Roll No, Name, Marks, Branch
So, the student Shivam with roll no 102 and marks 95 is an instance of the Student entity.
3. Information
Definition:
Information is processed, organized, and meaningful data.
It helps in decision-making.
Example:
Data: 85, Maths, Roll No 5
Information: “Student with Roll No 5 scored 85 marks in Maths.”
In short:
Information = Processed Data.
4. Difference between Data and Information
Data: raw, unorganized facts; no context; example: 72, 85, 90; cannot be directly used for decision making.
Information: processed and meaningful; has context; example: “Average score = 82”; used for decision making.
5. Data Type
Definition:
A data type specifies what kind of value a variable can hold and what operations can be
done on it.
Example:
int → integers like 10, -2
float → decimal numbers like 3.14
char → single character like ‘A’
bool → true or false
Purpose:
Data types help the compiler allocate proper memory and prevent errors.
6. Built-in Data Types
Definition:
Built-in (or primitive) data types are predefined in the programming language.
Examples (C/C++):
Type Example Size (approx.)
int 5, -10 2 or 4 bytes
float 3.14, -2.7 4 bytes
double 3.141592 8 bytes
char 'A' 1 byte
bool true/false 1 byte
7. Abstract Data Type (ADT)
Definition:
An Abstract Data Type specifies what operations can be performed but hides how they are
implemented.
Example:
A Stack ADT provides operations like:
push() → insert element
pop() → remove top element
peek() → see top element
We don’t need to know how these operations are implemented internally.
Common ADTs: Stack, Queue, List, Map, Set, Tree, Graph.
8. Data Structure
Definition:
A Data Structure is a way to organize, store, and manage data efficiently in memory.
Purpose:
To access and process data faster.
To reduce memory usage.
To improve the efficiency of programs.
Example:
Array, Linked List, Stack, Queue, Tree, Graph, Hash Table.
9. Types of Data Structures
(a) Linear Data Structures
Data elements are arranged sequentially.
Traversed in a single run (one after another).
Examples:
1. Array – Fixed-size sequential collection.
2. Linked List – Dynamic nodes connected by pointers.
3. Stack – LIFO (Last In First Out).
4. Queue – FIFO (First In First Out).
(b) Non-Linear Data Structures
Data elements are not sequential.
Elements form a hierarchy or network.
Examples:
1. Tree – Hierarchical structure (root, parent, child nodes).
2. Graph – Set of vertices and edges (network connections).
Example:
A social network is a graph where people are nodes and friendships are edges.
10. Algorithm
Definition:
An Algorithm is a step-by-step procedure to solve a specific problem in a finite number of
steps.
Example:
Algorithm to find maximum of two numbers:
1. Start
2. Read A, B
3. If A > B then print A
4. Else print B
5. Stop
Key Point: Algorithm is language-independent.
11. Difference between Algorithm and Program
Algorithm Program
Step-by-step solution Implementation of the algorithm
Language independent Written in a programming language
Logical representation Practical execution
Example: Flowchart or pseudocode Example: C, Java, Python code
12. Properties of a Good Algorithm
1. Input – Should have 0 or more inputs.
2. Output – At least one output.
3. Finiteness – Must terminate after finite steps.
4. Definiteness – Each step must be clear and unambiguous.
5. Effectiveness – Steps should be simple and executable.
6. Generality – Should solve all instances of the problem.
13. Algorithm Design Techniques
Divide and Conquer: divide problem → solve subproblems → combine. Examples: Merge Sort, Quick Sort.
Dynamic Programming: store results of subproblems to avoid recomputation. Examples: Fibonacci, Matrix Chain Multiplication.
Greedy Method: make the best choice at each step. Examples: Dijkstra, Kruskal, Prim.
Backtracking: try all possibilities, backtrack when wrong. Examples: N-Queens, Sudoku.
Branch and Bound: optimization search technique. Examples: Knapsack, Job Scheduling.
14. Performance Analysis of Algorithms
Goal: To measure the efficiency of algorithms in terms of:
Time Complexity → Execution time
Space Complexity → Memory usage
Example:
If a loop runs n times, its time complexity is O(n).
15. Complexity of Various Code Structures
Code Structure Example Time Complexity
Constant statement x = a + b; O(1)
Simple loop for(i=0;i<n;i++) O(n)
Nested loop Two for loops O(n²)
Binary Search Divide array each time O(log n)
Merge Sort Divide + Combine O(n log n)
16. Order of Growth
Definition:
It shows how the running time of an algorithm increases with the input size.
Example:
Algorithm Order of Growth
Constant time O(1)
Linear search O(n)
Binary search O(log n)
Bubble sort O(n²)
Merge sort O(n log n)
Interpretation:
Lower order of growth = faster algorithm.
17. Asymptotic Notations
Used to describe time/space complexity mathematically, ignoring constants.
(a) Big O Notation (O)
Represents upper bound or worst case.
Example: Linear search → O(n)
(b) Omega Notation (Ω)
Represents lower bound or best case.
Example: Linear search (if element found first) → Ω(1)
(c) Theta Notation (Θ)
Represents average or tight bound.
Example: Θ(n)
Example Comparison:
Algorithm Best Case Worst Case
Linear Search Ω(1) O(n)
Binary Search Ω(1) O(log n)
Bubble Sort Ω(n) O(n²)
Summary
Concept Key Idea
Data Raw facts
Information Processed data
Data Type Type of variable
ADT Operations definition only
Data Structure Way to store data
Algorithm Step-by-step procedure
Complexity Time and space efficiency
Asymptotic Notations Mathematical performance measures
Would you like me to add flowcharts and pseudocode examples for algorithms like
Linear Search, Binary Search, and Sorting next?
Nice — let’s dive deep, Shivam. I’ll cover arrays (single & multi-D, memory layout and index
derivations, sparse formats, applications) and recursion (C examples, Tower of Hanoi,
simulating recursion, backtracking, recursive algorithm patterns and analysis) with worked
examples and code.
Arrays — definition, layouts, index formulae, and sparse matrices
Definition (array).
An array is a contiguous block of memory containing a fixed number of elements of the
same type, indexed by integers. Elements are stored sequentially, so indexing is O(1).
Single vs. Multidimensional arrays
Single-dimensional (1-D) array: A[n] — n elements, indexed A[0]..A[n-1] (in C).
Multidimensional array: e.g. 2-D A[r][c] is logically a matrix with r rows and c
columns. In memory it’s still contiguous: either row-major or column-major
ordering.
Note: C uses row-major order; Fortran uses column-major.
1-D index (address) formula (0-based)
Let:
Base = base address of A[0]
w = size (bytes) of each element
i = index (0 ≤ i < n)
Address(A[i]) = Base + i * w
If lower bound L ≠ 0 (e.g., 1-based):
Address(A[i]) = Base + (i − L) * w
2-D arrays: Row-major and Column-major
Assume 0-based indices and array A[R][C] (R rows, C cols). Let element size = w.
Row-major (C-style)
Elements of a row are contiguous. Row 0 elements come first, then row 1, etc.
Derivation: element A[i][j] is at offset = number of elements before row i plus j inside row i:
elements before row i = i * C
then j elements into that row
Address(A[i][j]) = Base + ((i * C) + j) * w
If lower bounds Lr, Lc:
Address = Base + (((i − Lr) * C) + (j − Lc)) * w
Column-major (Fortran-style)
Columns are contiguous. Elements of column 0 come first.
Address(A[i][j]) = Base + ((j * R) + i) * w
Numeric example (row-major)
A[3][4], Base = 1000, w = 4 bytes. Find address of A[2][1] (0-based).
Offset in elements = (2 * 4) + 1 = 9
Offset in bytes = 9 * 4 = 36
Address = 1000 + 36 = 1036
General n-D array mapping (row-major)
For an n-D array with bounds Lk..Uk and dimension sizes Dk = Uk−Lk+1:
Address of element with indices i1..in:
Address = Base + ( Σ_{k=1..n} [ (ik − Lk) * Π_{m=k+1..n} Dm ] ) * w
(For column-major, the product order reverses.)
C pointer arithmetic notes
A[i] is *(A + i); address &A[i] = A + i.
For int A[3][4], A[i][j] equals *(*(A + i) + j).
Applications of arrays
Static lookup tables, storing vectors & matrices
Implementing other data structures (stacks, queues, heaps)
Sorting and searching (binary search requires sorted arrays)
Image buffers (2-D arrays)
Numerical methods (matrices), signal processing
Hashing (buckets often arrays/lists)
Pros: O(1) access, good cache locality.
Cons: Fixed size (static arrays), insert/delete in middle O(n).
Sparse matrices — motivation & representations
Sparse matrix: most entries are zero. Storing all zeros is wasteful. Sparse formats store
only nonzero entries.
Common representations:
1) Coordinate list (COO)
Store a list of (row, col, value) triples.
Good for construction and simple I/O.
Example: matrix 4×5 with nonzeros:
(0,2,3), (0,4,4), (1,2,5), (1,3,7), (3,1,2), (3,2,6)
2) Compressed Sparse Row (CSR) — efficient for row operations & mat-vec
Three arrays:
values[] — nonzero values
col_index[] — column indices for each value
row_ptr[] — length R+1; row_ptr[i] is start index in values for row i, row_ptr[R] =
number of nonzeros
Example CSR for above:
values = [3,4,5,7,2,6]
col_index= [2,4,2,3,1,2]
row_ptr = [0,2,4,4,6] // row0 has 2 entries, row1 has 2, row2 has 0, row3 has 2
3) Compressed Sparse Column (CSC)
Like CSR but compressed by columns — good for column operations.
4) Diagonal/ band storage
If nonzeros lie on known diagonals (e.g., tridiagonal), store diagonals only.
Tradeoffs: COO is easy to build; CSR/CSC give fast arithmetic (mat-vec) and lower storage overhead.
Recursion — definition, C specifics, examples, simulation, backtracking, analysis
Definition (recursion).
A function is recursive if it calls itself (direct) or via a chain of calls (indirect), solving a
problem by solving smaller instances of the same problem.
Principles of recursion (must-haves)
1. Base case(s): one or more stopping conditions that return without recursion.
2. Recursive case: reduces the problem to one or more smaller subproblems.
3. Progress toward base: parameters must move toward base case (otherwise infinite
recursion).
4. State per call: each call has its own activation record (local variables, return
address).
5. Space cost: recursion uses stack frames → O(depth) additional memory.
Recursion in C — details & caveats
Each recursive call creates an activation record on the call stack containing local
variables, parameters, return address.
Deep recursion can cause stack overflow.
Tail recursion: if the recursive call is the last action, compilers may optimize it into
iteration (tail call optimization), but C does not guarantee it — depend on compiler.
Avoid excessive recursion for big depths; use iteration or explicit stacks if needed.
Simple C examples
Factorial (simple recursion)
#include <stdio.h>
long fact(int n) {
    if (n <= 1) return 1;   // base case
    return n * fact(n - 1); // recursive case
}
int main() {
    printf("%ld\n", fact(5)); // prints 120
    return 0;
}
Time: O(n), Space: O(n) stack depth.
Recursive Fibonacci (naive tree recursion — exponential)
int fib(int n) {
    if (n <= 1) return n;
    return fib(n-1) + fib(n-2);
}
Time: O(φ^n) (exponential). Use memoization or iterative DP to get O(n).
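A memoized version caches each fib(k) so the recursion tree collapses to O(n) distinct calls (a sketch; MAXN and the memo array are illustrative choices, not from the original text):

```c
// Memoized Fibonacci: each subproblem is computed once, then reused.
// memo[] is zero-initialized; 0 doubles as "not computed" because
// fib(n) >= 1 for all n >= 2, and n <= 1 is handled by the base case.
#define MAXN 90
long long memo[MAXN];

long long fib_memo(int n) {
    if (n <= 1) return n;           // base cases: fib(0)=0, fib(1)=1
    if (memo[n]) return memo[n];    // cache hit
    return memo[n] = fib_memo(n-1) + fib_memo(n-2);
}
```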
Recursive binary search
int binsearch(int a[], int l, int r, int key) {
    if (l > r) return -1;
    int mid = l + (r - l)/2;
    if (a[mid] == key) return mid;
    else if (a[mid] > key) return binsearch(a, l, mid-1, key);
    else return binsearch(a, mid+1, r, key);
}
Time: O(log n), Space: O(log n) stack depth.
Tower of Hanoi (classic recursion)
Problem: move n disks from peg A to C using B as auxiliary, one disk at a time, never place
larger on smaller.
Recursive solution:
Move n-1 disks from A to B using C.
Move disk n from A to C.
Move n-1 disks from B to C using A.
C code:
#include <stdio.h>
void hanoi(int n, char from, char to, char aux) {
    if (n == 0) return;
    hanoi(n-1, from, aux, to);
    printf("Move disk %d from %c to %c\n", n, from, to);
    hanoi(n-1, aux, to, from);
}
int main() {
    hanoi(3, 'A', 'C', 'B');
    return 0;
}
Recurrence: T(n) = 2 T(n-1) + 1 → T(n) = 2^n − 1 moves (exponential). Space: O(n) recursion
depth.
Simulating recursion (call stack & manual stack)
Simulate factorial(4) — call stack view
fact(4) // waiting for fact(3)
fact(3) // waiting for fact(2)
fact(2)
fact(1)
fact(0) => returns 1
fact(1) returns 1*1 = 1
fact(2) returns 2*1 = 2
fact(3) returns 3*2 = 6
fact(4) returns 4*6 = 24
Each call holds n and the pending multiplication.
Replace recursion with explicit stack (general idea)
Recursion can be converted to iteration by simulating activation records on an explicit
stack. Pattern:
push(initial_frame)
while (stack not empty) {
    frame = pop()
    if (frame is base case) handle and maybe combine with previous
    else push frames for subproblems (in order matching recursion)
}
Example: iterative DFS uses a stack instead of recursive DFS.
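As a tiny instance of this pattern, factorial can be computed with an explicit stack instead of recursion (a sketch; the fixed stack size is an arbitrary illustrative bound):

```c
// Iterative factorial using an explicit stack of pending values,
// mirroring the "simulate activation records" pattern above.
long fact_iter(int n) {
    int stack[64], top = -1;                 // explicit stack (64 is an arbitrary bound)
    while (n > 1) stack[++top] = n--;        // "push frames" down to the base case
    long result = 1;                         // base case: fact(1) = fact(0) = 1
    while (top >= 0) result *= stack[top--]; // unwind: apply pending multiplications
    return result;
}
```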
Backtracking (recursion + undoing choices)
Definition: Backtracking systematically tries options, recursing on choices and undoing
(backtracking) when a partial choice cannot lead to solution.
General template (pseudocode):
backtrack(state):
if is_goal(state): report solution; return
for each candidate c in choices(state):
if is_valid(state, c):
apply(state, c)
backtrack(state)
undo(state, c)
Examples: N-Queens, Sudoku, cryptarithmetic, Hamiltonian path.
N-Queens (brief)
Place queens row by row.
At row r, try placing queen in column c that’s not attacked.
Recurse to r+1. If none valid, backtrack.
Complexity worst-case exponential; pruning (checking columns/diagonals) reduces
search.
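A compact counter of N-Queens solutions following this scheme (a sketch; names like safe, solve, and nqueens are illustrative; the pruning is the column/diagonal check):

```c
#include <stdlib.h>

static int n_global;
static int col[32];                 // col[r] = column of the queen placed in row r

// Is column c safe in row r, given queens already placed in rows 0..r-1?
static int safe(int r, int c) {
    for (int i = 0; i < r; ++i)
        if (col[i] == c || abs(col[i] - c) == r - i)  // same column or diagonal
            return 0;
    return 1;
}

// Count solutions reachable from row r (backtracking template: try, recurse, undo).
static int solve(int r) {
    if (r == n_global) return 1;    // all rows filled: one complete solution
    int count = 0;
    for (int c = 0; c < n_global; ++c)
        if (safe(r, c)) {
            col[r] = c;             // apply choice
            count += solve(r + 1);  // recurse; "undo" is implicit (col[r] overwritten)
        }
    return count;
}

static int nqueens(int n) { n_global = n; return solve(0); }
```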
Recursive algorithm patterns & analysis
Patterns
Linear recursion: T(n)=T(n−1)+O(1) → O(n)
Tail recursion: recursive call as last action (possible to optimize)
Tree recursion: one call branches into multiple (e.g., fib), often exponential
Divide & conquer: split into subproblems of smaller size, combine results (merge
sort, quicksort)
Recurrence solving (examples)
Binary search: T(n) = T(n/2) + O(1) → T(n) = O(log n)
Merge sort: T(n) = 2T(n/2) + O(n) → T(n) = O(n log n) (Master theorem)
Factorial-like: T(n) = T(n-1) + O(1) → T(n) = O(n)
Tower of Hanoi: T(n) = 2T(n-1) + 1 → T(n) = O(2^n)
Master theorem (quick reference): For T(n) = a T(n/b) + f(n):
If f(n) = O(n^{log_b a - ε}) → T(n) = Θ(n^{log_b a})
If f(n) = Θ(n^{log_b a} * log^k n) → T(n) = Θ(n^{log_b a} * log^{k+1} n)
If f(n) = Ω(n^{log_b a + ε}) and regularity holds → T(n) = Θ(f(n))
Tail recursion & optimization
Tail recursion example (factorial with accumulator):
long fact_tail(int n, long acc) {
    if (n == 0) return acc;
    return fact_tail(n-1, acc * n); // tail call
}
If compiler performs tail-call optimization, this uses O(1) stack. But do not rely on it in
portable C.
Practical tips & best practices
Prefer recursion for clarity on divide-and-conquer, tree processing, backtracking.
Use iteration when depth is large or when stack usage is a concern.
Memoize overlapping subproblems (dynamic programming) to avoid exponential
blowups.
Watch stack depth — default stacks are limited (platform dependent).
Prefer CSR for sparse matrix arithmetic (space & speed benefits).
For multi-D arrays in performance-sensitive code, keep access order consistent
with memory layout (row-major) to exploit cache locality: iterate rows outer, cols
inner for C.
Short practical exercises (recommended)
1. For a 0-based row-major array A[5][7] with Base = 2000 and w = 8, derive the address of A[3][4].
2. Implement CSR storage and write matrix-vector multiply.
3. Write recursive and iterative versions of factorial and compare stack frames.
4. Implement N-Queens with backtracking and add basic pruning (column/diag
arrays).
5. Analyze recurrence T(n)=3T(n/4)+n with Master theorem.
If you want, I can now:
give step-by-step derivations for a particular example (with numbers),
produce C code for CSR mat-vec or N-Queens, or
draw the recursion tree and call-stack trace for Tower of Hanoi or Fibonacci.
Which one should I show next?
Alright Shivam — deep dive incoming. I’ll cover both array (cursor-based) and pointer
implementations for singly / doubly / circular linked lists, show core operations
(insertion, deletion, traversal), and then explain polynomial representation and addition,
subtraction, multiplication (single-variable) with worked examples and complexity notes.
1) Overview — Why two implementations
Pointer implementation: uses dynamic memory (malloc/free) and real pointers.
Natural in C/C++ and used in practice. Flexible, no fixed limit.
Array (cursor-based) implementation: simulates pointers using arrays of indices.
Useful where dynamic allocation not allowed or for teaching/embedded
environments. Uses fixed-size arrays and a free list.
2) Array (cursor-based) implementation (aka static linked list)
Data structures
#define MAX 100
int data[MAX]; // stores element values
int next[MAX]; // stores index of next node, -1 for null
int head = -1; // index of first element or -1 if empty
int free_head = 0; // index of first free cell
Initialization
void init() {
    head = -1;
    for (int i = 0; i < MAX-1; ++i) next[i] = i + 1;
    next[MAX-1] = -1;
    free_head = 0;
}
free_head links all unused indices (like a free-list).
Allocation & Deallocation (indices)
int allocate_node() {
    if (free_head == -1) return -1; // no space
    int p = free_head;
    free_head = next[free_head];
    next[p] = -1;
    return p;
}
void free_node(int p) {
    next[p] = free_head;
    free_head = p;
}
Insert at front
void insert_front(int value) {
    int p = allocate_node();
    if (p == -1) { /* overflow */ return; }
    data[p] = value;
    next[p] = head;
    head = p;
}
Insert after index pos (pos is an index in array)
void insert_after(int pos, int value) {
    if (pos < 0) return;
    int p = allocate_node();
    if (p == -1) return;
    data[p] = value;
    next[p] = next[pos];
    next[pos] = p;
}
Delete by value (first occurrence)
void delete_value(int value) {
    int cur = head, prev = -1;
    while (cur != -1 && data[cur] != value) {
        prev = cur; cur = next[cur];
    }
    if (cur == -1) return; // not found
    if (prev == -1) head = next[cur]; else next[prev] = next[cur];
    free_node(cur);
}
Traversal
void traverse() {
    for (int cur = head; cur != -1; cur = next[cur]) {
        printf("%d ", data[cur]);
    }
}
Complexity
Insert/Delete/Traverse: same big-O as pointer lists (O(1) for head insert, O(n) to find
arbitrary position).
Space: fixed (MAX), waste when list small.
3) Pointer-based Singly Linked List (dynamic)
Node structure
typedef struct Node {
int data;
struct Node *next;
} Node;
Create node
Node* create_node(int val) {
    Node* p = malloc(sizeof(Node)); // needs <stdlib.h>
    if (!p) exit(1);
    p->data = val; p->next = NULL;
    return p;
}
Insert at head (O(1))
void insert_head(Node** head_ref, int val) {
    Node* p = create_node(val);
    p->next = *head_ref;
    *head_ref = p;
}
Insert at tail (O(n) if no tail pointer)
void insert_tail(Node** head_ref, int val) {
    Node* p = create_node(val);
    if (*head_ref == NULL) { *head_ref = p; return; }
    Node* cur = *head_ref;
    while (cur->next) cur = cur->next;
    cur->next = p;
}
(If you maintain a tail pointer, tail insertion becomes O(1).)
Insert after a given node pointer
void insert_after_node(Node* prev, int val) {
    if (!prev) return;
    Node* p = create_node(val);
    p->next = prev->next;
    prev->next = p;
}
Delete by value (first occurrence)
void delete_value(Node** head_ref, int val) {
Node *cur = *head_ref, *prev = NULL;
while (cur && cur->data != val) { prev = cur; cur = cur->next; }
if (!cur) return; // not found
if (!prev) *head_ref = cur->next;
else prev->next = cur->next;
free(cur);
}
Delete by position (0-based)
void delete_pos(Node** head_ref, int pos) {
    if (!*head_ref) return;
    Node* cur = *head_ref;
    if (pos == 0) { *head_ref = cur->next; free(cur); return; }
    for (int i = 0; cur && i < pos-1; ++i) cur = cur->next;
    if (!cur || !cur->next) return;
    Node* toDelete = cur->next;
    cur->next = toDelete->next;
    free(toDelete);
}
Traversal (iterative & recursive)
void traverse_iter(Node* head) {
    for (Node* cur = head; cur; cur = cur->next) printf("%d ", cur->data);
}
void traverse_rec(Node* head) {
    if (!head) return;
    printf("%d ", head->data);
    traverse_rec(head->next);
}
Complexity
Insert at head: O(1)
Insert at tail: O(n) (unless tail pointer)
Delete/search by key/position: O(n)
Space: grows as nodes allocated.
4) Doubly Linked List (pointer implementation)
Node structure
typedef struct DNode {
int data;
struct DNode *prev, *next;
} DNode;
Insert at head
void insert_head(DNode** head_ref, int val) {
    DNode* p = malloc(sizeof(DNode));
    p->data = val; p->prev = NULL; p->next = *head_ref;
    if (*head_ref) (*head_ref)->prev = p;
    *head_ref = p;
}
Insert at tail
void insert_tail(DNode** head_ref, int val) {
    DNode* p = malloc(sizeof(DNode));
    p->data = val; p->next = NULL;
    if (!*head_ref) { p->prev = NULL; *head_ref = p; return; }
    DNode* cur = *head_ref;
    while (cur->next) cur = cur->next;
    cur->next = p; p->prev = cur;
}
Delete a node pointer (O(1) if you have the node)
void delete_node(DNode** head_ref, DNode* del) {
    if (!del) return;
    if (*head_ref == del) *head_ref = del->next;
    if (del->next) del->next->prev = del->prev;
    if (del->prev) del->prev->next = del->next;
    free(del);
}
Traverse forward / backward
void traverse_forward(DNode* head) {
    for (DNode* cur = head; cur; cur = cur->next) printf("%d ", cur->data);
}
void traverse_backward(DNode* head) {
    if (!head) return;
    DNode* tail = head; while (tail->next) tail = tail->next;
    for (DNode* cur = tail; cur; cur = cur->prev) printf("%d ", cur->data);
}
Complexity
Insert/delete at known node: O(1)
Search by value: O(n)
Extra memory per node for prev.
5) Circular Linked Lists
Singly circular (pointer)
Last node's next points to head.
head == NULL means empty.
Insert at end (O(1) if you keep tail pointer)
Algorithm using tail pointer tail (tail->next = head):
Create node p.
If empty: p->next = p; tail = p;
Else: p->next = tail->next; tail->next = p; tail = p;
Insert at front using tail pointer
Use tail pointer so O(1): insert node after tail and set tail->next = new_head.
// insert at front given tail pointer
void insert_front_circular(Node** tail_ref, int val) {
    Node* p = create_node(val);
    if (!*tail_ref) { p->next = p; *tail_ref = p; }
    else {
        p->next = (*tail_ref)->next;
        (*tail_ref)->next = p;
        // tail remains the same; new head is tail->next
    }
}
Deletion (delete head)
If single node: set tail = NULL.
Else: tail->next = head->next and free old head.
Traversal
void traverse_circular(Node* tail) {
if (!tail) return;
Node* cur = tail->next; // head
do {
printf("%d ", cur->data);
cur = cur->next;
} while (cur != tail->next);
}
Doubly circular — similar with prev link; both ends connected.
Complexity
With tail pointer, insert/delete at head or tail are O(1).
Traversal still O(n).
6) Operations — summary & edge cases
Common operations across implementations:
Insertion: at head, at tail, after a position or key.
Deletion: by key, by position, delete head/tail.
Traversal: read all elements (iter/rec).
Search: find node by value or property.
Reverse (optional): reverse pointers O(n).
Edge cases:
Empty list
Single element list
Insert/delete at boundaries (head/tail)
Memory allocation failure (pointer impl)
Array-full condition (array impl)
7) Polynomial representation (single variable) — ways to store
Two common representations:
A. Array of coefficients
Index = exponent, value = coefficient. Good when the degree bound is small.
Example: coeff[0] = constant term, coeff[3] = coefficient of x^3.
Pros: O(1) access to a coefficient; multiplication/addition simple with indices.
Cons: wasteful for sparse polynomials with very large degrees.
B. Linked list of terms (sparse representation)
Node:
typedef struct Term {
    int coeff;          // coefficient
    int exp;            // exponent
    struct Term *next;
} Term;
Maintain list sorted by decreasing exponent (or increasing) — simplifies add/merge.
8) Polynomial addition & subtraction (linked-list merge-style)
Assume both lists sorted by descending exponent.
Algorithm (addition)
Create dummy head res_tail.
Let p = P1, q = P2.
While both non-null:
o If p->exp == q->exp: sum = p->coeff + q->coeff; if sum != 0 append (sum, exp); move p and q.
o Else if p->exp > q->exp: append (p->coeff, p->exp); p = p->next.
o Else append (q->coeff, q->exp); q = q->next.
Append remaining terms.
Return dummy.next.
C-like code (sketch)
Term* add_poly(Term* p, Term* q) {
    Term dummy = {0,0,NULL}; Term *tail = &dummy;
    while (p && q) {
        if (p->exp == q->exp) {
            int s = p->coeff + q->coeff;
            if (s) { tail->next = new_term(s, p->exp); tail = tail->next; }
            p = p->next; q = q->next;
        } else if (p->exp > q->exp) {
            tail->next = new_term(p->coeff, p->exp); tail = tail->next; p = p->next;
        } else {
            tail->next = new_term(q->coeff, q->exp); tail = tail->next; q = q->next;
        }
    }
    while (p) { tail->next = new_term(p->coeff, p->exp); tail = tail->next; p = p->next; }
    while (q) { tail->next = new_term(q->coeff, q->exp); tail = tail->next; q = q->next; }
    return dummy.next;
}
Subtraction
P1 - P2 can be done by negating coefficients: treat q->coeff = -q->coeff and reuse
addition, or modify the comparison branch to subtract.
Complexity
O(m + n) where m,n are number of terms.
9) Polynomial multiplication (linked-list naive method)
Method 1 — term-by-term (naive)
For each term a in P1:
o For each term b in P2:
Multiply: coeff = a.coeff * b.coeff, exp = a.exp + b.exp.
Insert (or accumulate) this term in the result list; if a term with the same
exponent exists, add the coefficients.
Insertion/accumulation can be done in O(k) per insertion if the result is a
linked list (worst case O(m * n * result_size)). Better to use an auxiliary
array or hash map keyed by exponent to accumulate efficiently.
Method 2 — map/array accumulation (recommended)
If the maximum degree is small or known: use an array res_coeff[0..max_deg]
initialized to 0.
For each pair, multiply and add to res_coeff[e1+e2].
Then traverse array to build final sparse list.
Example (worked)
P1 = 5x^3 + 4x + 2
P2 = 3x^2 + 1
Multiply term-by-term:
5x^3 * 3x^2 = 15 x^5
5x^3 * 1 = 5 x^3
4x * 3x^2 = 12 x^3
4x * 1 = 4 x^1
2 * 3x^2 = 6 x^2
2*1=2
Combine like terms:
x^5: 15
x^3: 5 + 12 = 17
x^2: 6
x^1: 4
x^0: 2
Result: 15x^5 + 17x^3 + 6x^2 + 4x + 2
Complexity
Naive multiplication: O(m * n) term multiplications.
If using array/hashing to accumulate, combining is O(m * n + D) where D is number
of distinct degrees in result.
10) Example code — polynomial addition & multiplication (linked-list)
Full production-level code is long; here is a concise, usable sketch:
// Term node creation
Term* new_term(int coeff, int exp) {
    Term* t = malloc(sizeof(Term));
    t->coeff = coeff; t->exp = exp; t->next = NULL;
    return t;
}
// Append term in descending exponent order (used to build results)
void append_term(Term** tail_ref, int coeff, int exp) {
    Term* node = new_term(coeff, exp);
    (*tail_ref)->next = node;
    *tail_ref = node;
}
// Add (assumes sorted descending)
Term* add_poly(Term* p, Term* q) {
    Term dummy = {0,0,NULL}; Term* tail = &dummy;
    while (p && q) {
        if (p->exp == q->exp) {
            int s = p->coeff + q->coeff;
            if (s) append_term(&tail, s, p->exp);
            p = p->next; q = q->next;
        } else if (p->exp > q->exp) {
            append_term(&tail, p->coeff, p->exp); p = p->next;
        } else {
            append_term(&tail, q->coeff, q->exp); q = q->next;
        }
    }
    while (p) { append_term(&tail, p->coeff, p->exp); p = p->next; }
    while (q) { append_term(&tail, q->coeff, q->exp); q = q->next; }
    return dummy.next;
}
// Multiply using an accumulation array (safer if degrees are bounded)
Term* multiply_poly(Term* p, Term* q) {
    // find max degree (optional: if degrees unknown and high, use a hash table)
    int max_deg = 0;
    for (Term* t = p; t; t = t->next) if (t->exp > max_deg) max_deg = t->exp;
    for (Term* t = q; t; t = t->next) if (t->exp > max_deg) max_deg = t->exp;
    int res_max = max_deg * 2;
    int *res = calloc(res_max+1, sizeof(int)); // res[exp] = coeff
    for (Term* a = p; a; a = a->next)
        for (Term* b = q; b; b = b->next)
            res[a->exp + b->exp] += a->coeff * b->coeff;
    // Build result list in descending order
    Term dummy = {0,0,NULL}; Term* tail = &dummy;
    for (int e = res_max; e >= 0; --e) if (res[e]) append_term(&tail, res[e], e);
    free(res);
    return dummy.next;
}
11) Memory & performance notes
Pointer lists have O(1) insert/delete at head and for known node; searches are O(n).
Array (cursor) lists avoid malloc/free overhead but are limited by fixed size.
Doubly linked lists need double memory per node, but deletions at arbitrary nodes
are O(1).
Circular lists are useful when you need wrap-around or equal treatment of head/tail;
be careful with the traversal stopping condition.
12) Quick complexity summary
Insert at head: O(1) for singly (pointer), doubly, and cursor-array.
Insert at tail: O(n), or O(1) if a tail pointer is kept (singly and doubly); O(n) for cursor-array unless you track a tail index.
Delete a known node: O(n) for singly (need prev); O(1) for doubly; O(n) for cursor-array (need prev).
Search by value: O(n) in all three.
Memory per node: pointer + data (singly); prev + next + data (doubly); index + data (cursor-array; less overhead but fixed).
Polynomial ops:
Addition/Subtraction: O(m + n)
Multiplication: O(m * n) term multiplications (plus combine cost)
13) Practical tips & exercises
Use cursor-based lists when dynamic allocation is prohibited or the size is
predictably bounded.
Use doubly linked lists when many deletions at arbitrary nodes are required.
For polynomials: if dense and degree small, use arrays; if sparse, use linked lists or
hash maps.
Exercise: implement polynomial multiplication using a hash map (exponent →
coefficient) for unbounded degrees.
If you want, I can now:
provide full ready-to-compile C programs for any one of the implementations
(array-based singly list, pointer singly list, doubly list, circular list) or for polynomial
add/multiply, or
show step-by-step memory diagrams for an insertion and deletion operation.
Which code/example would you like me to produce next?
Alright Shivam — here’s the complete detailed explanation of the topic step-by-step,
covering everything in your syllabus clearly and deeply.
1. Abstract Data Type (ADT)
Definition:
An Abstract Data Type (ADT) is a logical description of how data is organized and what
operations can be performed on it — without specifying the implementation details.
Example
Stack, Queue, List, Tree, etc. are ADTs.
They define:
What data they store (e.g., integers, strings)
What operations they support (e.g., push, pop, insert, delete)
But not how they’re implemented (array, linked list, etc.)
For example — Stack ADT
Operations:
1. push(x) → insert element x
2. pop() → remove top element
3. peek() → return top element without removing
4. isEmpty() → check if stack is empty
5. isFull() → check if stack is full
2. Stack (LIFO structure)
Definition:
A stack is a linear data structure that follows Last In, First Out (LIFO) principle.
➡ The last element inserted is the first one to be removed.
Example:
Pushing elements: 10 → 20 → 30
Stack top is now 30.
After one pop, 30 is removed, and 20 becomes top.
3. Primitive Stack Operations
Push(x)
→ Adds an element on the top of the stack.
Pop()
→ Removes and returns the top element from the stack.
Peek()
→ Returns top element without removing it.
isEmpty()
→ Returns true if stack is empty.
isFull()
→ Returns true if stack is full (in array implementation).
4. Array Implementation of Stack
Static implementation using arrays
#include <stdio.h>
#define MAX 5
int stack[MAX];
int top = -1;
void push(int x) {
    if (top == MAX - 1)
        printf("Stack Overflow\n");
    else
        stack[++top] = x;
}
int pop() {
    if (top == -1) {
        printf("Stack Underflow\n");
        return -1;
    } else
        return stack[top--];
}
int peek() {
    if (top == -1) {
        printf("Stack Empty\n");
        return -1;
    } else
        return stack[top];
}
void display() {
    for (int i = top; i >= 0; i--)
        printf("%d ", stack[i]);
    printf("\n");
}
Example Output:
push(10)
push(20)
push(30)
Stack: 30 20 10
pop() → 30
Complexity:
Push: O(1)
Pop: O(1)
Peek: O(1)
5. Linked List Implementation of Stack
Dynamic implementation using pointers
#include <stdio.h>
#include <stdlib.h>
struct Node {
int data;
struct Node *next;
};
struct Node *top = NULL;
void push(int x) {
    struct Node *temp = (struct Node*)malloc(sizeof(struct Node));
    temp->data = x;
    temp->next = top;
    top = temp;
}

int pop() {
    if (top == NULL) {
        printf("Stack Underflow\n");
        return -1;
    }
    int val = top->data;
    struct Node *temp = top;
    top = top->next;
    free(temp);
    return val;
}

void display() {
    struct Node *temp = top;
    while (temp) {
        printf("%d ", temp->data);
        temp = temp->next;
    }
    printf("\n");
}
Advantages:
No fixed size
No overflow (until memory full)
6. Applications of Stack
1. Expression conversion (Infix → Prefix / Postfix)
2. Evaluation of postfix expression
3. Parenthesis balancing
4. Function call management (recursion)
5. Undo / Redo in text editors
6. Browser back/forward
7. Prefix, Infix, and Postfix Expressions
Type Example Evaluation Order
Infix A+B Operators between operands
Prefix (Polish) +AB Operator before operands
Postfix (Reverse Polish) A B + Operator after operands
Example:
Infix: A + B * C
Prefix: + A * B C
Postfix: A B C * +
8. Evaluation of Postfix Expression
Algorithm using Stack:
1. Scan postfix expression from left to right.
2. If operand → push into stack.
3. If operator → pop two elements, apply operator, push result back.
4. After complete scan, top of stack = result.
Example:
Postfix: 6 2 3 + - 3 8 2 / + *
Step Symbol Stack (top→bottom)
1 6 6
2 2 2,6
3 3 3,2,6
4 + (2+3)=5 → push → 5,6
5 - (6−5)=1 → push → 1
6 3 3,1
7 8 8,3,1
8 2 2,8,3,1
9 / (8/2)=4 → push → 4,3,1
10 + (3+4)=7 → push → 7,1
11 * (1*7)=7 → push → 7
Final Answer = 7
9. Recursion vs Iteration
Feature Recursion Iteration
Definition Function calls itself Loop repeats block
Control Function call stack Loop counter
Memory use More (stack frames) Less
Termination Base condition Loop condition
Example Factorial via recursion Factorial via loop
10. Principles of Recursion
1. Base case – must exist to stop recursion.
2. Recursive case – function calls itself with smaller input.
3. Stack frame – each recursive call stores its data on call stack.
4. Progressive approach – input should move toward base case.
11. Tail Recursion
A tail-recursive function is one in which the recursive call is the last statement executed
by the function.
Example:
void display(int n) {
if (n == 0) return;
printf("%d ", n);
display(n - 1); // recursive call is last → tail recursion
}
This can be easily converted to an iteration using a loop.
12. Removing Recursion (Recursion → Iteration)
Every recursion can be simulated by an explicit stack or loop.
Example – Factorial
Recursive:
int fact(int n) {
    if (n == 0) return 1;
    return n * fact(n - 1);
}
Iterative:
int fact_iter(int n) {
    int res = 1;
    for (int i = 1; i <= n; i++)
        res *= i;
    return res;
}
13. Problem Solving with Iteration & Recursion
(a) Binary Search
Recursive version
int binarySearch(int a[], int low, int high, int key) {
    if (low > high) return -1;
    int mid = (low + high) / 2;
    if (a[mid] == key) return mid;
    else if (key < a[mid]) return binarySearch(a, low, mid - 1, key);
    else return binarySearch(a, mid + 1, high, key);
}
Iterative version
int binarySearch_iter(int a[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = (low + high) / 2;
        if (a[mid] == key) return mid;
        else if (key < a[mid]) high = mid - 1;
        else low = mid + 1;
    }
    return -1;
}
(b) Fibonacci Numbers
Recursive
int fib(int n) {
    if (n <= 1) return n;
    return fib(n-1) + fib(n-2);
}
Iterative
int fib_iter(int n) {
    if (n == 0) return 0;   /* guard so fib_iter(0) is correct */
    int a = 0, b = 1, c;
    for (int i = 2; i <= n; i++) {
        c = a + b;
        a = b;
        b = c;
    }
    return b;
}
(c) Tower of Hanoi
Problem: Move n disks from source → destination using auxiliary rod, one at a time.
Recursive Solution
void TOH(int n, char source, char aux, char dest) {
    if (n == 1) {
        printf("Move disk 1 from %c to %c\n", source, dest);
        return;
    }
    TOH(n-1, source, dest, aux);
    printf("Move disk %d from %c to %c\n", n, source, dest);
    TOH(n-1, aux, source, dest);
}
Steps for n=3:
Move disk 1 from A → C
Move disk 2 from A → B
Move disk 1 from C → B
Move disk 3 from A → C
Move disk 1 from B → A
Move disk 2 from B → C
Move disk 1 from A → C
Number of moves = 2ⁿ − 1
Conceptual Summary
Concept Key Points
ADT Describes what operations are possible, not how
Stack LIFO structure; push/pop
Implementation Array (fixed size) or Linked list (dynamic)
Applications Expression eval, recursion, undo, etc.
Recursion Function calling itself; uses call stack
Tail recursion Recursive call at end (convertible to loop)
Binary Search, Fibonacci, TOH Classic recursion problems
Would you like me to provide a single C program file that demonstrates both array +
linked stack, and postfix evaluation using stack, along with recursion examples (binary
search, Fibonacci, TOH) — all in one?
Let’s explain Queue and all its operations and types in detail
QUEUE
Definition
A queue is a linear data structure that follows the FIFO (First In First Out) principle.
The element inserted first is the first to be deleted.
Example:
Think of a queue in front of a ticket counter — the person who comes first gets the ticket
first.
Basic Queue Operations
Operation Description
Create Initialize an empty queue.
Add / Enqueue Insert an element at the rear end of the queue.
Delete / Dequeue Remove an element from the front end of the queue.
Full Check if the queue is full (in case of array implementation).
Empty Check if the queue is empty (no elements).
Queue Terminologies
Front: The position from which elements are removed.
Rear: The position where new elements are added.
Size: Maximum number of elements the queue can hold.
ARRAY IMPLEMENTATION OF QUEUE IN C
#include <stdio.h>
#define SIZE 5
int queue[SIZE];
int front = -1, rear = -1;
void enqueue(int value) {
    if (rear == SIZE - 1)
        printf("Queue is Full\n");
    else {
        if (front == -1)
            front = 0;
        rear++;
        queue[rear] = value;
        printf("%d inserted\n", value);
    }
}

void dequeue() {
    if (front == -1 || front > rear)
        printf("Queue is Empty\n");
    else {
        printf("%d deleted\n", queue[front]);
        front++;
    }
}

void display() {
    if (front == -1 || front > rear)
        printf("Queue is Empty\n");
    else {
        for (int i = front; i <= rear; i++)
            printf("%d ", queue[i]);
        printf("\n");
    }
}

int main() {
    enqueue(10);
    enqueue(20);
    enqueue(30);
    display();
    dequeue();
    display();
    return 0;
}
CIRCULAR QUEUE
Definition
In a circular queue, the last position is connected back to the first position, forming a
circle.
This helps reuse memory that is freed after deletions.
Example:
If rear reaches the end of the array but there is space at the beginning (due to deletions), it
wraps around.
Circular Queue Operations
Operation Condition
Empty front == -1
Full (rear + 1) % SIZE == front
Enqueue rear = (rear + 1) % SIZE
Dequeue front = (front + 1) % SIZE
C Program for Circular Queue
#include <stdio.h>
#define SIZE 5
int cqueue[SIZE];
int front = -1, rear = -1;
void enqueue(int value) {
    if ((rear + 1) % SIZE == front)
        printf("Queue is Full\n");
    else {
        if (front == -1)
            front = 0;
        rear = (rear + 1) % SIZE;
        cqueue[rear] = value;
        printf("%d inserted\n", value);
    }
}

void dequeue() {
    if (front == -1)
        printf("Queue is Empty\n");
    else {
        printf("%d deleted\n", cqueue[front]);
        if (front == rear)
            front = rear = -1;
        else
            front = (front + 1) % SIZE;
    }
}

void display() {
    if (front == -1)
        printf("Queue is Empty\n");
    else {
        int i = front;
        while (1) {
            printf("%d ", cqueue[i]);
            if (i == rear)
                break;
            i = (i + 1) % SIZE;
        }
        printf("\n");
    }
}

int main() {
    enqueue(10);
    enqueue(20);
    enqueue(30);
    display();
    dequeue();
    display();
    return 0;
}
LINKED LIST IMPLEMENTATION OF QUEUE
Concept
Here, the queue is implemented using nodes.
Each node contains:
data
pointer to the next node
C Program (Linked Queue)
#include <stdio.h>
#include <stdlib.h>
struct Node {
int data;
struct Node* next;
};
struct Node* front = NULL;
struct Node* rear = NULL;
void enqueue(int value) {
    struct Node* newNode = (struct Node*)malloc(sizeof(struct Node));
    newNode->data = value;
    newNode->next = NULL;
    if (rear == NULL) {
        front = rear = newNode;
    } else {
        rear->next = newNode;
        rear = newNode;
    }
    printf("%d inserted\n", value);
}

void dequeue() {
    if (front == NULL)
        printf("Queue is Empty\n");
    else {
        struct Node* temp = front;
        printf("%d deleted\n", front->data);
        front = front->next;
        if (front == NULL)
            rear = NULL;
        free(temp);
    }
}

void display() {
    struct Node* temp = front;
    while (temp != NULL) {
        printf("%d ", temp->data);
        temp = temp->next;
    }
    printf("\n");
}

int main() {
    enqueue(10);
    enqueue(20);
    enqueue(30);
    display();
    dequeue();
    display();
    return 0;
}
DEQUE (Double-Ended Queue)
Definition
A deque is a queue where insertion and deletion can be done from both ends — front and
rear.
Types
1. Input-restricted deque: Insertion only at rear, deletion at both ends.
2. Output-restricted deque: Deletion only at front, insertion at both ends.
PRIORITY QUEUE
Definition
A priority queue is a special queue where each element has a priority.
Elements with higher priority are served before those with lower priority.
Types
Ascending Priority Queue: Smaller values have higher priority.
Descending Priority Queue: Larger values have higher priority.
Example
Element Priority
10 2
20 1
30 3
Dequeuing order → 20 → 10 → 30
Applications of Queue
CPU Scheduling
Disk Scheduling
Printer Spooling
Data Buffers (I/O queues)
Simulation of real-world queues
Would you like me to include diagrams for linear, circular, and linked queue structures
to visualize their working?
Here’s a detailed explanation of all these topics — Searching and Hashing concepts
1. Concept of Searching
Definition
Searching means finding the location of a desired element (called key) in a list of
elements (like an array or file).
It tells whether the element exists, and if yes, where.
Types of Searching
1. Sequential Search (Linear Search)
2. Index Sequential Search
3. Binary Search
4. Hashing
2. Sequential Search (Linear Search)
Concept
In sequential search, each element in the list is compared one by one with the search key
until a match is found or the list ends.
Algorithm
1. Start from the first element.
2. Compare the key with each element.
3. If match found → return position.
4. If end reached → element not found.
C Program
#include <stdio.h>
int main() {
    int arr[] = {10, 25, 30, 45, 50};
    int n = 5, key, i, found = 0;
    printf("Enter element to search: ");
    scanf("%d", &key);
    for (i = 0; i < n; i++) {
        if (arr[i] == key) {
            printf("Element found at position %d\n", i);
            found = 1;
            break;
        }
    }
    if (!found)
        printf("Element not found\n");
    return 0;
}
Time Complexity
Case Complexity
Best Case O(1)
Average Case O(n/2) ≈ O(n)
Worst Case O(n)
Advantages
Simple and easy to implement
Works for unsorted data
Disadvantages
Slow for large datasets
Requires n comparisons in the worst case
3. Index Sequential Search
Concept
This method is used for large datasets stored on disk.
It combines sequential search and indexing for faster access.
How It Works
1. Divide the data into blocks.
2. Create an index table storing the first key of each block.
3. First, search the index table to locate the correct block.
4. Then perform a sequential search within that block.
Example
Index Table Points to Block
10 Block 1 → [10, 12, 14, 16]
20 Block 2 → [20, 22, 24, 26]
30 Block 3 → [30, 32, 34, 36]
If key = 22
→ Found in Block 2
→ Sequentially search within [20, 22, 24, 26]
Advantage
Reduces number of comparisons compared to linear search
Suitable for large datasets on secondary storage
Disadvantage
Overhead of maintaining index
Not suitable for frequently changing data
4. Binary Search
Concept
Binary Search works on sorted arrays only.
It repeatedly divides the list into two halves and compares the middle element with the
search key.
Algorithm
1. Find middle element → mid = (low + high) / 2
2. If key == arr[mid] → Found
3. If key < arr[mid] → Search left half
4. If key > arr[mid] → Search right half
5. Repeat until low > high
C Program
#include <stdio.h>
int main() {
    int arr[] = {10, 20, 30, 40, 50};
    int n = 5, key, low = 0, high = n - 1, mid, found = 0;
    printf("Enter element to search: ");
    scanf("%d", &key);
    while (low <= high) {
        mid = (low + high) / 2;
        if (arr[mid] == key) {
            printf("Element found at position %d\n", mid);
            found = 1;
            break;
        } else if (key < arr[mid])
            high = mid - 1;
        else
            low = mid + 1;
    }
    if (!found)
        printf("Element not found\n");
    return 0;
}
Time Complexity
Case Complexity
Best Case O(1)
Average Case O(log n)
Worst Case O(log n)
Advantages
Very fast for large datasets
Efficient compared to linear search
Disadvantages
Works only on sorted arrays
Complex to implement compared to linear search
5. Concept of Hashing
Definition
Hashing is a technique of directly mapping a key to an address (called hash address) in a
table using a hash function.
It provides fast insertion, searching, and deletion.
How It Works
1. A hash function converts the key into an index (address).
2. The element is stored in that index of the hash table.
Formula:
Hash address = H(key)
Example:
If H(x) = x % 10
and keys = {25, 37, 45, 63}
then
Key Hash Address
25 5
37 7
45 5
63 3
Here, 25 and 45 collide (same address 5).
6. Collision in Hashing
Definition
When two or more keys are hashed to the same location, it’s called a collision.
Example
If H(x) = x % 10
and we insert 25 and 45 → both map to index 5.
7. Collision Resolution Techniques
When a collision occurs, we must find an alternate place to store the element.
(a) Open Addressing
All elements are stored in the same hash table.
If a slot is occupied, we search for another slot using a probing technique.
Types of Probing:
1. Linear Probing:
Next slot = (H(key) + i) % size
Example: if index 5 is full, try 6, then 7, etc.
2. Quadratic Probing:
Next slot = (H(key) + i²) % size
Avoids clustering problem.
3. Double Hashing:
Next slot = (H1(key) + i * H2(key)) % size
(b) Separate Chaining
Each slot of the hash table stores a linked list of elements that hash to the same address.
Example:
Index Elements
3 63 → NULL
5 25 → 45 → NULL
7 37 → NULL
Advantages of Hashing
Constant time access → O(1) average case
Efficient for large datasets
Disadvantages
Collisions reduce efficiency
Requires a good hash function
More memory usage if not managed properly
Summary Table
Technique Works On Time Complexity Key Feature
Sequential Search Unsorted data O(n) Simple
Index Sequential Large datasets ≈ O(√n) Uses index
Binary Search Sorted data O(log n) Divide & Conquer
Hashing Key-based access O(1) avg Direct access
Would you like me to add diagrams (for binary search, hashing, and collision resolution like
chaining & probing)?
They make these concepts much easier to visualize.
Here’s a detailed explanation of all the sorting algorithms you mentioned
1. Concept of Sorting
Definition
Sorting means arranging data in a particular order — ascending or descending — based
on some key.
Example:
Unsorted array → {5, 3, 8, 1, 2}
Sorted array → {1, 2, 3, 5, 8}
2. Insertion Sort
Concept
Insertion Sort builds the final sorted array one element at a time.
It works the same way you sort playing cards in your hand — pick one card and insert it into
the correct position.
Algorithm
1. Start from the second element (index = 1).
2. Compare it with previous elements and insert it in the correct position.
3. Repeat for all elements.
Example
Array = [5, 2, 4, 6, 1]
Pass Result
1 [2, 5, 4, 6, 1]
2 [2, 4, 5, 6, 1]
3 [2, 4, 5, 6, 1]
4 [1, 2, 4, 5, 6]
C Program
#include <stdio.h>
void insertionSort(int arr[], int n) {
    int i, j, key;
    for (i = 1; i < n; i++) {
        key = arr[i];
        j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}

int main() {
    int arr[] = {5, 2, 4, 6, 1};
    int n = 5;
    insertionSort(arr, n);
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    return 0;
}
Time Complexity
Case Complexity
Best (Already sorted) O(n)
Average O(n²)
Worst O(n²)
3. Selection Sort
Concept
Selection Sort repeatedly selects the smallest element from the unsorted part and
places it at the beginning.
Algorithm
1. Find the smallest element in the unsorted array.
2. Swap it with the first element of the unsorted part.
3. Repeat for all positions.
Example
Array = [64, 25, 12, 22, 11]
Pass Result
1 [11, 25, 12, 22, 64]
2 [11, 12, 25, 22, 64]
3 [11, 12, 22, 25, 64]
4 [11, 12, 22, 25, 64]
C Program
#include <stdio.h>
void selectionSort(int arr[], int n) {
    int i, j, min, temp;
    for (i = 0; i < n - 1; i++) {
        min = i;
        for (j = i + 1; j < n; j++) {
            if (arr[j] < arr[min])
                min = j;
        }
        temp = arr[min];
        arr[min] = arr[i];
        arr[i] = temp;
    }
}

int main() {
    int arr[] = {64, 25, 12, 22, 11};
    int n = 5;
    selectionSort(arr, n);
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    return 0;
}
Time Complexity
Case Complexity
Best O(n²)
Average O(n²)
Worst O(n²)
4. Bubble Sort
Concept
Bubble Sort repeatedly compares adjacent elements and swaps them if they are in the
wrong order.
After each pass, the largest element "bubbles up" to its correct position.
Algorithm
1. Compare adjacent elements.
2. Swap if the first is greater than the second.
3. Repeat until the array is sorted.
Example
Array = [5, 3, 8, 4, 2]
Pass Result
1 [3, 5, 4, 2, 8]
2 [3, 4, 2, 5, 8]
3 [3, 2, 4, 5, 8]
4 [2, 3, 4, 5, 8]
C Program
#include <stdio.h>
void bubbleSort(int arr[], int n) {
    int i, j, temp;
    for (i = 0; i < n - 1; i++) {
        for (j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
            }
        }
    }
}

int main() {
    int arr[] = {5, 3, 8, 4, 2};
    int n = 5;
    bubbleSort(arr, n);
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    return 0;
}
Time Complexity
Case Complexity
Best (sorted) O(n) — requires an early-exit "no swaps" flag; the version above always takes O(n²)
Average O(n²)
Worst O(n²)
5. Heap Sort
Concept
Heap Sort uses a binary heap data structure.
It first builds a max heap, then repeatedly removes the largest element and rebuilds the
heap.
Algorithm
1. Build a max heap from the array.
2. Swap the root (max element) with the last element.
3. Reduce heap size and heapify the root.
4. Repeat until all elements are sorted.
C Program
#include <stdio.h>
void heapify(int arr[], int n, int i) {
    int largest = i;
    int left = 2*i + 1;
    int right = 2*i + 2;
    if (left < n && arr[left] > arr[largest])
        largest = left;
    if (right < n && arr[right] > arr[largest])
        largest = right;
    if (largest != i) {
        int temp = arr[i];
        arr[i] = arr[largest];
        arr[largest] = temp;
        heapify(arr, n, largest);
    }
}

void heapSort(int arr[], int n) {
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(arr, n, i);
    for (int i = n - 1; i > 0; i--) {
        int temp = arr[0];
        arr[0] = arr[i];
        arr[i] = temp;
        heapify(arr, i, 0);
    }
}

int main() {
    int arr[] = {12, 11, 13, 5, 6, 7};
    int n = 6;
    heapSort(arr, n);
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    return 0;
}
Time Complexity
Case Complexity
Best O(n log n)
Average O(n log n)
Worst O(n log n)
6. Comparison of Sorting Algorithms
Algorithm Best Case Average Case Worst Case Space Stable
Bubble Sort O(n) O(n²) O(n²) O(1) Yes
Insertion Sort O(n) O(n²) O(n²) O(1) Yes
Selection Sort O(n²) O(n²) O(n²) O(1) No
Heap Sort O(n log n) O(n log n) O(n log n) O(1) No
7. Sorting in Linear Time
Some sorting algorithms can run in O(n) time when the data range is known or limited.
(a) Counting Sort
Concept
Counting Sort works by counting occurrences of each distinct element, then placing them
in the correct order.
Steps
1. Find the largest element (max).
2. Create a count array of size max+1.
3. Store the frequency of each element.
4. Modify count array → cumulative sum.
5. Place elements in output array using count array.
Example
Input = [4, 2, 2, 8, 3, 3, 1]
Count array after counting: [0,1,2,2,1,0,0,0,1]
Final sorted output = [1,2,2,3,3,4,8]
Time Complexity
O(n + k), where k = range of input values
(b) Bucket Sort
Concept
Bucket Sort divides the elements into multiple buckets, sorts each bucket individually
(using insertion sort or any other), and then merges them.
Steps
1. Divide the range of elements into buckets.
2. Distribute elements into buckets.
3. Sort each bucket.
4. Combine all buckets into one sorted array.
Example
Array = [0.25, 0.36, 0.58, 0.41, 0.29, 0.22]
Bucket Values (Sorted Individually)
0–0.2 -
0.2–0.4 [0.22, 0.25, 0.29, 0.36]
0.4–0.6 [0.41, 0.58]
Final output → [0.22, 0.25, 0.29, 0.36, 0.41, 0.58]
Time Complexity
Case Complexity
Best O(n + k)
Average O(n + k)
Worst O(n²)
Summary Table
| Sort      | Best       | Average    | Worst      | Space | Stable | Type      |
|-----------|------------|------------|------------|-------|--------|-----------|
| Bubble    | O(n)       | O(n²)      | O(n²)      | O(1)  | Yes    | Simple    |
| Insertion | O(n)       | O(n²)      | O(n²)      | O(1)  | Yes    | Simple    |
| Selection | O(n²)      | O(n²)      | O(n²)      | O(1)  | No     | Simple    |
| Heap      | O(n log n) | O(n log n) | O(n log n) | O(1)  | No     | Efficient |
| Counting  | O(n + k)   | O(n + k)   | O(n + k)   | O(k)  | Yes    | Linear    |
| Bucket    | O(n + k)   | O(n + k)   | O(n²)      | O(n)  | Yes    | Linear    |
Would you like me to include flowcharts or step-by-step visual examples (like array
diagrams) for each sort?
They make understanding sorting much easier.
Here’s a detailed explanation of all the graph-related concepts you mentioned
1. Graph Terminology
A graph is a non-linear data structure consisting of a set of vertices (nodes) and edges
(connections) that connect pairs of vertices.
Basic Terms:
Term Description
Vertex (Node) A point in the graph, represented as V. Example: V = {A, B, C, D}
Edge A connection between two vertices. Example: (A, B)
Directed Graph (Digraph) Edges have direction (→). Example: A → B
Undirected Graph Edges do not have direction. Example: A — B
Weighted Graph Each edge has a weight or cost (e.g., distance, time).
Degree of a Vertex Number of edges connected to the vertex. In directed graphs: in-degree → number of incoming edges; out-degree → number of outgoing edges.
Path A sequence of vertices connected by edges. Example: A → B → C
Cycle A path that starts and ends at the same vertex.
Connected Graph Every vertex is reachable from every other vertex.
Disconnected Graph Some vertices are not reachable from others.
Complete Graph Every pair of vertices is connected by an edge.
Sparse Graph Has few edges compared to the number of vertices.
Dense Graph Has many edges compared to the number of vertices.
2. Graph Representations
Graphs can be represented in three common ways:
A. Adjacency Matrix
A 2D array where each cell (i, j) indicates whether there is an edge from vertex i to
vertex j.
For unweighted graphs, entries are 0 (no edge) or 1 (edge exists).
For weighted graphs, entries store the edge weight.
Example:
Graph:
A—B
A—C
B—C
  A B C
A 0 1 1
B 1 0 1
C 1 1 0
Advantages:
Simple and easy to understand
Good for dense graphs
Disadvantages:
Uses O(V²) space even if few edges exist
Inefficient for sparse graphs
B. Adjacency List
Each vertex maintains a linked list of adjacent vertices.
Example:
Graph:
A → B, C
B→C
C→A
Adjacency List:
A→B→C
B→C
C→A
Advantages:
Efficient for sparse graphs
Uses less space: O(V + E)
Disadvantages:
Slower to check if two vertices are connected
C. Incidence Matrix
Rows represent vertices and columns represent edges.
If vertex v is incident to edge e, entry (v, e) = 1.
Not as commonly used as the first two.
3. Graph Traversal
Graph traversal means visiting all the vertices in a particular manner.
There are two main techniques:
A. Depth First Search (DFS)
Concept: Explore as far as possible along each branch before backtracking.
Uses: Stack (explicit or recursion)
Algorithm (Recursive DFS):
1. Start from a source vertex.
2. Mark it as visited.
3. Visit an adjacent unvisited vertex recursively.
4. Backtrack when no adjacent vertex remains.
Example:
Graph:
A→B→D
A→C
Traversal Order: A, B, D, C
Pseudocode:
void DFS(int v, int visited[], int adj[][n]) {  /* n = number of vertices */
    visited[v] = 1;
    printf("%d ", v);
    for (int i = 0; i < n; i++) {
        if (adj[v][i] == 1 && !visited[i])
            DFS(i, visited, adj);
    }
}
Applications:
Detecting cycles
Topological sorting
Pathfinding problems
B. Breadth First Search (BFS)
Concept: Explore all neighbors of a vertex before moving to the next level.
Uses: Queue
Algorithm:
1. Start from a source vertex and mark it visited.
2. Add it to a queue.
3. Remove vertex from queue → Visit all unvisited neighbors → Add them to queue.
4. Repeat until queue is empty.
Example:
Graph:
A → B, C
B→D
C→E
Traversal Order: A, B, C, D, E
Pseudocode:
void BFS(int start, int visited[], int adj[][n]) {  /* n = number of vertices */
    Queue q;
    enqueue(&q, start);
    visited[start] = 1;
    while (!isEmpty(&q)) {
        int v = dequeue(&q);
        printf("%d ", v);
        for (int i = 0; i < n; i++) {
            if (adj[v][i] == 1 && !visited[i]) {
                enqueue(&q, i);
                visited[i] = 1;
            }
        }
    }
}
Applications:
Finding the shortest path (in unweighted graphs)
Detecting connected components
Web crawlers, peer-to-peer networks
4. Connected Components
A connected component in an undirected graph is a set of vertices that are all reachable
from each other.
Example:
Graph:
A—B—C D—E
Here, there are two connected components:
{A, B, C} and {D, E}
Finding Connected Components (using DFS or BFS):
for (i = 0; i < n; i++) {
    if (!visited[i]) {
        DFS(i, visited, adj);
        printf("\n"); // Each DFS traversal gives one component
    }
}
Summary Table
Concept Description
Graph Collection of vertices and edges
Representation Adjacency Matrix / List
DFS Depth-based traversal (uses stack/recursion)
BFS Level-based traversal (uses queue)
Connected Component Subgraph where all vertices are connected
Would you like me to explain graph algorithms next (like Dijkstra’s, Kruskal’s, and Prim’s)?
Here’s a complete and detailed explanation of all the tree-related topics you mentioned
1. Basic Terminology Used with Trees
A Tree is a non-linear hierarchical data structure consisting of nodes connected by
edges.
Tree Terms
Term Description
Node A single element of a tree that contains data and links to child nodes.
Root The topmost node in a tree (has no parent).
Parent Node The node that has child nodes.
Child Node Node directly connected to another node when moving away from the root.
Siblings Nodes with the same parent.
Leaf Node A node with no children.
Edge The connection between a parent and a child node.
Path Sequence of nodes connected by edges.
Height Length of the longest path from root to a leaf.
Depth Distance of a node from the root.
Degree Number of children a node has.
Subtree A tree formed by any node and its descendants.
2. Binary Tree
A Binary Tree is a tree in which each node can have at most two children — called the
left child and right child.
Types of Binary Trees:
1. Strict (Proper) Binary Tree – Every node has 0 or 2 children.
2. Complete Binary Tree – All levels are filled except possibly the last, which is filled
from left to right.
3. Perfect Binary Tree – All internal nodes have two children and all leaves are at the
same level.
4. Skewed Binary Tree – All nodes have only one child (either left or right).
3. Binary Tree Representation
A. Array Representation
Used mainly for complete binary trees.
If the root is stored at index 1, then:
o Left child of node i = 2*i
o Right child of node i = 2*i + 1
o Parent of node i = i / 2
Example:
Binary Tree:
    A
   / \
  B   C
 / \
D   E
Array Representation:
Index: 1 2 3 4 5
Value: A B C D E
B. Linked List (Pointer) Representation
Each node contains:
Data
Pointer to Left Child
Pointer to Right Child
struct Node {
int data;
struct Node *left;
struct Node *right;
};
4. Binary Search Tree (BST)
A Binary Search Tree is a special kind of binary tree where:
The left subtree of a node contains only nodes with values less than the node’s
key.
The right subtree of a node contains only nodes with values greater than the
node’s key.
Both subtrees are also BSTs.
Example:
        50
       /  \
     30    70
    /  \   /  \
  20   40 60   80
5. Complete Binary Tree
A Complete Binary Tree is a binary tree in which:
All levels except possibly the last are completely filled.
The last level has all nodes as left as possible.
Example:
    1
   / \
  2   3
 / \  /
4   5 6
6. Extended Binary Tree (Full Binary Tree)
An Extended Binary Tree (also called Full Binary Tree) is a binary tree in which every node
has either 0 or 2 children — no node has only one child.
7. Tree Traversal Algorithms
Tree traversal means visiting every node of a tree exactly once in a specific order.
A. Inorder Traversal (Left → Root → Right)
void inorder(struct Node* root) {
    if (root != NULL) {
        inorder(root->left);
        printf("%d ", root->data);
        inorder(root->right);
    }
}
Example Output: 20 30 40 50 60 70 80
B. Preorder Traversal (Root → Left → Right)
void preorder(struct Node* root) {
    if (root != NULL) {
        printf("%d ", root->data);
        preorder(root->left);
        preorder(root->right);
    }
}
Example Output: 50 30 20 40 70 60 80
C. Postorder Traversal (Left → Right → Root)
void postorder(struct Node* root) {
    if (root != NULL) {
        postorder(root->left);
        postorder(root->right);
        printf("%d ", root->data);
    }
}
Example Output: 20 40 30 60 80 70 50
8. Constructing Binary Tree from Traversals
To construct a binary tree:
Inorder + Preorder → Unique tree
Inorder + Postorder → Unique tree
Example:
Inorder = D B E A F C
Preorder = A B D E C F
From preorder, root = A
In inorder, elements left of A are in left subtree → (D B E)
Elements right of A are in right subtree → (F C)
9. Operations on Binary Search Tree (BST)
A. Insertion
1. Start from root.
2. If new value < current node → go left.
3. If new value > current node → go right.
4. Insert where NULL is found.
B. Searching
Compare key with current node.
Go left or right depending on value.
Continue until key is found or NULL is reached.
C. Deletion
Cases:
1. Node is a leaf → Delete directly.
2. Node has one child → Replace node with its child.
3. Node has two children → Replace with inorder successor (smallest in right
subtree) or predecessor (largest in left subtree).
10. Threaded Binary Trees
A Threaded Binary Tree is used to make inorder traversal faster without using recursion
or a stack.
Normally, many pointers in a binary tree are NULL.
In a threaded tree, these NULL pointers are replaced by threads pointing to the
inorder predecessor or successor.
Advantages:
Faster inorder traversal
No stack or recursion needed
11. Huffman Coding Using Binary Tree
Huffman coding is a data compression algorithm that assigns shorter codes to more
frequent characters.
Steps:
1. Create leaf nodes for each character with frequency.
2. Insert all nodes into a min-heap (priority queue).
3. Repeat until only one node remains:
o Remove two smallest frequency nodes.
o Create a new internal node with these two as children.
o Insert the new node back.
4. The remaining node is the Huffman Tree root.
5. Left edge = 0, Right edge = 1 → generate codes.
Example:
A:5, B:9, C:12, D:13, E:16, F:45
Resulting codes:
F=0
C = 100
D = 101
A = 1100
B = 1101
E = 111
12. AVL Tree (Height Balanced BST)
An AVL Tree is a self-balancing Binary Search Tree where
for every node:
|Height(left subtree) – Height(right subtree)| ≤ 1
Rotations Used to Balance:
1. LL Rotation – Right rotation
2. RR Rotation – Left rotation
3. LR Rotation – Left-Right rotation
4. RL Rotation – Right-Left rotation
Advantages:
Faster searching and insertion compared to normal BST
Time complexity = O(log n)
13. B-Tree
A B-Tree is a balanced multi-way search tree used in databases and file systems.
Properties:
1. All leaves are at the same level.
2. Each node can have multiple keys and multiple children.
3. Keeps data sorted and allows binary search within nodes.
4. Maintains balance after insertion/deletion.
Advantages:
Reduces disk I/O operations
Efficient for large data storage
Summary Table
Concept Description
Binary Tree Each node has ≤ 2 children
BST Left < Root < Right
Complete Tree Filled except last level (left-filled)
Traversal Inorder, Preorder, Postorder
Threaded Tree Inorder traversal without stack
Huffman Tree Used in data compression
AVL Tree Self-balancing BST
B-Tree Multi-way balanced tree for databases
Would you like me to include illustrative diagrams for each tree type (like BST, AVL,
Huffman, and B-Tree)? It'll make the concept much clearer visually.