1. Algorithm
An algorithm is a well-defined, finite sequence of instructions used to solve a problem or
perform a computation.
It takes an input, processes it through logical steps, and produces a desired output.
Characteristics:
1. Finiteness: Must terminate after a finite number of steps.
2. Definiteness: Each step is clear and unambiguous.
3. Input: Takes zero or more inputs.
4. Output: Produces at least one output.
5. Effectiveness: Each step can be executed mechanically.
Example:
Binary Search algorithm finds an element in a sorted list by repeatedly dividing the search
interval in half.
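A minimal sketch of the idea in Python (the function and variable names are illustrative, not from the notes):

def binary_search(a, target):
    """Return the index of target in the sorted list a, or -1 if it is absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2      # middle of the current search interval
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            low = mid + 1            # discard the left half
        else:
            high = mid - 1           # discard the right half
    return -1

# Example: binary_search([1, 3, 5, 7, 9], 7) returns 3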
Goal:
Achieve correctness, efficiency, and clarity — ideally minimal time and space use.
2. Efficiency Goal of Algorithms
Algorithm efficiency measures how effectively an algorithm uses computational resources, primarily time and space.
● Time Complexity: How the runtime grows with input size n.
Represented in Big-O notation:
○ O(1): Constant time
○ O(log n): Logarithmic (e.g., Binary Search)
○ O(n): Linear (e.g., Linear Search)
○ O(n log n): Quasi-linear (e.g., Merge Sort; Quick Sort on average)
○ O(n²): Quadratic (e.g., Insertion Sort, Bubble Sort)
● Space Complexity: Memory needed for input, variables, recursion, and data structures.
Example: Merge Sort uses O(n) extra space; Quick Sort uses O(log n) for recursion.
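A small illustrative sketch of how growth rates differ (the function names are assumptions for illustration): a single loop does work proportional to n, while a nested loop does work proportional to n²:

def linear_work(n):          # O(n): one pass over the input
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def quadratic_work(n):       # O(n²): every element is paired with every other
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps

# linear_work(1000) -> 1000 steps; quadratic_work(1000) -> 1,000,000 steps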
Goal:
Design algorithms that minimize both time and space, achieving optimal performance for large
inputs.
3. NP Problems
“NP” stands for Nondeterministic Polynomial time.
A problem is NP if a given solution can be verified quickly (in polynomial time), even if
finding that solution may take a very long time.
Example:
Given a path in a graph, checking whether it visits every vertex exactly once and returns to its start (a Hamiltonian cycle) is fast; finding such a path may not be.
Key Point:
● NP problems may not be solvable quickly, but once a candidate solution is found, it can
be checked efficiently.
● Polynomial time = running time bounded by nᵏ for some constant k.
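For instance, a proposed Hamiltonian cycle can be verified in polynomial time; a minimal Python sketch (the edge-set representation and function name are illustrative assumptions):

def verify_hamiltonian_cycle(path, vertices, edges):
    # The path must cover every vertex exactly once.
    if set(path) != set(vertices) or len(path) != len(vertices):
        return False
    closed = path + [path[0]]        # the cycle must return to its starting vertex
    return all((u, v) in edges or (v, u) in edges
               for u, v in zip(closed, closed[1:]))

# Example: verify_hamiltonian_cycle([0, 1, 2], [0, 1, 2], {(0, 1), (1, 2), (2, 0)}) returns True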
4. NP-Complete Problems
A problem is NP-complete if:
1. It is in NP (its solutions can be verified quickly).
2. Every NP problem can be reduced to it in polynomial time.
These are the hardest problems in NP. If you find a polynomial-time algorithm for one NP-
complete problem, you can solve all NP problems efficiently.
Examples:
● Traveling Salesman Problem (TSP)
● 3-SAT (Boolean satisfiability problem)
● Subset Sum Problem
● Knapsack Problem
Key Idea:
No polynomial-time algorithm is known for any NP-complete problem (this is the famous P vs NP question).
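For example, a claimed Subset Sum solution is easy to verify even though finding one is believed to be hard; a minimal sketch (hypothetical helper, not from the notes):

def verify_subset_sum(numbers, subset, target):
    # Polynomial-time check: subset must be drawn from numbers and sum to target.
    remaining = list(numbers)
    for x in subset:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(subset) == target

# Example: verify_subset_sum([3, 34, 4, 12, 5, 2], [4, 5], 9) returns True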
5. Divide and Conquer
A design strategy that solves problems by:
1. Dividing the main problem into smaller subproblems of the same type.
2. Conquering each subproblem recursively.
3. Combining the results to form the final solution.
Examples:
● Merge Sort: Divides array into halves, sorts recursively, merges sorted halves.
● Quick Sort: Divides around a pivot, sorts subarrays.
● Binary Search: Divides search interval in half each iteration.
Advantages:
● Simplifies complex problems.
● Often yields better time complexity (e.g., O(n log n) for sorting).
Disadvantages:
● May use extra memory due to recursion (e.g., Merge Sort).
6. Recursion
A method where a function calls itself to solve smaller instances of a problem.
Structure:
1. Base Case: Condition to stop recursion.
2. Recursive Case: Function calls itself with smaller input.
Example:
factorial(n):
    if n <= 1: return 1                    # base case (also covers n = 0)
    else: return n * factorial(n - 1)      # recursive case on a smaller input
Advantages:
● Simplifies code for divide-and-conquer problems.
● Natural for problems like tree traversals or factorial.
Disadvantages:
● Uses more memory due to stack calls.
● Risk of stack overflow if the base case is missing or never reached.
7. The 37% Rule
Used in optimal stopping problems, where you must make a decision sequentially without
going back.
Concept:
If you’re faced with n choices (like candidates), you should:
1. Observe the first 37% of options (roughly n/e of them) without choosing any.
2. Then select the next one that’s better than all previous options.
Example:
Out of 100 job applicants, reject the first 37, then choose the next who’s better than all the first
37.
Use Cases: Hiring, dating, apartment hunting, etc.
Purpose: Maximize probability of picking the best option.
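A small simulation sketch of the rule in Python (the function name and uniform random scores are illustrative assumptions):

import random

def pick_with_37_rule(scores):
    # Observe the first ~37% of candidates, then take the first one better than all of them.
    n = len(scores)
    cutoff = int(n * 0.37)
    best_seen = max(scores[:cutoff], default=float("-inf"))
    for score in scores[cutoff:]:
        if score > best_seen:          # first candidate beating the observation benchmark
            return score
    return scores[-1]                  # otherwise forced to take the last candidate

trials = 10_000
wins = 0
for _ in range(trials):
    scores = [random.random() for _ in range(100)]
    if pick_with_37_rule(scores) == max(scores):
        wins += 1
print(wins / trials)                   # empirically close to 0.37, matching the theory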
8. Stack
A Last In, First Out (LIFO) data structure.
The last element inserted is the first removed.
Operations:
● push(x): Add item to top.
● pop(): Remove top item.
● peek(): View top item without removing.
Applications:
● Function call management (recursion stack)
● Undo operations in software
● Expression evaluation (postfix/prefix)
Time Complexity: O(1) for push/pop.
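A minimal sketch using a Python list as the underlying storage (illustrative, not from the notes):

class Stack:
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)        # add item to the top

    def pop(self):
        return self._items.pop()     # remove and return the top item

    def peek(self):
        return self._items[-1]       # view the top item without removing it

# LIFO order: the last item pushed is the first popped.
s = Stack()
s.push(1); s.push(2); s.push(3)
print(s.pop(), s.pop(), s.pop())     # 3 2 1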
9. Queue
A First In, First Out (FIFO) data structure.
The first element inserted is the first removed.
Operations:
● enqueue(x): Insert element at rear.
● dequeue(): Remove element from front.
Applications:
● Scheduling (CPU, printers)
● BFS (Breadth-First Search) in graphs
● Task management systems
Variants:
● Circular Queue (wrap-around indexing)
● Priority Queue (ordered by priority)
● Deque (Double-ended Queue)
Time Complexity: O(1) for enqueue/dequeue.
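A minimal sketch using collections.deque, which supports O(1) operations at both ends (illustrative, not from the notes):

from collections import deque

queue = deque()
queue.append("task1")        # enqueue at the rear
queue.append("task2")
queue.append("task3")

print(queue.popleft())       # dequeue from the front: "task1" (FIFO order)
print(queue.popleft())       # "task2"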
10. Linked List
A dynamic data structure where each node contains data and a pointer (link) to the next node.
Types:
● Singly Linked List: Each node points to the next.
● Doubly Linked List: Nodes point to both next and previous.
● Circular Linked List: Last node connects back to first.
Advantages:
● Dynamic size (no need for resizing).
● Efficient insertion/deletion, e.g., O(1) at the head once you hold a reference to the position.
Disadvantages:
● Sequential access only (O(n) search).
● Extra memory for pointers.
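A minimal singly linked list sketch in Python (class and method names are illustrative):

class Node:
    def __init__(self, data):
        self.data = data
        self.next = None               # pointer to the next node

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, data):        # O(1) insertion at the head
        node = Node(data)
        node.next = self.head
        self.head = node

    def find(self, target):            # O(n) sequential search
        current = self.head
        while current is not None:
            if current.data == target:
                return current
            current = current.next
        return None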
11. Hash Table and Hash Function
A hash table stores key–value pairs for efficient searching, insertion, and deletion.
A hash function converts a key into an array index.
Example:
h(k) = k mod m, where m = table size.
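A tiny sketch of this hash function in Python (the table size 10 is just an example value):

m = 10                       # table size
def h(k):
    return k % m             # maps an integer key to an index in 0..m-1

# Example: h(27) returns 7, so key 27 would be stored at index 7.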
Advantages:
● Average O(1) time for lookup and insert.
Disadvantages:
● Collisions (two different keys can produce the same index).
Collision Handling:
● Chaining: Store multiple elements in linked lists at same index.
● Open Addressing: Find next open slot using probing.
12. Chaining
Method to resolve hash collisions by storing multiple entries at one index using linked lists.
Example:
If keys 12, 22, 32 hash to index 2 → store as [12 → 22 → 32].
Pros:
● Easy to implement.
● Works well when the load factor (n/m) is kept low.
Cons:
● Slower when lists become long.
● Extra memory for pointers.
Time Complexity: O(1 + α), α = load factor.
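A minimal chaining sketch in Python, using a list of buckets in place of linked lists (an illustrative simplification, not from the notes):

class ChainedHashTable:
    def __init__(self, m=10):
        self.m = m
        self.buckets = [[] for _ in range(m)]     # one chain per slot

    def insert(self, key, value):
        bucket = self.buckets[key % self.m]       # h(k) = k mod m
        for i, (k, _) in enumerate(bucket):
            if k == key:                          # update an existing key in place
                bucket[i] = (key, value)
                return
        bucket.append((key, value))               # otherwise append to the chain

    def get(self, key):
        for k, v in self.buckets[key % self.m]:
            if k == key:
                return v
        return None

# With m = 10, keys 12, 22, and 32 all hash to index 2 and share one chain.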
13. Open Addressing
Collision handling method that searches for another available slot when collision occurs.
Types:
● Linear Probing: (h(k) + i) mod m
● Quadratic Probing: (h(k) + i²) mod m
● Double Hashing: (h1(k) + i * h2(k)) mod m
Advantages:
● Compact storage (no linked lists).
Disadvantages:
● Clustering (adjacent cells filled, causing slowdown).
Time Complexity: Expected O(1) when the load factor α is kept below roughly 0.7.
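A minimal linear-probing sketch in Python (illustrative; resizing and deletion are omitted):

class LinearProbingTable:
    def __init__(self, m=11):
        self.m = m
        self.slots = [None] * m                   # None marks an empty slot

    def insert(self, key, value):
        for i in range(self.m):                   # probe sequence: (h(k) + i) mod m
            idx = (key + i) % self.m
            if self.slots[idx] is None or self.slots[idx][0] == key:
                self.slots[idx] = (key, value)
                return
        raise RuntimeError("table is full")

    def get(self, key):
        for i in range(self.m):
            idx = (key + i) % self.m
            slot = self.slots[idx]
            if slot is None:                      # reached an empty slot: key is absent
                return None
            if slot[0] == key:
                return slot[1]
        return None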
14. Binary Search Tree (BST)
A tree data structure where:
● Keys in the left subtree are less than the node's key.
● Keys in the right subtree are greater than the node's key.
Operations:
● Insertion: Add new node in correct position.
● Search: Compare with root, go left/right.
● Deletion: Replace the node with its inorder predecessor or successor (for a node with two children).
Traversal Orders:
● In-order (LNR): Sorted output.
● Pre-order (NLR): Root-first (used in copying trees).
● Post-order (LRN): Root-last (used in deletion).
Time Complexity:
● Average: O(log n)
● Worst: O(n) (skewed tree)
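A minimal insert/search/traversal sketch in Python (deletion omitted for brevity; names are illustrative):

class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    if root is None:                      # empty spot found for the new key
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    if root is None or root.key == key:
        return root
    if key < root.key:
        return search(root.left, key)     # smaller keys live in the left subtree
    return search(root.right, key)

def inorder(root):                        # LNR traversal yields the keys in sorted order
    return inorder(root.left) + [root.key] + inorder(root.right) if root else []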
15. Insertion Sort
Idea: Build sorted array one element at a time by inserting each item into its correct position.
Steps Example:
[5,3,4,1] → [3,5,4,1] → [3,4,5,1] → [1,3,4,5]
Pseudocode:
for i = 1 to n-1:
    key = A[i]
    j = i - 1
    while j >= 0 and A[j] > key:
        A[j + 1] = A[j]        # shift larger elements one position right
        j = j - 1
    A[j + 1] = key             # insert key into its correct position
Time Complexity:
● Best: O(n) (already sorted)
● Average/Worst: O(n²)
● Space: O(1)
Stable: Yes (preserves order of equal elements).
16. Merge Sort
A Divide and Conquer sorting algorithm.
Steps:
1. Divide array into halves recursively.
2. Sort both halves.
3. Merge the sorted halves.
Pseudocode:
def MergeSort(A):
    if len(A) > 1:
        mid = len(A) // 2
        L = A[:mid]
        R = A[mid:]
        MergeSort(L)           # sort the left half recursively
        MergeSort(R)           # sort the right half recursively
        Merge(L, R, A)         # merge the sorted halves back into A
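The Merge step is not written out above; a minimal Python sketch, assuming Merge(L, R, A) overwrites A with the merged contents of the two sorted halves:

def Merge(L, R, A):
    i = j = k = 0
    while i < len(L) and j < len(R):      # repeatedly take the smaller front element
        if L[i] <= R[j]:
            A[k] = L[i]; i += 1
        else:
            A[k] = R[j]; j += 1
        k += 1
    while i < len(L):                     # copy any leftovers from L
        A[k] = L[i]; i += 1; k += 1
    while j < len(R):                     # copy any leftovers from R
        A[k] = R[j]; j += 1; k += 1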
Time Complexity: O(n log n)
Space Complexity: O(n)
Stable: Yes.
Use Case: External sorting (large datasets).
17. Heap Sort
Uses a binary max-heap (a complete binary tree in which every parent ≥ its children).
Steps:
1. Build max heap.
2. Swap root with last element.
3. Reduce heap size and heapify again.
Pseudocode:
HeapSort(A):
    BuildMaxHeap(A)
    for i = n downto 2:
        swap(A[1], A[i])
        heap_size = heap_size - 1
        MaxHeapify(A, 1)
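BuildMaxHeap and MaxHeapify are not defined above; a minimal 0-indexed Python sketch of the same idea (illustrative, not from the notes):

def heap_sort(A):
    n = len(A)

    def sift_down(i, size):                       # MaxHeapify: push A[i] down to its place
        while True:
            largest, left, right = i, 2 * i + 1, 2 * i + 2
            if left < size and A[left] > A[largest]:
                largest = left
            if right < size and A[right] > A[largest]:
                largest = right
            if largest == i:
                return
            A[i], A[largest] = A[largest], A[i]
            i = largest

    for i in range(n // 2 - 1, -1, -1):           # BuildMaxHeap
        sift_down(i, n)
    for end in range(n - 1, 0, -1):               # move the current max to the end
        A[0], A[end] = A[end], A[0]
        sift_down(0, end)                         # restore the heap on the shrunken prefix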
Time Complexity: O(n log n)
Space Complexity: O(1)
Stable: No.
Use Case: When in-place sorting is needed.
18. Quick Sort
Another Divide and Conquer algorithm.
Idea: Choose a pivot element, partition the array into two subarrays — smaller elements on left,
larger on right — and sort them recursively.
Steps Example:
Array = [9,3,7,1,6], pivot=6 → [3,1] 6 [9,7] → recursively sort left and right.
Pseudocode:
def QuickSort(A, low, high):
    if low < high:
        p = Partition(A, low, high)    # pivot ends up at index p
        QuickSort(A, low, p - 1)       # sort elements left of the pivot
        QuickSort(A, p + 1, high)      # sort elements right of the pivot
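Partition is not defined above; a minimal Lomuto-style sketch in Python (one common choice; the notes do not fix a particular scheme):

def Partition(A, low, high):
    pivot = A[high]                         # last element as the pivot
    i = low - 1
    for j in range(low, high):
        if A[j] <= pivot:                   # grow the region of elements <= pivot
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[high] = A[high], A[i + 1]   # place the pivot between the two regions
    return i + 1

# Example: A = [9, 3, 7, 1, 6]; QuickSort(A, 0, len(A) - 1) leaves A as [1, 3, 6, 7, 9].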
Time Complexity:
● Best/Average: O(n log n)
● Worst: O(n²) (bad pivot, e.g., sorted input)
Space: O(log n) (recursion stack)
Stable: No.
Advantage: Very fast in practice, good cache performance.
19. Comparison of Sorting Algorithms
Algorithm       Best        Average     Worst       Space     Stable  Notes
Insertion Sort  O(n)        O(n²)       O(n²)       O(1)      Yes     Good for small arrays
Merge Sort      O(n log n)  O(n log n)  O(n log n)  O(n)      Yes     Reliable, but not in-place
Heap Sort       O(n log n)  O(n log n)  O(n log n)  O(1)      No      In-place, used in system sorts
Quick Sort      O(n log n)  O(n log n)  O(n²)       O(log n)  No      Fastest on average