1. Time Complexity & Asymptotic Notation
What is it?
Time complexity describes how an algorithm's running time grows as the input size grows.
Asymptotic Notations:
O(n) – Upper bound (Big O); most often quoted for the worst case.
Ω(n) – Lower bound (Omega); most often quoted for the best case.
Θ(n) – Tight bound (Theta); the growth rate is pinned down exactly.
(The notations bound growth rates; calling them "worst/best/average case" is a common shorthand, not their definition.)
Example:
// O(n): the loop body runs n times
for (int i = 0; i < n; i++) {
    printf("%d", i);
}

// O(n^2): the inner body runs n * n times
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        printf("%d %d", i, j);
    }
}
2. Linear Search
When to Use?
When the array is unsorted.
Idea:
Check every element one-by-one.
C Code:
int linear_search(int arr[], int n, int key) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == key)
            return i;   /* found: return the index */
    }
    return -1;          /* not found */
}
Time Complexity:
Best Case: O(1)
Worst Case: O(n)
3. Binary Search
When to Use?
Only works on sorted arrays.
Idea:
Repeatedly compare the key with the middle element and halve the search range.
C Code:
int binary_search(int arr[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  /* avoids overflow of low + high */
        if (arr[mid] == key)
            return mid;
        else if (arr[mid] < key)
            low = mid + 1;                 /* key is in the right half */
        else
            high = mid - 1;                /* key is in the left half */
    }
    return -1;
}
Time Complexity:
Best Case: O(1)
Worst Case: O(log n)
4. Max-Min using Divide and Conquer
When to Use?
To reduce comparisons while finding max and min: roughly 3n/2 comparisons instead of the roughly 2n a naive scan uses.
Idea:
Split array, find max/min in parts, then combine.
C Code:
void max_min(int arr[], int low, int high, int* max, int* min) {
    if (low == high) {                 /* one element */
        *max = *min = arr[low];
    } else if (high == low + 1) {      /* two elements: one comparison */
        if (arr[low] > arr[high]) {
            *max = arr[low];
            *min = arr[high];
        } else {
            *max = arr[high];
            *min = arr[low];
        }
    } else {                           /* split, solve halves, combine */
        int mid = low + (high - low) / 2, max1, min1, max2, min2;
        max_min(arr, low, mid, &max1, &min1);
        max_min(arr, mid + 1, high, &max2, &min2);
        *max = (max1 > max2) ? max1 : max2;
        *min = (min1 < min2) ? min1 : min2;
    }
}
Time Complexity:
O(n)
5. Merge Sort
When to Use?
When a stable sort with guaranteed O(n log n) performance is needed.
Idea:
Divide → Sort → Merge
C Code:
void merge(int arr[], int l, int m, int r) {
    int i, j, k, n1 = m - l + 1, n2 = r - m;
    int L[n1], R[n2];                  /* temporary copies of the two halves */
    for (i = 0; i < n1; i++) L[i] = arr[l + i];
    for (j = 0; j < n2; j++) R[j] = arr[m + 1 + j];
    i = j = 0; k = l;
    while (i < n1 && j < n2) arr[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
    while (i < n1) arr[k++] = L[i++];  /* copy any leftovers */
    while (j < n2) arr[k++] = R[j++];
}

void merge_sort(int arr[], int l, int r) {
    if (l < r) {
        int m = l + (r - l) / 2;
        merge_sort(arr, l, m);
        merge_sort(arr, m + 1, r);
        merge(arr, l, m, r);
    }
}
Time Complexity:
O(n log n) in all cases
6. Quick Sort
When to Use?
Fast in practice, especially on random or average-case inputs.
Idea:
Pick pivot → Partition → Recurse
C Code:
int partition(int arr[], int low, int high) {
    int pivot = arr[high], i = low - 1;    /* last element as pivot */
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            int temp = arr[i]; arr[i] = arr[j]; arr[j] = temp;
        }
    }
    int temp = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = temp;
    return i + 1;                          /* pivot's final position */
}

void quick_sort(int arr[], int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);
        quick_sort(arr, low, pi - 1);
        quick_sort(arr, pi + 1, high);
    }
}
Time Complexity:
Average: O(n log n)
Worst: O(n²)
7. Selection Sort
When to Use?
Simple sorting method for small datasets.
Idea:
Find the minimum in the unsorted part and swap it to the front of that part.
C Code:
void selection_sort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int min_idx = i;                   /* find minimum of the unsorted part */
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[min_idx])
                min_idx = j;
        }
        int temp = arr[i];                 /* swap it to position i */
        arr[i] = arr[min_idx];
        arr[min_idx] = temp;
    }
}
Time Complexity:
Always O(n²)
8. Insertion Sort
When to Use?
Efficient for nearly sorted arrays or small inputs.
Idea:
Insert each element into its correct position within the already-sorted prefix.
C Code:
void insertion_sort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i], j = i - 1;
        while (j >= 0 && arr[j] > key) {   /* shift larger elements right */
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;                  /* insert key into its slot */
    }
}
Time Complexity:
Best: O(n), Worst: O(n²)
9. Bubble Sort
When to Use?
Educational use; not efficient for large datasets.
Idea:
Compare adjacent elements and swap if needed.
C Code:
void bubble_sort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int swapped = 0;
        for (int j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {     /* swap adjacent out-of-order pair */
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
                swapped = 1;
            }
        }
        if (!swapped) break;  /* already sorted: this gives the O(n) best case */
    }
}
Time Complexity:
Best: O(n), Worst: O(n²)
10. Greedy Approach
A Greedy Algorithm is an algorithmic paradigm that builds up a solution piece by piece,
always choosing the next piece that offers the most immediate benefit or appears to be
the best at that moment.
It never revisits past decisions, which is what makes it fast, but also sometimes
suboptimal.
Key Characteristics:
1. Greedy Choice Property: A globally optimal solution can be arrived at by choosing
the best local option at each step.
2. Optimal Substructure: An optimal solution to the problem contains optimal
solutions to subproblems.
When Does Greedy Work?
When a problem has both the greedy-choice property and optimal substructure.
Greedy algorithms don’t always guarantee optimal solutions unless the problem
satisfies these properties. For example, greedy coin change with denominations {1, 3, 4} pays
amount 6 as 4 + 1 + 1 (three coins), while the optimal answer is 3 + 3 (two coins).
Examples of Greedy Algorithms:
Problem – Greedy Strategy
Activity Selection – Always pick the activity that ends earliest.
Fractional Knapsack – Take items with the highest value/weight ratio first.
Huffman Encoding – Merge the two least frequent characters (nodes) first.
Prim’s Algorithm (MST) – Choose the minimum-weight edge connecting a visited vertex to an unvisited one.
Kruskal’s Algorithm (MST) – Pick the least-weight edge that doesn’t form a cycle.
Dijkstra’s Algorithm – Choose the nearest unvisited vertex.
Example 1: Activity Selection Problem
You are given n activities with start and end times. Select the maximum number of activities
that don’t overlap.
Greedy Strategy:
Sort by end time.
Select the activity with the earliest finishing time that doesn’t overlap.
Time Complexity:
O(n log n) for sorting + O(n) selection
Why it works?
Because picking the activity that finishes first leaves the most room for others.
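A minimal C sketch of this strategy follows; the Activity struct, the by_end comparator, and the sample data are illustrative choices, not part of the notes.

#include <stdio.h>
#include <stdlib.h>

typedef struct { int start, end; } Activity;

/* qsort comparator: order activities by earliest end time. */
static int by_end(const void *a, const void *b) {
    return ((const Activity *)a)->end - ((const Activity *)b)->end;
}

/* Returns how many non-overlapping activities the greedy rule selects. */
int select_activities(Activity acts[], int n) {
    if (n == 0) return 0;
    qsort(acts, n, sizeof(Activity), by_end);
    int count = 1;              /* the earliest-finishing activity is always taken */
    int last_end = acts[0].end;
    for (int i = 1; i < n; i++) {
        if (acts[i].start >= last_end) {  /* starts after the last chosen one ends */
            count++;
            last_end = acts[i].end;
        }
    }
    return count;
}

int main(void) {
    Activity acts[] = {{1, 4}, {3, 5}, {0, 6}, {5, 7}, {8, 9}, {5, 9}};
    printf("%d\n", select_activities(acts, 6));   /* prints 3 */
    return 0;
}

Sorting dominates the cost, which matches the O(n log n) + O(n) bound above.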
Example 2: Fractional Knapsack
Maximize total value in a bag with limited weight. You can take fractions of items.
Greedy Strategy:
Calculate value/weight ratio for each item.
Sort items by this ratio in descending order.
Pick items greedily until bag is full.
Time Complexity:
O(n log n) for sorting items
Why it works?
Because fractions are allowed, filling the bag with the highest value-per-weight items first never wastes capacity, so it yields the maximum total value.
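A minimal C sketch of the same idea, again with illustrative names and sample data; values and weights are doubles so that fractions can be taken.

#include <stdio.h>
#include <stdlib.h>

typedef struct { double value, weight; } Item;

/* qsort comparator: sort items by value/weight ratio, descending. */
static int by_ratio(const void *a, const void *b) {
    double ra = ((const Item *)a)->value / ((const Item *)a)->weight;
    double rb = ((const Item *)b)->value / ((const Item *)b)->weight;
    return (rb > ra) - (rb < ra);
}

/* Returns the maximum value that fits in a bag of the given capacity. */
double fractional_knapsack(Item items[], int n, double capacity) {
    qsort(items, n, sizeof(Item), by_ratio);
    double total = 0.0;
    for (int i = 0; i < n && capacity > 0; i++) {
        if (items[i].weight <= capacity) {   /* take the whole item */
            total += items[i].value;
            capacity -= items[i].weight;
        } else {                             /* take only the fraction that fits */
            total += items[i].value * (capacity / items[i].weight);
            capacity = 0;
        }
    }
    return total;
}

int main(void) {
    Item items[] = {{60, 10}, {100, 20}, {120, 30}};
    printf("%.1f\n", fractional_knapsack(items, 3, 50));   /* prints 240.0 */
    return 0;
}

Here the ratios are 6, 5, and 4, so the sketch takes the first two items whole and two-thirds of the third: 60 + 100 + 80 = 240.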