1. Time and Space Complexity of Algorithms
Time and space complexity are the two primary measures used to evaluate the efficiency of an
algorithm. Time complexity describes the amount of time an algorithm takes to run as a function of
the input size, while space complexity deals with the amount of memory consumed. Time
complexity is not measured in real clock time because hardware and system load may affect
runtime. Instead, it is measured in terms of the number of elementary operations, such as
comparisons or assignments, that the algorithm performs. For example, if an algorithm requires
looping through all n elements of an array once, its time complexity is O(n). If it requires a nested
loop that iterates n times inside another loop of n, its complexity becomes O(n^2). Space
complexity includes fixed parts (like program instructions, constants, and simple variables) and
variable parts (like dynamically allocated memory, recursion stack space, and arrays). For instance,
recursive algorithms like merge sort need extra space for the recursion stack, while in-place iterative
algorithms like bubble sort do not. Understanding both complexities is crucial. Sometimes an
algorithm with lower time complexity consumes more memory (as in dynamic programming), while
another uses less memory but takes more time. Thus, algorithm analysis balances trade-offs between time
and space.
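As a minimal sketch of this time/space trade-off (the function names are illustrative, not from the text), the Python snippet below contrasts an iterative sum that runs in O(n) time with O(1) extra space against a recursive sum that also runs in O(n) time but consumes O(n) stack space:

    def iterative_sum(arr):
        # O(n) time, O(1) extra space: only a single accumulator variable.
        total = 0
        for x in arr:
            total += x
        return total

    def recursive_sum(arr, i=0):
        # O(n) time, O(n) extra space: one stack frame per element.
        if i == len(arr):
            return 0
        return arr[i] + recursive_sum(arr, i + 1)

    print(iterative_sum([1, 2, 3, 4]))  # 10
    print(recursive_sum([1, 2, 3, 4]))  # 10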
2. Deriving Time Complexity Polynomial by Execution Counts
Deriving the time complexity polynomial involves counting the number of basic operations executed
by an algorithm’s pseudocode. Each line or block of pseudocode is analyzed for how many times it
executes relative to the input size n. For example, consider the pseudocode:

    for i = 1 to n:
        for j = 1 to n:
            print(i, j)

Here, the inner loop executes n times for each iteration of the outer loop. Thus, the total
number of executions is n * n = n^2. Adding constants and lower-order terms gives us a polynomial
expression like an^2 + bn + c. In asymptotic analysis, we drop constants and lower-order terms,
leaving O(n^2). This process allows us to generalize algorithm efficiency mathematically without
running the program. The polynomial form also helps compare algorithms; an algorithm with O(n
log n) will always perform better than O(n^2) for large n, regardless of constants.
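One rough way to check the derived polynomial is to instrument the loops and count operations directly. The Python sketch below (the counter and function name are illustrative) confirms that the innermost statement executes exactly n * n times:

    def count_nested_operations(n):
        # Count how many times the innermost statement executes.
        count = 0
        for i in range(1, n + 1):      # outer loop: n iterations
            for j in range(1, n + 1):  # inner loop: n iterations per outer iteration
                count += 1             # stands in for print(i, j)
        return count

    for n in (1, 2, 5, 10):
        print(n, count_nested_operations(n))  # prints n and n*n: 1, 4, 25, 100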
3. Best, Average, and Worst Case Analysis
When analyzing algorithms, three performance cases are considered: best, average, and worst.
The best case represents the scenario where the algorithm performs the minimum number of steps.
For example, in linear search, if the element to be found is the first element, the search completes
in O(1). The worst case is the maximum number of steps taken, such as searching for an element that is not in
the array, which requires O(n) comparisons. The average case provides a realistic measure by
considering all possible inputs and averaging the performance. For linear search, the average case
assumes the element may be anywhere in the array, giving about n/2 comparisons, which simplifies
to O(n). This distinction is important because algorithms may perform very differently under different
conditions. For instance, quicksort runs in O(n log n) on average but may degrade to O(n^2) in the
worst case if the pivot choices are poor. Hence, analyzing all three cases ensures we understand
the full performance spectrum of an algorithm, guiding its suitability for practical use.
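To make the three cases concrete, here is a small Python sketch of linear search; the comments mark where the best, worst, and average cases described above come from:

    def linear_search(arr, target):
        # Best case: target is arr[0], one comparison -> O(1).
        # Worst case: target is last or absent, n comparisons -> O(n).
        # Average case: target equally likely anywhere, about n/2 comparisons -> O(n).
        for i, value in enumerate(arr):
            if value == target:
                return i
        return -1  # not found

    print(linear_search([7, 3, 9, 1], 7))  # best case: index 0
    print(linear_search([7, 3, 9, 1], 5))  # worst case: -1 after checking all 4 elements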
4. Bubble Sort
Bubble sort is a simple comparison-based sorting algorithm. It works by repeatedly swapping
adjacent elements if they are in the wrong order. Consider an array of size n: each pass compares
adjacent pairs and pushes the largest element to the end, like a bubble rising in water. The process
repeats until the array is sorted. For example, with input [5, 3, 2, 4], bubble sort compares (5,3) →
swap → [3,5,2,4], then (5,2) → swap → [3,2,5,4], and so on. After the first pass, the largest element
(5) is at the end. After n-1 passes, the array is sorted. Complexity analysis: in the worst case
(reverse order), each pass makes n-1 comparisons, totaling (n-1) + (n-2) + … + 1 = n(n-1)/2 ≈
O(n^2). Best case: if the array is already sorted, a version that stops when a pass makes no swaps finishes in a single pass, giving O(n). Average case is O(n^2).
Bubble sort is rarely used in practice due to inefficiency but is useful for teaching algorithm
concepts.
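A minimal Python sketch of bubble sort follows, including the early-exit flag that gives the O(n) best case on already-sorted input:

    def bubble_sort(arr):
        n = len(arr)
        for i in range(n - 1):            # at most n-1 passes
            swapped = False
            for j in range(n - 1 - i):    # the last i elements are already in place
                if arr[j] > arr[j + 1]:
                    arr[j], arr[j + 1] = arr[j + 1], arr[j]
                    swapped = True
            if not swapped:               # no swaps: already sorted, stop early (O(n) best case)
                break
        return arr

    print(bubble_sort([5, 3, 2, 4]))  # [2, 3, 4, 5]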
5. Insertion Sort
Insertion sort builds a sorted portion of the array step by step by inserting elements into their correct
positions. Starting from the second element, it compares with elements before it and inserts it in the
correct place. This process repeats for all elements. For example, array [4, 3, 1, 2]: take 3, compare
with 4, insert before → [3,4,1,2]. Next, take 1, insert before both → [1,3,4,2]. Finally, insert 2
between 1 and 3 → [1,2,3,4]. Time complexity: worst case occurs when the array is in reverse
order, requiring about n^2/2 comparisons = O(n^2). Best case is when the array is already sorted,
requiring only O(n) comparisons. Average case is O(n^2). Insertion sort is efficient for small arrays
or nearly sorted data and is commonly used in hybrid algorithms like Timsort.
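A short Python sketch of insertion sort as described above; the while loop shifts larger elements to the right until the correct slot for the current element is found:

    def insertion_sort(arr):
        for i in range(1, len(arr)):
            key = arr[i]                  # element to insert into the sorted prefix arr[0..i-1]
            j = i - 1
            while j >= 0 and arr[j] > key:
                arr[j + 1] = arr[j]       # shift the larger element one slot to the right
                j -= 1
            arr[j + 1] = key              # place key in its correct position
        return arr

    print(insertion_sort([4, 3, 1, 2]))  # [1, 2, 3, 4]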
6. Binary Search
Binary search is an efficient searching algorithm for sorted arrays. It works by repeatedly dividing
the search interval in half. Starting with the entire array, compare the target value with the middle
element. If equal, return the index. If smaller, search the left half; if larger, search the right half. This
halves the search space at every step. Example: searching for 23 in [10,15,20,23,28,35]. Middle
element = 20. Since 23 > 20, search right half [23,28,35]. Middle = 28. Since 23 < 28, search left
half [23]. Found! Complexity: in the best case, O(1) (if the middle element is the target). In the worst
case, each step halves the array, giving O(log n). Average case is also O(log n). Binary search is
fundamental in algorithm design and underlies many advanced data structures like binary search
trees.
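An iterative Python sketch of binary search matching the walk-through above; it assumes the input list is already sorted in ascending order:

    def binary_search(arr, target):
        low, high = 0, len(arr) - 1
        while low <= high:
            mid = (low + high) // 2      # middle index of the current interval
            if arr[mid] == target:
                return mid               # found (best case O(1) if hit on the first probe)
            elif target < arr[mid]:
                high = mid - 1           # discard the right half
            else:
                low = mid + 1            # discard the left half
        return -1                        # target not present

    print(binary_search([10, 15, 20, 23, 28, 35], 23))  # 3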
7. Asymptotic Notations – O, Θ, Ω
Asymptotic notations describe the growth rate of an algorithm’s runtime or memory with input size
n. They provide a machine-independent framework for comparing algorithms. Big O (O): describes
the upper bound. Example: if an algorithm runs in 2n^2 + 3n + 5 steps, we write O(n^2), ignoring
constants and smaller terms. It means the algorithm will not take longer than proportional to n^2.
Big Omega (Ω): describes the lower bound. If the algorithm takes at least cn^2 steps for large n, it is
Ω(n^2). Big Theta (Θ): describes a tight bound. If an algorithm is both O(n^2) and Ω(n^2), then it is
Θ(n^2). Graphs help visualize: O(n) grows linearly, O(n^2) quadratically, O(log n) grows very slowly.
These notations allow us to abstract away machine differences and focus purely on input size
effects.
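As a rough numerical check of these bounds (the constants c1, c2 and threshold n0 below are one illustrative choice, not unique), the Python sketch verifies that 2n^2 + 3n + 5 stays between 2n^2 and 3n^2 for all n >= 5, which is exactly what Θ(n^2) asserts:

    def f(n):
        return 2 * n**2 + 3 * n + 5      # example step-count polynomial from the text

    c1, c2, n0 = 2, 3, 5                 # witness constants: c1*n^2 <= f(n) <= c2*n^2 for n >= n0
    for n in range(n0, 1001):
        assert c1 * n**2 <= f(n) <= c2 * n**2, n

    print("2n^2 + 3n + 5 is Theta(n^2): bounds hold for n = 5 .. 1000")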
8. Logarithmic Time Complexity & Graphs
Logarithmic time complexity arises when the input size reduces by a constant factor in each step.
Binary search is the classic example, reducing the search space by half each iteration.
Mathematically, if an algorithm divides the problem size n by 2 repeatedly, the number of steps is
log2(n). For example, if n = 16, binary search requires at most log2(16) = 4 steps. For n =
1,000,000, binary search takes only ~20 steps, demonstrating efficiency. Graphically, O(log n)
grows far slower than O(n) and O(n^2). While linear algorithms increase proportionally with n,
logarithmic ones barely rise, making them ideal for large-scale problems. Many advanced data
structures, like balanced trees and heaps, achieve O(log n) operations for insertion, deletion, and
search.
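The halving argument can be checked directly: the Python sketch below counts how many times n can be halved before the interval shrinks to a single element, and compares the count with log2(n):

    import math

    def halving_steps(n):
        # Count how many times n can be halved until only one element remains.
        steps = 0
        while n > 1:
            n //= 2
            steps += 1
        return steps

    for n in (16, 1_000_000):
        print(n, halving_steps(n), round(math.log2(n), 2))
    # 16 -> 4 steps (log2(16) = 4); 1,000,000 -> 19 steps (log2(1,000,000) ≈ 19.93, i.e. about 20)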
Graph: Comparison of Time Complexities