Algorithms
● A step-by-step procedure for solving a problem or performing a task.
● Characteristics:
● Finite: An algorithm must terminate after a limited number of steps.
● Definite: Each step must be clearly defined.
● Input/Output: An algorithm has zero or more inputs and one or more outputs.
● Effective: Every step in an algorithm should be basic enough to be performed easily.
● Recipe for Cooking Pasta:
● Step 1: Boil water (Input: water and heat source).
● Step 2: Add pasta and cook for a specific time (Input: pasta, timing).
● Step 3: Drain the water (Output: cooked pasta).
● Just like a recipe, an algorithm defines the steps to achieve the desired outcome.
● Finding a Name in a Phonebook:
● If you go through the phonebook name by name (linear search), it will take O(n) time to check each entry.
● However, using a binary search (divide-and-conquer method), you can find the name in O(log n) time, which is much faster as the input size grows.
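To make the contrast concrete, here is a minimal Python sketch (illustrative only, not from the notes; the phonebook names are made up):

from bisect import bisect_left

def linear_search(names, target):          # O(n): check every entry in turn
    for i, name in enumerate(names):
        if name == target:
            return i
    return -1

def binary_search(sorted_names, target):   # O(log n): halve the search range each step
    i = bisect_left(sorted_names, target)
    if i < len(sorted_names) and sorted_names[i] == target:
        return i
    return -1

phonebook = ["Asha", "Bilal", "Chen", "Divya", "Elena"]   # already sorted
print(linear_search(phonebook, "Divya"))    # 3
print(binary_search(phonebook, "Divya"))    # 3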
Analyzing Algorithms
● Time Complexity: How the running time of an algorithm increases with the input size.
● Space Complexity: How much extra memory the algorithm needs.

Complexity of Algorithms
● Big O Notation (O): Represents the upper bound of an algorithm's running time (worst-case scenario).
● Examples:
○ Constant Time: O(1)
○ Linear Time: O(n)
○ Quadratic Time: O(n²)
● Ω (Omega): Best-case scenario.
● Θ (Theta): Average-case scenario.
Growth of Functions
● The growth of a function describes how the runtime of an algorithm increases with the size of its input.
● It helps us understand and compare the efficiency of algorithms.
● Common Growth Rates:
● Constant Time (O(1)): The runtime does not increase with the input size.
Example: Accessing an element from an array by its index.
● Logarithmic Time (O(log n)): The runtime grows slowly as the input size increases.
Example: Binary search on a sorted array.
● Linear Time (O(n)): The runtime grows proportionally with the input size.
Example: Scanning through an array of n elements.
● Linearithmic Time (O(n log n)): The runtime grows faster than linear but slower than quadratic.
Example: Efficient sorting algorithms like Merge Sort.
● Quadratic Time (O(n²)): The runtime grows quadratically as the input grows.
Example: Comparing all pairs of elements in a list (nested loops).
● Exponential Time (O(2ⁿ)): The runtime doubles with each increase in input size.
Example: Recursive algorithms like solving the Tower of Hanoi.
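To connect the growth rates above to code shape, a small illustrative Python sketch (the function names are our own):

def first_element(arr):     # O(1): one step, regardless of input size
    return arr[0]

def total(arr):             # O(n): touches each element exactly once
    s = 0
    for x in arr:
        s += x
    return s

def all_pairs(arr):         # O(n²): nested loops compare every pair
    pairs = []
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            pairs.append((arr[i], arr[j]))
    return pairs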
Asymptotic Notations
● Asymptotic notations describe the limiting behavior of an algorithm's runtime as the input size grows.
● They help us classify algorithms based on their performance in the best, worst, and average cases.
● The three most commonly used asymptotic notations are:

1. Big O Notation (O) – Worst Case
● Describes the upper bound of the runtime, giving the worst-case scenario.
● Big O gives the highest curve that the algorithm will touch as the input size grows.
● Purpose: It helps ensure that the algorithm will never take more time than the specified amount.
● Example: In Bubble Sort, the time complexity is O(n²) because, in the worst case, it compares every element with every other element.

2. Omega Notation (Ω) – Best Case
● Describes the lower bound of the runtime, giving the best-case scenario.
● Omega gives the lowest curve representing the best case.
● Purpose: It shows the minimum amount of time an algorithm will take.
● Example: For Linear Search, the best-case time complexity is Ω(1) when the target element is the first in the list.

3. Theta Notation (Θ) – Average Case
● Describes both the upper and lower bounds, giving the average-case scenario.
● Theta shows the middle curve, representing how the algorithm behaves on average.
● Purpose: It provides a more realistic idea of the algorithm's performance by covering both best and worst-case behavior.
● Example: In Merge Sort, the time complexity is Θ(n log n) in all cases, meaning it performs consistently.
● Sorting a Deck of Cards:
● If the deck is already sorted, you only need to check once (best case, Ω(n)).
● If the deck is completely unsorted, you'll have to go through the entire process (worst case, O(n²)).
● Most of the time, it's somewhat shuffled, and you'll need a middle amount of effort (average case, Θ(n log n)).

GP Series: a + ax + ax² + … + ax^(n-1) = a(1 − x^n)/(1 − x), for x ≠ 1

Substitution Method
● The substitution method is a technique used to solve recurrence relations, often found in the analysis of recursive algorithms.
● Using an iterative approach within the substitution method involves repeatedly substituting the recurrence relation to find a pattern and derive a closed-form solution.
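A worked illustration of the iterative substitution idea (a standard example, not taken from these notes). Unrolling T(n) = 2T(n/2) + n with T(1) = c:

T(n) = 2T(n/2) + n
     = 4T(n/4) + 2n
     = 8T(n/8) + 3n
     = …
     = 2^k · T(n/2^k) + k·n

Setting 2^k = n (that is, k = log₂ n) gives T(n) = c·n + n·log₂ n = O(n log n).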
Performance Measurement
● Performance measurement involves evaluating how efficiently an algorithm performs in terms of both time and space.
● It helps in comparing different algorithms and selecting the best one for a specific task.
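In practice, running time can be measured with Python's standard timeit module; a minimal sketch (the data and statements are made up for illustration):

import timeit

setup = "data = list(range(10_000))"
scan = timeit.timeit("9_999 in data", setup=setup, number=1_000)    # linear membership scan
index = timeit.timeit("data[9_999]", setup=setup, number=1_000)     # direct O(1) indexing
print(f"membership scan: {scan:.4f}s, direct index: {index:.4f}s")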
Trade-offs Between Time and Space
● Often, there's a trade-off between time and space.
● For example, an algorithm might take less time but use more memory, or vice versa.
● Selecting the best algorithm often depends on whether time or space is the more critical factor.

● File Compression:
● Time Complexity: Faster algorithms can compress files more quickly, which is crucial when dealing with large files.
● Space Complexity: Some algorithms save more space by compressing data more efficiently, at the cost of taking more time.

Master Theorem
The Master Theorem is a quick trick to find the time complexity of divide-and-conquer algorithms that split problems into equal parts and merge the solutions, saving you from complicated math!
T(n) = aT(n/b) + f(n)
● Where: T(n) is the time complexity function.
● a ≥ 1 is the number of subproblems.
● n is the problem size.
● b > 1 is the division factor for the problem size.
● f(n) is the cost of dividing the problem plus the cost of merging the solutions.
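For reference, the most commonly used case of the theorem (standard statement, not spelled out on this slide): if f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n). Applied to Merge Sort's recurrence T(n) = 2T(n/2) + Θ(n): here a = 2 and b = 2, so n^(log₂ 2) = n, which matches f(n), giving T(n) = Θ(n log n).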
Quantum 1.6
Recursion Tree
● It is a visual representation of the recursive calls made by a recursive algorithm.
● It helps in analyzing the time complexity of recursive algorithms by illustrating how the problem is divided into subproblems and how many subproblems are created at each step.

“A recursion tree is like a family tree of procrastinators: everyone passes the problem to their ‘kids’ until the simplest version is left for the youngest to solve. The ‘kids’ do all the hard work while the parents relax at the top and then take credit once the solution comes back!”

Shell Sort
● Shell Sort is an in-place, comparison-based algorithm introduced by Donald Shell in 1959 as an improvement over Insertion Sort.
Working of Shell Sort
● Start with a large gap and reduce it progressively.
● For each gap, perform a gapped insertion sort on elements that are the "gap" distance apart.
● After each pass, reduce the gap and repeat.
● When gap = 1, perform a standard insertion sort to complete the sorting.
Example 1: 35, 33, 42, 10, 14, 19, 27, 44, 26, 31
Example 2: 12, 4, 3, 9, 18, 7, 2, 17, 13, 1, 5, 6
Time Complexity
Best case: O(n log n)   Average case: O(n^1.5)   Worst case: O(n²)
Advantages
● In-place: Uses no extra memory.
● Efficient for medium-sized arrays.
● Easy to implement.
Disadvantages
● Not stable: May not preserve the order of equal elements.
● Performance depends on the gap sequence.
Algorithm of Shell Sort
ShellSort(arr, n):
    gap = n // 2
    while gap > 0:
        for i = gap to n - 1:
            temp = arr[i]
            j = i
            while j >= gap and arr[j - gap] > temp:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = temp
        gap //= 2
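A runnable Python version of the pseudocode above, shown on the Example 1 input (an illustrative sketch, not part of the original notes):

def shell_sort(arr):
    n = len(arr)
    gap = n // 2
    while gap > 0:
        for i in range(gap, n):              # gapped insertion sort
            temp = arr[i]
            j = i
            while j >= gap and arr[j - gap] > temp:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = temp
        gap //= 2                            # shrink the gap each pass
    return arr

print(shell_sort([35, 33, 42, 10, 14, 19, 27, 44, 26, 31]))
# [10, 14, 19, 26, 27, 31, 33, 35, 42, 44]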
Quick Sort
● Quick Sort is an efficient, in-place, comparison-based sorting
algorithm.
● It uses a divide-and-conquer approach.
● A pivot element is selected from the array.
● The array is partitioned into two parts:
● Elements smaller than the pivot.
● Elements larger than the pivot.
● The same process is recursively applied to the sub-arrays.
● The recursion continues until the entire array is sorted.
Working of Quick Sort
● Choose a Pivot: Select a pivot element from the array
(commonly the first, last, middle, or a random element).
● Partitioning:
● Rearrange the elements so that all elements less than the
pivot are on the left and all elements greater than the pivot
are on the right.
● The pivot is now in its correct sorted position.
● Recursively Apply Quick Sort:Recursively apply the same process
to the left and right sub-arrays (excluding the pivot).
● Base Case: The recursion stops when sub-arrays have fewer than
two elements, meaning they are already sorted.
Characteristics
● In-place: Requires only a small amount of extra memory.
● Not stable: May not preserve the relative order of equal elements.
● Efficient: Quick Sort is widely used due to its good average performance.
Time Complexity:
● Best case: O(n log n)
● Average case: O(n log n)
● Worst case: O(n²), which occurs when the pivot selection leads to unbalanced partitions (e.g., always picking the smallest or largest element as pivot).
Advantages:
● Fast Average Case: O(n log n) time complexity.
● In-Place: Requires minimal additional memory.
Disadvantages:
● Worst-Case Performance: Can degrade to O(n²).
● Unstable: Doesn't maintain the order of equal elements.

Algorithm of Quick Sort
QUICKSORT(A, start, end)
1. If start < end then
2.     pivot = PARTITION(A, start, end)
3.     QUICKSORT(A, start, pivot - 1)   // Sort left part
4.     QUICKSORT(A, pivot + 1, end)     // Sort right part

PARTITION(A, start, end)
1. pivot = A[end]        // Choose the last element as pivot
2. i = start - 1         // Initialize i
3. For j from start to end - 1 do
4.     If A[j] < pivot then
5.         i = i + 1
6.         Swap A[i] with A[j]
7. Swap A[i + 1] with A[end]   // Place pivot in the right position
8. Return i + 1                // Return the pivot index

“Quick Sort: Named for its fast average performance of O(n log n), despite a worst case of O(n²). It's often faster than Merge Sort in practice.”
“Python: The built-in sorted() function uses Timsort, which combines Merge Sort and Insertion Sort techniques for efficiency.”
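A runnable Python sketch of the same Lomuto-style partition scheme (illustrative; the sample data is made up):

def quicksort(a, start, end):
    if start < end:
        p = partition(a, start, end)
        quicksort(a, start, p - 1)    # sort left part
        quicksort(a, p + 1, end)      # sort right part

def partition(a, start, end):
    pivot = a[end]                    # last element as pivot
    i = start - 1
    for j in range(start, end):
        if a[j] < pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[end] = a[end], a[i + 1]   # place pivot in its final position
    return i + 1

data = [9, 4, 7, 1, 8, 2]
quicksort(data, 0, len(data) - 1)
print(data)                           # [1, 2, 4, 7, 8, 9]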
Merge Sort
● Merge sort is a sorting algorithm that uses the idea of divide and conquer.
● This algorithm divides the array into two subarrays, sorts them separately, and then merges them.
Complexity of Merge Sort
● Best, Average, Worst TC: O(n log n)
● SC: O(n)
Advantages
● Stable: Preserves the order of equal elements.
● Consistent Performance: O(n log n) time complexity.
Disadvantages
● Space-Intensive: Requires O(n) additional memory.
● Not In-Place: Needs extra storage for merging.
Quantum 1.21
Algorithm of MergeSort
1. mergesort(arr, l, r)
2.     if l < r
3.         set mid = l + (r - l) / 2
4.         mergesort(arr, l, mid)       // Sort the left half
5.         mergesort(arr, mid + 1, r)   // Sort the right half
6.         MERGE(arr, l, mid, r)        // Merge the sorted halves
7.     endif
8. END mergesort

void merge(int a[], int l, int m, int r)
    set n1 = m - l + 1               // Size of left sub-array
    set n2 = r - m                   // Size of right sub-array
    initialize Left[n1], Right[n2]   // Create temporary arrays
    // Copy the left data into the left array
    for i = 0 up to n1 - 1 do
        Left[i] = a[l + i]
    // Copy the right data into the right array
    for j = 0 up to n2 - 1 do
        Right[j] = a[m + 1 + j]
    set i = 0, j = 0, k = l          // Initialize indices
    while (i < n1 && j < n2) do      // Merge while both arrays have elements
        if (Left[i] <= Right[j]) then
            a[k] = Left[i++]
        else
            a[k] = Right[j++]
        k++
    while (i < n1) do                // Copy remaining elements from Left
        a[k++] = Left[i++]
    while (j < n2) do                // Copy remaining elements from Right
        a[k++] = Right[j++]

“Merge Sort is like first splitting a puzzle into smaller groups of pieces, then joining them back the right way to make one perfect picture! 🧩😄”
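A runnable Python sketch of the same divide-and-merge idea (illustrative; it returns a new list instead of merging in place):

def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])                # sort the left half
    right = merge_sort(a[mid:])               # sort the right half
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])                   # copy remaining elements from left
    merged.extend(right[j:])                  # copy remaining elements from right
    return merged

print(merge_sort([12, 4, 3, 9, 18, 7]))       # [3, 4, 7, 9, 12, 18]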
Heapsort
● Heapsort is a comparison-based sorting algorithm that uses a binary heap data structure.
● A binary heap is a complete binary tree where each parent is larger (max heap) or smaller (min heap) than its children.
Working of Heap Sort
● Build Max Heap: Convert the array into a max heap (largest element at the root).
● Swap and Heapify: Swap the root with the last element, reduce the heap size, and restore the heap property (Heapify).
● Repeat: Continue swapping and heapifying until the array is sorted.
Time Complexity: Best, Average, Worst Case: O(n log n)
Space Complexity: O(1)
Quantum 1.25 1.26
Algorithm of Heap Sort
BuildHeap(arr)
1. for i = length(arr)/2 - 1 down to 0
2.     Heapify(arr, i, length(arr))

Heapify(A, i, n):
1. Initialize l = 2*i + 1, r = 2*i + 2, largest = i
2. if l < n and A[l] > A[i] then largest = l
3. if r < n and A[r] > A[largest] then largest = r
4. if largest != i:
5.     swap(A[i], A[largest])
6.     Heapify(A, largest, n)

HeapSort(A):
1. BuildHeap(A)
2. for j = length(A) - 1 down to 1
3.     swap(A[0], A[j])
4.     Heapify(A, 0, j)

“Think of Heapsort like lining up laddoos (sweets) by size: you always eat the biggest laddoo first, so the line keeps getting shorter! 😄”
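A runnable Python sketch of the algorithm above, using 0-based indexing throughout (illustrative, not from the notes):

def heapify(a, i, n):
    largest, l, r = i, 2 * i + 1, 2 * i + 2
    if l < n and a[l] > a[largest]:       # left child larger?
        largest = l
    if r < n and a[r] > a[largest]:       # right child larger?
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        heapify(a, largest, n)            # push the swapped value down

def heap_sort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # build max heap
        heapify(a, i, n)
    for j in range(n - 1, 0, -1):         # swap root out, shrink heap
        a[0], a[j] = a[j], a[0]
        heapify(a, 0, j)
    return a

print(heap_sort([19, 3, 14, 8, 25, 1]))   # [1, 3, 8, 14, 19, 25]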
Counting Sort
● Counting Sort is an integer sorting algorithm that operates in O(n + k) time complexity,
● where n is the number of elements in the array, and k is the range of input values.
Working
● Find Range: Get the min and max values.
● Create Count Array: Initialize count array for the range.
● Store Counts: Count occurrences of each element.
● Cumulative Count: Update counts for positions.
● Sort: Place elements in sorted order using the count array.
Time and Space Complexity: O(n + k)
Algorithm
CountingSort(A, n):
1. Find max_value in A
2. Create count array C of size max_value + 1 and initialize to 0
3. for i = 0 to n - 1:
       C[A[i]]++
4. for i = 1 to max_value:
       C[i] += C[i - 1]
5. Create output array B of size n
6. for i = n - 1 down to 0:
       C[A[i]]--
       B[C[A[i]]] = A[i]
7. Copy B back to A

“Think of Counting Sort like sorting the clothes in your wardrobe by colour: first you count how many red, blue, and black clothes you have, then arrange them into separate stacks so everything ends up sorted and in the right place!”
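A runnable Python sketch of the stable counting sort above (assumes non-negative integers; illustrative):

def counting_sort(a):
    if not a:
        return a
    k = max(a)
    count = [0] * (k + 1)
    for x in a:                        # store counts
        count[x] += 1
    for i in range(1, k + 1):          # cumulative counts give final positions
        count[i] += count[i - 1]
    out = [0] * len(a)
    for x in reversed(a):              # backward pass keeps the sort stable
        count[x] -= 1
        out[count[x]] = x
    return out

print(counting_sort([4, 2, 2, 8, 3, 3, 1]))   # [1, 2, 2, 3, 3, 4, 8]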
Bucket Sort
● Bucket Sort works by dividing the input elements into several groups called buckets.
● Each bucket contains elements within a specific range.
● Once the elements are distributed into these buckets, they are individually sorted, usually with another sorting algorithm such as Insertion Sort, Quick Sort, or Bucket Sort applied recursively.
● Finally, all the sorted buckets are combined to form the sorted output.
Time Complexity
● Best and Average Case: O(n + k)
● Worst Case: O(n²)
Space Complexity: O(n + k)
Algorithm
BucketSort(A[], n)
1. n = length[A]
2. Let B[0..n-1] be a new array
3. for i = 0 to n - 1
4.     make B[i] an empty list
5. for i = 1 to n
6.     insert A[i] into list B[⌊n · A[i]⌋]
7. for i = 0 to n - 1
8.     sort list B[i] with insertion sort
9. Concatenate lists B[0], B[1], ..., B[n-1] together in order
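Note that the bucket index ⌊n · A[i]⌋ in step 6 assumes the inputs are uniformly distributed in [0, 1). A runnable Python sketch under that assumption (using the built-in sorted() per bucket as a stand-in for insertion sort):

def bucket_sort(a):
    n = len(a)
    buckets = [[] for _ in range(n)]       # one bucket per slice of [0, 1)
    for x in a:
        buckets[int(n * x)].append(x)      # x in [0, 1) lands in bucket floor(n*x)
    result = []
    for b in buckets:                      # sort each bucket, concatenate in order
        result.extend(sorted(b))
    return result

print(bucket_sort([0.42, 0.32, 0.23, 0.52, 0.25, 0.47]))
# [0.23, 0.25, 0.32, 0.42, 0.47, 0.52]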
Radix Sort
● Radix Sort is a non-comparative, integer-based sorting algorithm
that sorts numbers by processing individual digits.
● It works by sorting the numbers digit by digit, starting from the
least significant digit (LSD) to the most significant digit (MSD) or
vice versa.
● Radix Sort is often used for sorting integers or strings with fixed
lengths, such as dates or alphanumeric codes.
Time Complexity: O(d * (n + k))
Space Complexity: O(n + k)
Algorithm
RadixSort(A, n):
1. Find the maximum number in A to determine the number of digits (d)
2. for digit from least significant to most significant:
Apply a stable sorting algorithm (e.g., Counting Sort) on the digit
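A runnable Python sketch of LSD radix sort, using a stable per-digit bucket pass in place of Counting Sort (assumes non-negative integers; illustrative):

def radix_sort(a):
    if not a:
        return a
    max_val = max(a)                       # determines the number of digits d
    exp = 1
    while max_val // exp > 0:              # one stable pass per digit, LSD first
        buckets = [[] for _ in range(10)]
        for x in a:
            buckets[(x // exp) % 10].append(x)
        a = [x for b in buckets for x in b]   # concatenation preserves stability
        exp *= 10
    return a

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]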