M.E - COMPUTER SCIENCE AND ENGINEERING (Regulation 2024)
PCS24101 – ADVANCED DATA STRUCTURES AND ALGORITHMS
UNIT – I : ROLE OF ALGORITHMS IN COMPUTING
Detailed Postgraduate Study Notes – Comprehensive University Reference Material
(Expanded Version)
LEARNING OBJECTIVES
Understand the fundamental definition and properties of algorithms.
Analyze algorithmic performance using time and space complexity.
Differentiate among major design paradigms such as Divide and Conquer, Dynamic
Programming, and Greedy algorithms.
Apply asymptotic notations for algorithm analysis and comparison.
Solve recurrence relations using substitution, recursion-tree, and master theorem
approaches.
Relate algorithm design and analysis to real-world computing and optimization
problems.
1. INTRODUCTION TO ALGORITHMS
Algorithms are the foundation of computer science and engineering. They represent the
procedural logic that enables computers to solve problems efficiently. An algorithm is a
precise, step-by-step method of solving a problem or achieving a goal. It must have well-
defined inputs and outputs, and terminate in a finite number of steps.
1.1 Definition
An algorithm is a finite sequence of well-defined instructions designed to solve a specific
problem. It operates on input data, processes it systematically, and produces output within
a finite time. Algorithms form the basis for all computer programs and systems.
1.2 Characteristics of an Algorithm
Finiteness – The algorithm must terminate after a finite number of steps.
Definiteness – Each step must be precisely defined and unambiguous.
Input – An algorithm may have zero or more inputs from a specific set.
Output – It must produce one or more results that are directly related to the input.
Effectiveness – Each instruction must be basic enough to be executed by a computer
within a finite time.
1.3 Examples of Classical Algorithms
Euclid’s Algorithm: Computes GCD of two integers by repeatedly applying the remainder
operation.
Binary Search: Performs logarithmic-time search on a sorted array by halving the search
space each iteration.
Merge Sort: A divide-and-conquer sorting algorithm with guaranteed O(n log n)
complexity.
Quick Sort: Partition-based recursive sorting algorithm; average O(n log n), worst O(n²).
Dijkstra’s Algorithm: Finds the shortest path from a source vertex to all others in a
weighted graph.
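Two of the classical algorithms above can be sketched in Python (a minimal illustration, not the unit's reference implementation):

```python
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    # until the remainder is zero; the last nonzero value is the GCD.
    while b != 0:
        a, b = b, a % b
    return a

def binary_search(A, key):
    # Logarithmic-time search on a sorted list: halve the search
    # space each iteration; return the index of key, or -1 if absent.
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if A[mid] == key:
            return mid
        elif A[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

Each iteration of binary_search discards half of the remaining range, which is where the logarithmic bound comes from.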
2. ANALYZING ALGORITHMS
Algorithm analysis determines the computational resources—primarily time and space—
required to execute an algorithm. The goal is to predict the performance of an algorithm in
relation to input size, independent of machine specifics.
2.1 Time Complexity
Time complexity measures the number of basic operations executed as a function of input
size n. It is expressed asymptotically to generalize behavior for large inputs. Three cases are
typically considered: Best-case, Average-case, and Worst-case.
Example: Linear Search
def linear_search(A, key):
    for i in range(len(A)):
        if A[i] == key:
            return i
    return -1
Best case: O(1) | Average case: Θ(n), about n/2 comparisons | Worst case: O(n)
2.2 Space Complexity
Space complexity is the total memory required during execution. It includes fixed space
(variables, constants) and variable space (recursion stack, dynamic memory). For example,
merge sort uses O(n) auxiliary space, while quick sort uses O(log n) stack space on
average (O(n) in the worst case).
2.3 Empirical vs. Theoretical Analysis
Empirical Analysis – Measures actual runtime on hardware using profiling tools;
dependent on hardware, compiler, and environment.
Theoretical Analysis – Uses mathematical models to predict performance; independent
of machine-specific factors.
3. ALGORITHM DESIGN TECHNIQUES
Algorithm design paradigms provide frameworks to solve computational problems
efficiently. Each paradigm offers a distinct way of thinking about problem decomposition
and optimization.
Divide and Conquer: Splits problems into subproblems, solves them recursively, and
combines results. Example: Merge Sort.
Dynamic Programming: Stores results of overlapping subproblems to avoid
recomputation. Example: Matrix Chain Multiplication.
Greedy Algorithms: Select the locally optimal choice at each step, aiming for global
optimality. Example: Kruskal’s Algorithm.
Backtracking: Explores all possibilities recursively, backtracking when constraints are
violated. Example: N-Queens Problem.
Branch and Bound: Prunes subtrees that cannot yield better solutions. Example: Travelling
Salesman Problem.
Randomized Algorithms: Use randomization to improve expected runtime. Example:
Randomized Quick Sort.
Diagrammatic Representation (Textual):
Divide-and-Conquer → Recursive tree of subproblems → Base cases → Combine results.
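The divide-and-conquer flow above can be sketched with merge sort in Python (an illustrative sketch, not the unit's reference code):

```python
def merge_sort(A):
    # Base case: a list of length 0 or 1 is already sorted.
    if len(A) <= 1:
        return A
    # Divide: split the list into two halves.
    mid = len(A) // 2
    left = merge_sort(A[:mid])    # conquer each half recursively
    right = merge_sort(A[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The three comments mark the divide, conquer, and combine phases from the textual diagram.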
4. GROWTH OF FUNCTIONS AND ASYMPTOTIC NOTATIONS
Asymptotic analysis describes the running time of algorithms in terms of input size n. It
abstracts away constants and focuses on the growth rate.
Big O (O): Represents the upper bound of an algorithm’s growth.
Big Omega (Ω): Represents the lower bound of an algorithm’s growth.
Big Theta (Θ): Represents a tight bound — both upper and lower.
Little o: Indicates strictly smaller growth order.
Little ω: Indicates strictly greater growth order.
Order hierarchy: O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ) < O(n!).
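The hierarchy can be checked numerically; a quick sketch evaluating each growth function at a couple of input sizes:

```python
import math

# Growth functions listed in the order of the hierarchy above.
funcs = [
    ("1",       lambda n: 1),
    ("log n",   lambda n: math.log2(n)),
    ("n",       lambda n: n),
    ("n log n", lambda n: n * math.log2(n)),
    ("n^2",     lambda n: n ** 2),
    ("2^n",     lambda n: 2 ** n),
]
for n in (10, 20):
    values = [f(n) for _, f in funcs]
    # For these n the values are strictly increasing,
    # matching the stated order hierarchy.
    assert values == sorted(values)
```

For small n the constants hidden by asymptotic notation can reverse individual comparisons; the hierarchy describes behavior as n grows large.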
5. SOLVING RECURRENCES
Recursive algorithms are analyzed by forming recurrence relations. The main solution
methods are substitution, recursion-tree, and the master theorem.
Substitution Method: Guess the form of T(n) and prove it by induction.
Recursion Tree: Visualize recursive calls as a tree and sum across levels.
Master Theorem: Provides asymptotic solutions for T(n) = aT(n/b) + f(n).
Worked Example: T(n) = 2T(n/2) + n → Θ(n log n).
Textual Tree Diagram:
Level 0: cost n
Level 1: cost n
Level 2: cost n
Depth: log n
Total: n log n
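The recursion-tree total can be checked by unrolling the recurrence directly (a small numerical sketch; the base cost T(1) = 1 is an illustrative choice):

```python
def T(n):
    # Unroll T(n) = 2*T(n/2) + n with T(1) = 1, for n a power of two.
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

# For n = 2^k the closed form is T(n) = n*log2(n) + n, i.e. Theta(n log n):
# each of the log2(n) levels costs n, plus n base cases of cost 1.
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * k + n
```

The assertion mirrors the textual tree: log n levels of cost n, plus the leaves.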
6. ALGORITHM CORRECTNESS AND COMPLEXITY CLASSES
Correctness ensures the algorithm yields correct outputs for all valid inputs. Complexity
classes categorize problems by difficulty.
Example Proof: Loop Invariant for Insertion Sort – At the start of each iteration of the
outer loop, the subarray A[1..i-1] contains the elements originally in A[1..i-1], in
sorted order.
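The invariant can be read directly off the code (a sketch; the comments state the proof obligation, using 0-based indices):

```python
def insertion_sort(A):
    # Loop invariant: at the start of each iteration, A[0..i-1] holds
    # the elements originally in A[0..i-1], in sorted order.
    for i in range(1, len(A)):
        key = A[i]
        j = i - 1
        # Shift elements of the sorted prefix that exceed key one slot right.
        while j >= 0 and A[j] > key:
            A[j + 1] = A[j]
            j -= 1
        A[j + 1] = key   # invariant restored: A[0..i] is now sorted
    return A
```

Initialization (a one-element prefix is sorted), maintenance (inserting key preserves order), and termination (i reaches len(A), so the whole array is sorted) together give correctness.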
Complexity Classes:
P – Polynomial time problems
NP – Problems verifiable in polynomial time
NP-Complete – The hardest problems in NP: every problem in NP reduces to them in
polynomial time.
NP-Hard – At least as hard as every problem in NP; need not belong to NP itself.
7. CASE STUDIES AND PRACTICAL CONSIDERATIONS
Case Study 1: Sorting One Million Records
Bubble Sort – O(n²) impractical.
Merge Sort – Stable, predictable O(n log n).
Quick Sort – Efficient average case but needs pivot randomization.
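Pivot randomization, mentioned above, can be sketched as follows (an illustrative out-of-place version, not an in-place partition):

```python
import random

def randomized_quicksort(A):
    # Choosing the pivot uniformly at random makes the O(n^2) worst case
    # unlikely for any fixed input, giving expected O(n log n) time.
    if len(A) <= 1:
        return A
    pivot = random.choice(A)
    less    = [x for x in A if x < pivot]
    equal   = [x for x in A if x == pivot]
    greater = [x for x in A if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
```

An adversary can construct a worst-case input for any fixed pivot rule (e.g. always sorted input with a first-element pivot), but not against a random pivot.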
Case Study 2: Graph Shortest Path
Dijkstra’s Algorithm – O(V²) with an adjacency matrix; O((V + E) log V) with a
binary-heap priority queue; O(E + V log V) with a Fibonacci heap.
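The binary-heap version can be sketched with Python's heapq (graph as an adjacency dict; the graph and names are illustrative):

```python
import heapq

def dijkstra(graph, source):
    # graph: dict mapping vertex -> list of (neighbor, weight) pairs.
    # Returns shortest-path distances from source to every reachable vertex.
    dist = {source: 0}
    heap = [(0, source)]              # min-priority queue of (distance, vertex)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                  # stale queue entry; skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd          # relax edge (u, v)
                heapq.heappush(heap, (nd, v))
    return dist

g = {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("t", 6)], "b": [("t", 3)]}
print(dijkstra(g, "s"))   # shortest distances from s
```

Since heapq has no decrease-key operation, relaxed vertices are re-pushed and stale entries are skipped on pop, which preserves the O((V + E) log V) bound.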
Practical Factors:
- Cache performance
- Compiler optimization
- Data locality
- Constant factors
8. SUMMARY
Algorithms define the logical process behind computation. Analyzing and designing efficient
algorithms are central to computer science. The unit explored definitions, asymptotic
notations, analysis techniques, and various design paradigms.
9. REVIEW QUESTIONS
1. Define algorithm and list its essential properties.
2. Differentiate between time and space complexity.
3. Explain the Master Theorem and its three cases.
4. Describe the difference between empirical and theoretical analysis.
5. Illustrate Divide and Conquer and give two examples.
6. Discuss algorithm correctness with an example of loop invariant.
7. Compare P, NP, NP-Complete and NP-Hard problems.
8. Explain the recursion-tree method with an example.
REFERENCES
1. Cormen, T. H. et al., Introduction to Algorithms, MIT Press.
2. Sedgewick, R. and Wayne, K., Algorithms (4th Edition), Addison-Wesley.
3. Skiena, S. S., The Algorithm Design Manual, Springer.
4. Kleinberg, J. and Tardos, É., Algorithm Design, Pearson.
5. Levitin, A., Introduction to the Design and Analysis of Algorithms, Pearson Education.