
Faculty of Computer Applications & Information Technology

Integrated MSc(IT)

221601705
Design and Analysis of Algorithm
Unit 2

Faculty of Computer Applications & IT


Introduction to the Divide and Conquer Paradigm
Definition:

Divide and Conquer is a problem-solving strategy that works by:

Dividing the problem into smaller subproblems of the same type.

Conquering each subproblem recursively.

Combining the solutions of subproblems to solve the original
problem.
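As a minimal illustration (not from the slides), the three steps can be sketched in Python on a toy problem, finding the maximum of a list:

```python
def find_max(arr, low, high):
    # Base case: a single element is its own maximum.
    if low == high:
        return arr[low]
    mid = (low + high) // 2
    left_max = find_max(arr, low, mid)         # Conquer the left half
    right_max = find_max(arr, mid + 1, high)   # Conquer the right half
    return max(left_max, right_max)            # Combine the two answers

print(find_max([8, 4, 7, 3, 10, 2], 0, 5))  # 10
```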



Advantages

Simplifies complex problems

Enables efficient algorithm design

Often reduces time complexity (especially compared to
brute-force approaches)



Divide-and-Conquer Techniques
Steps typically involved:

Divide: Break the problem into smaller subproblems.

Conquer: Solve each subproblem (often recursively).

Combine: Merge solutions to subproblems into a final solution.

Common examples: Merge Sort, Quick Sort, Binary Search, Strassen’s Matrix Multiplication, Closest-Pair of Points.



Merge Sort
Steps:

Divide the array into two halves.

Recursively sort both halves.

Merge the sorted halves.



Key Properties:


Stable sort

Requires additional space

Best, average, and worst case: O(n log n)



Algorithm
mergeSort(arr, low, high):
    if low < high:
        mid = floor((low + high) / 2)
        mergeSort(arr, low, mid)
        mergeSort(arr, mid + 1, high)
        merge(arr, low, mid, high)



Merge Function:
merge(arr, low, mid, high):
    Copy arr[low..mid] into a temporary array left[] and arr[mid+1..high] into right[]
    Merge left[] and right[] back into arr[low..high] in sorted order
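A runnable Python sketch of the algorithm above (the names merge and merge_sort are illustrative):

```python
def merge(arr, low, mid, high):
    # Temporary copies of the two sorted halves.
    left = arr[low:mid + 1]
    right = arr[mid + 1:high + 1]
    i = j = 0
    k = low
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps the sort stable
            arr[k] = left[i]
            i += 1
        else:
            arr[k] = right[j]
            j += 1
        k += 1
    # Copy any leftovers (only one of these loops runs).
    while i < len(left):
        arr[k] = left[i]; i += 1; k += 1
    while j < len(right):
        arr[k] = right[j]; j += 1; k += 1

def merge_sort(arr, low, high):
    if low < high:
        mid = (low + high) // 2
        merge_sort(arr, low, mid)
        merge_sort(arr, mid + 1, high)
        merge(arr, low, mid, high)

a = [38, 27, 43, 3, 9, 82, 10]
merge_sort(a, 0, len(a) - 1)
print(a)  # [3, 9, 10, 27, 38, 43, 82]
```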



Time Complexity Analysis:

At each level of recursion, the array is divided into
two parts: n/2, n/4, ..., down to size 1

Each level takes O(n) time to merge

Number of levels = log₂n

Total Time: O(n log n)

Space: O(n) (extra space for merging)



Meaning of log₂n

The expression log₂n (pronounced “log base 2 of n”) means:

How many times can you divide n by 2 until you get 1?
Example:

Let’s see how many times we can divide n = 8 by 2:

n=8

8→4→2→1

So log₂8 = 3.



In Merge Sort

At each recursive level, the array is split into 2 halves:

Starting with n, then n/2, then n/4, ..., down to 1.

So, the total number of levels = log₂n.

This is why Merge Sort's time complexity is O(n log₂n).

(The base 2 is often omitted in Big-O notation since
all logarithmic bases are within a constant factor, so
we just write O(n log n).)
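A quick way to check this arithmetic is a small Python loop (illustrative only):

```python
def halvings(n):
    # Count how many times n can be divided by 2 before reaching 1.
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

print(halvings(8))   # 3, matching log2(8) = 3
print(halvings(16))  # 4
```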



Quick Sort
Steps:

Choose a pivot

Partition the array around the pivot (elements < pivot on left, > pivot on right)

Recursively apply quicksort on left and right subarrays.


Key Properties:

In-place sorting

Not stable

Efficient in practice for large datasets

Time Complexity: Average O(n log n), Worst O(n²)



Example

Let's sort: A = [8, 4, 7, 3, 10, 2]

Step-by-step (Pivot = 8):

Partition → [4, 7, 3, 2] (pivot) [10]

Recur on left: [4, 7, 3, 2]

Pivot = 4 → [3, 2] (4) [7]

...and so on, recursively.

Final sorted array: [2, 3, 4, 7, 8, 10]



Quick Sort
Algorithm:

quickSort(arr, low, high):
    if low < high:
        pivotIndex = partition(arr, low, high)
        quickSort(arr, low, pivotIndex - 1)
        quickSort(arr, pivotIndex + 1, high)



Partition Function:
partition(arr, low, high):
    pivot = arr[high]
    i = low - 1
    for j from low to high - 1:
        if arr[j] < pivot:
            i = i + 1
            swap arr[i] and arr[j]
    swap arr[i + 1] and arr[high]
    return i + 1
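The pseudocode above corresponds to the Lomuto partition scheme; a runnable Python sketch (function names are illustrative):

```python
def partition(arr, low, high):
    pivot = arr[high]                # last element chosen as pivot (Lomuto scheme)
    i = low - 1
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1                     # final position of the pivot

def quick_sort(arr, low, high):
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)
        quick_sort(arr, p + 1, high)

a = [8, 4, 7, 3, 10, 2]
quick_sort(a, 0, len(a) - 1)
print(a)  # [2, 3, 4, 7, 8, 10]
```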



Time Complexity Analysis:


Best and average case: O(n log n)

Partitioning takes O(n), and log n levels

Worst case (already sorted): O(n²)

Space: O(log n) (due to recursion stack)



Worst-Case Time Complexity of Quick Sort
Situation:

The worst case occurs when:

The pivot chosen is always the smallest or the largest
element.

This happens when the array is:

Already sorted in ascending/descending order

And you choose the first or last element as the pivot



What happens?

Suppose you always pick the last element as the pivot.

Given: A = [1, 2, 3, 4, 5]

Choose 5 as pivot

Partition results in:

Left subarray: [1, 2, 3, 4] → size n-1

Right subarray: [] → size 0

So only one side gets processed each time — this leads to maximum
depth in recursion.



Result

Worst-case time complexity: O(n²)

Due to unbalanced partitioning
How to Avoid It?

Use randomized pivot selection

Use median-of-three strategy (median of first, middle, and last
elements)

Ensures more balanced partitioning → back to average-case O(n
log n)
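One possible sketch of randomized pivot selection in Python; it reuses a Lomuto-style partition, and all names are illustrative:

```python
import random

def partition(arr, low, high):
    # Standard Lomuto partition with arr[high] as the pivot.
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def randomized_partition(arr, low, high):
    # Swap a randomly chosen element into the pivot slot first, so an
    # already-sorted input no longer triggers the O(n^2) worst case.
    r = random.randint(low, high)
    arr[r], arr[high] = arr[high], arr[r]
    return partition(arr, low, high)

def quick_sort(arr, low, high):
    if low < high:
        p = randomized_partition(arr, low, high)
        quick_sort(arr, low, p - 1)
        quick_sort(arr, p + 1, high)

a = [1, 2, 3, 4, 5]          # worst-case input for a fixed last-element pivot
quick_sort(a, 0, len(a) - 1)
print(a)  # [1, 2, 3, 4, 5]
```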



Binary Search
Steps:

Find the mid of the array.

Compare target with the middle element.

If equal, return index.

If target < arr[mid], search the left subarray; otherwise search the right subarray.
Requirements:

Works only on sorted arrays

Significantly faster than linear search



Algorithm
binarySearch(arr, low, high, key):
    if low > high:
        return -1
    mid = floor((low + high) / 2)
    if arr[mid] == key:
        return mid
    else if key < arr[mid]:
        return binarySearch(arr, low, mid - 1, key)
    else:
        return binarySearch(arr, mid + 1, high, key)



Binary Search Time Complexity

Best Case: O(1) — Target is found at the middle on the first
comparison.


Average Case: O(log n) — Each step halves the search space.


Worst Case: O(log n) — Target not found or found at the last
possible level.



Time Complexity Analysis

Each step reduces problem size by half →
log₂n steps

Comparisons at each level: O(1)

Time: O(log n)

Space: O(log n) for recursive, O(1) for iterative
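An iterative Python sketch illustrating the O(1)-space variant (names are illustrative):

```python
def binary_search(arr, key):
    # Iterative version: O(log n) time, O(1) space (no recursion stack).
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == key:
            return mid
        elif key < arr[mid]:
            high = mid - 1
        else:
            low = mid + 1
    return -1  # key not present

print(binary_search([2, 3, 4, 7, 8, 10], 7))  # 3
print(binary_search([2, 3, 4, 7, 8, 10], 5))  # -1
```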




How the Search Space Shrinks

Suppose the array has n elements. After one comparison, at most n/2 elements remain; after two, n/4; after k comparisons, n/2ᵏ. The search ends when n/2ᵏ = 1, i.e. after k = log₂n comparisons.



Binary Tree Traversals

Traversal = visiting each node of the tree exactly
once in a specific order.

There are two main categories:

Category        Types Included
Depth-First     Inorder, Preorder, Postorder
Breadth-First   Level Order



Depth-First Tree Traversals:

Inorder (LNR): Left → Node → Right

Preorder (NLR): Node → Left → Right

Postorder (LRN): Left → Right → Node
Properties:

An inorder traversal of a BST yields the keys in sorted order.

Preorder is used for copying a tree and for prefix expressions.

Postorder is used for deleting a tree (children before parent).



Inorder Traversal (LNR)
(Left → Root → Right)

inorder(node):
    if node != null:
        inorder(node.left)
        print(node.data)
        inorder(node.right)

Use Case:
Retrieves sorted order in a Binary Search Tree (BST)
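A self-contained Python sketch; the Node class is assumed here, since the slides do not define one:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def inorder(node):
    if node is not None:
        inorder(node.left)
        print(node.data)
        inorder(node.right)

# Build a small BST:   4
#                     / \
#                    2   7
root = Node(4)
root.left = Node(2)
root.right = Node(7)
inorder(root)  # prints 2, 4, 7: the keys in sorted order
```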



Divide and Conquer in Tree Traversals

Each recursive traversal follows the paradigm:

Divide: Split the tree into the root, its left subtree, and its right subtree.

Conquer: Recursively traverse each subtree.

Combine: The visit order (e.g., Left → Node → Right for inorder) assembles the results into the final sequence.



Preorder Traversal
(Root → Left → Right)
def preorder(node):
    if node:
        print(node.data)
        preorder(node.left)
        preorder(node.right)
Use Case:

Useful for copying tree, prefix expressions, and serialization
Postorder Traversal
(Left → Right → Root)
def postorder(node):
    if node:
        postorder(node.left)
        postorder(node.right)
        print(node.data)
Use Case:

Tree deletion (bottom-up)

Postfix expression evaluation



Time Complexity

Visit each node once: O(n)

Space (recursive): O(h) where h = height of tree

Same for Preorder and Postorder.



Level Order Traversal (Breadth-First Search on Tree)


Level Order Traversal visits all nodes level by
level, from left to right, starting from the root.

It is essentially a Breadth-First Search (BFS)
applied to a binary tree using a queue.



Algorithm (Using Queue)
Step-by-step logic:

Create an empty queue.

Enqueue the root node.

While the queue is not empty:
– Dequeue a node from the front.
– Visit the node (print or process it).
– Enqueue its left child (if exists).
– Enqueue its right child (if exists).
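The queue-based steps above can be sketched in Python; the Node class and function name are illustrative:

```python
from collections import deque

class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def level_order(root):
    if root is None:
        return
    queue = deque([root])            # 1. enqueue the root
    while queue:                     # 2. until the queue is empty...
        node = queue.popleft()       # 3. dequeue a node from the front
        print(node.data)             # 4. visit it
        if node.left:
            queue.append(node.left)  # 5. enqueue its children, left...
        if node.right:
            queue.append(node.right) #    ...then right

root = Node(1)
root.left, root.right = Node(2), Node(3)
root.left.left = Node(4)
level_order(root)  # prints 1, 2, 3, 4
```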



Breadth-First Search (BFS)
BFS(graph, start):
    create a queue Q
    mark start as visited
    enqueue start into Q
    while Q is not empty:
        node = dequeue(Q)
        process(node)
        for each neighbor of node:
            if neighbor is not visited:
                mark neighbor as visited
                enqueue neighbor into Q



Time & Space Complexity

Measure            Value
Time Complexity    O(n) (each node is visited once)
Space Complexity   O(w), where w = maximum width of the tree (max queue size)



Graph Algorithms (BFS and DFS)

Although not pure divide-and-conquer, graph traversal
algorithms can be understood in terms of recursive
problem-solving.

Graph Algorithms: BFS & DFS

Graph = Set of Vertices (V) and Edges (E)

Can be directed or undirected, weighted or
unweighted



Breadth-First Search (BFS)

Strategy: Level-by-level or Level-order traversal (uses Queue)

Visits: All neighbors before going deeper

Uses: Shortest path in unweighted graphs, finding connected components

Steps:

Start from a node, visit its neighbors

Visit neighbors’ neighbors, etc.

Time Complexity: O(V + E) for adjacency list



BFS Algorithm

from collections import deque

def bfs(graph, start):
    visited = set()
    queue = deque([start])
    while queue:
        vertex = queue.popleft()
        if vertex not in visited:
            print(vertex)
            visited.add(vertex)
            for neighbor in graph[vertex]:
                if neighbor not in visited:
                    queue.append(neighbor)



Time Complexity

O(V+E)
Applications:

Finding shortest path in unweighted graph

Social networks (friend suggestions)

Web crawling

Tree Level Order traversal



Depth-First Search (DFS)

Approach: Go as deep as possible before backtracking (uses Stack or
Recursion)

Uses: Topological sorting, detecting cycles, connected components
Steps:

Use Stack or recursion

Visit a node, mark visited

Recur for unvisited adjacent nodes

Time Complexity: O(V + E)



DFS Algorithm
def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)
    print(start)
    for neighbor in graph[start]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)



BFS vs DFS

Aspect             BFS                          DFS
Data structure     Queue                        Stack / recursion
Visit order        Level by level               As deep as possible, then backtrack
Typical uses       Shortest path (unweighted)   Topological sort, cycle detection
Time complexity    O(V + E)                     O(V + E)


Hashing

Though not a divide-and-conquer method, hashing is an essential algorithmic tool.

Hashing is a technique to store and retrieve data using a key-value pair in constant
average time O(1).

Definition:

Hashing is the process of converting input (keys) into a fixed-size value (hash code)
using a hash function.

Hash Table Operations:

Insert: Place key using a hash function

Search: Direct access using hash

Delete: Remove key using the same hash



Hash Function

Converts key → index (slot) in the table.

Example: h(key) = key mod m

Where m is the size of hash table
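A tiny Python illustration of the modulo hash function; the table size m = 10 is an arbitrary choice for this example:

```python
m = 10                  # hash table size (assumed for illustration)

def h(key):
    return key % m      # maps any integer key to a slot in 0..m-1

print(h(23))  # 3
print(h(45))  # 5
print(h(13))  # 3  (collides with key 23: both map to slot 3)
```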



Common Hashing Techniques:

Chaining: Handle collisions via linked lists

Open Addressing: Linear or quadratic probing

Time Complexity:

Average: O(1)

Worst: O(n) (if too many collisions)
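A minimal chaining sketch in Python, using Python's built-in hash as the hash function; the class and method names are illustrative, not a standard API:

```python
class ChainedHashTable:
    # Each slot holds a list (chain) of (key, value) pairs that
    # hashed to the same index.
    def __init__(self, m=8):
        self.m = m
        self.slots = [[] for _ in range(m)]

    def _index(self, key):
        return hash(key) % self.m

    def insert(self, key, value):
        chain = self.slots[self._index(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:              # key exists: update in place
                chain[i] = (key, value)
                return
        chain.append((key, value))

    def search(self, key):
        for k, v in self.slots[self._index(key)]:
            if k == key:
                return v
        return None

    def delete(self, key):
        idx = self._index(key)
        self.slots[idx] = [(k, v) for k, v in self.slots[idx] if k != key]

t = ChainedHashTable()
t.insert("apple", 1)
t.insert("banana", 2)
print(t.search("apple"))   # 1
t.delete("apple")
print(t.search("apple"))   # None
```

With a good hash function and short chains, all three operations stay O(1) on average; if many keys collide into one chain they degrade to O(n), matching the table above.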



Collisions

When two keys hash to the same index →
Collision



Time Complexity
Operation   Average Case   Worst Case (many collisions)
Search      O(1)           O(n)
Insert      O(1)           O(n)
Delete      O(1)           O(n)



Applications of Hashing

Symbol tables in compilers

Caching (e.g., web browser history)

Databases (indexes)

Cryptographic hash functions (SHA-256, MD5)

Data structures like:

HashMap (Java)

Dictionary (Python)

unordered_map (C++)



Summary Table

Technique       Average Time   Worst Time    Extra Space
Merge Sort      O(n log n)     O(n log n)    O(n)
Quick Sort      O(n log n)     O(n²)         O(log n)
Binary Search   O(log n)       O(log n)      O(1) iterative, O(log n) recursive
BFS / DFS       O(V + E)       O(V + E)      O(V)
Hashing         O(1)           O(n)          O(n)
