
Analysis and Design of Algorithms (BCS401)

MODULE 3
SYLLABUS

TRANSFORM-AND-CONQUER: Balanced Search Trees, Heaps and Heapsort.
SPACE-TIME TRADEOFFS: Sorting by Counting: Comparison counting sort, Input Enhancement in
String Matching: Horspool’s Algorithm. Chapter 6 (Sections 6.3, 6.4), Chapter 7 (Sections 7.1, 7.2)

Textbooks
1. Introduction to the Design and Analysis of Algorithms, By Anany Levitin, 3rd Edition
(Indian), 2017, Pearson.

Reference books
1. Computer Algorithms/C++, Ellis Horowitz, Sartaj Sahni and Rajasekaran, 2nd Edition, 2014,
Universities Press.
2. Introduction to Algorithms, Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
Clifford Stein, 3rd Edition, PHI.
3. Design and Analysis of Algorithms, S. Sridhar, Oxford (Higher Education)


6.3 Balanced Search Trees


A binary search tree is a binary tree whose nodes contain elements of a set of orderable items,
one element per node, so that all elements in the left subtree are smaller than the element in the
subtree’s root, and all the elements in the right subtree are greater than it. The time efficiency of
searching, insertion, and deletion is Θ(log n) in the average case. In the worst case, these
operations are in Θ(n).

Two approaches are used to preserve the logarithmic efficiency of the classical binary search
tree:
➔​ The first approach is of the instance-simplification variety: an unbalanced binary search
tree is transformed into a balanced one. Such trees are called self-balancing. An AVL tree
requires that the difference between the heights of the left and right subtrees of every node
never exceed 1. A red-black tree tolerates the height of one subtree being twice as large
as the other subtree of the same node. If an insertion or deletion creates a tree with a
violated balance requirement, the tree is restructured by one of a family of special
transformations called rotations that restore the required balance.
➔​ The second approach is of the representation-change variety: allow more than one
element in a node of a search tree. Specific cases of such trees are 2-3 trees, 2-3-4 trees,
and more general and important B-trees.

AVL Trees

AVL trees were invented in 1962 by two Russian scientists, G. M. Adelson-Velsky and E. M.
Landis [Ade62], after whom this data structure is named.

DEFINITION An AVL tree is a binary search tree in which the balance factor of every node,
which is defined as the difference between the heights of the node’s left and right subtrees, is
either 0 or +1 or −1.
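
The balance-factor condition is easy to check mechanically by computing subtree heights. The short Python sketch below is illustrative only (the Node class and function names are my own, not from the textbook): it returns the height of a subtree, or None as soon as any node’s balance factor falls outside {−1, 0, +1}.

# Illustrative sketch: verify the AVL balance-factor condition for every node.
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def avl_height(root):
    # Returns the subtree height, or None if some balance factor is not -1, 0, or +1.
    if root is None:
        return -1                              # height of an empty subtree
    lh, rh = avl_height(root.left), avl_height(root.right)
    if lh is None or rh is None or abs(lh - rh) > 1:
        return None                            # AVL condition violated at or below this node
    return 1 + max(lh, rh)

def is_avl(root):
    return avl_height(root) is not None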

FIGURE (a) AVL tree. (b) Binary search tree that is not an AVL tree.


If an insertion of a new node makes an AVL tree unbalanced, we transform the tree by a rotation.
A rotation in an AVL tree is a local transformation of its subtree rooted at a node whose balance
has become either +2 or −2. There are four types of rotation:

Single right rotation, or R-rotation: This rotation is performed after a new key is inserted into
the left subtree of the left child of a tree whose root had the balance of +1 before the insertion.

Symmetric single left rotation, or L-rotation: This is the mirror image of the single R-rotation.
It is performed after a new key is inserted into the right subtree of the right child of a tree whose
root had the balance of −1 before the insertion.

Double left-right rotation (LR-rotation): This is a combination of two rotations: we perform the
L-rotation of the left subtree of root r followed by the R-rotation of the new tree rooted at r. It is
performed after a new key is inserted into the right subtree of the left child of a tree whose root
had the balance of +1 before the insertion.

Double right-left rotation (RL-rotation): This is the mirror image of the double LR-rotation.
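
To make the rotations concrete, here is a small illustrative sketch (my own naming, reusing the Node class from the previous sketch, not the textbook’s code): each single rotation is a pair of local pointer updates, and the double rotations are compositions of the single ones.

# Sketch: the four rotations as local pointer updates on Node objects.
def rotate_right(r):                 # R-rotation about r: r's left child becomes the new subtree root
    c = r.left
    r.left = c.right                 # keys between c.key and r.key stay between c and r
    c.right = r
    return c                         # caller re-attaches c where r used to be

def rotate_left(r):                  # L-rotation, the mirror image of the R-rotation
    c = r.right
    r.right = c.left
    c.left = r
    return c

def rotate_left_right(r):            # LR-rotation: L-rotate the left subtree, then R-rotate at r
    r.left = rotate_left(r.left)
    return rotate_right(r)

def rotate_right_left(r):            # RL-rotation, the mirror image of the LR-rotation
    r.right = rotate_right(r.right)
    return rotate_left(r)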

Figure: General form of the R-rotation in the AVL tree. A shaded node is the last one inserted.

Figure: General form of the double LR-rotation in the AVL tree. A shaded node is the last one
inserted. It can be either in the left subtree or in the right subtree of the root’s grandchild.


Figure: Four rotation types for AVL trees with three nodes. (a) Single R-rotation. (b) Single
L-rotation. (c) Double LR-rotation. (d) Double RL-rotation.


Construction of an AVL tree:

Example: Construct an AVL tree for a given list of numbers 5, 6, 8, 3, 2, 4, 7.

Figure: Construction of an AVL tree for the list 5, 6, 8, 3, 2, 4, 7 by successive insertions. The
parenthesized number of a rotation’s abbreviation indicates the root of the tree being reorganized.


The critical characteristic of a binary search tree is the tree’s height. The height h of any AVL
tree with n nodes satisfies the inequalities

⌊log2 n⌋ ≤ h < 1.4405 log2(n + 2) − 1.3277

The above inequalities imply that the operations of searching and insertion are Θ(log n) in the
worst case.
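
As a quick numerical illustration (my own arithmetic, not from the text): for n = 10^6 nodes the bounds give 19 ≤ h ≤ 27, because ⌊log2 10^6⌋ = 19 and 1.4405 log2(10^6 + 2) − 1.3277 ≈ 27.4, so an AVL tree holding a million keys never has a search path longer than about 28 nodes.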
The drawbacks of AVL trees are frequent rotations and the need to maintain balances for its
nodes.

2-3 Trees

A 2-3 tree is a tree that can have nodes of two kinds: 2-nodes and 3-nodes.

A 2-node contains a single key K and has two children: the left child serves as the root of a
subtree whose keys are less than K, and the right child serves as the root of a subtree whose keys
are greater than K.

A 3-node contains two ordered keys K1 and K2 (K1 < K2) and has three children. The leftmost
child serves as the root of a subtree with keys less than K1, the middle child serves as the root of
a subtree with keys between K1 and K2, and the rightmost child serves as the root of a subtree
with keys greater than K2.

In a 2-3 tree all leaves must be on the same level. A 2-3 tree is always perfectly height-balanced:
the length of a path from the root to a leaf is the same for every leaf.

Figure: Two kinds of nodes of a 2-3 tree

Searching for a given key K in a 2-3 tree starts at the root. If the root is a 2-node we either stop
if K is equal to the root’s key or continue the search in the left or right subtree if K is,
respectively, smaller or larger than the root’s key. If the root is a 3-node, we know after no more
than two key comparisons whether the search can be stopped (if K is equal to one of the root’s
keys) or in which of the root’s three subtrees it needs to be continued.
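
This per-node decision can be written down directly. The sketch below is illustrative (the Node23 class and the function name are assumptions of mine, not the textbook’s): a node stores a list of one or two ordered keys, and a leaf has an empty list of children.

# Illustrative 2-3 tree search; Node23 is a hypothetical helper class.
class Node23:
    def __init__(self, keys, children=None):
        self.keys = keys                        # [K] for a 2-node, [K1, K2] with K1 < K2 for a 3-node
        self.children = children or []          # empty for a leaf

def search_23(node, K):
    if node is None:
        return False
    if K in node.keys:                          # at most two key comparisons per node
        return True
    if not node.children:                       # reached a leaf without finding K
        return False
    for i, key in enumerate(node.keys):         # descend into the subtree whose key range contains K
        if K < key:
            return search_23(node.children[i], K)
    return search_23(node.children[len(node.keys)], K)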


Inserting a new key in a 2-3 tree is done as follows. First of all, we always insert a new key K in
a leaf, except for the empty tree. The appropriate leaf is found by performing a search for K. If
the leaf in question is a 2-node, we insert K there as either the first or the second key, depending
on whether K is smaller or larger than the node’s old key. If the leaf is a 3-node, we split the leaf
in two: the smallest of the three keys (two old ones and the new key) is put in the first leaf, the
largest key is put in the second leaf, and the middle key is promoted to the old leaf’s parent. (If
the leaf happens to be the tree’s root, a new root is created to accept the middle key.)

Figure: Construction of a 2-3 tree for the list 9, 5, 8, 3, 2, 4, 7

6.4 Heaps and Heapsort


Notion of the Heap

DEFINITION A heap can be defined as a binary tree with keys assigned to its nodes, one key
per node, provided the following two conditions are met:

1.​ The shape property—the binary tree is essentially complete (or simply complete), i.e.,
all its levels are full except possibly the last level, where only some rightmost leaves may
be missing.
2.​ The parental dominance or heap property—the key in each node is greater than or
equal to the keys in its children.


For example, consider the trees below:

Figure: Illustration of the definition of heap: only the leftmost tree is a heap

The first tree is a heap. The second one is not a heap, because the tree’s shape property is
violated. And the third one is not a heap, because the parental dominance fails for the node with
key 5.

Properties of heaps:

1. There exists exactly one essentially complete binary tree with n nodes. Its height is equal to
⌊log2 n⌋.

2. The root of a heap always contains its largest element.

3. A node of a heap considered with all its descendants is also a heap.

4. A heap can be implemented as an array by recording its elements in the top down, left-to-right
fashion. It is convenient to store the heap’s elements in positions 1 through n of such an array,
leaving H[0] either unused or putting there a sentinel whose value is greater than every element
in the heap. In such a representation,
a. the parental node keys will be in the first ⌊n/2⌋ positions of the array, while the leaf
keys will occupy the last ⌈n/2⌉ positions;
b. the children of a key in the array’s parental position i (1 ≤ i ≤ ⌊n/2⌋) will be in
positions 2i and 2i + 1, and, correspondingly, the parent of a key in position i (2 ≤ i ≤ n)
will be in position ⌊i/2⌋.

A heap can be defined as an array H[1..n] in which every element in position i in the first half of
the array is greater than or equal to the elements in positions 2i and 2i + 1, i.e.,

H[i] ≥ max{H[2i], H[2i + 1]} for i = 1,..., ⌊n/2⌋.
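
The index arithmetic is easiest to see in code. The sketch below (my own helper, assuming the 1-based storage described above with H[0] left unused) simply checks this condition for every parental position.

# Sketch: check the heap condition on a 1-based array H (H[0] is unused).
def is_heap(H):
    n = len(H) - 1                              # number of heap elements
    for i in range(1, n // 2 + 1):              # parental positions 1 .. floor(n/2)
        left, right = 2 * i, 2 * i + 1
        if H[i] < H[left]:
            return False
        if right <= n and H[i] < H[right]:
            return False
    return True

print(is_heap([None, 9, 6, 8, 2, 5, 7]))        # True: the heap built from 2, 9, 7, 6, 5, 8 below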

The bottom-up heap construction algorithm:

The bottom-up heap construction algorithm initializes the essentially complete binary tree with n
nodes by placing keys in the order given and then “heapifies” the tree as follows. Starting with
the last parental node, the algorithm checks whether the parental dominance holds for the key in
this node. If it does not, the algorithm exchanges the node’s key K with the larger key of its
children and checks whether the parental dominance holds for K in its new position. This process
continues until the parental dominance for K is satisfied. After completing the “heapification” of
the subtree rooted at the current parental node, the algorithm proceeds to do the same for the
node’s immediate predecessor. The algorithm stops after this is done for the root of the tree.

ALGORITHM HeapBottomUp(H[1..n])
//Constructs a heap from elements of a given array by the bottom-up algorithm
//Input: An array H[1..n] of orderable items
//Output: A heap H[1..n]
for i ← ⌊n/2⌋ downto 1 do
    k ← i; v ← H[k]
    heap ← false
    while not heap and 2 ∗ k ≤ n do
        j ← 2 ∗ k
        if j < n              //there are two children
            if H[j] < H[j + 1] j ← j + 1
        if v ≥ H[j]
            heap ← true
        else H[k] ← H[j]; k ← j
    H[k] ← v
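
For readers who want to run the algorithm, here is a direct Python rendering of the pseudocode above (a sketch, with the function name my own; index 0 is left unused so the array indices match the pseudocode).

# Python rendering of HeapBottomUp; H[0] is unused so indices match the pseudocode.
def heap_bottom_up(H):
    n = len(H) - 1
    for i in range(n // 2, 0, -1):              # for i <- floor(n/2) downto 1
        k, v = i, H[i]
        heap = False
        while not heap and 2 * k <= n:
            j = 2 * k
            if j < n and H[j] < H[j + 1]:       # pick the larger of the two children
                j += 1
            if v >= H[j]:
                heap = True
            else:
                H[k] = H[j]                     # move the larger child up
                k = j
        H[k] = v                                # place v in its final position
    return H

print(heap_bottom_up([None, 2, 9, 7, 6, 5, 8])) # [None, 9, 6, 8, 2, 5, 7]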

Example: Bottom-up construction of a heap for the list 2, 9, 7, 6, 5, 8.

Figure: Bottom-up construction of a heap for the list 2, 9, 7, 6, 5, 8. The double-headed arrows
show key comparisons verifying parental dominance.

Analysis of the algorithm:


Let h be the height of the tree. According to the first property of heaps in the list at the beginning
of the section, h = ⌊log2 n⌋, or just ⌈log2(n + 1)⌉ − 1 = k − 1 for the specific values n = 2^k − 1 we
are considering. Each key on level i of the tree will travel to the leaf level h in the worst case of the
heap construction algorithm.

Since moving one level down requires two comparisons (one to find the larger child and one to
decide whether the exchange is needed), the total number of key comparisons in the worst case will be

Cworst(n) = Σ (i = 0 to h − 1) 2(h − i) 2^i = 2(n − log2(n + 1))

Thus, with this bottom-up algorithm, a heap of size n can be constructed with fewer than 2n
comparisons.

The top-down heap construction algorithm:

The algorithm works as follows. First, attach a new node with key K in it after the last leaf of the
existing heap. Then shift K up to its appropriate place in the new heap as follows. Compare K
with its parent’s key: if the latter is greater than or equal to K, stop (the structure is a heap);
otherwise, swap these two keys and compare K with its new parent. This swapping continues
until K is not greater than its last parent or it reaches the root.
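
In the array representation this sifting up is a single loop of index arithmetic. The sketch below is illustrative (same 1-based convention, function name my own): it appends the new key after the last leaf and swaps it with its parent while the parent is smaller.

# Sketch: insert key K into a heap stored in the 1-based array H (H[0] unused).
def heap_insert(H, K):
    H.append(K)                                 # attach K after the last leaf
    i = len(H) - 1                              # position of the new key
    while i > 1 and H[i // 2] < H[i]:           # parent of position i is position i // 2
        H[i // 2], H[i] = H[i], H[i // 2]       # swap K with its parent
        i //= 2

H = [None, 9, 6, 8, 2, 5, 7]                    # the heap built earlier from 2, 9, 7, 6, 5, 8
heap_insert(H, 10)
print(H)                                        # [None, 10, 6, 9, 2, 5, 7, 8]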

Example:


Figure: Inserting a key (10) into the heap in Figure (a). The new key is shifted up via a swap
with its parent until it is not larger than its parent (or is in the root).


Deleting the root’s key from a heap:

Maximum Key Deletion from a heap


Step 1 Exchange the root’s key with the last key K of the heap.
Step 2 Decrease the heap’s size by 1.
Step 3 “Heapify” the smaller tree by sifting K down the tree exactly in the same way we did it in
the bottom-up heap construction algorithm. That is, verify the parental dominance for K:
if it holds, we are done; if not, swap K with the larger of its children and repeat this
operation until the parental dominance condition holds for K in its new position.

Figure: Deleting the root’s key from a heap. The key to be deleted is swapped with the last key
after which the smaller tree is “heapified” by exchanging the new key in its root with the larger
key in its children until the parental dominance requirement is satisfied.
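
The three steps translate directly into array operations. The sketch below is illustrative (1-based array, function name my own) and reuses the same sift-down idea as the bottom-up construction.

# Sketch: delete and return the maximum key of a heap stored in the 1-based array H.
def heap_delete_max(H):
    n = len(H) - 1
    H[1], H[n] = H[n], H[1]                     # Step 1: exchange the root's key with the last key
    largest = H.pop()                           # Step 2: decrease the heap's size by 1
    n -= 1
    if n == 0:
        return largest
    k, v = 1, H[1]
    while 2 * k <= n:                           # Step 3: sift the new root key down
        j = 2 * k
        if j < n and H[j] < H[j + 1]:           # larger of the two children
            j += 1
        if v >= H[j]:
            break
        H[k] = H[j]
        k = j
    H[k] = v
    return largest

H = [None, 9, 6, 8, 2, 5, 7]
print(heap_delete_max(H), H)                    # 9 [None, 8, 6, 7, 2, 5]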

Heapsort

Heapsort is a two-stage algorithm that works as follows.

Stage 1 (heap construction): Construct a heap for a given array.


Stage 2 (maximum deletions): Apply the root-deletion operation n − 1 times to the
remaining heap.
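
Putting the two stages together gives heapsort in a few lines. This self-contained sketch is illustrative (not the textbook’s code): it builds the heap bottom-up and then, instead of physically shrinking the array, swaps each deleted maximum into the vacated last position, which sorts the keys in nondecreasing order in place.

# Sketch: heapsort over a 1-based array (H[0] unused), following the two stages above.
def heapsort(keys):
    H = [None] + list(keys)
    n = len(keys)

    def sift_down(k, size):                     # restore parental dominance for H[k] within H[1..size]
        v = H[k]
        while 2 * k <= size:
            j = 2 * k
            if j < size and H[j] < H[j + 1]:
                j += 1
            if v >= H[j]:
                break
            H[k] = H[j]
            k = j
        H[k] = v

    for i in range(n // 2, 0, -1):              # Stage 1: bottom-up heap construction
        sift_down(i, n)
    for last in range(n, 1, -1):                # Stage 2: n - 1 root deletions
        H[1], H[last] = H[last], H[1]           # current maximum goes to its final place
        sift_down(1, last - 1)
    return H[1:]

print(heapsort([2, 9, 7, 6, 5, 8]))             # [2, 5, 6, 7, 8, 9]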


Figure: Sorting the array 2, 9, 7, 6, 5, 8 by heapsort


Analysis:
Since the heap construction stage of the algorithm is in O(n), we have to investigate just the time
efficiency of the second stage. For the number of key comparisons, C(n), needed for eliminating
the root keys from the heaps of diminishing sizes from n to 2, we get the following inequality:

C(n) ≤ 2⌊log2(n − 1)⌋ + 2⌊log2(n − 2)⌋ + . . . + 2⌊log2 1⌋ ≤ 2(n − 1) log2(n − 1) ≤ 2n log2 n

This means that C(n) ∈ O(n log n) for the second stage of heapsort. For both stages, we get O(n)
+ O(n log n) = O(n log n).
