Unit IV

Tree Data Structure

A tree is a non-linear abstract data type with a hierarchy-based structure. It


consists of nodes (where the data is stored) that are connected via links.
The tree data structure stems from a single node called a root node and has
subtrees connected to the root.

struct node
{
    int data;                    /* value stored in the node      */
    struct node *leftchild;      /* pointer to the left child     */
    struct node *rightchild;     /* pointer to the right child    */
};

Important Terms
Following are the important terms with respect to tree.
Tree Terminologies

Root
●​ In a tree data structure, the root is the first (topmost) node of the tree; every other node descends from it.
●​ A tree has exactly one root node.

Edge
●​ In a tree, the link connecting any two nodes is called an edge of the tree.
●​ A tree with N nodes always has N - 1 edges.

Parent
In a tree, the node that is the immediate predecessor of another node is called its parent; in other words, a node with a branch leading down to one or more successor nodes is a parent node.

Child
●​ A node that is the immediate descendant of another node is called a child of that node.
●​ In a general tree, a parent node may have any number of child nodes.
●​ Every node except the root node is a child node.

Siblings
In trees in the data structure, nodes that belong to the same parent are called siblings.

Leaf
●​ In a tree, a node with no children is known as a leaf node.
●​ Leaf nodes are also called external nodes or terminal nodes.
Internal nodes
●​ A node with at least one child is known as an internal node.
●​ In other words, every node other than a leaf node is an internal node.
●​ The root node is also counted as an internal node when the tree has more than one node.

Degree
●​ In the tree data structure, the total number of children of a node is called the
degree of the node.
●​ The highest degree of the node among all the nodes in a tree is called the
Degree of Tree.

Level
In tree data structures, the root node is said to be at level 0, and the root node's children
are at level 1, and the children of that node at level 1 will be level 2, and so on.
Height
●​ In a tree data structure, the height of a node is the number of edges on the longest path from that node down to a leaf.
●​ The height of the root node is called the "Height of the Tree".
●​ The height of every leaf node is 0.

Depth
●​ In a tree, the number of edges from the root node to a particular node is called the depth of that node.
●​ The number of edges on the longest path from the root node to a leaf node is known as the "Depth of the Tree".
●​ In the tree data structure, the depth of the root node is 0.
Path
●​ In a tree, the sequence of nodes and edges from one node to another node is called the path between those two nodes.
●​ The length of a path is the number of nodes on the path (equivalently, one more than the number of edges).

Subtree
In a tree, every child node, together with all of its descendants, recursively forms a subtree of its parent node.

Binary Tree
A binary tree in DSA (Data Structures and Algorithms) is a way to organize data in a hierarchical
structure. In a binary tree, each node has at most two children, called the left child and the right
child. The topmost node is called the root, and the nodes with no children are called leaves.
The basic idea of a binary tree is to have a parent-child relationship between nodes. Each node
can have a left and a right child, and this pattern continues down the tree. This structure makes
it easy to organize and find data quickly.
Types of Binary Tree in Data Structure
Let’s understand what are the different types of binary tree with examples:

●​ Full Binary Tree

A full binary tree is a tree where every node has either 0 or 2 children.

Example:

●​ Complete Binary Tree

A complete binary tree is a tree where all levels are fully filled except possibly the last level,
which is filled from left to right.
Example:

●​ Perfect Binary Tree

A perfect binary tree is a tree where all internal nodes have exactly two children and all leaf
nodes are at the same level.

Example:

●​ Balanced Binary Tree

A balanced binary tree is a tree where the height of the left and right subtrees of any node
differ by at most one.
Example:

●​ Degenerate (or Pathological) Binary Tree

A degenerate binary tree is a tree where each parent node has only one child. This makes
the tree look like a linked list.

Example:
Properties of Binary Tree (Height and Level Start with 0)

Basic Terminology
●​ **Node**: A basic unit containing data and links to child nodes.
●​ **Root**: The topmost node of the binary tree.
●​ **Child**: A node directly connected to another node when moving away from the root.
●​ **Parent**: A node that has a link to children nodes.
●​ **Leaf Node**: A node with no children.
●​ **Level**: The level of a node is the number of edges on the path from the root to that node.
Root is at level 0.
●​ **Height**: The height of a tree is the number of edges on the longest downward path from the
root to a leaf. A single node (root only) tree has height 0.

Important Properties

1. Maximum Nodes at Level l


At level l, the maximum number of nodes is:​
Max nodes at level l = 2^l

●​ Example:

●​ - At level 0: 2^0 = 1 node (root)


●​ - At level 1: 2^1 = 2 nodes
●​ - At level 2: 2^2 = 4 nodes

2. Maximum Total Nodes in a Binary Tree of Height h


N = 2^(h+1) - 1

●​ Example:

●​ - Height = 0: 2^(0+1) - 1 = 1 node


●​ - Height = 1: 2^(1+1) - 1 = 3 nodes
●​ - Height = 2: 2^(2+1) - 1 = 7 nodes

3. Minimum Possible Height for N Nodes


h = ceil(log2(N + 1)) - 1

●​ Example:

●​ - 1 node: height = ceil(log2(2)) - 1 = 0

●​ - 3 nodes: height = ceil(log2(4)) - 1 = 1
●​ - 7 nodes: height = ceil(log2(8)) - 1 = 2

4. Number of Leaf Nodes


L=N-I

In a full binary tree: L = I + 1


●​ Example:

●​ - In a full binary tree with 3 internal nodes, leaf nodes = 3 + 1 = 4

5. Relationship Between Internal Nodes and Total Nodes (Full Binary Tree)


In a full binary tree with I internal nodes: N = 2I + 1

●​ Example:

●​ - If a full binary tree has 5 internal nodes, total nodes = 2 × 5 + 1 = 11

Important Notes
●​ **Perfect Binary Tree**: All internal nodes have 2 children and all leaves are at the same level.
●​ **Complete Binary Tree**: All levels are completely filled except possibly the last, which is filled
from left to right.
●​ **Skewed Binary Tree**: Every parent has only one child.
●​ - Left-skewed: each node has only a left child.
●​ - Right-skewed: each node has only a right child.

Example Tree
Consider this binary tree:​

A​
/ \​
B C​
/ \​
D E

●​ Levels:

●​ - Level 0: A
●​ - Level 1: B, C
●​ - Level 2: D, E

●​ Height: 2 (root A to D or A to E has 2 edges)


●​ Nodes: 5
●​ Leaf Nodes: 3 (C, D, E)
●​ Internal Nodes: 2 (A, B)
●​ Checking properties:

●​ - Max nodes at level 2 = 2^2 = 4


●​ - Max nodes possible for height 2 = 2^(2+1) - 1 = 7
●​ - Current nodes = 5 (< 7, so it’s not a perfect tree)
Representation of Binary Trees
There are two primary ways to represent binary trees:
1.​ Linked Node Representation
2.​ Array Representation

1. Linked Node Representation


This is the simplest way to represent a binary tree. Each node contains data and pointers to its
left and right children.​
This is the most commonly used representation for binary trees; its main advantages are given below.

In this method, each node is represented as a structure or object with three fields:​

1. Data (value of the node)

2. Pointer to the left child

3. Pointer to the right child

This method is space-efficient and ideal for dynamic binary trees where the number of nodes is
not fixed.
Advantages:
●​ It can easily grow or shrink as needed, so it uses only the memory it needs.
●​ Adding or removing nodes is straightforward and requires only pointer adjustments.
●​ Only uses memory for the nodes that exist, making it efficient for sparse trees.
Disadvantages:
●​ Needs extra memory for pointers.
●​ Finding a node can take longer because you have to start from the root and follow
pointers.

Create/Declare a Node of a Binary Tree:

Structure Example in C++:


struct Node {
​ char data;
​ Node* left;
​ Node* right;
};
Creating/Declaring a Node in C++ (Turbo C++)
In Turbo C++, a binary tree node is typically declared using a structure (struct). Each node contains
three parts:​
- The data​
- A pointer to the left child​
- A pointer to the right child

Example Code:

#include <stdlib.h>   /* for malloc and NULL */

struct Node {
    int data;
    struct Node* left;
    struct Node* right;
};

// Creating a node in main or a function
Node* createNode(int value) {
    Node* newNode = (Node*)malloc(sizeof(Node));
    newNode->data = value;
    newNode->left = NULL;
    newNode->right = NULL;
    return newNode;
}

2. Array Representation

Array Representation is another way to represent binary trees, especially

useful when the tree is complete (all levels are fully filled except possibly

the last, which is filled from left to right). In this method:

●​ The tree is stored in an array.

●​ For any node at index i:

○​ Left Child: Located at 2 * i + 1


○​ Right Child: Located at 2 * i + 2

●​ Root Node: Stored at index 0

Or

In this method, a binary tree is stored in a linear array. The root node is

stored at index 0 (or 1 in some implementations). The left and right

children of a node at index i are stored at the following positions:

· Left Child Index = 2*i + 1

· Right Child Index = 2*i + 2

· Parent Index = (i - 1) / 2
Example:

Consider the binary tree:​

​ A​

​ / \​

​ B C​

​ / \​

​ D E​

Array representation (starting from index 0): [A, B, C, D, E]
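
These index formulas can be captured in a few helper functions. A minimal C++ sketch using 0-based indexing and the example array above (the names are illustrative):

#include <iostream>

// Index helpers for a binary tree stored in an array (0-based indexing).
int leftChild(int i)  { return 2 * i + 1; }
int rightChild(int i) { return 2 * i + 2; }
int parentOf(int i)   { return (i - 1) / 2; }

int main() {
    char tree[] = {'A', 'B', 'C', 'D', 'E'};   // the example tree above
    int n = 5;
    int i = 1;                                  // node 'B'
    std::cout << "Node: " << tree[i] << "\n";                         // B
    std::cout << "Parent: " << tree[parentOf(i)] << "\n";             // A
    if (leftChild(i) < n)
        std::cout << "Left child: " << tree[leftChild(i)] << "\n";    // D
    if (rightChild(i) < n)
        std::cout << "Right child: " << tree[rightChild(i)] << "\n";  // E
    return 0;
}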

Comparison of Linked List vs Array Representation of Binary Tree

Feature                  | Linked Node Representation        | Array Representation
-------------------------|-----------------------------------|--------------------------------------
Memory Usage             | Uses extra memory for pointers    | Efficient for complete trees
Ease of Implementation   | Simple and intuitive              | Simple but limited to complete trees
Dynamic Size             | Easily grows and shrinks          | Not flexible for dynamic sizes
Access Time              | Slower access for specific nodes  | Fast access using indices
Suitable For             | Any binary tree structure         | Complete binary trees
Binary Tree Traversals

A binary tree is a data structure in which each node has at most two
children, referred to as the left child and the right child.

Traversing a binary tree means visiting all the nodes in a specific order.
There are three main types of depth-first traversal methods:

1.​ Inorder Traversal (Left, Root, Right)​

2.​ Preorder Traversal (Root, Left, Right)​

3.​ Postorder Traversal (Left, Right, Root)​

1. Inorder Traversal (L, Root, R)


In this traversal method, the left subtree is visited first, then the root and
later the right sub-tree. We should always remember that every node may
represent a subtree itself.

If a binary search tree is traversed in-order, the output will produce its key
values in ascending (sorted) order.

Steps:

●​ Traverse the left subtree​

●​ Visit the root node​

●​ Traverse the right subtree​


●​ We start from A, and following in-order traversal, we move to
its left subtree B. B is also traversed in-order. The process goes
on until all the nodes are visited. The output of in-order
traversal of this tree will be −
●​ D B E A F C G

Algorithm
Until all nodes are traversed −

Step 1 − Recursively traverse left subtree.


Step 2 − Visit root node.
Step 3 − Recursively traverse right subtree.
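
Following these steps, in-order traversal is a short recursive function. A minimal C++ sketch, assuming the Node structure used in the linked-node representation section and that <iostream> is included:

// In-order traversal: Left, Root, Right
void inorder(Node* root) {
    if (root == NULL) return;        // empty subtree: nothing to visit
    inorder(root->left);             // Step 1: recursively traverse left subtree
    std::cout << root->data << " ";  // Step 2: visit root node
    inorder(root->right);            // Step 3: recursively traverse right subtree
}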

Example 1:

A
/\
B C
/\
D E

Inorder: D B E A C
Example 2:

1
/\
2 3
/\
4 5

Inorder: 4 2 5 1 3

2. Preorder Traversal (Root, L, R)


In this traversal method, the root node is visited first, then the left subtree
and finally the right subtree.

We start from A, and following pre-order traversal, we first visit A itself and
then move to its left subtree B. B is also traversed pre-order. The process
goes on until all the nodes are visited. The output of pre-order traversal of
this tree will be −

A→B→D→E→C→F→G

Steps:

●​ Visit the root node​

●​ Traverse the left subtree​

●​ Traverse the right subtree

Algorithm
Until all nodes are traversed −

Step 1 − Visit root node.


Step 2 − Recursively traverse left subtree.
Step 3 − Recursively traverse right subtree.
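
The same pattern gives pre-order traversal; only the position of the "visit" step changes. A minimal C++ sketch under the same assumptions as the in-order sketch:

// Pre-order traversal: Root, Left, Right
void preorder(Node* root) {
    if (root == NULL) return;
    std::cout << root->data << " ";  // Step 1: visit root node
    preorder(root->left);            // Step 2: recursively traverse left subtree
    preorder(root->right);           // Step 3: recursively traverse right subtree
}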

Example 1:

A
/\
B C
/\
D E

Preorder: A B D E C

Example 2:
1
/\
2 3
/\
4 5

Preorder: 1 2 4 5 3

3. Postorder Traversal (L, R, Root)


In this traversal method, the root node is visited last, hence the name. First
we traverse the left subtree, then the right subtree and finally the root node.

Steps:

●​ Traverse the left subtree​

●​ Traverse the right subtree​

●​ Visit the root node​


We start from A, and following post-order traversal, we first traverse the left subtree B. B is also
traversed post-order. The process goes on until all the nodes are visited. The output of
post-order traversal of this tree will be
●​ D → E → B → F → G → C → A

Algorithm
Until all nodes are traversed −

Step 1 − Recursively traverse left subtree.


Step 2 − Recursively traverse right subtree.
Step 3 − Visit root node.
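
Post-order follows the same pattern with the visit moved to the end. A minimal C++ sketch under the same assumptions:

// Post-order traversal: Left, Right, Root
void postorder(Node* root) {
    if (root == NULL) return;
    postorder(root->left);           // Step 1: recursively traverse left subtree
    postorder(root->right);          // Step 2: recursively traverse right subtree
    std::cout << root->data << " ";  // Step 3: visit root node
}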

Example 1:

A
/\
B C
/\
D E

Postorder: D E B C A

Example 2:

1
/\
2 3
/\
4 5

Postorder: 4 5 2 3 1
Diagram:

Binary Tree:

A
/\
B C
/\
D E

Summary Table:

Traversal Type | Order
---------------|-----------
Inorder        | D B E A C
Preorder       | A B D E C
Postorder      | D E B C A

Practice Examples:

Example 3:

F
/\
D G
/\
B E
/
A
●​ Inorder: A B D E F G​

●​ Preorder: F D B A E G​

●​ Postorder: A B E D G F​

Example 4:

M
/\
L Q
/\
N R

●​ Inorder: L M N Q R​

●​ Preorder: M L Q N R​

●​ Postorder: L N R Q M​

4. Level-order Traversal (Breadth-First)


In level-order traversal, the nodes are visited level by level from left to right. This
traversal uses a queue data structure to keep track of nodes.

Example:
For the tree with root 1, children 2 and 3, and nodes 4 and 5 as children of 2 (the same tree used in the earlier traversal examples), the level-order traversal is: 1, 2, 3, 4, 5

Steps:
●​ Visit the root: 1
●​ Visit the nodes at the next level: 2, 3
●​ Visit the nodes at the next level: 4, 5
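
Level-order traversal is naturally iterative and relies on a queue, as described above. A minimal C++ sketch, assuming the same Node structure and the standard <queue> and <iostream> headers:

#include <queue>

// Level-order (breadth-first) traversal using a queue.
void levelOrder(Node* root) {
    if (root == NULL) return;
    std::queue<Node*> q;
    q.push(root);                                        // start with the root
    while (!q.empty()) {
        Node* current = q.front();
        q.pop();
        std::cout << current->data << " ";               // visit the node
        if (current->left != NULL) q.push(current->left);    // enqueue children, left to right
        if (current->right != NULL) q.push(current->right);
    }
}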

These traversal techniques are foundational for understanding more


complex operations in binary trees and are widely used in various
applications such as expression tree evaluation, file system navigation,
and more.

Binary Tree Operations With Examples


Binary trees support several fundamental operations, including insertion, deletion,
searching, and traversal:

1. Insertion
Insertion involves adding a new node to the binary tree. In a binary tree, a new node
is usually inserted at the first available position in level order to maintain the
completeness of the tree.

Example:
Let's insert the value 6 into the following binary tree:

After insertion, the binary tree will look like this:
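
The "first available position in level order" rule can be implemented with a queue, just like level-order traversal. A minimal, illustrative C++ sketch (it assumes an int-valued Node with left/right pointers, as declared earlier in these notes):

#include <queue>

// Insert a new value at the first vacant child position found in level order.
Node* insertLevelOrder(Node* root, int value) {
    Node* newNode = new Node;
    newNode->data = value;
    newNode->left = newNode->right = NULL;
    if (root == NULL) return newNode;          // empty tree: new node becomes the root

    std::queue<Node*> q;
    q.push(root);
    while (!q.empty()) {
        Node* current = q.front();
        q.pop();
        if (current->left == NULL) { current->left = newNode; break; }
        q.push(current->left);
        if (current->right == NULL) { current->right = newNode; break; }
        q.push(current->right);
    }
    return root;
}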

2. Deletion
Deletion involves removing a node from the binary tree. In a binary tree, the node to
be deleted is replaced by the deepest and rightmost node to maintain the tree's
structure.
Example:
Let's delete the value 3 from the following binary tree:

After deletion, the binary tree will look like this:

3. Search
Searching involves finding a node with a given value in the binary tree. The search
operation can be implemented using any traversal method (in-order, pre-order,
post-order, or level-order).

Example:
Let's search for the value 5 in the following binary tree:
Using level-order traversal:

●​ Visit node 1
●​ Visit node 2
●​ Visit node 3
●​ Visit node 4
●​ Visit node 5 (found)

4. Traversal
Traversal involves visiting all the nodes in the binary tree in a specific order. The
main traversal methods are in-order, pre-order, post-order, and level-order.

Example:
Consider the following binary tree:
●​ In-order Traversal (Left, Root, Right): 4, 2, 5, 1, 3
●​ Pre-order Traversal (Root, Left, Right): 1, 2, 4, 5, 3
●​ Post-order Traversal (Left, Right, Root): 4, 5, 2, 3, 1
●​ Level-order Traversal (Breadth-First): 1, 2, 3, 4, 5

Time and Space Complexity of Binary Tree

Operation                                    | Worst Case Time Complexity | Space Complexity
---------------------------------------------|----------------------------|------------------------------------------
Insertion                                    | O(n)                       | O(n)
Deletion                                     | O(n)                       | O(n)
Search                                       | O(n)                       | O(n)
Traversal (In-order, Pre-order, Post-order)  | O(n)                       | O(h), where h is the height of the tree

Applications of Binary Trees with Examples


Binary trees have a wide range of applications in computer science and various
algorithms:

●​ Binary Search Trees (BST): Efficient searching, insertion, and deletion.


●​ Heaps: Implement priority queues with efficient maximum or minimum retrieval.
●​ Expression Trees: Represent and evaluate arithmetic expressions.
●​ Huffman Coding Tree: Data compression using optimal prefix codes.
●​ AVL Trees: Self-balancing BSTs for efficient operations.
●​ B-Trees: Used in databases and file systems for balanced storage.
●​ Decision Trees: Machine learning models for classification and regression.

Threaded Binary tree

What is a Threaded Binary tree?


In a Threaded Binary Tree, the nodes will store the in-order predecessor/successor instead of
storing NULL in the left/right child pointers.
So the basic idea of a threaded binary tree is that for the nodes whose right pointer is null, we
store the in-order successor of the node (if-exists), and for the nodes whose left pointer is null,
we store the in-order predecessor of the node(if-exists).
One thing to note is that the leftmost and the rightmost child pointer of a tree always points to
null as their in-order predecessor and successor do not exist.

To understand this, let’s look at an example of a Threaded Binary Tree.


In the above-given figure, the right pointer of node value 6 points to 9.
○​ Now, we will check the left pointer of node 9, and if it is NULL, then we modify the
reference to the predecessor of the node, which is 6.
○​ Then, we will check for the right pointer, and if it is also NULL, we will point it to the
successor node, which is 10. Thus, it will point to 10.
○​ We point to in-order predecessors/successors in left/right pointers using threads
(denoted by dotted lines) as shown in the figure above, and that is why it is known as a
Threaded Binary Tree.
○​ Since the leftmost pointer of this tree is the left pointer of node value 5 and the rightmost
pointer of this tree is the right pointer of node value 20, they both point to NULL.​

The main idea behind setting such a structure is to make the inorder and preorder traversal of a
binary tree faster without using any additional Data structure (e.g. auxiliary stack) or memory for
its traversal.
What is the need for a Threaded Binary Tree?

Purpose / Need for Threaded Binary Tree

1.​ Space Optimization​

○​ Makes use of null pointers for threading instead of external stacks.​

○​ Reduces memory usage during traversal.​

2.​ Time Efficiency​

○​ Avoids backtracking to ancestors.​

○​ Enables quicker traversal through direct links.​

3.​ Simplification of Tree Operations​


○​ Makes insertion, deletion, and traversal easier by maintaining a
sequence of nodes.​

4.​ Non-recursive Traversals​

○​ Supports in-order and pre-order traversal without recursion.​

○​ Useful where stack memory is limited or recursion is inefficient.​


Types of Threaded Binary tree


There are two types of Threaded Binary Trees:

1.​ One-way Threaded Binary Tree


2.​ Two-way Threaded Binary Tree

1. One-way Threaded Binary Tree


In this type, if a node has a right null pointer, then this right pointer is threaded towards
the in-order successor’s node if it exists.

Node Structure of Single-Threaded Binary Trees


The structure of a node in a binary threaded tree is quite similar to that of a binary tree,
but with some modifications. In threaded binary trees, we need to use extra boolean
variables in the node structure. For single-threaded binary trees, we use only the
rightThread variable.

struct Node {
    int value;
    Node* left;
    Node* right;
    bool rightThread;   // true if 'right' is a thread to the in-order successor
};

The following diagram depicts an example of a Single-Threaded Binary Tree. Dotted


lines represent threads.
In the figure given above, you can observe the node value 20 does not have any child nodes.
So, the right pointer of node value 20 is null and therefore, it is pointing to its in-order successor
(node value 30) through a thread. Similarly, the other nodes of this tree containing a right null
pointer refer to their in-order successor, as shown.

2. Two-way Threaded Binary Tree


In this type, the left null pointer of a node is made to point towards the in-order
predecessor node and the right null pointer is made to point towards the in-order
successor node.

Node Structure of Double-Threaded Binary Trees


For the double-threaded binary tree, we use two boolean variables: rightThread and
leftThread.

struct Node {
    int value;
    Node* left;
    Node* right;
    bool rightThread;   // true if 'right' points to the in-order successor
    bool leftThread;    // true if 'left' points to the in-order predecessor
};

Here, the leftThread and rightThread boolean variables help us to differentiate
whether the left/right pointer stores the in-order predecessor/successor or left child/right
child.
Let’s look at an example to understand this.
This is how a Double-Threaded Binary Tree looks. You can observe here that node value 40
has no left or right child. So, its left pointer points to its in-order predecessor (node value 30)
and its right pointer points to its in-order successor (node value 50). Similarly, the other
nodes of this tree with a left/right null pointer refer to their in-order predecessor/successor
using threads.

Here is the in-order traversal of a threaded binary tree:

1.​ Initialize: Set current to the leftmost node of the tree using the
Leftmost(root) function​

2.​ Traversal Loop: While current is not null, repeat the following steps​

3.​ Process Node: Process the current node by performing the desired action
using the ProcessNode function​

4.​ Move to In-Order Successor:


○​ If the current node has a right thread: Set current to the node's
right thread​

○​ Else: Set current to the leftmost node in the right subtree using
Leftmost(current.rightChild)​

5.​ Repeat: Go back to step 2 until all nodes are visited​

6.​ Finish: The algorithm completes when all nodes in the threaded binary tree
are processed
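
Following the steps above, the traversal needs no stack and no recursion. A minimal C++ sketch, assuming the single-threaded Node structure shown earlier (value, left, right, rightThread), where rightThread is true when right points to the in-order successor; leftmost() is an illustrative helper, not a standard routine:

// Leftmost node reachable from n by following ordinary left-child links.
Node* leftmost(Node* n) {
    if (n == NULL) return NULL;
    while (n->left != NULL)
        n = n->left;
    return n;
}

// In-order traversal of a single-threaded binary tree (no stack, no recursion).
void inorderThreaded(Node* root) {
    Node* current = leftmost(root);            // Step 1: start at the leftmost node
    while (current != NULL) {                  // Step 2: loop until every node is processed
        std::cout << current->value << " ";    // Step 3: process the current node
        if (current->rightThread)              // Step 4a: follow the thread to the successor
            current = current->right;
        else                                   // Step 4b: otherwise go to the leftmost node
            current = leftmost(current->right); //          of the right subtree
    }
}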
Advantages of Threaded Binary Tree
Let’s discuss some advantages of a Threaded Binary tree.

○​ No need for stacks or recursion: Unlike binary trees, threaded binary trees
do not require a stack or recursion for their traversal.
○​ Optimal memory usage: Another advantage of threaded binary tree data
structure is that it decreases memory wastage. In normal binary trees,
whenever a node’s left/right pointer is NULL, memory is wasted. But with
threaded binary trees, we are overcoming this problem by storing its inorder
predecessor/successor.
○​ Time complexity: In-order traversal of a threaded binary tree is fast
because the next node is reached in O(1) time, whereas in a normal binary
tree finding the in-order successor can take O(height) time. However,
insertion and deletion operations take more time in a threaded binary tree.
○​ Backward traversal: In a double-threaded binary tree, we can even do a
backward traversal.

Disadvantages of Threaded Binary tree


Let’s discuss some disadvantages that might create a problem for a programmer using
this.

○​ Complicated insertion and deletion: By storing the inorder predecessor/


successor for the node with a null left/right pointer, we make the insertion
and deletion of a node more time-consuming and a highly complex process.
○​ Extra memory usage: We use additional memory in the form of rightThread
and leftThread variables to distinguish between a thread from an ordinary
link. (However, there are more efficient methods to differentiate between a
thread and an ordinary link).

Applications of Threaded Binary Tree


There are some applications of threaded binary trees:

●​ Efficient In-Order Traversal: Threaded binary trees enhance in-order


traversal efficiency by eliminating the need for recursion or stacks​
●​ Space Efficiency: Threaded trees save space by eliminating null pointers,
reducing memory overhead​

●​ Simplified Tree Traversal: Threaded structures simplify and streamline tree


traversal algorithms​

●​ Fast Searching and Retrieval: Threaded binary trees enable faster


navigation, improving the performance of search operations​

●​ Threaded Tree-based Iterators: Threaded trees are useful for


implementing efficient iterators for various tree traversal orders​

●​ Binary Search Tree Operations: Threaded trees enhance efficiency in


operations like finding minimum/maximum elements or
predecessors/successors

Heap

Introduction

A heap is a special type of tree-based data structure where the tree is a complete binary
tree. They follow a special rule: in every node, the value must be either bigger than or
the same as its children's values (this is called a max-heap) or smaller than or equal to
its children's values (this is called a min-heap). This makes heaps perfect for creating
priority queues, where you always need quick access to the largest or smallest element,
which is kept at the top of the tree. Heaps are used in many popular algorithms like
Dijkstra's Shortest Path and Heap Sort.

Heap Data Structure – Key Points

📌 Definition
●​ A Heap is a special tree-based data structure that satisfies the heap property.​

●​ There are two types of heaps:​

○​ Max Heap: The value of each node is greater than or equal to the values
of its children.​

○​ Min Heap: The value of each node is less than or equal to the values of
its children.​

🌳 Structure
●​ A heap is a nearly complete binary tree.​

●​ All levels are fully filled except possibly the last, which is filled from left to
right.​

●​ Heaps are usually represented as arrays, with parent-child relationships defined


by index.​

🔄 Heap Property
●​ The highest-priority (max or min) element is always at the top (root) of the
heap.​

●​ This property ensures efficient priority access.​

Basics of Heap Data Structure


Now that we understand what heaps are & some of their applications, let's understand
the basics of how heaps work.​

As mentioned earlier, a heap is a complete binary tree. This means that all levels of the
tree are fully filled, except possibly the last level, which is filled from left to right. This
property allows heaps to be efficiently stored in an array.​

In an array representation of a heap, the first element (index 0) is the root of the heap.
For any element at index i:

●​ Its left child is at index 2i + 1​

●​ Its right child is at index 2i + 2​

●​ Its parent is at index floor((i - 1) / 2)​

The heap property (either max-heap or min-heap) is what distinguishes a heap from a
regular binary tree. In a max-heap, for any given node C, if P is a parent node of C, then
the key (the value) of P is greater than or equal to the key of C. In a min-heap, the key
of P is less than or equal to the key of C.​

This property is crucial for efficient insertion & deletion operations, as it allows us to
maintain the heap structure while only comparing a node with its parent or children.

Another important property of heaps is that they are not sorted structures. The only
guarantee is that the root element is either the maximum (in a max heap) or the
minimum (in a min-heap). The other elements are not in any particular order.

Types of Heaps


There are two main types of heaps: max heaps & min heaps.​

1. Max Heap: In a max heap, the key at the root node must be the largest among all the
keys present in the heap. The same property must be recursively true for all sub-trees in
the binary tree.
2. Min Heap: In a min heap, the key at the root node must be the smallest among all the
keys present in the heap. The same property must be recursively true for all sub-trees in
the binary tree.
The min-heap property is useful for algorithms like Dijkstra's Shortest Path, while the
max-heap property is useful for algorithms like Heap Sort.​
It's important to note that there is no ordering between siblings in a heap - the only
requirement is that the parent node must be greater than or equal to (in a max-heap) or less
than or equal to (in a min-heap) both of its children.​

Heaps are usually implemented with the help of arrays, with the array indices used to
implicitly represent the tree structure based on the parent-child relationships. For a node
at index i, its children are at indices 2i+1 & 2i+2, while its parent (if any) is at index
floor((i-1)/2).

Heap Operations

Operations supported by min – heap and max – heap are same. The difference is just that
min-heap contains minimum element at root of the tree and max – heap contains maximum
element at the root of the tree.

Heapify:

It is the process to rearrange the elements to maintain the property of heap data structure. It is
done when root is removed (we replace root with the last node and then call heapify to ensure
that heap property is maintained) or heap is built (we call heapify from the last internal node to
root) to make sure that the heap property is maintained. This operation also takes O(log n) time.

For max-heap, it makes sure the maximum element is the root of that binary tree and all
descendants also follow the same property.

For min-heap, it balances in such a way that the minimum element is the root and all
descendants also follow the same property.
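
For an array-based heap, heapify is usually written as a "sift-down" that repeatedly swaps a node with its larger child. A minimal C++ sketch for a max-heap (function and variable names are illustrative):

// Sift-down heapify for a max-heap stored in arr[0..n-1], rooted at index i.
void heapify(int arr[], int n, int i) {
    int largest = i;
    int left = 2 * i + 1;
    int right = 2 * i + 2;

    if (left < n && arr[left] > arr[largest]) largest = left;
    if (right < n && arr[right] > arr[largest]) largest = right;

    if (largest != i) {               // a child is larger than the current node
        int tmp = arr[i];             // swap it up
        arr[i] = arr[largest];
        arr[largest] = tmp;
        heapify(arr, n, largest);     // continue sifting down the affected subtree
    }
}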

Insertion:

1.​ Insertion​

○​ The new element is added at the end (bottom) of the heap.​

○​ It's compared to its parent and swapped up if needed (heapify-up)


When a new element is inserted into the heap, it can disrupt the heap’s properties. To restore
and maintain the heap structure, a heapify operation is performed. This operation ensures the
heap properties are preserved and has a time complexity of O(log n).

Examples:

Assume initially heap(taking max-heap) is as follows

8​
/ \​
4 5​
/ \​
1 2

Now if we insert 10 into the heap​


8​
/ \​
4 5​
/ \ /​
1 2 10

After repeatedly comparing with the parent node and swapping where required, the final heap will
look like this:
10
/ \
4 8
/ \ /
1 2 5
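
The "compare with the parent and swap up" step from this example can be sketched as follows for an array-based max-heap (a sketch only; the caller is assumed to ensure the array has room for one more element):

// Insert value into a max-heap stored in arr[0..n-1]; returns the new size n + 1.
int heapInsert(int arr[], int n, int value) {
    int i = n;                                     // place the new element at the end
    arr[i] = value;
    while (i > 0 && arr[(i - 1) / 2] < arr[i]) {   // heapify-up: is the parent smaller?
        int tmp = arr[(i - 1) / 2];                // swap with the parent
        arr[(i - 1) / 2] = arr[i];
        arr[i] = tmp;
        i = (i - 1) / 2;                           // move up to the parent's index
    }
    return n + 1;
}

For the example above, inserting 10 into [8, 4, 5, 1, 2] first places 10 at index 5, swaps it with 5, then with 8, giving [10, 4, 8, 1, 2, 5].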

Deletion:

2.​ Extraction (Remove Top Element)​

○​ The root is removed and replaced with the last element.​

○​ The new root is compared with its children and swapped down if
needed (heapify-down).​
If we delete the element from the heap it always deletes the root element of the tree and
replaces it with the last element of the tree.

Since we delete the root element from the heap it will distort the properties of the heap so we
need to perform heapify operations so that it maintains the property of the heap.

It takes O(log n) time.

Example:

Assume initially heap(taking max-heap) is as follows​


15​
/ \​
5 7​
/ \​
2 3

Now if we delete 15 from the heap, it is temporarily replaced by the last leaf node of the tree (3).
3​
/ \​
5 7​
/ ​
2

After the heapify operation, the final heap will look like this:


7​
/ \​
5 3​
/ ​
2
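
Deleting the root (extract-max) combines "replace the root with the last element" with the heapify step. A sketch that reuses the heapify() function sketched earlier:

// Remove and return the maximum element of a max-heap stored in arr[0..n-1].
// After the call the heap occupies arr[0..n-2].
int extractMax(int arr[], int n) {
    int maxValue = arr[0];     // the root holds the maximum
    arr[0] = arr[n - 1];       // move the last element to the root
    heapify(arr, n - 1, 0);    // sift it down to restore the heap property
    return maxValue;
}

For the example above, extractMax on [15, 5, 7, 2, 3] returns 15 and leaves the heap [7, 5, 3, 2].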

getMax (For max-heap) or getMin (For min-heap):

It finds the maximum element or minimum element for max-heap and min-heap respectively
and as we know minimum and maximum elements will always be the root node itself for
min-heap and max-heap respectively. It takes O(1) time.

removeMin or removeMax:

This operation returns and deletes the maximum element and minimum element from the
max-heap and min-heap respectively. In short, it deletes the root element of the heap binary
tree.
Binary Search Tree(BST)
Binary search tree is a data structure that quickly allows us to maintain a
sorted list of numbers.

●​ It is called a binary tree because each tree node has a maximum of two
children.
●​ It is called a search tree because it can be used to search for the
presence of a number in O(log n) time (when the tree is balanced).

The properties that separate a binary search tree from a regular binary tree is

1.​ All nodes of left subtree are less than the root node
2.​ All nodes of right subtree are more than the root node
3.​ Both subtrees of each node are also BSTs i.e. they have the above two
properties
A tree having a right subtree with one value smaller than the root is shown to demonstrate that it is
not a valid binary search tree

The binary tree on the right isn't a binary search tree because the right
subtree of the node "3" contains a value smaller than it.

There are two basic operations that you can perform on a binary search tree:

Search Operation
The algorithm depends on the property of BST that if each left subtree has
values below root and each right subtree has values above the root.

If the value is below the root, we can say for sure that the value is not in the
right subtree; we need to only search in the left subtree and if the value is
above the root, we can say for sure that the value is not in the left subtree; we
need to only search in the right subtree.
Definition:

Search operation in a Binary Search Tree (BST) involves finding whether a


particular value exists in the tree.​

In BST, for each node:​
- Left subtree has nodes with values less than the node.​
- Right subtree has nodes with values greater than the node.​

This property allows efficient searching (O(log n) time in balanced BST).

📘 Algorithm for Searching in a Binary


Search Tree (BST):

Step-by-step (Recursive method):​


1. Start at the root node.​
2. If root is NULL, the value is not found.​
3. If value == root.data, return found.​
4. If value < root.data, search in left subtree.​
5. If value > root.data, search in right subtree.

📌 Algorithm (Pseudocode):
Function Search(root, value):​
​ If root is NULL:​
​ Return false​
​ If value == root.data:​
​ Return true​
​ If value < root.data:​
​ Return Search(root.left, value)​
​ Else:​
​ Return Search(root.right, value)

OR

Algorithm:

If root == NULL
    return NULL;
If number == root->data
    return root->data;
If number < root->data
    return search(root->left, number)
Else  // number > root->data
    return search(root->right, number)

Step-by-Step Search Example (Search for 60)

Inserted BST Values: 50, 30, 70, 20, 40, 60, 80

The BST looks like:


50
/ \
30 70
/ \ / \
20 40 60 80

Now let’s search for 60:

1.​ Start at the root node 50.​


→ 60 > 50, so we move to the right subtree.​

2.​ Current node: 70​


→ 60 < 70, so we move to the left subtree.​

3.​ Current node: 60​


→ 60 == 60. Element found!

If the value is not found, we eventually reach the left or right child of a leaf
node which is NULL and it gets propagated and returned.
Insert Operation
Inserting a value in the correct position is similar to searching because we try
to maintain the rule that the left subtree is lesser than root and the right
subtree is larger than root.

We keep going to either right subtree or left subtree depending on the value
and when we reach a point left or right subtree is null, we put the new node
there.

Insertion in a Binary Tree means adding a new node to the tree in such a way
that the structure of the tree remains a Binary Tree, i.e., each node has at
most two children.​

In Binary Search Tree (BST) specifically, insertion follows an ordered rule:​
- Left subtree has nodes with values less than the root.​
- Right subtree has nodes with values greater than the root.

Algorithm:

Algorithm for Insertion in a Binary Search


Tree (BST):

Step-by-step (Recursive method):​


1. Start at the root.​
2. If the tree is empty, the new node becomes the root.​
3. If the new value is less than the current node:​
- Move to the left subtree.​
- If left is NULL, insert the new node there.​
4. If the new value is greater than the current node:​
- Move to the right subtree.​
- If right is NULL, insert the new node there.​
5. Repeat the process recursively until the node is inserted.

OR

If node == NULL
return createNode(data)
if (data < node->data)
node->left = insert(node->left, data);
else if (data > node->data)
node->right = insert(node->right, data);
return node;

Example: Insert the following values into


an empty BST
Values to insert: 50, 30, 70, 20, 40, 60, 80

🔢 Step-by-step Construction:
1. Insert 50:​
→ Tree is empty, so 50 becomes the root.​

50

2. Insert 30:​
→ 30 < 50 → insert to the left of 50.​

50​
/​
30

3. Insert 70:​
→ 70 > 50 → insert to the right of 50.​

50​
/ \​
30 70

4. Insert 20:​
→ 20 < 50 → go left → 20 < 30 → insert to the left of 30.​

50​
/ \​
30 70​
/​
20

5. Insert 40:​
→ 40 < 50 → go left → 40 > 30 → insert to the right of 30.​

50​
/ \​
30 70​
/ \​
20 40

6. Insert 60:​
→ 60 > 50 → go right → 60 < 70 → insert to the left of 70.​

50​
. / \​
30 70​
/ \ /​
20 40 60

7. Insert 80:​
→ 80 > 50 → go right → 80 > 70 → insert to the right of 70.​

50​
. / \​
30 70​
/ \ / \​
20 40 60 80

🌿 Final Tree (In-Order Traversal Output):


20 30 40 50 60 70 80
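
Putting the recursive insertion rule together with an in-order traversal reproduces this construction and prints the sorted output. A self-contained C++ sketch (the struct name BSTNode and the helper names are illustrative):

#include <iostream>
#include <cstddef>   // for NULL

struct BSTNode {
    int data;
    BSTNode* left;
    BSTNode* right;
};

BSTNode* createNode(int value) {
    BSTNode* node = new BSTNode;
    node->data = value;
    node->left = node->right = NULL;
    return node;
}

// Standard recursive BST insertion (duplicate keys are ignored).
BSTNode* insert(BSTNode* node, int data) {
    if (node == NULL) return createNode(data);
    if (data < node->data)      node->left = insert(node->left, data);
    else if (data > node->data) node->right = insert(node->right, data);
    return node;
}

// In-order traversal prints the keys in ascending order.
void inorder(BSTNode* root) {
    if (root == NULL) return;
    inorder(root->left);
    std::cout << root->data << " ";
    inorder(root->right);
}

int main() {
    int values[] = {50, 30, 70, 20, 40, 60, 80};
    BSTNode* root = NULL;
    for (int i = 0; i < 7; i++)
        root = insert(root, values[i]);
    inorder(root);              // prints: 20 30 40 50 60 70 80
    std::cout << std::endl;
    return 0;
}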
Deletion in Binary Search Tree (BST)

In a Binary Search Tree (BST), each node has at most two children, and the
left child contains a value less than the parent node, while the right child
contains a value greater than the parent node. Deletion in a BST is a bit more
complex than insertion because the tree needs to maintain its sorted property
even after the node has been removed.

The deletion operation involves three primary cases:

1.​ Deleting a Leaf Node​

2.​ Deleting a Node with One Child​

3.​ Deleting a Node with Two Children​

The key challenge in deleting a node from a BST is ensuring that the tree
maintains its properties after the deletion. Let’s explore each of these cases in
detail.

Case I: Deleting a Leaf Node

A leaf node is a node that has no children. This is the simplest case of
deletion because there are no structural changes required to the tree other
than removing the node.

Steps for Deleting a Leaf Node:

1.​ Locate the node to be deleted.​

2.​ If the node is a leaf, simply remove it from the tree.​

3.​ Update the parent’s pointer to null, as the node no longer exists.​
Example:​
Consider the following tree:

50
/ \
30 70
/ \ / \
20 40 60 80

If we delete node 20 (a leaf node), it simply disappears from the tree:

50
/ \
30 70
\ / \
40 60 80

Case II: Deleting a Node with One Child

When a node has only one child, the process is a bit more complex, but still
straightforward. Instead of simply removing the node, we replace it with its
child.

Steps for Deleting a Node with One Child:

1.​ Find the node to be deleted.​

2.​ If the node has only one child, replace the node with its child.​

3.​ Remove the child from its original position.​


Example:​
Consider the following tree:

50
/ \
30 70
\ / \
40 60 80

If we delete node 30, which has only one child (40), the tree becomes:

50
/ \
40 70
/ \
60 80

Case III: Deleting a Node with Two Children

When a node has two children, the process of deletion is more complex. In
this case, we need to find a replacement for the node that preserves the
properties of the BST. This replacement is typically the inorder successor or
inorder predecessor of the node.

●​ The inorder successor is the smallest node in the right subtree of the
node.​

●​ The inorder predecessor is the largest node in the left subtree of the
node.​
The inorder successor is usually chosen for deletion because it’s guaranteed
to be greater than all the nodes in the left subtree but less than all the nodes
in the right subtree.

Steps for Deleting a Node with Two Children:

1.​ Find the node to be deleted.​

2.​ Find the inorder successor of the node (smallest node in the right
subtree).​

3.​ Replace the node to be deleted with its inorder successor.​

4.​ Remove the inorder successor from its original position.​

Example:​
Consider the following tree:
50
/ \
30 70
/ \ / \
20 40 60 80

If we delete node 50, which has two children, we find the inorder successor.
The inorder successor of 50 is 60 (the smallest node in the right subtree). So
we replace 50 with 60, and then we delete 60 from its original position.

The resulting tree looks like this:

60
/ \
30 70
/ \ \
20 40 80
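
All three cases can be handled by a single recursive function. A sketch, assuming the BSTNode layout from the earlier insertion sketch; the in-order successor is used for the two-children case, as described above:

// Smallest node in a subtree (its leftmost node) - used as the in-order successor.
BSTNode* minValueNode(BSTNode* node) {
    while (node->left != NULL)
        node = node->left;
    return node;
}

BSTNode* deleteNode(BSTNode* root, int key) {
    if (root == NULL) return NULL;

    if (key < root->data)
        root->left = deleteNode(root->left, key);     // key lies in the left subtree
    else if (key > root->data)
        root->right = deleteNode(root->right, key);   // key lies in the right subtree
    else {
        // Case I / Case II: node with no child or only one child
        if (root->left == NULL) {
            BSTNode* child = root->right;
            delete root;
            return child;
        }
        if (root->right == NULL) {
            BSTNode* child = root->left;
            delete root;
            return child;
        }
        // Case III: two children - copy the in-order successor's value, then delete it
        BSTNode* successor = minValueNode(root->right);
        root->data = successor->data;
        root->right = deleteNode(root->right, successor->data);
    }
    return root;
}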

Binary Search Tree Complexities

Time Complexity

Operation   | Best Case Complexity | Average Case Complexity | Worst Case Complexity
------------|----------------------|-------------------------|----------------------
Search      | O(log n)             | O(log n)                | O(n)
Insertion   | O(log n)             | O(log n)                | O(n)
Deletion    | O(log n)             | O(log n)                | O(n)

Here, n is the number of nodes in the tree.


Space Complexity
The space complexity for all the operations is O(n).

Binary Search Tree Applications


1.​ In multilevel indexing in the database
2.​ For dynamic sorting
3.​ For managing virtual memory areas in Unix kernel

AVL Tree

Definition

An AVL tree is a type of self-balancing binary search tree (BST). The key property of an AVL
tree is that for every node, the difference between the heights of the left and right subtrees
(known as the balance factor) is at most 1. This ensures that the tree remains balanced and
operations like insertion, deletion, and search can be performed in O(log n) time.

Theory

In a regular binary search tree (BST), the height of the left and right subtrees can differ
significantly, leading to unbalanced trees. In such unbalanced trees, operations such as
insertion and deletion could take O(n) time, where n is the number of nodes in the tree.

An AVL tree maintains balance by enforcing the following condition:

●​ Balance Factor (BF) of any node is the difference between the heights of its left and
right subtrees.​

●​ Balance Factor (BF) = Height of Left Subtree - Height of Right Subtree.​

If the balance factor of any node becomes greater than 1 or less than -1, the tree is rebalanced
using rotations.

Balance Factor

The balance factor (BF) of a node in an AVL tree is calculated as:


●​ Balance Factor (BF) = Height of Left Subtree - Height of Right Subtree​

●​ Balance Factor = 0: The node is balanced (both subtrees have equal height).​

●​ Balance Factor = 1: The left subtree is one level taller than the right subtree.​

●​ Balance Factor = -1: The right subtree is one level taller than the left subtree.​

●​ Balance Factor > 1: The node is left-heavy and requires a rotation to the right.​

●​ Balance Factor < -1: The node is right-heavy and requires a rotation to the left.​

Example of AVL Tree

Consider the following AVL tree (root 30 with children 20 and 40; node 20 has a left child 10, and node 40 has a right child 50):

●​ For node 10: Left height = 0, Right height = 0, BF = 0 - 0 = 0
●​ For node 20: Left height = 1, Right height = 0, BF = 1 - 0 = 1
●​ For node 50: Left height = 0, Right height = 0, BF = 0 - 0 = 0
●​ For node 40: Left height = 0, Right height = 1, BF = 0 - 1 = -1
●​ For node 30: Left height = 2, Right height = 2, BF = 2 - 2 = 0

This AVL tree is balanced because the balance factor of every node lies within the range -1 to 1.

AVL Tree Rotations


Rotations in AVL tree are operations performed to maintain the balance of the tree after
insertions or deletions.

There are four types of rotations: Left Rotation, Right Rotation, Left-Right Rotation, and
Right-Left Rotation. Each rotation helps to rebalance the tree when a node becomes
unbalanced.

1. Left Rotation
A left rotation is performed when a node's right subtree is heavier than its left subtree

(balance factor < -1). This rotation shifts weight to the left. In particular, when a node is added

into the right subtree of the right subtree and the tree gets out of balance, a single left rotation

restores it.

2. Right Rotation
A right rotation is performed when a node's left subtree is heavier than its right subtree
(balance factor > 1). This rotation shifts weight to the right. If a node is added to the left subtree
of the left subtree and the AVL tree gets out of balance, a single right rotation restores it.

3. Left-Right (LR) Rotation


A left-right (LR) rotation is performed when a node's left subtree has a right-heavy child, i.e. the
balance factor of the left child is less than zero. It is a combination of two rotations: first a left
rotation is applied to the left child, and then a right rotation is applied to the unbalanced node.

4. Right-Left (RL) Rotation


A right-left (RL) rotation is performed when a node's right subtree has a left-heavy child, i.e. the
balance factor of the right child is greater than zero. It is a combination of two rotations: first a
right rotation is applied to the right child, and then a left rotation is applied to the unbalanced node.
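
The two single rotations can be sketched as pointer manipulations on a node that stores its own subtree height. A minimal C++ sketch (the node layout, field names, and helper names are illustrative, not taken from these notes):

struct AVLNode {
    int key;
    int height;          // height of the subtree rooted here (a single leaf has height 1)
    AVLNode* left;
    AVLNode* right;
};

int height(AVLNode* n)        { return n ? n->height : 0; }
int balanceFactor(AVLNode* n) { return n ? height(n->left) - height(n->right) : 0; }
int maxOf(int a, int b)       { return a > b ? a : b; }

// Right rotation: applied when a node is left-heavy (balance factor > 1).
AVLNode* rightRotate(AVLNode* y) {
    AVLNode* x = y->left;
    AVLNode* t2 = x->right;
    x->right = y;            // y becomes the right child of x
    y->left = t2;            // x's old right subtree is reattached under y
    y->height = maxOf(height(y->left), height(y->right)) + 1;
    x->height = maxOf(height(x->left), height(x->right)) + 1;
    return x;                // x is the new root of this subtree
}

// Left rotation: applied when a node is right-heavy (balance factor < -1).
AVLNode* leftRotate(AVLNode* x) {
    AVLNode* y = x->right;
    AVLNode* t2 = y->left;
    y->left = x;
    x->right = t2;
    x->height = maxOf(height(x->left), height(x->right)) + 1;
    y->height = maxOf(height(y->left), height(y->right)) + 1;
    return y;
}

// The double rotations simply combine the two single rotations:
//   LR: node->left  = leftRotate(node->left);   node = rightRotate(node);
//   RL: node->right = rightRotate(node->right); node = leftRotate(node);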

Operations in AVL Tree

The main operations on an AVL tree include insertion, deletion, and search. After each insertion or
deletion, the tree may become unbalanced, requiring rebalancing through rotations.

Insertion in AVL Tree

Insertion is similar to insertion in a regular binary search tree, but after every insertion, the tree
needs to be rebalanced.

Steps for Insertion:

1.​ Insert the new node as in a regular BST.​

2.​ Update the height of each ancestor node.​

3.​ Calculate the balance factor for each ancestor node.​

4.​ If the balance factor of any node is outside the range [-1, 1], perform rotations to restore
balance.​

Example of Insertion:

Operations on AVL Tree Data Structure


1. Insertion
Insertion in an AVL tree involves adding a new key while ensuring the tree remains
balanced. After inserting a new node, the balance factor of each node is checked,
and rotations are performed if necessary to maintain the AVL property.

Algorithm:
●​ Perform a standard BST insertion.
●​ Update the height of the current node.
●​ Calculate the balance factor of the current node.
●​ Perform rotations if the node becomes unbalanced.

Example:


2. Deletion in AVL Tree
Deletion in an AVL tree follows similar steps as in a regular BST deletion. However, after deletion,
we need to check the balance factor and perform rotations if necessary.

Steps for Deletion:

1.​ Delete the node as in a regular BST.​

2.​ Update the heights of the ancestors of the deleted node.​

3.​ Calculate the balance factor of each ancestor node.​

4.​ If the balance factor of any node is outside the range [-1, 1], perform the appropriate rotation.

Algorithm:
●​ Perform a standard BST deletion.
●​ Update the height of the current node.
●​ Calculate the balance factor of the current node.
●​ Perform rotations if the node becomes unbalanced.

Example:
Delete key 20 from the AVL tree:
3. Searching
Searching in an AVL tree is similar to searching in a binary search tree (BST). Since AVL
trees are balanced, searching is efficient with a time complexity of O(log n).

Algorithm:
●​ Start from the root node.
●​ Compare the target key with the key of the current node.
●​ If the target key is equal to the current node's key, return the node.
●​ If the target key is less than the current node's key, move to the left child.
●​ If the target key is greater than the current node's key, move to the right child.
●​ Repeat steps 2-5 until the target key is found or the leaf node is reached.

Example:
Search for key 20 in the AVL tree:

Now the tree is balanced, with all balance factors within the range [-1, 1].

Graphs - Graph ADT


Graph Traversal Algorithms
Graph traversal algorithms are used to visit all the vertices and edges in a graph. The two
most common traversal methods are Depth-First Search (DFS) and Breadth-First Search
(BFS). Each method has its own algorithm and applications.

1. Depth-First Search (DFS)


Depth First Search (DFS) is an algorithm for visiting all the vertices of a graph
or tree data structure. It traverses the graph in a depthward motion, going as
deep as possible along each branch, and uses a stack (or recursion) to
remember where to resume the search when a dead end is reached.

Algorithm:
●​ Start at the root (or any arbitrary node) and mark it as visited.
●​ Explore each adjacent node recursively that has not been visited yet.
●​ Repeat the process until all nodes are visited.

Steps:
●​ Initialize a stack and push the starting node onto the stack.
●​ While the stack is not empty:
●​ Pop a node from the stack.
●​ If the node has not been visited:
●​ Mark the node as visited.
●​ Push all adjacent unvisited nodes onto the stack.
Example:
Consider the following graph:

DFS Traversal from A:


●​ Start at A.
●​ Visit A, push B and C to the stack.
●​ Pop C, visit C, push E to the stack.
●​ Pop E, visit E.
●​ Pop B, visit B, push D to the stack.
●​ Pop D, visit D.

Traversal Order: A, C, E, B, D
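
An iterative, stack-based DFS matching the steps above. A C++ sketch using an adjacency list; the vertices A–E are mapped to indices 0–4, and the edge set (A-B, A-C, B-D, C-E) is assumed from the worked example:

#include <iostream>
#include <vector>
#include <stack>

void dfs(const std::vector<std::vector<int> >& adj, int start) {
    std::vector<bool> visited(adj.size(), false);
    std::stack<int> st;
    st.push(start);
    while (!st.empty()) {
        int v = st.top();
        st.pop();
        if (!visited[v]) {
            visited[v] = true;
            std::cout << char('A' + v) << " ";        // visit the vertex
            for (size_t i = 0; i < adj[v].size(); i++)
                if (!visited[adj[v][i]])
                    st.push(adj[v][i]);               // push unvisited neighbours
        }
    }
}

int main() {
    // 0=A, 1=B, 2=C, 3=D, 4=E ; undirected edges A-B, A-C, B-D, C-E
    std::vector<std::vector<int> > adj(5);
    adj[0].push_back(1); adj[0].push_back(2);
    adj[1].push_back(0); adj[1].push_back(3);
    adj[2].push_back(0); adj[2].push_back(4);
    adj[3].push_back(1);
    adj[4].push_back(2);
    dfs(adj, 0);          // prints: A C E B D (the order depends on the push order)
    std::cout << std::endl;
    return 0;
}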

2. Breadth-First Search (BFS)

Breadth First Search (BFS) Algorithm


Breadth First Search (BFS) traverses a graph in a breadthward motion: it
visits all the neighbours of a vertex before moving deeper into the graph. It
uses a queue to remember which vertex to explore next.

The BFS algorithm starts at the source vertex (or tree root) and explores all
nodes at the present depth before moving on to the nodes at the next
depth level.
Algorithm:
●​ Start at the root (or any arbitrary node) and mark it as visited.
●​ Explore each adjacent node level by level.
●​ Repeat the process until all nodes are visited.

Steps:
●​ Initialize a queue and enqueue the starting node.
●​ While the queue is not empty:
●​ Dequeue a node from the queue.
●​ If the node has not been visited:
●​ Mark the node as visited.
●​ Enqueue all adjacent unvisited nodes.

Example:
Consider the same graph:

BFS Traversal from A:


●​ Start at A.
●​ Visit A, enqueue B and C.
●​ Dequeue B, visit B, enqueue D.
●​ Dequeue C, visit C, enqueue E.
●​ Dequeue D, visit D.
●​ Dequeue E, visit E.

Traversal Order: A, B, C, D, E
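
A queue-based BFS matching the steps above, on the same assumed graph as the DFS sketch:

#include <iostream>
#include <vector>
#include <queue>

void bfs(const std::vector<std::vector<int> >& adj, int start) {
    std::vector<bool> visited(adj.size(), false);
    std::queue<int> q;
    visited[start] = true;
    q.push(start);
    while (!q.empty()) {
        int v = q.front();
        q.pop();
        std::cout << char('A' + v) << " ";             // visit the vertex
        for (size_t i = 0; i < adj[v].size(); i++) {
            int next = adj[v][i];
            if (!visited[next]) {
                visited[next] = true;                  // mark when enqueued to avoid duplicates
                q.push(next);
            }
        }
    }
}

int main() {
    // Same graph as the DFS sketch: 0=A, 1=B, 2=C, 3=D, 4=E
    std::vector<std::vector<int> > adj(5);
    adj[0].push_back(1); adj[0].push_back(2);
    adj[1].push_back(0); adj[1].push_back(3);
    adj[2].push_back(0); adj[2].push_back(4);
    adj[3].push_back(1);
    adj[4].push_back(2);
    bfs(adj, 0);          // prints: A B C D E
    std::cout << std::endl;
    return 0;
}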
