Data Structures

The document provides an overview of linked lists, detailing the differences between singly and doubly linked lists, and explains stack operations using linked lists. It also discusses applications of queues, operations on doubly linked lists, and procedures for sorting nodes in a singly linked list. Additionally, it covers common operations for singly, doubly, and circular linked lists, as well as the differences between stack and queue data structures.


UNIT-1

1. What is a Linked list? Specify the difference between Single and Double Linked list.
Linked list:

A linked list is a linear data structure consisting of a series of connected nodes. The nodes may be stored anywhere in memory; they need not be contiguous. A node in the linked list contains two parts: the data part and the address part. The last node of the list contains a pointer to NULL. After the array, the linked list is the second most used data structure. In a linked list, every node contains a link to the next node.

Difference between Single and Double Linked list:

What is a singly linked list?


A singly linked list is often simply called a linked list. It is a list that consists of a collection of nodes, and each node has two parts: one part is the data, and the other part is the address of the next node. A singly linked list can also be called a chain, as each node refers to the next node through its address part. We can perform various operations on a singly linked list, such as insertion, deletion, and traversal.

What is a doubly-linked list?

A doubly linked list is another type of linked list. It is called a doubly linked list because each node contains two addresses, while a singly linked list node contains a single address. Each node has three parts in total: one data part and two pointers, i.e., previous and next. The previous pointer holds the address of the previous node, and the next pointer holds the address of the next node. Therefore, the list has two references, forward and backward, allowing traversal in either direction.

2.Explain about the implementation of Stack Operations using Linked list?

Stack using Linked List


A stack, as we know, is a Last In First Out (LIFO) data structure. It supports the following
operations:

 push: push an element into the stack

 pop: remove the last element added

 top: returns the element at top of stack


Implementation of Stack using Linked List

Stacks can be easily implemented using a linked list. A stack is a data structure to
which data can be added using the push() method and from which data can be removed
using the pop() method. With a linked list, the push operation can be implemented by
the addAtFront() method of the linked list, and the pop operation by a
function that deletes the front node of the linked list.

In this way our linked list will virtually become a stack with push() and pop() methods.

First, we create a class node. This is our linked-list node class, which holds the data
and a node pointer to store the address of the next node element.

class node
{
public:

int data;

node *next;

};

Then we define our stack class,


class Stack
{
node *front; // points to the head of the list

public:

Stack()
{
front = NULL;
}

// push method to add a data element
void push(int);

// pop method to remove a data element
void pop();

// top method to return the top data element
int top();

};

Inserting Data in Stack (Linked List)

In order to insert an element into the stack, we will create a node and place it in front
of the list.

void Stack :: push(int d)
{
// creating a new node
node *temp;
temp = new node();

// setting data to it
temp->data = d;

// add the node in front of the list
if(front == NULL)
temp->next = NULL;
else
temp->next = front;

front = temp;
}

Now, whenever we call the push() function, a new node is added to our list
at the front, which is exactly how a stack behaves.

Removing Element from Stack (Linked List)

In order to do this, we will simply delete the first node and make the second node
the head of the list.
void Stack :: pop()
{
// if the stack is empty
if(front == NULL)
cout << "UNDERFLOW\n";

// delete the first element
else
{
node *temp = front;
front = front->next;
delete(temp);
}
}

Return Top of Stack (Linked List)

In this, we simply return the data stored in the head of the list.

int Stack :: top()
{
return front->data;
}

3.Explain any one application of queues with an example.

Application of Queue in Data Structure :


The major applications of queues in data structures are discussed in detail
in the points below:

 Application of queues in Job Scheduling:
In job scheduling, each job is represented as a process and added to a
queue. The first job in the queue is processed first, and the next job is
taken up once the current one completes.
 Application of queues in Buffers:
Queues are used as buffers to store incoming data before
processing. For instance, in networking, a queue holds incoming packets
before forwarding them to the next destination.
 Application of queues in Printer Spooling:
A printer spooler handles print requests from multiple users. The
requests are stored in a queue and processed in the order in which they
were received.
 Application of queues in Call Centre Routing:
Call centres use queues to manage incoming calls. Callers are added to
a queue and answered in the order they arrive. This is one of the
essential applications of queues in data structures.

4.Explain the Operations of a Doubly Linked List.

Operations of Doubly Linked List with Implementation



A Doubly Linked List (DLL) contains an extra pointer, typically called the
previous pointer, together with the next pointer and data which are there in a
singly linked list.

Below are operations on the given DLL:

1. Add a node at the front of DLL: The new node is always added before
the head of the given linked list, and the newly added node becomes
the new head of the DLL. A global variable may be maintained to count
the total number of nodes at any time.

2. Traversal of a Doubly linked list


3. Insertion of a node: This can be done in three ways:
 At the beginning: The newly created node is inserted before
the head node, and head points to the new node.
 At the end: The newly created node is inserted at the end of the
list, and tail points to the new node.
 At a given position: Traverse the given DLL to that
position (let the node be X), then do the following:
1. Change the next pointer of the new node to the next
pointer of node X.
2. Change the prev pointer of the next node of node X to
the new node.
3. Change the next pointer of node X to the new node.
4. Change the prev pointer of the new node to node X.
4. Deletion of a node: This can be done in three ways:
 At the beginning: Move head to the next node to delete the
node at the beginning, and set the previous pointer of the new
head to NULL.
 At the end: Move tail to the previous node to delete the node
at the end, and set the next pointer of the new tail to NULL.
 At a given position: Let the previous node of the node at position
pos be node X and the next node be node Y, then do the
following:
1. Change the next pointer of node X to node Y.
2. Change the previous pointer of node Y to node X.
5.Explain the Procedure to sort the nodes in a Single Linked list

Program to sort the elements of the singly linked list

Explanation

In this program, we need to sort the nodes of the given singly linked list in ascending
order.

Original list: 9 -> 7 -> 2 -> 5 -> 4

Sorted list: 2 -> 4 -> 5 -> 7 -> 9
To accomplish this task, we maintain two pointers: current and index. Initially, current
points to the head node and index points to the node next to current. Traverse the
list until current points to NULL, comparing current's data with index's data. If current's
data is greater than index's data, swap the data between them. In the above
example, current will initially point to 9 and index will point to 7. Since 9 is greater
than 7, the data are swapped. Continue this process until the entire list is sorted in
ascending order.

Program:

#include <stdio.h>
#include <stdlib.h>

//Represent a node of the singly linked list
struct node{
    int data;
    struct node *next;
};

//Represent the head and tail of the singly linked list
struct node *head = NULL, *tail = NULL;

//addNode() will add a new node to the list
void addNode(int data) {
    //Create a new node
    struct node *newNode = (struct node*)malloc(sizeof(struct node));
    newNode->data = data;
    newNode->next = NULL;

    //Checks if the list is empty
    if(head == NULL) {
        //If list is empty, both head and tail will point to new node
        head = newNode;
        tail = newNode;
    }
    else {
        //newNode will be added after tail such that tail's next will point to newNode
        tail->next = newNode;
        //newNode will become new tail of the list
        tail = newNode;
    }
}

//sortList() will sort nodes of the list in ascending order
void sortList() {
    //Node current will point to head
    struct node *current = head, *index = NULL;
    int temp;

    if(head == NULL) {
        return;
    }
    else {
        while(current != NULL) {
            //Node index will point to node next to current
            index = current->next;

            while(index != NULL) {
                //If current node's data is greater than index's node data, swap the data between them
                if(current->data > index->data) {
                    temp = current->data;
                    current->data = index->data;
                    index->data = temp;
                }
                index = index->next;
            }
            current = current->next;
        }
    }
}

//display() will display all the nodes present in the list
void display() {
    //Node current will point to head
    struct node *current = head;
    if(head == NULL) {
        printf("List is empty \n");
        return;
    }
    while(current != NULL) {
        //Prints each node by incrementing pointer
        printf("%d ", current->data);
        current = current->next;
    }
    printf("\n");
}

int main()
{
    //Adds data to the list
    addNode(9);
    addNode(7);
    addNode(2);
    addNode(5);
    addNode(4);

    //Displaying original list
    printf("Original list: \n");
    display();

    //Sorting list
    sortList();

    //Displaying sorted list
    printf("Sorted list: \n");
    display();

    return 0;
}

Output:

Original list:
9 7 2 5 4
Sorted list:
2 4 5 7 9

Additional Questions:
1.Algorithm Implementation using Linked Lists.

What is a Linked List?


A Linked List is a linear data structure that looks like a chain of nodes, where each
node is a separate element. Unlike arrays, linked-list elements are not stored at
contiguous locations.
It is basically a chain of nodes; each node contains data and
a pointer to the next node in the chain. A linked list has a head pointer,
which points to the first element of the linked list, and if the list is empty it
simply points to NULL.

1. Singly-linked list
Traversal of items can be done in the forward direction only due to the linking of
every node to its next node.

Singly Linked List

// A singly linked list node

struct Node {

int data;

struct Node* next;

};

Commonly used operations on Singly Linked List:


The following operations are performed on a Single Linked List
 Insertion: The insertion operation can be performed in three ways. They are as
follows…
 Inserting At the Beginning of the list
 Inserting At End of the list
 Inserting At Specific location in the list
 Deletion: The deletion operation can be performed in three ways. They are as
follows…
 Deleting from the Beginning of the list
 Deleting from the End of the list
 Deleting a Specific Node
 Search: It is a process of determining and retrieving a specific node either from
the front, the end or anywhere in the list.
 Display: This process displays the elements of a Single-linked list.

2. Doubly linked list


Traversal of items can be done in both forward and backward directions as every
node contains an additional prev pointer that points to the previous node.

/* Node of a doubly linked list */

struct Node {

int data;

struct Node* next; // Pointer to next node in DLL

struct Node* prev; // Pointer to previous node in DLL

};

Commonly used operations on Double-Linked List:


In a double-linked list, we perform the following operations…
 Insertion: The insertion operation can be performed in three ways as
follows:
 Inserting At the Beginning of the list
 Inserting after a given node.
 Inserting at the end.
 Inserting before a given node
 Deletion: The deletion operation can be performed in three ways as
follows…
 Deleting from the Beginning of the list
 Deleting from the End of the list
 Deleting a Specific Node
 Display: This process displays the elements of a double-linked list

3. Circular linked lists


A circular linked list is a type of linked list in which the first and the last nodes are
also connected to each other to form a circle, there is no NULL at the end.

Commonly used operations on Circular Linked List:


The following operations are performed on a Circular Linked List
 Insertion: The insertion operation can be performed in three ways:
 Insertion in an empty list
 Insertion at the beginning of the list
 Insertion at the end of the list
 Insertion in between the nodes
 Deletion: The deletion operation can be performed in three ways:
 Deleting from the Beginning of the list
 Deleting from the End of the list
 Deleting a Specific Node
 Display: This process displays the elements of a Circular linked list.

2.Stacks and Queues

Difference between Stack and Queue Data Structures

Stack: A stack is a linear data structure in which elements can be inserted and
deleted only from one side of the list, called the top. A stack follows the LIFO (Last
In First Out) principle, i.e., the element inserted at the last is the first element to
come out. The insertion of an element into the stack is called push operation, and
the deletion of an element from the stack is called pop operation. In stack, we
always keep track of the last element present in the list with a pointer called top.
The diagrammatic representation of the stack is given below:

Queue is a linear data structure in which elements can be inserted only from one
side of the list called rear, and the elements can be deleted only from the other
side called the front. The queue data structure follows the FIFO (First In First
Out) principle, i.e. the element inserted at first in the list, is the first element to be
removed from the list. The insertion of an element in a queue is called
an enqueue operation and the deletion of an element is called
a dequeue operation. In queue, we always maintain two pointers, one pointing
to the element which was inserted at the first and still present in the list with
the front pointer and the second pointer pointing to the element inserted at the
last with the rear pointer.
The diagrammatic representation of the queue is given below:

The differences between Stack and Queue data structures are as follows:

1. A stack is a data structure that stores a collection of elements, with operations
to push (add) and pop (remove) elements from the top of the stack. A queue is a
data structure that stores a collection of elements, with operations to enqueue
(add) elements at the back of the queue and dequeue (remove) elements from the
front of the queue.

2. Stacks are based on the LIFO principle, i.e., the element inserted last is the
first element to come out of the list. Queues are based on the FIFO principle,
i.e., the element inserted first is the first element to come out of the list.

3. Stacks are often used for tasks that require backtracking, such as parsing
expressions or implementing undo functionality. Queues are often used for tasks
that involve processing elements in a specific order, such as handling requests
or scheduling tasks.

4. Insertion and deletion in stacks take place only from one end of the list,
called the top. Insertion and deletion in queues take place from opposite ends of
the list: insertion takes place at the rear and deletion takes place from the front.

5. In a stack, the insert operation is called push. In a queue, the insert
operation is called enqueue.

6. In a stack, the delete operation is called pop. In a queue, the delete
operation is called dequeue.

7. Both stacks and queues can be implemented using an array or a linked-list
data structure.

8. In stacks we maintain only one pointer to access the list, called the top,
which always points to the last element present in the list. In queues we
maintain two pointers: the front pointer always points to the first element
inserted and still present, and the rear pointer always points to the last
inserted element.

9. Stacks are used in solving problems based on recursion. Queues are used in
solving problems requiring sequential processing.

10. Stacks are often used for recursive algorithms or for maintaining a history
of function calls. Queues are often used in multithreaded applications, where
tasks are added to a queue and executed by a pool of worker threads.

11. A stack does not have further types. A queue is of three types: 1. circular
queue, 2. priority queue, 3. double-ended queue.

12. A stack can be visualized as a vertical collection; a queue can be
visualized as a horizontal collection.

13. Examples of stack-based languages include PostScript and Forth. Examples of
queue-based algorithms include Breadth-First Search (BFS) and printing a binary
tree level by level.

1. https://www.simplilearn.com/tutorials/data-structure-tutorial/stacks-and-queues

2. https://www.pvpsiddhartha.ac.in/dep_it/lecture%20notes/CDS/unit3.pdf

UNIT - 2:

1. What is a Binary Tree? Explain its insertion and Deletion Operations?

A binary tree is a tree-type non-linear data structure with a maximum of two children for each
parent. Every node in a binary tree has a left and right reference along with the data element. The
node at the top of the hierarchy of a tree is called the root node. The nodes that hold other sub-nodes
are the parent nodes.
A parent node has two child nodes: the left child and right child. Hashing, routing data for network
traffic, data compression, preparing binary heaps, and binary search trees are some of the
applications that use a binary tree.

Terminologies associated with Binary Trees and Types of Binary Trees

 Node: The fundamental unit of a tree; it holds a data element and links to its child nodes.


 Root: A tree’s topmost node.
 Parent: Each node (apart from the root) in a tree that has at least one sub-node of its
own is called a parent node.
 Child: A node that straightway came from a parent node when moving away from the
root is the child node.
 Leaf Node: These are external nodes. They are the nodes that have no child.
 Internal Node: As the name suggests, these are inner nodes with at least one child.
 Depth of a Node: The number of edges from the node to the tree’s root.
 Height of a Tree: The number of edges from the root to the deepest leaf. The tree
height is the same as the height of the root node.

Types of Binary Tree


1. Full Binary Tree

A full Binary tree is a special type of binary tree in which every parent node/internal node has either

two or no children.

2. Perfect Binary Tree

A perfect binary tree is a type of binary tree in which every internal node has exactly two child nodes

and all the leaf nodes are at the same level.


3. Complete Binary Tree

A complete binary tree is just like a full binary tree, but with a few major differences:

1. Every level must be completely filled

2. All the leaf elements must lean towards the left.

3. The last leaf element might not have a right sibling i.e. a complete binary tree doesn't have

to be a full binary tree.



4. Degenerate or Pathological Tree

A degenerate or pathological tree is a tree in which every node has a single child, either left or right.

5. Skewed Binary Tree

A skewed binary tree is a pathological/degenerate tree in which the tree is either dominated by the left

nodes or the right nodes. Thus, there are two types of skewed binary tree: left-skewed binary

tree and right-skewed binary tree.


6. Balanced Binary Tree

It is a type of binary tree in which the difference between the height of the left and the right subtree

for each node is either 0 or 1.

Insertion and deletion operations:


Insertion Operation-

Insertion Operation is performed to insert an element in the Binary Search Tree.

Rules-

The insertion of a new key always takes place as the child of some leaf node.
To find a suitable leaf node,
 Search for the key to be inserted from the root node until some leaf node is reached.
 Once a leaf node is reached, insert the key as a child of that leaf node.

Example-

Consider the following example where key = 40 is inserted in the given BST-
 We start searching for value 40 from the root node 100.
 As 40 < 100, so we search in 100’s left subtree.
 As 40 > 20, so we search in 20’s right subtree.
 As 40 > 30, so we add 40 to 30’s right subtree.

Deletion Operation-

Deletion Operation is performed to delete a particular element from the Binary Search Tree.

When it comes to deleting a node from the binary search tree, following three cases are
possible.
Case-01: Deletion Of A Node Having No Child (Leaf Node)-

Just remove/disconnect the leaf node that is to be deleted from the tree.

Example-

Consider the following example where node with value = 20 is deleted from the BST-
Case-02: Deletion Of A Node Having Only One Child-

Just make the child of the node being deleted a child of that node's parent (i.e., the child's grandparent).

Example-

Consider the following example where node with value = 30 is deleted from the BST-

Case-03: Deletion Of A Node Having Two Children-

A node with two children may be deleted from the BST in the following two ways-

Method-01:

 Visit the right subtree of the node being deleted.

 Pick the smallest-valued element, called the inorder successor.
 Replace the element being deleted with its inorder successor.
Example-

Consider the following example where node with value = 15 is deleted from the BST-

Method-02:

 Visit the left subtree of the node being deleted.

 Pick the greatest-valued element, called the inorder predecessor.
 Replace the element being deleted with its inorder predecessor.

Example-

Consider the following example where node with value = 15 is deleted from the BST-
2.Discuss any three application areas of Binary search tree

Application of Binary Tree


o Binary trees are applied in data compression methods in the form of the Huffman
coding tree.
o Expression Trees, a binary tree application, are used in compilers.
o Another binary tree application that searches maximum or minimum in O(log N) time
complexity is priority queue.
o Display data that is hierarchical.
o Utilized in spreadsheet and Microsoft Excel editing applications.
o Syntax trees are used by several well-known computer compilers, including GCC and
AOCL, to execute arithmetic operations. They are helpful for indexing segmentation at
the database and storing cache in the system.
o For putting priority queues into action.
o Utilized to provide quick memory allocation in computers (binary search tree) by
finding items more quickly.
o Encoding and decoding operations

3.How can you sort the following elements by using the selection sort technique?
70, 30, 20, 50, 60, 10, 40.

The Selection Sort


The selection sort improves on the bubble sort by making only one exchange for every pass through the list. In
order to do this, a selection sort looks for the largest value as it makes a pass and, after completing the pass,
places it in the proper location. As with a bubble sort, after the first pass the largest item is in the correct place.
After the second pass, the next largest is in place. This process continues and requires n−1 passes to
sort n items, since the final item must be in place after the (n−1)st pass.
Figure 3 shows the entire sorting process. On each pass, the largest remaining item is selected and then placed
in its proper location. The first pass places 93, the second pass places 77, the third places 55, and so on.

4.Implement Quick sort technique on the following 20,6,89,32,65,92,8.

Quick Sort

Let us understand the working of partition and the Quick Sort algorithm with the help of the
following example:
Consider: arr[] = {10, 80, 30, 90, 40}.
 Compare 10 with the pivot; as it is less than the pivot, arrange it accordingly.
5.Implement Insertion sort technique on the following 20,6,89,32,65,92,8.

Insertion Sort Algorithm:


Insertion sort works similarly to the sorting of playing cards in hand. It is assumed that the
first card is already sorted in the card game, and then we select an unsorted card. If the
selected unsorted card is greater than the first card, it is placed to the right;
otherwise, it is placed to the left. Similarly, all unsorted cards are taken and put in
their exact place.

Algorithm

The simple steps of achieving the insertion sort are listed as follows -

Step 1 - If the element is the first element, assume that it is already sorted.

Step 2 - Pick the next element, and store it separately in a key.

Step 3 - Now, compare the key with all elements in the sorted array.

Step 4 - If the element in the sorted array is smaller than the current element, then move to
the next element. Else, shift greater elements in the array towards the right.

Step 5 - Insert the value.

Step 6 - Repeat until the array is sorted.

Working of Insertion sort Algorithm

Now, let's see the working of the insertion sort Algorithm.

To understand the working of the insertion sort algorithm, let's take an unsorted array.
It will be easier to understand the insertion sort via an example.

Let the elements of the array be: 12, 31, 25, 8, 32, 17.

Initially, the first two elements are compared in insertion sort.

Here, 31 is greater than 12. That means both elements are already in ascending
order. So, for now, 12 is stored in a sorted sub-array.

Now, move to the next two elements and compare them.

Here, 25 is smaller than 31, so 31 is not at the correct position. Now, swap 31 with 25.
Along with swapping, insertion sort will also check it with all elements in the sorted
array.

For now, the sorted array has only one element, i.e. 12. So, 25 is greater than 12. Hence,
the sorted array remains sorted after swapping.
Now, two elements in the sorted array are 12 and 25. Move forward to the next
elements that are 31 and 8.

Both 31 and 8 are not sorted. So, swap them.

After swapping, elements 25 and 8 are unsorted.

So, swap them.

Now, elements 12 and 8 are unsorted.

So, swap them too.

Now, the sorted array has three items that are 8, 12 and 25. Move to the next items
that are 31 and 32.

Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.
Move to the next elements that are 32 and 17.

17 is smaller than 32. So, swap them.

Swapping makes 31 and 17 unsorted. So, swap them too.

Now, swapping makes 25 and 17 unsorted, so perform swapping again. After this, the
array is fully sorted: 8, 12, 17, 25, 31, 32.

6.Explain the Merge Sort with suitable Example

Merge sort is defined as a sorting algorithm that works by dividing an array into
smaller subarrays, sorting each subarray, and then merging the sorted subarrays
back together to form the final sorted array.
7. Explain how Postfix can be evaluated.

Postfix Expression Evaluation

A postfix expression is a collection of operators and operands in which the operator is placed after

the operands. That means, in a postfix expression the operator follows the operands. A postfix

expression has the following general structure: operand1 operand2 operator.

Postfix Expression Evaluation using Stack Data Structure

A postfix expression can be evaluated using the Stack data structure. To evaluate a postfix

expression using Stack data structure we can use the following steps...

1. Read all the symbols one by one from left to right in the given postfix expression.

2. If the symbol read is an operand, push it onto the stack.

3. If the symbol read is an operator (+, -, *, /, etc.), perform TWO pop operations

and store the two popped operands in two different variables (operand1 and

operand2). Then apply the operator to operand1 and operand2,

and push the result back onto the stack.

4. Finally, perform a pop operation and display the popped value as the final result.


8.Write the pseudo code for the Depth First Traversal Technique.

DFS (Depth First Search) algorithm

In this article, we will discuss the DFS algorithm in the data structure. It is a recursive algorithm
to search all the vertices of a tree data structure or a graph. The depth-first search (DFS)
algorithm starts with the initial node of graph G and goes deeper until we find the goal node
or the node with no children.

The step by step process to implement the DFS traversal is given as follows -

1. First, create a stack with the total number of vertices in the graph.

2. Now, choose any vertex as the starting point of traversal, and push that vertex onto the stack.

3. After that, push a non-visited vertex (adjacent to the vertex on the top of the stack) onto the top
of the stack.

4. Now, repeat step 3 until no vertices are left to visit from the vertex on the stack's top.

5. If no vertex is left, go back and pop a vertex from the stack.

6. Repeat steps 3, 4, and 5 until the stack is empty.

Algorithm

Step 1: SET STATUS = 1 (ready state) for each node in G

Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)

Step 3: Repeat Steps 4 and 5 until STACK is empty

Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)

Step 5: Push on the stack all the neighbors of N that are in the ready state (whose STATUS = 1) and set
their STATUS = 2 (waiting state)

Step 6: EXIT

Pseudocode:

1. DFS(G,v) ( v is the vertex where the search starts )


2. Stack S := {}; ( start with an empty stack )
3. for each vertex u, set visited[u] := false;
4. push S, v;
5. while (S is not empty) do
6. u := pop S;
7. if (not visited[u]) then
8. visited[u] := true;
9. for each unvisited neighbour w of uu
10. push S, w;
11. end if
12. end while
13. END DFS()
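The pseudocode above can be turned into a runnable sketch. The adjacency-list graph used here is a small hypothetical example, since the full adjacency of the worked example below is not given:

```python
def dfs(graph, start):
    """Iterative DFS following the stack-based pseudocode above."""
    visited = set()
    order = []            # order in which vertices are processed
    stack = [start]
    while stack:
        u = stack.pop()
        if u not in visited:
            visited.add(u)
            order.append(u)
            # push unvisited neighbours; reversed() keeps left-to-right order
            for w in reversed(graph.get(u, [])):
                if w not in visited:
                    stack.append(w)
    return order

graph = {"A": ["B", "D"], "B": ["C"], "C": [], "D": []}
print(dfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```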

Example of DFS algorithm

Now, let's understand the working of the DFS algorithm by using an example. In the example given
below, there is a directed graph having 7 vertices.

Now, let's start examining the graph starting from Node H.

Step 1 - First, push H onto the stack.

1. STACK: H

Step 2 - POP the top element from the stack, i.e., H, and print it. Now, PUSH all the neighbors of H onto
the stack that are in ready state.

1. Print: H
2. STACK: A

Step 3 - POP the top element from the stack, i.e., A, and print it. Now, PUSH all the neighbors of A onto
the stack that are in ready state.

1. Print: A
2. STACK: B, D
Step 4 - POP the top element from the stack, i.e., D, and print it. Now, PUSH all the neighbors of D onto
the stack that are in ready state.

1. Print: D
2. STACK: B, F

Step 5 - POP the top element from the stack, i.e., F, and print it. Now, PUSH all the neighbors of F onto
the stack that are in ready state.

1. Print: F
2. STACK: B

Step 6 - POP the top element from the stack, i.e., B, and print it. Now, PUSH all the neighbors of B onto
the stack that are in ready state.

1. Print: B
2. STACK: C

Step 7 - POP the top element from the stack, i.e., C, and print it. Now, PUSH all the neighbors of C onto
the stack that are in ready state.

1. Print: C
2. STACK: E, G

Step 8 - POP the top element from the stack, i.e., G, and print it. Now, PUSH all the neighbors of G onto
the stack that are in ready state.

1. Print: G
2. STACK: E

Step 9 - POP the top element from the stack, i.e., E, and print it. Now, PUSH all the neighbors of E onto
the stack that are in ready state.

1. Print: E
2. STACK:

Now, all the graph nodes have been traversed, and the stack is empty.

Reference video

https://www.youtube.com/watch?v=pDxbtrVDwSU
9.Create a Binary tree from the following in-order and Pre-order traversal data In-Order data :
g,d,h,b,e,i,a,f,j,c. Pre-Order date: a,b,d,g,h,e,i,c,f,j.


To construct a binary tree from the given in-order and pre-order traversal data, we can follow the below
steps:

1. The first element of the pre-order traversal data is the root of the binary tree. In this case, the root
is ‘a’.
2. Find the index of the root element in the in-order traversal data. In this case, the index of ‘a’ is 6.
3. All the elements to the left of the root element in the in-order traversal data form the left subtree of
the root. Similarly, all the elements to the right of the root element in the in-order traversal data
form the right subtree of the root.
4. Recursively apply the above steps to construct the left and right subtrees of the root.

Using the above steps, we can construct the binary tree from the given in-order and pre-order traversal data
as shown below:
                a
              /   \
             b     c
            / \   /
           d   e f
          / \   \  \
         g   h   i  j

(Here i is the right child of e, j is the right child of f, and c has no right child.)
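The four construction steps above can be sketched recursively in Python. The nested-tuple tree representation `(root, left, right)` is an illustrative choice, not part of the original question:

```python
def build_tree(preorder, inorder):
    """Construct a binary tree from its preorder and inorder traversals.
    Returns nested tuples (root, left, right); None marks an empty subtree."""
    if not preorder:
        return None
    root = preorder[0]                 # step 1: first preorder element is the root
    i = inorder.index(root)            # step 2: locate the root in the inorder list
    # step 3: elements left/right of the root split into the two subtrees
    left = build_tree(preorder[1:i + 1], inorder[:i])
    right = build_tree(preorder[i + 1:], inorder[i + 1:])
    return (root, left, right)         # step 4 happens via the recursive calls

inorder = list("gdhbeiafjc")
preorder = list("abdgheicfj")
tree = build_tree(preorder, inorder)
print(tree[0])  # 'a' is the root
```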

example

Tree traversal (Inorder, Preorder an Postorder)

In this article, we will discuss the tree traversal in the data structure. The term 'tree traversal'
means traversing or visiting each node of a tree. There is a single way to traverse the linear
data structure such as linked list, queue, and stack. Whereas, there are multiple ways to traverse
a tree that are listed as follows -

o Preorder traversal
o Inorder traversal
o Postorder traversal

So, in this article, we will discuss the above-listed techniques of traversing a tree. Now, let's
start discussing the ways of tree traversal.
Preorder traversal

This technique follows the 'root left right' policy. It means that, first root node is visited after
that the left subtree is traversed recursively, and finally, right subtree is recursively traversed.
As the root node is traversed before (or pre) the left and right subtree, it is called preorder
traversal.

So, in a preorder traversal, each node is visited before both of its subtrees.


The applications of preorder traversal include -

o It is used to create a copy of the tree.


o It can also be used to get the prefix expression of an expression tree.

Algorithm

1. Repeat until all nodes of the tree are visited:

2. Step 1 - Visit the root node.
3. Step 2 - Traverse the left subtree recursively.
4. Step 3 - Traverse the right subtree recursively.

Example

Now, let's see the example of the preorder traversal technique.

Now, start applying the preorder traversal on the above tree. First, we traverse the root
node A; after that, move to its left subtree B, which will also be traversed in preorder.

So, for left subtree B, first, the root node B is traversed itself; after that, its left subtree D is
traversed. Since node D does not have any children, move to right subtree E. As node E also
does not have any children, the traversal of the left subtree of root node A is completed.
Now, move towards the right subtree of root node A that is C. So, for right subtree C, first the
root node C has traversed itself; after that, its left subtree F is traversed. Since node F does not
have any children, move to the right subtree G. As node G also does not have any children,
traversal of the right subtree of root node A is completed.

Therefore, all the nodes of the tree are traversed. So, the output of the preorder traversal of
the above tree is -

A→B→D→E→C→F→G


Postorder traversal

This technique follows the 'left-right root' policy. It means that the first left subtree of the root
node is traversed, after that recursively traverses the right subtree, and finally, the root node
is traversed. As the root node is traversed after (or post) the left and right subtree, it is called
postorder traversal.

So, in a postorder traversal, each node is visited after both of its subtrees.

The applications of postorder traversal include -

o It is used to delete the tree.


o It can also be used to get the postfix expression of an expression tree.

Algorithm

1. Repeat until all nodes of the tree are visited:

2. Step 1 - Traverse the left subtree recursively.
3. Step 2 - Traverse the right subtree recursively.
4. Step 3 - Visit the root node.

Example

Now, let's see the example of the postorder traversal technique.


Now, start applying the postorder traversal on the above tree. First, we traverse the left subtree
B that will be traversed in postorder. After that, we will traverse the right subtree C in
postorder. And finally, the root node of the above tree, i.e., A, is traversed.

So, for left subtree B, first, its left subtree D is traversed. Since node D does not have any
children, traverse the right subtree E. As node E also does not have any children, move to the
root node B. After traversing node B, the traversal of the left subtree of root node A is
completed.

Now, move towards the right subtree of root node A that is C. So, for right subtree C, first its
left subtree F is traversed. Since node F does not have any children, traverse the right
subtree G. As node G also does not have any children, therefore, finally, the root node of the
right subtree, i.e., C, is traversed. The traversal of the right subtree of root node A is completed.

At last, traverse the root node of a given tree, i.e., A. After traversing the root node, the
postorder traversal of the given tree is completed.

Therefore, all the nodes of the tree are traversed. So, the output of the postorder traversal of
the above tree is -

D→E→B→F→G→C→A


Inorder traversal

This technique follows the 'left root right' policy. It means that first left subtree is visited after
that root node is traversed, and finally, the right subtree is traversed. As the root node is
traversed between the left and right subtree, it is named inorder traversal

So, in the inorder traversal, each node is visited in between of its subtrees.

The applications of inorder traversal include -

o It is used to get the BST nodes in increasing order.


o It can also be used to get the infix expression of an expression tree.

Algorithm

1. Repeat until all nodes of the tree are visited:

2. Step 1 - Traverse the left subtree recursively.
3. Step 2 - Visit the root node.
4. Step 3 - Traverse the right subtree recursively.

Example

Now, let's see the example of the Inorder traversal technique.

Now, start applying the inorder traversal on the above tree. First, we traverse the left
subtree B that will be traversed in inorder. After that, we will traverse the root node A. And
finally, the right subtree C is traversed in inorder.

So, for left subtree B, first, its left subtree D is traversed. Since node D does not have any
children, so after traversing it, node B will be traversed, and at last, right subtree of node B,
that is E, is traversed. Node E also does not have any children; therefore, the traversal of the
left subtree of root node A is completed.

After that, traverse the root node of a given tree, i.e., A.

At last, move towards the right subtree of root node A that is C. So, for right subtree C; first,
its left subtree F is traversed. Since node F does not have any children, node C will be
traversed, and at last, a right subtree of node C, that is, G, is traversed. Node G also does not
have any children; therefore, the traversal of the right subtree of root node A is completed.

As all the nodes of the tree are traversed, the inorder traversal of the given tree is completed.
The output of the inorder traversal of the above tree is -

D→B→E→A→F→C→G
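The three traversals can be sketched together in Python. The `Node` class and the seven-node tree (A with subtrees B and C, as used in all three examples above) are reconstructed here for illustration:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(node):    # root -> left -> right
    if node is None:
        return []
    return [node.value] + preorder(node.left) + preorder(node.right)

def inorder(node):     # left -> root -> right
    if node is None:
        return []
    return inorder(node.left) + [node.value] + inorder(node.right)

def postorder(node):   # left -> right -> root
    if node is None:
        return []
    return postorder(node.left) + postorder(node.right) + [node.value]

tree = Node("A",
            Node("B", Node("D"), Node("E")),
            Node("C", Node("F"), Node("G")))

print(preorder(tree))   # ['A', 'B', 'D', 'E', 'C', 'F', 'G']
print(inorder(tree))    # ['D', 'B', 'E', 'A', 'F', 'C', 'G']
print(postorder(tree))  # ['D', 'E', 'B', 'F', 'G', 'C', 'A']
```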
ADDITIONAL QUESTIONS
BFS (Breadth First Search)
BFS traversal of a graph produces a spanning tree as final result. Spanning Tree is a graph
without loops. We use Queue data structure with maximum size of total number of vertices
in the graph to implement BFS traversal.
We use the following steps to implement BFS traversal...

Step 1 - Define a Queue of size total number of vertices in the graph.


Step 2 - Select any vertex as starting point for traversal. Visit that vertex and insert it into the
Queue.
Step 3 - Visit all the non-visited adjacent vertices of the vertex which is at front of the Queue
and insert them into the Queue.
Step 4 - When there is no new vertex to be visited from the vertex which is at front of the
Queue then delete that vertex.
Step 5 - Repeat steps 3 and 4 until queue becomes empty.
Step 6 - When queue becomes empty, then produce final spanning tree by removing unused
edges from the graph

Algorithm of BFS :

1. set status of all vertex=1


2. insert the starting vertex in queue and set its status=2
3. while(queue is not empty)
4. remove front vertex in queue and set its status=3
5. if(status of neighbors = 1)
6. then insert neighbors in queue and set its status=2
7. end of while
8. exit
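The queue-based algorithm above can be sketched in Python; the `visited` set plays the role of the status flags (a vertex enters it when its status would become 2). The adjacency-list graph is a hypothetical example:

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal following the queue-based algorithm above."""
    visited = {start}            # status = 2: already inserted into the queue
    order = []
    queue = deque([start])
    while queue:
        u = queue.popleft()      # status = 3: processed
        order.append(u)
        for w in graph.get(u, []):
            if w not in visited:  # status = 1: not yet visited
                visited.add(w)
                queue.append(w)
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```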
Example
(DFS)Depth First Search
Depth-first search is an algorithm for traversing or searching tree or graph data
structures. The algorithm starts at the root node (selecting some arbitrary node as
the root node in the case of a graph) and explores as far as possible along each
branch before backtracking.
Let us understand the working of Depth First Search with the help of the following
illustration:
2. INFIX, PREFIX, AND POSTFIX IN DATA STRUCTURES
UNIT-3
1.How do you represent Hash table? Explain

Hash Table
Hash table is one of the most important data structures that uses a special function known as a hash
function that maps a given value with a key to access the elements faster.

A Hash table is a data structure that stores some information, and the information has basically two
main components, i.e., key and value. The hash table can be implemented with the help of an
associative array. The efficiency of mapping depends upon the efficiency of the hash function used
for mapping.

For example, suppose the key value is John and the value is the phone number, so when we pass the
key value in the hash function shown as below:

Hash(key)= index;

When we pass the key in the hash function, then it gives the index.

Hash(john) = 3;

The above example adds the john at the index 3.

Drawback of Hash function

A Hash function assigns each value with a unique key. Sometimes hash table uses an imperfect hash
function that causes a collision because the hash function generates the same key of two different
values.

Hashing
Hashing is one of the searching techniques that uses a constant time. The time complexity in hashing
is O(1). Till now, we read the two techniques for searching, i.e., linear search and binary search. The
worst time complexity in linear search is O(n), and O(logn) in binary search. In both the searching
techniques, the searching depends upon the number of elements but we want the technique that takes a
constant time. So, hashing technique came that provides a constant time.

In Hashing technique, the hash table and hash function are used. Using the hash function, we can
calculate the address at which the value can be stored.

The main idea behind the hashing is to create the (key/value) pairs. If the key is given, then the
algorithm computes the index at which the value would be stored. It can be written as:

Index = hash(key)
There are three ways of calculating the hash function:

o Division method
o Folding method
o Mid square method

In the division method, the hash function can be defined as:

h(ki) = ki % m;

where m is the size of the hash table.

For example, if the key value is 6 and the size of the hash table is 10. When we apply the hash
function to key 6 then the index would be:

h(6) = 6%10 = 6

The index is 6 at which the value is stored.
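The key/index mapping just described can be sketched as a small chained hash table. The class layout and the stored value are illustrative assumptions; only the division hash `key % 10` and the key 6 come from the example above:

```python
class HashTable:
    """Minimal hash table using the division method h(k) = k % m,
    with chaining to resolve collisions."""

    def __init__(self, size=10):
        self.size = size
        self.slots = [[] for _ in range(size)]  # one bucket (chain) per index

    def hash(self, key):
        return key % self.size

    def put(self, key, value):
        bucket = self.slots[self.hash(key)]
        for pair in bucket:
            if pair[0] == key:       # key already present: replace its value
                pair[1] = value
                return
        bucket.append([key, value])  # otherwise append to the chain

    def get(self, key):
        for k, v in self.slots[self.hash(key)]:
            if k == key:
                return v
        return None

table = HashTable(10)
table.put(6, "john")
print(table.get(6))  # john, stored at index 6 since 6 % 10 == 6
```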

Reference video

https://www.youtube.com/watch?v=W5q0xgxmRd8
2. Explain double hashing with an Example.

Double Hashing
 
Double hashing is a collision resolution technique used in hash tables. It works
by using two hash functions to compute two different hash values for a given key.
The first hash function is used to compute the initial hash value, and the second
hash function is used to compute the step size for the probing sequence.
Double hashing has the ability to have a low collision rate, as it uses two hash
functions to compute the hash value and the step size. This means that the
probability of a collision occurring is lower than in other collision resolution
techniques such as linear probing or quadratic probing.
However, double hashing has a few drawbacks. First, it requires the use of two
hash functions, which can increase the computational complexity of the insertion
and search operations. Second, it requires a good choice of hash functions to
achieve good performance. If the hash functions are not well-designed, the
collision rate may still be high.
Advantages of Double hashing
 The advantage of Double hashing is that it is one of the best forms of
probing, producing a uniform distribution of records throughout a hash
table.
 This technique does not yield any clusters.
 It is one of the effective methods for resolving collisions.
Double hashing can be done using :
(hash1(key) + i * hash2(key)) % TABLE_SIZE
Here hash1() and hash2() are hash functions and TABLE_SIZE
is size of hash table.
(We repeat by increasing i when collision occurs)
Method 1: First hash function is typically hash1(key) = key % TABLE_SIZE
A popular second hash function is hash2(key) = PRIME – (key %
PRIME) where PRIME is a prime smaller than the TABLE_SIZE.
A good second Hash function is:
 It must never evaluate to zero
 Just make sure that all cells can be probed
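The probing formula above can be sketched as a small open-addressing insert. `TABLE_SIZE = 7` and `PRIME = 5` are illustrative choices (a prime smaller than the table size, as recommended above), and the keys are arbitrary:

```python
TABLE_SIZE = 7
PRIME = 5  # a prime smaller than TABLE_SIZE

def hash1(key):
    return key % TABLE_SIZE

def hash2(key):
    return PRIME - (key % PRIME)   # in 1..PRIME, so it never evaluates to zero

def insert(table, key):
    """Probe (hash1(key) + i * hash2(key)) % TABLE_SIZE, increasing i on
    each collision, until a free slot is found."""
    for i in range(TABLE_SIZE):
        index = (hash1(key) + i * hash2(key)) % TABLE_SIZE
        if table[index] is None:
            table[index] = key
            return index
    raise RuntimeError("hash table is full")

table = [None] * TABLE_SIZE
for key in (15, 22, 8):       # 15 -> slot 1; 22 and 8 collide there and probe on
    insert(table, key)
print(table)  # [None, 15, None, 8, 22, None, None]
```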
3. What do you mean by a Hash table and the hash function? Explain the following hash

functions with an Example.


(i) Division Method (ii) Mid Square (iii) Digit Analysis.

Hash Tables
A hash table is a data structure that maps keys to values. It uses a hash function to
calculate the index for the data key and the key is stored in the index.

An example of a hash table is as follows −

The key sequence that needs to be stored in the hash table is −

35 50 11 79 76 85

The hash function h(k) used is:


h(k) = k mod 10

Using linear probing, the values are stored in the hash table as −

Hash Functions
The following are some of the Hash Functions −

Division Method

This is the easiest method to create a hash function. The hash function can be
described as −

h(k) = k mod n

Here, h(k) is the hash value obtained by dividing the key value k by size of
hash table n using the remainder. It is best that n is a prime number as that
makes sure the keys are distributed with more uniformity.

An example of the Division Method is as follows −


k=1276
n=10
h(1276) = 1276 mod 10
= 6

The hash value obtained is 6

A disadvantage of the division method is that consecutive keys map to
consecutive hash values in the hash table. This leads to poor performance.

Multiplication Method

The hash function used for the multiplication method is −

h(k) = floor( n( kA mod 1 ) )

Here, k is the key and A can be any constant value between 0 and 1. Both k
and A are multiplied and their fractional part is separated. This is then
multiplied with n to get the hash value.

An example of the Multiplication Method is as follows −

k=123
n=100
A=0.618033
h(123) = 100 (123 * 0.618033 mod 1)
= 100 (76.018059 mod 1)
= 100 (0.018059)
= 1

The hash value obtained is 1

An advantage of the multiplication method is that it can work with any value
of A, although some values are believed to be better than others.

Mid Square Method

The mid square method is a very good hash function. It involves squaring the
value of the key and then extracting the middle r digits as the hash value. The
value of r can be decided according to the size of the hash table.

An example of the Mid Square Method is as follows −


Suppose the hash table has 100 memory locations. So r=2 because two digits
are required to map the key to memory location.

k = 50
k*k = 2500
h(50) = 50

The hash value obtained is 50
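The three worked examples above can be reproduced with short Python functions. The middle-digit extraction in `mid_square_hash` is one reasonable reading of "extract the middle r digits"; other conventions exist:

```python
import math

def division_hash(k, n):
    """Division method: h(k) = k mod n."""
    return k % n

def multiplication_hash(k, n, A=0.618033):
    """Multiplication method: h(k) = floor(n * (k*A mod 1))."""
    return math.floor(n * ((k * A) % 1))

def mid_square_hash(k, r=2):
    """Mid square method: square k and take the middle r digits."""
    square = str(k * k)
    mid = len(square) // 2
    start = max(mid - r // 2, 0)
    return int(square[start:start + r])

print(division_hash(1276, 10))        # 6
print(multiplication_hash(123, 100))  # 1
print(mid_square_hash(50))            # 50  (50*50 = 2500, middle digits "50")
```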

Reference:
Hash Functions and Hash Tables (tutorialspoint.com)

4. What is a priority Queue ADT & Explain the Insertion and Deletion Operations on a
priority Queue with Examples.

What is a priority queue?

A priority queue is an abstract data type that behaves similarly to the normal queue
except that each element has some priority, i.e., the element with the highest priority
would come first in a priority queue. The priority of the elements in a priority queue
will determine the order in which elements are removed from the priority queue.

The priority queue supports only comparable elements, which means that the
elements are either arranged in an ascending or descending order.

For example, suppose we have some values like 1, 3, 4, 8, 14, 22 inserted in a priority
queue with an ordering imposed on the values is from least to the greatest. Therefore,
the 1 number would be having the highest priority while 22 will be having the lowest
priority.

Types of Priority Queue

There are two types of priority queue:

o Ascending order priority queue: In ascending order priority queue, a lower priority
number is given as a higher priority in a priority. For example, we take the numbers
from 1 to 5 arranged in an ascending order like 1,2,3,4,5; therefore, the smallest
number, i.e., 1 is given as the highest priority in a priority queue.

o Descending order priority queue: In descending order priority queue, a higher


priority number is given as a higher priority in a priority. For example, we take the
numbers from 1 to 5 arranged in descending order like 5, 4, 3, 2, 1; therefore, the largest
number, i.e., 5 is given as the highest priority in a priority queue.

Operations of a Priority Queue:


A typical priority queue supports the following operations:
1)Insertion in a Priority Queue:
When a new element is inserted in a priority queue, it moves to the empty slot
from top to bottom and left to right. However, if the element is not in the correct
place then it will be compared with the parent node. If the element is not in the
correct order, the elements are swapped. The swapping process continues until all
the elements are placed in the correct position.
2)Deletion in a Priority Queue :
As you know that in a max heap, the maximum element is the root node. And it will
remove the element which has maximum priority first. Thus, you remove the root
node from the queue. This removal creates an empty slot, which will be further
filled with new insertion. Then, it compares the newly inserted element with all the
elements inside the queue to maintain the heap invariant.
3) Peek in a Priority Queue:
This operation helps to return the maximum element from Max Heap or the
minimum element from Min Heap without deleting the node from the priority
queue.
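The three operations can be demonstrated with Python's `heapq` module, which implements a min heap and so gives an ascending-order priority queue directly. The values are the 1, 3, 4, 8, 14, 22 example from above, pushed in an arbitrary order:

```python
import heapq

pq = []
for value in (14, 3, 22, 1, 8, 4):
    heapq.heappush(pq, value)    # insertion: sift up restores the heap property

print(pq[0])              # peek: 1 has the highest priority; nothing is removed
print(heapq.heappop(pq))  # deletion: removes and returns 1
print(heapq.heappop(pq))  # next highest priority: 3
```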

5.Define Dictionary and give the application of the dictionary with duplication in which
sequential access is desired

Dictionary Data Structure


Dictionary is one of the important Data Structures that is usually used to store data in
the key-value format. Each element presents in a dictionary data structure
compulsorily have a key and some value is associated with that particular key. In other
words, we can also say that Dictionary data structure is used to store the data in key-
value pairs. Other names for the Dictionary data structure are associative array, map,
symbol table but broadly it is referred to as Dictionary.

A dictionary or associative array is a general-purpose data structure that is used for


the storage of a group of objects.

Many popular languages add Dictionary or associative array as a primitive data type in
their languages while other languages which don't consider Dictionary or associative
array as a primitive data type have included Dictionary or associative array in their
software libraries. A direct form of hardware-level support for the Dictionary or
associative array is Content-addressable memory.

In Dictionary or associative array, the relation or association between the key and the
value is known as the mapping. We can say that each value in the dictionary is mapped
to a particular key present in the dictionary or vice-versa.

The various operations that are performed on a Dictionary or associative array are:

o Add or Insert: In the Add or Insert operation, a new pair of keys and values is added
in the Dictionary or associative array object.
o Replace or reassign: In the Replace or reassign operation, the already existing value
that is associated with a key is changed or modified. In other words, a new value is
mapped to an already existing key.
o Delete or remove: In the Delete or remove operation, the already present element is
unmapped from the Dictionary or associative array object.
o Find or Lookup: In the Find or Lookup operation, the value associated with a key is
searched by passing the key as a search argument.
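The four dictionary operations map directly onto Python's built-in `dict`. The phone-book keys and numbers below are made-up illustrative data:

```python
phone_book = {}

phone_book["john"] = "555-0101"   # Add or Insert: map a new key to a value
phone_book["john"] = "555-0199"   # Replace or reassign: new value, same key
phone_book["mary"] = "555-0123"

number = phone_book.get("john")   # Find or Lookup: search by key
del phone_book["mary"]            # Delete or remove: unmap the key

print(number)      # 555-0199
print(phone_book)  # {'john': '555-0199'}
```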

ADDITIONAL QUES:

1) ADT, List ADT, stack ADT and Queue ADT


What is Abstract Data Type (ADT) :

An Abstract Data Type (ADT) is a programming concept that defines a high-level


view of a data structure, without specifying the implementation details. In other
words, it is a blueprint for creating a data structure that defines the behavior and
interface of the structure, without specifying how it is implemented.

An ADT in the data structure can be thought of as a set of operations that can be
performed on a set of values. This set of operations actually defines the behavior of
the data structure, and they are used to manipulate the data in a way that suits the
needs of the program.

ADTs are often used to abstract away the complexity of a data structure and to
provide a simple and intuitive interface for accessing and manipulating the data.
This makes it easier for programmers to reason about the data structure, and to use
it correctly in their programs.

Examples of abstract data type in data structures are List, Stack, Queue, etc.

Abstract Data Type Model :


List ADT:

Lists are linear data structures that hold data in a non-continuous structure. The list
is made up of data storage containers known as "nodes." These nodes are linked to
one another, which means that each node contains the address of another block. All
of the nodes are thus connected to one another via these links.

Some of the most essential operations defined in List ADT are listed below.

 front(): returns the value of the node present at the front of the list.
 back(): returns the value of the node present at the back of the list.
 push_front(int val): creates a pointer with value = val and keeps this
pointer to the front of the linked list.
 push_back(int val): creates a pointer with value = val and keeps this
pointer to the back of the linked list.
 pop_front(): removes the front node from the list.
 pop_back(): removes the last node from the list.
 empty(): returns true if the list is empty, otherwise returns false.
 size(): returns the number of nodes that are present in the list.

Stack ADT:

A stack is a linear data structure that only allows data to be accessed from the top.
It simply has two operations: push (to insert data to the top of the stack) and pop
(to remove data from the stack). (used to remove data from the stack top).

Some of the most essential operations defined in Stack ADT are listed below.

 top(): returns the value of the node present at the top of the stack.
 push(int val): creates a node with value = val and puts it at the stack top.
 pop(): removes the node from the top of the stack.
 empty(): returns true if the stack is empty, otherwise returns false.
 size(): returns the number of nodes that are present in the stack.
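The Stack ADT operations listed above can be sketched over a Python list; the leading-underscore attribute name is just a convention for the hidden representation:

```python
class Stack:
    def __init__(self):
        self._items = []

    def push(self, val):   # put a new node at the stack top
        self._items.append(val)

    def pop(self):         # remove the node from the stack top
        return self._items.pop()

    def top(self):         # value at the stack top, without removing it
        return self._items[-1]

    def empty(self):
        return len(self._items) == 0

    def size(self):
        return len(self._items)

s = Stack()
s.push(1); s.push(2); s.push(3)
print(s.top())   # 3
print(s.pop())   # 3
print(s.size())  # 2
```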
Queue ADT :

A queue is a linear data structure that allows data to be accessed from both ends.
There are two main operations in the queue: push (this operation inserts data to the
back of the queue) and pop (this operation is used to remove data from the front of
the queue).

Some of the most essential operations defined in Queue ADT are listed below.

 front(): returns the value of the node present at the front of the queue.
 back(): returns the value of the node present at the back of the queue.
 push(int val): creates a node with value = val and puts it at the back of the
queue.
 pop(): removes the node from the front of the queue.
 empty(): returns true if the queue is empty, otherwise returns false.
 size(): returns the number of nodes that are present in the queue.
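The Queue ADT operations can be sketched over `collections.deque`, which gives O(1) operations at both ends:

```python
from collections import deque

class Queue:
    def __init__(self):
        self._items = deque()

    def push(self, val):   # insert at the back of the queue
        self._items.append(val)

    def pop(self):         # remove from the front of the queue
        return self._items.popleft()

    def front(self):
        return self._items[0]

    def back(self):
        return self._items[-1]

    def empty(self):
        return len(self._items) == 0

    def size(self):
        return len(self._items)

q = Queue()
q.push(1); q.push(2); q.push(3)
print(q.front())  # 1
print(q.pop())    # 1
print(q.back())   # 3
```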

UNIT-4
Additional ques:
1) What is a priority queue

A priority queue is an abstract data type that behaves similarly to the normal queue
except that each element has some priority, i.e., the element with the highest priority
would come first in a priority queue. The priority of the elements in a priority queue
will determine the order in which elements are removed from the priority queue.

The priority queue supports only comparable elements, which means that the
elements are either arranged in an ascending or descending order.
For example, suppose we have some values like 1, 3, 4, 8, 14, 22 inserted in a priority
queue with an ordering imposed on the values is from least to the greatest. Therefore,
the 1 number would be having the highest priority while 22 will be having the lowest
priority.

Characteristics of a Priority queue

A priority queue is an extension of a queue that contains the following characteristics:

o Every element in a priority queue has some priority associated with it.
o An element with the higher priority will be deleted before the deletion of the lesser
priority.
o If two elements in a priority queue have the same priority, they will be arranged using
the FIFO principle.

Let's understand the priority queue through an example.

We have a priority queue that contains the following values:

1, 3, 4, 8, 14, 22

All the values are arranged in ascending order. Now, we will observe how the priority
queue will look after performing the following operations:

o poll(): This function will remove the highest priority element from the priority queue.
In the above priority queue, the '1' element has the highest priority, so it will be
removed from the priority queue.
o add(2): This function will insert '2' element in a priority queue. As 2 is the smallest
element among all the numbers so it will obtain the highest priority.
o poll(): It will remove '2' element from the priority queue as it has the highest priority
queue.
o add(5): It will insert 5 element after 4 as 5 is larger than 4 and lesser than 8, so it will
obtain the third highest priority in a priority queue.

Types of Priority Queue


There are two types of priority queue:

o Ascending order priority queue: In ascending order priority queue, a lower priority
number is given as a higher priority in a priority. For example, we take the numbers
from 1 to 5 arranged in an ascending order like 1,2,3,4,5; therefore, the smallest
number, i.e., 1 is given as the highest priority in a priority queue.

o Descending order priority queue: In descending order priority queue, a higher


priority number is given as a higher priority in a priority. For example, we take the
numbers from 1 to 5 arranged in descending order like 5, 4, 3, 2, 1; therefore, the largest
number, i.e., 5 is given as the highest priority in a priority queue.

Representation of priority queue


Now, we will see how to represent the priority queue through a one-way list.

We will create the priority queue by using the list given below in which INFO list
contains the data elements, PRN list contains the priority numbers of each data
element available in the INFO list, and LINK basically contains the address of the next
node.
Let's create the priority queue step by step.

In the case of priority queue, lower priority number is considered the higher
priority, i.e., lower priority number = higher priority.

Step 1: In the list, lower priority number is 1, whose data value is 333, so it will be
inserted in the list as shown in the below diagram:

Step 2: After inserting 333, priority number 2 is having a higher priority, and data
values associated with this priority are 222 and 111. So, this data will be inserted based
on the FIFO principle; therefore 222 will be added first and then 111.

Step 3: After inserting the elements of priority 2, the next higher priority number is 4
and data elements associated with 4 priority numbers are 444, 555, 777. In this case,
elements would be inserted based on the FIFO principle; therefore, 444 will be added
first, then 555, and then 777.

Step 4: After inserting the elements of priority 4, the next higher priority number is 5,
and the value associated with priority 5 is 666, so it will be inserted at the end of the
queue.
Implementation of Priority Queue
The priority queue can be implemented in four ways that include arrays, linked list,
heap data structure and binary search tree. The heap data structure is the most efficient
way of implementing the priority queue, so we will implement the priority queue using
a heap data structure in this topic. Now, first we understand the reason why heap is
the most efficient way among all the other data structures.

Analysis of complexities using different implementations

Implementation       add       Remove    peek

Linked list          O(1)      O(n)      O(n)

Binary heap          O(log n)  O(log n)  O(1)

Binary search tree   O(log n)  O(log n)  O(1)

2) Priority queue using heap?

What is Heap?
A heap is a tree-based data structure that forms a complete binary tree, and satisfies
the heap property. If A is a parent node of B, then A is ordered with respect to the
node B for all nodes A and B in a heap. It means that the value of the parent node
could be more than or equal to the value of the child node, or the value of the parent
node could be less than or equal to the value of the child node. Therefore, we can say
that there are two types of heaps:

Max heap: The max heap is a heap in which the value of the parent node is greater
than the value of the child nodes.

Min heap: The min heap is a heap in which the value of the parent node is less than
the value of the child nodes.

Both of these are binary heaps, since each node has at most two child nodes.

Priority Queue Operations


The common operations that we can perform on a priority queue are insertion,
deletion and peek. Let's see how we can maintain the heap data structure.
o Inserting the element in a priority queue (max heap)

If we insert an element into a priority queue, it is placed in the first empty slot, looking
from top to bottom and left to right.

If the element is not in its correct place, it is compared with its parent node; if the two
are out of order, they are swapped. This process continues until the element is
placed in its correct position.

o Removing the maximum element from the priority queue (max heap)

As we know, in a max heap the maximum element is the root node. When we
remove the root node, it creates an empty slot. The last inserted element is moved
into this empty slot. Then, this element is compared with its child nodes, i.e., the left
child and right child, and swapped with the larger of the two. It keeps moving down
the tree until the heap property is restored.
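A minimal sketch of these two movements on an array-based max heap; the names sift_up/sift_down and the sample values are illustrative, not from the text:

```python
def sift_up(heap, i):
    # Move heap[i] up while it is larger than its parent.
    while i > 0:
        parent = (i - 1) // 2
        if heap[i] <= heap[parent]:
            break
        heap[i], heap[parent] = heap[parent], heap[i]
        i = parent

def sift_down(heap, i):
    # Move heap[i] down, swapping with the larger of its children.
    n = len(heap)
    while True:
        largest = i
        for child in (2 * i + 1, 2 * i + 2):
            if child < n and heap[child] > heap[largest]:
                largest = child
        if largest == i:
            break
        heap[i], heap[largest] = heap[largest], heap[i]
        i = largest

def insert(heap, value):
    heap.append(value)            # first empty slot: end of the array
    sift_up(heap, len(heap) - 1)

def extract_max(heap):
    top = heap[0]
    heap[0] = heap[-1]            # last element fills the root's slot
    heap.pop()
    if heap:
        sift_down(heap, 0)
    return top

h = []
for v in [3, 9, 5, 1, 7]:
    insert(h, v)
print(extract_max(h), extract_max(h))  # 9 7
```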

3) Binary Search Trees, Definition, ADT, Implementation, Operations- Searching, Insertion,


Deletion.
Binary Search Tree(BST)
Binary search tree is a data structure that quickly allows us to maintain a
sorted list of numbers.

 It is called a binary tree because each tree node has a maximum of


two children.

 It is called a search tree because it can be used to search for the


presence of a number in O(log(n)) time.
The properties that separate a binary search tree from a regular binary
tree is
1. All nodes of left subtree are less than the root node

2. All nodes of right subtree are more than the root node

3. Both subtrees of each node are also BSTs i.e. they have the above
two properties

A tree having a right subtree with one value smaller than the root is shown to demonstrate
that it is not a valid binary search tree
The binary tree on the right isn't a binary search tree because the right
subtree of the node "3" contains a value smaller than it.

There are two basic operations that you can perform on a binary search
tree:

Search Operation
The algorithm depends on the property of BST that if each left subtree has
values below root and each right subtree has values above the root.

If the value is below the root, we can say for sure that the value is not in the
right subtree; we need to only search in the left subtree and if the value is
above the root, we can say for sure that the value is not in the left subtree;
we need to only search in the right subtree.

Algorithm:

If root == NULL
    return NULL;
If number == root->data
    return root->data;
If number < root->data
    return search(root->left, number)
If number > root->data
    return search(root->right, number)

Let us try to visualize this with a diagram.


4 is not found so, traverse through the left subtree of 8

4 is not found so, traverse through the right subtree of 3


4 is not found so, traverse through the left subtree of 6

4 is found

If the value is found, we return the value so that it gets propagated in each
recursion step as shown in the image below.

As you might have noticed, we have called return search(struct node*) four
times. When we return either the node or NULL, the value gets
returned again and again until search(root) returns the final result.
If the value is found in any of the subtrees, it is propagated up so that in the end it is
returned, otherwise null is returned

If the value is not found, we eventually reach the left or right child of a leaf
node which is NULL and it gets propagated and returned.
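The search above can be written as a small Python function; the tree built below mirrors the walkthrough, with 8 at the root and 4 reached through 3 and 6:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def search(root, key):
    # Empty subtree: the key is not present.
    if root is None:
        return None
    if key == root.key:
        return root
    # Go left for smaller keys, right for larger ones.
    if key < root.key:
        return search(root.left, key)
    return search(root.right, key)

# Build the tree from the walkthrough: 8 at the root.
root = Node(8)
root.left = Node(3)
root.right = Node(10)
root.left.left = Node(1)
root.left.right = Node(6)
root.left.right.left = Node(4)

print(search(root, 4) is not None)  # True: found via 8 -> 3 -> 6 -> 4
print(search(root, 5) is not None)  # False: we reach a NULL child
```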

Insert Operation
Inserting a value in the correct position is similar to searching because we
try to maintain the rule that the left subtree is lesser than root and the right
subtree is larger than root.

We keep going to either right subtree or left subtree depending on the


value and when we reach a point left or right subtree is null, we put the new
node there.

Algorithm:

If node == NULL
return createNode(data)
if (data < node->data)
node->left = insert(node->left, data);
else if (data > node->data)
node->right = insert(node->right, data);
return node;

The algorithm isn't as simple as it looks. Let's try to visualize how we add a
number to an existing BST.

4 < 8 so, traverse through the left child of 8

4 > 3 so, traverse through the right child of 3


4 < 6 so, traverse through the left child of 6

Insert 4 as a left child of 6

We have attached the node but we still have to exit from the function
without doing any damage to the rest of the tree. This is where the return

node; at the end comes in handy. In the case of NULL , the newly created
node is returned and attached to the parent node, otherwise the same node
is returned without any change as we go up until we return to the root.
This makes sure that as we move back up the tree, the other node
connections aren't changed.
Image showing the importance of returning the root element at the end so that the elements
don't lose their position during the upward recursion step.

Deletion Operation
There are three cases for deleting a node from a binary search tree.

Case I

In the first case, the node to be deleted is the leaf node. In such a case,
simply delete the node from the tree.
4 is to be deleted

Delete the node

Case II

In the second case, the node to be deleted has a single child node. In
such a case follow the steps below:

1. Replace that node with its child node.

2. Remove the child node from its original position.


6 is to be deleted

copy the value of its child to the node and delete the child

Final tree
Case III

In the third case, the node to be deleted has two children. In such a case
follow the steps below:

1. Get the inorder successor of that node.

2. Replace the node with the inorder successor.

3. Remove the inorder successor from its original position.

3 is to be deleted

Copy the value of the inorder successor (4) to the node


Delete the inorder successor

Python Example
# Binary Search Tree operations in Python

# Create a node
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None


# Inorder traversal
def inorder(root):
    if root is not None:
        # Traverse left
        inorder(root.left)
        # Traverse root
        print(str(root.key) + "->", end=' ')
        # Traverse right
        inorder(root.right)


# Insert a node
def insert(node, key):
    # Return a new node if the tree is empty
    if node is None:
        return Node(key)

    # Traverse to the right place and insert the node
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)

    return node


# Find the inorder successor
def minValueNode(node):
    current = node

    # Find the leftmost leaf
    while current.left is not None:
        current = current.left

    return current


# Deleting a node
def deleteNode(root, key):
    # Return if the tree is empty
    if root is None:
        return root

    # Find the node to be deleted
    if key < root.key:
        root.left = deleteNode(root.left, key)
    elif key > root.key:
        root.right = deleteNode(root.right, key)
    else:
        # If the node has only one child or no child
        if root.left is None:
            temp = root.right
            root = None
            return temp
        elif root.right is None:
            temp = root.left
            root = None
            return temp

        # If the node has two children, place the inorder
        # successor in position of the node to be deleted
        temp = minValueNode(root.right)
        root.key = temp.key

        # Delete the inorder successor
        root.right = deleteNode(root.right, temp.key)

    return root


root = None
root = insert(root, 8)
root = insert(root, 3)
root = insert(root, 1)
root = insert(root, 6)
root = insert(root, 7)
root = insert(root, 10)
root = insert(root, 14)
root = insert(root, 4)

print("Inorder traversal: ", end=' ')
inorder(root)

print("\nDelete 10")
root = deleteNode(root, 10)
print("Inorder traversal: ", end=' ')
inorder(root)
Binary Search Tree Complexities
Time Complexity

Operation    Best Case     Average Case    Worst Case

Search       O(log n)      O(log n)        O(n)

Insertion    O(log n)      O(log n)        O(n)

Deletion     O(log n)      O(log n)        O(n)

Here, n is the number of nodes in the tree.


Space Complexity

The space complexity for all the operations is O(n) .

Binary Search Tree Applications


1. In multilevel indexing in the database

2. For dynamic sorting

3. For managing virtual memory areas in Unix kernel


IMP QUES
1) Explain priority Queue operations with example.(Binary Heaps). Mention applications of
priority queues.

Priority Queue using Binary Heap




Priority Queue is an extension of the queue with the following properties:
1. Every item has a priority associated with it.
2. An element with high priority is dequeued before an element with low
priority.
3. If two elements have the same priority, they are served according to
their order in the queue.
A Binary Heap is a Binary Tree with the following properties:
1. It is a Complete Tree. This property of Binary Heap makes them
suitable to be stored in an array.
2. A Binary Heap is either Min Heap or Max Heap.
3. In a Min Binary Heap, the key at the root must be minimum among all
keys present in Binary Heap. The same property must be recursively
true for all nodes in Binary Tree.
4. Similarly, in a Max Binary Heap, the key at the root must be maximum
among all keys present in Binary Heap. The same property must be
recursively true for all nodes in Binary Tree.
Operation on Binary Heap
 insert(p): Inserts a new element with priority p.
 extractMax(): Extracts an element with maximum priority.
 remove(i): Removes an element pointed by an iterator i.
 getMax(): Returns an element with maximum priority.
 changePriority(i, p): Changes the priority of an element pointed
by i to p.
Example of A Binary Max Heap
 Suppose below is the given Binary Heap that follows all the properties
of Binary Max Heap.
 Now a node with value 32 need to be insert in the above heap: To
insert an element, attach the new element to any leaf. For Example A
node with priority 32 can be added to the leaf of the node 7. But this
violates the heap property. To maintain the heap property, shift up the
new node 32.

 Shift Up Operation get node with 32 at the correct position: Swap


the incorrectly placed node with its parent until the heap property is
satisfied. For Example: As node 7 is less than node 32 so, swap
node 7 and node 32. Then, swap node 31 and node 32.
 ExtractMax: The maximum value is stored at the root of the tree. But
the root of the tree cannot be directly removed. First, it is replaced with
the last leaf and then removed. For example: To remove node 45, it is
first replaced with node 7. But this violates the heap property, so the
replaced node is moved down using the shift-down operation.

 ShiftDown operation: Swap the incorrectly placed node with a larger


child until the heap property is satisfied. For Example Node 7 is
swapped with node 32 then, last it is swapped with node 31.
 ChangePriority: Let the changed element shift up or down depending
on whether its priority decreased or increased. For Example: Change
the priority of nodes 11 to 35, due to this change the node has to shift
up the node in order to maintain the heap property.
 Remove: To remove an element, change its priority to a value larger
than the current maximum, then shift it up, and then extract it using
extract max. Find the current maximum using getMax.
 GetMax: The max value is stored at the root of the tree. To getmax, just
return the value at the root of the tree.
Array Representation of Binary Heap
Since the heap is maintained in form of a complete binary tree, because of this
fact the heap can be represented in the form of an array. To keep the tree
complete and shallow, while inserting a new element insert it in the leftmost
vacant position in the last level i.e., at the end of our array. Similarly, while
extracting maximum replace the root with the last leaf at the last level i.e., the last
element of the array. Below is the illustration of the same:
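Because the tree is complete, the parent and children of the node stored at 0-based index i sit at fixed positions in the array; a quick sketch (the sample values are illustrative, not from the figures above):

```python
# 0-based array positions in a complete binary tree / binary heap.
def parent(i):
    return (i - 1) // 2

def left(i):
    return 2 * i + 1

def right(i):
    return 2 * i + 2

# Illustrative max heap stored level by level, left to right.
heap = [45, 31, 32, 13, 20, 7, 11]
print(heap[left(1)], heap[right(1)])  # 13 20: children of 31
print(heap[parent(5)])                # 32: parent of 7
```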
APPLICATIONS OF PRIORITY QUEUE:
1. Task Scheduling:
 In operating systems, priority queues are used for scheduling tasks or
processes based on their priority levels. Higher priority tasks are
executed before lower priority ones.
2. Dijkstra's Shortest Path Algorithm:
 Priority queues are commonly employed in algorithms like Dijkstra's
algorithm for finding the shortest paths in graphs. The priority queue
helps in efficiently selecting and processing vertices with the smallest
known distances.
3. Huffman Coding:
 Huffman coding, used in data compression, employs a priority queue to
build an optimal binary tree for encoding and decoding. The elements
in the priority queue represent characters and their frequencies.
4. A* Search Algorithm:
 A* is a popular pathfinding algorithm used in artificial intelligence and
robotics. It uses a priority queue to explore paths efficiently based on a
combination of the cost of the path and a heuristic function.
5. Job Scheduling:
 In systems where jobs or tasks have varying priorities, a priority queue
can be used for scheduling and executing jobs based on their
importance or deadlines.
6. Networking Algorithms:
 Priority queues play a role in network routing algorithms. For example,
in Quality of Service (QoS) management, where different types of
network traffic have different priorities, a priority queue helps manage
and process packets accordingly.
7. Simulation Systems:
 In simulation applications, where events occur at different times and
have different priorities, a priority queue can be used to manage and
process these events in the correct order.
8. Data Compression:
 Certain compression algorithms, like the Burrows-Wheeler transform,
involve priority queues for efficient reordering of characters.
9. Load Balancing:
 Priority queues can be employed in load balancing scenarios, where
tasks or jobs are distributed among multiple servers based on their
priority or workload.
10. Graph Algorithms:
 Various graph algorithms, such as Prim's algorithm for minimum
spanning trees, use priority queues to select and process edges based
on their weights.

2.Define Collision? Explain Collision resolution Techniques with example.

collision

In data structures, a collision occurs when two or more distinct keys hash to the same
index in a hash table. Hash tables are data structures that use a hash function to map
keys to indices in an array, where the values associated with the keys are stored. Ideally,
each key should map to a unique index, but due to the finite size of the array and the
infinite possible keys, collisions can occur.

Collision Resolution Techniques


There are two types of collision resolution techniques.
 Separate chaining (open hashing)
 Open addressing (closed hashing)
Separate chaining: This method involves making a linked list out of the slot where the
collision happened, then adding the new key to the list. Separate chaining is the term used
to describe how this connected list of slots resembles a chain. It is more frequently utilized
when we are unsure of the number of keys to add or remove.
Time complexity
 Its worst-case complexity for searching is O(n).
 Its worst-case complexity for deletion is O(n).
Advantages of separate chaining
 It is easy to implement.
 The hash table never fills full, so we can add more elements to the chain.
 It is less sensitive to the function of the hashing.
Disadvantages of separate chaining
 In this, the cache performance of chaining is not good.
 Memory wastage is too much in this method.
 It requires more space for element links.
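A minimal separate-chaining sketch in Python; the class name is hypothetical, and plain Python lists stand in for the linked lists of colliding keys:

```python
class ChainedHashTable:
    # Each slot holds a list acting as the chain of colliding entries.
    def __init__(self, size=7):
        self.size = size
        self.slots = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % self.size

    def put(self, key, value):
        chain = self.slots[self._index(key)]
        for pair in chain:
            if pair[0] == key:
                pair[1] = value        # update an existing key
                return
        chain.append([key, value])     # collision: extend the chain

    def get(self, key):
        for k, v in self.slots[self._index(key)]:
            if k == key:
                return v
        return None

t = ChainedHashTable()
t.put(10, "a")
t.put(17, "b")   # 10 and 17 both hash to slot 3, so they share a chain
print(t.get(10), t.get(17))  # a b
```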
Open addressing: To prevent collisions in the hashing table open, addressing is employed
as a collision-resolution technique. No key is kept anywhere else besides the hash table. As
a result, the hash table’s size is never equal to or less than the number of keys. Additionally
known as closed hashing.
The following techniques are used in open addressing:
 Linear probing
 Quadratic probing
 Double hashing
Linear probing: This involves doing a linear probe for the following slot when a collision
occurs and continuing to do so until an empty slot is discovered.
The worst-case time to search for an element with linear probing is O(n). The cache performs
best with linear probing, but clustering is a concern. This method's key benefit is that it is
simple to calculate.
Disadvantages of linear probing:
 The main problem is clustering.
 It takes too much time to find an empty slot.
Quadratic probing: When a collision happens in this, we probe for the i²-th slot in the
ith iteration, continuing to do so until an empty slot is discovered. In comparison to linear
probing, quadratic probing has worse cache performance. Additionally, clustering is less
of a concern with quadratic probing.
Double hashing: In this, you employ a second hash function, and in the ith iteration
you probe the slot (i * hash2(x)). Computing two hash functions requires more time.
Although there is no clustering issue, the performance of the cache is relatively poor when
using double hashing.
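The linear-probing variant can be sketched as follows; this is a toy table with no resizing or deletion, and the names are illustrative:

```python
class LinearProbingTable:
    # Open addressing with linear probing; None marks an empty slot.
    def __init__(self, size=7):
        self.size = size
        self.keys = [None] * size
        self.values = [None] * size

    def put(self, key, value):
        i = hash(key) % self.size
        # Probe the next slot until an empty one (or the same key) is found.
        while self.keys[i] is not None and self.keys[i] != key:
            i = (i + 1) % self.size
        self.keys[i] = key
        self.values[i] = value

    def get(self, key):
        i = hash(key) % self.size
        while self.keys[i] is not None:
            if self.keys[i] == key:
                return self.values[i]
            i = (i + 1) % self.size
        return None

t = LinearProbingTable()
t.put(10, "a")
t.put(17, "b")   # collides with 10 at slot 3, lands in slot 4
print(t.get(17))  # b
```

Note that the probe loops assume the table never fills up completely, which is why open addressing keeps the table size larger than the number of keys.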

UNIT-5
1. Explain rotations of AVL trees for Balancing.
AVL Tree

 AVL Tree was invented by G.M. Adelson-Velsky and E.M. Landis in 1962. The tree is named
AVL in honour of its inventors.
 AVL Tree can be defined as height balanced binary search tree in which each node is
associated with a balance factor which is calculated by subtracting the height of its
right sub-tree from that of its left sub-tree.
 Tree is said to be balanced if balance factor of each node is in between -1 to 1,
otherwise, the tree will be unbalanced and need to be balanced.

AVL Rotations

We perform rotation in AVL tree only in case if Balance Factor is other than -1, 0, and 1. There
are basically four types of rotations which are as follows:

1. L L rotation: Inserted node is in the left subtree of left subtree of A


2. R R rotation : Inserted node is in the right subtree of right subtree of A
3. L R rotation : Inserted node is in the right subtree of left subtree of A
4. R L rotation : Inserted node is in the left subtree of right subtree of A

Where node A is the node whose balance Factor is other than -1, 0, 1.

The first two rotations LL and RR are single rotations and the next two rotations LR and RL are
double rotations. For a tree to be unbalanced, minimum height must be at least 2, Let us
understand each rotation
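The two single rotations can be sketched in Python; combining them as shown gives the LR double rotation. The Node class is a minimal stand-in, and the key names follow the A/B/C convention of the text:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def rotate_left(a):
    # RR case: anticlockwise rotation around the edge below node a.
    b = a.right
    a.right = b.left
    b.left = a
    return b          # b becomes the new subtree root

def rotate_right(c):
    # LL case: clockwise rotation around the edge below node c.
    b = c.left
    c.left = b.right
    b.right = c
    return b

# LR case on a small subtree: C with left child A, and B inserted
# in the right subtree of A, leaving C unbalanced.
c = Node('C'); c.left = Node('A'); c.left.right = Node('B')
c.left = rotate_left(c.left)   # first RR on the subtree rooted at A
root = rotate_right(c)         # then LL on the full subtree
print(root.key, root.left.key, root.right.key)  # B A C
```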

1. RR Rotation

When the BST becomes unbalanced because a node is inserted into the right subtree of the
right subtree of A, we perform an RR rotation. RR rotation is an anticlockwise
rotation, which is applied on the edge below a node having balance factor -2.

In the above example, node A has balance factor -2 because node C is inserted in the
right subtree of A's right subtree. We perform the RR rotation on the edge below A.
2. LL Rotation

When the BST becomes unbalanced because a node is inserted into the left subtree of the
left subtree of C, we perform an LL rotation. LL rotation is a clockwise rotation, which
is applied on the edge below a node having balance factor 2.

In the above example, node C has balance factor 2 because node A is inserted in the left
subtree of C's left subtree. We perform the LL rotation on the edge below C.

3. LR Rotation

Double rotations are a bit tougher than the single rotations explained
above. LR rotation = RR rotation + LL rotation, i.e., first an RR rotation is performed on the
subtree and then an LL rotation is performed on the full tree; by full tree we mean the first
node on the path from the inserted node whose balance factor is other than -1, 0, or 1.

Let us understand each and every step very clearly:

State Action

A node B has been inserted into the right subtree of A the left subtree of C, because of which C
has become an unbalanced node having balance factor 2. This case is L R rotation where: Inserted
node is in the right subtree of left subtree of C
As LR rotation = RR + LL rotation, hence RR (anticlockwise) on subtree rooted at A is performed
first. By doing RR rotation, node A, has become the left subtree of B.

After performing RR rotation, node C is still unbalanced, i.e., having balance factor 2, as inserted
node A is in the left of left of C

Now we perform LL clockwise rotation on full tree, i.e. on node C. node C has now become the
right subtree of node B, A is left subtree of B

Balance factor of each node is now either -1, 0, or 1, i.e. BST is balanced now.

4. RL Rotation

As already discussed, double rotations are a bit tougher than the single rotations
explained above. RL rotation = LL rotation + RR rotation, i.e., first an LL
rotation is performed on the subtree and then an RR rotation is performed on the full
tree; by full tree we mean the first node on the path from the inserted node whose
balance factor is other than -1, 0, or 1.

State Action
A node B has been inserted into the left subtree of C the right subtree of A,
because of which A has become an unbalanced node having balance factor - 2.
This case is RL rotation where: Inserted node is in the left subtree of right subtree
of A

As RL rotation = LL rotation + RR rotation, hence, LL (clockwise) rotation on the

subtree rooted at C is performed first. By doing the LL rotation, node C has become
the right subtree of B.

After performing LL rotation, node A is still unbalanced, i.e. having balance factor
-2, which is because of the right-subtree of the right-subtree node A.

Now we perform RR rotation (anticlockwise rotation) on full tree, i.e. on node A.


node C has now become the right subtree of node B, and node A has become
the left subtree of B.

Balance factor of each node is now either -1, 0, or 1, i.e., BST is balanced now.
Example question:
Q: Construct an AVL tree having the following elements
H, I, J, B, A, E, C, F, D, G, K, L

1. Insert H, I, J

On inserting the above elements, especially in the case of H, the BST becomes
unbalanced as the Balance Factor of H is -2. Since the BST is right-skewed, we will
perform RR Rotation on node H.

The resultant balance tree is:


2. Insert B, A
On inserting the above elements, especially in the case of A, the BST becomes unbalanced
as the Balance Factors of H and I are 2. We consider the first unbalanced node on the path
from the last inserted node, i.e., H. Since the BST from H is left-skewed, we will perform an
LL Rotation on node H.

The resultant balance tree is:

3. Insert E
On inserting E, BST becomes unbalanced as the Balance Factor of I is 2, since if we
travel from E to I we find that it is inserted in the left subtree of right subtree of I, we
will perform LR Rotation on node I. LR = RR + LL rotation

3 a) We first perform RR rotation on node B

The resultant tree after RR rotation is:

3b) We then perform LL rotation on the node I

The resultant balanced tree after LL rotation is:

4. Insert C, F, D
On inserting C, F, D, the BST becomes unbalanced as the Balance Factors of B and H are -2.
Since if we travel from D to B we find that it is inserted in the right subtree of the left subtree
of B, we will perform an RL Rotation on node B. RL = LL + RR rotation.

4a) We first perform LL rotation on node E

The resultant tree after LL rotation is:

4b) We then perform RR rotation on node B

The resultant balanced tree after RR rotation is:


5. Insert G

On inserting G, the BST becomes unbalanced as the Balance Factor of H is 2. Since if we

travel from G to H, we find that it is inserted in the left subtree of the right subtree of H,
we will perform an LR Rotation on node H. LR = RR + LL rotation.

5 a) We first perform RR rotation on node C

The resultant tree after RR rotation is:


5 b) We then perform LL rotation on node H

The resultant balanced tree after LL rotation is:

6. Insert K

On inserting K, BST becomes unbalanced as the Balance Factor of I is -2. Since the BST
is right-skewed from I to K, hence we will perform RR Rotation on the node I.

The resultant balanced tree after RR rotation is:


7. Insert L

On inserting the L tree is still balanced as the Balance Factor of each node is now either,
-1, 0, +1. Hence the tree is a Balanced AVL tree

2) Construct Red-Black trees for the following numbers


10,85,15, 70,20,60,30,50,65,80,90,40,5,55?
Let us construct a Red-Black tree for the given numbers: 10, 85, 15, 70, 20, 60, 30, 50, 65, 80,
90, 40, 5, 55.

The Red-Black tree properties that must hold after every insertion are:
o Root Property: The root is black.
o External Property: Every leaf (NIL) is black.
o Internal Property: The children of a red node are black, i.e., no red node has a red child.
o Depth Property: Every root-to-NIL path contains the same number of black nodes.

Each key is inserted as a red node, and the tree is then fixed up by recoloring and/or rotations:

1. Insert 10: the tree is empty, so 10 becomes the root and is colored black.
2. Insert 85: added red to the right of 10; the parent is black, so nothing more is needed.
3. Insert 15: added red as the left child of 85. Parent 85 is red and the uncle is NIL (black),
so an RL double rotation around 10 is performed with recoloring: 15 becomes the black
root with red children 10 and 85.
4. Insert 70: added red as the left child of 85. Parent 85 and uncle 10 are both red, so both
are recolored black.
5. Insert 20: added red as the left child of 70. Parent 70 is red and the uncle is NIL, so an
LL (right) rotation around 85 is performed: 70 becomes black with red children 20 and 85.
6. Insert 60: added red to the right of 20. Parent 20 and uncle 85 are red, so both are
recolored black and 70 becomes red.
7. Insert 30: added red as the left child of 60. Parent 60 is red and the uncle is NIL, so an
RL double rotation around 20 is performed: 30 becomes black with red children 20 and 60.
8. Insert 50: added red as the left child of 60. Parent 60 and uncle 20 are red, so both are
recolored black and 30 becomes red; now 30 and its parent 70 are both red with black
uncle 10, so an RL double rotation around the root 15 is performed: 30 becomes the black
root with red children 15 and 70.
9. Insert 65 (right of 60), 80 (left of 85) and 90 (right of 85): each parent is black, so the
nodes are simply added red.
10. Insert 40: added red as the left child of 50. Parent 50 and uncle 65 are red, so both are
recolored black and 60 becomes red; then 60 and its parent 70 are both red with red
uncle 15, so 70 and 15 are recolored black while the root 30 stays black.
11. Insert 5 (left of 10) and 55 (right of 50): the parents are black, so both are added red.

Here's the resulting Red-Black tree:


                  30 (B)
               /          \
          15 (B)           70 (B)
          /    \          /      \
     10 (B)   20 (B)   60 (R)    85 (B)
      /                /    \     /   \
   5 (R)          50 (B)  65 (B) 80(R) 90(R)
                   /   \
               40 (R)  55 (R)

In the tree, (R) marks red nodes and (B) marks black nodes.

This Red-Black tree keeps insertion, deletion, and searching balanced, using at most two
rotations per fix-up, and it guarantees O(log₂n) time for all of these operations.

Red-black tree in Data Structure


The red-Black tree is a binary search tree. The prerequisite of the red-black tree is that we
should know about the binary search tree. In a binary search tree, the values of the nodes in
the left subtree should be less than the value of the root node, and the values of the nodes in
the right subtree should be greater than the value of the root node.
Each node in the Red-black tree contains an extra bit that represents a color to ensure that the
tree is balanced during any operations performed on the tree like insertion, deletion, etc. In
a binary search tree, the searching, insertion and deletion take O(log2n) time in the average
case, O(1) in the best case and O(n) in the worst case.

Let's understand the different scenarios of a binary search tree.

In the above tree, if we want to search the 80. We will first compare 80 with the root node. 80
is greater than the root node, i.e., 10, so searching will be performed on the right subtree.
Again, 80 is compared with 15; 80 is greater than 15, so we move to the right of the 15, i.e., 20.
Now, we reach the leaf node 20, and 20 is not equal to 80. Therefore, it will show that the
element is not found in the tree. After each operation, the search is divided into half. The above
BST will take O(logn) time to search the element.

The above tree shows the right-skewed BST. If we want to search the 80 in the tree, we will
compare 80 with all the nodes until we find the element or reach the leaf node. So, the above
right-skewed BST will take O(N) time to search the element.

In the above BST, the first one is the balanced BST, whereas the second one is the unbalanced
BST. We conclude from the above two binary search trees that a balanced tree takes less time
than an unbalanced tree for performing any operation on the tree.

Therefore, we need a balanced tree, and the Red-Black tree is a self-balanced binary search
tree. Now, the question arises that why do we require a Red-Black tree if AVL is also a height-
balanced tree. The Red-Black tree is used because the AVL tree requires many rotations when
the tree is large, whereas the Red-Black tree requires a maximum of two rotations to balance
the tree. The main difference between the AVL tree and the Red-Black tree is that the AVL tree
is strictly balanced, while the Red-Black tree is not completely height-balanced. So, the AVL
tree is more balanced than the Red-Black tree, but the Red-Black tree guarantees O(log2n)
time for all operations like insertion, deletion, and searching.

Insertion is easier in the AVL tree as the AVL tree is strictly balanced, whereas deletion and
searching are easier in the Red-Black tree as the Red-Black tree requires fewer rotations.

As the name suggests that the node is either colored in Red or Black color. Sometimes no
rotation is required, and only recoloring is needed to balance the tree.

Properties of Red-Black tree


o It is a self-balancing Binary Search tree. Here, self-balancing means that it balances the
tree itself by either doing the rotations or recoloring the nodes.
o This tree data structure is named as a Red-Black tree as each node is either Red or Black
in color. Every node stores one extra information known as a bit that represents the
color of the node. For example, 0 bit denotes the black color while 1 bit denotes the
red color of the node. Other information stored by the node is similar to the binary
tree, i.e., data part, left pointer and right pointer.
o In the Red-Black tree, the root node is always black in color.
o In a binary tree, we consider those nodes as the leaf which have no child. In contrast,
in the Red-Black tree, the nodes that have no child are considered the internal nodes
and these nodes are connected to the NIL nodes that are always black in color. The NIL
nodes are the leaf nodes in the Red-Black tree.
o If the node is Red, then its children should be in Black color. In other words, we can say
that there should be no red-red parent-child relationship.
o Every path from a node to any of its descendant's NIL node should have same number
of black nodes.

Is every AVL tree can be a Red-Black tree?

Yes, every AVL tree can be a Red-Black tree if we color each node either by Red or Black color.
But every Red-Black tree is not an AVL because the AVL tree is strictly height-balanced while
the Red-Black tree is not completely height-balanced.

Insertion in Red Black tree


The following are some rules used to create the Red-Black tree:
1. If the tree is empty, then we create a new node as a root node with the color black.
2. If the tree is not empty, then we create a new node as a leaf node with a color red.
3. If the parent of a new node is black, then exit.
4. If the parent of a new node is Red, then we have to check the color of the parent's
sibling of a new node.

4a) If the color is Black, then we perform rotations and recoloring.

4b) If the color is Red then we recolor the node. We will also check whether the parents' parent
of a new node is the root node or not; if it is not a root node, we will recolor and recheck the
node.

Let's understand the insertion in the Red-Black tree.

10, 18, 7, 15, 16, 30, 25, 40, 60

Step 1: Initially, the tree is empty, so we create a new node having value 10. This is the first
node of the tree, so it would be the root node of the tree. As we already discussed, that root
node must be black in color, which is shown below:

Step 2: The next node is 18. As 18 is greater than 10 so it will come at the right of 10 as shown
below.

We know the second rule of the Red Black tree that if the tree is not empty then the newly
created node will have the Red color. Therefore, node 18 has a Red color, as shown in the
below figure:
Now we verify the third rule of the Red-Black tree, i.e., the parent of the new node is black or
not. In the above figure, the parent of the node is black in color; therefore, it is a Red-Black
tree.

Step 3: Now, we create the new node having value 7 with Red color. As 7 is less than 10, so it
will come at the left of 10 as shown below.

Now we verify the third rule of the Red-Black tree, i.e., whether the parent of the new node is
black. As we can observe, the parent of node 7 is black, so the tree obeys the Red-Black
tree's properties.

Step 4: The next element is 15, and 15 is greater than 10, but less than 18, so the new node
will be created at the left of node 18. The node 15 would be Red in color as the tree is not
empty.

The above tree violates the property of the Red-Black tree as it has a red-red parent-child
relationship, so we have to apply a rule to restore it. Rule 4 says that if the new node's
parent is red, then we have to check the color of the parent's sibling. The new node is node 15;
the parent of the new node is node 18, and the sibling of the parent node is node 7. As the
color of the parent's sibling is red, we apply rule 4b. Rule 4b says that we recolor both the
parent and the parent's sibling. So, both nodes 7 and 18 are recolored black, as shown in the
below figure.

We also have to check whether the parent's parent of the new node is the root node or not.
As we can observe in the above figure, the parent's parent of a new node is the root node, so
we do not need to recolor it.
Step 5: The next element is 16. As 16 is greater than 10 but less than 18 and greater than 15,
so node 16 will come at the right of node 15. The tree is not empty; node 16 would be Red in
color, as shown in the below figure:

In the above figure, we can observe a red-red parent-child relationship, which violates the
Red-Black property, so we have to fix the tree. The new node's parent is red, and the parent of
the new node has no sibling (a nil sibling counts as black), so rule 4a is applied. Rule 4a
says that some rotations and recoloring must be performed on the tree.

Node 16 is the right child of node 15, and node 15 is the left child of node 18; this is an LR
relationship, so we need to perform two rotations: first a left rotation, then a right rotation.
The left rotation is performed on nodes 15 and 16, where node 16 moves upward and node 15
moves downward. Once the left rotation is performed, the tree looks as shown in the below figure:

In the above figure, we can observe that there is an LL relationship. The above tree has a Red-
red conflict, so we perform the right rotation. When we perform the right rotation, the median
element would be the root node. Once the right rotation is performed, node 16 would become
the root node, and nodes 15 and 18 would be the left child and right child, respectively, as
shown in the below figure.

After rotation, node 16 and node 18 would be recolored; the color of node 16 is red, so it will
change to black, and the color of node 18 is black, so it will change to a red color as shown in
the below figure:

Step 6: The next element is 30. Node 30 is inserted at the right of node 18. As the tree is not
empty, so the color of node 30 would be red.
The color of the parent and parent's sibling of a new node is Red, so rule 4b is applied. In rule
4b, we have to do only recoloring, i.e., no rotations are required. The color of both the parent
(node 18) and parent's sibling (node 15) would become black, as shown in the below image.

We also have to check whether the new node's grandparent is a root node. The grandparent of
the new node 30 is node 16, and node 16 is not a root node, so we recolor node 16 to red. The
parent of node 16 is node 10, which is not red, so there is no red-red conflict.
Step 7: The next element is 25, which we have to insert in the tree. Since 25 is greater than 10,
16, and 18 but less than 30, it will come at the left of node 30. As the tree is not empty, node
25 would be red. Here a red-red conflict occurs, as the parent of the newly created node is
red.

Since there is no parent's sibling (a nil sibling counts as black), rule 4a is applied, in which
rotations as well as recoloring are performed. First, we perform the rotations. As the newly
created node is at the left of its parent and the parent node is at the right of its parent, an
RL relationship is formed, so two rotations are needed. Firstly, the right rotation is performed,
in which node 25 goes upwards and node 30 goes downwards, as shown in the below figure.

After the first rotation, there is an RR relationship, so a left rotation is performed. After the
left rotation, the median element, i.e., 25, becomes the root of this subtree; node 30 is at the
right of 25 and node 18 is at the left of node 25.

Now recoloring would be performed on nodes 25 and 18; node 25 becomes black in color,
and node 18 becomes red in color.
Step 8: The next element is 40. Since 40 is greater than 10, 16, 18, 25, and 30, so node 40 will
come at the right of node 30. As the tree is not empty, node 40 would be Red in color. There
is a Red-red conflict between nodes 40 and 30, so rule 4b will be applied.

As the color of parent and parent's sibling node of a new node is Red so recoloring would be
performed. The color of both the nodes would become black, as shown in the below image.

After recoloring, we also have to check the parent's parent of a new node, i.e., 25, which is not
a root node, so recoloring would be performed, and the color of node 25 changes to Red.

After recoloring, a red-red conflict occurs between nodes 25 and 16. Now node 25 is
considered as the new node. Since the parent of node 25 is red and the parent's sibling
(node 7) is black, rule 4a is applied. Since 25 is at the right of node 16 and 16 is at the
right of its parent, there is an RR relationship. In the RR relationship, a left rotation is
performed. After the left rotation, the median element 16 becomes the root node, as shown in
the below figure.
After rotation, recoloring is performed on nodes 16 and 10. The color of node 10 and node 16
changes to Red and Black, respectively as shown in the below figure.

Step 9: The next element is 60. Since 60 is greater than 16, 25, 30, and 40, so node 60 will
come at the right of node 40. As the tree is not empty, the color of node 60 would be Red.

As we can observe in the above tree, a red-red conflict occurs. The parent node is red, and no
parent's sibling exists in the tree (a nil sibling counts as black), so rule 4a is applied.
The RR relationship exists between the nodes, so a left rotation is performed.

When left rotation is performed, node 40 will come upwards, and node 30 will come
downwards, as shown in the below figure:

After rotation, the recoloring is performed on nodes 30 and 40. The color of node 30 would
become Red, while the color of node 40 would become black.
The above tree is a Red-Black tree as it follows all the Red-Black tree properties.
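The whole insertion walkthrough above can be reproduced with a compact implementation. This is a sketch under our own naming (the `RBTree` class and its methods are not a standard library API); it follows the usual parent-pointer formulation of the rules, and running it on the sequence 10, 18, 7, 15, 16, 30, 25, 40, 60 yields the same final tree with 16 as the black root:

```python
RED, BLACK = "red", "black"

class Node:
    def __init__(self, key, parent=None):
        self.key, self.color = key, RED          # rule 2: new nodes are red
        self.parent, self.left, self.right = parent, None, None

class RBTree:
    def __init__(self):
        self.root = None

    def _rotate(self, x, side):
        # side == "left": lift x.right; side == "right": lift x.left
        y = x.right if side == "left" else x.left
        if side == "left":
            x.right = y.left
            if y.left: y.left.parent = x
            y.left = x
        else:
            x.left = y.right
            if y.right: y.right.parent = x
            y.right = x
        y.parent = x.parent
        if x.parent is None:
            self.root = y
        elif x is x.parent.left:
            x.parent.left = y
        else:
            x.parent.right = y
        x.parent = y

    def insert(self, key):
        parent, cur = None, self.root            # ordinary BST descent
        while cur:
            parent, cur = cur, (cur.left if key < cur.key else cur.right)
        z = Node(key, parent=parent)
        if parent is None:
            self.root = z
        elif key < parent.key:
            parent.left = z
        else:
            parent.right = z
        self._fix(z)

    def _fix(self, z):
        while z.parent and z.parent.color == RED:
            gp = z.parent.parent
            uncle = gp.right if z.parent is gp.left else gp.left
            if uncle and uncle.color == RED:     # rule 4b: recolor only
                z.parent.color = uncle.color = BLACK
                gp.color = RED
                z = gp                           # recheck the grandparent
            elif z.parent is gp.left:            # rule 4a: rotate + recolor
                if z is z.parent.right:          # LR: inner left rotation first
                    z = z.parent
                    self._rotate(z, "left")
                z.parent.color, gp.color = BLACK, RED
                self._rotate(gp, "right")
            else:
                if z is z.parent.left:           # RL: inner right rotation first
                    z = z.parent
                    self._rotate(z, "right")
                z.parent.color, gp.color = BLACK, RED
                self._rotate(gp, "left")
        self.root.color = BLACK                  # rule 1: root stays black

t = RBTree()
for k in [10, 18, 7, 15, 16, 30, 25, 40, 60]:
    t.insert(k)
print(t.root.key, t.root.color)  # 16 black
```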

Deletion in Red-Black tree

Let's understand how we can delete the particular node from the Red-Black tree. The following
are the rules used to delete the particular node from the tree:

Step 1: First, we perform BST rules for the deletion.

Step 2:

Case 1: If the node to be deleted is red, we simply delete it.

Let's understand case 1 through an example.

Suppose we want to delete node 30 from the tree, which is given below.

Initially, we are having the address of the root node. First, we will apply BST to search the node.
Since 30 is greater than 10 and 20, which means that 30 is the right child of node 20. Node 30
is a leaf node and Red in color, so it is simply deleted from the tree.

If we want to delete an internal node that has one child, we first replace the value of the
internal node with the value of the child node and then simply delete the child node.

Let's take another example in which we want to delete the internal node, i.e., node 20.
We cannot delete the internal node; we can only replace the value of that node with another
value. Node 20 is at the right of the root node, and it is having only one child, node 30. So,
node 20 is replaced with a value 30, but the color of the node would remain the same, i.e.,
Black. In the end, node 20 (leaf node) is deleted from the tree.

If we want to delete an internal node that has two child nodes, we have to decide from which
subtree (left or right) the replacement value is taken. We have two ways:

o Inorder predecessor: We will replace with the largest value that exists in the left
subtree.
o Inorder successor: We will replace with the smallest value that exists in the right
subtree.
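The two replacement choices can be sketched on a plain BST node. This is a minimal sketch with our own helper names; the example subtree is illustrative (it only shows a right subtree whose smallest value is 38, matching the deletion example below):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder_successor(node):
    """Smallest key in the right subtree of `node`."""
    cur = node.right
    while cur.left:
        cur = cur.left
    return cur.key

def inorder_predecessor(node):
    """Largest key in the left subtree of `node`."""
    cur = node.left
    while cur.right:
        cur = cur.right
    return cur.key

# Illustrative: node 30 whose right subtree's smallest value is 38
n = Node(30, left=Node(25), right=Node(41, left=Node(38)))
print(inorder_successor(n))  # 38
```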

Suppose we want to delete node 30 from the tree, which is shown below:

Node 30 is at the right of the root node. In this case, we use the inorder successor. The
value 38 is the smallest value in the right subtree, so we replace the value 30 with 38; the
color of the node remains the same, i.e., red. After the replacement, the leaf node that now
holds 30 is deleted from the tree. Since that node is a red leaf, we simply delete it (no
rotations or recoloring are needed).

Deleting a black node removes one black node from every path through it; the replacement node
is then marked double black to record the missing black.

Case 2: If the root node is double black, then simply remove the double black and make
it a single black.

Case 3: If the double black's sibling is black and both its children are black.

o Remove the double black node.


o Add the color of the node to the parent (P) node.

1. If the color of P is red then it becomes black.


2. If the color of P is black, then it becomes double black.
o The color of double black's sibling changes to red.
o If a double black situation still arises, then we apply the other cases.
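The recoloring in case 3 can be sketched as a small helper. This is purely illustrative (the function name and string labels are our own); it only encodes how the extra black is pushed up to the parent:

```python
RED, BLACK, DOUBLE_BLACK = "red", "black", "double black"

def case3_recolor(parent_color):
    """Push the extra black up to the parent. Returns
    (new parent color, new sibling color, still unresolved?)."""
    new_parent = BLACK if parent_color == RED else DOUBLE_BLACK
    return new_parent, RED, new_parent == DOUBLE_BLACK

# Deleting node 15: its parent 20 is red, so the parent absorbs the
# black and the sibling (node 30) turns red -- nothing more to fix.
print(case3_recolor(RED))  # ('black', 'red', False)
```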

Let's understand this case through an example.

Suppose we want to delete node 15 in the below tree.

We cannot simply delete node 15 from the tree as node 15 is Black in color. Node 15 has two
children, which are nil. So, we replace the 15 value with a nil value. As node 15 and nil node
are black in color, the node becomes double black after replacement, as shown in the below
figure.

In the above tree, we can observe that the double black's sibling is black, and its
children are nil, which are also black. As the double black's sibling and both its children
are black, the sibling cannot take the extra black color. The double black's parent node is
red, so the double black node adds its black color to its parent. The color of node 20
changes to black, while the nil node changes to a single black, as shown in the
below figure.
After adding the color to its parent node, the color of the double black's sibling, i.e., node 30
changes to red as shown in the below figure.

In the above tree, we can observe that the double black problem no longer exists, and the tree
is again a valid Red-Black tree.

Case 4: If double black's sibling is Red.

o Swap the color of its parent and its sibling.


o Rotate the parent node in the double black's direction.
o Reapply cases.

Let's understand this case through an example.

Suppose we want to delete node 15.

Initially, the 15 is replaced with a nil value. After replacement, the node becomes double black.
Since the double black's sibling is red, the color of node 20 changes to red and the color of
node 30 changes to black.
Once the swapping of the color is completed, the rotation towards the double black would be
performed. The node 30 will move upwards and the node 20 will move downwards as shown
in the below figure.

In the above tree, we can observe that the double black situation still exists. It satisfies
case 3, in which the double black's sibling is black and both its children are black. First,
we remove the double black from the node and add the black color to its parent node. At the
end, the color of the double black's sibling, i.e., node 25, changes to red, as shown in the
below figure.

In the above tree, we can observe that the double black situation has been resolved. It also
satisfies the properties of the Red Black tree.

Case 5: If the double black's sibling is black, the sibling's child that is far from the double
black is black, but the sibling's child that is near to the double black is red.

o Swap the color of double black's sibling and the sibling child which is nearer to the
double black node.
o Rotate the sibling in the opposite direction of the double black.
o Apply case 6

Suppose we want to delete the node 1 in the below tree.

First, we replace the value 1 with a nil value. The node becomes double black, as both nodes,
i.e., 1 and nil, are black. This satisfies case 3, which applies when the double black's sibling
is black and both its children are black. First, we remove the double black from the nil node.
Since the parent of the double black is black, adding the black color to the parent makes it
double black. After adding the color, the double black's sibling's color changes to red, as
shown below.

We can observe in the above figure that the double black problem still exists in the tree, so
we reapply the cases. We apply case 5 because the sibling of node 5 is node 30, which is black;
the child of node 30 that is far from node 5 is black; and the child of node 30 that is near
to node 5 is red. In this case, we first swap the colors of node 30 and node 25, so the color
of node 30 changes to red and the color of node 25 changes to black, as shown below.
Once the swapping of the color between the nodes is completed, we need to rotate the sibling
in the opposite direction of the double black node. In this rotation, the node 30 moves
downwards while the node 25 moves upwards as shown below.

As we can observe in the above tree, the double black situation still exists, so we need to
apply case 6. Let's first see what case 6 is.

Case 6: If the double black's sibling is black and the far child is red.

o Swap the color of Parent and its sibling node.


o Rotate the parent towards the Double black's direction
o Remove Double black
o Change the Red color to black.

Now we will apply case 6 in the above example to solve the double black's situation.
In the above example, the double black is node 5, and the sibling of node 5 is node 25, which
is black in color. The far child of the double black node is node 30, which is Red in color as
shown in the below figure:

First, we swap the colors of the parent and its sibling. The parent of node 5 is node 10, and
the sibling node is node 25. Both nodes are black, so the swap has no effect.

In the second step, we need to rotate the parent in the double black's direction. After
rotation, node 25 moves upwards, whereas node 10 moves downwards. Once the rotation is
performed, the tree looks as shown in the below figure:

In the next step, we will remove double black from node 5 and node 5 will give its black color
to the far child, i.e., node 30. Therefore, the color of node 30 changes to black as shown in the
below figure.
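After all the cases are applied, the result must satisfy the Red-Black properties: a black root, no red node with a red child, and the same number of black nodes on every root-to-nil path. A small checker (a sketch with our own names) can verify this on any node structure:

```python
RED, BLACK = "red", "black"

class Node:
    def __init__(self, key, color, left=None, right=None):
        self.key, self.color = key, color
        self.left, self.right = left, right

def black_height(node):
    """Return the black height of the subtree, or -1 on a violation."""
    if node is None:
        return 1                          # nil nodes count as black
    if node.color == RED:
        for child in (node.left, node.right):
            if child and child.color == RED:
                return -1                 # red-red conflict
    lh, rh = black_height(node.left), black_height(node.right)
    if lh == -1 or rh == -1 or lh != rh:
        return -1                         # unequal black counts
    return lh + (1 if node.color == BLACK else 0)

def is_red_black(root):
    return root is not None and root.color == BLACK and black_height(root) != -1

# Illustrative valid tree: black root, red leaves under a black node
tree = Node(20, BLACK, Node(10, BLACK),
            Node(30, BLACK, Node(25, RED), Node(40, RED)))
print(is_red_black(tree))  # True
```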
