DAA Unit 4 Notes

The document discusses the Greedy Method, which is a problem-solving approach that makes the best choice at each step without considering future consequences. It outlines the properties of greedy algorithms, their components, advantages, and disadvantages, and provides examples such as Job Sequencing with Deadlines and Minimum Spanning Tree. Additionally, it explains algorithms like Kruskal's and Prim's for finding minimum spanning trees using the greedy approach.


UNIT-IV

GREEDY METHOD

The general method of Greedy


Greedy Method:
Following are a few points about the greedy method.
 At each step, we choose the best option available from among the choices present at that moment.
 In this method, the output is decided stage by stage based only on the current situation; future consequences are not considered.
 The greedy method may or may not give the optimal output.
A greedy algorithm solves problems by making the choice that seems best at that
particular moment. Many optimization problems can be solved using a greedy
algorithm. For some problems that have no known efficient solution, a greedy
algorithm may still provide a solution that is close to optimal. A greedy
algorithm works if a problem has the following two properties:
1. Greedy Choice Property: A globally optimal solution can be reached by making locally
optimal choices. In other words, by making "greedy" choices we can obtain an optimal
solution.
2. Optimal Substructure: Optimal solutions always contain optimal subsolutions. In
other words, the answers to the subproblems of an optimal solution are themselves optimal.

Examples:
Following are a few examples of Greedy algorithms
 Machine scheduling
 Fractional Knapsack Problem
 Minimum Spanning Tree
 Huffman Code
 Job Sequencing
 Activity Selection Problem

Components of Greedy Algorithm


Greedy algorithms consist of the following five components −
 A candidate set − the set of elements from which a solution is created.
 A selection function − chooses the best candidate to be added to the solution.
 A feasibility function − determines whether a candidate can be used to contribute to the
solution or not.
 An objective function − assigns a value to a solution or a partial solution.
 A solution function − indicates whether a complete solution has been reached or not.
Areas of Application
The greedy approach is used to solve many problems. Out of all the problems, here
we have a few of them as follows:
 One of the applications could be finding the shortest path between two vertices
using Dijkstra’s algorithm.
 Another is finding the minimal spanning tree in a graph using Prim’s /Kruskal’s
algorithm

Greedy Algorithm:
getOptimal(Item array[], int num)
    // Initialize an empty result
    result = {}
    while (all items are not considered)
        // Make the greedy choice: select an item
        j = SelectAnItem()
        // If j is feasible, add j to the result
        if (feasible(j))
            result = result ∪ {j}
    return result

Why should we choose a greedy algorithm?

The greedy approach has a few characteristics that can make it suitable for
optimization. One reason to choose a greedy algorithm is that it builds a
feasible solution immediately, piece by piece.
Taking the activity selection problem (explained below) as a reference, we can
observe that whenever additional activities can be completed before the current
activity finishes, those activities can be performed within the same time frame.
Another reason to choose this algorithm is that it divides a problem based on a
simple condition, so there is no need to combine subproblem solutions.
In the activity selection problem, this "division" step is achieved by scanning
the list of activities only once and selecting certain activities.
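To make the activity selection idea concrete, here is a minimal Python sketch of the greedy strategy: sort activities by finish time and repeatedly pick the first one that starts after the previously chosen activity ends. The list of intervals is a hypothetical example, not from the notes above.

```python
# Greedy activity selection: sort by finish time, then repeatedly pick the
# first activity whose start time is not earlier than the last finish time.
def select_activities(activities):
    # activities: list of (start, finish) pairs
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen

# Hypothetical instance with 6 activities
acts = [(1, 2), (3, 4), (0, 6), (5, 7), (8, 9), (5, 9)]
print(select_activities(acts))  # [(1, 2), (3, 4), (5, 7), (8, 9)]
```

Note that the list is scanned only once after sorting, which is exactly the "recursive division" behaviour described above.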

Advantages

 It is easy to implement.
 It typically has lower time complexity than exhaustive approaches.
 It can be used for optimization, or for finding near-optimal solutions in the case
of NP-Hard problems.
Disadvantages

 One disadvantage of this algorithm is that the local optimal solution may not always be
globally optimal.
Job Sequencing with Deadlines
The Job Sequencing with Deadlines problem is solved using the greedy approach: at each
stage we choose the best option currently available, without considering future
consequences.

Many optimization problems can be solved with the greedy algorithm, and even for
problems with no efficient exact solution it may provide one close to optimal. Here, we
will briefly see the definition, algorithm, and an example of job sequencing with
deadlines.

What is Job Sequencing with Deadlines?


In a job sequencing with deadlines problem, the objective is to find a sequence of jobs
completed within their deadlines, giving a maximum profit. Let us consider the set of n
given jobs associated with deadlines, and profit is earned if a job is completed by its
deadline. These jobs need to be ordered so that the maximum profit is earned.

It may happen that not all of the given jobs can be completed within their deadlines.
Assume that the deadline of the ith job Ji is di and the profit received from job Ji is pi.
The optimal solution of the job sequencing with deadlines algorithm is then a feasible
solution with maximum total profit.

Points to Remember for Job Sequencing with Deadlines

 Each job has a deadline di, and it must be processed within its deadline; only one job
can be processed at a time.
 Only one CPU is available for processing all jobs.
 The CPU takes one unit of time to process any job.
 All jobs arrive at the same time.

Job Sequencing with Deadlines Algorithm


Suppose there are n jobs J(i) with deadline D(i) and profit P(i), where 1 ≤ i ≤ n. These
jobs are ordered according to profit: p1 ⩾ p2 ⩾ p3 ⩾ ... ⩾ pn.

Job-Sequencing-With-Deadline (D, J, n, k)
    D(0) := J(0) := 0      // sentinel
    k := 1
    J(1) := 1              // the first (most profitable) job is selected
    for i = 2 … n do
        r := k
        while D(J(r)) > D(i) and D(J(r)) ≠ r do
            r := r – 1
        if D(J(r)) ≤ D(i) and D(i) > r then
            for l = k … r + 1 by -1 do
                J(l + 1) := J(l)
            J(r + 1) := i
            k := k + 1

Job Sequencing with Deadlines Example


Let us consider a given job sequencing problem, as shown in the table below. We have
to find the sequence of jobs completed within their deadlines and give the maximum
profit. Each job is associated with the deadline and profit as given below:

Job J1 J2 J3 J4 J5
Deadline 2 1 1 2 3
Profit 40 100 20 60 20

The given jobs are sorted as per their profit in descending order to solve this problem.
Hence, the jobs are ordered after sorting, as shown in the following table.

Job J2 J4 J1 J5 J3
Deadline 1 2 2 3 1
Profit 100 60 40 20 20

 From the given set of jobs, J2 is selected first, as it can be completed within its
deadline and contributes the maximum profit.
 Next, J4 is selected, as it gives more profit than J1.
 J1 cannot be scheduled in the next slot, as its deadline has passed. Hence J5 is
selected, as it can execute within its deadline.
 Job J3 is discarded, as it cannot be completed within its deadline.

Therefore, the sequence of jobs (J2, J4, J5) is executed within their deadlines and gives
the maximum profit.

The total profit of the sequence is 100 + 60 + 20 = 180.
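The worked example above can be sketched in Python. This is one common way to implement the greedy schedule: sort jobs by profit in descending order, then place each job in the latest free time slot at or before its deadline.

```python
# Greedy job sequencing: sort jobs by profit (descending), then place each
# job in the latest free slot at or before its deadline.
def job_sequencing(jobs):
    # jobs: list of (name, deadline, profit)
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)
    max_deadline = max(d for _, d, _ in jobs)
    slots = [None] * (max_deadline + 1)  # slots[1..max_deadline]
    for name, deadline, profit in jobs:
        for t in range(deadline, 0, -1):  # try the latest free slot first
            if slots[t] is None:
                slots[t] = (name, profit)
                break
    scheduled = [s for s in slots[1:] if s is not None]
    total = sum(p for _, p in scheduled)
    return [n for n, _ in scheduled], total

# The jobs from the example table above
jobs = [("J1", 2, 40), ("J2", 1, 100), ("J3", 1, 20), ("J4", 2, 60), ("J5", 3, 20)]
print(job_sequencing(jobs))  # (['J2', 'J4', 'J5'], 180)
```

This reproduces the sequence (J2, J4, J5) and the total profit of 180 derived above.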

Minimum Spanning Tree


Before knowing about the minimum spanning tree, we should know about the spanning tree.

To understand the concept of spanning tree, consider the below graph:

The above graph can be represented as G(V, E), where V is the set of vertices and E is
the set of edges. A spanning tree of the above graph is represented as G′(V′, E′). Here
V′ = V, meaning the spanning tree contains the same vertices as the graph, but the
edge set differs: the edges of the spanning tree are a subset of the edges of the
original graph. Therefore:

E′ ⊆ E

The number of edges satisfies:

|E′| = |V| − 1

A spanning tree satisfies the following conditions:

o The number of vertices in the spanning tree is the same as the number of
vertices in the original graph:
V′ = V
o The number of edges in the spanning tree equals the number of vertices
minus 1:
|E′| = |V| − 1
o The spanning tree does not contain any cycle.
o The spanning tree is not disconnected.

Consider the below graph:


The above graph contains 5 vertices. As we know, the spanning tree contains the same
vertices as the graph, so it has 5 vertices, and the number of edges in the spanning
tree equals (5 − 1) = 4. The following are the possible spanning trees:
What is a minimum spanning tree?

The minimum spanning tree is the spanning tree whose sum of edge weights is minimum.
Consider the below graph that contains the edge weights:
The following are the spanning trees that we can make from the above graph.

o The first spanning tree is the tree in which we have removed the edge between
vertices 1 and 5, shown below:
The sum of the edges of this tree is 1 + 4 + 5 + 2 = 12.
o The second spanning tree is the tree in which we have removed the edge between
vertices 1 and 2, shown below:
The sum of the edges of this tree is 3 + 2 + 5 + 4 = 14.
o The third spanning tree is the tree in which we have removed the edge between
vertices 2 and 3, shown below:
The sum of the edges of this tree is 1 + 3 + 2 + 5 = 11.
o The fourth spanning tree is the tree in which we have removed the edge between
vertices 3 and 4, shown below:
The sum of the edges of this tree is 1 + 3 + 2 + 4 = 10. This cost, 10, is the
minimum, so this is the minimum spanning tree.

General properties of minimum spanning tree:

o If we remove any edge from the spanning tree, it becomes disconnected.
Therefore, we cannot remove any edge from the spanning tree.
o If we add an edge to the spanning tree, it creates a cycle. Therefore, we
cannot add any edge to the spanning tree.
o If each edge of a graph has a distinct weight, then there exists only a single,
unique minimum spanning tree. If the edge weights are not distinct, there can
be more than one minimum spanning tree.
o A complete undirected graph on n vertices has n^(n−2) spanning trees
(Cayley's formula).
o Every connected undirected graph contains at least one spanning tree.
o A disconnected graph does not have any spanning tree.
o In a connected graph with e edges and n vertices, we can remove a maximum of
(e − n + 1) edges to construct a spanning tree.

Let's understand the last property through an example.

Consider the complete graph which is given below:


The number of spanning trees that can be made from the above complete graph equals
n^(n−2) = 4^(4−2) = 16.

Therefore, 16 spanning trees can be created from the above graph.

The maximum number of edges that can be removed to construct a spanning tree equals
e − n + 1 = 6 − 4 + 1 = 3.
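The two counts above are simple arithmetic and can be checked directly:

```python
# Spanning-tree counts for the complete graph K4 (4 vertices, 6 edges)
n, e = 4, 6
print(n ** (n - 2))  # 16 spanning trees, by Cayley's formula
print(e - n + 1)     # 3 edges can be removed to form a spanning tree
```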

MST using Kruskal’s algorithm

Below are the steps for finding an MST using Kruskal’s algorithm:
1. Sort all the edges in non-decreasing order of their weight.
2. Pick the smallest edge. Check whether it forms a cycle with the spanning tree formed so
far. If no cycle is formed, include this edge; otherwise, discard it.
3. Repeat step 2 until there are (V − 1) edges in the spanning tree.
Step 2 uses the union-find data structure to detect cycles.
Kruskal’s algorithm uses the greedy approach to find the minimum cost spanning tree. The
greedy choice is to pick the smallest-weight edge that does not cause a cycle in the MST
constructed so far. Let us understand it with an example:
Below is the illustration of the above approach:

Input Graph:
The graph contains 9 vertices and 14 edges. So, the minimum spanning tree formed will have
(9 – 1) = 8 edges.
After sorting:
Weight Src Dest
1 7 6
2 8 2
2 6 5
4 0 1
4 2 5
6 8 6
7 2 3
7 7 8
8 0 7
8 1 2
9 3 4
10 5 4
11 1 7
14 3 5
Now pick all edges one by one from the sorted list of edges
Step 1: Pick edge 7-6: No cycle is formed, include it.

Step 2: Pick edge 8-2: No cycle is formed, include it.


Step 3: Pick edge 6-5: No cycle is formed, include it.

Step 4: Pick edge 0-1: No cycle is formed, include it.

Step 5: Pick edge 2-5: No cycle is formed, include it.


Step 6: Pick edge 8-6: Since including this edge results in the cycle, discard it.
Step 7: Pick edge 2-3: No cycle is formed, include it.

Step 8: Pick edge 7-8: Since including this edge results in the cycle, discard it.
Step 9: Pick edge 0-7: No cycle is formed, include it.

Step 10: Pick edge 1-2: Since including this edge results in the cycle, discard it.
Step 11: Pick edge 3-4: No cycle is formed, include it.
Note: Since the number of edges included in the MST equals (V – 1), the
algorithm stops here.
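The steps above can be sketched in Python using a union-find structure for cycle detection. The edge list is taken from the sorted table earlier in this section (each entry is (weight, src, dest)).

```python
# Kruskal's MST with union-find (path compression + union by rank).
def kruskal(num_vertices, edges):
    parent = list(range(num_vertices))
    rank = [0] * num_vertices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):  # non-decreasing order of weight
        ru, rv = find(u), find(v)
        if ru == rv:
            continue               # edge would form a cycle; discard it
        if rank[ru] < rank[rv]:    # union by rank
            ru, rv = rv, ru
        parent[rv] = ru
        if rank[ru] == rank[rv]:
            rank[ru] += 1
        mst.append((u, v, w))
        total += w
    return mst, total

# The 9-vertex, 14-edge graph from the example above: (weight, src, dest)
edges = [(4, 0, 1), (8, 0, 7), (8, 1, 2), (11, 1, 7), (7, 2, 3), (4, 2, 5),
         (2, 2, 8), (9, 3, 4), (14, 3, 5), (10, 4, 5), (2, 5, 6), (1, 6, 7),
         (6, 6, 8), (7, 7, 8)]
mst, total = kruskal(9, edges)
print(len(mst), total)  # 8 37
```

The run includes exactly the eight edges listed in the steps above, for a total MST weight of 37.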

PRIM’S ALGORITHM
Prim's algorithm, like Kruskal's algorithm, uses the greedy approach to find a minimum
cost spanning tree. Prim's algorithm shares a similarity with shortest-path-first
algorithms.
In contrast with Kruskal's algorithm, Prim's algorithm grows a single tree,
repeatedly adding new vertices to the spanning tree from the given graph.
To contrast with Kruskal's algorithm and to understand Prim's algorithm better, we shall
use the same example −

Step 1 - Remove all loops and parallel edges


Remove all loops and parallel edges from the given graph. In case of parallel edges,
keep the one which has the least cost associated and remove all others.

Step 2 - Choose any arbitrary node as root node

In this case, we choose node S as the root of Prim's spanning tree. This node is
chosen arbitrarily, so any node can be the root node. One may wonder why any node
can be a root node. The answer is that a spanning tree contains all the nodes of the
graph, and because the graph is connected, there must be at least one edge joining
any chosen node to the rest of the tree.

Step 3 - Check outgoing edges and select the one with the least cost

After choosing the root node S, we see that S–A and S–C are two edges with weights 7
and 8, respectively. We choose the edge S–A as its weight is smaller.

Now, the tree S-7-A is treated as one node and we check for all edges going out from it.
We select the one which has the lowest cost and include it in the tree.
After this step, S-7-A-3-C tree is formed. Now we'll again treat it as a node and will
check all the edges again. However, we will choose only the least cost edge. In this
case, C-3-D is the new edge, which is less than other edges' cost 8, 6, 4, etc.

After adding node D to the spanning tree, we now have two edges going out of it having
the same cost, i.e. D-2-T and D-2-B. Thus, we can add either one. But the next step will
again yield edge 2 as the least cost. Hence, we are showing a spanning tree with both
edges included.
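Since the full edge list of the S/A/C example graph is not reproduced in these notes, here is a Python sketch of Prim's algorithm run on the same 9-vertex graph used in the Kruskal section, so the two algorithms can be compared directly. It grows one tree from an arbitrary root, always adding the cheapest edge that reaches a new vertex.

```python
import heapq

# Prim's MST: grow a single tree from a root, repeatedly adding the
# cheapest edge that connects a not-yet-included vertex to the tree.
def prim(num_vertices, edges, root=0):
    adj = [[] for _ in range(num_vertices)]
    for w, u, v in edges:          # undirected graph
        adj[u].append((w, v))
        adj[v].append((w, u))
    in_tree = [False] * num_vertices
    heap = [(0, root)]             # (edge weight to reach vertex, vertex)
    total = 0
    while heap:
        w, u = heapq.heappop(heap)
        if in_tree[u]:
            continue               # already connected; skip stale entry
        in_tree[u] = True
        total += w
        for wv, v in adj[u]:
            if not in_tree[v]:
                heapq.heappush(heap, (wv, v))
    return total

# Same 9-vertex graph as in the Kruskal section: (weight, src, dest)
edges = [(4, 0, 1), (8, 0, 7), (8, 1, 2), (11, 1, 7), (7, 2, 3), (4, 2, 5),
         (2, 2, 8), (9, 3, 4), (14, 3, 5), (10, 4, 5), (2, 5, 6), (1, 6, 7),
         (6, 6, 8), (7, 7, 8)]
print(prim(9, edges))  # 37, the same total weight Kruskal's algorithm finds
```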
Knapsack Problem:

We are given a knapsack of maximum capacity M and n items, each with a weight and a
profit. The goal is to fill the knapsack with items (taking fractions where necessary)
such that the total selected weight is less than or equal to the capacity of the
knapsack and the total profit is maximum.

Algorithm for solving the Knapsack Problem using the Greedy Method:
 Assume a knapsack of capacity M and n items having profit pi and weight wi.
 Sort the items by profit/weight ratio: pi/wi.
 Consider the items in order of decreasing ratio.
 Take as much of each item as possible.
 Traverse the sorted list as:
    if (wi ≤ M)
    {
        Xi = 1;
        M = M - wi;
    }
    else if (wi > M && M > 0)
    {
        Xi = M/wi;
        M = 0;
    }
    else
    {
        Xi = 0;
    }

Example:
Let us consider that the capacity of the knapsack is M = 60 and the list of provided
items is as follows (weights and profits reconstructed from the solution below):

Item    A    B    C
Weight  40   10   20
Profit  280  100  120

The provided items are not sorted by (pi/wi). After sorting by ratio
(B: 100/10 = 10, A: 280/40 = 7, C: 120/20 = 6), the order is B, A, C.

Solution:
After sorting all the items by (pi/wi), item B is chosen first, as its weight is less
than the capacity of the knapsack. Next, item A is chosen, as the remaining capacity
of the knapsack is greater than the weight of A. Then C is considered; however, the
whole item cannot be taken, as the remaining capacity of the knapsack is less than
the weight of C.
Hence, a fraction of C, i.e. (60 − 50)/20 = 1/2, is chosen.
Now the weight of the selected items equals the capacity of the knapsack, so no more
items can be selected.
The total weight of the selected items is 10 + 40 + 20 × (10/20) = 60.
The total profit is 100 + 280 + 120 × (10/20) = 380 + 60 = 440.
Hence, this is the optimal solution; we cannot gain more profit by selecting any
different combination of items.
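The greedy fractional knapsack procedure above can be sketched in Python; the item weights and profits are the ones inferred from the worked solution (B: 10/100, A: 40/280, C: 20/120).

```python
# Greedy fractional knapsack: sort by profit/weight ratio, take whole items
# while they fit, then a fraction of the first item that does not fit.
def fractional_knapsack(capacity, items):
    # items: list of (name, weight, profit)
    items = sorted(items, key=lambda it: it[2] / it[1], reverse=True)
    total_profit = 0.0
    for name, weight, profit in items:
        if capacity >= weight:        # take the whole item (Xi = 1)
            capacity -= weight
            total_profit += profit
        elif capacity > 0:            # take a fraction (Xi = M/wi)
            total_profit += profit * (capacity / weight)
            capacity = 0
    return total_profit

items = [("A", 40, 280), ("B", 10, 100), ("C", 20, 120)]
print(fractional_knapsack(60, items))  # 440.0
```

Sorting by ratio gives the order B, A, C, and the half of item C fills the last 10 units, matching the total profit of 440 computed above.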

Note: the output spanning trees of the same graph produced by the two different
algorithms, Kruskal's and Prim's, may be the same.
Dijkstra’s Shortest Path Algorithm
Given a graph and a source vertex in the graph, find the shortest paths from the
source to all vertices in the given graph.
Examples:
Input: src = 0, the graph is shown below.

Output: 0 4 12 19 21 11 9 8 14
Explanation: The distance from 0 to 1 = 4.
The minimum distance from 0 to 2 = 12. 0->1->2
The minimum distance from 0 to 3 = 19. 0->1->2->3
The minimum distance from 0 to 4 = 21. 0->7->6->5->4
The minimum distance from 0 to 5 = 11. 0->7->6->5
The minimum distance from 0 to 6 = 9. 0->7->6
The minimum distance from 0 to 7 = 8. 0->7
The minimum distance from 0 to 8 = 14. 0->1->2->8

Implementing Dijkstra Algorithm

Dijkstra’s shortest path algorithm, similar to Prim’s algorithm, in O(V²):

Dijkstra’s algorithm is very similar to Prim’s algorithm for the minimum spanning tree.
Like Prim’s MST, it generates an SPT (shortest path tree) with the given source as the
root. It maintains two sets: one set contains vertices already included in the
shortest-path tree, the other contains vertices not yet included. At every step, the
algorithm finds a vertex in the not-yet-included set that has a minimum distance
from the source.
Follow the steps below to solve the problem:
 Create a set sptSet (shortest path tree set) that keeps track of vertices included in
the shortest-path tree, i.e., whose minimum distance from the source is calculated
and finalized. Initially, this set is empty.
 Assign a distance value to all vertices in the input graph. Initialize all distance
values as INFINITE. Assign the distance value as 0 for the source vertex so that it
is picked first.
 While sptSet doesn’t include all vertices
 Pick a vertex u which is not there in sptSet and has a minimum distance
value.
 Include u to sptSet.
 Then update distance value of all adjacent vertices of u.
 To update the distance values, iterate through all adjacent
vertices.
 For every adjacent vertex v, if the sum of the distance value of u
(from source) and weight of edge u-v, is less than the distance
value of v, then update the distance value of v.
Note: We use a boolean array sptSet[] to represent the set of vertices included in
SPT. If a value sptSet[v] is true, then vertex v is included in SPT, otherwise not. Array
dist[] is used to store the shortest distance values of all vertices.
Below is the illustration of the above approach:

Illustration:

To understand the Dijkstra’s Algorithm lets take a graph and find the shortest path
from source to all nodes.
Consider below graph and src = 0

Step 1:
 The set sptSet is initially empty and distances assigned to vertices are {0, INF, INF,
INF, INF, INF, INF, INF} where INF indicates infinite.
 Now pick the vertex with a minimum distance value. The vertex 0 is picked, include
it in sptSet. So sptSet becomes {0}. After including 0 to sptSet, update distance
values of its adjacent vertices.
 Adjacent vertices of 0 are 1 and 7. The distance values of 1 and 7 are updated as 4
and 8.
The following subgraph shows vertices and their distance values, only the vertices
with finite distance values are shown. The vertices included in SPT are shown
in green colour.
Step 2:
 Pick the vertex with minimum distance value and not already included in SPT (not
in sptSET). The vertex 1 is picked and added to sptSet.
 So sptSet now becomes {0, 1}. Update the distance values of adjacent vertices of
1.
 The distance value of vertex 2 becomes 12.

Step 3:
 Pick the vertex with minimum distance value and not already included in SPT (not
in sptSET). Vertex 7 is picked. So sptSet now becomes {0, 1, 7}.
 Update the distance values of adjacent vertices of 7. The distance value of vertex 6
and 8 becomes finite (15 and 9 respectively).

Step 4:
 Pick the vertex with minimum distance value and not already included in SPT (not
in sptSET). Vertex 6 is picked. So sptSet now becomes {0, 1, 7, 6}.
 Update the distance values of adjacent vertices of 6. The distance value of vertex 5
and 8 are updated.

We repeat the above steps until sptSet includes all vertices of the given graph.
Finally, we get the following Shortest Path Tree (SPT).
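The steps above can be sketched in Python. Instead of the O(V²) scan, this version uses a min-heap to pick the next closest vertex; the edge list is assembled from the weights used in the illustration (the same 9-vertex graph as in the Kruskal section).

```python
import heapq

# Dijkstra's single-source shortest paths with a min-heap.
def dijkstra(num_vertices, edges, src):
    adj = [[] for _ in range(num_vertices)]
    for u, v, w in edges:          # undirected graph
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = [float("inf")] * num_vertices
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue               # stale heap entry; u already finalized
        for v, w in adj[u]:
            if d + w < dist[v]:    # relax edge u-v
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# The 9-vertex example graph: (u, v, weight)
edges = [(0, 1, 4), (0, 7, 8), (1, 2, 8), (1, 7, 11), (2, 3, 7), (2, 8, 2),
         (2, 5, 4), (3, 4, 9), (3, 5, 14), (4, 5, 10), (5, 6, 2), (6, 7, 1),
         (6, 8, 6), (7, 8, 7)]
print(dijkstra(9, edges, 0))  # [0, 4, 12, 19, 21, 11, 9, 8, 14]
```

The output matches the distance list given at the start of this section.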
ALL PAIRS SHORTEST PATHS (FLOYD-WARSHALL
ALGORITHM)
Floyd-Warshall Algorithm
The Floyd-Warshall algorithm is a dynamic programming algorithm used to find the
shortest paths between every pair of vertices in a weighted graph, including graphs
with negative edge weights (as long as there are no negative weight cycles). The
algorithm works by computing the shortest path between every pair of vertices in the
graph, using a matrix of intermediate vertices to keep track of the best-known path
so far.

But before we get started, let us briefly understand what Dynamic Programming is.

Dynamic programming is a technique used in computer science and mathematics to
solve complicated problems by breaking them down into smaller subproblems and
solving each subproblem only once. It is an optimization technique that finds the
best solution to a problem by utilizing the solutions to its subproblems.

The key idea behind dynamic programming is to store the solutions to the subproblems
in memory so they can be reused later while solving larger problems. This reduces the
time and space complexity of the algorithm and lets it solve much larger and more
complex problems than a brute-force approach could.

Working of the Floyd-Warshall Algorithm:

1. Initialize a distance matrix D, where D[i][j] represents the shortest distance
between vertex i and vertex j.

2. Set the diagonal entries of the matrix to 0, and all other entries to infinity.

3. For every edge (u, v) in the graph, update the distance matrix to reflect the
weight of the edge: D[u][v] = weight(u, v).

4. For every vertex k in the graph, consider all pairs of vertices (i, j) and
check whether the path from i to j through k is shorter than the current best path.
If it is, update the distance matrix: D[i][j] = min(D[i][j], D[i][k] + D[k][j]).

5. After all iterations, the matrix D will contain the shortest path distances
between all pairs of vertices.

Example:

The Floyd-Warshall algorithm finds the shortest path between all pairs of vertices
in a weighted graph. It works by maintaining a matrix of distances between each pair
of vertices and updating this matrix iteratively until the shortest paths are found.

Let's look at an example to illustrate how the Floyd-Warshall algorithm works:

Consider the following weighted graph:

Figure: A Weighted Graph

In this graph, the vertices are represented by letters (A, B, C, D), and the numbers on
the edges represent the weights of those edges.

To apply the Floyd-Warshall algorithm to this graph, we start by initializing a
matrix of distances between every pair of vertices. If two vertices are directly
connected by an edge, their distance is the weight of that edge. If there is no
direct edge between two vertices, their distance is infinite.
In the first iteration of the algorithm, we consider the possibility of using
vertex 1 (A) as an intermediate vertex in paths between all pairs of vertices. If
the distance from vertex i to vertex A plus the distance from vertex A to vertex j
is less than the current distance from vertex i to vertex j, then we update the
matrix with this new distance. We do this for each possible pair of vertices.

In the second iteration, we consider the possibility of using vertex 2 (B) as an
intermediate vertex in paths between all pairs of vertices. We update the matrix in
the same manner as before.

In the third iteration, we consider the possibility of using vertex 3 (C) as an intermediate
vertex in paths between all pairs of vertices.

Finally, in the fourth and final iteration, we consider the possibility of using vertex 4 (D)
as an intermediate vertex in paths between all pairs of vertices.
After the fourth iteration, we have got the shortest path between every pair of vertices in
the graph. For example, the shortest path from vertex A to vertex D is 4, which is the
value in the matrix at row A and column D.

ALGORITHM:

void fillDistanceMatrix(int A[n][n], int D[n][n]) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            if (i == j)
                D[i][j] = 0;          // distance from a vertex to itself is 0
            else if (A[i][j] == 0)
                D[i][j] = INF;        // no direct edge
            else
                D[i][j] = A[i][j];    // direct edge weight
        }
    }
}

void floydWarshall(int A[n][n], int D[n][n]) {
    fillDistanceMatrix(A, D);
    for (int k = 0; k < n; k++) {
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                if (D[i][k] < INF && D[k][j] < INF)   // guard against overflow on INF
                    if (D[i][k] + D[k][j] < D[i][j])
                        D[i][j] = D[i][k] + D[k][j];
            }
        }
    }
}

Time complexity - This algorithm has an overall time complexity of O(N³), where N is
the number of vertices in the graph. It is suitable for dense graphs and can handle
negative edge weights, provided the graph contains no negative weight cycles.
However, for sparse graphs, other algorithms such as Dijkstra's algorithm or the
Bellman-Ford algorithm (run from each source) can be more efficient.
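The same algorithm can be sketched compactly in Python. Since the example graph's weight matrix is not reproduced in these notes, the adjacency matrix below is a small hypothetical 4-vertex directed graph; with float infinity there is no overflow, so the INF guard from the C version is unnecessary.

```python
# Floyd-Warshall all-pairs shortest paths on a small hypothetical graph.
INF = float("inf")

def floyd_warshall(dist):
    n = len(dist)
    d = [row[:] for row in dist]  # copy so the input matrix is untouched
    for k in range(n):            # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Adjacency matrix: d[i][j] = edge weight, 0 on the diagonal, INF if no edge.
graph = [
    [0,   5,   INF, 10],
    [INF, 0,   3,   INF],
    [INF, INF, 0,   1],
    [INF, INF, INF, 0],
]
print(floyd_warshall(graph)[0][3])  # 9, via the path 0 -> 1 -> 2 -> 3
```

Here the direct edge 0 → 3 has weight 10, but routing through vertices 1 and 2 gives 5 + 3 + 1 = 9, which the algorithm finds.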

Tree traversal Techniques(Inorder, Preorder and Postorder)


The term 'tree traversal' means traversing or visiting each node of a tree. There is a
single way to traverse the linear data structure such as linked list, queue, and stack.
Whereas, there are multiple ways to traverse a tree that are listed as follows -

o Preorder traversal

o Inorder traversal

o Postorder traversal

So, in this article, we will discuss the above-listed techniques of traversing a tree. Now,
let's start discussing the ways of tree traversal.

Preorder traversal

This technique follows the 'root left right' policy. First, the root node is visited;
then the left subtree is traversed recursively; and finally, the right subtree is
traversed recursively. As the root node is traversed before (or pre) the left and
right subtrees, it is called preorder traversal.

So, in a preorder traversal, each node is visited before both of its subtrees.

The applications of preorder traversal include -

o It is used to create a copy of the tree.

o It can also be used to get the prefix expression of an expression tree.

Algorithm

1. Until all nodes of the tree are visited:

2. Step 1 - Visit the root node.

3. Step 2 - Traverse the left subtree recursively.

4. Step 3 - Traverse the right subtree recursively.

Example

Now, let's see the example of the preorder traversal technique.

Now, start applying the preorder traversal on the above tree. First, we traverse the root
node A; after that, move to its left subtree B, which will also be traversed in preorder.

So, for left subtree B, first, the root node B is traversed itself; after that, its left
subtree D is traversed. Since node D does not have any children, move to right
subtree E. As node E also does not have any children, the traversal of the left subtree of
root node A is completed.

Now, move towards the right subtree of root node A that is C. So, for right subtree C,
first the root node C has traversed itself; after that, its left subtree F is traversed. Since
node F does not have any children, move to the right subtree G. As node G also does
not have any children, traversal of the right subtree of root node A is completed.

Therefore, all the nodes of the tree are traversed. So, the output of the preorder
traversal of the above tree is -

A→B→D→E→C→F→G

Postorder traversal:

This technique follows the 'left right root' policy. First, the left subtree of the
root node is traversed recursively; then the right subtree is traversed recursively;
and finally, the root node is visited. As the root node is traversed after (or post)
the left and right subtrees, it is called postorder traversal.

So, in a postorder traversal, each node is visited after both of its subtrees.

The applications of postorder traversal include -


o It is used to delete the tree.

o It can also be used to get the postfix expression of an expression tree.

Algorithm

1. Until all nodes of the tree are visited:

2. Step 1 - Traverse the left subtree recursively.

3. Step 2 - Traverse the right subtree recursively.

4. Step 3 - Visit the root node.

Example

Now, let's see the example of the postorder traversal technique.

Now, start applying the postorder traversal on the above tree. First, we traverse the left
subtree B that will be traversed in postorder. After that, we will traverse the right
subtree C in postorder. And finally, the root node of the above tree, i.e., A, is traversed.

So, for left subtree B, first, its left subtree D is traversed. Since node D does not have
any children, traverse the right subtree E. As node E also does not have any children,
move to the root node B. After traversing node B, the traversal of the left subtree of root
node A is completed.

Now, move towards the right subtree of root node A that is C. So, for right subtree C,
first its left subtree F is traversed. Since node F does not have any children, traverse the
right subtree G. As node G also does not have any children, therefore, finally, the root
node of the right subtree, i.e., C, is traversed. The traversal of the right subtree of root
node A is completed.

At last, traverse the root node of a given tree, i.e., A. After traversing the root node, the
postorder traversal of the given tree is completed.

Therefore, all the nodes of the tree are traversed. So, the output of the postorder
traversal of the above tree is -

D→E→B→F→G→C→A

Inorder traversal
This technique follows the 'left root right' policy. First, the left subtree is
traversed recursively; then the root node is visited; and finally, the right subtree
is traversed recursively. As the root node is traversed between the left and right
subtrees, it is named inorder traversal.

So, in the inorder traversal, each node is visited in between of its subtrees.

The applications of inorder traversal include -

o It is used to get the nodes of a BST in increasing order.

o It can also be used to get the infix expression of an expression tree.

Algorithm

1. Until all nodes of the tree are visited:

2. Step 1 - Traverse the left subtree recursively.

3. Step 2 - Visit the root node.

4. Step 3 - Traverse the right subtree recursively.

Example

Now, let's see the example of the Inorder traversal technique.


Now, start applying the inorder traversal on the above tree. First, we traverse the left
subtree B that will be traversed in inorder. After that, we will traverse the root node A.
And finally, the right subtree C is traversed in inorder.

So, for left subtree B, first, its left subtree D is traversed. Since node D does not
have any children, after traversing it, node B is traversed, and at last, the right
subtree of node B, that is E, is traversed. Node E also does not have any children;
therefore, the traversal of the left subtree of root node A is completed.

After that, traverse the root node of a given tree, i.e., A.

At last, move towards the right subtree of root node A, that is C. So, for right subtree
C, first, its left subtree F is traversed. Since node F does not have any children, node
C is traversed next, and at last, the right subtree of node C, that is G, is traversed.
Node G also does not have any children; therefore, the traversal of the right subtree of
root node A is completed.

As all the nodes of the tree are traversed, the inorder traversal of the given tree is
completed. The output of the inorder traversal of the above tree is -

D→B→E→A→F→C→G
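The inorder walkthrough can likewise be sketched in Python (names illustrative, as before), using the same example tree:

```python
# A minimal binary-tree node; class and field names are illustrative.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def inorder(node, result):
    """Inorder: left subtree, then the root, then the right subtree."""
    if node is None:
        return result
    inorder(node.left, result)
    result.append(node.value)
    inorder(node.right, result)
    return result

# The example tree: D/E under B, F/G under C, root A.
root = Node('A',
            Node('B', Node('D'), Node('E')),
            Node('C', Node('F'), Node('G')))

print(inorder(root, []))  # ['D', 'B', 'E', 'A', 'F', 'C', 'G']
```

The printed order matches the walkthrough: D, B, E, A, F, C, G.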

Complexity of Tree traversal techniques


The time complexity of the tree traversal techniques discussed above is O(n), where 'n'
is the number of nodes in the binary tree.

The space complexity of these techniques is O(1) if we do not count the stack used for
recursive function calls. Otherwise, the space complexity is O(h), where 'h' is the
height of the tree.
GRAPH TRAVERSAL TECHNIQUES
A graph is a mathematical structure used to model pairwise relations between objects. It
consists of two primary components: vertices (also called nodes) and edges (also called
links).

Graph traversal is the process of visiting all the vertices (nodes) in a graph in a
systematic way. It involves exploring or traversing each vertex in a graph by following
the edges that connect the vertices. There are two common methods for graph
traversal: Breadth-First Search (BFS) and Depth-First Search (DFS).

Types of Graphs

There are various graph algorithms that you will look at in this article, but before
that, let's look at some basic terms that capture the fundamental variations between
graphs.

Order: Order defines the total number of vertices present in the graph.

Size: Size defines the number of edges present in the graph.

Self-loop: It is the edges that are connected from a vertex to itself.

Isolated vertex: It is the vertex that is not connected to any other vertices in the graph.

Vertex degree: It is defined as the number of edges incident to a vertex in a graph.

Weighted graph: A graph in which each edge has a value or weight associated with it.

Unweighted graph: A graph whose edges have no values or weights.

Directed graph: A graph in which each edge has a direction and can be traversed only one way.

Undirected graph: A graph whose edges have no direction and can be traversed in both ways.
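The terms above (order, size, vertex degree, isolated vertex) can be computed directly from an adjacency-list representation of a graph. The following is a small sketch; the variable names and the example graph are illustrative, not taken from the notes.

```python
# An undirected, unweighted graph stored as an adjacency list:
# a dict mapping each vertex to the list of its neighbours.
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D'],
    'C': ['A'],
    'D': ['B'],
    'E': [],          # an isolated vertex: connected to nothing
}

order = len(graph)                                      # number of vertices
size = sum(len(nbrs) for nbrs in graph.values()) // 2   # each undirected edge is counted twice
degree = {v: len(nbrs) for v, nbrs in graph.items()}    # edges incident to each vertex

print(order)        # 5
print(size)         # 3
print(degree['A'])  # 2
print(degree['E'])  # 0 -> 'E' is isolated
```

Note that for a directed graph the division by two would be dropped, since each directed edge is stored only once.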


Let's now carry forward the main discussion and learn about different types of graph
algorithms.

Breadth-First Search

Traversing or searching is one of the most common operations performed on graphs. In
breadth-first search (BFS), you start at a particular vertex, and the algorithm visits
all the neighbours at the current depth before moving on to the vertices at the next
level. Unlike trees, graphs may contain cycles, so the same vertex can be reached again
along a different path. Thus, in BFS, you need to keep track of the vertices you have
already visited. To impose the level-by-level order, you use a queue data structure,
which follows a First-In, First-Out (FIFO) approach. To understand this, see the image
given below.

Algorithm

1. Start by putting any one vertex of the graph at the back of the queue.

2. Remove the front item of the queue and add it to the list of visited nodes.

3. Add all adjacent vertices of the removed node that have not been visited yet to
the back of the queue.

4. Keep repeating steps 2 and 3 until the queue is empty.

Pseudocode

1. Set all nodes to "not visited";

2. q = new Queue();

3. q.enqueue(initial node);

4. while ( q != empty ) do

5. {

6. x = q.dequeue();

7. if ( x has not been visited )

8. {

9. visited[x] = true; // Visit node x !

10.

11. for ( every edge (x, y) /* we are using all edges ! */ )

12. if ( y has not been visited )

13. q.enqueue(y); // Use the edge (x,y) !!!

14. }

15. }
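The pseudocode above translates almost line for line into Python. The example graph below is illustrative, not part of the notes; `collections.deque` plays the role of the FIFO queue.

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first search following the pseudocode above:
    a FIFO queue holds discovered vertices; each is visited once."""
    visited = set()
    order = []
    q = deque([start])          # q.enqueue(initial node)
    while q:                    # while ( q != empty )
        x = q.popleft()         # x = q.dequeue()
        if x not in visited:    # if x has not been visited
            visited.add(x)
            order.append(x)     # visit node x
            for y in graph[x]:  # for every edge (x, y)
                if y not in visited:
                    q.append(y)
    return order

# A small example graph as an adjacency list (illustrative data).
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B'],
    'F': ['C'],
}
print(bfs(graph, 'A'))  # ['A', 'B', 'C', 'D', 'E', 'F']
```

Note how the vertices come out level by level: A first, then its neighbours B and C, then the vertices two edges away.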

Complexity: O(V+E), where V is the number of vertices and E is the number of edges.

Applications

The BFS algorithm has various applications. For example, it is used to determine

the shortest path (in an unweighted graph) and a minimum spanning tree. It is also used
in web crawlers to create web page indexes, to power search features on social media
networks, and to find neighbouring peers in peer-to-peer networks such as BitTorrent.

Depth-first search
In depth-first search (DFS), you start from a particular vertex and explore as far as
possible along each branch before backtracking. In DFS, it is essential to keep track
of the visited nodes, and for this, you use a stack data structure.

Algorithm

1. Start by putting one of the vertices of the graph on top of the stack.

2. Pop the top item of the stack and add it to the visited vertex list.

3. Push all the adjacent nodes of that vertex that have not been visited yet onto
the top of the stack.

4. Keep repeating steps 2 and 3 until the stack becomes empty.

Pseudocode

1. DFS(G,v) ( v is the vertex where the search starts )

2. Stack S := {}; ( start with an empty stack )

3. for each vertex u, set visited[u] := false;

4. push S, v;

5. while (S is not empty) do

6. u := pop S;

7. if (not visited[u]) then

8. visited[u] := true;

9. for each unvisited neighbour w of u


10. push S, w;

11. end if

12. end while

13. END DFS()
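The pseudocode above can be sketched in Python as follows; the explicit stack replaces recursion, and the example graph is illustrative (the same one used for BFS above would give a different visiting order here):

```python
def dfs(graph, start):
    """Iterative depth-first search matching the pseudocode above:
    an explicit stack (a Python list) replaces the recursion."""
    visited = set()
    order = []
    stack = [start]               # push S, v
    while stack:                  # while S is not empty
        u = stack.pop()           # u := pop S
        if u not in visited:      # if not visited[u]
            visited.add(u)
            order.append(u)
            for w in graph[u]:    # for each unvisited neighbour w of u
                if w not in visited:
                    stack.append(w)
    return order

# A small example graph as an adjacency list (illustrative data).
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B'],
    'F': ['C'],
}
print(dfs(graph, 'A'))  # ['A', 'C', 'F', 'B', 'E', 'D']
```

Because the stack is Last-In, First-Out, the most recently pushed neighbour (C) is explored fully before the traversal backtracks to B: here the search dives A → C → F before visiting B's subtree.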

Applications

DFS finds its application in finding paths between two vertices and in detecting cycles.
Topological sorting can also be done easily using the DFS algorithm. DFS is also used for
solving puzzles that have only one solution, such as mazes.
