
Algorithm Final Project Sample

This document outlines a project on Algorithm Analysis and Design focusing on minimum spanning trees using Kruskal's and Prim's algorithms. It presents a theoretical framework, implementation details, and a comparative analysis of the algorithms in terms of time and space complexity. The project aims to develop a tour guide scheduling system for Saudi Arabia that optimizes travel routes between tourist attractions.


Imam Abdulrahman Bin Faisal University

College of Computer Science & Information Technology

Department of Computer Science

Algorithm Analysis and Design Project


Academic Year 2023 - 2024

Instructor: Mrs. Lubna Tahlawi

Student Name Student ID

Fatimah Ali AlMutawa 2210002793

Dhuha Nasser Al Abahussain 2210003014

Fatimah Ali Albustani 2210002841

Joud Tofiq Saleh 2210040023

Submission date: December 10, 2023


Table of Contents
Introduction
Definition of the problem
Pseudocode of the Selected Algorithms
How could the chosen algorithm help solve the problem?
Literature review of the chosen algorithms
Comparison between chosen algorithms theoretically in terms of time and space complexity of the best, average, and worst case
Theoretical question
Kruskal's algorithm
Prim's algorithm
Kruskal's algorithm vs. Prim's algorithm
Data set used
Our Experimental Setup
Implementation Setup
Experimental Results
Kruskal's Algorithm
Prim's Algorithm
Computational complexity of each algorithm
T(n) and order of growth concerning the dataset size
Comparison between Kruskal's, Prim's, and BFS Algorithms' running time and space complexity values
Diagrams that show the running times and order of growth of each algorithm
Comparison of algorithms in terms of time and/or space complexity
Conclusion
References
Appendix

Table of Tables
Table 1: Comparison between Kruskal's algorithm and Prim's algorithm
Table 2: Machine's characteristics
Table 3: Kruskal algorithm's running time for small input
Table 4: Kruskal algorithm's small input's experimental average and theoretical estimate comparison
Table 5: Kruskal algorithm's running time for large input
Table 6: Kruskal algorithm's large input's experimental average and theoretical estimate comparison
Table 7: Prim's algorithm best case running time for small input
Table 8: Prim's algorithm's small input's experimental average and theoretical estimate best case comparison
Table 9: Prim's algorithm best case running time for large input
Table 10: Prim's algorithm's large input's experimental average and theoretical estimate best case comparison
Table 11: Prim's algorithm average case running time for small input
Table 12: Prim's algorithm's small input's experimental average and theoretical estimate average case comparison
Table 13: Prim's algorithm average case running time for large input
Table 14: Prim's algorithm's large input's experimental average and theoretical estimate average case comparison
Table 15: Prim's algorithm worst case running time for small input
Table 16: Prim's algorithm's small input's experimental average and theoretical estimate worst case comparison
Table 17: Prim's algorithm worst case running time for large input
Table 18: Prim's algorithm's large input's experimental average and theoretical estimate worst case comparison
Table 19: BFS Algorithm's Best/Average/Worst cases for small input
Table 20: BFS Algorithm's Best/Average/Worst cases for large input
Table 21: Kruskal's and Prim's Algorithms' Experimental Average and BFS Algorithm's Theoretical Estimate in Best, Average, and Worst cases for small and large input
Table 22: Kruskal's, Prim's, and BFS's Algorithms' Space Complexity
Table 23: Comparison of Kruskal's, Prim's, and BFS's Algorithms' Time Complexity, Space Complexity, and Theoretical Estimate

Table of Figures
Figure 1: Experimental Vs. Theoretical Results for Kruskal's Algorithm (Best, Average, and Worst Case) on small input
Figure 2: Experimental Vs. Theoretical Results for Kruskal's Algorithm (Best, Average, and Worst Case) on large input
Figure 3: Experimental Vs. Theoretical Results for Prim's Algorithm (Best Case) on small input
Figure 4: Experimental Vs. Theoretical Results for Prim's Algorithm (Best Case) on large input
Figure 5: Prim's algorithm's small input's experimental average and theoretical estimate average case comparison
Figure 6: Experimental Vs. Theoretical Results for Prim's Algorithm (Average Case) on large input
Figure 7: Experimental Vs. Theoretical Results for Prim's Algorithm (Worst Case) on small input
Figure 8: Experimental Vs. Theoretical Results for Prim's Algorithm (Worst Case) on large input
Figure 10: Algorithms' Best Case using Small Input
Figure 11: Algorithms' Best Case using Large Input
Figure 12: Algorithms' Average Case using Small Input
Figure 13: Algorithms' Average Case using Large Input
Figure 14: Algorithms' Worst Case using Small Input
Figure 15: Algorithms' Worst Case using Large Input
Figure 16: BFS Algorithm's Best, Average, and Worst Case using Small Input
Figure 17: BFS Algorithm's Best, Average, and Worst Case using Large Input
Figure 18: Kruskal's, Prim's, and BFS's Algorithms' Space Complexity using Small Input
Figure 19: Kruskal's, Prim's, and BFS's Algorithms' Space Complexity using Large Input

Introduction
The minimum spanning tree (MST) is defined as a subset of the edges of
a connected, edge-weighted, undirected graph that connects all the vertices
together without any cycles and with the minimum possible total edge weight.
The MST has numerous diverse applications in network design, such as
electrical circuits, TV cables, highway roads, setting up telephone wires in a
neighbourhood, and finding the minimum route in a GIS system. In addition,
there are several indirect applications: max-bottleneck paths, reducing data
storage when sequencing amino acids in a protein, and image registration with
Rényi entropy.

This research will discuss the two main approaches to building minimum
spanning trees: Kruskal’s algorithm and Prim’s algorithm. Both algorithms
construct a spanning tree whose chosen edges have the least possible total
weight.

Section I: Theoretical

1. Definition of the problem
Following Vision 2030, our application is a Saudi tour guide scheduling
system in which each visiting tourist can obtain a cost-effective schedule
covering the tourist attractions and culturally significant archaeological sites
of their choice, along with the shortest routes between them. Our system
assists tourists in Saudi Arabia and makes their stay comfortable by providing
an efficient plan that minimises the time needed to travel between their
preferred attractions. In sum, this system reduces the costs of transportation,
the time spent travelling, and the time required to plan activities.

This report will demonstrate the system using the minimum spanning
tree. Each tourist attraction represents a vertex, and the distances between
attractions represent edge weights. We take the chosen tourist attractions into
consideration, calculate the distances (as edges) between the sites, and find
the minimum-total-distance network that connects all the desired destinations.

2. Pseudocode of the Selected Algorithms.

2.1 Kruskal’s algorithm:

MST-KRUSKAL(G, w)
1. A = ∅
2. for each vertex v ∈ G.V
3. MAKE-SET(v)
4. create a single list of the edges in G.E
5. sort the list of edges into monotonically increasing order by weight w
6. for each edge (u, v) taken from the sorted list in order
7. if FIND-SET(u) ≠ FIND-SET(v)
8. A = A ∪ {(u, v)}
9. UNION(u, v)
10. return A
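As an illustrative sketch of how this pseudocode can be realized in Java (the language used in our implementation section), the code below uses a simple array-based disjoint set with path halving; the Edge class and method names are our own choices, not the exact project code.

```java
import java.util.*;

class KruskalSketch {
    // An undirected weighted edge (u, v) with weight w.
    static class Edge {
        final int u, v, w;
        Edge(int u, int v, int w) { this.u = u; this.v = v; this.w = w; }
    }

    // FIND-SET on an array-based disjoint set, with path halving.
    static int find(int[] parent, int x) {
        while (parent[x] != x) {
            parent[x] = parent[parent[x]];
            x = parent[x];
        }
        return x;
    }

    // Returns the MST edge set A of an n-vertex connected graph.
    static List<Edge> mst(int n, List<Edge> edges) {
        List<Edge> sorted = new ArrayList<>(edges);
        sorted.sort(Comparator.comparingInt(e -> e.w)); // line 5: sort by weight
        int[] parent = new int[n];
        for (int v = 0; v < n; v++) parent[v] = v;      // lines 2-3: MAKE-SET
        List<Edge> a = new ArrayList<>();               // line 1: A = empty set
        for (Edge e : sorted) {                         // line 6
            int ru = find(parent, e.u), rv = find(parent, e.v);
            if (ru != rv) {                             // line 7: edge is safe (no cycle)
                a.add(e);                               // line 8: A = A ∪ {(u, v)}
                parent[ru] = rv;                        // line 9: UNION
            }
        }
        return a;                                       // line 10
    }
}
```

Union by rank could be added to match the full CLRS disjoint-set forest; path halving alone is enough to keep this sketch near-linear in practice.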

2.2 Prim’s Algorithm:

MST-PRIM(G, w, r)
1. for each vertex u ∈ G.V
2. u.key = ∞
3. u.π = NIL
4. r.key = 0
5. Q = Ø
6. for each vertex u ∈ G.V
7. INSERT(Q, u)
8. while Q ≠ Ø
9. u = EXTRACT-MIN(Q) // add u to the tree
10. for each vertex v in G.Adj[u] // update keys of u’s non-tree neighbors
11. if v ∈ Q and w(u, v) < v.key
12. v.π = u
13. v.key = w(u, v)
14. DECREASE-KEY(Q, v, w(u, v))
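A corresponding Java sketch is shown below. Since java.util.PriorityQueue has no DECREASE-KEY operation, this illustrative version substitutes the standard lazy-deletion idiom: instead of inserting every vertex up front (lines 6-7), it seeds the queue with the root, re-inserts a vertex whenever its key improves, and skips stale queue entries; everything else follows the pseudocode.

```java
import java.util.*;

class PrimSketch {
    // adj.get(u) holds {v, w} pairs for each edge (u, v) of weight w.
    // Returns the total MST weight, growing the tree from root r.
    static int mstWeight(List<List<int[]>> adj, int r) {
        int n = adj.size();
        int[] key = new int[n];
        boolean[] inTree = new boolean[n];
        Arrays.fill(key, Integer.MAX_VALUE);   // lines 1-3: keys start at infinity
        key[r] = 0;                            // line 4
        // Queue entries are {key, vertex}; re-insertion replaces DECREASE-KEY.
        PriorityQueue<int[]> q = new PriorityQueue<>(Comparator.comparingInt(e -> e[0]));
        q.add(new int[]{0, r});
        int total = 0;
        while (!q.isEmpty()) {                 // line 8
            int[] top = q.poll();              // line 9: EXTRACT-MIN
            int u = top[1];
            if (inTree[u]) continue;           // stale entry, skip it
            inTree[u] = true;                  // add u to the tree
            total += top[0];
            for (int[] e : adj.get(u)) {       // lines 10-14: update neighbor keys
                int v = e[0], w = e[1];
                if (!inTree[v] && w < key[v]) {
                    key[v] = w;                // line 13
                    q.add(new int[]{w, v});    // stand-in for DECREASE-KEY
                }
            }
        }
        return total;
    }
}
```

Tracking each vertex's π predecessor (lines 3 and 12) would recover the tree edges themselves; the sketch keeps only the total weight for brevity.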

3. How could the chosen algorithm help solve the problem?
Kruskal’s algorithm and Prim’s algorithm are the most frequently employed
greedy algorithms for finding minimum spanning trees. They both operate on
connected, weighted, undirected graphs. Prim’s algorithm begins with any vertex;
the MST is then expanded by including the lowest-weight edge from a vertex inside
the MST to a vertex outside it. Kruskal’s method begins with an empty collection of
edges and sorts all the edges in ascending order of weight. It then iterates through
the sorted edges, adding each edge to the MST as long as the edge does not form a
cycle, stopping once V − 1 edges have been selected.

Saudi Arabia’s new focus on flourishing and thriving as an international
destination for tourism has driven a fundamental change in the tourism sector.
Tour guide scheduling systems make efficient use of Kruskal’s and Prim’s algorithms
in several ways. The minimum spanning trees built by these two algorithms
naturally model the tourist destinations and the distances between them as
vertices and edges, respectively. Kruskal’s and Prim’s algorithms cover all the
vertices (attractions) in the graph while maintaining the minimum total weight of
the connecting edges (roads between cities). Thus, these two algorithms greatly
assist in reaching our goal of connecting all the desired destinations at the least
total cost. (Stanford University, n.d.)
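To make this vertices-and-edges mapping concrete, here is a small illustrative sketch; the site names and distances below are hypothetical placeholders, not measured data, and each row of roads is an edge {u, v, distance in km} between the sites at indices u and v.

```java
import java.util.*;

class TourGraphSketch {
    // Returns a copy of the road list sorted by distance,
    // which is the first step of Kruskal's algorithm.
    static int[][] sortByDistance(int[][] roads) {
        int[][] sorted = roads.clone();          // shallow copy of the row array
        Arrays.sort(sorted, Comparator.comparingInt(r -> r[2]));
        return sorted;
    }

    public static void main(String[] args) {
        // Hypothetical attractions as vertices.
        String[] sites = {"Masmak Fortress", "Diriyah", "AlUla", "Jeddah Corniche"};
        // Hypothetical roads as weighted edges: {u, v, distance in km}.
        int[][] roads = {{0, 1, 25}, {0, 2, 880}, {2, 3, 680}, {0, 3, 950}};
        for (int[] r : sortByDistance(roads))
            System.out.println(sites[r[0]] + " - " + sites[r[1]] + ": " + r[2] + " km");
    }
}
```

Either MST algorithm can then consume this edge list (Kruskal) or its adjacency-list form (Prim) unchanged.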

4. Literature review of the chosen algorithms
According to Dian Sano’s research paper titled “ Modeling Prim's
Algorithm for Tour Agencies' Minimum Traveling Paths to Increase
Profitability”, Prim’s algorithm can be efficiently used in tourism agencies
around the world. Dian Sano puts a great emphasis on how this algorithm can
be used to save costs, fuel, and drive business to success. That is by using
Prim’s algorithm to find the best travelling paths that make up the shortest tour
trip. This paper uses 24 tourism sites in a region of Indonesia, Greater Malang,
as the input. This input was then converted to a two-dimensional adjacency
matrix, to be able to plug it into the algorithm. Furthermore, different tourism
agencies could alter this input according to their tourism sites. Finally, Dian
Sano concluded that Prim’s algorithm is the best algorithm to use in finding the
shortest total tour trip for tourism agencies globally. (Dian Sano et al., 2020)

The research paper “MINIMUM SPANNING TREE ROUTE FOR


MAJOR TOURIST CENTERS IN THE BRONG AHAFO REGION OF
GHANA” by Gakutsah and Fracis focuses on enhancing tourism in the Brong
Ahafo Region of Ghana to inspire vitality in rural areas, improve its
employment rate, and develop its communities’ architecture. They represented
the 11 most prominent tourist sites in Brong Ahafo in a Minimum Spanning
Tree, using the sites as vertices and the distances between them (in kilometres)
as edges. The researchers utilised Prim’s algorithm to find the minimum total
distance that connects the attractions, and concluded that distance to be 360 km.
(Gakutsah and Fracis, 2011)

5. Comparison between chosen algorithms theoretically in terms of
time and space complexity of the best, average, and worst case.

Theoretical question
1. Kruskal’s algorithm

Kruskal’s algorithm begins by sorting the edges in ascending order. Then,
starting from the minimum-weight edge, it identifies whether each edge is a
“safe edge” or not. A safe edge is an edge that does not form a cycle. We check
this property by keeping the vertices already connected in the MST in disjoint
sets and verifying, every time we add an edge, that its two endpoints belong to
different sets. Kruskal’s algorithm is classified as a greedy algorithm because at
every step it adds the edge with the least possible weight to the MST. The
running time of Kruskal’s algorithm for a graph G = (V, E) depends on the
specific implementation of the disjoint-set data structure.

Best Case:
The best case of Kruskal’s algorithm occurs when the graph is sparse and the edges
are already sorted in ascending order. The running time is O(E log V).

Worst Case and Average Case:
The worst and average cases occur when the graph is dense and the edges are sorted
in descending order. Because sorting the edges dominates the running time, the time
complexity for both is the same, O(E log E) = O(E log V).

2. Prim’s algorithm

Prim’s algorithm is a greedy algorithm that generates a minimum spanning
tree. A prominent property of Prim’s algorithm is that the chosen edges always
form a single tree. The algorithm begins from an arbitrary root vertex and grows
the tree until it covers all the other vertices. At every step, it adds a “light edge”
that connects the tree to an isolated vertex, checking that the edge is a “safe
edge” before adding it. Prim’s algorithm is considered greedy because at every
step it adds the edge that contributes the least amount of weight to the tree.

Best Case:
The best case for Prim’s algorithm occurs when a Fibonacci heap is used. The time
complexity is then O(E + V log V).

Average Case:
The average case occurs when a binary heap is used. The time complexity is
O(E log V).

Worst Case:
The worst case occurs when a dense graph represented by an adjacency matrix is
used. The time complexity is O(V^2).

3. Kruskal’s algorithm vs. Prim’s algorithm:

Main Objective
  Kruskal’s: Find the Minimum Spanning Tree (MST) of a connected graph
  Prim’s: Find the Minimum Spanning Tree (MST) of a connected graph

Decision made based on
  Kruskal’s: Selecting the minimum-weight edge that does not create a cycle
  Prim’s: Selecting the minimum key-value vertex from the set of vertices not yet in the MST

Basic Concept
  Kruskal’s: Greedy approach
  Prim’s: Greedy approach

Decision Sequence Production
  Kruskal’s: Sort the edges in non-decreasing order of weight and add them to the MST if they do not create a cycle
  Prim’s: Gradually add vertices to the MST while maintaining connectivity and minimizing the total weight

Approach Used
  Kruskal’s: Sort the edges and add them to the MST based on weight
  Prim’s: Start with an empty MST and add vertices one by one

Often used in
  Kruskal’s: Network design, clustering, image segmentation, etc.
  Prim’s: Network design, clustering, image segmentation, etc.

Disadvantages
  Kruskal’s: Can be slower for extremely large graphs
  Prim’s: Can be slower for dense graphs compared to Kruskal’s algorithm

Table 1: Comparison between Kruskal’s algorithm and Prim’s algorithm

Section II: Implementation

1. Data set used.
We generated random values for our graphs and heaps using the Java class
java.util.Random and the method Math.random().

For Kruskal’s algorithm’s best case, we wrote Java code that generates a sparse
graph; similarly, the worst case’s Java code generates a dense graph. For Prim’s
algorithm, the best case uses a complete graph, the average case a binary heap,
and the worst case a dense graph.

Similarly, we generated Java code whose output is a random Fibonacci heap, with
edges sorted in ascending order, for Prim’s algorithm’s best case. We used similar
code to generate a random binary heap for Prim’s algorithm’s average case.
Finally, we generated a dense graph, which is converted into a binary heap, for
Prim’s worst case.
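A sketch of the kind of dense-graph generator described above is shown below; the method name, the fixed seed, and the weight range are illustrative choices, not the project's exact code, though it relies on the same java.util.Random class.

```java
import java.util.*;

class RandomGraphSketch {
    // Generates a random complete (dense) undirected graph on n vertices as an
    // adjacency matrix with integer weights in [1, maxW]; 0 marks the diagonal.
    // A sparse graph could instead keep each edge only with some fixed probability.
    static int[][] denseGraph(int n, int maxW, long seed) {
        Random rnd = new Random(seed);      // seeded here for reproducible runs
        int[][] w = new int[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                int weight = 1 + rnd.nextInt(maxW);
                w[i][j] = weight;
                w[j][i] = weight;           // undirected: keep the matrix symmetric
            }
        }
        return w;
    }
}
```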

Our Experimental Setup
1. What kind of machine did you use?

Characteristics Machine
Operating System Windows 11
Processor Intel Core i7
RAM 16 GB
System type 64-bit
Speed of the machine 2.8 GHz
Table 2: Machine's characteristics

2. What timing mechanism?

The code uses the System.nanoTime() method to measure the elapsed time
in nanoseconds. The measured time is then converted to microseconds, a
more convenient scale for reporting and comparing the execution times of
the code.
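A sketch of this timing mechanism is below; the method names are ours rather than the project's exact code, but the System.nanoTime() call and the nanosecond-to-microsecond conversion match the description above.

```java
class TimingSketch {
    // Measures one run of a task with System.nanoTime() and converts
    // the elapsed time from nanoseconds to microseconds.
    static long timeMicros(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return (System.nanoTime() - start) / 1_000;
    }

    // Average over several repeated runs, as described in question 3 below.
    static double averageMicros(Runnable task, int repeats) {
        long total = 0;
        for (int i = 0; i < repeats; i++) total += timeMicros(task);
        return (double) total / repeats;
    }
}
```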

3. How many times did you repeat each experiment?


We repeated each experiment 20-60 times to minimise the impact of
outliers on the measured running time and obtain more accurate results.
4. What times are reported?
The times reported are the average of the obtained running times of the
experiment.
5. How did you select the inputs?
starting from 5 to 20 by adding 5 to the input size each time for small
inputs and starting from 44 to 60 by adding 5 to the input size for larger
inputs to get the most accurate graph which shows how the running time
is affected by the input size.

6. Did You Use the Same Inputs for the Two Algorithms?

For maximum accuracy, we used identical inputs for every execution of
both algorithms. This minimized any potential discrepancies and ensured
consistent and comparable results.

Note: We chose our largest input size to be 60 to represent Saudi Arabia’s 60
most prominent cities.

Experimental Results:
I) Kruskal’s Algorithm:

Note: Kruskal’s best, average, and worst cases all run in the same time
complexity of O(E log V). Because of the identical running time, we used only
two tables to represent the small and large inputs.

1. Best\Average\Worst Case:
a. Running time with small input:

Input size 5 10 15 20
Time 1 (µs) 467 544 559 566
Time 2 (µs) 746 702 509 1277
Time 3 (µs) 682 528 616 716
Time 4 (µs) 707 1314 607 1179
Time 5 (µs) 867 1239 580 842
Time 6 (µs) 439 531 494 1469
Time 7 (µs) 690 596 506 1109
Time 8 (µs) 995 1327 604 860
Time 9 (µs) 458 1195 870 600
Time 10 (µs) 482 535 747 880
Time 11 (µs) 483 616 539 747
Time 12 (µs) 767 466 779 863
Time 13 (µs) 767 693 729 851
Time 14 (µs) 465 614 501 1547
Time 15 (µs) 554 598 758 666
Time 16 (µs) 456 634 642 1086
Time 17 (µs) 837 532 694 619

Time 18 (µs) 693 1128 537 576
Time 19 (µs) 534 886 688 590
Time 20 (µs) 469 713 705 808
Experimental Average 627.9 679.55 633.2 892.55
Theoretical estimate 2.7958 9 16.4652 24.7195
Experimental/theory 224.5868 75.5055 38.4568 36.1071
Table 3: Kruskal algorithm's running time for small input

Input size  Experimental average  Theoretical estimate  Experimental/theory
5           627.9                 2.7958                224.5868
10          679.55                9                     75.5055
15          633.2                 16.4652               38.4568
20          892.55                24.7195               36.1071
Table 4: Kruskal algorithm's small input's experimental average and theoretical estimate comparison


Figure 1:Experimental Vs. Theoretical Results for Kruskal's Algorithm (Best, Average, and Worst Case) on small input

2. Best\Average\Worst Case:

b. Running time with large input:

Input size 45 50 55 60
Time 1 (µs) 1738 1159 1408 1263
Time 2 (µs) 1045 1057 1664 2383
Time 3 (µs) 1025 1531 1163 1450
Time 4 (µs) 1078 1033 2237 1603
Time 5 (µs) 1322 1089 1648 1447
Time 6 (µs) 978 1586 2524 1447
Time 7 (µs) 1009 1446 1664 1479
Time 8 (µs) 1106 1000 1356 1492
Time 9 (µs) 1154 1331 1397 1262
Time 10 (µs) 1578 1025 1351 1420
Time 11 (µs) 1666 1542 1017 1367
Time 12 (µs) 1069 1184 2561 1683
Time 13 (µs) 949 1126 1146 1920
Time 14 (µs) 1156 1330 1147 1476
Time 15 (µs) 836 1422 1556 1093
Time 16 (µs) 1175 990 1681 1580
Time 17 (µs) 1096 1083 1189 2137
Time 18 (µs) 919 1050 1137 1705
Time 19 (µs) 960 2010 1478 1679
Time 20 (µs) 1205 2144 1574 2230
Experimental Average 1153.2 1306.9 1415.6 1605.8
Theoretical estimate 72.7413 83.2495 93.9795 104.9109

Experimental/theory 15.8534 15.6985 15.0628 15.3063
Table 5:Kruskal algorithm's running time for large input

Input size  Experimental average  Theoretical estimate  Experimental/theory
45          1153.2                72.7413               15.8534
50          1306.9                83.2495               15.6985
55          1415.6                93.9795               15.0628
60          1605.8                104.9109              15.3063
Table 6: Kruskal algorithm's large input's experimental average and theoretical estimate comparison


Figure 2:Experimental Vs. Theoretical Results for Kruskal's Algorithm (Best, Average, and Worst Case) on large input

II) Prim’s Algorithm:

1. Best Case:
a. Running time for the best case with small input:

Input size 5 10 15 20
Time 1 (µs) 728 737 623 919
Time 2 (µs) 1183 740 773 842
Time 3 (µs) 1104 708 623 840
Time 4 (µs) 858 783 612 724
Time 5 (µs) 615 635 765 1017
Time 6 (µs) 646 713 728 905
Time 7 (µs) 721 883 2424 800
Time 8 (µs) 803 948 1175 874
Time 9 (µs) 582 572 1959 907
Time 10 (µs) 526 690 866 696
Time 11 (µs) 2995 896 722 813
Time 12 (µs) 1021 935 981 1464
Time 13 (µs) 729 588 616 825
Time 14 (µs) 909 583 819 748
Time 15 (µs) 705 798 2662 876
Time 16 (µs) 1281 729 994 772
Time 17 (µs) 719 1443 951 790
Time 18 (µs) 764 741 2361 834
Time 19 (µs) 829 840 901 16405
Time 20 (µs) 814 712 587 15463
Experimental Average 926.6 783.7 1107.1 2375.7

Theoretical estimate 2.7958 9 16.4652 24.7195
Experimental/theory 331.4257 87.078 67.239 96.106
Table 7:Prim's algorithm best case running time for small input

Input size  Experimental average  Theoretical estimate  Experimental/theory
5           926.6                 2.7958                331.42571
10          783.7                 9                     87.07778
15          1107.1                16.4652               67.2388
20          2375.7                24.7195               96.10631
Table 8: Prim's algorithm's small input's experimental average and theoretical estimate best case comparison


Figure 3: Experimental Vs. Theoretical Results for Prim's Algorithm (Best Case) on small input

2. Best case

b. Running time for the best case with large input:

Input size 45 50 55 60
Time 1 (µs) 1656 1650 2391 2071
Time 2 (µs) 1367 1590 1905 4272
Time 3 (µs) 1338 1936 2007 2607
Time 4 (µs) 1722 3223 1746 5512
Time 5 (µs) 1362 1864 2303 4595
Time 6 (µs) 1224 1730 3319 2269
Time 7 (µs) 1474 1771 2283 2314
Time 8 (µs) 2261 1726 2133 2261
Time 9 (µs) 1408 1604 1875 2413
Time 10 (µs) 1884 1536 1826 2328
Time 11 (µs) 2171 1980 1999 2037
Time 12 (µs) 1907 1705 1835 2358
Time 13 (µs) 1351 2094 2121 2117
Time 14 (µs) 1480 1720 1860 2439
Time 15 (µs) 1694 3637 1666 2185
Time 16 (µs) 1411 1547 2255 2591
Time 17 (µs) 4484 1383 1795 2183
Time 18 (µs) 1228 1668 1874 2573
Time 19 (µs) 1303 2236 2152 2217
Time 20 (µs) 1625 1955 1771 3052
Experimental Average 1717.5 1927.8 2055.8 2719.7
Theoretical estimate 72.7413 83.2495 93.9795 104.9109

Experimental/theory 23.611071 23.156 21.875 25.924
Table 9: Prim's algorithm best case running time for large input

Input size  Experimental average  Theoretical estimate  Experimental/theory
45          1717.5                72.7413               23.61107
50          1927.8                83.2495               23.15689
55          2055.8                93.9795               21.8749
60          2719.7                104.9109              25.9239
Table 10: Prim's algorithm's large input's experimental average and theoretical estimate best case comparison


Figure 4:Experimental Vs. Theoretical Results for Prim's Algorithm (Best Case) on large input

3. Average Case:
a. Running time for the average case with small input:

Input size 5 10 15 20
Time 1 (µs) 881 1162 705 1304
Time 2 (µs) 1075 1018 1171 1381
Time 3 (µs) 1330 805 1153 833
Time 4 (µs) 764 1202 1330 1073
Time 5 (µs) 771 1993 845 1028
Time 6 (µs) 2031 1199 907 995
Time 7 (µs) 11550 929 1091 1262
Time 8 (µs) 708 2542 867 848
Time 9 (µs) 1104 886 1229 967
Time 10 (µs) 782 2522 1123 1013
Time 11 (µs) 822 1698 911 1064
Time 12 (µs) 772 890 934 843
Time 13 (µs) 909 946 1675 913
Time 14 (µs) 1082 901 882 968
Time 15 (µs) 884 932 1535 956
Time 16 (µs) 841 1317 1029 942
Time 17 (µs) 790 1044 1421 1350
Time 18 (µs) 1170 1093 1474 998
Time 19 (µs) 795 870 892 918
Time 20 (µs) 749 958 808 999
Experimental Average 1490.5 1245.4 1099.1 1032.8
Theoretical estimate 2.7958 9 16.4652 24.7195
Experimental/theory 533.12111 138.37 66.753 41.779
Table 11: Prim's algorithm average case running time for small input

Input size  Experimental average  Theoretical estimate  Experimental/theory
5           1490.5                2.7958                533.12111
10          1245.4                9                     138.37
15          1099.1                16.4652               66.753
20          1032.8                24.7195               41.779
Table 12: Prim's algorithm's small input's experimental average and theoretical estimate average case comparison


Figure 5: Prim's algorithm's small input's experimental average and theoretical estimate average case comparison

b. Running time for the average case with large input:

Input size 45 50 55 60
Time 1 (µs) 2017 2428 2524 3076
Time 2 (µs) 2373 2163 2927 2997
Time 3 (µs) 1992 2236 2778 3414
Time 4 (µs) 2122 3020 2823 2878
Time 5 (µs) 2701 2102 2307 2863
Time 6 (µs) 1453 2049 2258 3639
Time 7 (µs) 2407 2240 2395 2804
Time 8 (µs) 2769 3238 3538 2717
Time 9 (µs) 2567 2074 2363 2830
Time 10 (µs) 3131 2386 2675 3468
Time 11 (µs) 1641 1910 4128 2948
Time 12 (µs) 2378 2075 2337 2694
Time 13 (µs) 1702 2541 3669 2586
Time 14 (µs) 2064 1917 2534 2667
Time 15 (µs) 2457 2097 2390 3019
Time 16 (µs) 1902 2145 2315 2974
Time 17 (µs) 1692 3509 2810 2796
Time 18 (µs) 1699 2127 3151 2897
Time 19 (µs) 1904 1956 2454 3069
Time 20 (µs) 2419 3802 2567 2644
Experimental Average 2169.5 2400.8 2747.2 2949
Theoretical estimate 72.7413 83.2495 93.9795 104.9109
Experimental/theory 29.8248725 28.838 29.231 28.11
Table 13: Prim's algorithm average case running time for large input

Input size  Experimental average  Theoretical estimate  Experimental/theory
45          2169.5                72.7413               29.8248725
50          2400.8                83.2495               28.838
55          2747.2                93.9795               29.231
60          2949                  104.9109              28.11
Table 14: Prim's algorithm's large input's experimental average and theoretical estimate average case comparison


Figure 6: Experimental Vs. Theoretical Results for Prim's Algorithm (Average Case) on large input

3. Worst Case:
a. Running time for the worst case with small input:

Input size 5 10 15 20
Time 1 (µs) 474 1341 566 858

Time 2 (µs) 444 652 497 639


Time 3 (µs) 474 659 668 678

Time 4 (µs) 728 700 602 759

Time 5 (µs) 525 650 631 613


Time 6 (µs) 572 534 555 619
Time 7 (µs) 564 544 564 833

Time 8 (µs) 1406 726 538 651


Time 9 (µs) 468 624 769 668

Time 10 (µs) 462 527 791 622

Time 11 (µs) 972 596 595 607

Time 12 (µs) 639 860 550 922

Time 13 (µs) 594 638 559 862

Time 14 (µs) 570 551 548 650

Time 15 (µs) 482 552 584 1152

Time 16 (µs) 522 668 699 1387

Time 17 (µs) 584 597 711 763


Time 18 (µs) 551 614 742 871
Time 19 (µs) 599 532 562 746

Time 20 (µs) 557 519 650 603

Experimental Average 609.35 654.2 619.05 775.15

Theoretical estimate 25 100 225 400


Experimental/theory 24.374 6.542 2.7513 1.9379
Table 15: Prim's algorithm worst case running time for small input

Input size   Experimental average   Theoretical estimate   Experimental/theory
5            609.35                 25                     24.374
10           654.2                  100                    6.542
15           619.05                 225                    2.7513
20           775.15                 400                    1.9379

Table 16: Prim's algorithm's small input's experimental average and theoretical estimate worst case comparison


Figure 7: Experimental Vs. Theoretical Results for Prim's Algorithm (Worst Case) on small input

b. Running time for the worst case with large input:

Input size 45 50 55 60

Time 1 (µs) 2067 2699 2518 3153


Time 2 (µs) 1811 2263 2441 2568

Time 3 (µs) 1678 2457 2587 2703

Time 4 (µs) 1616 3336 2207 5162

Time 5 (µs) 1579 2180 2343 2500


Time 6 (µs) 2131 2492 3425 2849

Time 7 (µs) 1541 2098 2294 3435


Time 8 (µs) 2430 1896 2489 4272

Time 9 (µs) 1625 1884 2403 3573

Time 10 (µs) 1732 3874 2705 4281

Time 11 (µs) 2089 3107 2412 2683

Time 12 (µs) 1960 1910 2823 2725

Time 13 (µs) 2391 2422 3587 2806

Time 14 (µs) 1578 2577 2729 3725


Time 15 (µs) 1590 2522 2173 2667

Time 16 (µs) 1751 2291 2308 3990

Time 17 (µs) 2160 2335 3306 3221


Time 18 (µs) 1747 2340 2493 2894
Time 19 (µs) 2613 2049 2516 3766

Time 20 (µs) 1921 2199 2150 2373

Experimental Average 1900.5 2446.6 2595.5 3267.3


Theoretical estimate 2025 2500 3025 3600

Experimental/theory 0.93851852 0.97862 0.858 0.9076

Table 17: Prim's algorithm worst case running time for large input

Input size   Experimental average   Theoretical estimate   Experimental/theory
45           1900.5                 2025                   0.93851852
50           2446.6                 2500                   0.97862
55           2595.5                 3025                   0.858
60           3267.3                 3600                   0.9076

Table 18: Prim's algorithm's large input's experimental average and theoretical estimate worst case comparison


Figure 8: Experimental Vs. Theoretical Results for Prim's Algorithm (Worst Case) on large input

Section III: Analysis of the selected algorithms that were implemented.

1. Find the computational complexity of each algorithm (analyzing lines of code).

MST-KRUSKAL(G, w)
1. A = Ø // O(1)
2. for each vertex v ∈ G.V // O(V)
3. MAKE-SET(v) // O(1) (amortized time complexity)
4. create a single list of the edges in G.E // O(V + E)
5. sort the list of edges into monotonically increasing order by weight w // O(E log E) = O(E log V)
6. for each edge (u, v) taken from the sorted list in order // O(E)
7. if FIND-SET(u) ≠ FIND-SET(v) // O(log V) (amortized time complexity)
8. A = A ∪ {(u, v)} // O(1)
9. UNION(u, v) // O(log V) (amortized time complexity)
10. return A // O(1)

Total time complexity:

O(1) + O(V) + O(V + E) + O(E log E) + O(E) · O(log V) + O(1)

Simplifying the expression (note that log E = O(log V), since E ≤ V²):

O(E log V)
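For reference, the FIND-SET and UNION calls annotated above come from a disjoint-set (union-find) structure. A minimal sketch with path compression and union by rank is given below; the class and method names are illustrative, not taken from the report's code.

```java
// Hypothetical minimal disjoint-set (union-find) with path compression and
// union by rank -- the structure whose FIND-SET/UNION calls give Kruskal's
// loop its near-constant amortized cost per operation.
class DisjointSet {
    private final int[] parent, rank;

    DisjointSet(int n) {
        parent = new int[n];
        rank = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i; // MAKE-SET for every vertex
    }

    int find(int x) {                 // FIND-SET with path compression
        if (parent[x] != x) parent[x] = find(parent[x]);
        return parent[x];
    }

    boolean union(int x, int y) {     // UNION by rank; false if already joined
        int rx = find(x), ry = find(y);
        if (rx == ry) return false;
        if (rank[rx] < rank[ry]) { int t = rx; rx = ry; ry = t; }
        parent[ry] = rx;
        if (rank[rx] == rank[ry]) rank[rx]++;
        return true;
    }
}
```

With this structure, the sort dominates, which is why the total simplifies to O(E log V).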
------------------------

MST-PRIM(G, w, r)
1. for each vertex u ∈ G.V // O(V)
2. u.key = ∞ // O(1)
3. u.π = NIL // O(1)
4. r.key = 0 // O(1)
5. Q = Ø // O(1)
6. for each vertex u ∈ G.V // O(V)
7. INSERT(Q, u) // O(log V)
8. while Q ≠ Ø // O(V)
9. u = EXTRACT-MIN(Q) // O(log V)
10. for each vertex v in G.Adj[u] // O(E)
11. if v ∈ Q and w(u, v) < v.key // O(1)
12. v.π = u // O(1)
13. v.key = w(u, v) // O(1)
14. DECREASE-KEY(Q, v, w(u, v)) // O(log V)

Total time complexity:

O(V) + O(V log V) (inserting all vertices into Q) + O(V log V) (V EXTRACT-MIN calls)
+ O(E log V) (at most one DECREASE-KEY per edge)

Simplifying the expression:

O(V log V + E log V) = O(E log V)
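This bound assumes Q is a binary min-heap. As a sketch (not the report's adjacency-matrix implementation), MST-PRIM can be realized with Java's PriorityQueue, using lazy deletion in place of DECREASE-KEY, which preserves the O(E log V) bound. The adjacency-list input format and all names below are illustrative assumptions.

```java
import java.util.*;

// Sketch of MST-PRIM with a binary-heap priority queue. The graph is an
// adjacency list: adj[u] holds {v, w(u,v)} pairs (an assumed input format).
class PrimHeap {
    static int mstWeight(List<int[]>[] adj, int r) {
        int n = adj.length;
        boolean[] inMst = new boolean[n];
        // heap entries are {key, vertex}; lazy deletion replaces DECREASE-KEY
        PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> a[0] - b[0]);
        pq.add(new int[] {0, r});
        int total = 0;
        while (!pq.isEmpty()) {
            int[] top = pq.poll();            // EXTRACT-MIN, O(log V)
            int u = top[1];
            if (inMst[u]) continue;           // stale entry, skip it
            inMst[u] = true;
            total += top[0];
            for (int[] e : adj[u])            // each edge relaxed once overall
                if (!inMst[e[0]]) pq.add(new int[] {e[1], e[0]});
        }
        return total;
    }
}
```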

2. Find T(n) and order of growth concerning the dataset size.

Breadth-First Search algorithm:

Best case running time: O(V + E)
Thus, T(n) = V + E
Input: Sparse graph with a small number of edges relative to vertices.

Average case running time: O(V + E)
Thus, T(n) = V + E
Input: Random graphs or graphs with moderate density.

Worst case running time: O(V + E)
Thus, T(n) = V + E
Input: Dense graph with a large number of edges relative to vertices.

Note: like many algorithms, the Breadth-First Search algorithm's measured running time depends on the structure of its input. This report demonstrates the worst case using a very dense graph.
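As a sketch of where O(V + E) comes from: each vertex is enqueued at most once and each adjacency list is scanned once. The code below is illustrative only, not the code timed in this report, and its names are assumptions.

```java
import java.util.*;

// Minimal adjacency-list BFS. Every vertex is dequeued once (O(V)) and
// every adjacency list is scanned once (O(E)), giving O(V + E) in total.
class Bfs {
    static int[] order(List<Integer>[] adj, int s) {
        int n = adj.length;
        boolean[] seen = new boolean[n];
        int[] visit = new int[n];
        int k = 0;
        Deque<Integer> q = new ArrayDeque<>();
        seen[s] = true;
        q.add(s);
        while (!q.isEmpty()) {
            int u = q.poll();
            visit[k++] = u;                  // each vertex dequeued once
            for (int v : adj[u])             // each edge scanned once
                if (!seen[v]) { seen[v] = true; q.add(v); }
        }
        return Arrays.copyOf(visit, k);      // visit order of reachable vertices
    }
}
```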

1. Best/Average/Worst

a. Small input:

Input size                   5           10          15          20
Theoretical estimate         9           19          29          39
Theoretical estimate (µs)    9 × 10^-6   19 × 10^-6  29 × 10^-6  39 × 10^-6

Table 19: BFS algorithm's best/average/worst cases for small input

b. Large input:

Input size                   45           50           55            60
Theoretical estimate         89           99           109           119
Theoretical estimate (µs)    89 × 10^-6   99 × 10^-6   109 × 10^-6   119 × 10^-6

Table 20: BFS algorithm's best/average/worst cases for large input

3. In addition to the asymptotic analysis (theoretical bounds), you should
also find the actual running time of the algorithms on the same machine
and using the same programming language. Since the actual running time
depends on the input sequence, use several random input sequences to
produce an average running time for each value of n tested and report the
execution time in milliseconds.

Notes:
1) For small inputs, Prim's algorithm's measured worst-case time can fall below its best-case time; at such small input sizes the measurements are too noisy to be accurate.
2) For the Breadth-First Search algorithm we use the theoretical estimate: after computing the time complexity, we divide it by the machine speed, which is 2.8 GHz.
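The conversion described in note 2 can be sketched as follows. This helper is illustrative (the class and method names are not from the report): it treats the operation count T(n) as CPU cycles and divides by the quoted 2.8 GHz clock to obtain microseconds.

```java
// Converts an operation count T(n) to an estimated time in microseconds,
// under the report's stated assumption of one operation per cycle at 2.8 GHz.
class Estimate {
    static double micros(double operations) {
        double hz = 2.8e9;                  // assumed machine speed: 2.8 GHz
        return operations / hz * 1e6;       // seconds -> microseconds
    }
}
```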

Comparison table:

                          Kruskal’s algorithm      Prim’s algorithm         Breadth First Algorithm
Case           Size       (experimental average,   (experimental average,   (theoretical estimate,
                          milliseconds)            milliseconds)            time complexity / GHz)
Best case      Small (5)  0.6279                   0.9266                   3.214
               Large (60) 1.6058                   2.7197                   42.5
Average case   Small (5)  0.6279                   1.4905                   3.214
               Large (60) 1.6058                   2.949                    42.5
Worst case     Small (5)  0.6279                   0.60935                  3.214
               Large (60) 1.6058                   3.2673                   42.5

Table 21: Kruskal's and Prim's algorithms' experimental averages and the BFS algorithm's theoretical estimate in the best, average, and worst cases for small and large input

             Kruskal’s algorithm         Prim’s algorithm            Breadth First Algorithm
Size         (space complexity, bytes)   (space complexity, bytes)   (space complexity, bytes)
Small (5)    5                           5                           5
Large (60)   60                          60                          60

Table 22: Kruskal's, Prim's, and BFS algorithms' space complexity

4. Draw a diagram that shows the running times and order of growth of
each algorithm.

Note: All graphs' x-axes represent the input size, and the y-axes represent the time. All graphs' time axes are given in microseconds.

1. Best case graph using small input:

Figure 9: Algorithms' Best Case using Small Input

2. Best case graph using large input:

Figure 10: Algorithms' Best Case using Large Input

3. Average case graph using small input:

Figure 11: Algorithms' Average Case using Small Input

4. Average case graph using large input:

Figure 12: Algorithms' Average Case using Large Input

5. Worst case graph using small input:

Figure 13: Algorithms' Worst Case using Small Input

6. Worst case graph using large input:

Figure 14: Algorithms' Worst Case using Large Input

Since the Breadth-First Search algorithm's time is extremely small compared with Prim's and Kruskal's algorithms, the graphs below show the Breadth-First Search algorithm on its own scale. Breadth-First Search has the same time complexity in its best, average, and worst cases, so only two diagrams are needed, one for small and one for large input.

Figure 15: BFS Algorithm's Best, Average, and Worst Case using Small Input

Figure 16: BFS Algorithm's Best, Average, and Worst Case using Large Input

5. Compare your algorithms with any of the existing approaches in terms of
time and/or space complexity.

In terms of time complexity, Prim's algorithm and Kruskal's algorithm share the same asymptotic bound, O(E log V). In practice, Prim's algorithm tends to perform better on dense graphs (and can be improved to O(E + V log V) with a Fibonacci heap), while Kruskal's algorithm is often faster on sparse graphs, provided an efficient sorting algorithm is used.

In terms of space complexity, all three algorithms have the same space complexity of O(V), since they all store per-vertex information. Overall, the choice between Prim's algorithm, Kruskal's algorithm, and the BFS algorithm depends on the specific requirements and characteristics of the problem at hand. Prim's algorithm and Kruskal's algorithm are specifically designed for finding minimum spanning trees, while BFS is a general-purpose graph traversal algorithm.

Figure 17: Kruskal's, Prim's, and BFS's Algorithms' Space Complexity using Small Input

Figure 18: Kruskal's, Prim's, and BFS's Algorithms' Space Complexity using Large Input

Prim's algorithm, Kruskal's algorithm, and Breadth-First Search (BFS) have different applications and complexities. The table below compares the time and space complexity of these algorithms:

Algorithm              Time Complexity   Space Complexity   Theoretical Estimate
Prim's Algorithm       O(E log V)        O(V)               O(E log V)
Kruskal's Algorithm    O(E log V)        O(V)               O(E log V)
Breadth-First Search   O(V + E)          O(V)               O(V + E)

Table 23: Comparison of Kruskal's, Prim's, and BFS algorithms' time complexity, space complexity, and theoretical estimate

Conclusion

This report explores two algorithms used to find the best path a tourist can take
to visit their preferred list of tourist attractions. Our problem is demonstrated using the
minimum spanning tree, in which we implemented two widely used algorithms:
Prim's approach and Kruskal's approach. Each algorithm tackles the problem
differently, utilizing its own distinct method of solving, code, and type of input. They
also possess their own advantages and disadvantages, the most prominent feature
being their time complexity using the input’s best, average, and worst cases. This
document analyses the problem, the algorithms’ solution of the problem, and assesses
each algorithm both theoretically and experimentally. Theoretically, we examined the
definition of the problem and pseudocodes for both algorithms are provided. In the
experimental result section, we showcased the input data, and offered a comparison
between the theoretical and experimental time complexities. Furthermore, the impact
of input size on these complexities is analysed. For Kruskal's approach, the best, average, and worst cases were the same, and the theoretical estimate was smaller than the experimental result for both large and small input sizes. In Prim's approach, the best and average cases gave the same result, with the theoretical estimate smaller than the experimental result for both large and small input sizes. In Prim's worst case, the theoretical estimate was smaller than the experimental result for small inputs, but larger than the experimental result for large inputs. Through analysis and experimentation, we came to understand how these algorithms operate, and on that basis we determined that Kruskal's algorithm is the more efficient of the two for the minimum spanning tree problem.

References

Gakutsah, F. (2011). Minimum spanning tree route for major tourist centers in the (Doctoral dissertation, Kwame Nkrumah University of Science and Technology).

GeeksforGeeks. (2023, August 4). Prim's algorithm for minimum spanning tree (MST). GeeksforGeeks. https://www.geeksforgeeks.org/prims-minimum-spanning-tree-mst-greedy-algo-5/

Sano, A. V. D., Nindito, H., Madyatmadja, E. D., & Sianipar, C. P. (2020, August). Modeling Prim's algorithm for tour agencies' minimum traveling paths to increase profitability. In 2020 International Conference on Information Management and Technology (ICIMTech) (pp. 699-703). IEEE.

Appendix

Codes used for the algorithms and input:


Kruskal's algorithm's Java code:
// Java program for Kruskal's algorithm

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class Main {

    // Defines edge structure
    static class Edge {
        int src, dest, weight;

        public Edge(int src, int dest, int weight) {
            this.src = src;
            this.dest = dest;
            this.weight = weight;
        }
    }

    // Defines subset element structure
    static class Subset {
        int parent, rank;

        public Subset(int parent, int rank) {
            this.parent = parent;
            this.rank = rank;
        }
    }

    // Starting point of program execution
    public static void main(String[] args) {
        int V = 4;
        List<Edge> graphEdges = new ArrayList<Edge>(
            List.of(new Edge(0, 1, 10), new Edge(0, 2, 6),
                    new Edge(0, 3, 5), new Edge(1, 3, 15),
                    new Edge(2, 3, 4)));

        // Sort the edges in non-decreasing order
        // (increasing with repetition allowed)
        graphEdges.sort(new Comparator<Edge>() {
            @Override public int compare(Edge o1, Edge o2) {
                return o1.weight - o2.weight;
            }
        });

        kruskals(V, graphEdges);
    }

    // Function to find the MST
    private static void kruskals(int V, List<Edge> edges) {
        int j = 0;
        int noOfEdges = 0;

        // Allocate memory for creating V subsets
        Subset subsets[] = new Subset[V];

        // Allocate memory for results
        Edge results[] = new Edge[V];

        // Create V subsets with single elements
        for (int i = 0; i < V; i++) {
            subsets[i] = new Subset(i, 0);
        }

        // Number of edges to be taken is equal to V-1
        while (noOfEdges < V - 1) {
            // Pick the smallest edge and increment
            // the index for the next iteration
            Edge nextEdge = edges.get(j);
            int x = findRoot(subsets, nextEdge.src);
            int y = findRoot(subsets, nextEdge.dest);

            // If including this edge doesn't cause a cycle,
            // include it in the result and increment the
            // index of the result for the next edge
            if (x != y) {
                results[noOfEdges] = nextEdge;
                union(subsets, x, y);
                noOfEdges++;
            }
            j++;
        }

        // Print the contents of results[] to display the built MST
        System.out.println("Following are the edges of the constructed MST:");
        int minCost = 0;
        for (int i = 0; i < noOfEdges; i++) {
            System.out.println(results[i].src + " -- "
                               + results[i].dest + " == "
                               + results[i].weight);
            minCost += results[i].weight;
        }
        System.out.println("Total cost of MST: " + minCost);
    }

    // Function to unite two disjoint sets
    private static void union(Subset[] subsets, int x, int y) {
        int rootX = findRoot(subsets, x);
        int rootY = findRoot(subsets, y);

        if (subsets[rootY].rank < subsets[rootX].rank) {
            subsets[rootY].parent = rootX;
        } else if (subsets[rootX].rank < subsets[rootY].rank) {
            subsets[rootX].parent = rootY;
        } else {
            subsets[rootY].parent = rootX;
            subsets[rootX].rank++;
        }
    }

    // Function to find the parent of a set
    private static int findRoot(Subset[] subsets, int i) {
        if (subsets[i].parent == i)
            return subsets[i].parent;
        subsets[i].parent = findRoot(subsets, subsets[i].parent);
        return subsets[i].parent;
    }
}
Code used to generate a dense graph, representing Kruskal's algorithm's worst-case input:
import java.util.ArrayList;
import java.util.List;

public class Main {

    public static void main(String[] args) {
        int numVertices = 5;
        List<Edge> graphEdges = generateDenseGraph(numVertices);

        System.out.println("List<Edge> graphEdges = new ArrayList<Edge>(");
        System.out.print("    List.of(");
        for (Edge edge : graphEdges) {
            System.out.printf("new Edge(%d, %d, %d), ",
                              edge.source, edge.destination, edge.weight);
        }
        System.out.println("));");
    }

    private static List<Edge> generateDenseGraph(int numVertices) {
        List<Edge> edges = new ArrayList<>();

        // Generating a dense graph (complete graph)
        for (int i = 0; i < numVertices; i++) {
            for (int j = i + 1; j < numVertices; j++) {
                // Assigning random weights for demonstration
                int weight = (int) (Math.random() * 10) + 1;
                edges.add(new Edge(i, j, weight));
                edges.add(new Edge(j, i, weight)); // Assuming an undirected graph
            }
        }
        return edges;
    }
}

class Edge {
    int source, destination, weight;

    Edge(int source, int destination, int weight) {
        this.source = source;
        this.destination = destination;
        this.weight = weight;
    }
}
Code used to generate a sparse graph, with the edges sorted in ascending order, representing Kruskal's algorithm's best-case input:
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

public class Main {

    public static void main(String[] args) {
        int numVertices = 5;
        List<Edge> graphEdges = generateSparseGraph(numVertices);

        // Sort the edges in ascending order based on their weights
        Collections.sort(graphEdges, Comparator.comparingInt(edge -> edge.weight));

        System.out.println("List<Edge> graphEdges = new ArrayList<Edge>(");
        System.out.print("    List.of(");
        for (Edge edge : graphEdges) {
            System.out.printf("new Edge(%d, %d, %d), ",
                              edge.source, edge.destination, edge.weight);
        }
        System.out.println("));");
    }

    private static List<Edge> generateSparseGraph(int numVertices) {
        List<Edge> edges = new ArrayList<>();
        Random random = new Random();

        // Generating a sparse graph with random values
        for (int i = 0; i < numVertices - 1; i++) {
            for (int j = i + 1; j < numVertices; j++) {
                int weight = random.nextInt(10) + 1; // Random weight between 1 and 10
                edges.add(new Edge(i, j, weight));
            }
        }
        return edges;
    }
}

class Edge {
    int source, destination, weight;

    Edge(int source, int destination, int weight) {
        this.source = source;
        this.destination = destination;
        this.weight = weight;
    }
}
Prim’s algorithm’s Java code:
// A Java program for Prim's Minimum Spanning Tree (MST)
// algorithm. The program is for adjacency matrix
// representation of the graph

import java.io.*;
import java.lang.*;
import java.util.*;

class MST {
    // Number of vertices in the graph
    private static final int V = 5;

    // A utility function to find the vertex with minimum key
    // value, from the set of vertices not yet included in MST
    int minKey(int key[], Boolean mstSet[]) {
        // Initialize min value
        int min = Integer.MAX_VALUE, min_index = -1;

        for (int v = 0; v < V; v++)
            if (mstSet[v] == false && key[v] < min) {
                min = key[v];
                min_index = v;
            }

        return min_index;
    }

    // A utility function to print the constructed MST stored in parent[]
    void printMST(int parent[], int graph[][]) {
        System.out.println("Edge \tWeight");
        for (int i = 1; i < V; i++)
            System.out.println(parent[i] + " - " + i + "\t"
                               + graph[i][parent[i]]);
    }

    // Function to construct and print MST for a graph
    // represented using adjacency matrix representation
    void primMST(int graph[][]) {
        // Array to store constructed MST
        int parent[] = new int[V];

        // Key values used to pick minimum weight edge in cut
        int key[] = new int[V];

        // To represent set of vertices included in MST
        Boolean mstSet[] = new Boolean[V];

        // Initialize all keys as INFINITE
        for (int i = 0; i < V; i++) {
            key[i] = Integer.MAX_VALUE;
            mstSet[i] = false;
        }

        // Always include the 1st vertex in MST.
        // Make key 0 so that this vertex is picked as the first vertex
        key[0] = 0;

        // First node is always root of MST
        parent[0] = -1;

        // The MST will have V vertices
        for (int count = 0; count < V - 1; count++) {
            // Pick the minimum key vertex from the set of
            // vertices not yet included in MST
            int u = minKey(key, mstSet);

            // Add the picked vertex to the MST set
            mstSet[u] = true;

            // Update key value and parent index of the
            // adjacent vertices of the picked vertex.
            // Consider only those vertices which are not
            // yet included in MST
            for (int v = 0; v < V; v++)
                // graph[u][v] is non-zero only for adjacent
                // vertices of u; mstSet[v] is false for
                // vertices not yet included in MST. Update
                // the key only if graph[u][v] is smaller
                // than key[v]
                if (graph[u][v] != 0 && mstSet[v] == false
                    && graph[u][v] < key[v]) {
                    parent[v] = u;
                    key[v] = graph[u][v];
                }
        }

        // Print the constructed MST
        printMST(parent, graph);
    }

    public static void main(String[] args) {
        MST t = new MST();
        int graph[][] = new int[][] { { 0, 2, 0, 6, 0 },
                                      { 2, 0, 3, 8, 5 },
                                      { 0, 3, 0, 0, 7 },
                                      { 6, 8, 0, 0, 9 },
                                      { 0, 5, 7, 9, 0 } };

        // Print the solution
        t.primMST(graph);
    }
}
Code that generates a dense graph, where each vertex is connected to every other vertex, representing Prim's algorithm's worst-case input:
import java.util.Arrays;

class PGraph {

    public void Prim(int G[][], int V) {
        int INF = 9999999;
        int no_edge; // number of edges

        // create an array to track selected vertices;
        // selected becomes true if selected, otherwise false
        boolean[] selected = new boolean[V];

        // set selected to false initially
        Arrays.fill(selected, false);

        // set number of edges to 0
        no_edge = 0;

        // choose the 0th vertex and make it true
        selected[0] = true;

        // print the edges and weights
        System.out.println("Edge : Weight");
        while (no_edge < V - 1) {
            int min = INF;
            int x = 0; // row number
            int y = 0; // col number

            for (int i = 0; i < V; i++) {
                if (selected[i]) {
                    for (int j = 0; j < V; j++) {
                        // not selected and there is an edge
                        if (!selected[j] && G[i][j] != 0) {
                            if (min > G[i][j]) {
                                min = G[i][j];
                                x = i;
                                y = j;
                            }
                        }
                    }
                }
            }
            System.out.println(x + " - " + y + " : " + G[x][y]);
            selected[y] = true;
            no_edge++;
        }
    }

    public static void main(String[] args) {
        PGraph g = new PGraph();

        // number of vertices in the graph
        int V = 5;

        // create a 2D array of size VxV for adjacency matrix to represent graph
        int[][] G = generateDenseGraph(V);
        g.Prim(G, V);
    }

    static int[][] generateDenseGraph(int V) {
        int[][] graphArray = new int[V][V];
        int maxWeight = 100;

        // Generate dense graph
        for (int i = 0; i < V; i++) {
            for (int j = 0; j < V; j++) {
                if (i != j) {
                    // Assign random weights to the edges
                    graphArray[i][j] = (int) (Math.random() * maxWeight) + 1;
                }
            }
        }

        // Print the generated graph
        System.out.println("Generated Graph:");
        for (int i = 0; i < V; i++) {
            for (int j = 0; j < V; j++) {
                System.out.print(graphArray[i][j] + " ");
            }
            System.out.println();
        }
        return graphArray;
    }
}
Code that generates a complete graph, representing Prim's algorithm's best-case input:
import java.util.Arrays;

class PGraph {

    public void Prim(int G[][], int V) {
        int INF = 9999999;
        int no_edge; // number of edges

        // create an array to track selected vertices;
        // selected becomes true if selected, otherwise false
        boolean[] selected = new boolean[V];

        // set selected to false initially
        Arrays.fill(selected, false);

        // set number of edges to 0
        no_edge = 0;

        // choose the 0th vertex and make it true
        selected[0] = true;

        // print the edges and weights
        System.out.println("Edge : Weight");
        while (no_edge < V - 1) {
            int min = INF;
            int x = 0; // row number
            int y = 0; // col number

            for (int i = 0; i < V; i++) {
                if (selected[i]) {
                    for (int j = 0; j < V; j++) {
                        // not selected and there is an edge
                        if (!selected[j] && G[i][j] != 0) {
                            if (min > G[i][j]) {
                                min = G[i][j];
                                x = i;
                                y = j;
                            }
                        }
                    }
                }
            }
            System.out.println(x + " - " + y + " : " + G[x][y]);
            selected[y] = true;
            no_edge++;
        }
    }

    public static void main(String[] args) {
        PGraph g = new PGraph();

        // number of vertices in the graph
        int V = 5;

        // create a 2D array of size VxV for adjacency matrix to represent graph
        int[][] G = generateBestCaseGraph(V);
        g.Prim(G, V);
    }

    static int[][] generateBestCaseGraph(int V) {
        int[][] graphArray = new int[V][V];

        // Generate unique weights for the complete graph
        int weight = 1;
        for (int i = 0; i < V; i++) {
            for (int j = i + 1; j < V; j++) {
                // Assign the weight to both the (i, j) and (j, i) edges
                graphArray[i][j] = weight;
                graphArray[j][i] = weight;
                weight++;
            }
        }

        // Print the generated graph
        System.out.println("Generated Graph:");
        for (int i = 0; i < V; i++) {
            for (int j = 0; j < V; j++) {
                System.out.print(graphArray[i][j] + " ");
            }
            System.out.println();
        }
        return graphArray;
    }
}
Java code that generates a complete graph with randomly assigned edge weights, representing Prim's algorithm's average-case input:
import java.util.Arrays;
import java.util.Random;

class PGraph {

    public void Prim(int G[][], int V) {
        int INF = 9999999;
        int no_edge; // number of edges

        // create an array to track selected vertices;
        // selected becomes true if selected, otherwise false
        boolean[] selected = new boolean[V];

        // set selected to false initially
        Arrays.fill(selected, false);

        // set number of edges to 0
        no_edge = 0;

        // choose the 0th vertex and make it true
        selected[0] = true;

        // print the edges and weights
        System.out.println("Edge : Weight");
        while (no_edge < V - 1) {
            int min = INF;
            int x = 0; // row number
            int y = 0; // col number

            for (int i = 0; i < V; i++) {
                if (selected[i]) {
                    for (int j = 0; j < V; j++) {
                        // not selected and there is an edge
                        if (!selected[j] && G[i][j] != 0) {
                            if (min > G[i][j]) {
                                min = G[i][j];
                                x = i;
                                y = j;
                            }
                        }
                    }
                }
            }
            System.out.println(x + " - " + y + " : " + G[x][y]);
            selected[y] = true;
            no_edge++;
        }
    }

    public static void main(String[] args) {
        PGraph g = new PGraph();

        // number of vertices in the graph
        int V = 5;

        // create a 2D array of size VxV for adjacency matrix to represent graph
        int[][] G = generateAverageCaseGraph(V);
        g.Prim(G, V);
    }

    static int[][] generateAverageCaseGraph(int V) {
        int[][] graphArray = new int[V][V];
        Random random = new Random();

        // Generate random weights for the edges
        for (int i = 0; i < V; i++) {
            for (int j = i + 1; j < V; j++) {
                int weight = random.nextInt(10) + 1;
                graphArray[i][j] = weight;
                graphArray[j][i] = weight;
            }
        }

        // Print the generated graph
        System.out.println("Generated Graph:");
        for (int i = 0; i < V; i++) {
            for (int j = 0; j < V; j++) {
                System.out.print(graphArray[i][j] + " ");
            }
            System.out.println();
        }
        return graphArray;
    }
}
