Algorithm Final Project Sample
Table of Tables
Table 1: Comparison between Kruskal's algorithm and Prim's algorithm
Table 2: Machine's characteristics
Table 3: Kruskal's algorithm's running time for small input
Table 4: Kruskal's algorithm's small-input experimental average and theoretical estimate comparison
Table 5: Kruskal's algorithm's running time for large input
Table 6: Kruskal's algorithm's large-input experimental average and theoretical estimate comparison
Table 7: Prim's algorithm's best-case running time for small input
Table 8: Prim's algorithm's small-input experimental average and theoretical estimate best-case comparison
Table 9: Prim's algorithm's best-case running time for large input
Table 10: Prim's algorithm's large-input experimental average and theoretical estimate best-case comparison
Table 11: Prim's algorithm's average-case running time for small input
Table 12: Prim's algorithm's small-input experimental average and theoretical estimate average-case comparison
Table 13: Prim's algorithm's average-case running time for large input
Table 14: Prim's algorithm's large-input experimental average and theoretical estimate average-case comparison
Table 15: Prim's algorithm's worst-case running time for small input
Table 16: Prim's algorithm's small-input experimental average and theoretical estimate worst-case comparison
Table 17: Prim's algorithm's worst-case running time for large input
Table 18: Prim's algorithm's large-input experimental average and theoretical estimate worst-case comparison
Table 19: BFS algorithm's best/average/worst cases for small input
Table 20: BFS algorithm's best/average/worst cases for large input
Table 21: Kruskal's and Prim's algorithms' experimental average and BFS algorithm's theoretical estimate in best, average, and worst cases for small and large input
Table 22: Kruskal's, Prim's, and BFS algorithms' space complexity
Table 23: Comparison of Kruskal's, Prim's, and BFS algorithms' time complexity, space complexity, and theoretical estimate
Table of Figures
Figure 1: Experimental vs. theoretical results for Kruskal's algorithm (best, average, and worst case) on small input
Figure 2: Experimental vs. theoretical results for Kruskal's algorithm (best, average, and worst case) on large input
Figure 3: Experimental vs. theoretical results for Prim's algorithm (best case) on small input
Figure 4: Experimental vs. theoretical results for Prim's algorithm (best case) on large input
Figure 5: Prim's algorithm's small-input experimental average and theoretical estimate average-case comparison
Figure 6: Experimental vs. theoretical results for Prim's algorithm (average case) on large input
Figure 7: Experimental vs. theoretical results for Prim's algorithm (worst case) on small input
Figure 8: Experimental vs. theoretical results for Prim's algorithm (worst case) on large input
Figure 10: Algorithms' best case using small input
Figure 11: Algorithms' best case using large input
Figure 12: Algorithms' average case using small input
Figure 13: Algorithms' average case using large input
Figure 14: Algorithms' worst case using small input
Figure 15: Algorithms' worst case using large input
Figure 16: BFS algorithm's best, average, and worst case using small input
Figure 17: BFS algorithm's best, average, and worst case using large input
Figure 18: Kruskal's, Prim's, and BFS algorithms' space complexity using small input
Figure 19: Kruskal's, Prim's, and BFS algorithms' space complexity using large input
Introduction
The minimum spanning tree (MST) is defined as a subset of the edges of
a connected, edge-weighted undirected graph that connects all the vertices
together, without any cycles and with the minimum possible total edge weight.
The MST has numerous diverse applications, including network design: electrical
circuits, TV cables, highway roads, setting up telephone wires in a
neighbourhood, and finding the minimum route in a GIS system. In addition,
there are several indirect applications: max bottleneck paths, reducing data
storage when sequencing amino acids in a protein, and image registration with
Rényi entropy.
Section I: Theoretical
1. Definition of the problem
Following Vision 2030, our application is a Saudi tour guide scheduling
system where each visiting tourist can have a cost-effective schedule with the
tourist attractions and culturally significant archaeological sites of their choice,
along with the shortest path between them. Our system assists tourists in
Saudi Arabia and makes their stay comfortable by providing an efficient
plan that minimises the time needed to travel between their preferred
attractions. To sum up, this system reduces the cost of transportation, the
time needed to travel, and the time required to plan activities.
This report will demonstrate this system using the minimum spanning
tree. Each tourist attraction represents a vertex, and the distance between two
attractions represents an edge. We take into consideration the tourist
attractions, calculate the distances (as edge weights) between the sites, and find
the shortest path that includes all the desired destinations.
2. Pseudocode of the Selected Algorithms.
2.1 Kruskal's Algorithm:
MST-KRUSKAL(G, w)
1. A = Ø
2. for each vertex v ∈ G.V
3.     MAKE-SET(v)
4. create a single list of the edges in G.E
5. sort the list of edges into monotonically increasing order by weight w
6. for each edge (u, v) taken from the sorted list in order
7.     if FIND-SET(u) ≠ FIND-SET(v)
8.         A = A ∪ {(u, v)}
9.         UNION(u, v)
10. return A
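The pseudocode above can be sketched as runnable Java. This is a minimal illustration using a union-find with path compression, not the report's full implementation; the class name and sample graph are illustrative.

```java
import java.util.*;

// Minimal sketch of MST-KRUSKAL: sort edges by weight, then add each edge
// whose endpoints lie in different union-find sets. Assumes a connected graph.
public class KruskalSketch {
    static int[] parent;

    // FIND-SET with path compression.
    static int find(int x) {
        if (parent[x] != x) parent[x] = find(parent[x]);
        return parent[x];
    }

    // Returns the total weight of the MST; each edge is a {u, v, weight} triple.
    static int mstWeight(int vertices, int[][] edges) {
        parent = new int[vertices];
        for (int i = 0; i < vertices; i++) parent[i] = i;      // MAKE-SET
        Arrays.sort(edges, (a, b) -> Integer.compare(a[2], b[2])); // sort by weight
        int total = 0, used = 0;
        for (int[] e : edges) {
            int ru = find(e[0]), rv = find(e[1]);
            if (ru != rv) {             // FIND-SET(u) != FIND-SET(v): no cycle
                parent[ru] = rv;        // UNION
                total += e[2];
                if (++used == vertices - 1) break; // MST complete at V-1 edges
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int[][] edges = { {0,1,4}, {0,2,3}, {1,2,1}, {1,3,2}, {2,3,4} };
        System.out.println(mstWeight(4, edges)); // edges (1,2), (1,3), (0,2) -> 6
    }
}
```

The early break once V − 1 edges are chosen mirrors the observation in Section I that only the first V − 1 accepted edges matter.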
2.2 Prim’s Algorithm:
MST-PRIM(G, w, r)
1. for each vertex u ∈ G.V
2.     u.key = ∞
3.     u.π = NIL
4. r.key = 0
5. Q = Ø
6. for each vertex u ∈ G.V
7.     INSERT(Q, u)
8. while Q ≠ Ø
9.     u = EXTRACT-MIN(Q)
10.    for each vertex v in G.Adj[u]
11.        if v ∈ Q and w(u, v) < v.key
12.            v.π = u
13.            v.key = w(u, v)
14.            DECREASE-KEY(Q, v, w(u, v))
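As a hedged illustration, Prim's algorithm over an adjacency matrix (the representation used in the appendix) can be sketched as follows. The linear-scan extract-min gives the O(V^2) behaviour discussed in the worst-case analysis; the sample graph is the 5-vertex matrix from the appendix, with the class and method names being illustrative.

```java
// Minimal sketch of MST-PRIM on an adjacency matrix (0 = no edge).
// EXTRACT-MIN is a linear scan over keys, so the sketch runs in O(V^2).
// Assumes a connected graph.
public class PrimSketch {
    static int mstWeight(int[][] g) {
        int n = g.length;
        int[] key = new int[n];
        boolean[] inMst = new boolean[n];
        java.util.Arrays.fill(key, Integer.MAX_VALUE);
        key[0] = 0;                          // r.key = 0: start from vertex 0
        int total = 0;
        for (int count = 0; count < n; count++) {
            int u = -1;
            for (int v = 0; v < n; v++)      // EXTRACT-MIN by linear scan
                if (!inMst[v] && (u == -1 || key[v] < key[u])) u = v;
            inMst[u] = true;
            total += key[u];
            for (int v = 0; v < n; v++)      // DECREASE-KEY for each neighbour
                if (g[u][v] != 0 && !inMst[v] && g[u][v] < key[v]) key[v] = g[u][v];
        }
        return total;
    }

    public static void main(String[] args) {
        int[][] graph = { {0,2,0,6,0}, {2,0,3,8,5},
                          {0,3,0,0,7}, {6,8,0,0,9}, {0,5,7,9,0} };
        System.out.println(mstWeight(graph)); // MST weight 16
    }
}
```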
3. How could the chosen algorithm help solve the problem?
Kruskal’s algorithm and Prim’s algorithm are the most frequently employed
greedy algorithms for building minimum spanning trees. They both operate on
connected, weighted, undirected graphs. Prim's algorithm begins with any vertex;
the MST is then expanded by repeatedly including the lowest-weight edge that
connects a vertex inside the MST to a vertex outside it. Kruskal's algorithm
begins with an empty collection of edges and sorts all edges in ascending order
of weight. It then iterates through the sorted edges, adding each edge to the
MST as long as it does not form a cycle, until V − 1 edges have been selected.
Saudi Arabia’s new focus on becoming an international destination for tourism
has driven a fundamental change in the tourism sector. Tour guide scheduling
systems make efficient use of Kruskal’s and Prim’s algorithms in several ways.
In the minimum spanning trees used by these two algorithms, tourist destinations
and the distances between them are modelled as vertices and edges, respectively.
Kruskal’s and Prim’s algorithms cover all the vertices (attractions) in the
graph while keeping the total weight of the connecting edges (roads between
cities) to a minimum. Thus, these two algorithms greatly assist in reaching our
goal of finding the shortest path that includes all the desired
destinations. (Stanford University, n.d.)
4. Literature review of the chosen algorithms
According to the research paper by Sano et al. titled “Modeling Prim's
Algorithm for Tour Agencies' Minimum Traveling Paths to Increase
Profitability”, Prim’s algorithm can be used efficiently by tourism agencies
around the world. The authors put great emphasis on how this algorithm can
be used to save costs and fuel and drive a business to success, by using
Prim’s algorithm to find the best travelling paths that make up the shortest
tour trip. The paper uses 24 tourism sites in Greater Malang, a region of
Indonesia, as the input. This input was converted to a two-dimensional
adjacency matrix so that it could be plugged into the algorithm; different
tourism agencies could alter the input according to their own tourism sites.
Finally, the authors concluded that Prim’s algorithm is the best algorithm for
finding the shortest total tour trip for tourism agencies globally. (Sano et al., 2020)
5. Comparison between chosen algorithms theoretically in terms of
time and space complexity of the best, average, and worst case.
Theoretical question
1. Kruskal’s algorithms
Best Case:
The best case of Kruskal’s algorithm occurs when the graph is sparse and the
edges are already sorted in ascending order. The running time is O(E log V).
Average Case:
The average case also runs in O(E log V); the time is dominated by sorting the
edge list regardless of the input's structure.
Worst Case:
The worst case occurs when the graph is dense. Since log E = O(log V), the
running time remains O(E log V).
2. Prim’s algorithm
Best Case:
The best case for Prim’s algorithm occurs when a Fibonacci heap is used. The
time complexity is O(E log V).
Average Case:
The average case occurs when a binary heap is used. The time complexity is
O(E log V).
Worst Case:
The worst case occurs when a dense graph represented by an adjacency matrix
is used. The time complexity is O(V^2).
3. Kruskal’s algorithm vs. Prim’s algorithm:

 | Kruskal's algorithm | Prim's algorithm
Decision made based on | Selecting the minimum-weight edge that does not create a cycle | Selecting the minimum-key vertex from the set of vertices not yet in the MST
Approach used | Sort the edges and add them to the MST based on weight | Start with an empty MST and add vertices one by one
Disadvantages | Can be slower for extremely large graphs | Can be slower for dense graphs compared to Kruskal's algorithm

Table 1: Comparison between Kruskal's algorithm and Prim's algorithm
Section II: Implementation
1. Data set used.
We generated random values for our graphs and heaps using the Java class
java.util.Random and the method Math.random().
For Kruskal’s algorithm’s best case, we generated Java code that produces a
sparse graph whose edges are sorted in ascending order; the worst case’s Java
code generates a dense graph.
For Prim's algorithm, we generated Java code whose output is a random
complete graph for the best case (matching the Fibonacci-heap analysis), a
random binary heap for the average case, and a dense graph, which gets
converted into a binary heap, for the worst case.
Our Experimental Setup
1. What kind of machine did you use?
Characteristics | Machine
Operating system | Windows 11
Processor | Intel Core i7
RAM | 16 GB
System type | 64-bit
Speed of the machine | 2.8 GHz
Table 2: Machine's characteristics
The code uses the System.nanoTime() method to measure the elapsed time
in nanoseconds. The measured time is then converted to microseconds, which
provides a more convenient scale for analysing the execution time of the code.
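The timing approach described above can be sketched as follows; the helper name and workload are illustrative, not the report's actual benchmark code.

```java
// Hedged sketch of the measurement method: capture System.nanoTime() before
// and after the timed task, then convert nanoseconds to microseconds.
public class TimingSketch {
    // Runs the task once and returns the elapsed wall-clock time in microseconds.
    static double timeMicros(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return (System.nanoTime() - start) / 1000.0; // nanoseconds -> microseconds
    }

    public static void main(String[] args) {
        double elapsedUs = timeMicros(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i; // workload being timed
        });
        System.out.println("Elapsed: " + elapsedUs + " µs");
    }
}
```

In practice each measurement would be repeated (as in the twenty "Time" rows of the tables below) and averaged, since individual nanoTime readings are noisy.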
6. Did You Use the Same Inputs for the Two Algorithms?
Yes. Both algorithms were run on the same input sizes: 5, 10, 15, and 20
vertices for the small input, and 45, 50, 55, and 60 vertices for the large input.
Experimental Results:
I) Kruskal’s Algorithm:
Note: Kruskal’s best, average, and worst case all run in the same time
complexity of O (E log V). Because of the identical running time, we used only
two tables to represent the small and large input.
1. Best\Average\Worst Case:
a. Running time with small input:
Input size 5 10 15 20
Time 1 (µs) 467 544 559 566
Time 2 (µs) 746 702 509 1277
Time 3 (µs) 682 528 616 716
Time 4 (µs) 707 1314 607 1179
Time 5 (µs) 867 1239 580 842
Time 6 (µs) 439 531 494 1469
Time 7 (µs) 690 596 506 1109
Time 8 (µs) 995 1327 604 860
Time 9 (µs) 458 1195 870 600
Time 10 (µs) 482 535 747 880
Time 11 (µs) 483 616 539 747
Time 12 (µs) 767 466 779 863
Time 13 (µs) 767 693 729 851
Time 14 (µs) 465 614 501 1547
Time 15 (µs) 554 598 758 666
Time 16 (µs) 456 634 642 1086
Time 17 (µs) 837 532 694 619
Time 18 (µs) 693 1128 537 576
Time 19 (µs) 534 886 688 590
Time 20 (µs) 469 713 705 808
Experimental Average 627.9 679.55 633.2 892.55
Theoretical estimate 2.7958 9 16.4652 24.7195
Experimental/theory 224.5868 75.5055 38.4568 36.1071
Table 3: Kruskal algorithm's running time for small input
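The theoretical estimates in Table 3 appear to follow (n − 1) · log10(n), e.g. 4 · log10(5) ≈ 2.7958 and 9 · log10(10) = 9. This is an inferred formula, not stated explicitly in the report; the sketch below reproduces the tabulated values under that assumption.

```java
// Hedged sketch reproducing the "Theoretical estimate" row of Table 3,
// assuming the (inferred) formula (n - 1) * log10(n) for input size n.
public class EstimateSketch {
    static double estimate(int n) {
        return (n - 1) * Math.log10(n); // (V - 1) edges scaled by log of V
    }

    public static void main(String[] args) {
        for (int n : new int[]{5, 10, 15, 20}) {
            System.out.printf("n=%d theoretical estimate=%.4f%n", n, estimate(n));
        }
    }
}
```

Dividing each experimental average by this estimate yields the "Experimental/theory" row of the table.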
[Chart "Kruskal algorithm's Best\Average\Worst Case: Running time with small input" — x-axis: input size (5, 10, 15, 20); y-axis: time in µs (0–1000)]
Figure 1: Experimental Vs. Theoretical Results for Kruskal's Algorithm (Best, Average, and Worst Case) on small input
2. Best\Average\Worst Case:
Input size 45 50 55 60
Experimental/theory 15.8534 15.6985 15.0628 15.3063
Table 5:Kruskal algorithm's running time for large input
[Chart "Experimental Vs. Theoretical Results for Kruskal's Algorithm (Best, Average, and Worst Case) on large input" — x-axis: input size (45, 50, 55, 60); y-axis: time in µs (0–1800)]
Figure 2: Experimental Vs. Theoretical Results for Kruskal's Algorithm (Best, Average, and Worst Case) on large input
II) Prim’s Algorithm:
1. Best Case:
a. Running time for the best case with small input:
Input size 5 10 15 20
Time 1 (µs) 728 737 623 919
Theoretical estimate 2.7958 9 16.4652 24.7195
Experimental/theory 331.4257 87.078 67.239 96.106
Table 7:Prim's algorithm best case running time for small input
Input size | Experimental average | Theoretical estimate | Experimental/theory
10 | 783.7 | 9 | 87.0778
15 | 1107.1 | 16.4652 | 67.2388
20 | 2375.7 | 24.7195 | 96.1063
Table 8: Prim's algorithm's small input's experimental average and theoretical estimate best case comparison
[Chart "Experimental Vs. Theoretical Results for Prim's Algorithm (Best Case) on small input" — x-axis: input size (5, 10, 15, 20); y-axis: time in µs (0–2500)]
Figure 3: Experimental Vs. Theoretical Results for Prim's Algorithm (Best Case) on small input
b. Running time for the best case with large input:
Input size 45 50 55 60
Time 1 (µs) 1656 1650 2391 2071
Time 2 (µs) 1367 1590 1905 4272
Experimental/theory 23.611071 23.156 21.875 25.924
Table 9: Prim's algorithm best case running time for large input
[Chart "Experimental Vs. Theoretical Results for Prim's Algorithm (Best Case) on large input" — x-axis: input size (45, 50, 55, 60); y-axis: time in µs (0–3000)]
Figure 4: Experimental Vs. Theoretical Results for Prim's Algorithm (Best Case) on large input
3. Average Case:
a. Running time for the average case with small input:
Input size 5 10 15 20
Time 1 (µs) 881 1162 705 1304
Table 11:Prim's algorithm average case running time for small input.
Input size | Experimental average | Theoretical estimate | Experimental/theory
5 | 1490.5 | 2.7958 | 533.1211
10 | 1245.4 | 9 | 138.37
15 | 1099.1 | 16.4652 | 66.753
Table 12: Prim's algorithm's small input's experimental average and theoretical estimate average case comparison
Figure 5: Prim's algorithm's small input's experimental average and theoretical estimate average case comparison
b. Running time for the average case with large input:
Input size 45 50 55 60
Time 1 (µs) 2017 2428 2524 3076
Time 2 (µs) 2373 2163 2927 2997
Input size | Experimental average | Theoretical estimate | Experimental/theory
Table 14: Prim's algorithm's large input's experimental average and theoretical estimate average case comparison
Figure 6: Experimental Vs. Theoretical Results for Prim's Algorithm (Average Case) on large input
4. Worst Case:
a. Running time for the worst case with small input:
Input size 5 10 15 20
Time 1 (µs) 474 1341 566 858
Input size | Experimental average | Theoretical estimate | Experimental/theory
5 | 609.35 | 25 | 24.374
10 | 654.2 | 100 | 6.542
15 | 619.05 | 225 | 2.7513
Table 16: Prim's algorithm's small input's experimental average and theoretical estimate worst case comparison
Figure 7: Experimental Vs. Theoretical Results for Prim's Algorithm (Worst Case) on small input
b. Running time for the worst case with large input:
Input size 45 50 55 60
Experimental/theory 0.93851852 0.97862 0.858 0.9076
Table 17: Prim's algorithm worst case running time for large input
Input size | Experimental average | Theoretical estimate | Experimental/theory
45 | 1900.5 | 2025 | 0.93851852
Table 18: Prim's algorithm's large input's experimental average and theoretical estimate worst case comparison
Figure 8:Experimental Vs. Theoretical Results for Prim's Algorithm (Worst Case) on large input
Section III: Analysis of the Implemented Algorithms
1. Find the computational complexity of each algorithm (analyzing line of
codes).
MST-KRUSKAL(G, w)
1. A = Ø // O(1)
2. for each vertex v ∈ G.V // O(V)
3.     MAKE-SET(v) // O(1) (amortized time complexity)
4. create a single list of the edges in G.E // O(V + E)
5. sort the list of edges into monotonically increasing order by weight w // O(E log E) = O(E log V)
6. for each edge (u, v) taken from the sorted list in order // O(E)
7.     if FIND-SET(u) ≠ FIND-SET(v) // O(log V) (amortized time complexity)
8.         A = A ∪ {(u, v)} // O(1)
9.         UNION(u, v) // O(log V) (amortized time complexity)
10. return A // O(1)
Summing the lines: O(1) + O(V) + O(V + E) + O(E log E) + O(E) · O(log V) + O(1)
Simplifying the expression:
O(E log V)
------------------------
MST-PRIM(G, w, r)
1. for each vertex u ∈ G.V // O(V)
2. u.key = ∞ // O(1)
3. u.π = NIL // O(1)
4. r.key = 0 // O(1)
5. Q = Ø // O(1)
6. for each vertex u ∈ G.V // O(V)
7. INSERT(Q, u) // O(log V)
8. while Q ≠ Ø // O(V)
9. u = EXTRACT-MIN(Q) // O(log V)
10. for each vertex v in G.Adj[u] // O(E)
11. if v ∈ Q and w(u, v) < v.key // O(1)
12. v.π = u // O(1)
13. v.key = w(u, v) // O(1)
14. DECREASE-KEY(Q, v, w(u, v)) // O(log V)
Total: the V EXTRACT-MIN calls cost O(V log V) and the at most E DECREASE-KEY calls cost O(E log V), so the running time is O(V log V + E log V) = O(E log V) for a connected graph.
2. Find T(n) and order of growth concerning the dataset size.
Note: like many algorithms, the Breadth-First Search algorithm's worst case depends on its
input's data structure. This report demonstrates the worst case using a very dense graph.
1. Best/Average/Worst
a. Small input:
Input size | 5 | 10 | 15 | 20
Theoretical estimate | 9 | 10 | 29 | 39
Theoretical estimate (µs) | 9 × 10^-6 | 10 × 10^-6 | 29 × 10^-6 | 39 × 10^-6
Table 19: BFS Algorithm's Best/Average/Worst cases for small input
b. Large input:
Input size | 45 | 50 | 55 | 60
Theoretical estimate | 89 | 99 | 109 | 119
Theoretical estimate (µs) | 89 × 10^-6 | 99 × 10^-6 | 109 × 10^-6 | 119 × 10^-6
Table 20: BFS Algorithm's Best/Average/Worst cases for large input
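The report analyses BFS theoretically but includes no BFS code. As a hedged sketch (the class name, graph, and adjacency-list representation are illustrative), a minimal Java BFS looks like this:

```java
import java.util.*;

// Minimal sketch of breadth-first search over an adjacency list.
// Returns the order in which vertices are visited from the source;
// every vertex and edge is touched once, giving the O(V + E) bound.
public class BfsSketch {
    static List<Integer> bfs(List<List<Integer>> adj, int source) {
        List<Integer> order = new ArrayList<>();
        boolean[] visited = new boolean[adj.size()];
        Deque<Integer> queue = new ArrayDeque<>();
        visited[source] = true;
        queue.add(source);
        while (!queue.isEmpty()) {
            int u = queue.poll();                 // dequeue the next frontier vertex
            order.add(u);
            for (int v : adj.get(u))              // scan u's neighbours
                if (!visited[v]) { visited[v] = true; queue.add(v); }
        }
        return order;
    }

    public static void main(String[] args) {
        // Undirected edges 0-1, 0-2, 1-3: visit order from 0 is level by level.
        List<List<Integer>> adj = List.of(
            List.of(1, 2), List.of(0, 3), List.of(0), List.of(1));
        System.out.println(bfs(adj, 0)); // [0, 1, 2, 3]
    }
}
```

Because the traversal cost is the same regardless of edge weights, BFS's best, average, and worst cases coincide, which is why Tables 19 and 20 each need only one estimate row per input size.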
3. In addition to the asymptotic analysis (theoretical bounds), you should
also find the actual running time of the algorithms on the same machine
and using the same programming language. Since the actual running time
depends on the input sequence, use several random input sequences to
produce an average running time for each value of n tested and report the
execution time in milliseconds.
Notes:
1) The worst case of Prim's algorithm on small inputs appears faster than its best case; the
reason is that measurements at such small input sizes cannot be accurate.
2) For the Breadth-First Search algorithm we use the theoretical estimate: after obtaining the
time complexity, we divide it by the machine speed, which is 2.8 GHz.
Comparison table:
4. Draw a diagram that shows the running times and order of growth of
each algorithm.
Note: All graphs’ x-axes represent the input size, and all y-axes represent the time in
microseconds.
3. Average case graph using small input:
Worst case graph using small input:
Since the Breadth-First Search algorithm's running times are extremely small in comparison
with Prim's algorithm and Kruskal's algorithm, the graphs below demonstrate the Breadth-First
Search algorithm's results separately. Breadth-First Search has the same time complexity in its
best, average, and worst cases, so only two diagrams are needed, one for small and one for large input.
Figure 16: BFS Algorithm's Best, Average, and Worst Case using Small Input
Figure 17: BFS Algorithm's Best, Average, and Worst Case using Large Input
5. Compare your algorithms with any of the existing approaches in terms of
time and/or space complexity.
In terms of time complexity, both Prim's algorithm and Kruskal's algorithm have
similar complexities, with Prim's algorithm having a slightly better time complexity in
most cases. However, Kruskal's algorithm can have a better worst-case time complexity
if an efficient sorting algorithm is used.
In terms of space complexity, all three algorithms have the same space
complexity as O(V) since they all require storing information about vertices or edges.
Overall, the choice between Prim's algorithm, Kruskal's algorithm, and BFS algorithm
depends on the specific requirements and characteristics of the problem at hand. Prim's
algorithm and Kruskal's algorithm are specifically designed for finding minimum
spanning trees, while BFS is a general-purpose graph traversal algorithm.
Figure 18: Kruskal's, Prim's, and BFS's Algorithms' Space Complexity using Small Input
Figure 19: Kruskal's, Prim's, and BFS's Algorithms' Space Complexity using Large Input
Prim's algorithm, Kruskal's algorithm, and Breadth-First Search (BFS) have
different applications and complexities. Table 23 shows a comparison of the time and space
complexity of these algorithms:
Conclusion
This report explores two algorithms used to find the best path a tourist can take
to visit their preferred list of tourist attractions. Our problem is demonstrated using the
minimum spanning tree, in which we implemented two widely used algorithms:
Prim's approach and Kruskal's approach. Each algorithm tackles the problem
differently, utilizing its own distinct method of solving, code, and type of input. They
also possess their own advantages and disadvantages, the most prominent feature
being their time complexity using the input’s best, average, and worst cases. This
document analyses the problem and the algorithms' solutions to it, and assesses
each algorithm both theoretically and experimentally. Theoretically, we examined the
definition of the problem and provided pseudocode for both algorithms. In the
experimental results section, we showcased the input data and offered a comparison
between the theoretical and experimental time complexities; the impact of input size
on these complexities was also analysed. For Kruskal's approach, the best, average,
and worst cases were the same, but the theoretical estimate was smaller than the
experimental result for both large and small input sizes. In Prim's approach, the best
and average cases gave the same result, with the theoretical estimate smaller than the
experimental result for both large and small input sizes. In Prim's worst case, the
theoretical estimate was smaller than the experimental result for small inputs, but
larger than the experimental result for large inputs. Through analysis and
experimentation, we came to understand how these algorithms operate, and on that
basis we determined that Kruskal's algorithm is the more efficient choice for
constructing the minimum spanning tree.
References
GeeksforGeeks. (2023, August 4). Prim’s algorithm for minimum spanning tree (MST).
GeeksforGeeks. https://www.geeksforgeeks.org/prims-minimum-spanning-tree-mst-greedy-algo-5/
Sano, A. V. D., Nindito, H., Madyatmadja, E. D., & Sianipar, C. P. (2020, August).
Modeling Prim's Algorithm for Tour Agencies' Minimum Traveling Paths to Increase
Profitability.
Appendix
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
this.src = src;
this.dest = dest;
this.weight = weight;
{
this.parent = parent;
this.rank = rank;
int V = 4;
graphEdges.sort(new Comparator<Edge>() {
});
kruskals(V, graphEdges);
{
int j = 0;
int noOfEdges = 0;
if (x != y) {
results[noOfEdges] = nextEdge;
union(subsets, x, y);
noOfEdges++;
j++;
// built MST
System.out.println(
int minCost = 0;
+ results[i].weight);
minCost += results[i].weight;
int y)
subsets[rootY].parent = rootX;
else if (subsets[rootX].rank
< subsets[rootY].rank) {
subsets[rootX].parent = rootY;
else {
subsets[rootY].parent = rootX;
subsets[rootX].rank++;
if (subsets[i].parent == i)
return subsets[i].parent;
subsets[i].parent
= findRoot(subsets, subsets[i].parent);
return subsets[i].parent;
Code used to generate a dense graph to represent Kruskal Algorithm’s
worst case’s input:
import java.util.ArrayList;
import java.util.List;
int numVertices = 5;
System.out.print(" List.of(");
System.out.println("));");
int weight = (int) (Math.random() * 10) + 1; // Assigning random weights for demonstration
}
return edges;
class Edge {
this.source = source;
this.destination = destination;
this.weight = weight;
Code used to generate a sparse graph with the edges sorted in ascending
order to represent Kruskal Algorithm’s best case’s input:
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Random;
int numVertices = 5;
System.out.print(" List.of(");
System.out.println("));");
// Generating a sparse graph with random values
return edges;
class Edge {
this.source = source;
this.destination = destination;
this.weight = weight;
Prim’s algorithm’s Java code:
// A Java program for Prim's Minimum Spanning Tree (MST)
import java.io.*;
import java.lang.*;
import java.util.*;
class MST {
// in MST
min = key[v];
min_index = v;
return min_index;
// A utility function to print the constructed MST
// stored in parent[]
System.out.println("Edge \tWeight");
+ graph[i][parent[i]]);
// cut
key[i] = Integer.MAX_VALUE;
mstSet[i] = false;
// Always include first 1st vertex in MST.
key[0] = 0;
parent[0] = -1;
mstSet[u] = true;
// than key[v]
if (graph[u][v] != 0 && mstSet[v] == false
parent[v] = u;
key[v] = graph[u][v];
printMST(parent, graph);
{ 2, 0, 3, 8, 5 },
{ 0, 3, 0, 0, 7 },
{ 6, 8, 0, 0, 9 },
{ 0, 5, 7, 9, 0 } };
t.primMST(graph);
Code that generate a dense graph where each vertex is connected to every other
vertex and that represent prim’s Algorithm’s worst case’s input:
import java.util.Arrays;
class PGraph {
Arrays.fill(selected, false);
no_edge = 0;
selected[0] = true;
System.out.println("Edge : Weight");
for (int i = 0; i < V; i++) {
if (selected[i]) {
min = G[i][j];
x = i;
y = j;
selected[y] = true;
no_edge++;
int V = 5;
int[][] G = generateDenseGraph(V);
g.Prim(G, V);
if (i != j) {
System.out.println("Generated Graph:");
System.out.println();
} return graphArray; }}
Code that generates a complete graph, which represents Prim’s algorithm’s best
case’s input:
import java.util.Arrays;
class PGraph {
Arrays.fill(selected, false);
no_edge = 0;
selected[0] = true;
System.out.println("Edge : Weight");
for (int i = 0; i < V; i++) {
if (selected[i]) {
min = G[i][j];
x = i;
y = j;
selected[y] = true;
no_edge++;
int V = 5;
int[][] G = generateBestCaseGraph(V);
g.Prim(G, V);
int weight = 1;
graphArray[i][j] = weight;
graphArray[j][i] = weight;
weight++;
System.out.println("Generated Graph:");
System.out.println();
return graphArray;
Java code that generates a binary heap where the weights of the edges are
randomly assigned, that represent Prim’s algorithm’s average case’s input:
import java.util.Arrays;
import java.util.Random;
class PGraph {
Arrays.fill(selected, false);
no_edge = 0;
selected[0] = true;
System.out.println("Edge : Weight");
int y = 0; // col number
if (selected[i]) {
min = G[i][j];
x = i;
y = j;
selected[y] = true;
no_edge++;
int V = 5;
int[][] G = generateAverageCaseGraph(V);
g.Prim(G, V);
graphArray[i][j] = weight;
graphArray[j][i] = weight;
System.out.println("Generated Graph:");
System.out.println();
} return graphArray; }}