
DESIGN & ANALYSIS OF ALGORITHMS
ASSIGNMENT
UIET, DEPARTMENT OF INFORMATION TECHNOLOGY
PANJAB UNIVERSITY SSG-RC

SUBMITTED TO:
MR. GURPINDER SINGH

SUBMITTED BY:
SHIVAM SHUKLA
BE(IT) 5th SEM
SG-17814
COMPARISON BETWEEN QUICK & MERGE SORT

BASIS FOR COMPARISON: QUICK SORT vs MERGE SORT

Partition of elements in the array:
  Quick Sort: The array is split around a pivot in any ratio, not necessarily into two halves.
  Merge Sort: The array is split into two (nearly) equal halves of size n/2.

Worst-case complexity:
  Quick Sort: O(n^2)
  Merge Sort: O(n log n)

Works well on:
  Quick Sort: Works well on smaller arrays.
  Merge Sort: Operates fine on arrays of any size.

Speed of execution:
  Quick Sort: Faster than other sorting algorithms (e.g. selection sort) for small data sets.
  Merge Sort: Consistent speed on data sets of any size.

Additional storage space requirement:
  Quick Sort: Less (in-place).
  Merge Sort: More (not in-place).

Efficiency:
  Quick Sort: Inefficient for larger arrays.
  Merge Sort: More efficient for larger arrays.

Sorting method:
  Quick Sort: Internal.
  Merge Sort: External.

Stability:
  Quick Sort: Not stable.
  Merge Sort: Stable.

Preferred for:
  Quick Sort: Arrays.
  Merge Sort: Linked lists.

Locality of reference:
  Quick Sort: Good.
  Merge Sort: Poor.
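To make the contrast concrete, here is a minimal Python sketch of both sorts (an illustrative sketch only: the function names and the Lomuto partition choice for Quick Sort are assumptions, not part of the assignment):

def merge_sort(a):
    # Merge sort: always splits the list into two halves, then merges them.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0                 # merging needs extra storage (not in-place)
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps equal elements in order (stable)
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def quick_sort(a, lo=0, hi=None):
    # Quick sort: partitions in place around a pivot; the split ratio is arbitrary.
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        pivot = a[hi]                    # Lomuto partition, last element as pivot
        i = lo
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        quick_sort(a, lo, i - 1)         # recurse on the two (possibly unequal) parts
        quick_sort(a, i + 1, hi)
    return a

print(merge_sort([5, 2, 9, 1, 5]))       # [1, 2, 5, 5, 9]
print(quick_sort([5, 2, 9, 1, 5]))       # [1, 2, 5, 5, 9]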
BINARY SEARCH:

Binary search searches a sorted array by repeatedly dividing the search interval in half. Begin with an interval covering the whole array. If the value of the search key is less than the item in the middle of the interval, narrow the interval to the lower half; otherwise, narrow it to the upper half. Repeat until the value is found or the interval is empty. In effect, we discard half of the remaining elements after every comparison.
1. Compare x with the middle element.
2. If x matches the middle element, return the mid index.
3. Else, if x is greater than the mid element, then x can only lie in the right half subarray after the mid element, so we recur for the right half.
4. Else (x is smaller), recur for the left half.

COMPLEXITY:

Algorithm        Time Complexity (Best)    Time Complexity (Worst)
Binary Search    O(1)                      O(log n)
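A minimal iterative Python sketch of the steps above (illustrative; the function and variable names are assumptions):

def binary_search(arr, x):
    # Return the index of x in the sorted list arr, or -1 if absent.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:                      # the interval [lo, hi] is non-empty
        mid = (lo + hi) // 2
        if arr[mid] == x:                # step 2: match at the middle
            return mid
        elif arr[mid] < x:               # step 3: x can only lie in the right half
            lo = mid + 1
        else:                            # step 4: x can only lie in the left half
            hi = mid - 1
    return -1                            # interval became empty: x is not present

print(binary_search([1, 3, 5, 7, 9], 7))   # 3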


Comparison between Kruskal’s and Prim’s Algorithm
Factors: Kruskal's Algorithm vs Prim's Algorithm

Steps:
  Kruskal's:
    1. Select the shortest edge in the network.
    2. Select the next shortest edge which does not create a cycle.
    3. Repeat step 2 until all vertices have been connected.
    Kruskal's begins with a forest and merges it into a tree.
  Prim's:
    1. Select any vertex.
    2. Select the shortest edge connected to that vertex.
    3. Select the shortest edge connected to any vertex already connected.
    4. Repeat step 3 until all vertices have been connected.
    Prim's always stays a tree.

Complexity:
  Kruskal's: O(N log N) to comparison-sort the edges.
  Prim's: O(N log N) to search for the least-weight edge for every vertex.

Analysis:
  Kruskal's: Running time = O(m log n) (m = edges, n = nodes). Testing whether an edge creates a cycle can be slow unless a data structure called a "union-find" structure is used. The algorithm usually only has to check a small fraction of the edges, but in some cases (for example, if one vertex is connected to the rest of the graph by a single edge and that edge is the longest) it would have to check all the edges. This algorithm works best, of course, if the number of edges is kept to a minimum.
  Prim's: Running time = O(m + n log n) (m = edges, n = nodes). If a heap is not used, the run time will be O(n^2) instead of O(m + n log n). However, using a heap complicates the code since we are complicating the data structure; a Fibonacci heap is the best kind of heap to use, but again, it complicates the code. Unlike Kruskal's, Prim's does not need to see all of the graph at once; it can deal with it one piece at a time. It also does not need to worry whether adding an edge will create a cycle, since the algorithm deals primarily with the nodes rather than the edges.
Algorithm:

// Returns the MST by Kruskal's Algorithm
// Input: A weighted connected graph G = (V, E)
// Output: Set of edges ET comprising a MST
sort the edges E by their weights
ET ← ∅
while |ET| + 1 < |V| do
    e ← next edge in E
    if ET ∪ {e} does not have a cycle then
        ET ← ET ∪ {e}
return ET

// Returns the MST by Prim's Algorithm
// Input: A weighted connected graph G = (V, E)
// Output: Set of edges ET comprising a MST
VT ← {any vertex in G}
ET ← ∅
for i ← 1 to |V| − 1 do
    e ← minimum-weight edge (v, u) with v ∈ VT and u ∈ V − VT
    VT ← VT ∪ {u}
    ET ← ET ∪ {e}
return ET
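For illustration, a minimal Python sketch of both algorithms, assuming vertices numbered 0..n-1 and explicit weights (the function names are my own): Kruskal's uses the union-find structure mentioned above to test for cycles, and Prim's uses a binary heap.

import heapq

def kruskal(n, edges):
    # Kruskal's MST. edges is a list of (weight, u, v) triples.
    parent = list(range(n))              # union-find: parent[x] is x's parent
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):        # edges in nondecreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                     # different components, so no cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

def prim(n, adj):
    # Prim's MST with a binary heap. adj[u] is a list of (weight, v) pairs.
    visited = [False] * n
    mst = []
    heap = [(0, 0, -1)]                  # (weight, vertex, parent); start at vertex 0
    while heap:
        w, u, p = heapq.heappop(heap)
        if visited[u]:
            continue
        visited[u] = True
        if p != -1:
            mst.append((p, u, w))
        for wv, v in adj[u]:
            if not visited[v]:
                heapq.heappush(heap, (wv, v, u))
    return mst

edges = [(1, 0, 1), (3, 0, 2), (2, 1, 2)]               # (weight, u, v)
print(kruskal(3, edges))                                # [(0, 1, 1), (1, 2, 2)]
adj = [[(1, 1), (3, 2)], [(1, 0), (2, 2)], [(3, 0), (2, 1)]]
print(prim(3, adj))                                     # [(0, 1, 1), (1, 2, 2)]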
Comparison between Bellman Ford and Floyd Warshall Algorithm

Factors: Bellman Ford Algorithm vs Floyd Warshall Algorithm

Main purpose:
  Bellman Ford: an example of a single-source shortest path (SSSP) algorithm, i.e., given a source vertex it finds the shortest paths from the source to all other vertices.
  Floyd Warshall: an example of an all-pairs shortest path algorithm, meaning it computes the shortest path between every pair of nodes.

Time complexity:
  Bellman Ford: O(V·E)
  Floyd Warshall: O(V^3)

Working in distributed systems:
  Bellman Ford: can be implemented in a distributed setting (it is the basis of distance-vector routing).
  Floyd Warshall: not suited to a distributed implementation.

Negative edges:
  Bellman Ford: works with negative edge weights and can detect a negative cycle reachable from the source.
  Floyd Warshall: works with negative edge weights, provided there is no negative cycle.
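For illustration, a minimal Python sketch of both algorithms (assuming vertices numbered 0..n-1 and edges given as (u, v, w) triples; the function names are my own):

INF = float("inf")

def bellman_ford(n, edges, src):
    # Single-source shortest paths in O(V*E); returns None on a reachable negative cycle.
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):               # relax every edge V-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                # one more pass detects negative cycles
        if dist[u] + w < dist[v]:
            return None
    return dist

def floyd_warshall(n, edges):
    # All-pairs shortest paths in O(V^3).
    dist = [[INF] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
    for k in range(n):                   # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, -2)]
print(bellman_ford(3, edges, 0))         # [0, -1, 1]
print(floyd_warshall(3, edges)[0])       # [0, -1, 1]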
Comparison between 0/1 Knapsack and Fractional Knapsack

Factors: 0/1 Knapsack vs Fractional Knapsack

About:
  0/1 Knapsack: Given weights and values of n items, put these items in a knapsack of capacity W to get the maximum total value in the knapsack. In other words, given two integer arrays val[0..n-1] and wt[0..n-1], which represent the values and weights associated with n items respectively, and an integer W which represents the knapsack capacity, find the maximum-value subset of val[] such that the sum of the weights of this subset is smaller than or equal to W. You cannot break an item: either pick the complete item or don't pick it (the 0-1 property).
  Fractional Knapsack: Fractions of items can be taken rather than having to make binary (0-1) choices for each item. The Fractional Knapsack problem is solvable by a greedy strategy, whereas the 0-1 problem is not. Compute the value per unit weight vi/wi for each item. Following the greedy strategy, we take as much as possible of the item with the highest value per unit weight. If the supply of that item is exhausted and we can still carry more, we take as much as possible of the item with the next highest value per unit weight, and so on. After sorting the items by value per unit weight, the greedy algorithm runs in O(n log n) time.

Time complexity:
  0/1 Knapsack: O(nW) using dynamic programming.
  Fractional Knapsack: O(n log n).

Algorithm:

  0/1 Knapsack:
    Knapsack(n, W)
    1. for w = 0 to W
    2.     do V[0, w] ← 0
    3. for i = 1 to n
    4.     do V[i, 0] ← 0
    5.     for w = 1 to W
    6.         do if wi ≤ w and vi + V[i-1, w-wi] > V[i-1, w]
    7.             then V[i, w] ← vi + V[i-1, w-wi]
    8.             else V[i, w] ← V[i-1, w]
    return V[n, W]

  Fractional Knapsack:
    Fractional-Knapsack(Array W, Array V, int M)
    1. for i ← 1 to size(V)
    2.     do cost[i] ← V[i] / W[i]
    3. sort the items in descending order of cost[i]
    4. i ← 1
    5. while i ≤ size(V)
    6.     do if W[i] ≤ M
    7.         then M ← M − W[i]
    8.             total ← total + V[i]
    9.         else if W[i] > M          // only a fraction of item i fits
                   total ← total + V[i] * (M / W[i]); M ← 0
    10.    i ← i + 1
    return total
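For illustration, a minimal Python sketch of both approaches (the function names are assumptions; the 0/1 version follows the dynamic-programming recurrence above and the fractional version follows the greedy strategy):

def knapsack_01(values, weights, W):
    # 0/1 knapsack by dynamic programming, O(n*W) time.
    n = len(values)
    V = [[0] * (W + 1) for _ in range(n + 1)]   # V[i][w]: best value using the first i items
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            V[i][w] = V[i - 1][w]               # option 1: skip item i
            if weights[i - 1] <= w:             # option 2: take item i whole
                V[i][w] = max(V[i][w], values[i - 1] + V[i - 1][w - weights[i - 1]])
    return V[n][W]

def knapsack_fractional(values, weights, M):
    # Fractional knapsack by the greedy strategy, O(n log n) time.
    items = sorted(zip(values, weights), key=lambda p: p[0] / p[1], reverse=True)
    total = 0.0
    for v, w in items:                          # highest value per unit weight first
        if M <= 0:
            break
        take = min(w, M)                        # the whole item, or the fraction that fits
        total += v * (take / w)
        M -= take
    return total

print(knapsack_01([60, 100, 120], [10, 20, 30], 50))          # 220
print(knapsack_fractional([60, 100, 120], [10, 20, 30], 50))  # 240.0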
