DAA Sanjivani

Asymptotic notation is a mathematical framework used to describe the running time performance of algorithms, categorizing them into best, average, and worst cases. The three main types of asymptotic notation are Big O (upper bound), Omega (lower bound), and Theta (both bounds), each providing insights into an algorithm's efficiency. Understanding these notations is crucial for analyzing and comparing the performance of algorithms in computer science.

Q. What is asymptotic notation? Explain Big O, Theta, and Omega Notation.

OR Explain Order Notations in detail. What is the importance of asymptotic notation?

Asymptotic analysis of an algorithm refers to defining mathematical bounds on its run-time performance.

Using asymptotic analysis, we can determine the best case, average case, and worst case behaviour of an algorithm.

Asymptotic analysis refers to computing the running time of any operation in mathematical units of computation.

Usually, the time required by an algorithm falls under three types:

• Best Case − Minimum time required for program execution.
• Average Case − Average time required for program execution.
• Worst Case − Maximum time required for program execution.

Following are the commonly used asymptotic notations to calculate the running time complexity of
an algorithm.

1. Ο Notation
2. Ω Notation
3. θ Notation

1. Big Oh Notation, Ο

• The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time.
• It measures the worst case time complexity, i.e. the longest amount of time an algorithm can possibly take to
complete.
• Big Oh notation is defined as: f(n) = O(g(n)) if there exist positive constants c and n0 such that f(n) <= c*g(n) for all n >= n0.
• For example, an^2 + bn + c is O(n^2), provided a, b, c are constants and a > 0.
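As an illustration (not from the original text), the following C snippet numerically spot-checks such a bound for f(n) = 3n^2 + 5n + 7 with g(n) = n^2; the constants c = 4 and n0 = 7 are our own choice for this example.

#include <stdio.h>

/* Numeric spot-check of a Big Oh bound: f(n) = 3n^2 + 5n + 7 against c*g(n)
   with g(n) = n^2, c = 4, n0 = 7 (constants chosen for this illustration). */
int main(void) {
    for (long long n = 7; n <= 100000; n++) {
        long long f = 3*n*n + 5*n + 7;
        long long g = n*n;
        if (f > 4*g) {
            printf("bound fails at n = %lld\n", n);
            return 1;
        }
    }
    printf("f(n) <= 4*g(n) for all tested n >= 7, consistent with f(n) = O(n^2)\n");
    return 0;
}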

2. Omega Notation, Ω

• The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time.
• It measures the best case time complexity, i.e. the least amount of time an algorithm can possibly take to complete.
• The Ω notation is defined as: f(n) = Ω(g(n)) if there exist positive constants c and n0 such that f(n) >= c*g(n) for all n >= n0.
• For example, consider f(n) = 2n^2 + 5 and g(n) = 7n.
Then f(n) = Ω(g(n)), since f(n) >= g(n) for all n >= 3 (with c = 1).

3. Theta Notation, θ

• The notation θ(n) is the formal way to express both the lower bound and the upper bound of an algorithm's
running time.
• It is defined as: f(n) = θ(g(n)) if there exist positive constants c1, c2 and n0 such that g1(n) <= f(n) <= g2(n) for all n >= n0, where g1(n) = c1*g(n) and g2(n) = c2*g(n).
• For example, consider f(n) = 2n + 8 with g1(n) = 2n and g2(n) = 3n.
Then f(n) = θ(n), since g1(n) <= f(n) <= g2(n) for all n >= 8.

::::::::: Note : Please Add Graph In Every Notation ::::::::::


Q. What do you mean by conditional asymptotic notation? Explain.

Conditional Asymptotic Notation

• Asymptotic analysis of an algorithm refers to defining mathematical bounds on its run-time performance.
• Using asymptotic analysis, we can determine the best case, average case, and worst case behaviour of an algorithm.
• Asymptotic analysis refers to computing the running time of any operation in mathematical units of computation.
• Many algorithms are easier to analyze if we impose a condition on them initially.
• Imposing such a condition when we specify an asymptotic bound is called conditional asymptotic notation.
• Consider f(n) and g(n) are two functions,

o f(n) = 2n+2
g(n) = n^2

o To determine the Big O relation,

o For n = 1,
f(n) = 4 & g(n) = 1, f(n) > g(n) and the Big O relation does not hold.

o For n = 2,
f(n) = 6 & g(n) = 4, f(n) > g(n) and the Big O relation does not hold.

o For n = 3,
f(n) = 8 & g(n) = 9, f(n) < g(n) and the Big O relation holds.

o For n = 4,
f(n) = 10 & g(n) = 16, f(n) < g(n) and the Big O relation holds.

From the above four iterations, we have found that f(n) maintains the Big O relation with g(n) when n > 2.
When n <= 2, f(n) > g(n) and it fails to maintain the Big O relation.
If such a condition occurs, then it is called conditional asymptotic notation.
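The comparison above can be tabulated with a short C program (a sketch using the same f and g):

#include <stdio.h>

/* Tabulate f(n) = 2n + 2 against g(n) = n^2 for n = 1..4, as in the text. */
int main(void) {
    for (int n = 1; n <= 4; n++) {
        int f = 2*n + 2, g = n*n;
        printf("n = %d: f(n) = %d, g(n) = %d, %s\n", n, f, g,
               f <= g ? "Big O relation holds" : "Big O relation fails");
    }
    return 0;
}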

Q. Explain Recursive algorithm with example.

Recursive Algorithm

• A recursive algorithm is an algorithm which calls itself with "smaller (or simpler)" input values, and which obtains
the result for the current input by applying simple operations to the returned value for the smaller (or simpler)
input.
• More generally if a problem can be solved utilizing solutions to smaller versions of the same problem, and the
smaller versions reduce to easily solvable cases, then one can use a recursive algorithm to solve that problem.
• For example, the elements of a recursively defined set, or the value of a recursively defined function can be
obtained by a recursive algorithm.
• If a set or a function is defined recursively, then a recursive algorithm to compute its members or
values mirrors the definition.
• Initial steps of the recursive algorithm correspond to the basis clause of the recursive definition and
they identify the basic elements.
• They are then followed by steps corresponding to the inductive clause, which reduce the computation for an
element of one generation to that of elements of the immediately preceding generation.

Advantages of Recursion

• It reduces unnecessary calling of functions.
• Through recursion one can solve some problems easily whose iterative solutions are long and complex.

Disadvantages of Recursion

• A recursive solution, though logical, is often very difficult to trace (debug and understand).
• In a recursive function we must have an "if" statement somewhere to force the function to return without the recursive call
being executed; otherwise the function will never return.
• Recursion takes a lot of stack space, though this is usually not significant when the program is small and running on a PC.
• Recursion uses more processor time.
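As an example, a recursive factorial function in C is sketched below; the base case corresponds to the basis clause of the recursive definition and the recursive call to the inductive clause.

#include <stdio.h>

/* Basis clause: 0! = 1.  Inductive clause: n! = n * (n-1)!. */
long factorial(int n) {
    if (n == 0)
        return 1;                   /* base case: forces the function to return */
    return n * factorial(n - 1);    /* recur with a smaller input */
}

int main(void) {
    printf("%ld\n", factorial(5));  /* prints 120 */
    return 0;
}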

Q. List and explain the different aspects in constructing a loop. Explain use of loops.

Use of Loops

• A loop is used for executing a block of statements repeatedly until a given condition returns false.
• To construct a loop we should be aware of three aspects:
1. The initial condition that needs to be true before the loop begins execution.
2. The invariant relation that must hold before, during, and after each iteration of the loop.
3. The condition under which the loop must terminate.

1. To Establish the Initial Condition

• We need to set the loop control variables to values which are appropriate for solving the smallest
instance of the problem in question.
• For example, suppose we want to add the elements of a given array using a loop.
• The loop variables are i, the loop control variable which is also used as the array index, and s, the current value of
the sum.
• The smallest problem in this case is the sum of 0 elements. In this case s must be 0. Thus the initial
condition is:
i <-- 0 & s <-- 0

2. To Find the Iterative Construct (invariant relation)

• For maintaining the invariant relation, consider a condition n > 0.


• Thus, in terms of the structured control constructs, the complete algorithm is:
i <-- 0
s <-- 0
while i < n do
i <-- i+1
s <-- s+ a [i]
endwhile.
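A direct C translation of this pseudocode is sketched below (C arrays are 0-indexed, so a[i] is read before i is incremented):

#include <stdio.h>

/* Sum of array elements; s holds the invariant "sum of the first i elements". */
int array_sum(int a[], int n) {
    int i = 0, s = 0;            /* initial condition: the sum of 0 elements is 0 */
    while (i < n) {
        s = s + a[i];            /* invariant maintained on every iteration */
        i = i + 1;
    }
    return s;                    /* termination: i == n */
}

int main(void) {
    int a[] = {3, 1, 4, 1, 5};
    printf("%d\n", array_sum(a, 5));  /* prints 14 */
    return 0;
}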

3. Loop Termination

• The simplest termination condition occurs when it is known in advance the number of times the loop is to be
iterated.
• In that case we can use a FOR structured programming construct.
• For example, to execute a loop 20 times, we may write:
for i <-- 1 to 20 do
---
---
endfor
• Note that it is not necessary that the initial value assigned and the final value are constants, the only requirement is
that their values should be known at the start of the loop.
• Thus we may have:
for i <-- m to n do
---
---
endfor
Q. What are the different means/techniques of improving efficiency of an algorithm?

Efficiency of Algorithm

• The design and implementation of algorithms have a profound influence on their efficiency.
• Every algorithm, when implemented must use some of the system resources to complete its task.
• The resources most relevant to the efficiency are the use of the central processor unit (CPU) time and internal
memory (RAM).
• Hence the need for designing efficient algorithms is present even today.
• There is no standard method for designing efficient algorithms.
• Despite this, there are a few generalizations that can be made about the problem characteristics, while other
characteristics would be specific to a particular problem.
• Following are the few ways to improve the efficiency of an algorithm,

1. Removing Redundant Computations Outside Loops

• The most common mistake when using loops is to repeatedly re-calculate a part of an expression that remains
constant throughout the entire execution of the loop.
• For example:
for i=1 to 20 do
begin
x = a*a*a + b*b
write ('x=', x)
end
• In the above code, the redundant calculations a*a*a and b*b are inside the loop.
• After moving these redundant calculations out of the loop, efficiency is improved.
• For example:
A1 = a*a*a
B1 = b*b
for i=1 to 20 do
begin
x = A1 + B1
write ('x=', x)
end

2. Referencing of Array Elements

• Generally, arrays are processed by iterative constructs.


• If care is not exercised while programming, redundant computations can creep into array processing.
• Consider, as an example, the following two versions of an algorithm in C, to find the maximum
number and its position in an array
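(The two C listings are not reproduced in this copy; the sketch below shows what the contrast plausibly looks like. The function names are our own.)

#include <stdio.h>

/* Version 1: re-references a[pos] on every comparison inside the loop. */
int max_pos_v1(int a[], int n) {
    int pos = 0;
    for (int i = 1; i < n; i++)
        if (a[i] > a[pos])        /* a[pos] is re-indexed on each pass */
            pos = i;
    return pos;
}

/* Version 2: keeps the current maximum in a plain variable, removing the
   redundant array referencing from the loop body. */
int max_pos_v2(int a[], int n) {
    int pos = 0, max = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > max) { max = a[i]; pos = i; }
    return pos;
}

int main(void) {
    int a[] = {3, 9, 2, 7};
    printf("%d %d\n", max_pos_v1(a, 4), max_pos_v2(a, 4));  /* 1 1 */
    return 0;
}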

3. Inefficiency due to late termination

• Consider an example of linear search in which the names are stored alphabetically.
• An inefficient implementation for this case would be one where all names were examined even if a node in the list
was reached where it could be definitely said that the name cannot occur later in the list.
• For example, suppose we are searching for Ketan; as soon as we reach a name that is alphabetically later than
Ketan, for example Lalit, the search should not proceed further.
• Another example is of bubble sort, in which it takes (n-1) passes to sort an unsorted array containing n elements.
• But if the array is already sorted, then also it takes (n-1) comparisons and no exchange of elements will be made. It
causes late termination.
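A C sketch of the early-terminating linear search over an alphabetically ordered list (the list contents are illustrative):

#include <stdio.h>
#include <string.h>

/* Linear search in an alphabetically ordered list: stop as soon as we pass
   the position where the name could possibly occur. */
int find(const char *names[], int n, const char *key) {
    for (int i = 0; i < n; i++) {
        int cmp = strcmp(names[i], key);
        if (cmp == 0) return i;   /* found */
        if (cmp > 0)  return -1;  /* names[i] is later than key: terminate early */
    }
    return -1;
}

int main(void) {
    const char *names[] = {"Amit", "Ketan", "Lalit", "Nilesh"};
    printf("%d %d\n", find(names, 4, "Ketan"), find(names, 4, "Kiran")); /* 1 -1 */
    return 0;
}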

4. Early Detection of desired Output Conditions

• It may sometimes happen that, due to the very nature of the input data a particular algorithm, for example, the
Bubble Sort algorithm, establishes the desired output condition before the general condition for termination is met.
• For example, a bubble sort algorithm might receive a set of data that is already in sorted condition.
• When this is the case, it is obvious that the algorithm will have the data in sorted condition long before the general
loop termination conditions are satisfied.
• It is therefore desirable to terminate the sorting as soon as it is established that the input data is already sorted.
• In general we must include additional tests or steps to determine or detect the conditions for early termination.
• These tests should be included only if they are inexpensive with regards to computer time as was the case in the
Bubble sort algorithm.
• In general, It may be necessary to trade-off extra tests and storage versus CPU time to achieve early
termination of algorithms.
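A C sketch of bubble sort with this inexpensive extra test (the swapped flag) is given below:

#include <stdio.h>

/* Bubble sort with early detection: if a whole pass makes no exchange,
   the data is already sorted and we terminate before the (n-1)-th pass. */
void bubble_sort(int a[], int n) {
    for (int pass = 0; pass < n - 1; pass++) {
        int swapped = 0;                      /* the inexpensive extra test */
        for (int i = 0; i < n - 1 - pass; i++)
            if (a[i] > a[i + 1]) {
                int t = a[i]; a[i] = a[i + 1]; a[i + 1] = t;
                swapped = 1;
            }
        if (!swapped) return;                 /* desired output condition reached */
    }
}

int main(void) {
    int a[] = {1, 2, 3, 4, 5};   /* already sorted: one pass, then early exit */
    bubble_sort(a, 5);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}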

Q. Explain complexity of algorithm with example.

Complexity of Algorithm

• Algorithmic complexity is concerned with how fast or slow a particular algorithm performs.
• We define complexity as a numerical function T(n) - time versus the input size n.
• We want to define time taken by an algorithm without depending on the implementation details.
• While analyzing an algorithm, we mostly consider time complexity and space complexity.

Time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of
the input.

Space complexity of an algorithm quantifies the amount of space or memory taken by an algorithm to run as a function of
the length of the input.

• Time and space complexity depends on lots of things like hardware, operating system, processors, etc.
• However, we don't consider any of these factors while analyzing the algorithm.
• We will only consider the execution time of an algorithm.
• Following are the commonly used asymptotic notations to calculate the running time complexity of an algorithm.
1. Ο Notation
2. Ω Notation
3. θ Notation

:::::::::::::::: Refer 1 Page Answer After This :::::::::::::::::::::::::::::::


Q. List and explain different algorithm strategies.

Algorithm & its Strategies

• An algorithm is a self-contained, finite sequence of instructions or actions
that gives us a result to a specific problem in a finite amount of time.
• It is a logical and mathematical approach to solve or crack a problem using any possible method.
• Different strategies of algorithms are,
1. Recursive algorithms
2. Divide and conquer algorithm
3. Greedy algorithm
4. Dynamic programming algorithm
5. Backtracking algorithm
6. Brute Force algorithm
7. Randomized algorithm.

1. Recursive algorithm

• It solves the base condition directly and then recurs with a simpler or easier input every time.
• A base value is set at the start, for which the algorithm terminates.
• It is used to solve problems which can be broken into simpler or smaller problems of the same type.
• For example, the Factorial function and the Fibonacci series function.

2. Divide and conquer algorithm

• Divide and conquer consists of two parts: first it divides the problem into smaller subproblems of the same
type and solves them recursively, and then it combines the sub-solutions to form the solution of the original problem.
• For example, Quick Sort, Merge Sort.

3. Greedy algorithm

• A greedy algorithm solves a problem by taking the optimal choice at the local level, without
regard for the consequences, in the hope of finding the optimal solution at the global level.
• For example, the Knapsack Problem.

4. Dynamic programming algorithm

• A dynamic programming algorithm (also known as a dynamic optimization algorithm) remembers past results and
uses them to find new results. It solves complex problems by breaking them down into a collection of simpler
subproblems, solving each of those subproblems only once, and storing their solutions for future use instead of
recomputing them.
• For example, the Travelling Salesman Problem.

5. Backtracking algorithm

• Let us learn backtracking through an example. Say we have a problem "Monk" and we divide it into
four smaller problems "M, R, A, A". It may be the case that a solution built from some of these sub-problems does not get accepted as
the solution of "Monk".
• In fact, we do not know in advance which ones the solution depends on, so we check each of them one by one, backing out of
dead ends, until we find the solution for "Monk".
• For example, the Queens problem.

6. Brute force algorithm

• A brute force algorithm simply tries all the possibilities until a satisfactory solution is found.
• Such algorithms can be used to find the optimal (best) solution, since they check all possible solutions.
• They can also be used for finding a satisfactory solution (not necessarily the best): simply stop as soon as a solution to the problem is
found.
• For example, string matching algorithms.
:::::::::::::::::: CHAPTER 2 :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

Q. What is divide & conquer? Explain with suitable example.

Divide & Conquer

• Divide and Conquer is an algorithmic paradigm.


• In the divide and conquer approach, the problem at hand is divided into smaller sub-problems, and then each sub-problem
is solved independently.
• When we keep on dividing the subproblems into even smaller sub-problems, we may eventually reach a stage
where no more division is possible.
• Those "atomic" smallest possible sub-problem (fractions) are solved.
• The solution of all sub-problems is finally merged in order to obtain the solution of an original problem.
• Divide and conquer algorithm consists of two parts:
1. Divide: Divide the problem into a number of sub problems. The sub problems are solved recursively.
2. Conquer: The solution to the original problem is then formed from the solutions to the sub problems (patching
together the answers).

Control Abstraction of Divide and Conquer

• A control abstraction is a procedure whose flow of control is clear but whose primary operations are specified by
other procedures whose precise meanings are left undefined.
• The control abstraction for divide and conquer technique is given below,
DANDC (P)
{
if SMALL (P) then return S (p);
else
{
divide p into smaller instances p1, p2, ..., pk, k >= 1;
apply DANDC to each of these sub-problems;
return COMBINE (DANDC (p1), DANDC (p2), ..., DANDC (pk));
}
}
• SMALL (P) is a Boolean valued function which determines whether the input size is small enough so that the answer
can be computed without splitting.
• If this is so, function 'S' is invoked; otherwise, the problem 'p' is divided into smaller sub-problems.
• These sub problems p1, p2, . . . , pk are solved by recursive application of DANDC.
• If the sizes of the two sub-problems are approximately equal, then the computing time of DANDC is:
T(n) = g(n) when n is small
T(n) = 2 T(n/2) + f(n) otherwise
where,
T (n) is the time for DANDC on ‘n’ inputs.
g (n) is the time to complete the answer directly for small inputs and
f (n) is the time for Divide and Combine
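As an illustration (our own, not from the text), the control abstraction can be instantiated in C for finding the maximum of an array, where SMALL(P) is a single element and COMBINE takes the larger of two sub-results:

#include <stdio.h>

/* DANDC instantiated for "maximum of an array". */
int max_dc(int a[], int low, int high) {
    if (low == high)                      /* SMALL(P): answer computed directly */
        return a[low];
    int mid = (low + high) / 2;           /* divide P into two sub-instances */
    int left  = max_dc(a, low, mid);
    int right = max_dc(a, mid + 1, high);
    return left > right ? left : right;   /* COMBINE the sub-solutions */
}

int main(void) {
    int a[] = {3, 9, 2, 7, 5};
    printf("%d\n", max_dc(a, 0, 4));      /* prints 9 */
    return 0;
}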

Class of problems for which D&C is unsuitable

• Not all problems exhibit an optimal division into sub-problems.


• Consider the problem of visiting all the vertices of a graph in a simple cycle, starting from a chosen vertex.
• The problem is to find the cycle that yields the smallest total edge-cost.
• We divide the problem into two parts-the set of vertices to be visited before the last vertex, and the last vertex.
• The types of problems for which divide-and-conquer is not suitable are:
1. Where the solution of size n depends upon n sub-solutions, each of size (n-1).
2. Overlapping sub-problems

Timing Analysis of Divide & Conquer Algorithm

• For D&C algorithms the running time is mainly affected by 3 factors:


1. The number of sub-instances (a) into which a problem is split.
2. The ratio of initial problem size to sub-problem size (b).
3. The number of steps required to divide the initial instance and to combine sub solutions, expressed as a function
of the input size, n.

Advantages of Divide & Conquer

• In a perfect world, where the problem is easy to divide, and the sub-problem at some level is easy to solve, divide
and conquer can be optimal for a general case solution, like merge sort.
• Parallel availability: divide and conquer by its very nature lends itself well to parallel processing.

Disadvantages of Divide & Conquer

• Problem decomposition may be very complex and thus not really suitable to divide and conquer.
• Recursive nature of the solution may end up duplicating sub-problems, dynamic/memoized solutions may be better
in some of these cases, like Fibonacci.
• Recursion into small/tiny base cases may lead to huge recursive stacks, and efficiency can be lost by
not applying solutions earlier for larger base cases.
Q. Explain binary search algorithm using iterative and recursive version.

Binary Search

• Binary search algorithm works on the principle of divide and conquer.


• For this algorithm to work properly, the data collection should be in the sorted form.
• Binary search looks for a particular item by comparing the middle most item of the collection.
• If a match occurs, then the index of item is returned.
• If the middle item is greater than the searched item, then the item is searched for in the sub-array to the left of the middle item.
• Otherwise, the item is searched for in the sub-array to the right of the middle item.
• This process continues on the sub-array as well until the size of the subarray reduces to zero.
• Binary search is a fast search algorithm with run-time complexity of Ο (log n).

Iterative Version of Binary Search Algorithm


The iterative binary search algorithm is given below
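(The listing is not reproduced in this copy; a minimal iterative C sketch, assuming a sorted array a[0..n-1], is shown instead.)

#include <stdio.h>

/* Iterative binary search: returns the index of x in sorted a[0..n-1], or -1. */
int binary_search(int a[], int n, int x) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = (low + high) / 2;       /* middle divides the array in two */
        if (a[mid] == x) return mid;      /* match: return the index */
        if (a[mid] > x)  high = mid - 1;  /* search the left sub-array */
        else             low = mid + 1;   /* search the right sub-array */
    }
    return -1;                            /* sub-array size reduced to zero */
}

int main(void) {
    int a[] = {2, 5, 8, 12, 16};
    printf("%d\n", binary_search(a, 5, 12));  /* prints 3 */
    return 0;
}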

In the above algorithm, the array is divided into two parts with the help of the middle index.
The searched element is first compared with the middle element, and the search continues on the appropriate half.
The best case complexity of the algorithm is O(1) and the worst & average case is O(log n).

Analysis of Iterative & Recursive Algorithm


After practical implementation of the iterative & recursive binary search algorithms, it is found that the
recursive binary search algorithm is less efficient, due to the extra recursive calls, than the
iterative binary search algorithm.
That is why the iterative binary search algorithm is generally preferred; both versions follow the divide & conquer strategy.

Recursive Version of Binary Search Algorithm


The recursive binary search algorithm is given below.
The algorithm calls itself recursively until the element is found or the sub-array is empty.

Recursive Algorithm :
RBS (a, low, high, x)
local integer mid;
if low > high then
return high+1;
mid = floor((low + high)/2);
if a[mid] = x then
return mid;
if x < a[mid] then
return RBS(a, low, mid-1, x);
else
return RBS(a, mid+1, high, x);
end
Q. What do you mean by triangulation? Explain different applications.
Triangulation

• Triangulation is a process that takes a region of space and divides it into subregions.
• The space may be of any dimension and the subregions typically are shaped as triangles.
• Triangulation is used in many real-world applications, such as finite element simulation, in which a partial
differential equation is simulated over a continuous surface or space by first using a triangulation to break
the space into small discrete elements.
• Other applications includes,
1. Surface approximation: A continuous surface is approximated by a mesh of triangles.
This is used in graphical applications like face representation, etc.
2. Nearest neighbour structure identification: The triangulation establishes data identifying the nearest
neighbours of each point in the triangulated data set. This is potentially useful for clustering
application.
Line Side Test

• Given a point C and a line AB, the line side test answers the question on which side of line AB the point C
is.

• The line side test is easily implemented through a matrix determinant operation.
• Let the line be defined by points A = (xa,ya), B = (xb,yb) and the test point be C= (xc,yc)
• Therefore, D = (xb - xa)(yc - ya) - (yb - ya)(xc - xa).

• The point C is above the line A->B iff D > 0.
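A small C sketch of the line side test (the sample points are our own):

#include <stdio.h>

/* Line side test: the sign of D tells on which side of line A->B point C lies. */
double line_side(double xa, double ya, double xb, double yb,
                 double xc, double yc) {
    return (xb - xa) * (yc - ya) - (yb - ya) * (xc - xa);
}

int main(void) {
    double d = line_side(0, 0, 4, 0, 2, 1);   /* C is above the line A->B */
    printf("%s\n", d > 0 ? "above" : (d < 0 ? "below" : "on the line"));
    return 0;
}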

In Circle Test

• Given four points A, B, C, D, the in-circle test answers the question whether D is inside the circle defined by A,
B & C.
• Let A = (xa,ya), B = (xb,yb), C = (xc,yc), D = (xd,yd)
• The matrix determinant used for in circle test is:
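The determinant itself was not reproduced in this copy; one standard form (assuming A, B, C are given in counter-clockwise order) is sketched in C below. D lies inside the circle through A, B, C iff the value is positive.

#include <stdio.h>

/* In-circle test: 3x3 determinant with rows (x-xd, y-yd, (x-xd)^2 + (y-yd)^2)
   for A, B, C; a positive result means D is inside the circle. */
double in_circle(double xa, double ya, double xb, double yb,
                 double xc, double yc, double xd, double yd) {
    double ax = xa - xd, ay = ya - yd;
    double bx = xb - xd, by = yb - yd;
    double cx = xc - xd, cy = yc - yd;
    return (ax*ax + ay*ay) * (bx*cy - by*cx)
         - (bx*bx + by*by) * (ax*cy - ay*cx)
         + (cx*cx + cy*cy) * (ax*by - ay*bx);
}

int main(void) {
    /* A=(0,0), B=(1,0), C=(0,1) counter-clockwise; D=(0.5,0.5) is inside. */
    printf("%f\n", in_circle(0, 0, 1, 0, 0, 1, 0.5, 0.5));  /* positive */
    return 0;
}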
Q. Explain merge sort algorithm. Explain how D&C strategy is applicable to it. Also analyze the algorithm.
Merge Sort

• Merge Sort is a Divide and Conquer algorithm.


• It divides input array in two halves, calls itself for the two halves and then merges the two sorted
halves.
• The merge() function is used for merging two halves.
• merge(arr, l, m, r) is the key process: it assumes that arr[l..m] and arr[m+1..r] are sorted and merges
the two sorted sub-arrays into one.
• The merge sort algorithm is given below,
MergeSort(arr[ ], l, r)
If r > l
1. Find the middle point to divide the array into two halves:
middle m = (l+r)/2
2. Call mergeSort for first half:
Call mergeSort(arr, l, m)
3. Call mergeSort for second half:
Call mergeSort(arr, m+1, r)
4. Merge the two halves sorted in step 2 and 3:
Call merge(arr, l, m, r)
Consider an array which is unsorted; merge sort repeatedly halves it, sorts each half, and merges the sorted halves.
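A C sketch of the algorithm, including one possible merge() (the fixed-size buffer is an assumption for this illustration), is:

#include <stdio.h>
#include <string.h>

/* merge(arr, l, m, r): assumes arr[l..m] and arr[m+1..r] are sorted and
   merges them into one sorted run via a temporary buffer. */
void merge(int arr[], int l, int m, int r) {
    int tmp[64], i = l, j = m + 1, k = 0;     /* buffer size is illustrative */
    while (i <= m && j <= r)
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= m) tmp[k++] = arr[i++];
    while (j <= r) tmp[k++] = arr[j++];
    memcpy(&arr[l], tmp, k * sizeof(int));
}

void merge_sort(int arr[], int l, int r) {
    if (r > l) {
        int m = (l + r) / 2;          /* middle point divides the array */
        merge_sort(arr, l, m);        /* sort the first half */
        merge_sort(arr, m + 1, r);    /* sort the second half */
        merge(arr, l, m, r);          /* merge the two sorted halves */
    }
}

int main(void) {
    int a[] = {5, 2, 9, 1, 6};
    merge_sort(a, 0, 4);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);  /* 1 2 5 6 9 */
    printf("\n");
    return 0;
}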

Time Complexity of Merge Sort

• Assuming that the time taken to merge is proportional to n, the time taken by merge sort is given by the recursive
formula,
T(n) = a when n = 1, a is a constant
T(n) = 2 T(n/2) + cn when n > 1, c is a constant
Solving this recurrence gives T(n) = an + cn log2 n, which is O(n log2 n).
:::::::::::::::::::::::: CHAPTER 3 ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
Q. What is Greedy Algorithm? What are its advantages & Disadvantages?
Greedy Algorithm

• Among all the algorithmic approaches, the simplest and most straightforward approach is the Greedy method.
• In this approach, the decision is taken on the basis of currently available information, without worrying about
the effect of the current decision in the future.
• Greedy algorithms build a solution part by part, choosing the next part in such a way that it
gives an immediate benefit.
• This approach never reconsiders the choices taken previously.
• This approach is mainly used to solve optimization problems. The Greedy method is easy to implement and
quite efficient in most of the cases.
• Hence, we can say that a Greedy algorithm is an algorithmic paradigm based on a heuristic that follows the local
optimal choice at each step with the hope of finding a global optimal solution.
• In many problems it does not produce an optimal solution, though it gives an approximate (near optimal)
solution in a reasonable time.
Algorithm :
Algorithm Greedy(a, n) // n defines the input set
{
solution = NULL; // initialize solution set
for i=1 to n do
{
x = Select (a); // Selection Function
if Feasible (solution, x) then // Feasibility solution
solution = Union (solution, x); // Include x in the solution set
}
return solution;
}

Advantages of Greedy Algorithm

• Finding a solution is quite easy with a greedy algorithm for a problem.
• Analyzing the run time for greedy algorithms will generally be much easier than for other techniques (like
Divide and Conquer).

Disadvantages of Greedy Algorithm

• In many problems, a Greedy algorithm fails to find an optimal solution; moreover it may produce a worst
solution. Problems like Travelling Salesman, etc. cannot be solved using this approach.
• The difficult part is that for greedy algorithms we have to work much harder to understand correctness
issues. Even with the correct algorithm, it is hard to prove why it is correct. Proving that a greedy algorithm
is correct is more of an art than a science.
Knapsack Problem

• The knapsack problem or rucksack problem is a problem in combinatorial optimization: given a set of items,
each with a weight and a value, determine the number of each item to include in a collection so that the total
weight is less than or equal to a given limit and the total value is as large as possible.
• It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and
must fill it with the most valuable items.
Knapsack Algorithm :

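The listing itself is missing from this copy; a minimal greedy sketch for the fractional variant of the problem (which the greedy method solves optimally), with items assumed pre-sorted by value/weight ratio in decreasing order, is:

#include <stdio.h>

/* Fractional knapsack (greedy): take whole items in ratio order while they
   fit, then the fitting fraction of the next item. */
double knapsack(double w[], double v[], int n, double capacity) {
    double total = 0.0;
    for (int i = 0; i < n && capacity > 0; i++) {
        if (w[i] <= capacity) {                 /* whole item fits */
            total += v[i];
            capacity -= w[i];
        } else {                                /* take only a fraction */
            total += v[i] * (capacity / w[i]);
            capacity = 0;
        }
    }
    return total;
}

int main(void) {
    double w[] = {10, 20, 30}, v[] = {60, 100, 120};  /* ratios 6, 5, 4 */
    printf("%.1f\n", knapsack(w, v, 3, 50));  /* 60 + 100 + 120*(20/30) = 240.0 */
    return 0;
}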

Job Sequencing Problem

• Given an array of jobs where every job has a deadline and an associated profit if the job is finished before the
deadline.
• It is also given that every job takes a single unit of time, so the minimum possible deadline for any job is 1.
The goal is to maximize total profit when only one job can be scheduled at a time.
Job Sequencing Algo :
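The listing is missing here; a greedy sketch (jobs assumed pre-sorted by profit in decreasing order, each job placed in the latest free slot on or before its deadline) is:

#include <stdio.h>

#define MAXSLOTS 100

/* Greedy job sequencing: jobs pre-sorted by profit, decreasing. */
int schedule(int deadline[], int profit[], int n) {
    int slot[MAXSLOTS + 1] = {0}, total = 0;
    for (int i = 0; i < n; i++)
        for (int t = deadline[i]; t >= 1; t--)   /* latest free slot first */
            if (!slot[t]) { slot[t] = 1; total += profit[i]; break; }
    return total;
}

int main(void) {
    int deadline[] = {2, 1, 2, 1};               /* already sorted by profit */
    int profit[]   = {100, 60, 40, 20};
    printf("%d\n", schedule(deadline, profit, 4));  /* 100 + 60 = 160 */
    return 0;
}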
Kruskal’s Problem :

• Given a connected and undirected graph, a spanning tree of that graph is a subgraph that
is a tree and connects all the vertices together.
• A single graph can have many different spanning trees.
• A minimum spanning tree (MST) or minimum weight spanning tree for a weighted,
connected and undirected graph is a spanning tree with weight less than or equal to the weight of
every other spanning tree.
• The weight of a spanning tree is the sum of the weights given to each edge of the spanning tree.
• A minimum spanning tree has (V - 1) edges, where V is the number of vertices in the given graph.
• Kruskal's algorithm is a Greedy Algorithm.
• The greedy choice is to pick the smallest weight edge that does not cause a cycle in the Minimum Spanning
Tree constructed so far.
Kruskal’s Algorithm :
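The listing is missing from this copy; a minimal C sketch of Kruskal's greedy choice (edges sorted by weight, cycles detected with union-find; the example graph is our own) is:

#include <stdio.h>
#include <stdlib.h>

typedef struct { int u, v, w; } Edge;

int parent[100];
int find(int x) { return parent[x] == x ? x : (parent[x] = find(parent[x])); }

int cmp(const void *a, const void *b) {
    return ((const Edge *)a)->w - ((const Edge *)b)->w;
}

int main(void) {
    Edge e[] = {{0,1,4}, {1,2,1}, {0,2,3}, {2,3,2}};
    int m = 4, V = 4, total = 0;
    for (int i = 0; i < V; i++) parent[i] = i;
    qsort(e, m, sizeof(Edge), cmp);        /* greedy choice: smallest weight first */
    for (int i = 0; i < m; i++) {
        int ru = find(e[i].u), rv = find(e[i].v);
        if (ru != rv) {                    /* edge joins two trees: no cycle */
            parent[ru] = rv;
            total += e[i].w;
            printf("take edge %d-%d (w=%d)\n", e[i].u, e[i].v, e[i].w);
        }
    }
    printf("MST weight = %d\n", total);    /* V-1 = 3 edges, weight 6 */
    return 0;
}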

Prim’s Problem & Algorithm :

• Prim’s algorithm is also a Greedy algorithm.


• It starts with an empty spanning tree.
• The idea is to maintain two sets of vertices.
• The first set contains the vertices already included in the Minimum Spanning Tree, the other set
contains the vertices not yet included.
• At every step, it considers all the edges that connect the two sets, and picks the minimum weight edge
from these edges.
• After picking the edge, it moves the other endpoint of the edge to the set containing the Minimum
Spanning Tree.
• The idea behind Prim’s algorithm is simple, a spanning tree means all vertices must be connected.
Dijkstra’s Algorithm

• Given a graph and a source vertex in the graph, find shortest paths from source to all vertices in the
given graph.
• Dijkstra’s algorithm is very similar to Prim’s algorithm for minimum spanning tree.
• Like Prim’s MST, we generate a SPT (shortest path tree) with given source as root.
• We maintain two sets, one set contains vertices included in shortest path tree, other set includes
vertices not yet included in shortest path tree.
• At every step of the algorithm, we find a vertex which is in the other set (set of not yet included) and
has a minimum distance from the source.
• Below are the detailed steps used in Dijkstra’s algorithm to find the shortest path from a single source
vertex to all other vertices in the given graph.
Algo :
1. Create a set sptSet (shortest path tree set) that keeps track of vertices included in shortest path tree, i.e.,
whose minimum distance from source is calculated and finalized. Initially, this set is empty.
2. Assign a distance value to all vertices in the input graph. Initialize all distance values as INFINITE. Assign
distance value as 0 for the source vertex so that it is picked first.
3. While sptSet doesn’t include all vertices
a. Pick a vertex u which is not there in sptSet and has minimum distance value.
b. Include u to sptSet.
c. Update distance value of all adjacent vertices of u. To update the distance values, iterate through all adjacent
vertices.
For every adjacent vertex v, if sum of distance value of u (fromsource) and weight of edge u-v, is less than the
distance value of v, then update the distance value of v.
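A C sketch that follows these steps on a small adjacency matrix (0 means no edge; the example graph is our own) is:

#include <stdio.h>
#include <limits.h>

#define V 5

void dijkstra(int g[V][V], int src) {
    int dist[V], inSpt[V] = {0};
    for (int i = 0; i < V; i++) dist[i] = INT_MAX;  /* step 2: all INFINITE */
    dist[src] = 0;                                  /* source is picked first */
    for (int count = 0; count < V; count++) {
        int u = -1;                                 /* step 3a: min-distance vertex */
        for (int i = 0; i < V; i++)
            if (!inSpt[i] && (u == -1 || dist[i] < dist[u])) u = i;
        inSpt[u] = 1;                               /* step 3b: add u to sptSet */
        for (int v = 0; v < V; v++)                 /* step 3c: relax neighbours */
            if (g[u][v] && dist[u] != INT_MAX && dist[u] + g[u][v] < dist[v])
                dist[v] = dist[u] + g[u][v];
    }
    for (int i = 0; i < V; i++) printf("dist[%d] = %d\n", i, dist[i]);
}

int main(void) {
    int g[V][V] = {{0,4,0,0,8}, {4,0,8,0,11}, {0,8,0,7,0},
                   {0,0,7,0,1}, {8,11,0,1,0}};
    dijkstra(g, 0);   /* dist: 0 4 12 9 8 */
    return 0;
}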

::::::::::::::: Note : We are not including any worked examples here, and we cannot anticipate which numericals will be asked, so please
prepare all such numericals on your own. :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
::::::::::::::: CHAPTER 4 :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
Q. Explain the Floyd’s Algorithm.
Floyd Warshall’s Algorithm

• The Floyd Warshall Algorithm is for solving the All Pairs Shortest Path problem.
• The problem is to find shortest distances between every pair of vertices in a given edge weighted directed
Graph.
• The Floyd Warshall algorithm is a shortest path algorithm for graphs.
• It computes the shortest distances between every pair of vertices in the input graph.
• We have seen the Bellman Ford Algorithm and Dijkstra's Algorithm, which only compute shortest distances
from a single source.
• The Floyd Warshall algorithm finds the shortest path to reach from one point to another. The output will also be in
the form of a matrix.
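A minimal C sketch (a large constant stands in for infinity; the example graph is our own):

#include <stdio.h>

#define V 4
#define INF 99999   /* stands in for "no direct edge" */

/* Floyd Warshall: dist[i][j] becomes the shortest distance from i to j. */
void floyd_warshall(int dist[V][V]) {
    for (int k = 0; k < V; k++)              /* allow k as an intermediate vertex */
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}

int main(void) {
    int dist[V][V] = {{0, 5, INF, 10}, {INF, 0, 3, INF},
                      {INF, INF, 0, 1}, {INF, INF, INF, 0}};
    floyd_warshall(dist);                    /* e.g. dist[0][3] becomes 9 */
    for (int i = 0; i < V; i++) {
        for (int j = 0; j < V; j++) printf("%7d", dist[i][j]);
        printf("\n");
    }
    return 0;
}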
Q. What is Dynamic Programming?

• Dynamic programming is a method for solving complex problems by breaking them down into simpler sub-problems.
• DP is another technique for problems with optimal substructure: An optimal solution to a problem
contains optimal solutions to sub-problems.
• This doesn't necessarily mean that every optimal solution to a sub-problem will contribute to the main
solution.
• For divide and conquer (top down), the sub-problems are independent so we can solve them in any
order.
• For greedy algorithms (bottom up), we can always choose the "right" sub-problem by a greedy choice.
• In dynamic programming, we solve many sub-problems and store the results: not all of them will
contribute to solving the larger problem. Because of optimal substructure, we can be sure that at least
some of the sub-problems will be useful.
• Dynamic Programming is used when the sub-problems are not independent, e.g. when they share the
same sub-problems. In this case, divide and conquer may do more work than necessary, because it
solves the same sub problem multiple times.
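A small C sketch of this idea, computing Fibonacci numbers with a table so that each sub-problem is solved only once:

#include <stdio.h>

/* Fibonacci via dynamic programming: each sub-result is stored and reused,
   unlike naive recursion, which re-solves the same sub-problems repeatedly. */
long long fib(int n) {
    long long table[91];   /* valid for n up to 90 in this sketch */
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];  /* reuse stored solutions */
    return table[n];
}

int main(void) {
    printf("%lld\n", fib(40));   /* prints 102334155 */
    return 0;
}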

Q. Matrix Chain Multiplication

• Matrix chain multiplication (or Matrix Chain Ordering Problem, MCOP) is an optimization problem that can
be solved using dynamic programming.
• Given a sequence of matrices, the goal is to find the most efficient way to multiply these matrices.
• The problem is not actually to perform the multiplications, but merely to decide the sequence of the
matrix multiplications involved.
Matrix Multiplication Algorithm :
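The listing is missing from this copy; a DP sketch in C (p[] holds the dimensions, so matrix i is p[i-1] x p[i], and m[i][j] is the minimum number of scalar multiplications for the product Ai..Aj) is:

#include <stdio.h>
#include <limits.h>

#define N 8   /* supports up to N-1 matrices in this sketch */

int matrix_chain(int p[], int n) {   /* n = number of matrices */
    int m[N][N] = {{0}};
    for (int len = 2; len <= n; len++)             /* chain length */
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            m[i][j] = INT_MAX;
            for (int k = i; k < j; k++) {          /* try every split point */
                int cost = m[i][k] + m[k+1][j] + p[i-1] * p[k] * p[j];
                if (cost < m[i][j]) m[i][j] = cost;
            }
        }
    return m[1][n];
}

int main(void) {
    int p[] = {10, 20, 30, 40};   /* three matrices: 10x20, 20x30, 30x40 */
    printf("%d\n", matrix_chain(p, 3));   /* prints 18000 */
    return 0;
}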
Q. Explain optimal polygon triangulation with suitable example.

• The base case of the recursion is a line segment (i.e., a polygon with zero area), which has cost 0.
• Let's define a function based on this intuitive notion.
• Let t(i, j) be the cost of an optimal triangulation of the polygon <v(i-1), v(i), ..., v(j)>. Then,
t(i, j) = 0, if i = j
t(i, j) = min over i <= k <= j-1 of { t(i, k) + t(k+1, j) + cost(<v(i-1), v(k), v(j)>) }, if i < j
::::::::::::::: CHAPTER 5 :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

Q. Explain DFS & BFS algorithm with example.

Graph Traversal

• Traversal is a process to visit all the nodes of a graph and may print their values too.
• There are two graph traversal techniques,
1. Depth First Search (DFS)

• The Depth First Search (DFS) traversing method uses a stack for storing the visited vertices.
• DFS is an edge-based method and works in a recursive fashion where the vertices are explored along a
path (edge).
• The exploration of a node is suspended as soon as another unexplored node is found, and the deepest
unexplored nodes are traversed first.
• DFS visits each vertex exactly once, and each edge is inspected exactly twice.

2. Breadth First Search (BFS)

• Breadth First Search (BFS) is the traversing method used in graphs. It uses a queue for storing the visited
vertices.
• In this method the emphasis is on the vertices of the graph: one vertex is selected first, then it is visited
and marked.
• The vertices adjacent to the visited vertex are then visited and stored in the queue sequentially.
• Similarly, the stored vertices are then treated one by one, and their adjacent vertices are visited.
• A node is fully explored before visiting any other node in the graph, in other words, it traverses shallowest
unexplored nodes first.

• BFS and DFS have similar running times but different space
consumption: DFS takes linear space because we only have to remember a single path with its unexplored nodes,
while BFS keeps every node in memory.
• DFS yields deeper solutions and is not optimal, but it works well when solutions are dense, whereas BFS
is optimal and finds the optimal goal first.
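A C sketch of both traversals on a small adjacency matrix (the graph is our own illustration):

#include <stdio.h>

#define V 5

int g[V][V] = {{0,1,1,0,0}, {1,0,0,1,0}, {1,0,0,1,0},
               {0,1,1,0,1}, {0,0,0,1,0}};
int visited[V];

/* DFS: recursion supplies the stack; go as deep as possible, then back up. */
void dfs(int u) {
    visited[u] = 1;
    printf("%d ", u);
    for (int v = 0; v < V; v++)
        if (g[u][v] && !visited[v]) dfs(v);
}

/* BFS: an explicit queue; visit all neighbours before going deeper. */
void bfs(int s) {
    int queue[V], head = 0, tail = 0, seen[V] = {0};
    seen[s] = 1;
    queue[tail++] = s;
    while (head < tail) {
        int u = queue[head++];
        printf("%d ", u);
        for (int v = 0; v < V; v++)
            if (g[u][v] && !seen[v]) { seen[v] = 1; queue[tail++] = v; }
    }
}

int main(void) {
    dfs(0); printf("\n");   /* 0 1 3 2 4 */
    bfs(0); printf("\n");   /* 0 1 2 3 4 */
    return 0;
}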
Q. Explain Backtracking framework in brief. What are the advantages of Back Tracking?
Q. Explain Hamiltonian cycle with example.
Q. Explain Combinatorial Search with example.
Combinatorial Search

• In computer science and artificial intelligence, combinatorial search studies search algorithms for solving
instances of problems that are believed to be hard in general, by efficiently exploring the usually large
solution space of these instances.
• Combinatorial search algorithms achieve this efficiency by reducing the effective size of the search space
or employing heuristics. Some algorithms are guaranteed to find the optimal solution, while others may
only return the best solution found in the part of the state space that was explored.
• Classic combinatorial search problems include solving the eight queens puzzle or evaluating moves in
games with a large game tree, such as chess.
• The benefits that can accrue from analyzing and improving algorithms for problems which exhibit
combinatorial explosion, whose time grows exponentially in the size of the problem, can be very
high.
• By using exhaustive search techniques, we can solve small problems optimally, although the time
complexity may be enormous.
• For certain applications, it may well pay to spend some extra time, to get an optimal solution.
• A good example occurs in testing a circuit or a program on all possible inputs.
• We can prove the correctness of the device by trying all possible inputs and verifying that they give the
correct answer.
• Proving such correctness is a property to be proud of.
• However, claiming that it works correctly on all the inputs we have tried is worth much less.
• Backtracking is a technique for listing all configurations representing possible solutions of a
combinatorial algorithm problem.
• Techniques for pruning the search significantly improve efficiency by eliminating irrelevant
configurations from consideration.
• We can illustrate the power of some pruning techniques to speed up real-life search applications.
• For problems that are too large to be solved using brute-force combinatorial search, we introduce
heuristic methods such as simulated annealing, and genetic algorithms, such heuristic methods are an
important weapon in the practical algorithm development arena.
:::::::::::::::::::::::::::::: CHAPTER 6 :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
Q. Explain polynomial time (P) and nonpolynomial time (NP) algorithm.

If the complexity of an algorithm is expressed as O(p(n)), where p(n) is some polynomial of n, then the algorithm is
said to be a polynomial-time algorithm. It is generally accepted that polynomial-time algorithms are tractable.
▪ Any algorithm with a time complexity that cannot be bounded by such a polynomial is called a non-polynomial-time
algorithm.
▪ Every decision problem can have only two answers, yes or no.
▪ Hence, a decision problem may belong to a language if it provides an answer ‘yes’ for a specific input.
▪ A language is the totality of inputs for which the answer is Yes.
▪ For input size n, if the worst-case time complexity of an algorithm is O(n^k), where k is a constant, the algorithm is a
polynomial-time algorithm.
▪ Algorithms such as Matrix Chain Multiplication, Single Source Shortest Path, All Pair Shortest Path, Minimum
Spanning Tree, etc. run in polynomial time.
▪ However, there are many problems, such as travelling salesperson, optimal graph coloring, Hamiltonian cycles,
finding the longest path in a graph, and satisfying a Boolean formula, for which no polynomial-time algorithm is
known.
▪ These problems belong to an interesting class of problems, called the NP-Complete problems, whose status is
unknown.

Polynomial-Class
▪ The class P consists of those problems that are solvable in polynomial time, i.e. these problems can be solved in
time O(n^k) in the worst case, where k is a constant.
▪ These problems are called tractable, while others are called intractable or super polynomial.
▪ Formally, an algorithm is polynomial time algorithm, if there exists a polynomial p(n) such that the algorithm
can solve any instance of size n in a time O(p(n)).
▪ The advantage of considering the class of polynomial-time algorithms is that all reasonable deterministic
single-processor models of computation can be simulated on each other with at most a polynomial cost in time.

NP-Class
▪ The class NP consists of those problems that are verifiable in polynomial time.
▪ NP is the class of decision problems for which it is easy to check the correctness of a claimed answer, with the
aid of a little extra information.
▪ Hence, we aren’t asking for a way to find a solution, but only to verify that an alleged solution really is correct.
▪ Every problem in this class can be solved in exponential time using exhaustive search.
Q. Explain efficiency of recursion with example.

Efficiency of Recursion
▪ The process in which a function calls itself directly or indirectly is called recursion and the corresponding
function is called as recursive function.
▪ Using recursive algorithm, certain problems can be solved quite easily.
▪ Examples of such problems are Towers of Hanoi (TOH), Inorder/Preorder/Postorder Tree Traversals, DFS of
Graph, etc.
▪ In general, a non-recursive version of a program will execute more efficiently in terms of time and space than a
recursive version.
▪ This is because the overhead involved in entering and exiting a block is avoided in the non-recursive version.
▪ In a recursive algorithm a number of local and temporary variables are to be stacked and unstacked on the stack,
which consumes both time and space.
▪ However, in some cases a recursive algorithm is the most natural and logical way to solve a problem.
▪ Thus we have a conflict between the computer's efficiency and the programmer's efficiency.
▪ As an example, consider the classical recursive problem, the famous Tower of Hanoi problem.
Tower of Hanoi
▪ Three pegs are labeled O, I, and D (Origin, Intermediate & Destination).
▪ The problem is to transfer all of the disks from peg O (Origin) to peg D (Destination) using I as intermediate
storage, moving only one disk at a time and never placing a larger disk on top of a smaller one.
Tower of Hanoi Algorithm
Tower (n, O, I, D)
1. If n=1 then move O 🡪 D.
2. Else
3. Tower(n-1, O, D, I)
4. Move O 🡪 D
5. Tower(n-1, I, O, D)
6. End
▪ The above algorithm expresses the solution recursively.
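A direct C translation of the algorithm (a sketch printing each move) is:

#include <stdio.h>

/* Tower of Hanoi: move n disks from peg O to peg D using peg I. */
void tower(int n, char O, char I, char D) {
    if (n == 1) {
        printf("move disk 1: %c -> %c\n", O, D);    /* step 1 */
        return;
    }
    tower(n - 1, O, D, I);                          /* step 3 */
    printf("move disk %d: %c -> %c\n", n, O, D);    /* step 4 */
    tower(n - 1, I, O, D);                          /* step 5 */
}

int main(void) {
    tower(3, 'O', 'I', 'D');   /* 2^3 - 1 = 7 moves */
    return 0;
}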

Q. Explain SPLAY tree in brief.


▪ A Splay Tree is a self-adjusting Binary Search Tree in which every operation on an element rearranges the tree so that
the element is placed at the root position of the tree.
▪ In a splay tree, every operation is performed at the root of the tree.
▪ All the operations in splay tree are involved with a common operation called "Splaying".
▪ Splaying an element is the process of bringing it to the root position by performing suitable rotation operations.
▪ In a splay tree, splaying an element rearranges all the elements in the tree so that splayed element is placed at
the root of the tree.
▪ By splaying elements we bring more frequently used elements closer to the root of the tree so that any
operation on those elements is performed quickly.
▪ That means the splaying operation automatically brings more frequently used elements closer to the root of the
tree.
▪ Every operation on splay tree performs the splaying operation. For example, the insertion operation first inserts
the new element using the binary search tree insertion process, then the newly inserted element is splayed so
that it is placed at the root of the tree.
▪ The search operation in a splay tree is nothing but searching the element using binary search process and then
splaying that searched element so that it is placed at the root of the tree.
Q. Explain complexity of set operation and Mapping.

Set Operation and Mapping


▪ A recurring theme in computer problem solving is the need to deal with various kinds of sets and mappings.
▪ Frequently, the need arises for sets of integers, strings, and more complex data types, as well as mappings from
a wide variety of types to integers, and so on.
▪ Thus it is worthwhile using these in our discussion of complexity.
▪ A wide variety of different representations for these abstractions is available and they differ in the types of data
they handle, the complexity, space requirements, and coding difficulty.
▪ Thinking of a set as a class, the following operations are typical:

1. Find
▪ Find out whether a member is in the set.
▪ If so, return a locator for it.
▪ A locator is a kind of reference to the member.
▪ It is used in deleting the member or changing it.

2. Add
▪ Add a new member to the set (assuming it is not present already).

3. Delete
▪ Given a locator for a member, remove the member from the set.

4. New
▪ Create a new set.
There are following ways to represent a set,

• a. Set Implementation using an unordered array


▪ It is the simplest way to represent a set.
▪ While using unordered array, the complexity of following operations are given,
Find: O(N)
Add: O(1)
Delete: O(1)
New: O(1)

• b. Binary Search Principle


▪ It is a divide and conquer technique.
▪ It maintains an ordered array of the set.

• c. Binary Search Trees


▪ Binary Search tree enables binary searching on linked structure.
▪ To do this, each node has two links.
▪ For Example,
Set = {1,2,4,6,8,10,11,12,13,14,15}
▪ The complexity of find, add, delete is O(log2N).
• d. Bit Vectors
▪ In this method, elements are mapped to integers.
▪ A bit vector is an array with one array element per domain element.
▪ The value of the element is 1 if it is a member of the set, and 0 if it is not.
▪ For Example,
Set = {1,2,4,6,8,10,11,12,13,14,15}
▪ The complexity for the following operation,
Find= O(1)
Add= O(1)
Delete= O(1)
New = O(M)
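A C sketch of a bit vector over the domain 0..15 (holding the example set above):

#include <stdio.h>

/* Bit vector set over the domain 0..15: one bit per domain element. */
unsigned short bits = 0;                                    /* all bits 0 = empty set */

void add(int x)  { bits |=  (unsigned short)(1u << x); }    /* O(1) */
void del(int x)  { bits &= (unsigned short)~(1u << x); }    /* O(1) */
int  find(int x) { return (bits >> x) & 1u; }               /* O(1) */

int main(void) {
    int set[] = {1, 2, 4, 6, 8, 10, 11, 12, 13, 14, 15};
    for (int i = 0; i < 11; i++) add(set[i]);
    printf("find(4) = %d, find(5) = %d\n", find(4), find(5));  /* 1, 0 */
    return 0;
}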

• e. Analysis of Hashing
▪ Hashing is often the best technique for searching an element.
▪ Its O(1) behaviour holds under the following conditions,
a. The elements of the set are distributed relatively evenly into buckets.
b. The size of any one bucket is bounded by a constant.
c. The time to compute the hashing function is bounded by a constant.
▪ Its complexity is O(1).

Q. Give the time analysis of algorithms. Explain complexity with reference to an example.

Time Analysis/Complexity of Algorithm Or Efficiency of Algorithm


▪ While performing timing analysis, what we require is an estimate of the time, rather than the exact time for
execution.
▪ This is usually done by isolating a particular operation, called an active or a dominant operation, that is central
to the algorithm and which is executed frequently.
▪ The operations in the algorithm, namely, the assignments, the manipulation of the index i, and the access of a
value in the vector, do not occur more frequently than the addition of vector values.
▪ These operations are called book keeping operations and are not generally considered.
▪ However, care is to be taken, to see that none of the book keeping operations are executed significantly more
often than the active operation.
▪ After the active operation is isolated, the number of times it is executed is counted.
▪ If we follow this approach for the above example, then the number of additions of values is N.
▪ As long as the active operation occurs at least as often as the others, the execution time will increase in some
proportion to the number of times the active operation is executed.
▪ Thus, in most algorithms, execution time is proportional to N.
Time Analysis of Linear Search
▪ The complexity of the linear search algorithm is measured by the number of comparisons performed while searching an
element.
▪ In the worst case, N+1 comparisons are done.
▪ So its complexity is O(n+1), i.e. O(n).
Time Analysis of Bubble Sort
▪ The complexity of the bubble sort algorithm is measured by the number of comparisons & interchanges
performed while sorting an array.
▪ In every pass, it requires (n-1) comparisons.
▪ The complete array containing n elements is sorted in (n-1) passes.
▪ The number of comparisons is,
= (n-1) + (n-2) + (n-3) + ... + 2 + 1
= n(n-1)/2
= n^2/2 - n/2
= O(n^2)
Difference between NP-Hard and NP-Complete:

1. NP-Hard: NP-Hard problems (say X) can be solved if and only if there is an NP-Complete problem (say Y) that can be reduced to X in polynomial time.
   NP-Complete: NP-Complete problems can be solved by a non-deterministic Algorithm/Turing Machine in polynomial time.

2. NP-Hard: To solve this problem, it does not have to be in NP.
   NP-Complete: To solve this problem, it must be both NP and NP-Hard.

3. NP-Hard: Time is unknown in NP-Hard.
   NP-Complete: Time is known, as it is fixed in NP-Hard.

4. NP-Hard: NP-Hard is not a decision problem.
   NP-Complete: NP-Complete is exclusively a decision problem.

5. NP-Hard: Not all NP-Hard problems are NP-Complete.
   NP-Complete: All NP-Complete problems are NP-Hard.

6. NP-Hard: It does not have to be a decision problem.
   NP-Complete: It is exclusively a decision problem.

7. NP-Hard: It is used as an optimization problem.
   NP-Complete: It is used as a decision problem.

8. NP-Hard: Example: Halting problem, Vertex cover problem, etc.
   NP-Complete: Example: Determine whether a graph has a Hamiltonian cycle, determine whether a Boolean formula is satisfiable or not, Circuit-satisfiability problem, etc.

:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: ALL THE BEST :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
