Unit-V
What is an algorithm?
An algorithm is a step-by-step procedure to solve a problem. In plain language, algorithms are used to convert a problem's solution into step-by-step statements. These statements can be converted into computer programming instructions, which form a program. The program takes the required data as input, processes the data according to the program instructions, and produces the result as output.
Specifications of Algorithms
Every algorithm must satisfy the following specifications...
1. Input - Every algorithm must take zero or more input values from outside.
2. Output - Every algorithm must produce at least one output as a result.
3. Definiteness - Every statement in the algorithm must be clear and unambiguous.
4. Finiteness - The algorithm must terminate after a finite number of steps.
5. Effectiveness - Every instruction must be basic enough to be carried out.
Example - Finding the largest number in a given list of numbers.
Consider the given list of numbers as 'L' (input), and the largest number as 'max' (output).
Algorithm
Step 1: Define a variable 'max' and initialize it with the first number in the list 'L'.
Step 2: Compare the next number (say 'x') in the list 'L' with 'max'; if 'x' is larger than 'max', set 'max' to 'x'.
Step 3: Repeat Step 2 for all the remaining numbers in the list 'L'.
Step 4: Display the value of 'max' as the result.
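A minimal C sketch of these steps (the sample values in list L are assumptions for illustration):

#include <stdio.h>

int main()
{
    int L[] = {12, 45, 7, 89, 23};   /* sample input list (assumed) */
    int n = 5;                       /* number of elements in L */
    int max = L[0];                  /* Step 1: initialize max with the first number */
    int i;

    for(i = 1; i < n; i++)           /* Steps 2-3: compare every remaining number */
    {
        if(L[i] > max)
            max = L[i];
    }
    printf("Largest number: %d\n", max);   /* Step 4: display the result */
    return 0;
}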
Example - Finding the roots of a quadratic equation ax² + bx + c = 0.
Step 1: Start
Step 2: Declare variables a, b, c, D, r1, r2, rp and ip, and read the values of a, b and c.
Step 3: Calculate the discriminant
D ← b² − 4ac
Step 4: If D ≥ 0
r1 ← (−b + √D) / (2a)
r2 ← (−b − √D) / (2a)
Display r1 and r2 as roots.
Else
Calculate the real part and the imaginary part
rp ← −b / (2a)
ip ← √(−D) / (2a)
Display rp + j(ip) and rp − j(ip) as roots.
Step 5: Stop
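A minimal C sketch of this algorithm (reading the coefficients from the user; sqrt comes from math.h):

#include <stdio.h>
#include <math.h>

int main()
{
    double a, b, c, D, r1, r2, rp, ip;

    printf("Enter the coefficients a, b and c: ");
    scanf("%lf %lf %lf", &a, &b, &c);    /* assumes a is non-zero */
    D = b*b - 4*a*c;                     /* Step 3: discriminant */
    if(D >= 0)                           /* Step 4: real roots */
    {
        r1 = (-b + sqrt(D)) / (2*a);
        r2 = (-b - sqrt(D)) / (2*a);
        printf("Roots: %.2lf and %.2lf\n", r1, r2);
    }
    else                                 /* complex roots */
    {
        rp = -b / (2*a);                 /* real part */
        ip = sqrt(-D) / (2*a);           /* imaginary part */
        printf("Roots: %.2lf + j%.2lf and %.2lf - j%.2lf\n", rp, ip, rp, ip);
    }
    return 0;
}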
Max-Min Problem
Let us consider a simple problem that can be solved by the divide and conquer technique.
Problem Statement
The Max-Min Problem in algorithm analysis is finding the maximum and
minimum value in an array.
Solution
To find the maximum and minimum numbers in a given array numbers[] of size n, the following algorithms can be used. First we present the naïve method, and then the divide and conquer approach.
Naïve Method
The naïve method is a basic method to solve any problem. In this method, the maximum and minimum numbers are found separately. To find the maximum and minimum numbers, the following straightforward algorithm can be used.
max := numbers[1]
min := numbers[1]
for i = 2 to n do
    if numbers[i] > max then
        max := numbers[i]
    if numbers[i] < min then
        min := numbers[i]
return (max, min)
Analysis
The number of comparisons in the naïve method is 2(n − 1) = 2n − 2, since each of the remaining n − 1 elements is compared with both max and min.
The number of comparisons can be reduced using the divide and conquer approach: divide the array into two halves, find the maximum and minimum of each half recursively, and combine the two partial results using two more comparisons.
Analysis
T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 2   for n > 2
T(n) = 1                         for n = 2
T(n) = 0                         for n = 1

So,

T(n) = 2·T(n/2) + 2 = 2·(2·T(n/4) + 2) + 2 = ... = (3n/2) − 2
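This is fewer than the 2n − 2 comparisons of the naïve method. A recursive C sketch of this divide and conquer technique is given below (the helper name maxMin and the sample array are assumptions for illustration):

#include <stdio.h>

/* find the max and min of numbers[low..high] by divide and conquer */
void maxMin(int numbers[], int low, int high, int *max, int *min)
{
    int mid, lmax, lmin, rmax, rmin;
    if(low == high)                    /* n = 1: no comparisons */
    {
        *max = *min = numbers[low];
    }
    else if(high == low + 1)           /* n = 2: one comparison */
    {
        if(numbers[low] > numbers[high])
        {
            *max = numbers[low];
            *min = numbers[high];
        }
        else
        {
            *max = numbers[high];
            *min = numbers[low];
        }
    }
    else                               /* n > 2: divide, recurse, combine with 2 comparisons */
    {
        mid = (low + high) / 2;
        maxMin(numbers, low, mid, &lmax, &lmin);
        maxMin(numbers, mid + 1, high, &rmax, &rmin);
        *max = (lmax > rmax) ? lmax : rmax;
        *min = (lmin < rmin) ? lmin : rmin;
    }
}

int main()
{
    int numbers[] = {6, 3, 8, 1, 9, 4};   /* sample array (assumed) */
    int max, min;
    maxMin(numbers, 0, 5, &max, &min);
    printf("max = %d, min = %d\n", max, min);
    return 0;
}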
Aim: To check whether a given number is prime or not.
Algorithm:
Step 1: Start
Step 2: Read number n
Step 3: Set f=0
Step 4: For i=2 to n-1
Step 5: If n mod i==0 then
Step 6: Set f=1 and break
Step 7: Loop
Step 8: If f=0 then
print 'The given number is prime'
else
print 'The given number is not prime'
Step 9: Stop
Program code
#include<stdio.h>
#include<conio.h>
void main( )
{
    int n, i, f = 0;
    clrscr();
    printf("Enter the number: ");
    scanf("%d", &n);
    if(n < 2)            /* 0, 1 and negative numbers are not prime */
        f = 1;
    for(i = 2; i < n; i++)
    {
        if(n % i == 0)   /* a divisor was found, so n is not prime */
        {
            f = 1;
            break;
        }
    }
if(f==0)
printf("The given number is prime");
else
printf("The given number is not prime");
getch();
}
Output
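For example, a run of the above program with the input 7 would produce:

Enter the number: 7
The given number is prime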
Example
The following program performs a binary search on a sorted list, where sElement is the element being searched for:

void main()
{
    int first, last, middle, size, i, sElement, list[100];
    clrscr();
    printf("Enter size, the sorted elements and the search element: ");
    scanf("%d", &size);
    for(i = 0; i < size; i++)
        scanf("%d", &list[i]);
    scanf("%d", &sElement);
    first = 0;
    last = size - 1;
    middle = (first + last) / 2;
    while(first <= last)
    {
        if(list[middle] < sElement)
            first = middle + 1;        /* search the right half */
        else if(list[middle] == sElement)
        {
            printf("Element found at index %d.", middle);
            break;
        }
        else
            last = middle - 1;         /* search the left half */
        middle = (first + last) / 2;
    }
    if(first > last)
        printf("Element not found.");
    getch();
}
Bubble Sort

It is known as bubble sort because, with every complete iteration, the largest element in the given array bubbles up towards the last place or the highest index, just like a water bubble rises up to the water surface.
Sorting takes place by stepping through all the elements one by one, comparing each with its adjacent element, and swapping them if required.
1. Starting with the first element (index = 0), compare the current element with the next element of the array.
2. If the current element is greater than the next element of the array, swap them.
3. If the current element is less than the next element, move to the next element and repeat Step 1.
#include <stdio.h>

// simple bubble sort: on every pass, compare adjacent elements
// and swap them when they are out of order
void bubbleSort(int arr[], int n)
{
    int step, i, temp;
    for(step = 0; step < n - 1; step++)
    {
        for(i = 0; i < n - 1 - step; i++)
        {
            if(arr[i] > arr[i + 1])
            {
                temp = arr[i];
                arr[i] = arr[i + 1];
                arr[i + 1] = temp;
            }
        }
    }
}

int main()
{
    int arr[100], i, n;
    // ask user for number of elements to be sorted
    printf("Enter the number of elements to be sorted: ");
    scanf("%d", &n);
    // input elements of the array
    for(i = 0; i < n; i++)
    {
        printf("Enter element no. %d: ", i + 1);
        scanf("%d", &arr[i]);
    }
    // call the function bubbleSort
    bubbleSort(arr, n);
    return 0;
}
Although the above logic will sort an unsorted array, the algorithm is still not efficient, because the outer for loop will keep on executing for all of its n-1 iterations even if the array gets sorted after the second iteration.
So, we can clearly optimize our algorithm.
As we can see, in the first iteration swapping took place, hence we updated our flag value to 1; as a result, the execution enters the for loop again. But in the second iteration no swapping will occur, hence the value of flag will remain 0, and execution will break out of the loop.
// below we have a C program for the optimized bubble sort
#include <stdio.h>

// bubbleSort stops early when a complete pass makes no swaps
void bubbleSort(int arr[], int n)
{
    int step, j, temp, flag;
    for(step = 0; step < n - 1; step++)
    {
        flag = 0;                  // no swap has happened in this pass yet
        for(j = 0; j < n - 1 - step; j++)
        {
            if(arr[j] > arr[j + 1])
            {
                temp = arr[j];     // swap adjacent elements
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
                flag = 1;          // remember that a swap occurred
            }
        }
        if(flag == 0)              // no swaps: the array is already sorted
            break;
    }
}

int main()
{
    int arr[100], i, n;
    // ask user for number of elements to be sorted
    printf("Enter the number of elements to be sorted: ");
    scanf("%d", &n);
    // input elements of the array
    for(i = 0; i < n; i++)
    {
        printf("Enter element no. %d: ", i + 1);
        scanf("%d", &arr[i]);
    }
    // call the function bubbleSort
    bubbleSort(arr, n);
    return 0;
}
In the above code, in the function bubbleSort, if for a single complete cycle of the inner for loop (the j loop) no swapping takes place, then flag will remain 0 and we break out of the outer for loop, because the array has already been sorted.
Complexity Analysis of Bubble Sort
In Bubble Sort, n-1 comparisons will be done in the 1st pass, n-2 in the 2nd pass, n-3 in the 3rd pass, and so on. So the total number of comparisons will be:
Sum = (n-1) + (n-2) + ... + 1 = n(n-1)/2
i.e., O(n²)
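For instance, for an array of n = 6 elements the passes make 5 + 4 + 3 + 2 + 1 = 15 comparisons, which agrees with n(n-1)/2 = 6·5/2 = 15.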
Insertion Sort

Insertion sort algorithm arranges a list of elements in a particular order. In the insertion sort algorithm, every iteration moves an element from the unsorted portion to the sorted portion, until all the elements in the list are sorted.
Step by Step Process
The insertion sort algorithm is performed using the following steps...
Step 1 - Assume that the first element in the list is in the sorted portion and all the remaining elements are in the unsorted portion.
Step 2 - Take the first element from the unsorted portion and insert it into the sorted portion in the order specified.
Step 3 - Repeat the above process until all the elements from the unsorted portion are moved into the sorted portion.
Following is the sample code for insertion sort...
Insertion Sort Logic
//Insertion sort logic
for i = 1 to size-1 {
    temp = list[i];
    j = i-1;
    while ((j >= 0) && (temp < list[j])) {
        list[j+1] = list[j];
        j = j - 1;
    }
    list[j+1] = temp;
}
Example
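A runnable C version of the above insertion sort logic (the sample values in the list are assumptions for illustration):

#include <stdio.h>

int main()
{
    int list[] = {15, 5, 24, 8, 1};   /* sample unsorted list (assumed) */
    int size = 5, i, j, temp;

    /* grow the sorted portion by one element per iteration */
    for(i = 1; i < size; i++)
    {
        temp = list[i];
        j = i - 1;
        while(j >= 0 && temp < list[j])   /* shift larger elements right */
        {
            list[j + 1] = list[j];
            j = j - 1;
        }
        list[j + 1] = temp;               /* insert into its correct place */
    }
    for(i = 0; i < size; i++)
        printf("%d ", list[i]);
    return 0;
}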
Complexity of the Insertion Sort Algorithm
To sort an unsorted list with 'n' number of elements, we need to make (1+2+3+......+n-1) = (n(n-1))/2 comparisons in the worst case. If the list is already sorted, then it requires only 'n-1' comparisons, so the worst case complexity of insertion sort is O(n²) and the best case complexity is Ω(n).

Selection Sort

Selection sort algorithm arranges a list of elements in a particular order (Ascending or Descending). In selection sort, the first element in the list is selected and it is compared repeatedly with all the remaining elements in the list. If any element is smaller than the selected element (for Ascending order), then both are swapped so that the first position is filled with the smallest element in the sorted order. Next, we select the element at the second position in the list and it is compared with all the remaining elements in the list. If any element is smaller than the selected element, then both are swapped. This procedure is repeated until the entire list is sorted.

Step by Step Process

The selection sort algorithm is performed using the following steps...
Step 1 - Select the first element of the list (i.e., the element at the first position in the list).
Step 2 - Compare the selected element with all the other elements in the list.
Step 3 - In every comparison, if any element is found smaller than the selected element (for Ascending order), then both are swapped.
Step 4 - Repeat the same procedure with the element in the next position in the list till the entire list is sorted.
Complexity of the Selection Sort Algorithm
To sort an unsorted list with 'n' number of elements, we need to make ((n-1)+(n-2)+(n-3)+......+1) = (n(n-1))/2 comparisons in the worst case. Even if the list is already sorted, the same (n(n-1))/2 comparisons are required, so selection sort has O(n²) complexity in all cases.
void main(){
    int size, i, j, temp, list[100];
    clrscr();
    /* the sorting logic goes here; see the complete sketch below */
    getch();
}
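A complete selection sort sketch in the same style (the input and output details are assumptions for illustration):

#include<stdio.h>
#include<conio.h>
void main()
{
    int size, i, j, temp, list[100];
    clrscr();
    printf("Enter the size of the list: ");
    scanf("%d", &size);
    printf("Enter %d elements: ", size);
    for(i = 0; i < size; i++)
        scanf("%d", &list[i]);
    /* selection sort: compare each position with all the later elements */
    for(i = 0; i < size - 1; i++)
    {
        for(j = i + 1; j < size; j++)
        {
            if(list[j] < list[i])   /* smaller element found, so swap */
            {
                temp = list[i];
                list[i] = list[j];
                list[j] = temp;
            }
        }
    }
    printf("Sorted list: ");
    for(i = 0; i < size; i++)
        printf("%d ", list[i]);
    getch();
}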
Performance Analysis
What is Performance Analysis of an algorithm?
If we want to go from city "A" to city "B", there can be many ways of doing this. We can
go by flight, by bus, by train and also by bicycle. Depending on the availability and
convenience, we choose the one which suits us. Similarly, in computer science there
are multiple algorithms to solve a problem. When we have more than one algorithm to
solve a problem, we need to select the best one. Performance analysis helps us to do that.
When there are multiple alternative algorithms to solve a problem, we analyse them and pick the one which best suits our requirements. The formal definition is as follows...
Performance of an algorithm is a process of making evaluative judgement about algorithms.
That means when we have multiple algorithms to solve a problem, we need to select a suitable algorithm to solve that problem.
We compare the algorithms which solve the same problem in order to select the best one. To compare algorithms, we use a set of parameters or elements such as the memory required by the algorithm, its execution speed, and how easy it is to understand and implement. Generally, the performance of an algorithm depends on the following elements...
1. Whether that algorithm is providing the exact solution for the problem?
2. Whether it is easy to understand?
3. Whether it is easy to implement?
4. How much space (memory) it requires to solve the problem?
5. How much time it takes to solve the problem?
When we want to analyse an algorithm, we consider only the space and time required by that particular algorithm and ignore all the remaining elements. Based on this information, performance analysis of an algorithm can also be defined as follows...
Performance analysis of an algorithm is the process of calculating the space and time required by that algorithm.
Space Complexity
What is Space complexity?
When we design an algorithm to solve a problem, it needs some computer memory to complete its execution. For any algorithm, memory is required for the following purposes...
1. To store program instructions
2. To store constant values
3. To store variable values
4. For other things like function calls, jumping statements, etc.
The total amount of computer memory required by an algorithm to complete its execution is called the space complexity of that algorithm.
To calculate the space complexity, we must know the memory required to store different datatype values (according to the compiler). For example, the C Programming Language compiler requires the following...
2 bytes to store an Integer value
4 bytes to store a Floating Point value
1 byte to store a Character value
8 bytes to store a Double value

Example 1
Consider the following piece of code...
int square(int a)
{
return a*a;
}
In the above piece of code, it requires 2 bytes of memory to store the variable 'a' and another 2 bytes of memory for the return value.
That means, totally it requires 4 bytes of memory to complete its execution. And
this 4 bytes of memory is fixed for any input value of 'a'. This space complexity is
said to be Constant Space Complexity.
If any algorithm requires a fixed amount of space for all input values then that
space complexity is said to be Constant Space Complexity.
Example 2
Consider the following piece of code...
int sum(int A[ ], int n)
{
int sum = 0, i;
for(i = 0; i < n; i++)
sum = sum + A[i];
return sum;
}
In the above piece of code it requires...
'n*2' bytes of memory to store the array variable 'A[ ]'
2 bytes of memory for the integer parameter 'n'
4 bytes of memory for the local integer variables 'sum' and 'i' (2 bytes each)
2 bytes of memory for the return value.
That means, totally it requires '2n+8' bytes of memory to complete its execution.
Here, the total amount of memory required depends on the value of 'n'. As 'n'
value increases the space required also increases proportionately. This type of
space complexity is said to be Linear Space Complexity.
If the amount of space required by an algorithm is increased with the increase of
input value, then that space complexity is said to be Linear Space Complexity.
Time Complexity
What is Time complexity?
Every algorithm requires some amount of computer time to execute its instructions to perform the task. This computer time required is called time complexity. The total amount of computer time needed by an algorithm to complete its execution is called the time complexity of that algorithm.
To keep the calculation machine independent, we assume a simple model machine: a single-processor machine that executes instructions sequentially and takes 1 unit of time for each arithmetic, logical, assignment and return operation. Now, we calculate the time complexity of the following example code by using this model machine...
Consider the following piece of code...
int sum(int a, int b)
{
    return a+b;
}

In the above sample code, it requires 1 unit of time to calculate a+b and 1 unit of time to return the value. That means, totally it takes 2 units of time to complete its execution, and this time does not change based on the input values of 'a' and 'b'. So it takes a constant amount of time, i.e., Constant Time Complexity.

Consider the following piece of code...

int sum(int A[], int n)
{
    int sum = 0, i;
    for(i = 0; i < n; i++)
        sum = sum + A[i];
    return sum;
}

Line                       Cost    Repetition    Total
int sum = 0, i;            1       1             1
for(i = 0; i < n; i++)     2       n+1           2n+2
sum = sum + A[i];          2       n             2n
return sum;                1       1             1
                                   Total         4n+4
In the above calculation,
Cost is the amount of computer time required for a single operation in each line.
Repetition is the number of times each operation is repeated.
Total is the amount of computer time required by each operation to execute.
So the above code requires '4n+4' units of computer time to complete the task. Here the exact time is not fixed; it changes based on the n value. If we increase the n value, then the time required also increases linearly.
Totally it takes '4n+4' units of time to complete its execution and it is Linear Time
Complexity.
If the amount of time required by an algorithm is increased with the increase of
input value then that time complexity is said to be Linear Time Complexity.
Asymptotic Notations
What is Asymptotic Notation?
Whenever we want to perform analysis of an algorithm, we need to calculate the complexity of that algorithm. But when we calculate the complexity of an algorithm, it does not provide the exact amount of resource required. So instead of taking the exact amount of resource, we represent that complexity in a general form (Notation) which produces the basic nature of that algorithm. We use that general form (Notation) for the analysis process.
Asymptotic notation of an algorithm is a mathematical representation of its complexity.
Note - In asymptotic notation, when we want to represent the complexity of an algorithm, we use only the most significant terms in the complexity of that algorithm and ignore the least significant terms (here complexity can be Space Complexity or Time Complexity).
For example, consider the following time complexities of two algorithms...
Algorithm 1 : 5n² + 2n + 1
Algorithm 2 : 10n² + 8n + 3
Generally, when we analyse an algorithm, we consider the time complexity for larger values of input data (i.e., the 'n' value). In the above two time complexities, for larger values of 'n' the term '2n + 1' in Algorithm 1 is less significant than the term '5n²', and the term '8n + 3' in Algorithm 2 is less significant than the term '10n²'.
Here, for larger values of 'n' the value of the most significant terms (5n² and 10n²) is much larger than the value of the least significant terms (2n + 1 and 8n + 3). So for larger values of 'n' we ignore the least significant terms to represent the overall time required by an algorithm. In asymptotic notation, we use only the most significant terms to represent the complexity of an algorithm.
Majorly, we use THREE types of Asymptotic Notations and those are as follows...
1. Big - Oh (O)
2. Big - Omega (Ω)
3. Big - Theta (Θ)

Big - Oh Notation (O)
Big - Oh notation is used to define the upper bound of an algorithm in terms of Time Complexity. That means Big - Oh notation always indicates the maximum time required by an algorithm for all input values. That means Big - Oh notation describes the worst case of an algorithm's time complexity.
Consider function f(n) as the time complexity of an algorithm and g(n) as the most significant term. If f(n) <= C g(n) for all n >= n0, C > 0 and n0 >= 1, then we can represent f(n) as O(g(n)).
f(n) = O(g(n))
In the graph drawn for f(n) and C g(n), after a particular input value n0, C g(n) is always greater than f(n), which indicates the algorithm's upper bound.
Example
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as O(g(n)) then it must satisfy f(n) <= C g(n) for all values of C > 0 and n0 >= 1.
f(n) <= C g(n)
⇒ 3n + 2 <= C n
The above condition is always TRUE for all values of C = 4 and n >= 2. By using Big - Oh notation we can represent the time complexity as follows...
3n + 2 = O(n)
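As a quick numerical check, with C = 4 the condition 3n + 2 <= 4n holds for every n >= 2 (at n = 2 both sides equal 8).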
Big - Omega Notation (Ω)
Big - Omega notation is used to define the lower bound of an algorithm in terms of Time Complexity. That means Big - Omega notation always indicates the minimum time required by an algorithm for all input values. That means Big - Omega notation describes the best case of an algorithm's time complexity.
Consider function f(n) as the time complexity of an algorithm and g(n) as the most significant term. If f(n) >= C g(n) for all n >= n0, C > 0 and n0 >= 1, then we can represent f(n) as Ω(g(n)).
f(n) = Ω(g(n))
In the graph drawn for f(n) and C g(n), after a particular input value n0, C g(n) is always less than f(n), which indicates the algorithm's lower bound.
Example
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Ω(g(n)) then it must satisfy f(n) >= C g(n) for all values of C > 0 and n0 >= 1.
f(n) >= C g(n)
⇒ 3n + 2 >= C n
The above condition is always TRUE for all values of C = 1 and n >= 1. By using Big - Omega notation we can represent the time complexity as follows...
3n + 2 = Ω(n)
Big - Theta Notation (Θ)
Big - Theta notation is used to define the average bound of an algorithm in terms of Time Complexity. That means Big - Theta notation always indicates the average time required by an algorithm for all input values. That means Big - Theta notation describes the average case of an algorithm's time complexity.
Consider function f(n) as the time complexity of an algorithm and g(n) as the most significant term. If C1 g(n) <= f(n) <= C2 g(n) for all n >= n0, C1 > 0, C2 > 0 and n0 >= 1, then we can represent f(n) as Θ(g(n)).
f(n) = Θ(g(n))
In the graph drawn for f(n), C1 g(n) and C2 g(n), after a particular input value n0, C1 g(n) is always less than f(n) and C2 g(n) is always greater than f(n), which indicates the algorithm's average bound.
Example
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Θ(g(n)) then it must satisfy C1 g(n) <= f(n) <= C2 g(n) for all values of C1 > 0, C2 > 0 and n0 >= 1.
C1 g(n) <= f(n) <= C2 g(n)
⇒ C1 n <= 3n + 2 <= C2 n
The above condition is always TRUE for all values of C1 = 1, C2 = 4 and n >= 2. By using Big - Theta notation we can represent the time complexity as follows...
3n + 2 = Θ(n)
Summary of Asymptotic Notations
Following are the commonly used asymptotic notations to calculate the
running time complexity of an algorithm.
Ο Notation
Ω Notation
θ Notation
Big Oh Notation, Ο
The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time. It measures the worst case time complexity or the longest amount of time an algorithm can possibly take to complete.
Omega Notation, Ω
The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It measures the best case time complexity or the minimum amount of time an algorithm can possibly take to complete.
Theta Notation, θ
The notation θ(n) is the formal way to express both the lower bound and
the upper bound of an algorithm's running time. It is represented as follows
−
θ(f(n)) = { g(n) if and only if g(n) = Ο(f(n)) and g(n) = Ω(f(n)) for all n > n0. }
Common Asymptotic Notations
Following is a list of some common asymptotic notations −
constant − Ο(1)
logarithmic − Ο(log n)
linear − Ο(n)
quadratic − Ο(n²)
cubic − Ο(n³)
polynomial − n^Ο(1)
exponential − 2^Ο(n)