ALGORITHMS
CUITM205
What is an Algorithm
• An algorithm is a finite set of instructions that are
carried out in a specific order to perform a specific
task
• An algorithm is a step-by-step procedure that
defines a set of instructions to be executed in a
certain order to produce the desired output.
• Algorithms are generally created independently of
the underlying languages, i.e. an algorithm can be
implemented in more than one programming
language
Characteristics of an Algorithm
• Unambiguous − Algorithm should be clear and unambiguous. Each of
its steps (or phases), and their inputs/outputs should be clear and must
lead to only one meaning.
• Input − An algorithm should have 0 or more well-defined inputs.
• Output − An algorithm should have 1 or more well-defined outputs,
and should match the desired output.
• Finiteness − Algorithms must terminate after a finite number of steps.
• Feasibility − Should be feasible with the available resources.
• Independent − An algorithm should have step-by-step directions, which
should be independent of any programming code.
How to write an Algorithm
• There are no well-defined standards for writing
algorithms. Rather, it is problem and resource
dependent.
• Algorithms are never written to support a
particular piece of programming code.
• Algorithms are usually written in a step-by-step
manner, but this is not always the case.
• Algorithm writing is a process and is executed
after the problem domain is well-defined.
Example of an Algorithm
• An algorithm to add two numbers
• Step 1 – Start
• Step 2 – Declare variables num1, num2 and sum
• Step 3 – Read values of num1 and num2
• Step 4 – Add num1 and num2 and assign the
result to sum
• Step 5 – Display sum
• Step 6 – Stop
Algorithm Analysis
• Efficiency of an algorithm can be analyzed at two different stages,
before implementation and after implementation. They are the
following −
A Priori Analysis −
• This is a theoretical analysis of an algorithm. The analysis is performed
prior to running it on a specific system.
• In this analysis, a function is defined using some theoretical model.
Hence, we determine the time and space complexity of an algorithm just by
examining it, rather than by running it on a particular system, whose
memory, processor, and compiler would vary.
• Efficiency of an algorithm is measured by assuming that all other
factors, for example, processor speed, are constant and have no effect
on the implementation.
Algorithm Analysis
• A Posteriori Analysis − This is an empirical analysis of an
algorithm. The selected algorithm is implemented in a
programming language and executed on a target machine. In
this analysis, actual statistics, like running time and space
required, are collected.
• A posteriori analysis is performed only after running the
algorithm on a system. Its results therefore depend directly on
the system and change from system to system.
• In industry, we often cannot perform a posteriori analysis, as
the software is generally made for anonymous users, who run
it on systems different from those present in the industry.
Algorithm Complexity
• Suppose X is an algorithm and n is the size of input data,
the time and space used by the algorithm X are the two
main factors, which decide the efficiency of X.
• Time Factor – Time is measured by counting the number of
key operations such as comparisons in the sorting
algorithm.
• Space Factor − Space is measured by counting the
maximum memory space required by the algorithm.
• The complexity of an algorithm f(n) gives the running time
and/or the storage space required by the algorithm in
terms of n as the size of input data.
Space Complexity
• Space complexity of an algorithm represents the amount of memory
space required by the algorithm to run to completion. The space required
by an algorithm is equal to the sum of the following two components −
• A fixed part that is a space required to store certain data and variables,
that are independent of the size of the problem. For example, simple
variables and constants used, program size, etc.
• A variable part is a space required by variables, whose size depends on
the size of the problem. For example, dynamic memory allocation,
recursion stack space, etc.
• Space complexity S(P) of any algorithm P is S(P) = C + S(I), where C is the
fixed part and S(I) is the variable part of the algorithm, which depends on
instance characteristic I.
• The following simple example illustrates the concept.
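A minimal C sketch of the two components (the function name and the use of a heap-allocated array are illustrative choices, not part of the slides):

```c
#include <stdlib.h>

/* Fixed part C: the scalar variables below occupy constant space,
   independent of the problem size n.
   Variable part S(I): the malloc'd array grows with n, so
   S(P) = C + S(I) is dominated by the n-element array: O(n) space. */
int sum_of_first_n(int n) {
    int i, total = 0;                          /* fixed part: constant space */
    int *values = malloc(n * sizeof *values);  /* variable part: O(n) space */
    if (values == NULL)
        return -1;
    for (i = 0; i < n; i++)
        values[i] = i + 1;                     /* store 1..n */
    for (i = 0; i < n; i++)
        total += values[i];
    free(values);
    return total;
}
```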
Time Complexity
• Time complexity of an algorithm represents the amount of time
required by the algorithm to run to completion. Time requirements
can be defined as a numerical function T(n),
• where T(n) can be measured as the number of steps, provided each
step consumes constant time.
• For example, addition of two n-bit integers takes n steps.
• The total time taken is T(n) = compile time + execution time.
Since compile time is fixed, growth is governed by the execution time:
T(n) = c*n, where c is a constant.
• Here, we observe that T(n) grows linearly as the input size increases.
Compile and Run Time
Complexity
Compile Time
• Time taken to compile an algorithm
• While compiling, syntax and semantic errors are checked
Run time
• Time taken to execute a compiled program
• Depends on the number of instructions in the
algorithm
• Run time is calculated only for executable statements
Different types of time
complexities
1. Constant time – O(1)
2. Linear time – O(n)
3. Logarithmic time – O(log n)
4. Quadratic time – O(n^2)
Different types of time
complexities
• Constant time – O(1)
• An algorithm is said to have constant time, with
order O(1), when its running time does not depend
on the input size n. Irrespective of the input size n,
the runtime will always be the same.
• Linear time – O(n)
• An algorithm is said to have linear time
complexity when the running time increases
linearly with the length of the input, as when the
algorithm must check every value in the input
data; such an algorithm has order O(n).
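The two cases above can be sketched in C (the function names are illustrative):

```c
/* O(1): accessing one array element takes the same time
   no matter how large the array is. */
int first_element(const int a[]) {
    return a[0];
}

/* O(n): finding the maximum must examine every one of the
   n values, so the running time grows linearly with n. */
int max_element(const int a[], int n) {
    int i, max = a[0];
    for (i = 1; i < n; i++)
        if (a[i] > max)
            max = a[i];
    return max;
}
```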
Different types of time
complexities
• Logarithmic time – O(log n)
• An algorithm is said to have logarithmic time
complexity when it reduces the size of the input data
in each step. This means the number of operations is
not proportional to the input size; it grows only
logarithmically as the input size increases. Such
behaviour is found in binary trees and binary
search functions.
• Quadratic time – O(n^2)
• An algorithm is said to have a non-linear time
complexity when the running time increases non-
linearly (n^2) with the length of the input. Generally,
nested loops come under this order: one loop
takes O(n), and if the function involves a loop within
that loop, the total becomes O(n^2).
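Both orders can be sketched in C — binary search for O(log n), and a nested loop for O(n^2) (the function names are illustrative):

```c
/* O(log n): binary search halves the remaining input on every step.
   Returns the index of key in the sorted array a, or -1 if absent. */
int binary_search(const int a[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  /* midpoint without overflow */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;              /* discard the lower half */
        else
            hi = mid - 1;              /* discard the upper half */
    }
    return -1;
}

/* O(n^2): the outer loop runs n times, and for each outer pass the
   inner loop runs n times, so the body executes n * n times. */
long count_pairs(int n) {
    long count = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            count++;                   /* executed n*n times in total */
    return count;
}
```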
Time Complexity Big O Notation
Rules of calculating Time complexity
1. Drop lower order Terms
2. Drop all the constant multipliers
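Applying both rules to an illustrative cost function (the coefficients here are made up for the example):

```latex
T(n) = 3n^2 + 5n + 7
\;\longrightarrow\; 3n^2 \quad \text{(rule 1: drop the lower-order terms } 5n + 7\text{)}
\;\longrightarrow\; n^2 \quad \text{(rule 2: drop the constant multiplier } 3\text{)}
\qquad \Rightarrow \qquad T(n) = O(n^2)
```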
Example
int total = 0, i, sum = 0, n = 4;  // C1: constant time
for (i = 0; i < n; i++) {          // C2: runs n+1 times (+1 for the final false check)
    sum = sum + i;                 // C3: runs n times
    printf("%d\n", sum);           // C1: constant time, runs n times
}
int a = 0, b = 0, i, j, k, N = 4;  // C1: constant time
for (i = 0; i < N; i++) {          // C2: runs N+1 times
    for (j = 0; j < N; j++) {      // C2: runs N(N+1) times in total
        a = a + j;                 // C1: runs N*N times
    }
}
for (k = 0; k < N; k++) {          // C3: runs N+1 times
    b = b + k;                     // C1: runs N times
}
// Total: T(N) = c*N^2 + lower-order terms, so by the two rules this is O(N^2)
Asymptotic Analysis
• Asymptotic analysis of an algorithm refers to defining the mathematical
bounds of its run-time performance.
• Asymptotic analysis refers to computing the running time of any operation
in mathematical units of computation. For example, the running time of
one operation may be computed as f(n), while for another operation it is
computed as g(n^2).
• Using asymptotic analysis, we can very well conclude the best case,
average case, and worst case scenario of an algorithm.
• Asymptotic analysis is input-bound, i.e., if there is no input to the algorithm,
it is concluded to work in constant time. Other than the "input", all other
factors are considered constant.
Asymptotic Analysis
• Best Case − Minimum time required for
program execution.
• Average Case − Average time required for
program execution.
• Worst Case − Maximum time required for
program execution.
Asymptotic Notations
• Following are the commonly used asymptotic
notations to calculate the running time
complexity of an algorithm.
• Ο Notation
• Ω Notation
• θ Notation
Big O Notation
• The notation Ο(n) is the formal way to express
the upper bound of an algorithm's running
time.
• It measures the worst case time complexity or
the longest amount of time an algorithm can
possibly take to complete
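In the same set notation used for Ω on a later slide, the standard formal definition (supplied here, not stated in the original) is:

```latex
O(f(n)) = \{\, g(n) : \exists\, c > 0 \text{ and } n_0 \text{ such that } g(n) \le c \cdot f(n) \text{ for all } n > n_0 \,\}
```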
Omega Notation, Ω
• The notation Ω(n) is the formal way to express
the lower bound of an algorithm's running
time.
• It measures the best case time complexity or
the best amount of time an algorithm can
possibly take to complete.
Omega Notation, Ω
• For example, for a function f(n)
• Ω(f(n)) = { g(n) : there exists c > 0 and n0 such
that g(n) ≥ c.f(n) for all n > n0 }
θ Notation
• The notation θ(n) is the formal way to express
both the lower bound and the upper bound of
an algorithm's running time.
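The θ bound can be written in the same set notation (standard definition, supplied here since the slide omits it):

```latex
\theta(f(n)) = \{\, g(n) : g(n) = O(f(n)) \text{ and } g(n) = \Omega(f(n)) \text{ for all } n > n_0 \,\}
```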
Space and Time Efficiency
• There is a trade-off between memory use and
runtime performance.
• Improving an algorithm's time efficiency often
comes at the cost of its space efficiency, and vice versa.