DS Lect 02 (Complexity of Algo)
M. Qasim Riaz
Growth of Algorithm
Concept of Big O Notation
• Big O notation is a mathematical notation that describes the
limiting behavior of a function when the argument approaches
infinity.
• In the context of algorithms, Big O notation is used to describe the
efficiency of an algorithm by expressing an upper bound on its
time / space complexity in terms of the input size.
• The notation O(f(n)) represents the upper bound of the growth
rate of an algorithm's time or space complexity as a function of
the input size (n).
• It provides a way to describe how the runtime or space
requirements of an algorithm scale with the size of the input.
• Big O notation helps in comparing algorithms without bothering
about the exact details of the hardware, programming language, or
constant factors.
Growth of Algorithm
Common Big O complexities include:
O(1) - Constant time complexity: The algorithm's performance is
constant, regardless of the input size.
O(log n) - Logarithmic time complexity: The algorithm's
performance grows logarithmically with the input size.
O(n) - Linear time complexity: The algorithm's performance is
directly proportional to the input size.
O(n log n) - Linearithmic time complexity: Common in many
efficient sorting algorithms like merge sort and heap sort.
O(n^2), O(n^3), ... - Polynomial time complexity: The performance
of the algorithm is a polynomial function of the input size.
O(2^n) - Exponential time complexity: The performance doubles
with each additional element in the input.
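This lecture later walks through examples for O(1), O(n), O(log n), O(n^2), and O(n!); for exponential time, a classic sketch (added here, not from the original slides) is the naive recursive Fibonacci function: each call spawns two further calls, so the work roughly doubles with each additional unit of n.

#include <iostream>
using namespace std;

// Naive recursive Fibonacci: each call makes two further calls,
// so the call tree has on the order of 2^n nodes -> O(2^n) time.
long long fib(int n) {
    if (n <= 1) {
        return n;                 // base cases: fib(0) = 0, fib(1) = 1
    }
    return fib(n - 1) + fib(n - 2);
}

int main() {
    cout << fib(10) << endl;      // prints 55
    return 0;
}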
Growth of Algorithm
Examples: Finding the maximum element in an array
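The original slide's listings did not survive extraction; the sketches below reconstruct them from the discussion on the next slide. The sorted-array assumption in find_max_constant is added here so that the O(1) claim holds (see the note below):

#include <iostream>
using namespace std;

// Example 1: scan every element once -> O(n) time.
int find_max_linear(const int arr[], int n) {
    int max_val = arr[0];
    for (int i = 1; i < n; i++) {
        if (arr[i] > max_val) {
            max_val = arr[i];
        }
    }
    return max_val;
}

// Example 2: if the array is known to be sorted in ascending order,
// the maximum is simply the last element -> O(1) time.
int find_max_constant(const int arr[], int n) {
    return arr[n - 1];
}

int main() {
    int data[] = { 1, 3, 5, 7, 9 };             // sorted ascending
    cout << find_max_linear(data, 5) << endl;   // prints 9
    cout << find_max_constant(data, 5) << endl; // prints 9
    return 0;
}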
Growth of Algorithm
Examples: Finding the maximum element in an array
• In Example 1 (find_max_linear), the algorithm iterates through
each element in the array once. The time it takes to find the
maximum element is directly proportional to the size of the
array (n). Therefore, the time complexity is O(n) - linear time
complexity.
• find_max_linear has a linear time complexity (O(n)).
• In Example 2 (find_max_constant), the maximum is read directly
rather than computed, which is only possible when its position is
already known, e.g., when the array is sorted and the maximum is the
last element. The time it takes then does not depend on the size of
the input array.
• find_max_constant has a constant time complexity (O(1)).
• Note: a built-in max function (such as C++'s std::max_element)
still scans every element internally in a single pass, so calling it
is O(n), not O(1).
Concept of Asymptotes
Basic Concept:
• Asymptotes are lines or curves that a graph approaches but
never quite reaches.
• They are used to describe the behavior of a function as the
input values get extremely large or extremely small.
Types of Asymptotes:
• Horizontal Asymptote (HA):
• Vertical Asymptote (VA)
• Slant or Oblique Asymptote
Asymptotic Notations
• Big O Notation (O)
• Omega Notation (Ω)
• Theta Notation (Θ)
Concept of Asymptotes
Horizontal Asymptote: The asymptote is a horizontal asymptote when x tends
to infinity or –infinity and the curve approaches some constant value b.
Vertical Asymptote: The asymptote is a vertical asymptote when x approaches
some constant value c (from the left or the right) and the curve tends to
infinity or –infinity.
Oblique/Slant Asymptote: When x moves towards infinity or –infinity, the
curve moves towards a line y = mx + b. Here, m is not zero, unlike in a
horizontal asymptote.
Asymptotic Notations
Big O Notation (O):
• Represents an upper bound on the growth rate of a function. It
describes the worst-case scenario.
• Example: f(n)=O(g(n)) means that the growth of f(n) is at most
proportional to the growth of g(n)
Omega Notation (Ω):
• Represents a lower bound on the growth rate of a function. It
describes the best-case scenario.
• Example: f(n)=Ω(g(n)) means that the growth of f(n) is at least
proportional to the growth of g(n).
Theta Notation (Θ):
• Represents both upper and lower bounds on the growth rate of
a function. It describes the tight bound.
• Example: f(n)=Θ(g(n)) means that the growth of f(n) is exactly
proportional to the growth of g(n).
Asymptotic Notations
Theta Notation (Θ):
• Theta notation encloses the function from above and below.
• It represents the upper and the lower bound of the running
time of an algorithm.
• It is used for analyzing the average-case complexity of an
algorithm.
• Theta (Average Case): for the average case, you add the running
times for every possible input and take the average.
Asymptotic Notations
Theta Notation (Θ): Example
• Let g and f be the function from the set of natural numbers to
itself.
• The function f is said to be Θ(g), if there are constants c1, c2 > 0
and a natural number n0 such that c1* g(n) ≤ f(n) ≤ c2 * g(n) for
all n ≥ n0
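A small worked example (not in the original slides), assuming f(n) = 3n + 2
and g(n) = n: for all n ≥ 1 we have 3n ≤ 3n + 2 ≤ 5n, so the constants
c1 = 3, c2 = 5 and n0 = 1 satisfy c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0,
hence f(n) = Θ(n).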
Asymptotic Notations
Big-O Notation (O-notation):
• Big-O notation represents the upper bound of the running time
of an algorithm. Therefore, it gives the worst-case complexity
of an algorithm.
• It is the most widely used notation for Asymptotic analysis.
• It specifies the upper bound of a function.
• It describes the maximum time required by an algorithm, i.e., the
worst-case time complexity.
• It gives the highest possible growth rate of the running time for
a given input size.
• Big-Oh (Worst Case): it is defined as the condition that allows an
algorithm to complete statement execution in the longest amount of
time possible.
Asymptotic Notations
Big-O Notation (O-notation): Example
• If f(n) describes the running time of an algorithm, f(n) is O(g(n))
if there exist a positive constant C and a natural number n0 such
that 0 ≤ f(n) ≤ C·g(n) for all n ≥ n0.
• It gives the highest possible growth rate of the running time for a
given input size.
Asymptotic Notations
Omega Notation (Ω-Notation):
• Omega notation represents the lower bound of the running
time of an algorithm.
• Thus, it provides the best-case complexity of an algorithm.
• The execution time serves as a lower bound on the algorithm’s
time complexity.
• It is defined as the condition that allows an algorithm to
complete statement execution in the shortest amount of time.
Asymptotic Notations
Omega Notation (Ω-Notation): Example
• Let g and f be the function from the set of natural numbers to
itself. The function f is said to be Ω(g), if there is a constant c >
0 and a natural number n0 such that c*g(n) ≤ f(n) for all n ≥ n0
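A small worked example (not in the original slides), assuming f(n) = 3n + 2
and g(n) = n: since 3n ≤ 3n + 2 for all n ≥ 1, the constants c = 3 and
n0 = 1 satisfy c·g(n) ≤ f(n) for all n ≥ n0, hence f(n) = Ω(n).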
Example # 1
• Show that: f(x) = x^2 + 2x + 1 is O(x^2)
Observation:
We want to find witnesses C and k such that:
0 ≤ x^2 + 2x + 1 ≤ C·x^2 whenever x > k
First Set of Witnesses:
Consider C = 4 and k = 1.
Show that x^2 + 2x + 1 ≤ 4x^2 for x > 1: when x > 1, we have 2x < 2x^2 and
1 < x^2, so x^2 + 2x + 1 < x^2 + 2x^2 + x^2 = 4x^2.
Therefore, C = 4 and k = 1 are witnesses to f(x) = x^2 + 2x + 1 being O(x^2).
Alternative Set of Witnesses:
Consider C = 3 and k = 2.
Show that x^2 + 2x + 1 ≤ 3x^2 for x > 2: when x > 2, we have 2x < x^2 and
1 < x^2, so x^2 + 2x + 1 < x^2 + x^2 + x^2 = 3x^2.
Therefore, C = 3 and k = 2 are witnesses to f(x) = x^2 + 2x + 1 being O(x^2).
Example # 1
• Show that: f(x) = x^2 + 2x + 1 is O(x^2)
Observation:
We want to find witnesses C and k such that:
0 ≤ x^2 + 2x + 1 ≤ C·x^2 whenever x > k
Generalization:
In this example the relationship also holds in the other direction:
g(x) = x^2 is O(f(x)), so the two functions are of the same order.
This follows from the inequality x^2 ≤ x^2 + 2x + 1 for all nonnegative real
numbers x. (Note that in general, f(x) being O(g(x)) does not imply that
g(x) is O(f(x)).)
Complexity of Algorithm
• There is always more than one way to solve a problem in
computer science, using different algorithms.
• Therefore, we need a method to compare the solutions in order
to judge which one is more optimal.
• The method must adhere to the following properties:
• Independent of the machine and its configuration on which
the algorithm is running.
• Shows a direct correlation with the number of inputs.
• Can distinguish two algorithms clearly, without
ambiguity.
Complexity of Algorithm
What is meant by the Time Complexity of an Algorithm?
• Time Complexity of an algorithm/code is not equal to the actual
time required to execute a particular code, but the number of
times a statement executes.
For Example:
• Write code in C/C++ or any other language to find the maximum
among N numbers, where N takes values such as 10, 100, 1000, and
10000.
You may get surprising results, e.g.:
• For N = 10: you may get 0.5 ms,
• For N = 10,000: you may get 0.2 ms.
• Also, you will get different timings on different machines, and you
may not even get the same timings on the same machine for the same
code; the reason is the varying load on the machine at the time of
execution.
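A minimal sketch (not from the original slides) of the experiment described above, using std::chrono to time the linear scan; the reported time will differ per machine and per run:

#include <iostream>
#include <vector>
#include <chrono>
using namespace std;

int main() {
    int n = 10000;                    // vary N: 10, 100, 1000, 10000, ...
    vector<int> a(n, 1);
    a[n / 2] = 42;                    // plant a maximum somewhere

    auto start = chrono::steady_clock::now();
    int max_val = a[0];
    for (int i = 1; i < n; i++) {     // linear scan: O(n)
        if (a[i] > max_val) max_val = a[i];
    }
    auto stop = chrono::steady_clock::now();

    auto us = chrono::duration_cast<chrono::microseconds>(stop - start).count();
    cout << "max = " << max_val << ", time = " << us << " us" << endl;
    return 0;
}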
Complexity of Algorithm
Now, the question arises if time complexity is not the actual time
required to execute the code, then what is it?
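Example – 1 (the original listing is not shown; a minimal sketch consistent with the description below):

#include <iostream>
using namespace std;

int main()
{
    cout << "Hello World";   // executed exactly once, whatever the machine or input
    return 0;
}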
• In the above code “Hello World” is printed only once on the screen.
• So, the time complexity is constant: O(1), i.e., a constant amount
of time is required to execute the code every time, no matter which
operating system or machine configuration you are using.
Complexity of Algorithm
Example – 2
#include <iostream>
using namespace std;

int main()
{
    int i, n = 8;
    for (i = 1; i <= n; i++) {
        cout << "Hello World !!!\n";
    }
    return 0;
}
• Time Complexity: In the above code, “Hello World !!!” is printed n
times on the screen, where the value of n can change.
• So, the time complexity is linear: O(n), i.e., the time required to
execute the code grows in direct proportion to n.
Complexity of Algorithm
Example – 3 – Why O(log n)?
The code (sketched below) contains a loop that prints "Hello World !!!" and
doubles the value of i in each iteration until i becomes greater than n. The
loop condition is i <= n, and in each iteration, i is multiplied by 2
(i = i * 2).
The complexity is O(log n) because the number of iterations is proportional
to the logarithm of the input size n: starting from i = 1, i takes the
values 1, 2, 4, 8, ..., so the loop body runs about log2(n) + 1 times before
i exceeds n.
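The original listing is not shown; a minimal sketch matching the description:

#include <iostream>
using namespace std;

int main()
{
    int n = 16;
    // i doubles every iteration: 1, 2, 4, 8, 16 -> about log2(n) + 1 iterations
    for (int i = 1; i <= n; i = i * 2) {
        cout << "Hello World !!!\n";
    }
    return 0;
}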
Complexity of Algorithm
Example – 4
• For this one, the complexity is a polynomial equation (a quadratic
equation for a square matrix).
• Matrix of size n*n => Tsum = a·n^2 + b·n + c
• Since Tsum is in the order of n^2, the Time Complexity = O(n^2).

#include <iostream>
using namespace std;

int main()
{
    int n = 3;
    int m = 3;
    int arr[][3]
        = { { 3, 2, 7 }, { 2, 6, 8 }, { 5, 1, 9 } };
    int sum = 0;
    // two nested loops over an n x m matrix -> n*m additions, O(n^2) for n = m
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < m; j++) {
            sum += arr[i][j];
        }
    }
    cout << sum << endl;  // prints 43
    return 0;
}
Complexity of Algorithm
Example - 7 Why O(n!)?
Factorial Time Complexity (n!)

#include <iostream>
using namespace std;

long factorial(int n) {
    if (n <= 1) {
        return 1;                     // base case
    }
    return n * factorial(n - 1);      // exactly one recursive call per level
}

int main() {
    int n = 5;
    cout << factorial(n) << endl;     // prints 120
    return 0;
}

Recursive Structure:
• The function factorial is defined recursively, and it calls itself with a
smaller value (n - 1) until it reaches the base case (n <= 1).
Work at Each Level:
• At each level of recursion, the function performs a constant amount of
work: one multiplication (n × the result of the recursive call).
• The number of levels is proportional to n, as the argument decrements by 1
in each recursive call until reaching the base case. This leads to n levels
of recursion.
Where the n! Actually Is:
• Since each level makes exactly one recursive call and does constant work,
the running time of this function is O(n); the factorial n! is the value it
computes, not its running time.
• Genuine O(n!) time arises when each call branches into several recursive
calls: n choices at the first level, n − 1 at the second, and so on, so the
total number of calls is n × (n − 1) × … × 2 × 1 = n!. Generating all
permutations of n items is the classic example.
Complexity of Algorithm
Example - 8 Question:
Is it necessary to have recursion for n!-kind of complexity?
Answer:
• No, it's not strictly necessary to use recursive functions to achieve
factorial time complexity (O(n!)).
• It's possible to implement algorithms with similar complexity using
iterative approaches or other constructs.
• The key characteristic is the factorial growth in the number of
operations with the input size.
• For comparison, the iterative code below computes the value n! but, like
Example 7, runs in O(n) time; an iterative loop that genuinely performs
O(n!) operations is sketched after it.

#include <iostream>
using namespace std;

long factorial(int n) {
    long result = 1;
    for (int i = 1; i <= n; ++i) {    // n iterations -> O(n) time
        result *= i;
    }
    return result;
}

int main() {
    int n = 5;
    cout << factorial(n) << endl;     // prints 120
    return 0;
}
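A minimal sketch (not from the original slides) of an iterative algorithm whose running time actually grows as O(n!), using std::next_permutation to visit every ordering of n items:

#include <iostream>
#include <algorithm>
#include <vector>
using namespace std;

int main() {
    int n = 4;
    vector<int> items = { 1, 2, 3, 4 }; // must start sorted to enumerate all orderings

    long long count = 0;
    // next_permutation steps through all n! orderings, so the loop body runs n! times
    do {
        ++count;                        // a real algorithm would process the permutation here
    } while (next_permutation(items.begin(), items.end()));

    cout << count << " permutations visited for n = " << n << endl; // prints 24
    return 0;
}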