DS Lect 02 (Complexity of Algo)

Lecture 01: Introduction CS221 Data Structures & Algo.

Finding the Complexity

M. Qasim Riaz

Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Topi



Growth of Algorithm

• We will study the number of operations used by these
algorithms.
• We will estimate the number of comparisons used by the linear
and binary search algorithms to find an element in a sequence
of n elements.
• We will also estimate the number of comparisons used by the
bubble sort and by the insertion sort to sort a list of n
elements.
• The time required to solve a problem depends on more than
only the number of operations it uses.
• The time also depends on the hardware and software used to
run the program that implements the algorithm.


Growth of Algorithm
Concept of Big O Notation
• Big O notation is a mathematical notation that describes the
limiting behavior of a function when the argument approaches
infinity.
• In the context of algorithms, Big O notation is used to describe
the efficiency of an algorithm by expressing an upper bound on its
time / space complexity in terms of the input size.
• The notation O(f(n)) represents the upper bound of the growth
rate of an algorithm's time or space complexity as a function of
the input size (n).
• It provides a way to describe how the runtime or space
requirements of an algorithm scale with the size of the input.
• Big O notation helps in comparing algorithms without worrying
about the exact details of the hardware, programming language, or
constant factors.


Growth of Algorithm
Common Big O complexities include:
O(1) - Constant time complexity: The algorithm's performance is
constant, regardless of the input size.
O(log n) - Logarithmic time complexity: The algorithm's
performance grows logarithmically with the input size.
O(n) - Linear time complexity: The algorithm's performance is
directly proportional to the input size.
O(n log n) - Linearithmic time complexity: Common in many
efficient sorting algorithms like merge sort and heap sort.
O(n^2), O(n^3), ... - Polynomial time complexity: The performance
of the algorithm is a polynomial function of the input size.
O(2^n) - Exponential time complexity: The performance doubles
with each additional element in the input.


Growth of Algorithm
Examples: Finding the maximum element in an array

# Example 1: Linear time complexity (O(n))


def find_max_linear(arr):
    max_element = arr[0]
    for element in arr:
        if element > max_element:
            max_element = element
    return max_element

# Example 2: Using the built-in max function


def find_max_constant(arr):
    return max(arr)


Growth of Algorithm
Examples: Finding the maximum element in an array
• In Example 1 (find_max_linear), the algorithm iterates through
each element in the array once. The time it takes to find the
maximum element is directly proportional to the size of the
array (n). Therefore, the time complexity is O(n) - linear time
complexity.
• In Example 2 (find_max_constant), the built-in max function is
used. Although the code is a single line, max must still scan every
element of the array to find the largest one, so its time complexity
is also O(n), not O(1). Shorter source code does not imply lower
complexity; what matters is how the number of operations grows
with the input size.
• A genuinely O(1) operation is one whose cost is independent of
the input size, such as reading the element at a fixed index.


Concept of Asymptotes
Basic Concept:
• Asymptotes are lines or curves that a graph approaches but
never quite reaches.
• They are used to describe the behavior of a function as the
input values get extremely large or extremely small.
Types of Asymptotes:
• Horizontal Asymptote (HA):
• Vertical Asymptote (VA)
• Slant or Oblique Asymptote
Asymptotic Notations
• Big O Notation (O)
• Omega Notation (Ω)
• Theta Notation (Θ)


Concept of Asymptotes
Horizontal Asymptote: The asymptote is a horizontal asymptote when x tends
to infinity or –infinity, and the curve approaches some constant value b.
Vertical Asymptote: The asymptote is a vertical asymptote when x approaches
some constant value c from left to right, and the curve tends to infinity or -
infinity.
Oblique/Slant Asymptote: When x moves towards infinity or –infinity and the
curve moves towards a line y = mx + b. Here, m is not zero as in horizontal
asymptote.


Asymptotic Notations
Big O Notation (O):
• Represents an upper bound on the growth rate of a function. It
describes the worst-case scenario.
• Example: f(n)=O(g(n)) means that the growth of f(n) is at most
proportional to the growth of g(n)
Omega Notation (Ω):
• Represents a lower bound on the growth rate of a function. It
describes the best-case scenario.
• Example: f(n)=Ω(g(n)) means that the growth of f(n) is at least
proportional to the growth of g(n).
Theta Notation (Θ):
• Represents both upper and lower bounds on the growth rate of
a function. It describes the tight bound.
• Example: f(n)=Θ(g(n)) means that the growth of f(n) is exactly
proportional to the growth of g(n).


Asymptotic Notations
Theta Notation (Θ):
• Theta notation encloses the function from above and below.
• It represents the upper and the lower bound of the running
time of an algorithm.
• It is used for analyzing the average-case complexity of an
algorithm.
• Theta (average case): you add the running times for each
possible input and take the average over all inputs.


Asymptotic Notations
Theta Notation (Θ): Example
• Let f and g be functions from the set of natural numbers to
itself.
• The function f is said to be Θ(g) if there are constants c1, c2 > 0
and a natural number n0 such that c1 * g(n) ≤ f(n) ≤ c2 * g(n) for
all n ≥ n0.


Asymptotic Notations
Big-O Notation (O-notation):
• Big-O notation represents the upper bound of the running time
of an algorithm. Therefore, it gives the worst-case complexity
of an algorithm.
• It is the most widely used notation for Asymptotic analysis.
• It specifies the upper bound of a function.
• It gives the maximum time required by an algorithm, i.e. the
worst-case time complexity.
• It bounds the highest possible running time for a given input.
• Big-O (worst case): the condition under which an algorithm
takes the longest possible amount of time to complete its
statement executions.


Asymptotic Notations
Big-O Notation (O-notation): Example
• If f(n) describes the running time of an algorithm, f(n) is O(g(n))
if there exist positive constants C and n0 such that 0 ≤ f(n) ≤
C * g(n) for all n ≥ n0.


Asymptotic Notations
Omega Notation (Ω-Notation):
• Omega notation represents the lower bound of the running
time of an algorithm.
• Thus, it provides the best-case complexity of an algorithm.
• It serves as a lower bound on the algorithm's execution time:
for large enough inputs, the running time is never below it.
• Omega (best case): the condition under which an algorithm
completes its statement executions in the shortest amount of
time.


Asymptotic Notations
Omega Notation (Ω-Notation): Example
• Let f and g be functions from the set of natural numbers to
itself. The function f is said to be Ω(g) if there is a constant c >
0 and a natural number n0 such that c * g(n) ≤ f(n) for all n ≥ n0.


Growth of Function for Big-O Notation
• Imagine you have two functions, f(x) and g(x), that describe the
time it takes for two different algorithms to solve a problem of
size x.
We Remember Big O Definition:
• We say that f(x) is O(g(x)) if there are constants C and k such
that ∣f(x)∣ ≤ C∣g(x)∣ whenever x > k.
• In simpler terms, f(x) is Big O of g(x) if, after a certain point f(x)
doesn't grow faster than a constant multiple of g(x).
Intuition behind this idea:
• This means that g(x) acts as an upper bound for f(x) after a
certain point.
• If you can find constants C and k where ∣f(x)∣ is always less than
or equal to C∣g(x)∣ for all x>k, then f(x) is O(g(x)).


Growth of Function for Big-O Notation
Concept of Witness:
• C and k are called witnesses. They are used to demonstrate that
the relationship f(x) is O(g(x)) is true.
• You only need one pair of witnesses to establish f(x) is O(g(x)).
• There can be infinitely many pairs of witnesses. If C and k are
witnesses, then any C′ and k′ with C < C′ and k < k′ are also
witnesses.
Example: Let's say you have two functions:
f(x) = x^2 + 2x + 1 and g(x) = x^2
Now, we want to show that f(x) is O(g(x)).
• Step 1: Find witnesses:
Choose C = 4 and k = 1.
• Step 2: Show the relationship:
|x^2 + 2x + 1| ≤ 4|x^2| whenever x > 1.
This is true because for x > 1 we have x ≤ x^2 and 1 ≤ x^2, so
x^2 + 2x + 1 ≤ x^2 + 2x^2 + x^2 = 4x^2, i.e. f(x) doesn't grow
faster than 4x^2. So, in this case, f(x) is O(g(x)) with witnesses
C = 4 and k = 1.

Growth of Function for Big-O Notation
Example # 1
• Show that: f(x) = x^2 + 2x + 1 is O(x^2)
Observation:
We want to find witnesses C and k such that:
0 ≤ x^2 + 2x + 1 ≤ C * x^2 for all x > k
First Set of Witnesses:
Consider C = 4 and k = 1.
Show that 0 ≤ x^2 + 2x + 1 ≤ 4x^2 for x > 1:

0 ≤ x^2 + 2x + 1 ≤ x^2 + 2x^2 + x^2 = 4x^2

Therefore, C = 4 and k = 1 are witnesses to f(x) = x^2 + 2x + 1 being O(x^2).


Growth of Function for Big-O Notation

Example # 1
• Show that: f(x) = x^2 + 2x + 1 is O(x^2)
Observation:
We want to find witnesses C and k such that:
0 ≤ x^2 + 2x + 1 ≤ C * x^2 for all x > k
Alternative Set of Witnesses:
Consider C = 3 and k = 2.
Show that 0 ≤ x^2 + 2x + 1 ≤ 3x^2 for x > 2:

0 ≤ x^2 + 2x + 1 ≤ x^2 + x^2 + x^2 = 3x^2

Therefore, C = 3 and k = 2 are witnesses to f(x) = x^2 + 2x + 1 being O(x^2).


Growth of Function for Big-O Notation

Example # 1
• Show that: f(x) = x^2 + 2x + 1 is O(x^2)
Observation:
We want to find witnesses C and k such that:
0 ≤ x^2 + 2x + 1 ≤ C * x^2 for all x > k
Observation about the other direction:
In this particular case it is also true that g(x) = x^2 is
O(f(x)) = O(x^2 + 2x + 1), because x^2 ≤ x^2 + 2x + 1 for all
nonnegative real numbers x. So the two functions are of the same
order. (This does not hold in general: f(x) being O(g(x)) does not
by itself imply that g(x) is O(f(x)).)


Understanding Time Complexity


We have different types of complexities, including:
O(1) - Constant time complexity: The algorithm's performance is
constant, regardless of the input size.
O(log n) - Logarithmic time complexity: The algorithm's
performance grows logarithmically with the input size.
O(n) - Linear time complexity: The algorithm's performance is
directly proportional to the input size.
O(n log n) - Linearithmic time complexity: Common in many
efficient sorting algorithms like merge sort and heap sort.
O(n^2), O(n^3), ... - Polynomial time complexity: The performance
of the algorithm is a polynomial function of the input size.
O(2^n) - Exponential time complexity: The performance doubles
with each additional element in the input.


Complexity of Algorithm
• There is always more than one way to solve a problem in
computer science, using different algorithms.
• Therefore, we need a method to compare the solutions in
order to judge which one is more optimal.
• The method must adhere to the following properties:
• Independent of the machine and its configuration, on
which the algorithm is running on.
• Shows a direct correlation with the number of inputs.
• Can distinguish two algorithms clearly without
ambiguity.


Complexity of Algorithm
What is meant by the Time Complexity of an Algorithm?
• Time Complexity of an algorithm/code is not the actual time
required to execute a particular piece of code, but rather the
number of times each statement executes.
For Example:
• Write code in C/C++ or any other language to find the maximum
between N numbers, where N varies from 10, 100, 1000, and
10000.
You will get surprising results, i.e.:
• For N = 10: you may get 0.5 ms,
• For N = 10,000: you may get 0.2 ms.
• You will also get different timings on different machines, and
even on the same machine for the same code, because of factors
such as the current load on the machine.


Complexity of Algorithm
Now, the question arises if time complexity is not the actual time
required to execute the code, then what is it?

“Instead of measuring the actual time required to execute each statement in
the code, Time Complexity considers how many times each statement
executes.”

Example

#include <iostream>
using namespace std;

int main()
{
    cout << "Hello World";
    return 0;
}

• In the above code, "Hello World" is printed only once on the screen.
• So, the time complexity is constant: O(1), i.e. a constant amount of time is
required to execute the code every time, no matter which operating system
or machine configuration you are using.


Complexity of Algorithm
Example – 2
#include <iostream>
using namespace std;

int main()
{
    int i, n = 8;
    for (i = 1; i <= n; i++) {
        cout << "Hello World !!!\n";
    }
    return 0;
}

• Time Complexity: In the above code, "Hello World !!!" is printed n times on
the screen, where the value of n can change.
• So, the time complexity is linear: O(n), i.e. the time required to execute the
code grows linearly with n.

Complexity of Algorithm
Example – 3

int main()
{
    int i, n = 8;
    for (i = 1; i <= n; i = i * 2) {
        cout << "Hello World !!!\n";
    }
    return 0;
}

Step-wise Analysis
1. Initialization:
   int i, n = 8; initializes two integer variables i and n. This is a
   constant-time operation, denoted as O(1).
2. Loop Condition:
   The loop condition is i <= n. The loop will continue as long as i is
   less than or equal to n.
3. Loop Body:
   Inside the loop, there is a single statement that prints "Hello World
   !!!". This is a constant-time operation, denoted as O(1).
4. Loop Update:
   i = i * 2 doubles the value of i in each iteration.

Let's determine how many iterations the loop will have:
• Initially, i = 1.
• In the first iteration, i is doubled to become 2.
• In the second iteration, i is doubled to become 4.
• In the third iteration, i is doubled to become 8.
• In the fourth iteration, i is doubled to become 16.
• The loop will terminate after the fourth iteration because i
  becomes greater than n (8).

Time Complexity Analysis:
The code contains a loop that prints "Hello World !!!" and doubles the value of i in
each iteration until i becomes greater than n. The loop condition is i <= n, and in
each iteration, i is multiplied by 2 (i = i * 2).


Complexity of Algorithm
Example – 3 – Why O(log n)?
• The loop iterates a number of times proportional to the logarithm of the
input size n: i takes the values 1, 2, 4, ..., i.e. 2^t after t iterations, and the
loop stops once 2^t > n.
• Solving 2^t ≤ n gives t ≤ log2(n), so the loop runs about log2(n) + 1 times.
Dropping constants and lower-order terms, the time complexity is O(log n).


Complexity of Algorithm
Example – 4

#include <iostream>
using namespace std;

int main()
{
    int n = 3;
    int m = 3;
    int arr[][3]
        = { { 3, 2, 7 }, { 2, 6, 8 }, { 5, 1, 9 } };
    int sum = 0;

    // Iterating over all 1-D arrays in the 2-D array
    for (int i = 0; i < n; i++) {
        // Summing all elements in the ith 1-D array
        for (int j = 0; j < m; j++) {
            // Adding the jth element of the ith row
            sum += arr[i][j];
        }
    }
    cout << sum << endl;
    return 0;
}

• Time Complexity: O(n*m)
• The program iterates through all the elements in the 2D array using two
nested loops. The outer loop iterates n times, and the inner loop iterates m
times for each iteration of the outer loop. Therefore, the time complexity of
the program is O(n*m).
• For a square matrix of size n*n, the total work is a polynomial (quadratic)
equation: Tsum = a.n^2 + b.n + c. Since Tsum is of the order of n^2, the
time complexity is O(n^2).

Complexity of Algorithm
Example – 5

int list_Sum(int A[], int n)
{
    int sum = 0;                 // cost=1  no of times=1
    for (int i = 0; i < n; i++)  // cost=2  no of times=n+1 (+1 for the final false condition)
        sum = sum + A[i];        // cost=2  no of times=n
    return sum;                  // cost=1  no of times=1
}

Step-wise Analysis
1. Initialization:
   int sum = 0; initializes the sum variable. This is a constant-time
   operation, denoted as O(1). The loop initialization in
   for (int i = 0; i < n; i++) also contributes a constant cost.
2. Loop Condition:
   The loop condition i < n is checked n+1 times (including the last
   check, when i = n and the loop terminates).
3. Loop Body:
   Inside the loop, sum = sum + A[i]; performs a constant-time
   operation for each iteration. The loop body is executed n times.
4. Return Statement:
   return sum; is a constant-time operation.

Summary:
• The time complexity of the list_Sum function is O(n), where n is
the size of the array A[].
• The code has linear time complexity, as the running time grows
linearly with the size of the input array.
• In Big-O notation, we drop the constant factors and lower-order
terms. Therefore, the time complexity of the provided code is O(n).
Complexity of Algorithm
Example – 6

#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    int i, n = 8;
    for (i = 2; i <= n; i = pow(i, 2)) {
        cout << "Hello World !!!\n";
    }
    return 0;
}

Big (O) Complexity: O(log2(log2(n)))

Step-wise Analysis
1. Initialization:
   int i, n = 8; initializes two integer variables, i and n. This is a
   constant-time operation, denoted as O(1).
2. Loop Condition:
   The loop condition i <= n is checked about log2(log2(n)) + 1 times,
   because i is updated using i = pow(i, 2) in each iteration.
3. Loop Body:
   Inside the loop, there is a single statement that prints "Hello World
   !!!". This is a constant-time operation, denoted as O(1).
4. Loop Update:
   i = pow(i, 2) updates the value of i to its square in each iteration.

Summary
• Starting from i = 2 and squaring each time, after t iterations
i = 2^(2^t). The loop runs while i <= n, i.e. while 2^t <= log2(n),
which gives t <= log2(log2(n)).
• The loop therefore performs about log2(log2(n)) + 1 iterations, so
the code has doubly logarithmic time complexity, O(log2(log2(n))).

Complexity of Algorithm
Example - 7: Computing n! recursively
#include <iostream>
using namespace std;

long factorial(int n) {
    if (n <= 1) {
        return 1;
    }
    return n * factorial(n - 1);
}

int main() {
    int n = 5;
    cout << factorial(n) << endl;
    return 0;
}

Recursive Structure:
• The function factorial is defined recursively, and it calls itself with a
smaller value (n - 1) until it reaches the base case (n <= 1).
Work at Each Level:
• At each level of recursion, the function performs a constant amount
of work: one multiplication (n times the result of the recursive call).
• The number of levels is proportional to n, as the argument
decrements by 1 in each recursive call until reaching the base case.
Time vs. Value:
• The total work is therefore n constant-time steps, so the running
time of this function is O(n), even though the value it computes,
n×(n−1)×(n−2)×…×2×1 = n!, grows factorially.
• O(n!) time complexity arises in algorithms whose number of
operations grows factorially, e.g. enumerating all n! orderings of n
items, not in computing the number n! itself.


Complexity of Algorithm
Example - 8: Computing n! iteratively
#include <iostream>
using namespace std;

long factorial(int n) {
    long result = 1;
    for (int i = 1; i <= n; ++i) {
        result *= i;
    }
    return result;
}

int main() {
    int n = 5;
    cout << factorial(n) << endl;
    return 0;
}

Question:
Is recursion necessary for O(n!)-style complexity?
Answer:
• No, it's not. Recursion is not what creates factorial time
complexity, and computing n! does not require it: this iterative
version performs the same n multiplications as Example 7 and
likewise runs in O(n) time.
• The key characteristic of an O(n!) algorithm is factorial growth in
the number of operations with the input size (e.g. enumerating all
permutations), which can be implemented either recursively or
iteratively.

Complexity of Algorithm
Example - 9
Recursion with Linear Time Complexity (n)

#include <iostream>
using namespace std;

void printElements(int arr[], int index, int size) {
    if (index < size) {
        cout << arr[index] << " ";
        printElements(arr, index + 1, size);
    }
}

int main() {
    int arr[] = {1, 2, 3, 4, 5};
    int size = sizeof(arr) / sizeof(arr[0]);

    cout << "Elements in the array: ";
    printElements(arr, 0, size);
    return 0;
}

Question:
Can recursion have linear time complexity?
Answer:
• Yes, it's possible. Each call does constant work and makes exactly
one recursive call on the next index, so there are size calls in total:
O(n), as shown in Example 9.


Complexity of Algorithm
Example - 10
O(log(n)) Complexity

#include <iostream>
using namespace std;

int binarySearch(const int arr[], int low, int high, int target) {
    if (low <= high) {
        int mid = low + (high - low) / 2;

        if (arr[mid] == target) {
            return mid;  // Element found
        } else if (arr[mid] < target) {
            return binarySearch(arr, mid + 1, high, target);  // Search right half
        } else {
            return binarySearch(arr, low, mid - 1, target);   // Search left half
        }
    }

    return -1;  // Element not found
}

Question:
Can recursion have log(n) time complexity?
Answer:
• Yes, it's possible.
• This binary search implementation has a logarithmic time
complexity because, with each recursive call, it divides the search
space in half.
