08 - Searching and Complexity

The document discusses algorithm efficiency, complexity, and various searching algorithms, focusing on time complexity and the Big O notation. It explains how to determine the complexities of algorithms through examples, including iterative and recursive linear searches, as well as binary search. The document concludes that binary search operates in logarithmic time, making it more efficient than linear search methods.

CSC 245

Objects and Data Abstraction


Topic 8 – Searching and Complexity

Dr. Rasha Alkhansa

Slides adapted from the lectures of Dr. Nadine Abbass


Outline

• Efficiency of Algorithms
• Algorithm Complexity
• Searching Algorithms

Efficiency of Algorithms
• What is an Algorithm?
• In mathematics and computer science, an algorithm is a finite sequence of well-defined, computer-implementable instructions, typically used to solve a class of problems or to perform a computation (Merriam-Webster Online Dictionary)

Example Algorithm:
The Euclidean method to find the GCD
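The slide's illustration is an image that does not survive here; the following is a minimal Java sketch of Euclid's method, not the exact code from the slides:

public class Gcd {
    // Euclidean method: repeatedly replace (a, b) with (b, a mod b)
    // until the remainder is 0; the last nonzero value is the GCD.
    static int gcd(int a, int b) {
        while (b != 0) {
            int r = a % b;
            a = b;
            b = r;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(gcd(48, 18)); // prints 6
    }
}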
Complexity of an Algorithm
• The complexity is related to the amount of resources that an algorithm uses to execute (e.g., CPU time, memory space, etc.)
• In what follows, we focus on time complexity, but similar concepts apply to memory (or space) complexity.
• The Big O notation describes how fast a function f(n) grows with respect to its input n, i.e., as n grows.
• The worst-case time complexity of an algorithm with respect to an input n is often described using the big O notation:
• f(n) represents the number of basic operations in the algorithm
• The larger the growth of f(n), the slower the algorithm (i.e., more basic operations, so the algorithm needs more time to finish)
The Big-O Notation
• Describes the upper bound of a function's growth rate. In other words, it gives an upper limit on how quickly the function grows, ignoring lower-order terms.
• Given a function g(n), we say that f(n) = O(g(n)) if there exist a constant c and a value n0 such that f(n) ≤ c * g(n) for all n ≥ n0. This means that f(n) grows no faster than g(n), up to a constant factor.
Examples
• Complexity examples ordered from fastest to slowest, assuming f(n) represents the number of basic operations executed by the algorithm:
• O(1) (constant) < O(log n) (logarithmic) < O(n) (linear) < O(n log n) < O(n²) (quadratic) < O(2ⁿ) (exponential) < O(n!) (factorial)
Difference between O(n), Ω(n), & Θ(n)?
• Big O gives an upper bound, Big Ω gives a lower bound, and Big Θ gives both lower and upper bounds (i.e., the exact growth rate).
• E.g., if f(n) = 2n + 3, then:
• f(n) = O(n), but also f(n) = O(n²), f(n) = O(n³), etc. (all are upper bounds).
• On the other hand, f(n) = Ω(n) (Ω(g(n)) is a lower bound, defined by f(n) ≥ c1 * g(n) for all n ≥ n0).
• Hence, f(n) = Θ(n) (i.e., both a lower and an upper bound).
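To make the bounds concrete (the constants below are chosen for illustration; the slides do not fix specific values): for f(n) = 2n + 3, we have 2n + 3 ≤ 5n for all n ≥ 1, so c = 5 and n0 = 1 witness f(n) = O(n); and 2n + 3 ≥ 2n for all n ≥ 1, so c1 = 2 witnesses f(n) = Ω(n). Together these give f(n) = Θ(n).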
How to Determine the Complexities?
• In general, how can you determine the running time of a block of code?
• Answer: the time required by a function/procedure is proportional to the number of basic operations that it performs.

• A basic operation takes a constant time (independent of the input size)
• Examples of basic operations:
• One arithmetic operation (e.g., +, *)
• One assignment (e.g., int x = 2;)
• One test (e.g., x == y)
• One read of a primitive type (e.g., reading an int with a Scanner)
Simple Statements
Statement 1;
Statement 2;
...
Statement k;

• The total time is found by adding the times for all statements:
total time = time(statement 1) + time(statement 2) + … + time(statement k)
• If each statement is "simple" (only involves basic operations), then the time for each statement is constant, and therefore the total time will also be constant: O(1)
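For instance, a straight-line sequence like the following (an illustrative sketch, not from the slides) runs in constant time:

int x = 5;             // one assignment: O(1)
int y = x * 2 + 1;     // two arithmetic ops and an assignment: O(1)
boolean b = (y == 11); // one test and an assignment: O(1)
// Total: a fixed number of basic operations, so O(1) overall.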
If-Else
if (condition) {
    block 1 (sequence of statements)
} else {
    block 2 (sequence of statements)
}
• Only one of the two possible blocks of code will execute
→ worst-case running time is the slower of the two possibilities:
Running time = max(time(block 1), time(block 2))
• For example, if block 1 takes O(1) time and block 2 takes O(n) time, then the if-else block takes O(n) time in the worst case.
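A small sketch of this rule (illustrative code, not taken from the slides); the worst case is dominated by the O(n) branch:

static int ifElseExample(int[] A, boolean fastPath) {
    if (fastPath) {
        return A[0];            // block 1: O(1)
    } else {
        int sum = 0;            // block 2: O(n), scans the whole array
        for (int i = 0; i < A.length; i++) {
            sum += A[i];
        }
        return sum;
    }
    // Worst case = max(O(1), O(n)) = O(n)
}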
Loops
Running time of a loop = number of iterations × time for one iteration

Examples:
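The slide's examples are images; the following Java sketches illustrate the same rule under the assumption that n is given:

// Example 1: n iterations × O(1) work per iteration = O(n)
int sum = 0;
for (int i = 0; i < n; i++) {
    sum += i;
}

// Example 2: n outer iterations × n inner iterations × O(1) = O(n²)
int count = 0;
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        count++;
    }
}

// Example 3: the loop variable doubles each time, so about
// log2(n) iterations × O(1) = O(log n)
for (int i = 1; i < n; i = i * 2) {
    sum += i;
}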
Loops
• Consider the loop:
for (int i = 0; i < N - 1; i++) {
    for (int j = i + 1; j < N; j++) {
        // 3 simple statements
    }
}

• The first time through the outer loop, the inner loop executes N-1 times; the next time N-2 times, and the last time once.
• Accordingly, the total number of times the sequence of statements executes is:
T(n) = 3(n-1) + 3(n-2) + … + 3, or
T(n) = 3(n-1 + n-2 + … + 1)
Loops
• (n-1) + (n-2) + … + 1 is a well-known series whose value is n × (n-1) / 2
• We can therefore reduce the expression to:
T(n) = 3 × n(n-1)/2 = 1.5n² - 1.5n
• This polynomial is zero when n is 1
• For values greater than 1, 1.5n² is always greater than 1.5n² - 1.5n
• Therefore, we can use 1 for n0 and 1.5 for c to conclude that T(n) is O(n²)
Statements with method calls
• When a statement involves a method call, the complexity of the statement includes the complexity of the method.

Example:
• Assume method f(k) takes constant time and method g(k) takes time proportional to (linear in) the value of its parameter k. Then:
• f(k) takes O(1) time
• g(k) takes O(k) time

• Now consider the loop below inside the main() method. What is its running time?
for (int i = 0; i < n; i++) {
    f(i);
    g(i);
}
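The slide leaves the answer as an exercise; working it out: iteration i costs O(1) for f(i) plus O(i) for g(i), so the total is
c·n + c'·(0 + 1 + … + (n-1)) = c·n + c'·n(n-1)/2,
which grows like n², so the loop runs in O(n²) time.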
General Comments
• To calculate the big-O notation, it is enough to know in a rough manner how the number of operations is affected by the input size.
• For example, if the largest term in a formula is no more than a constant times n², then the algorithm is said to be "big-O of n²", written O(n²), and the algorithm is called quadratic.
• Similarly for linear and logarithmic algorithms.
• Multiplicative constants are ignored in big-O notation.
Iterative Linear Search
• Problem: given an array of elements and a target:
• Search for the target in the array
• Return the index where the target is found
• Return -1 if not found

• Iterative solution:
int iterativeLinearSearch(int[] A, int a) {
    for (int i = 0; i < A.length; i++) {
        if (A[i] == a)
            return i;
    }
    return -1;
}
• Running time: O(n), where n = A.length
Recursive Linear Search
int recursiveLinearSearch(int[] Array, int a, int currentPosition) {
    if (currentPosition == Array.length)
        return -1;
    if (Array[currentPosition] == a)
        return currentPosition;
    return recursiveLinearSearch(Array, a, currentPosition + 1);
}

• Similarly, the running time is O(n), where n = Array.length
Binary Search
• If the given array is sorted, we can exploit this fact to search faster.
• Algorithm:
• Check the middle element at index mid in the array
• If the target is found at mid, we are done
• If the target is larger than A[mid], move right:
• i.e., update the search space so that we search next only in the right half of the array (right side of mid)
• If the target is smaller than A[mid], move left
• Repeat the steps above until the target is found (return its index), or until the search range becomes empty (return -1).
Iterative Binary Search
int iterativeBinarySearch(int[] A, int a) {
    int start = 0;
    int end = A.length - 1;
    while (start <= end) {
        int mid = (start + end) / 2; // integer division
        if (A[mid] == a)
            return mid;
        else if (A[mid] < a)
            start = mid + 1;
        else
            end = mid - 1;
    }
    return -1;
}
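• One detail worth noting (a standard refinement, not mentioned on the slide): for very large arrays, start + end can overflow the int range before the division. A safe variant computes the midpoint as:

int mid = start + (end - start) / 2; // same result, but avoids int overflow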
Complexity of Binary Search
• Let n = A.length
• In terms of n, how many iterations does the loop execute in the worst case (i.e., the maximum possible number of iterations, in case the target is not found)?
Complexity of Binary Search
• Each iteration halves the search space, so find the value of k such that the search space size becomes indivisible (cannot be divided further, i.e., of size 1):
• i.e., find k such that n / 2^k = 1
• Solution: k = log₂(n)
• The solution is actually intuitive:
• Example: how many times do we need to divide 64 by 2 until we reach 1?
• Answer: log₂(64) = 6 times:
• 64 → 32 → 16 → 8 → 4 → 2 → 1
Complexity of Binary Search
• Conclusion: binary search takes logarithmic running time, O(log(n)), which is faster than O(n).
• Why: based on the analysis on the previous slide, the loop repeats at most log(n) times, and the operations inside and outside the loop are all basic operations that take constant time.
Recursive Binary Search
int recursiveBinarySearch(int[] Array, int a, int start, int end) {
    if (start <= end) {
        int mid = (start + end) / 2; // integer division
        if (Array[mid] < a)
            return recursiveBinarySearch(Array, a, mid + 1, end);
        else if (Array[mid] > a)
            return recursiveBinarySearch(Array, a, start, mid - 1);
        else // Array[mid] == a
            return mid; // Base case - target found
    } else
        return -1; // Another base case - not found
}
• Similarly executes in O(log(n)) time
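A quick usage sketch tying the searches together (illustrative driver code, not from the slides; note that the array must already be sorted for binary search):

public static void main(String[] args) {
    int[] A = {2, 5, 8, 12, 16, 23, 38}; // sorted input
    System.out.println(iterativeLinearSearch(A, 16));                 // 4, in O(n)
    System.out.println(recursiveLinearSearch(A, 16, 0));              // 4, in O(n)
    System.out.println(iterativeBinarySearch(A, 16));                 // 4, in O(log n)
    System.out.println(recursiveBinarySearch(A, 16, 0, A.length - 1)); // 4, in O(log n)
    System.out.println(iterativeBinarySearch(A, 7));                  // -1, not found
}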
End of Topic 8
