
Design and Analysis of Algorithms (V Semester)

Contents (52 Hrs)

Introduction: What is an Algorithm? Fundamentals of Algorithmic Problem Solving, Fundamentals of the Analysis of Algorithm Efficiency, Analysis Framework, Measuring the Input Size, Units for Measuring Running Time, Orders of Growth, Worst-Case, Best-Case and Average-Case Efficiencies. (10 Hrs)

Asymptotic Notations and Basic Efficiency Classes: Informal Introduction, O-notation, Ω-notation, θ-notation, Mathematical Analysis of Non-Recursive Algorithms, Mathematical Analysis of Recursive Algorithms. (10 Hrs)

Brute Force & Exhaustive Search: Introduction to the Brute Force Approach, Selection Sort and Bubble Sort, Sequential Search, Exhaustive Search (Travelling Salesman Problem and Knapsack Problem), Depth-First Search, Breadth-First Search. (11 Hrs)

Decrease-and-Conquer: Introduction, Insertion Sort, Topological Sorting. Divide-and-Conquer: Introduction, Merge Sort, Quick Sort, Binary Search, Binary Tree Traversals and Related Properties. (11 Hrs)

Greedy Technique: Introduction, Prim's Algorithm, Kruskal's Algorithm, Dijkstra's Algorithm, Lower-Bound Arguments, Decision Trees, P Problems, NP Problems, NP-Complete Problems, Challenges of Numerical Algorithms. (10 Hrs)


UNIT – 1
Introduction: What is an Algorithm? Fundamentals of Algorithmic problem solving,
Fundamentals of the Analysis of Algorithm Efficiency, Analysis Framework, Measuring the
input size, Units for measuring Running time, Orders of Growth, Worst-case, Best case and
Average-case efficiencies.

Introduction
What is an algorithm? Explain with a simple example (5)
Algorithm
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required
output for any legitimate input in a finite amount of time.
OR
An algorithm is a set of guidelines or rules used to solve a problem or carry out a task.
This definition can be illustrated by a simple diagram.

Figure: The notion of the algorithm.


Observe the following activities to see how an algorithm is used to produce the desired result:
• The solution to a given problem is expressed in the form of an algorithm.
• The algorithm is converted into a program.
• The program, when executed, accepts the input and produces the desired output.

Characteristics of an Algorithm
The following are the characteristics of an algorithm:
Input: An algorithm can have zero or more inputs.
Output: An algorithm should produce at least one output.

Definiteness: Each instruction should be clear and unambiguous, leaving no room for confusion. For example, division by zero is not a well-defined instruction.
Uniqueness: The order of the instructions of an algorithm should be well defined, since a change in the order of execution leads to a wrong result or to uncertainty.
Correctness: The algorithm should be correct and yield the expected results.
Effectiveness: Every instruction should be basic enough to be carried out exactly and in a finite amount of time, i.e., each step must be feasible.
Finiteness: An algorithm should have a finite number of steps and should terminate.

Example: Computing GCD


The greatest common divisor (GCD) of two nonnegative integers m and n (not both zero), denoted gcd(m, n), is defined as the largest integer that divides both m and n evenly, i.e., with a remainder of zero. The GCD is defined only for nonnegative integers; it is not defined for negative integers or floating-point numbers.

Euclid's algorithm is based on repeatedly applying the equality gcd(m, n) = gcd(n, m mod n), where m mod n is the remainder of the division of m by n, until m mod n is equal to 0. Since gcd(m, 0) = m, the last value of m is also the greatest common divisor of the initial m and n. For example, gcd(60, 24) can be computed as follows: gcd(60, 24) = gcd(24, 12) = gcd(12, 0) = 12.

Euclid’s algorithm for computing gcd(m, n) in simple steps


Step 1 If n = 0, return the value of m as the answer and stop; otherwise, proceed to Step 2.
Step 2 Divide m by n and assign the value of the remainder to r.
Step 3 Assign the value of n to m and the value of r to n. Go to Step 1.

Euclid’s algorithm for computing gcd(m, n) expressed in pseudocode


ALGORITHM Euclid_gcd(m, n)
//Computes gcd(m, n) by Euclid’s algorithm
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
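
The pseudocode above can be translated directly into a program. Below is a minimal Python sketch; the function name euclid_gcd and the sample call are chosen here only for illustration.

def euclid_gcd(m, n):
    # Computes gcd(m, n) by Euclid's algorithm
    # Input: two nonnegative, not-both-zero integers m and n
    while n != 0:
        r = m % n    # remainder of dividing m by n
        m = n
        n = r
    return m

print(euclid_gcd(60, 24))   # prints 12, matching gcd(60, 24) = 12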

Describe the fundamentals of Algorithmic Problem Solving (5)


Algorithm Design and Analysis Process:
A sequence of steps involved in designing and analyzing an algorithm is shown in the figure below.
• Understand the problem
• Decide on the computational device, and on exact vs. approximate algorithms
• Choose an algorithm design technique
• Design the algorithm
• Prove its correctness
• Analyze the algorithm
• Code the algorithm

Understanding the Problem

• This is the first step in designing an algorithm.
• Read the problem's description carefully to understand the problem statement completely.
• Ask questions to clarify any doubts about the problem.
• Identify the problem type and use existing algorithms to find a solution.
• The input (instance) to the problem and the range of inputs are fixed at this stage.

Decision Making
Decision making is done on the following:
(a) Ascertaining the Capabilities of the Computational Device
• In a random-access machine (RAM), instructions are executed one after another (the central assumption is that only one operation is executed at a time). Accordingly, algorithms designed to be executed on such machines are called sequential algorithms.
• In some newer computers, operations can be executed concurrently, i.e., in parallel. Algorithms that take advantage of this capability are called parallel algorithms.
• The choice of computational devices (processor and memory) is mainly based on space and time efficiency.

(b) Choosing between Exact and Approximate Problem Solving

• The next important step is to decide whether to solve the problem exactly or approximately.
• An algorithm that solves the problem exactly and produces the correct result is called an exact algorithm.
• If the problem is so complex that an exact solution cannot be obtained, we choose an approximation algorithm, i.e., one that produces an approximate answer.
• Examples: extracting square roots, solving nonlinear equations, and evaluating definite integrals.

(c) Algorithm Design Techniques

• An algorithm design technique (also called a strategy or paradigm) is a general method used to solve many types of problems algorithmically.
• Algorithms + Data Structures = Programs
• Even though algorithms and data structures are independent, they are combined together to develop a program. Hence the choice of a proper data structure is required before designing the algorithm.
• Implementation of an algorithm is possible only with the help of both algorithms and data structures. An algorithmic strategy / technique / paradigm is a general approach by which many problems can be solved algorithmically, e.g., Brute Force, Divide and Conquer, Dynamic Programming, Greedy Technique and so on.

Design an Algorithm
Develop the step-by-step procedure to solve the problem based on the chosen design technique. Clearly define the algorithm's input, output, and the sequence of operations needed to achieve the desired result.

Proving an Algorithm's Correctness

• Provide a mathematical or logical proof that the algorithm produces the correct output for any valid input.
• This step ensures the algorithm reliably solves the problem it was designed to solve.

Analysing an Algorithm
The most important thing about an algorithm is its efficiency.
There are two types of efficiency:
1. Time Efficiency – How fast the algorithm runs.
2. Space Efficiency – How much extra memory (RAM) the algorithm uses.
The efficiency of an algorithm is determined by measuring both time efficiency and space efficiency.
The factors to consider when analyzing an algorithm are:
• Time efficiency of the algorithm (how fast it runs)
• Space efficiency of the algorithm (how much extra memory it uses)
• Simplicity of the algorithm (how easy it is to write and understand)
• Generality of the algorithm (how useful it is across many situations)

Code the Algorithm

• The coding / implementation of an algorithm is done using a suitable programming language such as C, C++, or Java.
• Make sure the code matches the steps of the algorithm exactly. Then test it with different inputs to check whether it works correctly and efficiently.

Analysis Framework
• The efficiency of an algorithm can be expressed in terms of time or space. Thus, checking whether the algorithm is efficient or not means analyzing the algorithm.
• There is a systematic approach that has to be applied for analyzing any given algorithm. This systematic approach is modelled by a framework called the Analysis Framework.
• The efficiency of an algorithm can be decided by measuring the performance of the algorithm.
• There are two kinds of efficiency used to analyze any algorithm:
1. Time efficiency, indicating how fast the algorithm runs, and
2. Space efficiency, indicating how much extra memory it uses.
1. Time efficiency
• The time efficiency of an algorithm is measured purely by how fast the algorithm executes. Since efficiency is measured using time, the term time complexity is often associated with an algorithm.
• The time efficiency of an algorithm depends on the factors listed below.
Components that affect time efficiency:
• Speed of the computer
• Choice of the programming language
• Compiler used
• Choice of the algorithm
• Number (size) of inputs/outputs
Since we do not have any control over the speed of the computer, the programming language or the compiler, let us concentrate only on the last two factors:

- Choice of the algorithm
- Number (size) of inputs

2. Space efficiency
• The space efficiency of an algorithm is the amount of memory required to run the program completely and efficiently.
• If efficiency is measured with respect to space (memory required), the term space complexity is often used. The space complexity of an algorithm depends on the following factors.
Components that affect space efficiency:
1. Program space
2. Data space
3. Stack space
• Program space: The space required for storing the machine program generated by the compiler or assembler.
• Data space: The space required to store constants, variables, etc.
• Stack space: The space required to store return addresses, along with the parameters passed to functions, local variables, etc.

Note: Technological innovations have improved the computer's speed and memory size by many orders of magnitude. Nowadays, the space requirement of an algorithm is usually not a major concern, so we will not concentrate on space efficiency. Let us concentrate only on time efficiency.

Measuring an Input’s Size


• An algorithm's efficiency is defined as a function of some parameter n indicating the algorithm's input size. In most cases, selecting such a parameter is quite straightforward.
• For example, it will be the size of the list for problems such as sorting and searching.
Example: sorting a list
Input: [4, 1, 7, 2]
Input size = 4 (the number of elements)

Units for measuring Running time


• A standard unit of time measurement, such as a second or a millisecond, can be used to measure the running time of a program after the algorithm has been implemented.
• The running time of algorithms can be measured in different ways, depending on how detailed the analysis is and what the situation requires.
• Here are common units for measuring running time:

Seconds (s):
• The most straightforward unit, representing the actual time in seconds that an algorithm takes to run on a specific machine.
• Suitable for measuring real-world performance, but can vary due to external factors like processor speed, background processes, system load, and hardware differences.

Milliseconds (ms):
• A smaller unit than seconds, measuring time in thousandths of a second.
• Useful for more precise measurement when algorithms have relatively fast execution times.

Microseconds (μs):
• An even smaller unit than milliseconds, measuring time in millionths of a second.
• Appropriate for very fast algorithms or when extremely precise timing is required.

Nanoseconds (ns):
• A unit smaller than microseconds, measuring time in billionths of a second.
• Common in the context of measuring operations at the hardware level or very low-level algorithmic optimizations.

Operations or Basic Steps:

• Instead of measuring time, algorithms can be analyzed by counting how many basic steps or operations they perform.
• This gives a general idea of how complex the algorithm is, without depending on the computer or software used.

Big O Notation (O()):

• Represents the upper bound of an algorithm's time complexity in terms of a mathematical function, typically based on the input size.
• Provides a theoretical measure of efficiency, ignoring constant factors and lower-order terms.

Instruction Count:
• Counts the number of machine instructions executed by an algorithm.
• This is helpful for low-level analysis and for making the program faster on a specific computer.

Comparisons or Swaps:

• In sorting algorithms, counting how many times elements are compared or swapped helps us understand how efficient the algorithm is.
• This is important for algorithms that mainly work by comparing and rearranging data.

Basic operation
• The operation that contributes most towards the running time of the algorithm is called the basic operation.
• A statement that executes the maximum number of times in a function is also called the basic operation. The number of times the basic operation is executed depends on the size of the input.
• The basic operation is the most time-consuming operation in the algorithm.
For example:
- a statement present in the innermost loop of the algorithm
- the addition operation while adding two matrices, since it is present in the innermost loop
- the multiplication operation in matrix multiplication, since it is present in the innermost loop
Now, let us see how to compute the running time of an algorithm using the basic operation, i.e., how time efficiency is analyzed. The time efficiency is analyzed by determining the number of times the basic operation is executed. The running time T(n) is given by:

T(n) ≈ b × C(n)

where
• T(n) is the running time of the algorithm,
• n is the size of the input,
• b is the execution time of one basic operation (e.g., the time taken to check one element), and
• C(n) is the number of times the basic operation is executed (e.g., the number of times the comparison happens).

Problem: Search for an element in an array.

• Basic operation: compare the target element with an array element.
• Input size: n (the number of elements in the array)
• Best case: the element is at the first position → C(n) = 1
• Worst case: the element is not found, or is at the end → C(n) = n
• Running time (worst case):
T(n) ≈ b × C(n) = b × n
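
To make the relation T(n) ≈ b × C(n) concrete, here is a small illustrative Python sketch (not part of the original notes) that counts how many times the basic operation, the comparison, is executed during a sequential search:

def sequential_search_count(arr, target):
    # Returns (found, comparisons); comparisons is C(n), the count of the basic operation
    comparisons = 0
    for num in arr:
        comparisons += 1            # one execution of the basic operation
        if num == target:
            return True, comparisons
    return False, comparisons

print(sequential_search_count([7, 3, 5, 9], 7))   # best case: (True, 1), i.e. C(n) = 1
print(sequential_search_count([7, 3, 5, 9], 4))   # worst case: (False, 4), i.e. C(n) = n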

Describe different orders of growth (5)


Order of growth

Order of growth describes how the running time or space used by an algorithm increases as the input size n
increases. This is usually expressed using Big-O notation.

1. Constant Time – O(1)


• The execution time is independent of the input size (the number of steps required to solve the problem is constant).
• The time does not vary as the input size increases.
• Fastest and most efficient.
Example: accessing an element of an array, e.g., arr[5]; simple arithmetic operations

2. Logarithmic Time – O(log n)

• The time increases slowly as the input size increases.
• Usually occurs when the problem size is cut in half at each step.
• Very efficient even for large n.
Example: Binary Search (a short sketch follows)
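
A minimal binary search sketch (illustrative only) showing why the running time is O(log n): each iteration halves the portion of the sorted list that still has to be examined.

def binary_search(arr, target):
    # arr must be sorted in ascending order
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid              # index of the target
        elif arr[mid] < target:
            low = mid + 1           # discard the left half
        else:
            high = mid - 1          # discard the right half
    return -1                       # target not present

print(binary_search([1, 3, 5, 7, 9, 11], 9))   # prints 4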

3. Linear Time – O(n)


• The number of steps required to solve the problem depends on the input size.
• If the input size increases, the time taken to execute also increases.
• Acceptable for moderate input sizes.
Example: Linear Search, traversing an array

4. Linear Logarithmic Time – O(n log n)


• The problem is divided into smaller parts using a divide-and-conquer approach.
• With each division, the problem size is reduced by half (a logarithmic reduction).
• After solving the small parts, all results are combined to get the final solution.
• The time complexity grows slightly faster than linear (O(n)), but much slower than quadratic (O(n²)).
• This time complexity is efficient for sorting large datasets.
Example: Merge Sort (sketched below), Quick Sort (average case)
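
A minimal merge sort sketch (illustrative only) showing the divide-and-conquer pattern described above: split the list in half, sort each half, then combine the sorted halves.

def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])     # sort the left half
    right = merge_sort(arr[mid:])    # sort the right half
    merged, i, j = [], 0, 0          # combine (merge) the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))   # prints [1, 2, 5, 7, 9]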

5. Quadratic Time – O(n²)


• The running time grows proportionally to the square of the input size.
• If the input size doubles, the time taken becomes four times longer (because 2² = 4).
• Such algorithms normally contain two nested loops.
• They become slow as the input size grows.
Example: Bubble Sort (sketched below), Insertion Sort, Selection Sort
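
The two nested loops typical of O(n²) algorithms can be seen in this small bubble sort sketch (illustrative only):

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):               # outer loop: n - 1 passes
        for j in range(n - 1 - i):       # inner loop: compare adjacent pairs
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]   # swap an out-of-order pair
    return arr

print(bubble_sort([4, 1, 7, 2]))   # prints [1, 2, 4, 7]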

6. Cubic Time – O(n³)

• The time grows with the cube of the input size.
• If the input becomes 2 times bigger, the time needed becomes 8 times more (because 2 × 2 × 2 = 8).
• This usually happens when there are three nested loops in the algorithm.
• Very inefficient for large data.
Example: matrix multiplication (sketched below); an algorithm that solves simultaneous equations using the Gauss-elimination method also has this running time.
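
The three nested loops responsible for the O(n³) behaviour are visible in this straightforward matrix-multiplication sketch for two n × n matrices (illustrative only):

def matrix_multiply(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):                   # first loop
        for j in range(n):               # second loop
            for k in range(n):           # third loop: the multiplication is the basic operation
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # prints [[19, 22], [43, 50]]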

7. Exponential Time – O(2ⁿ)


• The running time doubles with each additional input element.
• If you increase the input size by 1, the time taken roughly doubles.
• Only usable for very small inputs.
Example: recursive Fibonacci (sketched below), subset problems
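
The naive recursive Fibonacci mentioned above makes two recursive calls per value of n, so the amount of work roughly doubles as n grows; a minimal sketch (illustrative only):

def fib(n):
    # Each call spawns two more calls, giving roughly O(2^n) work overall
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))   # prints 55; fine for small n, but impractically slow as n grows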

8. Factorial Time – O(n!)


• The running time increases extremely fast.
• If the input size increases a little, the time taken increases enormously, even faster than exponential time.
• Not suitable for large inputs; even n = 10 can take a long time.
Example: brute-force solution to the Travelling Salesman Problem (TSP), sketched below
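
A brute-force TSP solution tries every ordering of the cities, i.e., a factorial number of tours. The sketch below is illustrative only; the distance matrix and the number of cities are made-up sample values.

from itertools import permutations

def tsp_brute_force(dist):
    # dist[i][j] is the distance from city i to city j; the tour starts and ends at city 0
    n = len(dist)
    best_length, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):           # (n - 1)! orderings of the remaining cities
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_length:
            best_length, best_tour = length, tour
    return best_tour, best_length

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(tsp_brute_force(dist))   # prints ((0, 2, 3, 1, 0), 21) for this sample matrix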


What are the Best case, Average case and Worst case efficiencies? Explain. (5)
Worst-case, Best-case and Average-case efficiencies
For some problems, the time complexity does not depend on the number of inputs alone. For example, while searching for a specific item in an array of n elements using linear search, we have the following three situations:
• The item we are searching for may be present in the very first location. In this case only one item is compared, and this is the best case.
• The item may be present somewhere in the middle, which definitely takes some time. The running time is more than in the previous case for the same value of n. Since we do not know where the item is, we have to consider the average over all cases, and hence this situation is the average case.
• The item we are searching for may not be present in the array, requiring n comparisons; the running time is more than in the previous two cases. This is the worst case.

Worst-case efficiency
• The efficiency of an algorithm for the input of size n for which the algorithm takes the longest time to execute, among all possible inputs of that size, is called the worst-case efficiency.
• For example, if we use linear search and the item to be searched is not present in the array, this is an example of the worst case.
• In the worst case, the algorithm runs for the longest duration among all possible inputs of a given size; the maximum number of steps is executed for that input.

Best-case efficiency

• The efficiency of an algorithm for the input of size n for which the algorithm takes the least time to execute, among all possible inputs of that size, is called the best-case efficiency.
• In the best case, the algorithm runs fastest for the specified input. For example, in linear search, if the item to be searched is present at the beginning, this is an example of the best case.

Average-case efficiency

• The average-case efficiency lies between the best case and the worst case. To analyze an algorithm's average-case efficiency, we must make some assumptions about the possible inputs of size n.
• Note: In the average-case analysis, the average number of basic operations executed is considered; this requires assuming some distribution over the inputs (e.g., random inputs).

Example: Searching for an Element in an Array


Suppose you have an array of numbers, and you want to find a specific element in that array.

def search_element(arr, target):
    # Scan the array from left to right
    for num in arr:
        if num == target:   # basic operation: one comparison
            return True
    return False
Now, let's analyze the efficiency of this algorithm in terms of worst-case, best-case, and average-case scenarios.

Worst-case efficiency:
• This is the scenario where the algorithm takes the maximum amount of time to complete.
• In our example, the worst-case scenario occurs when the target element is at the end of the array or is not present in the array.
• The worst-case time complexity is O(n), where n is the number of elements in the array.

# Worst-case scenario
arr = [1, 2, 3, 4, 5]
target = 5

result = search_element(arr, target)
# In this case, the algorithm has to go through the entire array to find the target element.

Best-case efficiency:
• This is the scenario where the algorithm takes the minimum amount of time to complete.
• In our example, the best-case scenario occurs when the target element is at the beginning of the array.
• The best-case time complexity is O(1), since the algorithm finds the target element in the first iteration.

# Best-case scenario
arr = [1, 2, 3, 4, 5]
target = 1
result = search_element(arr, target)
# In this case, the algorithm finds the target element in the first iteration.

Average-case efficiency:
• This is the scenario where the algorithm takes an average amount of time to complete, considering all possible inputs.
• The average-case time complexity is often more challenging to analyze and may involve statistical analysis.
• In our example, if the target element is equally likely to be anywhere in the array, the algorithm examines about n/2 elements on average, so the average-case time complexity is still O(n).

# Average-case scenario
arr = [1, 2, 3, 4, 5]
target = 3
result = search_element(arr, target)
# On average, the algorithm might need to check about half of the array elements.

What are the sequences, selection and iterative constructs of an algorithm?


1. Sequence Construct
• The sequence construct means that the steps in an algorithm are executed one after the other, in the order they are written.
• All steps are executed in sequence.
Example:
a=5
b = 10
sum = a + b
print(sum)

2. Selection Construct
• The selection construct allows the algorithm to make a decision and choose between two or more paths based on a condition.
• The flow depends on the condition. This is done using if, if-else, or switch-case.
Example:
age = 20   # example value
if age >= 18:
    print("Eligible to vote")
else:
    print("Not eligible to vote")

3. Iterative Construct (Looping)

The iterative construct is used when a set of steps needs to be repeated multiple times.
This is done using loops like for, while, or do-while.
Example:
for i in range(5):
    print("Hello")
The message "Hello" is printed 5 times.

Difference Between Time Complexity and Space Complexity
