Performance metrics for evaluating an algorithm:
🧮 Main Performance Metrics:
1. Time Complexity
   - Measures how fast an algorithm runs as input size increases.
   - Usually expressed using Big-O notation (e.g., O(n), O(n²), O(log n)).
   - We don’t measure actual time in seconds, because that depends on hardware; instead, we count how the number of basic operations grows with n.
   Example:
   - Linear search → O(n) (time grows linearly with input size).
   - Binary search → O(log n) (time grows much more slowly).
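To make the two examples concrete, here is a minimal Python sketch of both searches (the function names linear_search and binary_search are my own, not from the notes):

```python
def linear_search(items, target):
    # O(n): in the worst case, every element is examined once.
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(sorted_items, target):
    # O(log n): the search range is halved on each step (input must be sorted).
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```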
2. Space Complexity
   - Measures how much extra memory an algorithm needs besides the input.
   - Important when memory is limited (e.g., in embedded systems, mobile apps).
   - Example: Merge Sort requires extra space O(n); Quick Sort uses O(log n) extra space (because of the recursion stack).
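To see what “extra memory besides the input” means, here is a hedged sketch: the merge step used by Merge Sort allocates a new list proportional to the input (O(n) extra), while an in-place reversal needs only a couple of index variables (O(1) extra). Both function names are illustrative.

```python
def merge(left, right):
    # O(n) extra space: builds a brand-new list of len(left) + len(right) items.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def reverse_in_place(items):
    # O(1) extra space: only two index variables, no new list is created.
    lo, hi = 0, len(items) - 1
    while lo < hi:
        items[lo], items[hi] = items[hi], items[lo]
        lo, hi = lo + 1, hi - 1
```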
3. Correctness
   - Checks whether the algorithm always gives the right output for all valid inputs.
   - A fast algorithm is useless if it produces wrong results.
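One practical (if informal) way to check correctness is to test a candidate implementation against a trusted reference on many random inputs; a sketch, using Python’s built-in max as the reference:

```python
import random

def my_max(items):
    # Candidate algorithm under test: a manual scan for the maximum.
    best = items[0]
    for value in items[1:]:
        if value > best:
            best = value
    return best

# Compare against the built-in max on 1,000 random lists of varying length.
for _ in range(1000):
    data = [random.randint(-100, 100) for _ in range(random.randint(1, 50))]
    assert my_max(data) == max(data), f"mismatch on {data}"
print("all trials passed")
```

Note that random testing only raises confidence; it is not a proof that the algorithm is correct for all valid inputs.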
4. Scalability
   - Tells how well an algorithm performs when the problem size or number of users increases.
   - Example: some algorithms work well for small data but become slow or crash when the input grows large, as the timing sketch below illustrates.
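A rough way to watch scalability on your own machine (timings are hardware-dependent; this sketch uses a deliberately quadratic function of my own so the trend is easy to see):

```python
import time

def sum_of_pairs(items):
    # Deliberately O(n^2): touches every ordered pair of elements.
    total = 0
    for a in items:
        for b in items:
            total += a + b
    return total

for size in (200, 400, 800):
    data = list(range(size))
    start = time.perf_counter()
    sum_of_pairs(data)
    elapsed = time.perf_counter() - start
    # For an O(n^2) algorithm, doubling n roughly quadruples the time.
    print(size, round(elapsed, 4))
```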
Why do we ignore constants in time complexity?
When we write Big-O notation (like O(n), O(n²)), we ignore constant multipliers and smaller terms. For example, O(5n + 20) is written simply as O(n).
🧠 Reasoning behind it:
1. Big O describes growth rate, not exact time.
   - Big O shows how running time grows as n → ∞.
   - Constants like 5, 10, or +20 don’t matter when n becomes huge.
   Example:
   - For n = 1,000,000: 5n + 20 = 5,000,020, while n = 1,000,000. Both grow at the same linear rate; the “+20” is meaningless at that scale.
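A quick numeric check of that example (plain arithmetic, nothing assumed beyond the formula 5n + 20):

```python
for n in (10, 1_000, 1_000_000):
    f = 5 * n + 20
    # The ratio f/n settles at the constant 5; the +20 fades away as n grows.
    print(n, f, f / n)
# n = 10 gives ratio 7.0; n = 1,000 gives 5.02; n = 1,000,000 gives 5.00002.
```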
2. Hardware differences make constants unreliable.
   - Suppose algorithm A does 5n operations and algorithm B does 2n operations.
   - On a faster machine, the actual times can flip; constants depend on the CPU, compiler, and language.
   - That’s why we only care about the rate of growth, not the constant speed factor.
3. Simplifies comparison.
   - O(3n² + 7n + 10) is dominated by the n² term, so we just say O(n²).
   - This allows easy classification: linear (O(n)), quadratic (O(n²)), logarithmic (O(log n)), etc., as the quick ratio check below shows.
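The same kind of ratio check for the quadratic example (again just arithmetic): the ratio of 3n² + 7n + 10 to n² approaches the constant 3, which is why the n² term dictates the growth class:

```python
for n in (10, 1_000, 1_000_000):
    f = 3 * n**2 + 7 * n + 10
    # f / n^2 approaches 3; the 7n and +10 terms become negligible.
    print(n, f / n**2)
# n = 10 gives 3.8; n = 1,000 gives 3.00701; n = 1,000,000 gives about 3.000007.
```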
✅ In short:
We ignore constants because they don’t change the trend of how running time grows with input size. We focus only on the dominant term (the one that grows fastest).
An equation will be given in asymptotic notation. Look at which notation appears on the right-hand side of the ∈ (“belongs to”) sign:
- Big O (worst case): use f(n) ≤ c·g(n).
- Omega, Ω (best case): use f(n) ≥ c·g(n).
- Theta, Θ (tight bound, often read as the average case): sandwich the function between both bounds, c1·g(n) ≤ f(n) ≤ c2·g(n), matching
  Best Case (Omega) ≤ Equation ≤ Big O (Worst case)
  Best ≤ Average ≤ Worst
If the equation has Big O (worst case), we take the dominant term (n² in the example below) in the place of g(n); if it has Omega (best case), the dominant term or any slower-growing term such as n works in the place of g(n). As for the constant c, always look at the symbol: take the coefficient of the highest power of n and adjust it in the direction of the inequality (slightly larger for ≤, slightly smaller or equal for ≥).
Like:
5n² − 3n + 4 ∈ O(n²)
5n² − 3n + 4 ≤ c·g(n)
5n² − 3n + 4 ≤ 6n² (taking c = 6 and g(n) = n²)
Then we put in values of n starting from 1 and going up; we take 5 to 10 values and check whether the inequality holds. Here it holds for every n ≥ 1, since 6n² − (5n² − 3n + 4) = n² + 3n − 4 ≥ 0 for n ≥ 1.
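The value-checking procedure above, written out as a short script. The bound c = 6 with g(n) = n² comes from the worked example; c = 4 for the Omega direction is my own choice (any workable constant at or below the leading coefficient 5 would do):

```python
def upper(n, c=6):
    # Big O direction: 5n^2 - 3n + 4 <= c * n^2.
    return 5 * n**2 - 3 * n + 4 <= c * n**2

def lower(n, c=4):
    # Omega direction: 5n^2 - 3n + 4 >= c * n^2.
    return 5 * n**2 - 3 * n + 4 >= c * n**2

for n in range(1, 11):  # check 10 values, n = 1..10
    print(n, upper(n), lower(n))
# Both columns print True; in fact both inequalities hold for every n >= 1,
# so 5n^2 - 3n + 4 is Theta(n^2).
```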
OR:
For any asymptotic proof (Big O, Ω, Θ), you always pick the dominant term (the term that grows fastest as n → ∞) as your g(n).