Analysis of Performance Using Big O Notation
Introduction
In the last lesson you learnt about hashing. This lesson introduces Big O notation, another
important topic. Big O notation is widely used to describe the performance of an
algorithm or program.
Learning outcome
On successful completion of this lesson, you will be able to define Big-O notation, describe the
properties of the notation and the types of Big-O functions, and analyse the performance of an
algorithm using Big-O functions.
Define Big-O notation
Properties of Big-O
Types of Big-O functions
Analyse the performance of an algorithm using Big-O
Let us look at several properties of Big-O. As Big-O involves numbers, let us start by
considering the positive numbers. There is a notion of order: when x is lower than y we write
x ≤ y. We can add and multiply numbers, and the order relation respects these operations:

x ≤ y, z ≤ w ⟹ x + z ≤ y + w, xz ≤ yw.
1) Scaling – this property tells us that when the function f(n) is multiplied by a constant
(scalar) factor, Big-O of f(.) can ignore that factor. This is stated in Lemma 01 below.
Lemma 01
For all constant factors c > 0, the function c f(n) is O(f(n)), or in shorthand notation
cf is O(f).
The proof:
c f(n) < (c + ε) f(n) holds for all n with f(n) > 0 and any ε > 0, so c f(n) is bounded above
by a constant multiple of f(n); hence c f(n) ∈ O(f(n)).

Consequently:
Constant factors are ignored.
Only the powers and functions of n matter.

It is this ignoring of constant factors that motivates such a notation! In
particular, f is O(f).
E.g. 50n ∈ O(n), 0.05n ∈ O(n), 50,000,000n ∈ O(n), 0.0000005n ∈ O(n)
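As a concrete illustration of scaling, consider the following sketch (the method name is illustrative): a loop that performs a handful of constant-time operations per element does roughly c·n work in total, which is still O(n).

    static int countPositives(int[] values) {
        int count = 0;
        // Each pass does a few constant-time operations (loop test, index
        // increment, comparison), so the total work is about c*n steps.
        for (int i = 0; i < values.length; i++) {
            if (values[i] > 0) {
                count++;
            }
        }
        return count; // c*n is O(n) by Lemma 01: the constant factor c is ignored
    }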
2) Transitivity: If a function h(n) grows at most as fast as g(n), which in turn grows at most
as fast as f(n), then h(n) grows at most as fast as f(n). Formally, if h ∈ O(g) and g ∈ O(f),
then h ∈ O(f).
3) Rule of Sums
Informally, the rule of sums says that the sum of functions grows as fast as its fastest-growing
term. Therefore, if g1 ∈ O(f1) and g2 ∈ O(f2), then g1 + g2 ∈ O(max(f1, f2)).

Examples: n² + n ∈ O(n²); n + log n ∈ O(n); an algorithm that performs an O(n log n) step
followed by an O(n) step is O(n log n) overall.
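As an illustrative sketch of the rule of sums (the method name is hypothetical), two loops executed one after the other contribute O(n) + O(m), which is O(max(n, m)):

    static int sumThenMax(int[] a, int[] b) {
        int sum = 0;
        for (int x : a) {            // first loop: O(n), where n = a.length
            sum += x;
        }
        int max = Integer.MIN_VALUE;
        for (int y : b) {            // second loop: O(m), where m = b.length
            if (y > max) {
                max = y;
            }
        }
        return sum + max;            // total: O(n) + O(m) = O(max(n, m))
    }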
4) Rule of Products
If g1 ∈ O(f1) and g2 ∈ O(f2), then g1g2 ∈ O(f1f2).
This can be proved as follows: if g1(n) ≤ c1 f1(n) and g2(n) ≤ c2 f2(n) for all sufficiently
large n, then g1(n) g2(n) ≤ (c1 c2) f1(n) f2(n) for all sufficiently large n, and therefore
g1g2 ∈ O(f1f2).
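An illustrative sketch of the rule of products (the method name is hypothetical): nesting one loop inside another multiplies their costs.

    static int countCommonValues(int[] a, int[] b) {
        int pairs = 0;
        for (int x : a) {            // outer loop runs n times
            for (int y : b) {        // inner loop runs m times for each x
                if (x == y) {
                    pairs++;
                }
            }
        }
        return pairs;                // n * m iterations in total: O(n) * O(m) = O(nm)
    }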
Types of Big-O functions

1) O(1) – This describes an algorithm that will always execute in the same time (or space)
regardless of the size of the input data set, i.e. irrespective of the size of the input, the
time stays the same. Let us look at the following Java program.
    static boolean isBiggerThanHundred(String[] data) { // method name is illustrative
        if (data.length > 100) return true;  // a single check, regardless of array size
        return false;
    }
Here, the program takes an array of strings as input and checks whether the length of the
array is greater than 100. If so, it returns true; otherwise it returns false. Thus, no
matter how big the array is, we perform only one check, i.e. whether the length is greater
than 100. Hence the program or algorithm will always execute in the same time. This is also
called constant time.
2) O(n) – This function indicates that, as you increase the size of an input, the time taken
to complete operations scales linearly with that size. Let us see how this happens.
Consider the following Java program.
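A minimal sketch of the kind of program meant here (the method and parameter names are illustrative):

    static boolean containsValue(String[] data, String value) {
        for (String item : data) {
            if (item.equals(value)) {
                return true;   // a match may be found early...
            }
        }
        return false;          // ...but in the worst case all n elements are checked
    }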
The example above demonstrates how Big O favours the worst-case performance scenario: a
matching string could be found during any iteration of the for loop and the function would
return true early, but Big O notation will always assume the upper limit, where the loop runs
through every one of the n elements.
3) O(n²) – This describes an algorithm whose running time grows with the square of the input
size, which typically arises from nested iterations over the same data.
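A minimal sketch of the kind of program meant here (the method name is illustrative):

    static boolean containsDuplicate(String[] data) {
        for (int i = 0; i < data.length; i++) {
            for (int j = 0; j < data.length; j++) {
                if (i != j && data[i].equals(data[j])) {
                    return true;   // a duplicate was found
                }
            }
        }
        return false;              // worst case: n * n comparisons were made
    }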
If you carefully look at this algorithm, you can see that it goes through the same array
twice, in nested iterations. In the worst case, the body of the FOR loops needs to be
executed n*n times. Thus, the performance of the algorithm is O(n²).
4) O(log n) – Binary search is a technique used to search sorted data sets. You will learn
about it under searching algorithms. It works by selecting the middle element of the data set,
essentially the median, and comparing it against a target value. If the values match it will
return success. If the target value is
higher than the value of the probe element (i.e. the middle value) it will take the upper half of the
data set and perform the same operation against it. Likewise, if the target value is lower than the
value of the probe element it will perform the operation against the lower half. It will continue
to halve the data set with each iteration until the value has been found or until it can no longer
split the data set.
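A minimal iterative sketch of binary search in Java (the method name and the use of an int array are illustrative assumptions):

    static boolean binarySearch(int[] sortedData, int target) {
        int low = 0;
        int high = sortedData.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;    // probe the middle element
            if (sortedData[mid] == target) {
                return true;                     // success: value found
            } else if (sortedData[mid] < target) {
                low = mid + 1;                   // keep only the upper half
            } else {
                high = mid - 1;                  // keep only the lower half
            }
        }
        return false;                            // the data set can no longer be split
    }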
This type of algorithm is described as O(log n). The iterative halving of the data set described
in the binary search example produces a growth curve that rises quickly at the beginning and
slowly flattens out as the size of the data set increases; e.g. if an input data set containing
10 items takes one second to complete, a data set containing 100 items takes two seconds, and a
data set containing 1,000 items takes three seconds. Doubling the size of the input data set has
little effect on the running time, because after a single iteration of the algorithm the data set
is halved and is therefore on a par with an input data set of half the size; in other words,
log₂(2n) = log₂(n) + 1, so doubling n costs only one extra halving step. This makes algorithms
like binary search extremely efficient when dealing with large data sets.
Thus, the efficiency of the algorithm, in other words the time or space required to execute
it, can be stated as O(log n), where n is the number of input items. Such algorithms are also
called logarithmic-time algorithms.
Figure 1 compares the complexity of several Big O notations. You can see that the performance
of an algorithm whose performance function lies in the RED or ORANGE areas is terrible, i.e.
its computational efficiency is very low. Any performance function lying in the GREEN area is
excellent. Hence, when you are designing an algorithm, you should try to stay within either
the YELLOW or GREEN areas.
Activity 9.1
Summary
During this lesson you learnt about Big O notation. You also saw that different Big O notations
arise depending on the nature of the algorithm implemented. You also learnt how to compute the
performance of an algorithm in its worst-case execution. Moreover, you got to know which
algorithms perform well and which perform poorly.