AoA Part3 Notes


Order-of-Magnitude Analysis and Big O Notation

• If Algorithm A requires time proportional to f(n), Algorithm A is said to be order f(n), and it is denoted as O(f(n)).
• The function f(n) is called the algorithm’s growth-rate function.
• Since the capital O is used in the notation, this notation is called the Big O notation.
• If Algorithm A requires time proportional to n², it is O(n²).
• If Algorithm A requires time proportional to n, it is O(n).

Big O notation characterizes functions according to their growth rates: different functions
with the same growth rate may be represented using the same O notation. The letter O is used
because the growth rate of a function is also referred to as the order of the function. A description
of a function in terms of big O notation usually only provides an upper bound on the growth
rate of the function. Associated with big O notation are several related notations, using the
symbols o, Ω, ω, and Θ, to describe other kinds of bounds on asymptotic growth rates.
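
For reference, the related bounds mentioned here have standard textbook definitions analogous to the big O definition given below; a brief summary (these definitions are standard, not taken from these notes):

```latex
% Standard asymptotic bounds (textbook convention):
f(n) = O(g(n))      \iff \exists\, c > 0,\ k :\ 0 \le f(n) \le c\, g(n) \ \text{for all } n \ge k   % upper bound
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ k :\ 0 \le c\, g(n) \le f(n) \ \text{for all } n \ge k   % lower bound
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and} \ f(n) = \Omega(g(n))                          % tight bound
```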

Definition of the Order of an Algorithm / Asymptotic Notations

big-O notation

Definition: A theoretical measure of the execution of an algorithm, usually the time or memory needed, given the problem size n, which is usually the number of items. Informally, saying some equation f(n) = O(g(n)) means f(n) is less than some constant multiple of g(n) for large enough n. The notation is read, "f of n is big oh of g of n".

Formal Definition: f(n) = O(g(n)) means there are positive constants c and k, such that

0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must
not depend on n.

Also known as O.

• The Big oh notation is denoted by ‘O’.
• It is a method for denoting the upper bound of an algorithm’s running time.
• Big oh gives us the longest amount of time taken by the algorithm to complete.
• Definition of big oh:
• Let F(n) and g(n) be two non-negative functions.
• Let k be an integer that denotes some value of the input size, and consider all n >= k.
• Similarly, let c be some constant such that c > 0.
• If we can write F(n) <= c*g(n) for all n >= k,
• then F(n) is big oh of g(n).
• It is also denoted as F(n) ∈ O(g(n)).
• In other words, F(n) is no greater than g(n) multiplied by some constant c, for all sufficiently large n.
The growth rate of F(n) is less than or equal to the growth rate of g(n).
g(n) is an upper bound on F(n).
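
To make the definition concrete, here is a minimal Python sketch (the helper name satisfies_big_oh is just for illustration) that tests F(n) <= c*g(n) over a finite range of n; passing such a check is not a proof, only a numeric illustration of the inequality in the definition.

```python
# Numeric illustration of the big-oh definition (not a proof).
def satisfies_big_oh(F, g, c, k, n_max=1000):
    """Check F(n) <= c * g(n) for every integer n with k <= n <= n_max."""
    return all(F(n) <= c * g(n) for n in range(k, n_max + 1))

# Example: F(n) = 2n + 2 is O(n^2) with c = 1 and k = 3 (worked out later in these notes).
F = lambda n: 2 * n + 2
g = lambda n: n ** 2

print(satisfies_big_oh(F, g, c=1, k=3))   # True
print(satisfies_big_oh(F, g, c=1, k=1))   # False: the bound fails at n = 1 and n = 2
```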

Note: As an example, n² + 3n + 4 is O(n²), since n² + 3n + 4 < 2n² for all n > 10 (and many
smaller values of n). Strictly speaking, 3n + 4 is O(n²), too, but big-O notation is often
misused to mean "equal to" rather than "less than". The notion of "equal to" is expressed by
Θ(n²).

The importance of this measure can be seen in trying to decide whether an algorithm is
adequate but may just need a better implementation, or whether the algorithm will always be too
slow on a big enough input. For instance, quicksort, which is O(n log n) on average, running
on a small desktop computer can beat bubble sort, which is O(n²), running on a
supercomputer if there are a lot of numbers to sort. To sort 1,000,000 numbers, the quicksort
takes 20,000,000 steps on average, while the bubble sort takes 1,000,000,000,000 steps! See
Jon Bentley, Programming Pearls: Algorithm Design Techniques, CACM, 27(9):868,
September 1984 for an example of a microcomputer running BASIC beating a supercomputer
running FORTRAN.
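
The step counts quoted above follow directly from the growth-rate functions (taking the constant factors as 1); a small sketch of the arithmetic:

```python
import math

n = 1_000_000

quicksort_steps = n * math.log2(n)   # O(n log n) on average, constant factor taken as 1
bubblesort_steps = n ** 2            # O(n^2), constant factor taken as 1

print(f"quicksort   ~ {quicksort_steps:,.0f} steps")    # roughly 20,000,000
print(f"bubble sort ~ {bubblesort_steps:,} steps")      # 1,000,000,000,000
print(f"ratio       ~ {bubblesort_steps / quicksort_steps:,.0f}x")   # roughly 50,000x
```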

Algorithm A is order f(n) – denoted as O(f(n)) – if constants c and k exist such that A requires no more than c*f(n) time units to solve a problem of size n ≥ k.

• The requirement of n ≥ k in the definition of O(f(n)) formalizes the notion of sufficiently large problems.
– In general, many values of k and c can satisfy this definition.
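
As an illustration of that last point, the sketch below checks (numerically, over a finite range) several different (c, k) pairs that all satisfy the definition for f(n) = n² + 3n + 4 and g(n) = n²; the pairs themselves are just examples chosen for this note.

```python
# Several (c, k) pairs all satisfy f(n) <= c * g(n) for n >= k (checked up to n_max).
def witnesses(F, g, c, k, n_max=10_000):
    return all(F(n) <= c * g(n) for n in range(k, n_max + 1))

F = lambda n: n * n + 3 * n + 4
g = lambda n: n * n

for c, k in [(2, 5), (2, 10), (3, 3), (8, 1)]:
    print(f"c={c}, k={k}: {witnesses(F, g, c, k)}")   # True for each pair
```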

An Example:

Suppose an algorithm requires n²–3*n+10 seconds to solve a problem of size n. If constants c and k exist such that
c*n² > n²–3*n+10 for all n ≥ k (try different values of c and k),
the algorithm is order n². (In fact, c = 3 and k = 2 work:)
3*n² > n²–3*n+10 for all n ≥ 2.
Thus, the algorithm requires no more than c*n² time units for n ≥ k,
so it is O(n²).
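
A quick numeric check of the claim that c = 3 and k = 2 work here (again an illustration over a finite range rather than a proof):

```python
# Check 3*n^2 > n^2 - 3*n + 10 for n = 2, 3, ..., 1000.
print(all(3 * n * n > n * n - 3 * n + 10 for n in range(2, 1001)))   # True

# k >= 2 really is needed: at n = 1 the right-hand side is 8, which exceeds 3.
print(3 * 1 * 1 > 1 * 1 - 3 * 1 + 10)   # False
```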
An Example:

Consider the function F(n) = 2n + 2
and g(n) = n².
Prove that F(n) ∈ O(g(n)).
Soln.
We have to find some constants c and k such that F(n) <= c*g(n) for all n >= k.
• F(n) = 2n+2, g(n) = n². Take c = 1.
• Put n = 1
• F(n) = 2(1)+2 = 4, g(n) = 1² = 1. So here F(n) is not less than g(n).
• Put n = 2
• F(n) = 2(2)+2 = 6, g(n) = 2² = 4. So here F(n) is not less than g(n).
• Put n = 3
• F(n) = 2(3)+2 = 8, g(n) = 3² = 9. So here F(n) is less than g(n).
• So F(n) <= g(n) for all n >= 3, with c = 1 and k = 3.
• So we have proved that F(n) ∈ O(g(n)).
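
The same search for k can be automated; a minimal sketch that scans n upward with c = 1 until the bound first holds (for these two functions it clearly keeps holding afterwards):

```python
# Find the first n at which 2n + 2 <= n^2, i.e. the k used above (with c = 1).
F = lambda n: 2 * n + 2
g = lambda n: n ** 2

n = 1
while F(n) > g(n):
    n += 1
print(n)   # 3, so c = 1 and k = 3 work
```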

Steps to find Big-oh Notation

1) Make the coefficient of each term 1.
2) Discard all other terms and keep only the highest-order term.
Example:
F(n) = 8.023n² + 7n
Soln.
Making the coefficient of each term 1, we get:
F(n) = n² + n
Discarding all other terms and keeping only the highest-order term n²:
F(n) = n²
So F(n) is O(n²)
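
The two steps amount to extracting the dominant term. A minimal sketch that applies them to a polynomial written as a list of (coefficient, exponent) pairs (that representation is just an assumption for this illustration):

```python
# Apply the two steps to a polynomial given as (coefficient, exponent) pairs.
def big_oh_of_polynomial(terms):
    # Step 1: make the coefficient of each term 1 (i.e. ignore the coefficients).
    exponents = [exponent for coefficient, exponent in terms]
    # Step 2: discard everything except the highest-order term.
    return f"O(n^{max(exponents)})"

# F(n) = 8.023*n^2 + 7*n
print(big_oh_of_polynomial([(8.023, 2), (7, 1)]))   # O(n^2)
```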
A Comparison of Growth-Rate Functions


Growth-Rate Functions

O(1)         Time requirement is constant, and it is independent of the problem’s size.
O(log₂n)     Time requirement for a logarithmic algorithm increases slowly as the problem size increases.
O(n)         Time requirement for a linear algorithm increases directly with the size of the problem.
O(n*log₂n)   Time requirement for an n*log₂n algorithm increases more rapidly than that of a linear algorithm.
O(n²)        Time requirement for a quadratic algorithm increases rapidly with the size of the problem.
O(n³)        Time requirement for a cubic algorithm increases more rapidly with the size of the problem than the time requirement for a quadratic algorithm.
O(2ⁿ)        As the size of the problem increases, the time requirement for an exponential algorithm increases too rapidly to be practical.
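
The comparison chart from the original slides is not reproduced in these notes, but the relative growth of these functions can be tabulated directly; a small sketch:

```python
import math

growth_rate_functions = [
    ("1",        lambda n: 1),
    ("log2 n",   lambda n: math.log2(n)),
    ("n",        lambda n: n),
    ("n*log2 n", lambda n: n * math.log2(n)),
    ("n^2",      lambda n: n ** 2),
    ("n^3",      lambda n: n ** 3),
    ("2^n",      lambda n: 2 ** n),
]

header = f"{'n':>6}" + "".join(f"{name:>14}" for name, _ in growth_rate_functions)
print(header)
for n in (8, 16, 32, 64):
    row = "".join(f"{f(n):>14,.0f}" for _, f in growth_rate_functions)
    print(f"{n:>6}{row}")
```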

• If an algorithm takes 1 second to run with the problem size 8, what is the time
requirement (approximately) for that algorithm with the problem size 16?
• If its order is:
O(1)         → T(n) = 1 second
O(log₂n)     → T(n) = (1*log₂16) / log₂8 = 4/3 seconds
O(n)         → T(n) = (1*16) / 8 = 2 seconds
O(n*log₂n)   → T(n) = (1*16*log₂16) / (8*log₂8) = 8/3 seconds
O(n²)        → T(n) = (1*16²) / 8² = 4 seconds
O(n³)        → T(n) = (1*16³) / 8³ = 8 seconds
O(2ⁿ)        → T(n) = (1*2¹⁶) / 2⁸ = 2⁸ seconds = 256 seconds
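
These answers can be computed mechanically: under the simplifying assumption T(n) = c * f(n), the constant cancels and T(16) = T(8) * f(16) / f(8). A small sketch:

```python
import math

orders = [
    ("O(1)",        lambda n: 1),
    ("O(log2 n)",   lambda n: math.log2(n)),
    ("O(n)",        lambda n: n),
    ("O(n*log2 n)", lambda n: n * math.log2(n)),
    ("O(n^2)",      lambda n: n ** 2),
    ("O(n^3)",      lambda n: n ** 3),
    ("O(2^n)",      lambda n: 2 ** n),
]

T8 = 1  # seconds at problem size 8
for name, f in orders:
    T16 = T8 * f(16) / f(8)     # the constant c cancels in the ratio
    print(f"{name:<12} T(16) = {T16:g} seconds")
```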
Properties of Growth-Rate Functions

1. We can ignore low-order terms in an algorithm’s growth-rate function.
– If an algorithm is O(n³+4n²+3n), it is also O(n³).
– We only use the highest-order term as the algorithm’s growth-rate function.

2. We can ignore a multiplicative constant in the highest-order term of an algorithm’s growth-rate function.
– If an algorithm is O(5n³), it is also O(n³).

3. O(f(n)) + O(g(n)) = O(f(n)+g(n))
– We can combine growth-rate functions.
– If an algorithm is O(n³) + O(4n²), it is also O(n³+4n²) → so, it is O(n³).
– Similar rules hold for multiplication.
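
A quick numeric illustration of properties 1 and 2: the ratio between the full growth-rate function and its highest-order term approaches a constant as n grows, which is why the low-order terms and the multiplicative constant do not affect the order.

```python
# Ratios settle toward a constant, so the asymptotic classification is unchanged.
for n in (10, 100, 1_000, 10_000):
    full = n ** 3 + 4 * n ** 2 + 3 * n       # property 1: low-order terms present
    scaled = 5 * n ** 3                      # property 2: multiplicative constant present
    print(f"n={n:>6}  (n^3+4n^2+3n)/n^3 = {full / n ** 3:.4f}   (5n^3)/n^3 = {scaled / n ** 3:.1f}")
```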
