Unit II - Asymptotic notations


UNIT -II

Asymptotic Notations

Asymptotic Notations and Basic Efficiency Classes, Informal Introduction, O-notation, Ω-notation, Θ-notation, Mathematical Analysis of Non-Recursive Algorithms, Mathematical Analysis of Recursive Algorithms.

2.1 Asymptotic notation

➢ Asymptotic notation is a way to describe the running time or space complexity of an algorithm based on the input size.
➢ It is commonly used in complexity analysis to describe how an algorithm performs as the size of the input grows.
➢ The three most commonly used notations are
1)Big O notation (O)
2)Big Omega notation (Ω)
3)Theta notation (Θ)

Big O notation (O)

DEFINITION
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≤ c·g(n) for all n ≥ n0.

This notation provides an upper bound on the growth rate of an algorithm's running time or space usage. It represents the worst-case scenario, i.e., the maximum amount of time or space an algorithm may need to solve a problem. For example, if an algorithm's running time is O(n), the running time grows at most linearly with the input size n.
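As a concrete illustration, consider t(n) = 100n + 5. Since 100n + 5 ≤ 100n + 5n = 105n for all n ≥ 1, the definition is satisfied with c = 105 and n0 = 1, so 100n + 5 ∈ O(n). The following minimal Python sketch (the names t and g and the tested range are illustrative choices, not part of the definition) spot-checks the inequality numerically:

# Spot-check that t(n) = 100n + 5 <= c*g(n) for all tested n >= n0,
# using the witnesses c = 105, n0 = 1 derived algebraically above.
def t(n):
    return 100 * n + 5

def g(n):
    return n

c, n0 = 105, 1
assert all(t(n) <= c * g(n) for n in range(n0, 100_000))
print("t(n) <= 105*g(n) holds for every tested n >= 1")

A finite check cannot prove an asymptotic claim; the algebra above does, and the code merely illustrates what the constants c and n0 mean.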

Big Omega notation (Ω)

DEFINITION
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≥ c·g(n) for all n ≥ n0.

This notation provides a lower bound on the growth rate of an algorithm's running time or space usage. It represents the best-case scenario, i.e., the minimum amount of time or space an algorithm may need to solve a problem. For example, if an algorithm's running time is Ω(n), the running time grows at least linearly with the input size n.
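As a concrete illustration, t(n) = n³ is in Ω(n²): since n³ ≥ n² for all n ≥ 0, the definition is satisfied with c = 1 and n0 = 0.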

Theta notation (Θ)

DEFINITION
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0.
This notation provides both an upper and a lower bound on the growth rate of an algorithm's running time or space usage, i.e., a tight bound on its order of growth. It is often associated with the average-case scenario, i.e., the amount of time or space an algorithm typically needs to solve a problem. For example, if an algorithm's running time is Θ(n), then the running time grows exactly linearly (up to constant factors) with the input size n.
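As a concrete illustration, consider t(n) = (1/2)n(n − 1). For the upper bound we can drop the negative term: (1/2)n(n − 1) = (1/2)n² − (1/2)n ≤ (1/2)n² for all n ≥ 0. For the lower bound: (1/2)n(n − 1) = (1/2)n² − (1/2)n ≥ (1/2)n² − (1/2)n·(1/2)n = (1/4)n² for all n ≥ 2. Hence (1/2)n(n − 1) ∈ Θ(n²), with c1 = 1/2, c2 = 1/4, and n0 = 2.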

There are two more notations, little o and little omega (ω).
Little o provides a strict upper bound: the equality case allowed by Big O is removed, so t(n) ∈ o(g(n)) means t(n) grows strictly slower than g(n).
Little omega provides a strict lower bound: the equality case allowed by Big Omega is removed, so t(n) ∈ ω(g(n)) means t(n) grows strictly faster than g(n).

In general, the choice of asymptotic notation depends on the problem and the specific algorithm
used to solve it. It is important to note that asymptotic notation does not provide an exact
running time or space usage for an algorithm, but rather a description of how the algorithm
scales with respect to input size. It is a useful tool for comparing the efficiency of different
algorithms and for predicting how they will perform on large input sizes.

2.1.1 Big-O Notation (O-notation)

Big-O notation represents the upper bound of the running time of an algorithm.
Therefore, it gives the worst-case complexity of an algorithm.

➢ It is the most widely used notation for asymptotic analysis.
➢ It specifies the upper bound of a function.
➢ It describes the maximum time required by an algorithm, i.e., the worst-case time complexity.
➢ It returns the highest possible output value (big-O) for a given input.
➢ Big-O (worst case): it describes the condition under which an algorithm takes the longest possible time to complete its execution.

If f(n) describes the running time of an algorithm, then f(n) is O(g(n)) if there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0.

The execution time serves as an upper bound on the algorithm's time complexity.

Mathematical Representation of Big-O Notation

O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }

For example, consider the case of insertion sort. It takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of insertion sort is O(n²).

The Big-O notation is useful when we only have an upper bound on the time complexity of an algorithm, and often such a bound can be found by simply inspecting the algorithm. For example,

{ 100, log(2000), 10^4 } belongs to O(1)
{ n/4, 2n + 3, n/100 + log(n) } belongs to O(n)
{ n² + n, 2n², n² + log(n) } belongs to O(n²)

Note: these classes are nested, O(1) ⊆ O(n) ⊆ O(n²), so each set above also belongs to every class listed below it; we may write such unions because O provides an upper bound that need not be tight.

2.1.2 Omega Notation (Ω-Notation)

Omega notation represents the lower bound of the running time of an algorithm. Thus, it
provides the best case complexity of an algorithm.

The execution time serves as a lower bound on the algorithm's time complexity. It describes the scenario in which the algorithm completes its execution in the shortest possible time.

Let g and f be functions from the set of natural numbers to itself. The function f is said to be Ω(g) if there exist a constant c > 0 and a natural number n0 such that c·g(n) ≤ f(n) for all n ≥ n0.

Mathematical Representation of Omega notation

Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }

Let us consider the same insertion sort example here. The time complexity of insertion sort can be written as Ω(n), but this is not very useful information, as we are generally interested in the worst case and sometimes in the average case.

Examples

{ n² + n, 2n², n² + log(n) } belongs to Ω(n²)
{ n/4, 2n + 3, n/100 + log(n) } belongs to Ω(n)
{ 100, log(2000), 10^4 } belongs to Ω(1)

Note: these classes are nested in the opposite direction, Ω(n²) ⊆ Ω(n) ⊆ Ω(1), so each set above also belongs to every class listed below it; we may write such unions because Ω provides a lower bound that need not be tight.

2.1.3 Theta Notation (Θ-Notation)

Theta notation encloses the function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it is often used for analysing the average-case complexity of an algorithm.

Theta (average case): the average-case running time is obtained by adding the running times for each possible input combination and taking their average.

Let g and f be functions from the set of natural numbers to itself. The function f is said to be Θ(g) if there exist constants c1, c2 > 0 and a natural number n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.

Mathematical Representation of Theta notation

Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 * g(n) ≤ f(n) ≤ c2 *
g(n) for all n ≥ n0}

The above expression can be read as follows: if f(n) is Θ(g(n)), then the value of f(n) always lies between c1·g(n) and c2·g(n) for large values of n (n ≥ n0). The definition also requires that f(n) be non-negative for all n ≥ n0.
The execution time serves as both a lower and an upper bound on the algorithm's time complexity, i.e., it bounds the running time from both sides for a given input size.

A simple way to get the Theta notation of an expression is to drop the low-order terms and ignore the leading constants. For example, 3n³ + 6n² + 6000 = Θ(n³). Dropping lower-order terms is always safe because there is always a value of n beyond which n³ dominates n², irrespective of the constants involved. For a given function g(n), Θ(g(n)) denotes the set of functions defined above.
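A minimal numeric sketch in Python (the polynomial is hard-coded for illustration) shows why the lower-order terms stop mattering: the ratio of the full expression to n³ settles toward the leading constant 3 as n grows, so the expression is eventually sandwiched between constant multiples of n³.

# Print the ratio (3n^3 + 6n^2 + 6000) / n^3 for growing n.
# The ratio approaches 3, which is what makes the expression Theta(n^3).
for n in [10, 100, 1000, 10000]:
    f = 3 * n**3 + 6 * n**2 + 6000
    print(n, f / n**3)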

Examples
{ 100, log(2000), 10^4 } belongs to Θ(1)
{ n/4, 2n + 3, n/100 + log(n) } belongs to Θ(n)
{ n² + n, 2n², n² + log(n) } belongs to Θ(n²)

Note: Θ provides exact bounds.


Properties of Asymptotic Notations

1. General Properties

If f(n) is O(g(n)) then a*f(n) is also O(g(n)), where a is a constant.

Example
f(n) = 2n² + 5 is O(n²);
then 7*f(n) = 7(2n² + 5) = 14n² + 35 is also O(n²).

Similarly, this property holds for both the Θ and Ω notations. We can say:
If f(n) is Θ(g(n)), then a*f(n) is also Θ(g(n)), where a is a constant.
If f(n) is Ω(g(n)), then a*f(n) is also Ω(g(n)), where a is a constant.

2. Transitive Properties

If f(n) is O(g(n)) and g(n) is O(h(n)) then f(n) = O(h(n)).

Example

If f(n) = n, g(n) = n², and h(n) = n³, then
n is O(n²) and n² is O(n³); hence n is O(n³).

Similarly, this property holds for both the Θ and Ω notations. We can say:

If f(n) is Θ(g(n)) and g(n) is Θ(h(n)), then f(n) = Θ(h(n)).
If f(n) is Ω(g(n)) and g(n) is Ω(h(n)), then f(n) = Ω(h(n)).

3. Reflexive Properties
Reflexive properties are easy to understand after the transitive ones.
If f(n) is given, then f(n) is O(f(n)), since the maximum value of f(n) is f(n) itself. Hence f(n) and O(f(n)) are always tied in a reflexive relation.

Example
f(n) = n²; then f(n) is O(n²), i.e., O(f(n)).

Similarly, this property holds for both the Θ and Ω notations. We can say:
If f(n) is given, then f(n) is Θ(f(n)).
If f(n) is given, then f(n) is Ω(f(n)).

4. Symmetric Properties

If f(n) is Θ(g(n)) then g(n) is Θ(f(n)).

Example

If f(n) = n² and g(n) = n²,
then f(n) = Θ(n²) and g(n) = Θ(n²).
This property is satisfied only by the Θ notation.

5. Transpose Symmetric Properties

If f(n) is O(g(n)) then g(n) is Ω (f(n)).


Example:
If f(n) = n and g(n) = n²,
then n is O(n²) and n² is Ω(n).
This property relates only the O and Ω notations.
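A minimal numeric sketch (Python; the constants c = 1 and n0 = 1 are illustrative witnesses, not unique choices) of the transpose symmetry above: the same inequality n ≤ c·n² that witnesses n ∈ O(n²), read in the other direction, gives n² ≥ (1/c)·n, which witnesses n² ∈ Ω(n).

# One inequality, two readings: f(n) <= c*g(n) shows f is O(g),
# and equivalently g(n) >= (1/c)*f(n) shows g is Omega(f).
c, n0 = 1, 1
for n in range(n0, 1000):
    assert n <= c * n**2         # f(n) = n is O(n^2)
    assert n**2 >= (1 / c) * n   # g(n) = n^2 is Omega(n)
print("transpose symmetry checked for f(n) = n, g(n) = n^2 up to n = 999")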

Basic Efficiency Classes

Even though the efficiency analysis framework puts together all the functions whose orders of growth differ by a constant multiple, there are still infinitely many such classes. For example, the exponential functions aⁿ have different orders of growth for different values of the base a.

Therefore, it may come as a surprise that the time efficiencies of a large number of algorithms
fall into only a few classes. These classes are listed below in increasing order of their orders of
growth, along with their names.

One could object that classifying algorithms by their asymptotic efficiency is of little practical use, since the values of multiplicative constants are usually left unspecified. This leaves open the possibility of an algorithm in a worse efficiency class running faster than an algorithm in a better efficiency class for inputs of realistic sizes. For example, if the running time of one algorithm is n³ while the running time of the other is 10⁶n², the cubic algorithm will outperform the quadratic algorithm unless n exceeds 10⁶. A few such anomalies are indeed known. Fortunately, multiplicative constants usually do not differ that drastically.

Basic asymptotic efficiency classes

1 (constant): Short of best-case efficiencies, very few reasonable examples can be given, since an algorithm's running time typically goes to infinity when its input size grows infinitely large.

log n (logarithmic): Typically a result of cutting a problem's size by a constant factor on each iteration of the algorithm. Note that a logarithmic algorithm cannot take into account all its input or even a fixed fraction of it: any algorithm that does so will have at least linear running time.

n (linear): Algorithms that scan a list of size n (e.g., sequential search) belong to this class.

n log n (linearithmic): Many divide-and-conquer algorithms, including mergesort and quicksort in the average case, fall into this category.

n² (quadratic): Typically characterizes the efficiency of algorithms with two embedded loops (see the next section). Elementary sorting algorithms and certain operations on n × n matrices are standard examples.

n³ (cubic): Typically characterizes the efficiency of algorithms with three embedded loops (see the next section). Several nontrivial algorithms from linear algebra fall into this class.

2ⁿ (exponential): Typical for algorithms that generate all subsets of an n-element set. Often the term "exponential" is used in a broader sense to include this and larger orders of growth as well.

n! (factorial): Typical for algorithms that generate all permutations of an n-element set.
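The ordering of these classes can be made concrete by tabulating the functions for a few input sizes. The sketch below (Python; the sample sizes are arbitrary choices for illustration) prints one row per n:

# Tabulate the basic efficiency classes for a few input sizes n,
# illustrating the increasing order of growth:
# 1, log n, n, n log n, n^2, n^3, 2^n, n!.
import math

print(f"{'n':>4} {'log n':>7} {'n log n':>9} {'n^2':>7} {'n^3':>8} {'2^n':>8} {'n!':>16}")
for n in [2, 4, 8, 16]:
    print(f"{n:>4} {math.log2(n):>7.1f} {n * math.log2(n):>9.1f} "
          f"{n**2:>7} {n**3:>8} {2**n:>8} {math.factorial(n):>16}")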

2.2 Mathematical Analysis of Non-Recursive Algorithms

General Plan for Analyzing the Time Efficiency of Non-Recursive Algorithms

1. Decide on a parameter (or parameters) indicating an input's size.
2. Identify the algorithm's basic operation. (As a rule, it is located in the innermost loop.)
3. Check whether the number of times the basic operation is executed depends only on the size of an input. If it also depends on some additional property, the worst-case, average-case, and, if necessary, best-case efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm's basic operation is executed.
5. Using standard formulas and rules of sum manipulation, either find a closed-form formula for the count or, at the very least, establish its order of growth.

EXAMPLE 1
Consider the problem of finding the value of the largest element in a list of n numbers

ALGORITHM MaxElement(A[0..n − 1])
//Determines the value of the largest element in a given array
//Input: An array A[0..n − 1] of real numbers
//Output: The value of the largest element in A
maxval ← A[0]
for i ← 1 to n − 1 do
    if A[i] > maxval
        maxval ← A[i]
return maxval
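Applying the general plan: the input-size parameter is n, and the basic operation is the comparison A[i] > maxval in the loop. The comparison is executed exactly once for each value of i from 1 to n − 1, regardless of the input values, so the count is

C(n) = Σ(i = 1 to n − 1) 1 = n − 1 ∈ Θ(n).

A minimal Python transcription of the pseudocode (the comparison counter is instrumentation added for illustration; it is not part of the algorithm):

# Python version of MaxElement with a counter for the basic operation.
def max_element(a):
    comparisons = 0
    maxval = a[0]
    for i in range(1, len(a)):
        comparisons += 1          # one comparison per loop iteration
        if a[i] > maxval:
            maxval = a[i]
    return maxval, comparisons

val, count = max_element([3, 1, 4, 1, 5, 9, 2, 6])
print(val, count)  # 9 7 -- for n = 8 elements, n - 1 = 7 comparisons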

EXAMPLE 2

Consider the element uniqueness problem: check whether all the elements in a given array of
n elements are distinct.

ALGORITHM UniqueElements(A[0..n − 1])
//Determines whether all the elements in a given array are distinct
//Input: An array A[0..n − 1]
//Output: Returns “true” if all the elements in A are distinct and “false” otherwise
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        if A[i] = A[j] return false
return true
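Here the number of comparisons depends not only on n but also on whether equal elements occur and, if so, where, so the worst case must be analyzed separately. In the worst case (no two elements are equal, or the only pair of equal elements is the last pair compared), the inner loop runs in full for every i, and the comparison count is

C_worst(n) = Σ(i = 0 to n − 2) Σ(j = i + 1 to n − 1) 1 = Σ(i = 0 to n − 2) (n − 1 − i) = (n − 1) + (n − 2) + ... + 1 = n(n − 1)/2 ∈ Θ(n²).

A minimal Python transcription (the counter is instrumentation added for illustration):

# Python version of UniqueElements with a counter for the basic operation.
def unique_elements(a):
    n = len(a)
    comparisons = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            comparisons += 1      # one comparison per pair (i, j)
            if a[i] == a[j]:
                return False, comparisons
    return True, comparisons

ok, count = unique_elements([1, 2, 3, 4, 5])
print(ok, count)  # True 10 -- for n = 5, n(n - 1)/2 = 10 comparisons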

2.3 Mathematical Analysis of Recursive Algorithms

General Plan for Analyzing the Time Efficiency of Recursive Algorithms

1. Decide on a parameter (or parameters) indicating an input's size.
2. Identify the algorithm's basic operation.
3. Check whether the number of times the basic operation is executed can vary on different inputs of the same size; if it can, the worst-case, average-case, and best-case efficiencies must be investigated separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the number of times the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its solution.

EXAMPLE 1
Compute the factorial function F (n) = n! for an arbitrary nonnegative integer n.

n! = 1 · 2 · ... · (n − 1) · n for n ≥ 1, and 0! = 1 by definition.

ALGORITHM F(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0
    return 1
else
    return F(n − 1) ∗ n
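Applying the plan: the input-size parameter is n, and the basic operation is the multiplication in F(n − 1) ∗ n. Its count M(n) satisfies the recurrence

M(n) = M(n − 1) + 1 for n > 0, with the initial condition M(0) = 0,

since computing F(n − 1) takes M(n − 1) multiplications and one more multiplies the result by n. Solving by backward substitution gives M(n) = M(n − 1) + 1 = M(n − 2) + 2 = ... = M(0) + n = n, so M(n) = n ∈ Θ(n).

A minimal Python transcription (the one-element list used as a counter is instrumentation for illustration, not part of the algorithm):

# Recursive factorial with a counter for the multiplications performed.
def factorial(n, counter):
    if n == 0:
        return 1
    counter[0] += 1               # one multiplication per recursive step
    return factorial(n - 1, counter) * n

counter = [0]
print(factorial(5, counter), counter[0])  # 120 5 -- M(5) = 5 multiplications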

EXAMPLE 2

Consider the following recursive algorithm for computing the sum of the first n cubes:
S(n) = 1³ + 2³ + ... + n³.

ALGORITHM S(n)
//Computes the sum of the first n cubes recursively
//Input: A positive integer n
//Output: The sum of the first n cubes
if n = 1
    return 1
else
    return S(n − 1) + n ∗ n ∗ n
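Counting multiplications as the basic operation (n ∗ n ∗ n performs two of them), the recurrence is

M(n) = M(n − 1) + 2 for n > 1, with the initial condition M(1) = 0,

which solves by backward substitution to M(n) = 2(n − 1) ∈ Θ(n). The result can also be checked against the well-known closed form S(n) = [n(n + 1)/2]². A minimal Python check (the closed-form comparison is an added sanity check, not part of the algorithm):

# Recursive sum of the first n cubes, checked against (n(n + 1)/2)^2.
def s(n):
    if n == 1:
        return 1
    return s(n - 1) + n * n * n

for n in [1, 5, 10, 100]:
    assert s(n) == (n * (n + 1) // 2) ** 2
print("S(n) matches (n(n + 1)/2)^2 for all tested n")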
