2. Algorithm Analysis
What is an Algorithm?
Definition: An algorithm is a finite set of unambiguous instructions which, if
followed, accomplishes a certain task.
Analysis of Algorithms
The analysis of an algorithm deals with the amount of space and time it consumes.
To begin our study of algorithm analysis, let us look at some important points.
What is a "good" algorithm?
There are two important meanings: Qualitative and Quantitative
1. Qualitative
User’s point of view
- User friendliness
- Good I/O
- Good termination
Programmer’s point of view
- Modularity
- Documentation
- Generality
- Machine and operating system independence (portability)
2. Quantitative
The quantitative view measures the time and space an algorithm requires. Consider
the following two code fragments, each of which computes the row sums sum[i] and
the grand total gt of an n × n array a:
1)
gt = 0;
for (i = 0; i < n; i++)
{
    sum[i] = 0;
    for (j = 0; j < n; j++)
    {
        sum[i] = sum[i] + a[i][j];
        gt = gt + a[i][j];
    }
}
Total time requirement: 2n² + n
2)
gt = 0;
for (i = 0; i < n; i++)
{
    sum[i] = 0;
    for (j = 0; j < n; j++)
        sum[i] = sum[i] + a[i][j];
    gt = gt + sum[i];
}
Total time requirement: n² + 2n
Of the two algorithms above, the second is guaranteed to execute faster than the
first for any non-trivial value of n. But note that "faster" here may not have much
significance in the real world of computing: because of the phenomenal execution
speeds and very large amounts of memory available on modern computers,
proportionately small differences between algorithms usually have little practical
impact. Such considerations have led computer scientists to devise a method of
algorithm classification which makes the notion of order of magnitude more precise
as it applies to time and space considerations. This method of classification is
typically referred to as Big-O notation.
1. How can one determine the function f(n) which categorizes a particular
algorithm?
It is generally the case that, by analyzing the loop structure of an algorithm,
we can estimate the number of run-time operations (or the amount of memory)
it requires as a sum of several terms, each dependent on n (the number of
items being processed by the algorithm). That is, typically we are able to
express the number of run-time operations (or the amount of memory) as a sum
of the form
f1(n) + f2(n) + … + fk(n)
Moreover, it is also typical that we can identify one of the terms in the expression
as the dominant term. A dominant term is one which, for large values of n,
becomes so large that it allows us to ignore all the other terms from a
Big-O perspective. For instance, suppose that we had an expression
involving two terms such as
n² + 6n
Here, the n² term dominates the 6n term since, for n ≥ 6, we have
n² + 6n ≤ n² + n² = 2n²
Thus, n² + 6n is an expression which leads to an O(n²) categorization
because of the dominance of the n² term. In general, the problem of Big-O
categorization reduces to finding the dominant term in an expression
representing the number of operations or amount of memory required by an
algorithm.
Exercise:
Perform a Big-O analysis for the statements inside each of the following nested
loop constructs:
a)
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        …
b)
for (i = 0; i < n; i++)
{
    j = n;
    while (j > 0)
    {
        …
        j = j / 2;
    }
}
c)
i = 1;
do
{
    j = 1;
    do
    {
        …
        j = 2 * j;
    } while (j < n);
    i++;
} while (i < n);
Ex1:
f(n) = n² + 100n
f(n) = n² + 100n ≤ n² + n² = 2n² = C·g(n) for n ≥ 100 = n0 (C = 2 and n0 = 100)
Therefore f(n) = O(g(n)) = O(n²).
This same f(n) is also O(n³), since n² + 100n is less than or equal to 2n³ for all n
greater than or equal to 8.
Given a function f(n), there may be many functions g(n) such that f(n) is O(g(n)).
If f(n) is O(g(n)), then "eventually" (that is, for n ≥ n0) f(n) becomes permanently
smaller than or equal to some multiple of g(n). In a sense we are saying that f(n) is
bounded by g(n) from above, or that f(n) is a "smaller" function than g(n). Another
formal way of saying this is that f(n) is asymptotically bounded by g(n). Yet another
interpretation is that f(n) grows more slowly than g(n), since, proportionately (that
is, up to the factor of C), g(n) eventually becomes larger.
Ex2:
f(n) = 8n + 128
f(n) = 8n + 128 ≤ 8n + n = 9n = C·g(n), for n ≥ 128 = n0 (C = 9 and n0 = 128)
Therefore f(n) = O(g(n)) = O(n), for n ≥ 128.
We also wish to show that this f(n) = O(n²). According to the definition of Big-O,
to show this we need to find an integer n0 and a constant C > 0 such that for all
integers n ≥ n0, f(n) ≤ Cn².
It does not matter what the particular constants are, as long as they exist! For
example, take C = 1. Then we need
8n + 128 ≤ n²
0 ≤ n² − 8n − 128
0 ≤ (n − 16)(n + 8)
Since (n + 8) > 0 for all values of n ≥ 0, we conclude that (n − 16) ≥ 0. That is,
n0 = 16.
So we have that for C = 1 and n0 = 16, f(n) ≤ Cn² for all integers n ≥ n0. Hence,
f(n) = O(n²). Graphically, the function n² exceeds the function 8n + 128 to the
right of n = 16. Of course, there are many other values of C and n0 that will do.
Although a function may be asymptotically bounded by many other functions, we
usually look for an asymptotic bound that is a single term with a leading coefficient
of 1 and that is as "close a fit" as possible.
Ex3: f(n) = n
f(n) = n ≤ n + n = 2n = C·g(n) for n ≥ 1 = n0 (C = 2 and n0 = 1)
f(n) = O(g(n)) = O(n)
Ex4: f(n) = n² + 3n + 10
f(n) = n² + 3n + 10 ≤ n² + n² = 2n² = C·g(n) for n ≥ 5 = n0
(C = 2 and n0 = 5, since 3n + 10 ≤ n² first holds at n = 5)
f(n) = O(g(n)) = O(n²)
3.1. Conventions for Writing Big-O Expressions
Certain conventions have evolved concerning how Big-O expressions are normally
written. In particular, some Big-O expressions occur so frequently that they are
given names. The following table lists the commonly occurring Big-O expressions
and the name given to each of them:
Expression    Name
O(1)          Constant
O(log n)      Logarithmic
O(log² n)     Log squared
O(n)          Linear
O(n log n)    n log n
O(n²)         Quadratic
O(n³)         Cubic
O(2ⁿ)         Exponential