Module 2
The analysis of an algorithm generally focuses on CPU (time) usage, memory usage, disk usage, and network usage. All are important, but the primary concern is CPU time.
Performance: how much time/memory/disk/etc. is used when a program is run. This depends on the machine, compiler, etc., as well as the code we write.
Complexity: how the resource requirements of a program or algorithm scale, i.e. what happens as the size of the problem being solved by the code gets larger.
Algorithm Analysis:
Algorithm analysis is an important part of computational complexity theory, which provides theoretical estimates of the resources an algorithm requires to solve a specific computational problem. Analysis of algorithms is the determination of the amount of time and space an algorithm requires to execute.
Why is the Analysis of Algorithms Important?
Time Complexity:
The time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the input. Note that the time to run is a function of the length of the input, not the actual execution time on the machine on which the algorithm is running.
Space Complexity:
The space complexity of an algorithm quantifies the amount of space taken by an
algorithm to run as a function of the length of the input.
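The case analyses below refer to a linear search routine whose code did not survive in these notes. Here is a minimal Python sketch, assuming a function search(arr, x) that returns the index of x in arr, or -1 if x is absent:

def search(arr, x):
    # Compare x with each element of arr, one by one.
    for i in range(len(arr)):
        if arr[i] == x:
            return i  # found: best case is x at index 0
    return -1  # not found: x was compared with all n elements

For an input of length n, this performs between 1 and n comparisons, which is exactly what the worst, average, and best case analyses below count.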
Worst Case Analysis (Usually done)
In the worst-case analysis, we calculate an upper bound on the running time of an algorithm. We must know the case that causes the maximum number of operations to be executed. For linear search, the worst case happens when the element to be searched (x in the code above) is not present in the array. When x is not present, the search() function compares it with all the elements of arr[] one by one. Therefore, the worst-case time complexity of linear search would be Θ(n).
Average Case Analysis (Sometimes done)
In the average case analysis, we take all possible inputs and calculate the computing time for each of them, sum all the calculated values, and divide the sum by the total number of inputs. We must know (or predict) the distribution of cases. For the linear search problem, let us assume that all cases are uniformly distributed (including the case of x not being present in the array). So we sum all the cases and divide the sum by (n+1). Following is the value of the average-case time complexity.
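The value itself is missing from these notes; under the uniform-distribution assumption just stated, the standard computation (reconstructed here) is:
Average-case time = (θ(1) + θ(2) + ... + θ(n) + θ(n+1)) / (n+1)
                  = θ((n+1)(n+2)/2) / (n+1)
                  = θ(n)
where θ(i) is the cost when x is found at the i-th position (with i = n+1 standing for the case where x is not present). So the average-case time complexity of linear search is also Θ(n).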
Best Case Analysis (Very rarely done)
In the best case analysis, we calculate a lower bound on the running time of an algorithm. We must know the case that causes the minimum number of operations to be executed. In the linear search problem, the best case occurs when x is present at the first location. The number of operations in the best case is constant (not dependent on n), so the time complexity in the best case would be Θ(1).
Asymptotic Analysis
Asymptotic analysis is the big idea that handles these issues in analyzing algorithms. In asymptotic analysis, we evaluate the performance of an algorithm in terms of input size (we don’t measure the actual running time). We calculate how the time (or space) taken by an algorithm increases with the input size.
Asymptotic Notations
Asymptotic notations are mathematical tools to represent the time complexity of algorithms for asymptotic analysis.
The time function of an algorithm is represented by T(n), where n is the input size.
We have the following asymptotic notations:
O (Big Oh)
Ω (Big Omega)
Θ (Big Theta)
Big Oh (O)
The Big O notation defines an upper bound of an algorithm; it bounds a function only from above.
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0 }
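As a quick worked instance (added for illustration): f(n) = 3n + 2 is O(n), since 0 <= 3n + 2 <= 4n for all n >= 2, so the definition is satisfied with c = 4 and n0 = 2.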
Big Omega (Ω)
Just as Big O notation provides an asymptotic upper bound on a function, Ω notation provides an asymptotic lower bound.
Ω notation can be useful when we have a lower bound on the time complexity of an algorithm. As discussed above, the best-case performance of an algorithm is generally not useful, so the Omega notation is the least used of the three.
For a given function g(n), we denote by Ω(g(n)) the set of functions
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0 }
Big Theta (Θ)
The Theta notation bounds a function from above and below, so it defines exact asymptotic behaviour. A simple way to get the Theta notation of an expression is to drop low-order terms and ignore leading constants.
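For completeness, the standard set definition, parallel to those given for O and Ω above (it is not spelled out in these notes):
Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 }
For example, 3n^2 + 2n + 5 = Θ(n^2): dropping the low-order terms 2n and 5 and ignoring the leading constant 3 leaves n^2.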
Little o (ο)
Big O is used as a tight upper bound on the growth of an algorithm’s effort (this effort is described by the function f(n)), even though, as written, it can also be a loose upper bound. Little o notation (ο()) is used to describe an upper bound that cannot be tight.
Definition: Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is ο(g(n)) (or f(n) ∈ ο(g(n))) if for any real constant c > 0, there exists an integer constant n0 ≥ 1 such that 0 ≤ f(n) < c*g(n) for every integer n ≥ n0.
For example, 2n = ο(n^2), but 2n is not ο(n).
Little omega (ω)
Definition: Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is ω(g(n)) (or f(n) ∈ ω(g(n))) if for any real constant c > 0, there exists an integer constant n0 ≥ 1 such that f(n) > c*g(n) ≥ 0 for every integer n ≥ n0.
For example, n^2/2 = ω(n), but n^2/2 is not ω(n^2).
Recurrence Relation
A recurrence is an equation or inequality that describes a function in terms of its values on smaller inputs. To solve a recurrence relation means to obtain a function defined on the natural numbers that satisfies the recurrence.
For example, the worst-case running time T(n) of the MERGE SORT procedure is described by the recurrence
T(n) = Θ(1)             if n = 1
T(n) = 2T(n/2) + Θ(n)   if n > 1
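To make the recurrence concrete, here is a standard merge sort sketch in Python (illustrative; the procedure itself is not given in these notes). The two recursive calls on the halves correspond to the 2T(n/2) term, and merging corresponds to the Θ(n) term:

def merge_sort(arr):
    # Base case: a list of length 0 or 1 is already sorted -> Θ(1)
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    # Two subproblems of size n/2 -> the 2T(n/2) term
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merging two sorted halves takes linear time -> the Θ(n) term
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]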
There are four methods for solving recurrences:
1. Substitution Method
2. Iteration Method
3. Recursion Tree Method
4. Master Method
1. Substitution Method:
The Substitution Method consists of two main steps:
1. Guess the solution.
2. Use mathematical induction to find the boundary condition and show that the guess is correct.
Example: Consider the recurrence
T(n) = T(n/2) + 1
We have to show that it is asymptotically bounded by O(log n).
Solution:
For T(n) = O(log n), we have to show that, for some constant c,
T(n) ≤ c log n.
Put this in the given recurrence equation (taking log to base 2):
T(n) = T(n/2) + 1
     ≤ c log(n/2) + 1
     = c log n - c + 1
     ≤ c log n    for every c ≥ 1,
which confirms the guess, so T(n) = O(log n).
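As a quick sanity check (illustrative, assuming the recurrence T(n) = T(n/2) + 1 with T(1) = 1 as reconstructed above), the computed values track log2(n):

import math

def T(n):
    # T(1) = 1; T(n) = T(n/2) + 1, using integer halving
    return 1 if n <= 1 else T(n // 2) + 1

for n in [1, 2, 8, 1024, 10**6]:
    # T(n) equals floor(log2 n) + 1, i.e. T(n) = O(log n)
    print(n, T(n), math.floor(math.log2(n)) + 1)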
2. Iteration Method
It means to expand the recurrence and express it as a summation of terms of n and the initial condition.
Example 1: Consider the recurrence
T(n) = 1         if n = 1
T(n) = 2T(n-1)   if n > 1
Solution:
T(n) = 2T(n-1)
     = 2[2T(n-2)] = 2^2 T(n-2)
     = 4[2T(n-3)] = 2^3 T(n-3)
     = 8[2T(n-4)] = 2^4 T(n-4)
In general, T(n) = 2^i T(n-i)    (Eq. 1)
Put n - i = 1, i.e. i = n - 1, in (Eq. 1):
T(n) = 2^(n-1) T(1)
     = 2^(n-1) * 1    {T(1) = 1 ... given}
     = 2^(n-1)
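A quick numerical check of this closed form (added for illustration, not part of the original notes):

def T(n):
    # Recurrence from the example: T(1) = 1, T(n) = 2*T(n-1) for n > 1
    return 1 if n == 1 else 2 * T(n - 1)

for n in range(1, 11):
    assert T(n) == 2 ** (n - 1)
print("T(n) = 2^(n-1) holds for n = 1..10")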
3. Recursion Tree Method
It is sometimes difficult to come up with a good guess. In a recursion tree, each node represents the cost of a single subproblem. We sum the costs within each level of the tree to obtain a set of per-level costs, and then sum all the per-level costs to determine the total cost of all levels of the recursion.
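As a worked illustration (not in the original notes), consider the merge sort recurrence T(n) = 2T(n/2) + Θ(n) from above:
Level 0: one problem of size n, cost n.
Level 1: two problems of size n/2, total cost 2 * (n/2) = n.
Level 2: four problems of size n/4, total cost 4 * (n/4) = n.
...
Every level costs n, and there are about log2(n) + 1 levels before the subproblems reach size 1, so the total cost is n * (log2(n) + 1) = Θ(n log n).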
4. Master Method
The Master Method is used for solving recurrences of the form
T(n) = a T(n/b) + f(n)
where a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function; n/b can be interpreted as floor(n/b) or ceil(n/b).
Master Theorem:
It is possible to obtain an asymptotically tight bound in these three cases:
Case 1: If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
Case 2: If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) log n).
Case 3: If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a*f(n/b) ≤ c*f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
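As a worked application (added for illustration, using the merge sort recurrence from earlier): for T(n) = 2T(n/2) + Θ(n) we have a = 2, b = 2, so n^(log_b a) = n^(log_2 2) = n. Since f(n) = Θ(n) = Θ(n^(log_b a)), Case 2 applies and T(n) = Θ(n log n).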
Analysis of Strassen’s Matrix Multiplication
The running time satisfies the recurrence
T(n) = c                  if n = 1
T(n) = 7T(n/2) + d*n^2    otherwise
where c and d are constants.
Using this recurrence relation, we get T(n) = O(n^(log_2 7)).
Hence, the complexity of Strassen’s matrix multiplication algorithm is O(n^(log_2 7)) ≈ O(n^2.807).
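This also follows from the Master Theorem stated above (a check added to these notes): here a = 7, b = 2, f(n) = d*n^2, and n^(log_b a) = n^(log_2 7) ≈ n^2.807. Since n^2 = O(n^(log_2 7 - ε)) for, say, ε = 0.8, Case 1 applies, giving T(n) = Θ(n^(log_2 7)).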