Algorithm Analysis

The document provides a comprehensive overview of algorithm analysis, focusing on time and space efficiency, and the theoretical and empirical approaches to analyze them. It discusses concepts such as best-case, average-case, and worst-case scenarios, as well as asymptotic notations like Big-O, Big-Theta, and Big-Omega for classifying algorithm efficiency. Additionally, it outlines methods for analyzing both non-recursive and recursive algorithms, including worked examples.

INTRODUCTION

Algorithm Analysis
Analysis of algorithms
• How good is the algorithm?
– time efficiency
– space efficiency

• Does there exist a better algorithm?


– lower bounds
– optimality

2
Analysis of algorithms
• Issues:
– time efficiency
– space efficiency

• Approaches:
– theoretical analysis
– empirical analysis

3
Theoretical analysis of time efficiency
Time efficiency is analyzed by determining the number of
repetitions of the basic operation as a function of input
size

• Basic operation: the operation that contributes the most towards the running time of the algorithm
T(n) ≈ cop C(n)

where T(n) is the running time, cop is the execution time (cost) of the basic operation, and C(n) is the number of times the basic operation is executed

4
Empirical analysis of time efficiency
• Select a specific (typical) sample of inputs

• Use physical unit of time (e.g., milliseconds), or count actual number of basic operation’s executions

• Analyze the empirical data

5
Best-case, average-case, worst-case

For some algorithms, efficiency depends on form of input:

• Worst case: Cworst(n) – maximum over inputs of size n

• Best case: Cbest(n) – minimum over inputs of size n

• Average case: Cavg(n) – “average” over inputs of size n

6
Example: Sequential search

• Worst case: n key comparisons

• Best case: 1 comparison

• Average case: (n+1)/2 comparisons, assuming K is in A

7
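As an illustrative sketch (not from the slides), sequential search can be written to report its own key-comparison count, matching the best/worst/average figures above:

```python
def sequential_search(A, K):
    """Return (index of K in A, or -1) and the number of key comparisons."""
    comparisons = 0
    for i, item in enumerate(A):
        comparisons += 1
        if item == K:
            return i, comparisons
    return -1, comparisons

# Best case: K is the first element -> 1 comparison.
# Worst case: K is absent (or last) -> n comparisons.
```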
Order of growth
• Most important: Order of growth within a constant multiple
as n→∞

• Examples:
– How much faster will the algorithm run on a computer that is twice as fast?

– How much longer does it take to solve a problem of double the input size?

8
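The doubling question can be explored numerically. A small sketch (helper names are mine, not from the slides) samples f(2n)/f(n) for a few growth functions:

```python
import math

def doubling_ratio(f, n=1000):
    """How much more work a doubled input costs: f(2n) / f(n)."""
    return f(2 * n) / f(n)

# Linear work doubles, quadratic quadruples, cubic grows 8x,
# while logarithmic work barely changes.
ratios = {
    "log n": doubling_ratio(math.log2),
    "n": doubling_ratio(lambda n: n),
    "n^2": doubling_ratio(lambda n: n ** 2),
    "n^3": doubling_ratio(lambda n: n ** 3),
}
```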
Values of some important functions as n → ∞

9
Asymptotic order of growth
A way of comparing functions that ignores constant factors and
small input sizes (because?)

• O(g(n)): class of functions f(n) that grow no faster than g(n)

• Θ(g(n)): class of functions f(n) that grow at same rate as g(n)

• Ω(g(n)): class of functions f(n) that grow at least as fast as g(n)

10
Big-oh

11
Big-omega

12
Big-theta

13
Establishing order of growth using the
definition
Definition: f(n) is in O(g(n)), denoted f(n) ∈ O(g(n)), if order
of growth of f(n) ≤ order of growth of g(n) (within
constant multiple), i.e., there exist positive constant c
and non-negative integer n0 such that
f(n) ≤ c g(n) for every n ≥ n0

Examples:
• 10n is in O(n2) (though O(n) is a tighter, more accurate bound)

• 5n+20 is in O(n)

14
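The definition can be spot-checked numerically. This sketch (a finite check over a range of n, not a proof; the witness constants are my choices) verifies c and n0 for the two examples:

```python
def witnesses_hold(f, g, c, n0, upto=10_000):
    """Finite spot-check of the big-O definition: f(n) <= c*g(n)
    for every n0 <= n <= upto."""
    return all(f(n) <= c * g(n) for n in range(n0, upto + 1))

# 10n in O(n^2): c = 10, n0 = 1 works, since 10n <= 10n^2 for n >= 1.
in_O_n2 = witnesses_hold(lambda n: 10 * n, lambda n: n ** 2, c=10, n0=1)
# 5n+20 in O(n): c = 25, n0 = 1 works, since 5n + 20 <= 25n once n >= 1.
in_O_n = witnesses_hold(lambda n: 5 * n + 20, lambda n: n, c=25, n0=1)
```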
Ω-notation
• Formal definition
– A function t(n) is said to be in Ω(g(n)), denoted t(n)
∈ Ω(g(n)), if t(n) is bounded below by some
constant multiple of g(n) for all large n, i.e., if there
exist some positive constant c and some
nonnegative integer n0 such that
t(n) ≥ cg(n) for all n ≥ n0

• Exercises: prove the following using the above definition
– 10n2 ∈ Ω(n2)
– 0.3n2 - 2n ∈ Ω(n2)
– 0.1n3 ∈ Ω(n2)
15
Θ-notation
• Formal definition
– A function t(n) is said to be in Θ(g(n)), denoted t(n)
∈ Θ(g(n)), if t(n) is bounded both above and below
by some positive constant multiples of g(n) for all
large n, i.e., if there exist some positive constants c1
and c2 and some nonnegative integer n0 such that
c2 g(n) ≤ t(n) ≤ c1 g(n) for all n ≥ n0

• Exercises: prove the following using the above definition
– 10n2 ∈ Θ(n2)
– 0.3n2 - 2n ∈ Θ(n2)
– (1/2)n(n+1) ∈ Θ(n2)

16
Summary, for a given g(n):

• Ω(g(n)): functions that grow at least as fast as g(n) (≥)

• Θ(g(n)): functions that grow at the same rate as g(n) (=)

• O(g(n)): functions that grow no faster than g(n) (≤)

17
Theorem
• If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then
t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}).
– The analogous assertions are true for the Ω-notation and Θ-
notation.

• Implication: The algorithm’s overall efficiency will be determined by the part with a larger order of growth, i.e., its least efficient part.
– For example, 5n2 + 3nlogn ∈ O(n2)

Proof. There exist constants c1, c2, n1, n2 such that
t1(n) ≤ c1*g1(n), for all n ≥ n1
t2(n) ≤ c2*g2(n), for all n ≥ n2
Define c3 = c1 + c2 and n3 = max{n1,n2}. Then, for all n ≥ n3,
t1(n) + t2(n) ≤ c1*g1(n) + c2*g2(n) ≤ (c1 + c2)*max{g1(n), g2(n)} = c3*max{g1(n), g2(n)}

18
Some properties of asymptotic order of growth

• f(n) ∈ O(f(n))

• f(n) ∈ O(g(n)) iff g(n) ∈Ω(f(n))

• If f (n) ∈ O(g (n)) and g(n) ∈ O(h(n)) , then f(n) ∈ O(h(n))

Note similarity with a ≤ b

• If f1(n) ∈ O(g1(n)) and f2(n) ∈ O(g2(n)), then f1(n) + f2(n) ∈ O(max{g1(n), g2(n)})

Also, Σ1≤i≤n Θ(f(i)) = Θ (Σ1≤i≤n f(i))

19
Establishing order of growth using limits

limn→∞ T(n)/g(n):

= 0      order of growth of T(n) < order of growth of g(n)

= c > 0  order of growth of T(n) = order of growth of g(n)

= ∞      order of growth of T(n) > order of growth of g(n)

Examples:
• 10n vs. n2

• n(n+1)/2 vs. n2

20
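A numerical sketch of the limit test for the two examples (sampling the ratio at growing n; the helper name is mine):

```python
def ratio_samples(T, g, ns=(10, 100, 1000, 10_000)):
    """Sample T(n)/g(n) at growing n to suggest the limit's behavior."""
    return [T(n) / g(n) for n in ns]

# 10n vs n^2: 10n/n^2 = 10/n -> 0, so 10n grows slower than n^2.
r1 = ratio_samples(lambda n: 10 * n, lambda n: n ** 2)
# n(n+1)/2 vs n^2: the ratio -> 1/2 > 0, so both grow at the same rate.
r2 = ratio_samples(lambda n: n * (n + 1) / 2, lambda n: n ** 2)
```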
L’Hôpital’s rule and Stirling’s formula

L’Hôpital’s rule: If limn→∞ f(n) = limn→∞ g(n) = ∞ and the derivatives f´, g´ exist, then

limn→∞ f(n)/g(n) = limn→∞ f´(n)/g´(n)

Example: log n vs. n

Stirling’s formula: n! ≈ (2πn)1/2 (n/e)n

21
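A quick numerical check of Stirling’s formula (a sketch; the helper name is mine): the ratio n!/approximation approaches 1 as n grows.

```python
import math

def stirling(n):
    """Stirling's approximation: (2*pi*n)^(1/2) * (n/e)^n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

# Both ratios sit slightly above 1, and the relative error shrinks with n.
ratio_10 = math.factorial(10) / stirling(10)
ratio_50 = math.factorial(50) / stirling(50)
```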
Basic asymptotic efficiency classes
1 constant

log n logarithmic

n linear

n log n n-log-n (linearithmic)

n2 quadratic

n3 cubic

2n exponential

n! factorial
22
Time efficiency of nonrecursive algorithms

General Plan for Analysis

• Decide on parameter n indicating input size

• Identify algorithm’s basic operation

• Determine worst, average, and best cases for input of size n

• Set up a sum for the number of times the basic operation is executed

• Simplify the sum using standard formulas and rules

23
Useful summation formulas and rules

Σl≤i≤n 1 = 1+1+…+1 = n - l + 1
In particular, Σ1≤i≤n 1 = n - 1 + 1 = n ∈ Θ(n)

Σ1≤i≤n i = 1+2+…+n = n(n+1)/2 ≈ n2/2 ∈ Θ(n2)

Σ1≤i≤n i2 = 12+22+…+n2 = n(n+1)(2n+1)/6 ≈ n3/3 ∈ Θ(n3)

Σ0≤i≤n ai = 1 + a +…+ an = (an+1 - 1)/(a - 1) for any a ≠ 1
In particular, Σ0≤i≤n 2i = 20 + 21 +…+ 2n = 2n+1 - 1 ∈ Θ(2n)

Σ(ai ± bi) = Σai ± Σbi    Σcai = cΣai    Σl≤i≤u ai = Σl≤i≤m ai + Σm+1≤i≤u ai
24
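The formulas above can be spot-checked directly (a sketch for one value of n, not a proof):

```python
n, a = 100, 3
# Closed forms for the standard sums: n, n(n+1)/2, n(n+1)(2n+1)/6.
assert sum(1 for i in range(1, n + 1)) == n
assert sum(i for i in range(1, n + 1)) == n * (n + 1) // 2
assert sum(i * i for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
# Geometric series (a != 1), and the a = 2 special case.
assert sum(a ** i for i in range(0, n + 1)) == (a ** (n + 1) - 1) // (a - 1)
assert sum(2 ** i for i in range(0, n + 1)) == 2 ** (n + 1) - 1
```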
Example 1: Maximum element

T(n) = Σ1≤i≤n-1 1 = n-1 ∈ Θ(n) comparisons

25
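The slide’s algorithm (the standard single scan for the maximum; the pseudocode figure is missing here) can be sketched with its comparison count made explicit:

```python
def max_element(A):
    """Largest element of a nonempty list, plus the number of element
    comparisons (always n - 1 for the standard scan)."""
    maxval = A[0]
    comparisons = 0
    for i in range(1, len(A)):
        comparisons += 1
        if A[i] > maxval:
            maxval = A[i]
    return maxval, comparisons
```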
Example 2: Element uniqueness problem

T(n) = Σ0≤i≤n-2 (Σi+1≤j≤n-1 1)
     = Σ0≤i≤n-2 (n-i-1) = (n-1)n/2
     ∈ Θ(n2) comparisons

26
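A sketch of the brute-force uniqueness check (the pseudocode figure is missing here), again counting comparisons; the worst case, all elements distinct, performs n(n-1)/2 of them:

```python
def unique_elements(A):
    """True if all elements are distinct, plus the comparison count.
    Worst case (all distinct): n(n-1)/2 comparisons."""
    comparisons = 0
    for i in range(len(A) - 1):
        for j in range(i + 1, len(A)):
            comparisons += 1
            if A[i] == A[j]:
                return False, comparisons
    return True, comparisons
```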
Example 3: Matrix multiplication

T(n) = Σ0≤i≤n-1 Σ0≤j≤n-1 n
     = Σ0≤i≤n-1 n2
     = n3 ∈ Θ(n3) multiplications

27
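The textbook triple-loop product can be sketched with its multiplication counter: n multiplications per entry, n2 entries, hence n3 in total.

```python
def matrix_multiply(A, B):
    """Textbook n x n matrix product, counting scalar multiplications."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    mults = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                mults += 1
    return C, mults
```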
Example 5: Counting binary digits

28
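The slide’s algorithm (the figure is missing here) counts the binary digits of n by repeated halving; a sketch, with the loop executing ⌊log2 n⌋ + 1 times:

```python
def bit_count(n):
    """Number of binary digits of a positive integer n, via repeated halving."""
    count = 0
    while n >= 1:
        count += 1
        n //= 2
    return count
```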
Plan for Analysis of Recursive Algorithms

• Decide on a parameter indicating an input’s size.

• Identify the algorithm’s basic operation.

• Check whether the number of times the basic op. is executed may vary on different inputs of the same size. (If it may, the worst, average, and best cases must be investigated separately.)

• Set up a recurrence relation with an appropriate initial condition expressing the number of times the basic op. is executed.

• Solve the recurrence (or, at the very least, establish its solution’s order of growth) by backward substitutions or another method.
29
Example 1: Recursive evaluation of n!

Definition: n! = 1 ∗ 2 ∗ … ∗ (n-1) ∗ n for n ≥ 1, and 0! = 1

Recursive definition of n!: F(n) = F(n-1) ∗ n for n ≥ 1, and F(0) = 1

Size: n
Basic operation: multiplication
Recurrence relation: M(n) = M(n-1) + 1, M(0) = 0
30
Solving the recurrence for M(n)

M(n) = M(n-1) + 1, M(0) = 0


M(n) = M(n-1) + 1
= (M(n-2) + 1) + 1 = M(n-2) + 2
= (M(n-3) + 1) + 2 = M(n-3) + 3

= M(n-i) + i
= M(0) + n
=n
The method is called backward substitution.

31
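The solution M(n) = n can be checked against a direct implementation (a sketch; returning the value paired with its multiplication count is my device, not the slides’):

```python
def factorial(n):
    """Recursive n!, returning (value, multiplication count).
    The count obeys M(n) = M(n-1) + 1, M(0) = 0, so M(n) = n."""
    if n == 0:
        return 1, 0
    value, mults = factorial(n - 1)
    return value * n, mults + 1
```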
Example 2: The Tower of Hanoi Puzzle

Recurrence for number of moves:

M(n) = 2M(n-1) + 1

32
Solving recurrence for number of moves

M(n) = 2M(n-1) + 1, M(1) = 1


M(n) = 2M(n-1) + 1
= 2(2M(n-2) + 1) + 1 = 2^2*M(n-2) + 2^1 + 2^0
= 2^2*(2M(n-3) + 1) + 2^1 + 2^0
= 2^3*M(n-3) + 2^2 + 2^1 + 2^0
=…
= 2^(n-1)*M(1) + 2^(n-2) + … + 2^1 + 2^0
= 2^(n-1) + 2^(n-2) + … + 2^1 + 2^0
= 2^n -1

33
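The closed form M(n) = 2^n - 1 can be confirmed by generating the moves (a sketch; the peg names are mine):

```python
def hanoi(n, src="A", dst="C", aux="B", moves=None):
    """Return the move list for n disks; its length satisfies
    M(n) = 2*M(n-1) + 1, i.e., M(n) = 2^n - 1."""
    if moves is None:
        moves = []
    if n >= 1:
        hanoi(n - 1, src, aux, dst, moves)   # move n-1 disks out of the way
        moves.append((src, dst))             # move the largest disk
        hanoi(n - 1, aux, dst, src, moves)   # move n-1 disks on top
    return moves
```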