Basic Concepts in Algorithmic Analysis


CSE 353: DESIGN AND ANALYSIS OF ALGORITHMS

Chapter 1: Basic Concepts in Algorithmic Analysis

Dr. Muhammad Akhlaq
Assistant Professor, University of Hafr Al-Batin (UHB), College of Computer Science and Engineering (CCSE)
Hafr Al-Batin 39524, Saudi Arabia. Email: akhlaq[at]uhb.edu.sa

Slides adapted from the textbook: M. Alsuwaiyel, Algorithms: Design Techniques and Analysis (revised edition), World Scientific Publishing Co Pte Ltd, 2016.
Course Goals

To provide the students with the following:
- The fundamentals of algorithms and algorithmic techniques,
- The ability to decide on the suitability of a specific technique for a given problem,
- The ability to analyze the complexity of a given algorithm,
- The ability to design efficient algorithms for new situations, using as building blocks the techniques learned, and
- An introduction to the concept of NP-complete problems.
Course Contents

- Basic Concepts in Algorithmic Analysis, Chapter 1
- Mathematical Preliminaries, Chapter 2
- Review of Data Structures, Chapter 3
- Advanced Data Structures, Chapter 4
- Induction, Chapter 5
- Divide and Conquer, Chapter 6
- Dynamic Programming, Chapter 7
- Greedy Algorithms, Chapter 8
- Graph Traversal, Chapter 9
- NP-Complete Problems, Chapter 10
OUTLINE

1. What is an algorithm?
2. Linear Search Algorithm
3. Binary Search
4. Merging Two Sorted Lists
5. Selection Sort
6. Insertion Sort
7. Bottom-Up Merge Sort
8. Time Complexity
9. Space Complexity
10. Estimating the Running Time of an Algorithm
11. Recurrence Relations
12. Average Case Analysis
13. Amortized Analysis
14. Some Useful Formulas
1. What is an algorithm?

An algorithm is defined as a finite set of steps, each of which may require one or more operations and which, if carried out on a set of inputs, will produce one or more outputs after a finite amount of time.

- Examples of algorithms:
  - Computing the least common multiple of a pair of integers.
  - Finding the shortest path between a pair of nodes in a finite graph.
- Examples of computations that are not algorithms:
  - Halting Problem: Given a description of a Turing machine and its initial input, determine whether the program, when executed on this input, ever halts (completes).
  - State Entry Problem: Given a Turing machine, its input, and a state q, determine whether it will ever enter state q.
Properties of Algorithms

- Definiteness: It must be clear what should be done.
- Effectiveness: Each step must be such that it can, at least in principle, be carried out by a person using pencil and paper in a finite amount of time, e.g., integer arithmetic.
- Input/Output: An algorithm produces one or more outputs and may have zero or more externally supplied inputs.
- Finiteness: Algorithms should terminate after a finite number of operations.
Our Objective

- Find the most efficient algorithm for solving a particular problem.
- In order to achieve this objective, we need to determine:
  - How can we find such an algorithm?
  - What does it mean for an algorithm to be efficient?
  - How can one tell that an algorithm is more efficient than another?
- In Chapter 1, we'll answer the above questions based on some easy-to-understand searching and sorting algorithms that we may have seen earlier.
Searching Problem

- Assume A is an array with n elements A[1], A[2], ..., A[n]. For a given element x, we must determine whether there is an index j, 1 ≤ j ≤ n, such that x = A[j].
- Two algorithms, among others, address this problem:
  - Linear Search
  - Binary Search
2. Linear Search Algorithm

Algorithm: LINEARSEARCH
Input: array A[1..n] of n elements and an element x.
Output: j if x = A[j], 1 ≤ j ≤ n, and 0 otherwise.

1. j ← 1
2. while (j < n) and (x ≠ A[j])    {repeat until x is found or the end is reached}
3.     j ← j + 1
4. end while
5. if x = A[j] then return j else return 0
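For concreteness, here is a direct Java rendering of this pseudocode (a minimal sketch, not from the textbook; it uses 0-based indexing, returns -1 instead of 0 for "not found", and, like the pseudocode, assumes a non-empty array):

    // Sketch of LINEARSEARCH in Java (0-based; assumes a.length >= 1).
    static int linearSearch(int[] a, int x) {
        int j = 0;
        while (j < a.length - 1 && x != a[j])
            j++;                          // scan until x is found or the last index is reached
        return (x == a[j]) ? j : -1;      // final check mirrors step 5 of the pseudocode
    }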
Analyzing Linear Search

- One way to measure efficiency is to count how many statements get executed before the algorithm terminates.
- One should keep an eye, though, on statements that are executed "repeatedly".
- What will be the number of "element" comparisons if x
  - first appears in the first element of A?     1
  - first appears in the middle element of A?    n/2
  - first appears in the last element of A?      n
  - doesn't appear in A?                         n
- So, the complexity of the LINEARSEARCH algorithm is O(n).

Element comparisons: comparisons involving objects in the input data only (excluding those needed for the implementation of loops, etc.).
3. Binary Search

- We can do "better" than linear search if we know that the elements of A are sorted, say in non-decreasing order.
- The idea is to compare x to the middle element of A, say A[middle].
  - If x < A[middle], then x cannot be an element of A[middle+1], A[middle+2], ..., A[n]. Why?
  - If x > A[middle], then x cannot be an element of A[1], A[2], ..., A[middle-1]. Why?
Binary Search Algorithm

Algorithm: BINARYSEARCH
Input: An array A[1..n] of n elements sorted in nondecreasing order and an element x.
Output: j if x = A[j], 1 ≤ j ≤ n, and 0 otherwise.

1. low ← 1; high ← n; j ← 0    {j is the index of the element sought; 0 means not found}
2. while (low ≤ high) and (j = 0)
3.     mid ← ⌊(low + high)/2⌋
4.     if x = A[mid] then j ← mid
5.     else if x < A[mid] then high ← mid - 1
6.     else low ← mid + 1
7. end while
8. return j
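A Java sketch of the same loop (again an assumption-laden rendering, not the textbook's code: 0-based indices, -1 for "not found"):

    // Sketch of BINARYSEARCH in Java (a must be sorted in nondecreasing order).
    static int binarySearch(int[] a, int x) {
        int low = 0, high = a.length - 1;
        while (low <= high) {
            int mid = (low + high) >>> 1;        // unsigned shift avoids overflow of low + high
            if (x == a[mid]) return mid;         // found: report the index
            else if (x < a[mid]) high = mid - 1; // discard the upper half
            else low = mid + 1;                  // discard the lower half
        }
        return -1;                               // not found
    }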
Worst Case Analysis of Binary Search

- What to do: Find the maximum number of element comparisons.
- How to do it:
  - The number of "element" comparisons is equal to the number of iterations of the while loop in steps 2-7. HOW?
  - How many elements of the input do we have in the
    - first iteration?    n
    - second iteration?   n/2
    - third iteration?    n/4
    - jth iteration?      ⌊n / 2^(j-1)⌋
  - The last iteration occurs when the size of the input we have is 1.
Theorem

- The number of comparisons performed by Algorithm BINARYSEARCH on a sorted array of size n is at most ⌊log n⌋ + 1.
- Alternatively, if the BINARYSEARCH algorithm is described in terms of a decision (binary search) tree of height ⌊log n⌋, then the maximum number of comparisons is height + 1, i.e., ⌊log n⌋ + 1.
- So, what is the complexity? O(log n)
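A short derivation of the bound, using the per-iteration sizes from the previous slide:

    % The j-th iteration sees floor(n / 2^(j-1)) elements; the loop can run
    % only while this count is at least 1, i.e. while 2^(j-1) <= n:
    \left\lfloor \frac{n}{2^{\,j-1}} \right\rfloor \ge 1
    \iff j - 1 \le \log n
    \iff j \le \lfloor \log n \rfloor + 1 .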
4. Merging Two Sorted Lists

- Problem description: Given two lists (arrays) that are sorted in non-decreasing order, we need to merge them into one list sorted in non-decreasing order.
- Example:

    Input:   3 7 9 12         1 2 4 13 14 15
             (n1 elements)    (n2 elements)
    Output:  1 2 3 4 7 9 12 13 14 15    (n = n1 + n2 elements)
Algorithm MERGE

Algorithm: MERGE
Input: An array A[1..m] of elements and three indices p, q, and r, with 1 ≤ p ≤ q < r ≤ m, such that both the subarrays A[p..q] and A[q+1..r] are sorted individually in nondecreasing order.
Output: A[p..r] contains the result of merging the two subarrays A[p..q] and A[q+1..r].
Comment: B[p..r] is an auxiliary (temporary) array.
Algorithm MERGE (Cont.)

1.  s ← p; t ← q + 1; k ← p
2.  while s ≤ q and t ≤ r
3.      if A[s] ≤ A[t] then
4.          B[k] ← A[s]
5.          s ← s + 1
6.      else
7.          B[k] ← A[t]
8.          t ← t + 1
9.      end if
10.     k ← k + 1
11. end while
12. if s = q + 1 then B[k..r] ← A[t..r]
13. else B[k..r] ← A[s..q]
14. end if
15. A[p..r] ← B[p..r]
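A Java sketch of MERGE (hypothetical helper, 0-based inclusive indices; the array b plays the role of the auxiliary array B):

    // Sketch of MERGE in Java: merges sorted runs a[p..q] and a[q+1..r] in place.
    static void merge(int[] a, int p, int q, int r) {
        int[] b = new int[r - p + 1];              // auxiliary array B
        int s = p, t = q + 1, k = 0;
        while (s <= q && t <= r)                   // take the smaller head element each time
            b[k++] = (a[s] <= a[t]) ? a[s++] : a[t++];
        while (s <= q) b[k++] = a[s++];            // copy whatever remains of the left run
        while (t <= r) b[k++] = a[t++];            // ... or of the right run
        System.arraycopy(b, 0, a, p, b.length);    // copy B back into A[p..r]
    }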
Analyzing MERGE

- Assume the two sorted subarrays are A[p..q] and A[q+1..r], of sizes n1 and n2 with n = n1 + n2 and n1 ≤ n2.
- The least number of comparisons is n1, which occurs, e.g., when every element of the smaller subarray is less than or equal to every element of the other.
- The most comparisons is n - 1, which occurs, e.g., when the subarrays have sizes ⌊n/2⌋ and ⌈n/2⌉ and the two largest elements lie in different subarrays.
- The number of element assignments (involving input data) performed is 2n (note that the algorithm copies A[] to B[], then B[] back to A[]).
- So, what is the complexity? O(n)

The number of comparisons by MERGE is at least n1 and at most n - 1 (where n1 is the size of the smaller input array and n is the size of the output array).
5. Selection Sort

- First, find the minimum element and store it in A[1].
- Next, find the minimum of the remaining n - 1 elements and store it in A[2].
- We continue this way until the second largest element is stored in A[n-1] and the largest in A[n].
Selection Sort

Algorithm: SELECTIONSORT
Input: An array A[1..n] of n elements.
Output: A[1..n] sorted in nondecreasing order.

1. for i ← 1 to n - 1
2.     k ← i
3.     for j ← i + 1 to n    {find the index of the ith smallest element}
4.         if A[j] < A[k] then k ← j
5.     end for
6.     if k ≠ i then interchange A[i] and A[k]
7. end for
Selection Sort Example

    i   k   A[1..5]
            5 2 9 8 4    (initial)
    1   2   2 5 9 8 4
    2   5   2 4 9 8 5
    3   5   2 4 5 8 9
    4   4   2 4 5 8 9
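For reference, a Java sketch of SELECTIONSORT matching the pseudocode (0-based indices; the method name is ours, not the book's):

    // Sketch of SELECTIONSORT in Java.
    static void selectionSort(int[] a) {
        int n = a.length;
        for (int i = 0; i < n - 1; i++) {
            int k = i;                     // index of the minimum of a[i..n-1]
            for (int j = i + 1; j < n; j++)
                if (a[j] < a[k]) k = j;
            if (k != i) {                  // interchange a[i] and a[k]
                int tmp = a[i]; a[i] = a[k]; a[k] = tmp;
            }
        }
    }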
Analyzing Selection Sort

- We need to find the number of comparisons carried out in line 4:
  - For each iteration i of the outer for loop, line 4 is executed n - i times. In total, line 4 is executed (n-1) + (n-2) + ... + 1 = n(n-1)/2 times.
- The number of element interchanges (swaps):
  - Minimum: 0
  - Maximum: n - 1
- The number of element assignments is 3 times the number of element interchanges, i.e., between 0 and 3(n-1).
- So, what is the complexity? O(n²)
6. Insertion Sort

Assume the left part (initially one element) is sorted and the right part is unsorted. Take elements one by one from the right part and INSERT each at its correct position in the left part.

- We begin with the subarray of size 1, A[1], which is already sorted.
- Next, A[2] is inserted before or after A[1], depending on whether it is smaller than A[1] or not.
- Continuing this way, in the ith iteration, A[i] is inserted in its proper position in the sorted subarray A[1..i-1].
Insertion Sort

Algorithm: INSERTIONSORT
Input: An array A[1..n] of n elements.
Output: A[1..n] sorted in nondecreasing order.

1. for i ← 2 to n
2.     x ← A[i]
3.     j ← i - 1
4.     while (j > 0) and (A[j] > x)
5.         A[j + 1] ← A[j]    {shift elements right to make space for x}
6.         j ← j - 1
7.     end while
8.     A[j + 1] ← x           {insert x in its proper position}
9. end for
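The same algorithm as a Java sketch (0-based indexing; a minimal rendering under the same assumptions as the earlier sketches):

    // Sketch of INSERTIONSORT in Java.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int x = a[i];                  // element to insert into the sorted prefix
            int j = i - 1;
            while (j >= 0 && a[j] > x) {
                a[j + 1] = a[j];           // shift larger elements one slot right
                j--;
            }
            a[j + 1] = x;                  // drop x into the gap
        }
    }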
Insertion Sort Example

    x = 2:   5 2 9 8 4
    x = 9:   2 5 9 8 4
    x = 8:   2 5 9 8 4
    x = 4:   2 5 8 9 4
    result:  2 4 5 8 9
Analyzing Insertion Sort

- The minimum number of element comparisons is n - 1, which occurs when the array is already sorted.
- The maximum number of element comparisons is n(n-1)/2, which occurs when the array is already sorted in decreasing order and all elements are distinct.
- The number of element assignments is between n - 1 and n(n-1)/2, plus n - 1.
- So, what is the complexity? O(n²)
7. Bottom-Up Merge Sort

Informally, the algorithm does the following:
1. Divide the array into pairs of elements (with possibly a single element left over in case the number of elements is odd).
2. Merge each pair in non-decreasing order (with possibly a single "pair" left).
3. Repeat step 2 until there is only one "pair" left.
Bottom-Up Merge Sort Example

    Initial:        5 2 9 8 4 12 7 1 3 6 10
    After pass 1:   2 5 | 8 9 | 4 12 | 1 7 | 3 6 | 10
    After pass 2:   2 5 8 9 | 1 4 7 12 | 3 6 10
    After pass 3:   1 2 4 5 7 8 9 12 | 3 6 10
    After pass 4:   1 2 3 4 5 6 7 8 9 10 12
Algorithm BOTTOMUPSORT

Algorithm: BOTTOMUPSORT
Input: An array A[1..n] of n elements.
Output: A[1..n] sorted in nondecreasing order.

1. t ← 1
2. while t < n
3.     s ← t; t ← 2s; i ← 0
4.     while i + t ≤ n
5.         MERGE(A, i + 1, i + s, i + t)    {MERGE takes the array, left, middle, and right indices}
6.         i ← i + t
7.     end while
8.     if i + s < n then MERGE(A, i + 1, i + s, n)    {merge the remaining short run, if any}
9. end while
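A Java sketch of BOTTOMUPSORT (0-based; it reuses the hypothetical merge() helper sketched after Algorithm MERGE):

    // Sketch of BOTTOMUPSORT in Java. In each pass, sorted runs of length s
    // are merged into runs of length 2s until a single run remains.
    static void bottomUpSort(int[] a) {
        int n = a.length;
        for (int s = 1; s < n; s *= 2) {           // s = run length in this pass
            int i = 0;
            while (i + 2 * s <= n) {               // merge full pairs of runs
                merge(a, i, i + s - 1, i + 2 * s - 1);
                i += 2 * s;
            }
            if (i + s < n)                         // a leftover shorter run at the end
                merge(a, i, i + s - 1, n - 1);
        }
    }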
Analyzing Algorithm BOTTOMUPSORT

- Assume that the size of the array, n, is a power of 2.
- In the first iteration, we have n/2 pairs that are merged using n/2 element comparisons.
- In the second iteration, we have n/4 pairs of 2-element sequences that are merged using between n/2 and 3n/4 element comparisons.
- ...
- In the jth iteration, we have n/2^j pairs that are merged using between (n/2^j)·2^(j-1) and (n/2^j)·(2^j - 1) element comparisons.
- The outer while loop is executed log n times.
- Therefore? O(n log n)
- What about the number of element assignments? 2n log n
8. Time Complexity

- One way of measuring the performance of an algorithm is how fast it executes. The question is how to measure this "time".
- Is having a digital stopwatch suitable?
- E.g., assume that an element comparison takes 10^-6 sec on a machine. Then, for n = 2^7 and n = 2^20:
  - BOTTOMUPSORT: n log n - n + 1 comparisons → 0.0008 sec and 20 sec
  - SELECTIONSORT: n(n-1)/2 comparisons → 0.008 sec and 6.4 days
- So, time is an extremely precious resource to be investigated in the analysis of algorithms.
Order of Growth

- As measuring time is subject to many factors, we look for a more "objective" measure, i.e., the number of operations.
- Since counting the exact number of operations is cumbersome, sometimes impossible, we can always focus our attention on asymptotic analysis, where constants and lower-order terms are ignored.
  - E.g., n³, 1000n³, and 10n³ + 10000n² + 5n - 1 are all "the same".
  - Reason: we are always interested in comparing different algorithms for arbitrarily large inputs.

Asymptote: a straight line that is the limiting value of a curve; can be considered as a tangent at infinity.
Order of Growth (Cont.)

    Function's form                  Named as        Example(s)
    c                                Constant        Odd/even test
    log^k n                          Logarithmic     Binary search
    cn                               Linear          Linear search, max
    cn²                              Quadratic       Bubble sort
    cn³                              Cubic           3-variable equation solver
    n^c, n^c log^k n (0 < c < 1)     Sublinear
    n log n, n^1.5                   Sub-quadratic   Merge sort
    cn^x                             Polynomial
    a^n                              Exponential     Find all subsets
    n!                               Factorial       All permutations of a string
    n^n                              ... explosive?
Example

[Figure: growth rates of some functions, plotted for smaller input sizes (left) and for the same functions at larger input sizes (right).]
Running Times for Different Sizes of Inputs of Different Functions

[Table: running times of the above functions for various input sizes, assuming that each operation takes one nanosecond.]
Different Asymptotic Notations (1/3)

'O' Big-oh notation (UPPER BOUND):
O(g(n)) = { f(n) : there exist constants C and n0 such that f(n) ≤ C·g(n) for n ≥ n0 }

For example:
    f(n) = 5n² + 2n + 1 ∈ O(n²)
    g(n) = n²
    C = 8 (= 5 + 2 + 1), n0 = 1:  f(n) ≤ 8·g(n) for n ≥ 1

Meaning: For all data sets big enough (i.e., n > n0), the algorithm always executes in less than or equal to C·g(n) steps in any [best, average, worst] case.
Different Asymptotic Notations (2/3)

'Ω' Omega notation (LOWER BOUND):
Ω(g(n)) = { f(n) : there exist constants C and n0 such that C·g(n) ≤ f(n) for n ≥ n0 }

For example:
    f(n) = 5n² + 2n + 1 ∈ Ω(n²)
    g(n) = n²
    C = 5, n0 = 0:  5n² ≤ f(n) for n ≥ 0

Meaning: For all data sets big enough (i.e., n > n0), the algorithm always executes in more than or equal to C·g(n) steps.
Different Asymptotic Notations (3/3)

'Θ' Theta notation (TIGHT BOUND):
Θ(g(n)) = { f(n) : there exist constants C1, C2, and n0 such that C1·g(n) ≤ f(n) ≤ C2·g(n) for n ≥ n0 }

For example:
    f(n) = 5n² + 2n + 1 ∈ Θ(n²)
    g(n) = n²
    C1 = 5, C2 = 8, n0 = 1:  5n² ≤ f(n) ≤ 8n² for n ≥ 1

Meaning: For all data sets big enough (i.e., n > n0), the algorithm always executes in more than or equal to C1·g(n) steps but less than or equal to C2·g(n) steps in any [best, average, worst] case.
Complexity Classes and small-oh (o())

- Using Θ() notation, one can divide the functions into different equivalence classes, where f(n) and g(n) belong to the same equivalence class if f(n) = Θ(g(n)). For example, all polynomials of degree 2 belong to the same complexity class n². To show that two functions belong to different equivalence classes, the small-oh notation has been introduced.
- Definition: Let f(n) and g(n) be two functions from the set of natural numbers to the set of non-negative real numbers. f(n) is said to be in o(g(n)) if for every constant c > 0, there is a positive integer n0 such that f(n) < c·g(n) for all n ≥ n0.
Very Useful Simplifying Rule

Let f(n) and g(n) be two functions from the set of natural numbers to the set of non-negative real numbers such that

    L = lim_{n→∞} f(n)/g(n)    exists, with 0 ≤ L ≤ ∞.

Then:
1. If L = c for some c ∈ R+, then f(n) = Θ(g(n)).
2. If L ≤ c for some constant c (L can be 0), then f(n) = O(g(n)).
3. If L = 0, then f(n) = O(g(n)) but g(n) ≠ O(f(n)), i.e., f(n) = o(g(n)).
4. If L ≥ c for some c > 0 (L can be ∞), then f(n) = Ω(g(n)).
5. If L = ∞, then f(n) = Ω(g(n)) but g(n) ≠ Ω(f(n)).
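As a worked example (not on the original slide), the rule settles n log n versus n² at a glance:

    L = \lim_{n\to\infty} \frac{n\log n}{n^{2}}
      = \lim_{n\to\infty} \frac{\log n}{n} = 0
    \;\Longrightarrow\; n\log n = O(n^{2}) \text{ but } n^{2} \neq O(n\log n).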
Rules for Using big-O

For large values of input n, the constants and terms with lower degree of n are ignored.

1. Multiplicative Constants Rule: ignoring constant factors.
   O(c·f(n)) = O(f(n)), where c is a constant.
   Example: O(20n³) = O(n³)

2. Addition Rule: ignoring smaller terms.
   If O(f(n)) < O(h(n)), then O(f(n) + h(n)) = O(h(n)).
   Example: O(n² log n + n³) = O(n³)
            O(2000n³ + 2n! + n^800 + 10^n + 27n log n + 5) = O(n!)

3. Multiplication Rule: O(f(n) · h(n)) = O(f(n)) · O(h(n))
   Example: O((n³ + 2n² + 3n log n + 7)(8n² + 5n + 2)) = O(n⁵)
Exercise

The same f(n) can simultaneously be O, Θ, and Ω of different functions:
- n log n = O(n²)
- n log n = Θ(n log n)
- n log n = Ω(n)
Likewise:
- O(n²) ⊆ O(n³)
- Ω(n³) ⊆ Ω(n²)

Which one is the better algorithm?
- n³ + 2n²            vs  100n² + 1000
- n^0.1               vs  log n
- n + 100n^0.1        vs  2n + 10 log n
- 5n⁵                 vs  n!
- n^(-15)·2^(n/100)   vs  1000n^15
- 8^(2 log n)         vs  3n⁷ + 7n

(The original slide marks the better one of each pair in green: 100n² + 1000, log n, n + 100n^0.1, 5n⁵, 1000n^15, and 8^(2 log n) = n⁶, respectively.)
9. Space Complexity

- Space complexity refers to the number of memory cells needed to carry out the computational steps of an algorithm, excluding memory cells needed to hold the input.
  - Compare the additional space needed to carry out SELECTIONSORT to that of BOTTOMUPSORT if we have an array of 2 million elements!
- Examples: What is the space complexity of
  - Linear search:                       Θ(1)
  - Binary search:                       Θ(1)
  - Selection sort:                      Θ(1)
  - Insertion sort:                      Θ(1)
  - Merge (merging two sorted lists):    Θ(n)
  - Bottom-up merge sort:                Θ(n)
10. Estimating the Running Time of an Algorithm

- As mentioned earlier, we need to focus on counting those operations which represent, in general, the behavior of the algorithm. This is achieved by:
  1. Counting the frequency of basic operations.
     - The basic operations are the ones with the highest frequency among all elementary operations.
     - Elementary operations take constant time:
       - consecutive statements: sum of their times
       - conditionals: time of the condition plus the branches
       - loops: sum over all iterations
       - function calls: cost of the function body
  2. Recurrence relations: solve the recursive equation.
- Above all, use your brain!
Example

    CODE                                COST    REPETITION
    sumOfArray(A[], size)
    {
        total = 0                       1       1
        for i = 0 to n-1, i = i+1       2       n+1
        {
            total = total + A[i]        2       n
        }
        return total                    1       1
    }

    T_sumOfArray = 1 + 2(n+1) + 2n + 1 = 4n + 4
    (or) T(n) = Cn + C', where C and C' are constants.
    So, T_sumOfArray = Cn + C' ∈ O(n).

Note: We can use C1, C2, C3, and C4 for the costs. As they are constants, they will be ignored at the end. Therefore, it does not matter whether we use a number or a symbol for a constant.
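The same routine as runnable Java (a sketch; the cost accounting above is unchanged):

    // sumOfArray in Java: one pass over the array, so T(n) = Cn + C' = O(n).
    static int sumOfArray(int[] a) {
        int total = 0;                       // cost C1, executed once
        for (int i = 0; i < a.length; i++)   // test + increment: executed about n+1 times
            total = total + a[i];            // cost C3, executed n times
        return total;                        // cost C4, executed once
    }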
Examples: How to Determine the Complexity of Code Structures (1/6)

Loops (for, while, and do-while):
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop.

Examples:
    for (int i = 0; i < n; i++)
        sum = sum - i;                 // O(n)

    for (int i = 0; i < n * n; i++)
        sum = sum + i;                 // O(n^2)

    i = 1;
    while (i < n) {
        sum = sum + i;                 // O(log n): i doubles each iteration
        i = i * 2;
    }
Examples: How to Determine the Complexity of Code Structures (2/6)

Nested Loops:
Examples:
    sum = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)    // O(n^2)
            sum += i * j;

    i = 1;
    while (i <= n) {
        j = 1;
        while (j <= n) {
            // statements of constant complexity
            j = j * 2;                 // inner loop: O(log n)
        }
        i = i + 1;                     // outer loop runs n times: O(n log n) overall
    }
Examples: How to Determine the Complexity of Code Structures (3/6)

Sequence of statements: use the Addition rule
    O(s1; s2; s3; ...; sk) = O(s1) + O(s2) + O(s3) + ... + O(sk)
                           = O(max(s1, s2, s3, ..., sk))

Example:
    for (int j = 0; j < n * n; j++)
        sum = sum + j;
    for (int k = 0; k < n; k++)
        sum = sum - k;
    System.out.print("sum is now " + sum);

Complexity: O(n^2) + O(n) + O(1) = O(n^2)
Examples: How to Determine the Complexity of Code Structures (4/6)

Switch: take the complexity of the most expensive case.

    char key;
    int[] X = new int[n];
    int[][] Y = new int[n][n];
    ........
    switch (key) {
        case 'a':
            for (int i = 0; i < X.length; i++)       // O(n)
                sum += X[i];
            break;
        case 'b':
            for (int i = 0; i < Y.length; i++)       // O(n^2)
                for (int j = 0; j < Y[0].length; j++)
                    sum += Y[i][j];
            break;
    }
    // Overall complexity: O(n^2)
Examples: How to Determine the Complexity of Code Structures (5/6)

If Statement: take the complexity of the most expensive case.

    char key;
    int[][] A = new int[n][n];
    int[][] B = new int[n][n];
    int[][] C = new int[n][n];
    ........
    if (key == '+') {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                C[i][j] = A[i][j] + B[i][j];         // O(n^2)
    }
    else if (key == 'x')
        C = matrixMult(A, B);                        // O(n^3)
    else
        System.out.println("Error! Enter '+' or 'x'!");  // O(1)
    // Overall complexity: O(n^3)
Examples: How to Determine the Complexity of Code Structures (6/6)

Sometimes if-else statements must be carefully checked:
    O(if-else) = O(Condition) + Max[O(if), O(else)]

    int[] integers = new int[n];
    ........
    if (hasPrimes(integers) == true)
        integers[0] = 20;              // O(1)
    else
        integers[0] = -20;             // O(1)

    public boolean hasPrimes(int[] arr) {
        for (int i = 0; i < arr.length; i++)
            ..........
            ..........                 // O(n)
    } // End of hasPrimes()

    O(if-else) = O(Condition) = O(n)
Exercise: Find the Running Time of Each

(a) sum = 0;
    for (j = 1; j <= n; j++)
        for (i = 1; i <= j; i++)
            sum++;
    for (k = 0; k < n; k++)
        A[k] = k;

(b) for j := 1 to n do
        sum[j] := 0;
        for i := 1 to j^2 do
            sum[j] := sum[j] + i;
        end for;
    end for;
    return sum[1..n];

(c) count := 0;
    for i := 1 to n do
        m := ⌊n/i⌋
        for j := 1 to m do
            count := count + 1;
        end for;
    end for;
    return count;

(d) count := 0;
    while n >= 1 do
        for j := 1 to n do
            execute_algorithm_x;
            count := count + 1;
        end for;
        n := n / 2;
    end while;
    return count;

(e) sum1 = 0;
    for (k = 1; k <= n; k *= 2)
        for (j = 1; j <= n; j++)
            sum1++;    // what if the inner n is replaced by k?

(f) count := 0;
    for i := 1 to n do
        j := 2;
        while j <= n do
            j := j^2;
            count := count + 1;
        end while;
    end for;
    return count;
11. Recurrence Relations

- The number of operations can be represented as a recurrence relation, e.g., T(n) = 2T(n/2) + n.
- There are many techniques, other than expanding the recurrence relation, which we will study in later lectures in order to solve these recurrences.

Recursive Merge Sort:
    MergeSort(A, p, r)
        if p < r then
            q := ⌊(p+r)/2⌋;
            MergeSort(A, p, q);
            MergeSort(A, q+1, r);
            Merge(A, p, q, r);
        end if;

- What is the call to sort an array with n elements?
- Let us assume that the overall cost of sorting n elements is T(n), assuming that n is a power of 2.
- If n = 1, do we know T(n)?
- What is the cost of MergeSort(A, p, q)?
- What is the cost of MergeSort(A, q+1, r)?
- What is the cost of Merge(A, p, q, r)?
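Answering these questions gives T(1) = c and T(n) = 2T(n/2) + cn. A sketch of the expansion (assuming, as above, that n is a power of 2):

    T(n) = 2T(n/2) + cn
         = 4T(n/4) + 2cn
         = \cdots
         = 2^{k} T(n/2^{k}) + kcn
         = nT(1) + cn\log n        % taking k = \log n
         = \Theta(n \log n).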
Example

- We may express the number of comparisons done by the BINARYSEARCH algorithm using the recurrence:

    C(1) = 1,    C(n) = 1 + C(⌊n/2⌋) for n ≥ 2.

- The solution to this recurrence reduces to a summation by repeated substitution:

    C(n) = 1 + C(⌊n/2⌋) = 2 + C(⌊n/4⌋) = ... = ⌊log n⌋ + C(1).

- That is, C(n) ≤ ⌊log n⌋ + 1. It follows that C(n) = O(log n).
12. Average Case Analysis

- The probabilities of all inputs are an important piece of prior knowledge needed to compute the number of operations on average.
- Usually, average case analysis is lengthy and complicated, even with simplifying assumptions.
- Average running time: the running time taken as the average time over all inputs of size n.
- Assume k inputs, where input i costs C_i operations and occurs with probability P_i, 1 ≤ i ≤ k. The average running time is given by

    T_avg = \sum_{i=1}^{k} P_i \, C_i    (probability of input i × cost of input i, summed over all inputs i)
Example: Average Case Analysis of Linear Search

- Assume that key x appears in the array, and that it is equally likely to appear at any of the positions 1, 2, ..., n.
- So there are n different inputs, each occurring with probability 1/n.
- Average number of comparisons over all inputs? (n+1)/2
- Therefore, the average running time of linear search is Θ(n).

How to find it? Apply T_avg = Σ P_i · C_i.

Question: What is the total number of comparisons on the average for Insertion Sort?
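A quick check of the (n+1)/2 figure using the formula from the previous slide, with P_i = 1/n and C_i = i comparisons when x sits at position i:

    T_{avg}(n) = \sum_{i=1}^{n} P_i \, C_i
               = \sum_{i=1}^{n} \frac{i}{n}
               = \frac{1}{n}\cdot\frac{n(n+1)}{2}
               = \frac{n+1}{2} = \Theta(n).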
13. Amortized Analysis

- Amortized analysis averages out the time taken by an operation throughout the execution of the algorithm.
- When to use amortized analysis?
  - O-notation is sometimes pessimistic; the algorithm may be much faster than our estimate.
  - Consider an algorithm in which an operation is executed repeatedly, with the property that its running time fluctuates throughout the execution of the algorithm. If this operation takes a large amount of time occasionally and runs much faster most of the time, then this is an indication that amortized analysis should be employed, assuming that an exact bound is too hard, if not impossible, to find.
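A classic illustration (not from the slides) is appending to a dynamically resized array: an occasional append triggers an expensive O(n) copy, yet the amortized cost per append is O(1). A Java sketch:

    // Sketch: appends cost O(1) amortized even though a resize copies all elements.
    // Doubling the capacity makes the total cost of n appends O(n): n cheap writes
    // plus copies of sizes 1 + 2 + 4 + ... + n/2 < n.
    static int[] data = new int[1];
    static int size = 0;

    static void append(int x) {
        if (size == data.length) {                    // occasional expensive step
            int[] bigger = new int[2 * data.length];  // double the capacity
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = x;                             // the common O(1) step
    }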
14. Some Useful Formulas (Mathematical & Logarithmic)

- Sum of a constant:
      \sum_{i=m}^{n} c = c\,(n - m + 1)
- Sum of the natural numbers from 1 to n:
      \sum_{i=1}^{n} i = \frac{n(n+1)}{2}
- Sum of the squares of the natural numbers from 1 to n:
      \sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}
- Sum of the cubes of the natural numbers from 1 to n:
      \sum_{i=1}^{n} i^3 = \left(\frac{n(n+1)}{2}\right)^{2}
- Geometric series:
      \sum_{i=0}^{n} a^{i} = \frac{a^{n+1} - 1}{a - 1}, \quad a \neq 1