02_Algorithm_Analysis
Learning Outcomes
To be able to:
Carry out simple asymptotic analysis of algorithms
Explain the use of big O, omega, and theta notation to
describe the efficiency of an algorithm
Use big O, omega, and theta notation to give asymptotic
upper, lower, and tight bounds on time and space complexity
of algorithms
Determine the time and space complexity of simple
algorithms
Scope
Efficiency goals
Introduction
An algorithm is a clearly specified set of simple instructions to be followed to
solve a problem.
An algorithm that solves a problem but requires a year is hardly of any use.
Likewise, an algorithm that requires hundreds of gigabytes of main memory is
not (currently) useful on most machines.
Computational Complexity
The same problem can frequently be solved with different algorithms that differ in efficiency.
It is not useful to measure how fast the algorithm runs, as this depends on which particular
computer, OS, programming language, compiler, and kind of inputs are used in testing.
Instead, we count the number of basic operations the algorithm performs, for example:
Assignment
Arithmetic operation
Comparison
Method call (Note: the execution time of the method itself may depend on the values of its
parameters and may not be constant)
Memory access
Note: To simplify complexity analysis we will not consider memory access (fetch or
store) operations.
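As a small illustration of counting operations, here is an added, hypothetical Java sketch (not from the original slides) that counts the basic operations of a linear scan instead of timing it:

static long countOps(int[] a) {
    long ops = 0;
    int sum = 0;
    ops++;                 // the assignment sum = 0
    ops++;                 // the assignment i = 0
    for (int i = 0; i < a.length; i++) {
        ops++;             // the comparison i < a.length
        sum += a[i];
        ops += 2;          // an addition and an assignment
        ops += 2;          // the increment i++ (an addition and an assignment)
    }
    ops++;                 // the final failed comparison
    return ops;            // total = 5·a.length + 3: linear in the input size
}

On any machine this count is the same, which is exactly why we count operations instead of seconds.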
Problem Size
For every algorithm we want to analyze, we need to define the size of the
problem
The dishwashing problem has a size n – number of dishes to be
washed/dried
For a search algorithm, the size of the problem is the size of the search pool
For a sorting algorithm, the size of the problem is the number of elements to
be sorted
Most algorithms vary in their number of steps based on the size of the
problem instance
The efficiency of an algorithm is always stated as a function of the problem
size
We generally use the variable n to represent the problem size
Typically the size of the input (n) is the main consideration
Problem/Input size matters!
Some example algorithms and their expected running times based on the input
size, for example: linear search is O(n), binary search is O(log n), selection
sort is O(n²), and merge sort is O(n log n).
Growth Functions
A growth function shows the time or space utilization of an algorithm relative to
the size of the problem, n.
Algorithm Complexity
We are usually interested in the worst-case complexity: the largest number of
operations that might be performed for a given problem size.
Running Time Analysis
Asymptotic Complexity
We approximate f(n) by a function g(n) in a way that does not substantially
change the magnitude of f(n); the function g(n) is sufficiently close to f(n)
for large values of the input size n.
Thus the asymptotic complexity measure does not give the exact number
of operations of an algorithm, but it shows how that number grows with the
size of the input.
This gives us a measure that will work for different operating systems,
compilers and CPUs.
Theta(expression) consists of all the functions that lie in both O(expression) and
Omega(expression). When the upper and lower bounds are the same within a constant
factor, we indicate this by using Θ (big-Theta) notation.
Big-Oh Notation
The most commonly used notation for specifying asymptotic complexity is the big-O
notation.
The coefficients and the lower-order terms become increasingly less relevant as n
increases.
Two algorithms in the same category are generally considered to have the same
efficiency, but that doesn't mean they have equal growth functions or behave exactly
the same for all values of n.
Big-O Notation: Definition
Big O:
T(n) = O(f(n)) if there are positive constants c and N
such that T(n) ≤ c f(n) when n ≥ N
This says that the function T(n) grows at a rate no faster than f(n); thus f(n) is
an upper bound on T(n).
Another way: T(n)/f(n) is bounded above by a constant for all sufficiently large n.
[Figure: T(n) lies below c·f(n) for all n ≥ N]
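A quick worked example (an added sketch, not from the original slides): T(n) = 3n² + 5n + 7 is O(n²).
For all n ≥ 1 we have 5n ≤ 5n² and 7 ≤ 7n², so
T(n) = 3n² + 5n + 7 ≤ 3n² + 5n² + 7n² = 15n².
Hence T(n) ≤ c f(n) with c = 15, f(n) = n², and N = 1, so T(n) = O(n²).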
Big-Ω Notation: Definition
Big Omega:
T(n) = Ω(f(n)) if there are positive constants c
and N such that T(n) ≥ c f(n) when n ≥ N
This says that the function T(n) grows at a rate no slower than f(n); thus f(n) is
a lower bound on T(n).
Another way: T(n)/f(n) is bounded below by a positive constant for all sufficiently large n.
Big-Ω Notation: illustration
[Figure: T(n) lies above c·f(n) for all n ≥ N, illustrating T(n) = Ω(f(n))]
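Similarly, a worked Ω example (added): T(n) = 3n² + 5n is Ω(n²), since for all n ≥ 1,
T(n) = 3n² + 5n ≥ 3n², i.e., T(n) ≥ c f(n) with c = 3 and N = 1.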
Big-Θ Notation
When the upper and lower bounds are the same within a
constant factor, we indicate this by using Θ (big-Theta) notation.
Big-Θ Notation: Definition
Big Theta:
T(n) = Θ(f(n)) if and only if T(n) = O(f(n)) and T(n) = Ω(f(n)); equivalently,
there are positive constants c1, c2, and N such that
c1 f(n) ≤ T(n) ≤ c2 f(n) when n ≥ N.
This says that the function T(n) grows at the same rate as f(n).
Big-Θ Notation: illustration
[Figure: c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ N]
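Combining the two previous worked examples (added): T(n) = 3n² + 5n is Θ(n²), since for
all n ≥ 1, 3n² ≤ 3n² + 5n ≤ 8n², satisfying the definition with c1 = 3, c2 = 8, and N = 1.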
Big-Oh Categories
Some growth functions occur so frequently that they are given names:
O(1) constant
O(log n) logarithmic
O(n) linear
O(n log n) n log n
O(n²) quadratic
O(n³) cubic
O(2ⁿ) exponential
Rules for using big-O
1. For large values of the input n, the constants and terms with lower
degree of n are ignored.
Example: T(n) = 15n³ + 10n² + 7 = O(n³)
Rules for using big-O (cont…)
2. Addition Rule: O(f(n)) + O(g(n)) = O(max(f(n), g(n)))
Example 1: O(n²) + O(n) = O(n²)
Rules for using big-O (cont…)
3. Multiplication Rule: O(f(n)) · O(g(n)) = O(f(n) · g(n))
Example: a loop that executes n times with an O(n) body is O(n · n) = O(n²)
Rules for using big-O (cont…)
4. If T(n) is a polynomial of degree k, then T(n) = O(nᵏ)
Example: T(n) = 7n⁴ + 3n² + 1 = O(n⁴)
Comparing Algorithms
Establish a relative order among different algorithms, in terms of their relative rates
of growth
The rates of growth are expressed as functions, which are generally in terms of the
number of inputs n
Comparing Growth Functions
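For reference (an added summary of the comparison): the common growth functions, in
increasing order of growth rate, are
1 < log n < n < n log n < n² < n³ < 2ⁿ (for sufficiently large n).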
How to determine complexity of code structures
NOTE : In general,
doing something with every item in one dimension is linear,
doing something with every item in two dimensions is quadratic,
and dividing the working area in half is logarithmic.
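To make the note concrete, a small added Java sketch (method and variable names are arbitrary):

static int patterns(int n, int[] a, int[][] b) {
    int sum = 0;
    // Linear: every item in one dimension -> O(n)
    for (int i = 0; i < n; i++)
        sum += a[i];
    // Quadratic: every item in two dimensions -> O(n²)
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            sum += b[i][j];
    // Logarithmic: the working area is halved each step -> O(log n)
    for (int size = n; size > 0; size /= 2)
        sum++;
    return sum;
}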
Analyzing Loop Execution
Loops: for, while, and do-while:
First determine the order of the body of the loop, then multiply that by the
number of times the loop will execute
for (int count = 0; count < n; count++)
// some sequence of O(1) steps
n loop executions times O(1) operations per iteration results in O(n) efficiency.
Consider the following loop:
count = 1;
while (count < n)
{
    count *= 2;
    // some sequence of O(1) steps
}
The value of count doubles on every iteration, so the loop executes about log₂ n times;
the loop is O(log n).
Similarly:
i = 1;
while (i < n) {
    sum = sum + i;
    i = i * 2;
}                       // O(log n)
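To sanity-check the O(log n) claim, here is a small self-contained Java sketch (added, not from the slides) that counts iterations of the doubling loop for several values of n:

public class LoopCount {
    public static void main(String[] args) {
        for (int n = 16; n <= 1_000_000; n *= 10) {
            int iterations = 0;
            int count = 1;
            while (count < n) {   // same loop as above
                count *= 2;
                iterations++;
            }
            // iterations grows like log2(n): 4, 8, 11, 14, 18, ...
            System.out.println("n = " + n + ", iterations = " + iterations);
        }
    }
}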
Analyzing Loop Execution (cont…)
We start by considering how to count operations in for-loops.
First of all, we should know the number of iterations of the loop; say it is x.
Then the loop condition is executed x + 1 times.
Example:
int sum (int n)
{
    int partial_sum = 0;
    int i;
    for (i = 1; i <= n; i++)
        partial_sum = partial_sum + (i * i);
    return partial_sum;
}
Time units to compute:
1 for the assignment (partial_sum = 0)
Loop statement: 1 assignment, n + 1 tests, and n increments
Loop body: n iterations of 3 units each (an assignment, an addition, and a multiplication)
1 for the return statement
Total: 1 + (1 + n + 1 + n) + 3n + 1 = 5n + 4 = O(n)
Analyzing Loop Execution (cont…)
Loop example:
Find the exact number of basic operations in the following program fragment:
double x, y;
x = 2.5; y = 3.0;
for (int i = 0; i < n; i++) {
    a[i] = x * y;
    x = 2.5 * x;
    y = y + a[i];
}
Counting the basic operations:
the assignments x = 2.5 and y = 3.0: 2 operations
the assignment i = 0: 1 operation
the loop condition i < n: tested n + 1 times
the increment i++ (an addition and an assignment): 2n operations
the loop body (three assignments, two multiplications, and an addition): 6n operations
Thus the total number of basic operations is 6n + 2n + (n + 1) + 3 = 9n + 4
Analyzing Loop Execution (cont…)
Analyzing Nested Loops:
When loops are nested, we multiply the complexity of the outer loop
by the complexity of the inner loop
for (int count = 0; count < n; count++)
    for (int count2 = 0; count2 < n; count2++)
    {
        // some sequence of O(1) steps
    }
Each of the n outer iterations performs n inner iterations, so the nested loop is
O(n × n) = O(n²).
Analyzing Loop Execution (cont…)
Examples:
sum = 0;
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        sum += i * j;            // O(n²)

i = 1;
while (i <= n) {
    j = 1;
    while (j <= n) {
        // statements of constant complexity
        j = j * 2;
    }
    i = i + 1;
}                                // O(n log n)
The inner loop executes about log₂ n times per outer iteration, so the total is
n · log₂ n = O(n log n).
Analyzing Sequence of Statements
Consecutive statements: Use Addition rule
These just add, and the maximum is the one that counts
Example: a statement sequence consisting of an O(n) loop followed by an O(n²)
nested loop is O(n + n²) = O(n²).
Analyzing Sequence of Statements (cont…)
Example:
for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++)
        sum = sum + i + j;       // O(n²)
n² + 1 + n + n = O(n² + 2n + 1) = O(n²)
Analyzing If Execution
If Statement: Take the complexity of the most expensive case :
char key;
int[][] A = new int[n][n];
int[][] B = new int[n][n];
int[][] C = new int[n][n];
........
if (key == '+') {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            C[i][j] = A[i][j] + B[i][j];             // O(n²)
} // End of if block
else if (key == 'x')
    C = matrixMult(A, B);                            // O(n³)
else
    System.out.println("Error! Enter '+' or 'x'!");  // O(1)
Overall complexity: O(n³)
Analyzing If Execution (cont…)
if (test) s1 else s2
The running time is never more than the running time of the test plus the larger
of the running times of s1 and s2
Example:
if (test == 1)                   // O(1)
    for (i = 1; i <= n; i++)
        sum = sum + i;           // O(n)
else
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            sum = sum + i + j;   // O(n²)
The overall complexity is O(1) + max(O(n), O(n²)) = O(n²).
Analyzing If Execution (cont…)
Note: Sometimes a loop may cause the if-else rule not to be applicable.
Consider the following loop:
while (n > 0) {
    if (n % 2 == 0) {
        System.out.println(n);
        n = n / 2;
    }
    else {
        System.out.println(n);
        System.out.println(n);
        n = n - 1;
    }
}
The else-branch has more basic operations, so one might conclude that the loop is O(n).
However, the if-branch dominates. For example, if n is 60, then the sequence of n is:
60, 30, 15, 14, 7, 6, 3, 2, 1, and 0. Hence the loop is logarithmic and its complexity
is O(log n).
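To make the logarithmic bound explicit (an added note): every odd value of n is followed
by an even value, and every even value is halved, so n is at least halved in any two
consecutive iterations. Starting from n, the loop therefore performs at most
2⌈log₂ n⌉ + 1 iterations, which is O(log n).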
Analyzing Switch Execution
Switch: Take the complexity of the most expensive case
char key;
int[] X = new int[n];
int[][] Y = new int[n][n];
........
switch (key) {
    case 'a':
        for (int i = 0; i < X.length; i++)       // O(n)
            sum += X[i];
        break;
    case 'b':
        for (int i = 0; i < Y.length; i++)       // O(n²)
            for (int j = 0; j < Y[0].length; j++)
                sum += Y[i][j];
        break;
} // End of switch block
Overall complexity: O(n²)
Analyzing Method Calls (cont…)
Loop example:
Suppose n is a power of 2. Determine the number of basic operations performed by the
method myMethod():
static int myMethod(int n) {
    int sum = 0;
    for (int i = 1; i < n; i = i * 2)
        sum = sum + i + helper(i);
    return sum;
}

static int helper(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++)
        sum = sum + i;
    return sum;
}
The number of iterations of the loop in myMethod() is log₂ n.
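A brief added analysis sketch: helper(i) performs O(i) operations (its loop runs i times),
and myMethod() calls it with i = 1, 2, 4, …, n/2. The total work is therefore dominated by
the geometric sum
1 + 2 + 4 + … + n/2 = n − 1,
so myMethod() performs O(n) basic operations overall; the log₂ n iterations of its own
loop add only O(log n) more.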
Recursion:
Analyze from the inside (or deepest part) first and work outwards. If there are
function calls, these must be analyzed first.
Example:
long factorial (int n) {
    if (n <= 1)
        return 1;
    else
        return n * factorial(n - 1);
}
Time units to compute: 1 for the test, 1 for the multiplication statement.
What about the function call?
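One way to finish the analysis (an added sketch): the recursive call contributes the cost
of factorial(n − 1), giving the recurrence
T(n) = T(n − 1) + 2, with T(1) = 1.
Unrolling gives T(n) = 2(n − 1) + 1 = 2n − 1, so factorial is O(n).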
Summary
Software must make efficient use of resources such as CPU time and memory.
A growth function shows time or space utilization relative to the problem size.
The order of an algorithm is found by eliminating constants and all but the dominant term in the
algorithm’s growth function.
The order of an algorithm provides an upper bound on the algorithm's growth function.
If the algorithm is inefficient, a faster processor will not help in the long run.
The time complexity of a loop is found by multiplying the complexity of the body of the loop by
how many times the loop will execute.
The analysis of nested loops must take into account both the inner and outer loops.
If and Switch statements: take the complexity of the most expensive case.
Any Questions?