
Data Structure and

Algorithm
(PMSCS 623P)

Lecture 2
Asymptotic Notations

• O notation: asymptotic “upper bound”:
f(n) = O(g(n)) if there exist constants c > 0 and n₀ > 0
such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀

• Ω notation: asymptotic “lower bound”:
f(n) = Ω(g(n)) if there exist constants c > 0 and n₀ > 0
such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀

• Θ notation: asymptotic “tight bound”:
f(n) = Θ(g(n)) if there exist constants c₁, c₂ > 0 and n₀ > 0
such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀

Asymptotic Notations
• O-notation (Big Oh)

O(g(n)) is the set of functions
with smaller or same order of
growth as g(n)

Examples:
T(n) = 3n² + 10n lg n + 8 is O(n²), O(n² lg n), O(n³), O(n⁴), …
T’(n) = 52n² + 3n² lg n + 8 is O(n² lg n), O(n³), O(n⁴), …
(everything beyond the first bound in each list is a loose upper bound)
Asymptotic Notations
• Ω-notation (Big Omega)

Ω(g(n)) is the set of functions
with larger or same order of
growth as g(n)

Examples:
T(n) = 3n² + 10n lg n + 8 is Ω(n²), Ω(n lg n), Ω(n), Ω(lg n), Ω(1)
T’(n) = 52n² + 3n² lg n + 8 is Ω(n² lg n), Ω(n²), Ω(n), …
Asymptotic Notations
• Θ-notation (Big Theta)

Θ(g(n)) is the set of functions with the same
order of growth as g(n)

* f(n) is both O(g(n)) & Ω(g(n)) ↔ f(n) is Θ(g(n))

Examples:
T(n) = 3n² + 10n lg n + 8 is Θ(n²)
T’(n) = 52n² + 3n² lg n + 8 is Θ(n² lg n)
Big O
• Informally, O(g(n)) is the set of all functions
with a smaller or same order of growth as g(n),
within a constant multiple
• If we say f(n) is in O(g(n)), it means that g(n) is
an asymptotic upper bound of f(n)
– Intuitively, it is like f(n) ≤ g(n)
• What is O(n²)?
– The set of all functions that grow slower than or in
the same order as n²
Big-O
f n  Og n : { there exist positive constants c and n0 such that
0  f n  cg n for all n n0 }
• What does it mean?
– If f(n) = O(n2), then f(n) can be larger than n2 sometimes, but…
• We can choose some constant c and some value n0 such
that for every value of n larger than n0 : f(n) ≤ cn2
• That is, for values larger than n0, f(n) is never more than a
constant multiplier greater than n2
• Or, in other words, f(n) does not grow more than a constant
factor faster than n2
Visualization of Big-O

[Figure: curves cg(n) and f(n); f(n) stays below cg(n) for all n ≥ n₀]
Examples
• Show that 30n+8 is O(n).
– Show ∃c, n₀: 30n+8 ≤ cn, ∀n ≥ n₀.

– Let c = 31, n₀ = 8. Then for all n ≥ 8:
cn = 31n = 30n + n ≥ 30n + 8,
– so 30n+8 ≤ cn.
Big-O example, graphically
• Note 30n+8 isn’t less than n anywhere (n > 0).
• It isn’t even less than 31n everywhere.
• But it is less than 31n everywhere to the right of n = 8.

[Figure: value of function vs increasing n; the curves cn = 31n and 30n+8
cross at n = 8, and 30n+8 stays below 31n for n > n₀ = 8, showing 30n+8 ∈ O(n)]
Example: Exponential-Time Algorithm
 Once upon a time there was an Indian
king who wanted to reward a wise man
for his excellence. The wise man asked
for nothing but some wheat that would
fill up a chess board.

 But these were his rules: on the first tile he wants 1 grain of wheat, then 2 on
the second tile, then 4 on the next one…each tile on the chess board
needed to be filled with double the amount of grains as the previous one.
The naïve king agreed without hesitation, thinking it would be a trivial
demand to fulfill, until he actually went on and tried it…

 So how many grains of wheat does the king owe the wise man? We know
that a chess board has 8 squares by 8 squares, which totals 64 tiles, so the
final tile alone holds 2⁶³ grains of wheat, and the grand total is 2⁶⁴ − 1. If you do
the calculation, you will end up getting about 1.8446744×10¹⁹, that is about 18
followed by 18 zeroes. Assuming that each grain of wheat weighs 0.01 grams,
that gives us 184,467,440,737 tons of wheat. And 184 billion tons is quite a lot, isn’t it?
Some Examples
Determine the time complexity for the following algorithm.

count = 0;
for(i=0; i<10000; i++)
    count++;

Answer: O(1). The loop runs a fixed 10000 times, independent of any input size.
Some Examples
Determine the time complexity for the following algorithm.

count = 0;
for(i=0; i<n; i++)
    count++;

Answer: O(n). The loop body runs n times.
Some Examples
Determine the time complexity for the following algorithm.

sum = 0;
for(i=0; i<n; i++)
    for(j=0; j<n; j++)
        sum += arr[i][j];

Answer: O(n²). The inner loop runs n times for each of the n outer iterations.
Some Examples
Determine the time complexity for the following algorithm.

sum = 0;
for(i=1; i<=n; i=i*2)
    sum += i;

Answer: O(lg n)
WHY? Show mathematical analysis to prove that it is indeed O(lg n)
Some Examples
Determine the time complexity for the following algorithm.

sum = 0;
for(i=1; i<=n; i=i*2)
    for(j=1; j<=n; j++)
        sum += i*j;

Answer: O(n lg n)
Why? The outer for loop runs O(lg n) times (prove it!) and for
each iteration of the outer loop, the inner loop runs O(n) times.
Big-O Notation

• O(1) → constant time; O(n) → linear time; O(n²) → quadratic time

• In general, O(n^c1 (lg n)^c2) time is called
polynomial time (here c1 and c2 are
constants). For e.g. O(n²), O(n³), O(1),
O(n lg n), O(n² lg n), O(n³ (lg n)²), etc.

• O(2ⁿ), O(4ⁿ), O(2^(n²)), etc., are called exponential times.


Polynomial & non-polynomial time algorithms
• Polynomial time algorithm: Algorithm whose
worst-case running time is polynomial
• Examples: Linear Search (in unsorted array): O(n),
Binary Search (in sorted array): O(lg n), etc.
• Non-polynomial time algorithm: Algorithm whose
worst-case running time is not polynomial
• Examples: an algorithm to enumerate and print all possible
orderings of n persons: O(n!), an algorithm to enumerate
and print all possible binary strings of length n: O(2ⁿ)
• Theoretically, polynomial algorithms are expected to be more
efficient than non-polynomial or exponential algorithms but this
is not always true in practice …
Are non-polynomial time algorithms always
worse than polynomial time algorithms?
• Theoretically, yes; but not always true in practice.
For e.g. consider algorithms A and B where
- A is an O(n^1,000,000) algorithm
- B is an O(n^(lg lg lg n)) algorithm
- A’s running time is polynomial, so we may expect it to be
more efficient than B, whose running time is not polynomial.
But in practice A takes much longer to finish than B, because
lg lg lg n stays tiny (far below 1,000,000) for any input size
we could ever run.
- So there may exist a non-polynomial time algorithm which
is better than a polynomial time algorithm
Visualizing Orders of Growth of Runtimes
• On a graph, as you go to the right, a faster
growing function eventually becomes larger.

[Figure: running time vs increasing n for fA(n) = 30n+8 and fB(n) = n²+1;
fB eventually overtakes fA]
Growth of Functions
Complexity Graphs

[Figure: growth of log(n)]

Complexity Graphs

[Figure: log(n) vs n log(n)]

Complexity Graphs

[Figure: n log(n), n², n³, n¹⁰]

Complexity Graphs (log scale)

[Figure: 1.1ⁿ, n¹⁰, n²⁰, 2ⁿ, 3ⁿ, nⁿ]
Types of Time Complexity Analysis
• Worst-case: (usually done)
– Running time on worst possible input

• Best-case: (bogus)
– Running time on best possible input

• Average-case: (sometimes done)


 We take all possible inputs, calculate the computing time for each of the
inputs, sum all the calculated values, and divide the sum by the total number
of inputs
 We must know (or predict) the distribution of inputs to calculate this
 Often we compute it on an assumed distribution of inputs using
expectation, in which case it is called Expected running time
 This is typically calculated to show that although our algorithm’s worst-case
running time is not better, its expected time is better than its competitors’

We are typically interested in the runtime of an algorithm in the worst-case
scenario, because it provides us a guarantee that the algorithm won’t take any
longer than this, no matter which input is given.
Insertion Sort Algorithm Revisited
INSERTION-SORT (A, n)             ⊳ A[1 . . n]
  for j ← 2 to n                  ⊳ runs n−1 times
      key ← A[j]                  ⊳ O(1)
      i ← j − 1                   ⊳ O(1)
      while i > 0 and A[i] > key  ⊳ tested tj times
          A[i+1] ← A[i]           ⊳ O(1)
          i ← i − 1               ⊳ O(1)
      A[i+1] ← key                ⊳ O(1)

What is the time complexity?
Depends on the arrangement of numbers in the input array.

How can you arrange the input numbers so that this
algorithm becomes most efficient (best case)?

How can you arrange the input numbers so that this
algorithm becomes most inefficient (worst case)?
Insertion Sort Algorithm Revisited
(best case analysis)
• Insertion sort performs two operations:
– it scans through the list.
– it moves elements if they are out of order.

• Each operation contributes to the running time of


the algorithm.
• If the input array is already sorted, then insertion
sort performs (n-1) comparisons and no
movements. So, in the best case, it runs in O(n) time.
Insertion Sort Algorithm Revisited
(worst case analysis)
• The worst case for insertion sort will occur when the input list is in
decreasing order.
• To insert the last element,
– we need at most (n-1) comparisons and

– at most n-1 movements.

• To insert the second last element, we need at most (n-2) comparisons
and at most (n-2) movements, and so on.
• The number of operations needed to perform insertion sort is therefore:
2 × (1 + 2 + ⋯ + (n−2) + (n−1)) = 2 × n(n−1)/2 = n² − n
So, in the worst case, it runs in O(n²) time.
Insertion Sort: Running Time

From the per-line costs above,
T(n) = Σ_{j=2}^{n} [ 3·O(1) + tj·O(1) + 2(tj−1)·O(1) ]
Here tj = no. of times the condition of the while loop is tested for the current value of j.
How can we simplify T(n)? Hint: compute the value of T(n) in the best/worst case.
Insertion Sort: Running Time

Insertion sort running time (best case):
when the input is already sorted, the while-loop test fails on its first try,
so tj = 1 for every j, and
T(n) = Σ_{j=2}^{n} O(1) = O(n)

Insertion Sort: Running Time

Insertion sort running time (worst case):
when the input is reverse-sorted, tj = j for every j, and
Σ_{j=2}^{n} j = n(n+1)/2 − 1, so T(n) = O(n²)
Insertion Sort: Running Time

Here tj = no. of times the condition of the while loop is tested for the current value of j.
In the worst case (when the input is reverse-sorted), in each iteration of the for loop, all the j−1
elements need to be right-shifted, i.e., tj = (j−1)+1 = j [1 is added to represent the last test].
Putting this in the above eq., we get: T(n) = An² + Bn + C → O(n²), where A, B, C are constants.
What is T(n) in the best case (when the input numbers are already sorted)?
