Session 2
• Approaches:
– theoretical analysis
– empirical analysis
Constant time operations
• In practice, designing an efficient algorithm aims to lower the
amount of time that an algorithm runs. However, a single algorithm
can always execute more quickly on a faster processor. Therefore,
the theoretical analysis of an algorithm describes runtime in terms
of number of constant time operations, not nanoseconds.
• Part of the reason for this is that floating-point and integer
values have a fixed size. The table below summarizes operations
that are generally considered constant time operations.
Common constant time operations:
– Addition, subtraction, multiplication, and division of fixed-size
integer or floating-point values. Example:
w = 10.4
x = 3.4
y = 2.0
z = (w - x) / y
– Assignment of a reference, pointer, or other fixed-size data
value. Example:
x = 1000
y = x
a = true
b = a
– Comparison of two fixed-size data values. Example:
a = 100
b = 200
if (b > a) { ... }
– Read or write an array element at a particular index. Example:
x = arr[index]
arr[index + 1] = x + 1
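Each row of the table can be exercised directly; a minimal Python sketch using the table's own example values (variable names follow the table):

```python
# Arithmetic on fixed-size floating-point values: constant time.
w = 10.4
x = 3.4
y = 2.0
z = (w - x) / y  # (10.4 - 3.4) / 2.0 = 3.5

# Assignment of fixed-size data values: constant time.
x2 = 1000
y2 = x2
a = True
b = a

# Comparison of two fixed-size data values: constant time.
a2 = 100
b2 = 200
larger = b2 > a2  # True

# Read or write an array element at a particular index: constant time.
arr = [10, 20, 30, 40]
index = 1
v = arr[index]            # reads 20
arr[index + 1] = v + 1    # arr becomes [10, 20, 21, 40]
```

Each statement touches a fixed amount of data, so its cost does not depend on any input size N.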
Growth of functions and complexity
• Upper and lower bounds
An algorithm with runtime complexity T(N) has a lower bound and an
upper bound.
– Lower bound: A function f(N) that is ≤ the best case T(N), for
all values of N ≥ 1.
– Upper bound: A function f(N) that is ≥ the worst case T(N), for
all values of N ≥ 1.
• Ex: An algorithm with best-case runtime T(N) = 7N + 36 and worst-case
runtime T(N) = 3N^2 + 10N + 17 has a lower bound f(N) = 7N and
an upper bound f(N) = 30N^2. (7N ≤ 7N + 36 holds for all N ≥ 1, and
3N^2 + 10N + 17 ≤ 30N^2 holds for all N ≥ 1; a constant larger than 3
is needed because 3N^2 alone is smaller than the worst case.)
• These lower and upper bounds provide a general picture of the
runtime, while using simpler functions than the exact runtime.
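The bound inequalities can be checked numerically; a small sketch for the example runtimes above (note the upper bound needs a constant of at least 30, since 3N^2 + 10N + 17 ≤ 30N^2 for all N ≥ 1, with equality at N = 1):

```python
def best_case(n):
    return 7 * n + 36

def worst_case(n):
    return 3 * n**2 + 10 * n + 17

def lower_bound(n):
    return 7 * n

def upper_bound(n):
    return 30 * n**2

# The lower bound never exceeds the best case, and the upper bound
# never falls below the worst case, for every tested N >= 1.
for n in range(1, 10_001):
    assert lower_bound(n) <= best_case(n)
    assert worst_case(n) <= upper_bound(n)
```

The check is finite, not a proof, but it matches the algebra: the inequalities hold term by term once N ≥ 1.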
Upper and lower bounds in the context of runtime complexity
This section presents upper and lower bounds specifically in the
context of algorithm complexity analysis. The constraint N ≥ 1 is
included because of the assumption that every algorithm presented in
this book operates on a dataset with at least 1 item.
Theoretical analysis of time efficiency
• Time efficiency is analyzed by determining the number of
repetitions of the basic operation as a function of input size:
T(n) ≈ c_op · C(n)
where T(n) is the running time, c_op is the execution time for the
basic operation, and C(n) is the number of times the basic operation
is executed.
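As an illustration of counting C(n), consider finding the maximum of a list, where the basic operation is the element comparison; a sketch (the function and counter are my own illustrative choices) showing C(n) = n − 1:

```python
def find_max(values):
    """Return the maximum of a nonempty list, counting the basic operation."""
    comparisons = 0          # C(n): number of basic operations executed
    maximum = values[0]
    for v in values[1:]:
        comparisons += 1     # one comparison per remaining element
        if v > maximum:
            maximum = v
    return maximum, comparisons

maximum, c = find_max(list(range(100)))
# C(n) = n - 1 for n elements, so T(n) is approximately c_op * (n - 1).
assert c == 99
```

Multiplying the count by a measured per-operation time c_op then estimates the total running time T(n).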
Input size and basic operation examples:
– Typical graph problem: input size is #vertices and/or edges;
basic operation is visiting a vertex or traversing an edge.
Empirical analysis of time efficiency
• Select a specific (typical) sample of inputs, targeting one of:
– Worst case
– Best case
– Average case
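A minimal empirical measurement using Python's timeit on a sample of inputs (the linear-search workload here is my own illustrative choice; timings will vary by machine):

```python
import timeit

def linear_search(values, target):
    """Scan for target; the worst case (target absent) touches every element."""
    for i, v in enumerate(values):
        if v == target:
            return i
    return -1

# Time the worst case for growing input sizes; roughly 10x more time
# per 10x more data indicates linear growth.
for n in (1_000, 10_000, 100_000):
    data = list(range(n))
    t = timeit.timeit(lambda: linear_search(data, -1), number=20)
    print(f"n={n:>7}: {t:.4f} s")
```

Plotting or tabulating the measured times against n is the empirical counterpart of the theoretical count C(n).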
Types of formulas for basic operation’s count
• Exact formula, e.g., C(n) = n(n-1)/2
• Formula indicating order of growth with a specific multiplicative constant
• Formula indicating order of growth with an unknown multiplicative constant
• Example:
– How much faster will the algorithm run on a computer that is
twice as fast? Answer: about twice as fast. Since T(n) ≈ c_op · C(n),
halving c_op halves the estimated running time; the order of growth
is unchanged.
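The exact formula C(n) = n(n-1)/2 is the comparison count of an all-pairs nested loop; a quick sketch (function name is mine) confirming it by direct counting:

```python
def count_pair_comparisons(n):
    """Count the basic operations of a nested all-pairs loop over n items."""
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            count += 1  # one comparison per pair (i, j)
    return count

# The count matches the closed form n(n-1)/2 exactly.
for n in (0, 1, 2, 10, 100):
    assert count_pair_comparisons(n) == n * (n - 1) // 2
```

This is the same count that appears later for selection sort, whose inner loop compares each pair of elements once.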
• Given a function that describes the running time of an algorithm, the Big O
notation for that function can be determined using the following rules:
– If f(N) is a sum of several terms, the highest order term (the
one with the fastest growth rate) is kept and others are
discarded.
– If f(N) has a term that is a product of several factors, all
constants (those that are not in terms of N) are omitted.
Determining Big O notation of a function.
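The two rules can be seen numerically: for f(N) = 3N^2 + 10N + 17, the ratio f(N)/N^2 approaches the constant 3 as N grows, so the lower-order terms and the constant factor are both discarded, giving O(N^2). A short sketch:

```python
def f(n):
    """Example runtime function: highest-order term is 3N^2."""
    return 3 * n**2 + 10 * n + 17

# As N grows, f(N) is dominated by 3N^2, so f(N) / N^2 approaches 3:
# the 10N and 17 terms, and the constant 3, do not affect the Big O class.
for n in (10, 100, 1000, 10_000):
    print(n, f(n) / n**2)

assert abs(f(1_000_000) / 1_000_000**2 - 3) < 1e-4
```

Discarding the surviving constant 3 then yields O(N^2).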
Big O notation of composite functions
• The following rules are used to determine the Big O notation of
composite functions (c denotes a constant):
– c · O(f(N)) = O(f(N))
– c + O(f(N)) = O(f(N))
– g(N) · O(f(N)) = O(g(N) · f(N))
– g(N) + O(f(N)) = O(g(N) + f(N))
• The table below shows the runtime to perform f(N) instructions for
different functions f and different values of N. For large N, the
difference in computation time varies greatly with the rate of
growth of the function f. The data assumes that a single instruction
takes 1 μs to execute.
Growth rates for different input sizes: runtime to perform f(N)
instructions for N = 10, 50, 100, 1000, 10000, and 100000, at 1 μs
per instruction.
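The table's entries can be regenerated; a sketch printing the time for f(N) instructions at 1 μs each, for a few representative growth functions (the exact set of functions in the original table is an assumption on my part):

```python
import math

# Assumed representative growth functions; each returns an instruction count.
functions = {
    "log N":   lambda n: math.log2(n),
    "N":       lambda n: n,
    "N log N": lambda n: n * math.log2(n),
    "N^2":     lambda n: n**2,
}

sizes = (10, 50, 100, 1000, 10_000, 100_000)

# One instruction takes 1 microsecond = 1e-6 seconds.
for name, f in functions.items():
    row = ", ".join(f"{f(n) * 1e-6:.4g} s" for n in sizes)
    print(f"{name:>8}: {row}")
```

The printout makes the slide's point concrete: at N = 100000, a linear algorithm takes a tenth of a second while a quadratic one takes hours.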
Runtime complexities for various code examples

O(N log N), linearithmic:
MergeSort(numbers, i, k) {
   j = 0
   if (i < k) {
      j = (i + k) / 2              // Find midpoint
      MergeSort(numbers, i, j)     // Sort left part
      MergeSort(numbers, j + 1, k) // Sort right part
      Merge(numbers, i, j, k)      // Merge parts
   }
}
O(N^2), quadratic:
SelectionSort(numbers, numbersSize) {
   for (i = 0; i < numbersSize; ++i) {
      indexSmallest = i
      for (j = i + 1; j < numbersSize; ++j) {
         if (numbers[j] < numbers[indexSmallest]) {
            indexSmallest = j
         }
      }
      temp = numbers[i]
      numbers[i] = numbers[indexSmallest]
      numbers[indexSmallest] = temp
   }
}
O(c^N), exponential:
Fibonacci(N) {
   if ((1 == N) || (2 == N)) {
      return 1
   }
   return Fibonacci(N-1) + Fibonacci(N-2)
}
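The exponential growth of the recursive Fibonacci can be observed by counting calls; a sketch (the call-counting wrapper is my own addition):

```python
def fibonacci(n, counter):
    """Naive recursive Fibonacci; counter[0] tracks total calls made."""
    counter[0] += 1
    if n == 1 or n == 2:
        return 1
    return fibonacci(n - 1, counter) + fibonacci(n - 2, counter)

# Total calls grow roughly by the golden ratio (~1.618) per unit of N:
# exponential growth, O(c^N) with c ~ 1.618.
for n in (10, 15, 20, 25):
    counter = [0]
    result = fibonacci(n, counter)
    print(f"N={n}: result={result}, calls={counter[0]}")
```

Because each call spawns two more, the call tree doubles in depth-limited layers, which is why even modest N values become impractical.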
Big O complexity chart
• 1. O(1) has the least complexity
• 2. O(log(n)) is more complex than
O(1), but less complex than
polynomials
• 3. Complexity of polynomials
increases as the exponent increases:
O(n) < O(n^2) < O(n^3)
• 4. Exponentials O(c^n) with c > 1
have greater complexity than any
polynomial O(n^k)
• 5. Factorials have greater
complexity than exponentials
• 6. Multiplying terms increases
complexity: O(f(n) · g(n)) is more
complex than O(f(n)) when g(n)
grows; e.g., O(n log n) lies between
O(n) and O(n^2)
Omega Notation (Ω-notation)
• Ω(g(n)): the class of functions f(n) that grow at least as fast as
g(n), i.e., a lower bound on the order of growth.
Big O examples:
• 10n^2 is O(n^2)
• 5n + 20 is O(n)
Solve more examples
• Visit:
https://math.libretexts.org/Courses/Saint_Mary's_College_Notre_Dame_IN/SMC%3A_MATH_339_-_Discrete_Mathematics_(Rohatgi)/Text/4%3A_Algorithms/4.1%3A_Big-O_Notation
Try your hand at the exercises by clicking through to the next page.
Some properties of asymptotic order of growth
• f(n) ∈ O(f(n))
Examples:
• 10n vs. n^2
• n(n+1)/2 vs. n^2
L’Hôpital’s rule and Stirling’s formula
• L’Hôpital’s rule: If lim_{n→∞} f(n) = lim_{n→∞} g(n) = ∞ and the
derivatives f′, g′ exist, then
lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n)
• Stirling’s formula: n! ≈ √(2πn) · (n/e)^n
Example: log n vs. n
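For the example, L’Hôpital’s rule gives lim (log₂ n)/n = lim (1/(n ln 2))/1 = 0, so log n grows strictly slower than n; a quick numerical check:

```python
import math

# The ratio (log2 n) / n shrinks toward 0 as n grows, confirming that
# log n has a strictly lower order of growth than n.
for n in (10, 100, 1000, 10**6):
    print(n, math.log2(n) / n)

assert math.log2(10**6) / 10**6 < 1e-4
```

The same limit technique ranks any pair of smooth growth functions.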
• All polynomials of the same degree k belong to the same class:
a_k n^k + a_{k-1} n^{k-1} + … + a_0 ∈ Θ(n^k)
• order log n < order n^ε (ε > 0) < order a^n < order n! < order n^n
Basic asymptotic efficiency classes
1        constant
log n    logarithmic
n        linear
n log n  n-log-n or linearithmic
n^2      quadratic
n^3      cubic
2^n      exponential
n!       factorial
Time efficiency of recursive algorithms
General Plan for Analysis
• Decide on a parameter n indicating input size
• Identify the algorithm’s basic operation
• Set up a recurrence relation for the number of times the basic
operation is executed
• Solve the recurrence (or, at the very least, establish its solution’s
order of growth) by backward substitutions or another method.
Example 1: Recursive evaluation of n!
Definition: n! = 1 · 2 · … · (n-1) · n for n ≥ 1, and 0! = 1
Size: n
Basic operation: multiplication
Recurrence relation: M(n) = M(n-1) + 1 for n > 0, with M(0) = 0
Solving the recurrence for M(n) by backward substitution:
M(n) = M(n-1) + 1 = [M(n-2) + 1] + 1 = … = M(0) + n = n
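The solution M(n) = n can be confirmed by instrumenting the recursion; a sketch (the multiplication counter is my own addition):

```python
def factorial(n, counter):
    """Recursive n!; counter[0] counts multiplications, the basic operation."""
    if n == 0:
        return 1
    counter[0] += 1  # one multiplication per recursive step
    return n * factorial(n - 1, counter)

counter = [0]
result = factorial(6, counter)
assert result == 720     # 6! = 720
assert counter[0] == 6   # M(6) = 6 multiplications, matching M(n) = n
```

Each level of the recursion contributes exactly one multiplication, which is the "+1" accumulated n times in the backward substitution.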
Example 3: Counting #bits
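Example 3 analyzes the number of bits in n's binary representation; in the standard treatment of this example the count satisfies the recurrence B(n) = B(⌊n/2⌋) + 1 for n > 1 with B(1) = 1, giving B(n) = ⌊log₂ n⌋ + 1. A sketch cross-checking the recurrence against the closed form:

```python
import math

def bit_count(n):
    """Number of bits in the binary representation of n (n >= 1)."""
    if n == 1:
        return 1
    return bit_count(n // 2) + 1  # one recursive call per halving of n

# The recursion halves n at every step, so it makes about log2(n)
# calls: logarithmic time. Cross-check against the closed form and
# Python's built-in bit_length.
for n in (1, 2, 7, 8, 1000):
    assert bit_count(n) == math.floor(math.log2(n)) + 1
    assert bit_count(n) == n.bit_length()
```

The logarithmic call count is what places this example in the log n efficiency class above.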
Fibonacci numbers
The recurrence F(n) = F(n-1) + F(n-2) with F(0) = 0, F(1) = 1 is a
homogeneous linear recurrence with constant coefficients.
Characteristic equation: r^2 - r - 1 = 0, with roots
r = (1 ± √5)/2, so F(n) = (φ^n - ψ^n)/√5 where φ = (1 + √5)/2 ≈ 1.618
and ψ = (1 - √5)/2.
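The closed form built from the characteristic roots (Binet's formula) can be checked against the recurrence directly; a short sketch:

```python
import math

def fib(n):
    """Fibonacci via the recurrence F(n) = F(n-1) + F(n-2), iteratively."""
    a, b = 0, 1  # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b
    return a

def binet(n):
    """Closed form from the roots of the characteristic equation r^2 - r - 1 = 0."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2   # dominant root, ~1.618
    psi = (1 - sqrt5) / 2   # second root, |psi| < 1
    return round((phi**n - psi**n) / sqrt5)

# Both definitions agree, confirming the characteristic-equation solution.
for n in range(20):
    assert fib(n) == binet(n)
```

Since |ψ| < 1, the ψ^n term vanishes as n grows, so F(n) grows like φ^n/√5, which is exactly the exponential growth rate seen in the naive recursive implementation.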