
Unit 19: Data Structures and Algorithms

LO2: Specify abstract data types and algorithms in a formal notation.

LO4: Assess the effectiveness of data structures and algorithms.


Analysis of Algorithms
The Analysis Framework
• Time efficiency (time complexity)
• indicates how fast the algorithm runs
• is measured by counting the number of times the algorithm’s basic operation is executed

• Space efficiency (space complexity)
• deals with the extra space it requires
• is measured by counting the number of extra memory units consumed by the algorithm
Theoretical analysis of time efficiency
• Basic operation
• the operation that contributes the most to the running time
• it is the most time-consuming operation in the algorithm’s innermost loop
• For example, in most sorting algorithms the basic operation is a key comparison.

• Let c_op be the execution time of an algorithm’s basic operation on a particular computer, and
• let C(n) be the number of times this operation needs to be executed for this algorithm.

• Running time: T(n) ≈ c_op · C(n)
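For instance (a standard illustration, assuming hypothetically that the basic operation count is C(n) = ½ n(n − 1)), this formula lets us estimate how much longer the algorithm will run when the input size is doubled, even without knowing c_op:

\frac{T(2n)}{T(n)} \approx \frac{c_{op}\,C(2n)}{c_{op}\,C(n)}
= \frac{\tfrac{1}{2}(2n)(2n-1)}{\tfrac{1}{2}\,n(n-1)} \approx 4 \quad \text{for large } n.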


Empirical analysis of time efficiency
• Select a specific sample of inputs

• Use a physical unit of time (e.g., milliseconds), or count the actual number of the basic operation’s executions

• Analyze the empirical data
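A minimal Java sketch of such an empirical measurement (the input sizes, the random inputs, and the use of Arrays.sort as the routine being timed are all placeholder choices, not from the slides):

// Empirical timing: run the operation on a sample of inputs and record wall-clock time.
import java.util.Arrays;
import java.util.Random;

public class EmpiricalTiming {
    public static void main(String[] args) {
        Random rnd = new Random(42);                        // fixed seed for repeatability
        for (int n = 1_000; n <= 64_000; n *= 2) {          // selected sample of input sizes
            int[] a = rnd.ints(n).toArray();                // random input of size n
            long start = System.nanoTime();
            Arrays.sort(a);                                 // the operation being measured (placeholder)
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("n = " + n + "  time = " + elapsedMs + " ms");
        }
    }
}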


Input size and basic operation examples

Problem | Input size measure | Basic operation
Searching for a key in a list of n items | Number of list’s items, i.e. n | Key comparison
Multiplication of two matrices | Matrix dimensions or total number of elements | Multiplication of two numbers
Typical graph problem | #vertices and/or edges | Visiting a vertex or traversing an edge
Worst-Case, Best-Case, and Average-Case Efficiencies

• It is reasonable to measure an algorithm’s efficiency as a function of a parameter indicating the size of the algorithm’s input.

• For many algorithms, the running time depends not only on the input size but also on the specifics of a particular input.

• For some algorithms, efficiency depends on the form of the input:

• Worst case: Cworst(n) – maximum over inputs of size n

• Best case: Cbest(n) – minimum over inputs of size n

• Average case: Cavg(n) – “average” over inputs of size n


Worst-Case, Best-Case, and Average-Case Efficiencies

• The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input of size n for which the algorithm runs the longest among all possible inputs of that size.

• The best-case efficiency of an algorithm is its efficiency for the best-case input
of size n, which is an input of size n for which the algorithm runs the fastest
among all possible inputs of that size.

• Neither the worst-case analysis nor its best-case counterpart yields the necessary information about an algorithm’s behavior on a “typical” or “random” input. This is the information that the average-case efficiency seeks to provide.
Worst-Case, Best-Case, and Average-Case Efficiencies

The counts below are for searching for a key in a list of n items by scanning it sequentially (a Java sketch of this search follows the list).

• Best performance: the item we search for is in the first position; the search examines one position.

• Worst performance: the item is not in the array or is in the last position; the search examines all n positions.

• Average performance: the search examines about half of the positions.

• Best case: Cbest(n) = 1

• Worst case: Cworst(n) = n

• Average case: Cavg(n) = (n+1)/2 (assuming the search key is present)
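A minimal Java sketch of this sequential search (the method name and signature are illustrative, not from the slides):

// Sequential search: scan the array from the front; return the index of key, or -1 if absent.
// Best case: key at a[0] (1 comparison). Worst case: key absent or at a[n-1] (n comparisons).
public static int sequentialSearch(int[] a, int key) {
    for (int i = 0; i < a.length; i++) {
        if (a[i] == key) {          // basic operation: key comparison
            return i;
        }
    }
    return -1;
}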


Recapitulation of the Analysis Framework
• Both time and space efficiencies are measured as functions of the algorithm’s
input size.

• Time efficiency is measured by counting the number of times the algorithm’s basic operation is executed. Space efficiency is measured by counting the number of extra memory units consumed by the algorithm.

• The efficiencies of some algorithms may differ significantly for inputs of the same size. For such algorithms, we need to distinguish between the worst-case, average-case, and best-case efficiencies.
Asymptotic Analysis
• Asymptotic analysis refers to computing the running time of an operation in mathematical units of computation rather than in physical time.

• In asymptotic analysis, we evaluate the performance of an algorithm in terms of the input size.

• We calculate how the time (or space) taken by an algorithm increases with the input size.
Asymptotic Notation

• Asymptotic notation is a way to describe the running time or space complexity of an algorithm based on the input size.

• It is commonly used in complexity analysis to describe how an algorithm performs as the size of the input grows.

• The three most commonly used notations are Big O, Big Omega, and Big Theta.
Asymptotic Notations and Basic Efficiency Classes
A way of comparing functions that ignores constant factors and small input sizes.

• O(g(n)): class of functions t(n) that grow no faster than g(n)

• Ω(g(n)): class of functions t(n) that grow at least as fast as g(n)

• Θ(g(n)): class of functions t(n) that grow at same rate as g(n)

* g(n) will be some simple function to compare the count with


Big O-notation : Asymptotic Upper Bound
• A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded
above by some constant multiple of g(n) for all large n, i.e., if there exist some
positive constant c and some nonnegative integer n0 such that

t (n) ≤ cg(n) for all n ≥ n0.

• O(g(n)) is the set of all functions with a lower or same order of growth as
g(n).
Big O-notation : Asymptotic Upper Bound

Figure: Big-oh notation: t(n) ∈ O(g(n))
Big O-notation Example
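The worked example on this slide is an image; a standard proof of the same kind (the function 100n + 5 is a common textbook choice, not taken from the slide) shows that 100n + 5 ∈ O(n²):

100n + 5 \le 100n + n = 101n \le 101 n^2 \quad \text{for all } n \ge 5,

so the definition is satisfied with c = 101 and n0 = 5.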
Ω-notation : Asymptotic Lower Bound
• A function t (n) is said to be in Ω(g(n)), denoted t (n) ∈ Ω(g(n)), if t (n) is bounded
below by some positive constant multiple of g(n) for all large n, i.e., if there exist some
positive constant c and some nonnegative integer n0 such that

t (n) ≥ cg(n) for all n ≥ n0.

• Ω(g(n)) stands for the set of all functions with a higher or same order of growth as g(n).



Ω-notation : Asymptotic Lower Bound

Figure: Big-omega notation: t(n) ∈ Ω(g(n))
Big Ω-notation Example
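This example slide is also an image; a standard proof of the same kind (not taken from the slide) shows that n³ ∈ Ω(n²):

n^3 \ge n^2 \quad \text{for all } n \ge 0,

so the definition is satisfied with c = 1 and n0 = 0.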
Θ Notation : Asymptotic Tight Bound
• A function t (n) is said to be in Θ(g(n)), denoted t (n) ∈ Θ(g(n)), if t (n) is bounded both
above and below by some positive constant multiples of g(n) for all large n, i.e., if
there exist some positive constants c1 and c2 and some nonnegative integer n0 such
that

c2g(n) ≤ t (n) ≤ c1g(n) for all n ≥ n0.

• Θ(g(n)) is the set of all functions that have the same order of growth as g(n).

• Thus, every quadratic function an² + bn + c with a > 0 is in Θ(n²).


Θ Notation : Asymptotic Tight Bound

Figure: Big-theta notation: t(n) ∈ Θ(g(n))
Big Θ Notation Example
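Again the slide’s example is an image; a standard proof of the same kind (not taken from the slide) shows that ½ n(n − 1) ∈ Θ(n²), using the constants c1, c2, and n0 from the definition above:

\tfrac{1}{2}n(n-1) = \tfrac{1}{2}n^2 - \tfrac{1}{2}n \le \tfrac{1}{2}n^2 \quad \text{for all } n \ge 0,
\qquad
\tfrac{1}{2}n(n-1) \ge \tfrac{1}{2}n \cdot \tfrac{n}{2} = \tfrac{1}{4}n^2 \quad \text{for all } n \ge 2,

so the definition is satisfied with c1 = ½, c2 = ¼, and n0 = 2.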
Basic asymptotic efficiency classes
Order of Growth
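The table of basic efficiency classes is shown on the slides as an image; as a reminder (following the standard classification, e.g. in Levitin), the classes in increasing order of growth are constant, logarithmic, linear, linearithmic, quadratic, cubic, exponential, and factorial:

1 \;\prec\; \log n \;\prec\; n \;\prec\; n\log n \;\prec\; n^2 \;\prec\; n^3 \;\prec\; 2^n \;\prec\; n!

where f ≺ g means that f grows asymptotically slower than g.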
Algorithm and Analysis of Bubble Sort

Time efficiency: the number of key comparisons is

C(n) = (n−1) + (n−2) + (n−3) + ... + 1 = (n−1+1)(n−1)/2 = n(n−1)/2

Time efficiency: Θ(n²)

(A Java sketch of the algorithm follows the complexity summary below.)
Algorithm and Analysis of Bubble Sort
• Best Case Time Complexity : T(n) = O(n)

• Worst Case Time Complexity : T(n) = O(n2)

• Average Case Time Complexity : T(n) = O(n2)

• Space Complexity : O(1)
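The pseudocode on these slides is an image and is not reproduced here; the following is a minimal Java sketch (not the slide’s exact pseudocode) of bubble sort with the early-exit check that makes the best case O(n), as listed above:

// Bubble sort: repeatedly compare adjacent elements and swap them if out of order.
// If a whole pass makes no swaps, the array is already sorted (best case: one pass).
public static void bubbleSort(int[] a) {
    for (int i = 0; i < a.length - 1; i++) {
        boolean swapped = false;
        for (int j = 0; j < a.length - 1 - i; j++) {
            if (a[j] > a[j + 1]) {        // basic operation: comparison of adjacent keys
                int tmp = a[j];           // swap a[j] and a[j + 1]
                a[j] = a[j + 1];
                a[j + 1] = tmp;
                swapped = true;
            }
        }
        if (!swapped) break;              // early exit: no swaps means the array is sorted
    }
}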


Algorithm and Analysis of Selection Sort

Time efficiency: Θ(n²)


Algorithm and Analysis of Selection Sort
• Best Case Time Complexity : C(n) = O(n2)

• Worst Case Time Complexity : C(n) = O(n2)

• Average Case Time Complexity : C(n) = O(n2)

• Space Complexity : O(1)
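The slide’s pseudocode is likewise an image; a minimal Java sketch of selection sort (its n(n−1)/2 key comparisons do not depend on the input order, which is why all three cases above are quadratic):

// Selection sort: on pass i, find the smallest element of a[i..n-1] and swap it into position i.
public static void selectionSort(int[] a) {
    for (int i = 0; i < a.length - 1; i++) {
        int min = i;
        for (int j = i + 1; j < a.length; j++) {
            if (a[j] < a[min]) {          // basic operation: key comparison
                min = j;
            }
        }
        int tmp = a[i];                   // swap the found minimum into position i
        a[i] = a[min];
        a[min] = tmp;
    }
}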


Insertion Sort Example
• The action of the algorithm on the list 89, 45, 68, 90, 29, 34, 17 is illustrated.
Algorithm of Insertion Sort
• Consider an application of the decrease-by-one technique to sorting an
array A[0..n − 1].
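A minimal Java sketch of this decrease-by-one idea (not the slide’s exact pseudocode): assume A[0..i − 1] is already sorted and insert A[i] into its proper place.

// Insertion sort: grow a sorted prefix one element at a time.
public static void insertionSort(int[] a) {
    for (int i = 1; i < a.length; i++) {
        int v = a[i];                     // element to insert into the sorted prefix a[0..i-1]
        int j = i - 1;
        while (j >= 0 && a[j] > v) {      // basic operation: key comparison
            a[j + 1] = a[j];              // shift larger elements one position right
            j--;
        }
        a[j + 1] = v;
    }
}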
Insertion Sort Analysis
• Best Case (if the array is already sorted): n − 1 key comparisons, so O(n)

• Worst Case (if the array is reverse-sorted): n(n − 1)/2 key comparisons, so O(n²)

• Average Case: about n²/4 key comparisons, so O(n²)
Algorithm and Analysis of Binary Search
• Best Case: the item is in the middle; one check, O(1).

• Worst Case: the item is found only in the last possible division (or is absent); the maximal number of times an array of length N can be divided in half is log2 N, so O(log N).

• Average Case: the item is found after performing about half of the possible number of divisions, ½ log2 N, which is still O(log N). (A Java sketch of the algorithm follows below.)
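A minimal iterative Java sketch of the binary search analyzed above (the array must already be sorted; this is not the slide’s exact pseudocode):

// Binary search on a sorted array: returns the index of key, or -1 if it is absent.
// Each iteration halves the remaining range, so at most about log2 N iterations are needed.
public static int binarySearch(int[] a, int key) {
    int lo = 0, hi = a.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;     // midpoint, written to avoid integer overflow
        if (a[mid] == key) return mid;    // found: best case is the very first midpoint
        if (a[mid] < key) lo = mid + 1;   // key can only be in the right half
        else hi = mid - 1;                // key can only be in the left half
    }
    return -1;
}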
References
1. Robert Lafore, Data Structures and Algorithms in Java, 2nd Edition.

2. Anany Levitin, Introduction to the Design and Analysis of Algorithms, 3rd Edition.

3. https://www.tutorialspoint.com/data_structures_algorithms/asymptotic_analysis.html
