Fundamentals of the Analysis of Algorithm Efficiency

The document discusses the fundamentals of analyzing algorithm efficiency, focusing on time and space complexity. It emphasizes the importance of measuring running time based on input size and introduces concepts like Big-O, Big-Omega, and Theta notation for describing algorithm performance. The analysis framework prioritizes time efficiency, particularly for large inputs, to determine the worst-case and best-case scenarios of algorithms.


Fundamentals of the Analysis of Algorithm Efficiency
Algorithms
The Analysis Framework
• There are two kinds of efficiency: time efficiency and space efficiency.

• Time efficiency, also called time complexity, indicates how fast the algorithm in question runs.

• Space efficiency, also called space complexity, refers to the amount of memory units required by the algorithm in addition to the space needed for its input and output.
The Analysis Framework
• In the early days of electronic computing, both resources—time and
space—were at a premium.

• Nowadays the amount of extra space required by an algorithm is usually not of as much concern.

• Therefore, we will primarily concentrate on time efficiency.


The Analysis Framework
Measuring an Input’s Size
• Almost all algorithms run longer on larger inputs.

• For example, it takes longer to sort larger arrays, multiply larger matrices, and so on.

• Therefore, it is logical to investigate an algorithm’s efficiency as a function of some parameter indicating the algorithm’s input size.
The Analysis Framework
Measuring an Input’s Size
• Some algorithms require more than one parameter to indicate the
size of their inputs.

• E.g., the number of vertices and the number of edges for algorithms
on graphs represented by their adjacency lists.
The Analysis Framework
Units for Measuring Running Time
• We can simply use some standard unit of time measurement—a
second, or millisecond, and so on.

• However, there are obvious drawbacks to such an approach, like dependence on the speed of a particular computer, the compiler used in generating the machine code, etc.

• We should select a metric that does not depend on these extra factors.
The Analysis Framework
Units for Measuring Running Time
• We identify the most important operation of the algorithm, called the
basic operation, the operation contributing the most to the total
running time, and compute the number of times the basic operation
is executed.

• Most of the computers today work on the RAM model of computation.
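As an illustrative sketch (not from the slides), the following Python function makes the idea of counting the basic operation explicit for the problem of finding the largest element in a list; here the basic operation is the comparison, and the function name is hypothetical:

```python
# Sketch: count executions of the basic operation (the comparison)
# while finding the maximum element of a list.
def find_max_with_count(a):
    """Return (maximum, number of comparisons performed)."""
    comparisons = 0
    maximum = a[0]
    for x in a[1:]:
        comparisons += 1          # one basic operation per iteration
        if x > maximum:
            maximum = x
    return maximum, comparisons

print(find_max_with_count([3, 1, 4, 1, 5, 9, 2, 6]))  # (9, 7)
```

For a list of n elements the comparison executes exactly n − 1 times, regardless of the machine running the code, which is exactly the machine-independent metric the slides call for.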
The Analysis Framework
Units for Measuring Running Time
• In a computer working under the RAM model:

• Each simple/basic operation (+, *, −, =, comparison, if, call) takes exactly one time step.

• Loops and subroutines are not considered simple operations. Instead, they are the composition of many single-step operations.
The Analysis Framework
Units for Measuring Running Time
• It makes no sense for sort to be a single-step operation, since sorting
1,000,000 items will certainly take much longer than sorting 10 items.

• The time it takes to run through a loop or execute a subprogram depends upon the number of loop iterations or the specific nature of the subprogram.
The Analysis Framework
Units for Measuring Running Time
• Let c_op be the execution time of an algorithm’s basic operation on a particular computer, and let C(n) be the number of times this operation needs to be executed for this algorithm.

• Then the running time T(n) of a program implementing this algorithm on that computer can be estimated by the formula

T(n) ≈ c_op · C(n)
The Analysis Framework
Units for Measuring Running Time
• Assume C(n) = n(n − 1)/2.
• Then, how much longer will the algorithm run if we double its input size?

T(2n)/T(n) ≈ (c_op · C(2n)) / (c_op · C(n)) = (2n(2n − 1)/2) / (n(n − 1)/2) ≈ 4 for large n
The Analysis Framework
Units for Measuring Running Time
• Note that we were able to answer the last question without actually knowing the value of c_op: it was neatly cancelled out in the ratio.
• Also note that 1/2, the multiplicative constant in the formula for the count C(n), was also cancelled out.
• It is for these reasons that the efficiency analysis framework ignores multiplicative constants and concentrates on the count’s order of growth.
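This cancellation can be checked numerically; a minimal sketch, assuming the count C(n) = n(n − 1)/2 from the example above:

```python
# Sketch: with C(n) = n(n-1)/2, the ratio T(2n)/T(n) = C(2n)/C(n)
# approaches 4 as n grows; the constant c_op cancels and never appears.
def C(n):
    return n * (n - 1) / 2

for n in (10, 100, 1000):
    print(n, C(2 * n) / C(n))   # ratio tends to 4
```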
The Analysis Framework
Orders of Growth
• A difference in running times on small inputs is not what really
distinguishes efficient algorithms from inefficient ones.

• Instead, running time on very large inputs shows the efficiency of an algorithm.

• For large values of n, it is the function’s order of growth that matters.


The Analysis Framework
Orders of Growth
[Table of values of the common growth functions log₂ n, n, n log₂ n, n², n³, 2ⁿ, and n! for several input sizes — not reproduced.]
The Analysis Framework
Orders of Growth
• The function growing the slowest among the ones in the table is the logarithmic function log n.

• On the other end of the spectrum are the exponential function 2ⁿ and the factorial function n!. Both these functions grow so fast that their values become astronomically large even for rather small values of n.
The Analysis Framework
Orders of Growth
• There is a tremendous difference between the orders of growth of the functions 2ⁿ and n!, yet both are often referred to as “exponential-growth functions” (or simply “exponential”).
The Analysis Framework
The Analysis Framework
• Commonly used orders of growth, from slowest to fastest: 1, log n, n, n log n, n², n³, 2ⁿ, n!
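To make these orders of growth concrete, here is a small Python sketch (not part of the slides) that tabulates each function for a few input sizes, showing how quickly 2ⁿ and n! overtake the polynomial functions:

```python
import math

# Sketch: one row of the classic orders-of-growth table for a given n.
def growth_row(n):
    return (round(math.log2(n), 1),       # log n
            n,                            # n
            round(n * math.log2(n), 1),   # n log n
            n ** 2,                       # n^2
            n ** 3,                       # n^3
            2 ** n,                       # 2^n
            math.factorial(n))            # n!

for n in (4, 8, 16):
    print(n, growth_row(n))
```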
The Analysis Framework
Worst-case
• The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size.
• The worst-case efficiency guarantees that for any instance of size n, the running time will not exceed this worst-case bound.
• E.g., value not found in linear search
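A minimal sketch of sequential search illustrating this worst case (the comparison counter is an illustration, not part of the slides): a key that is absent forces a comparison against every element, i.e., n comparisons.

```python
# Sketch: linear search instrumented to count key comparisons.
def linear_search(a, key):
    """Return (index of key or -1, number of comparisons)."""
    comparisons = 0
    for i, x in enumerate(a):
        comparisons += 1          # basic operation: x == key
        if x == key:
            return i, comparisons
    return -1, comparisons

print(linear_search([5, 3, 8, 1], 8))   # found: (2, 3)
print(linear_search([5, 3, 8, 1], 7))   # worst case, not found: (-1, 4)
```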
The Analysis Framework
Best-case
• The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the fastest among all possible inputs of that size.
Big-O Notation
• Big-O represents the upper bound on the running complexity of an algorithm.
• It is mathematically defined as follows:
• Let f(n) and g(n) be two functions of n, where n denotes the input size. Then f(n) = O(g(n))
• if there exist positive constants c and n₀ such that

f(n) ≤ c · g(n) for all n ≥ n₀
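The definition can be explored empirically with a short Python sketch (an illustration, not a proof — it only checks a finite range of n, and the function name and constants are assumptions):

```python
# Sketch: finite-range check of f(n) <= c*g(n) for all n >= n0.
# This can suggest, but never prove, that f is O(g).
def seems_big_o(f, g, c, n0, upto=10_000):
    return all(f(n) <= c * g(n) for n in range(n0, upto))

# f(n) = 2n + 3 looks like O(n) with, e.g., c = 5 and n0 = 1:
print(seems_big_o(lambda n: 2 * n + 3, lambda n: n, c=5, n0=1))   # True

# n^2 is not O(n): no constant works once n gets large enough.
print(seems_big_o(lambda n: n * n, lambda n: n, c=100, n0=1))     # False
```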
Big-O Notation
• Suppose we have two functions f(n) and g(n).

• Now, for f(n) to be Big-O of g(n), there should be some constant c such that f(n) ≤ c · g(n) for all n ≥ n₀.
Big-O Notation
• Suppose f(n) = 2n + 3 and g(n) = n, with c = 5. Then the condition 2n + 3 ≤ 5n holds true.

• It means that c · g(n) will grow faster than (or at least as fast as) f(n) when c = 5 and n ≥ 1.
• Try putting n = 1, 2, 3, . . .
Big-O Notation
• Now suppose g(n) = n² and c = 1. Here, the condition 2n + 3 ≤ n² is not satisfied for small n.
• It indicates that before some value n₀, f(n) can be greater than c · g(n), but after a certain point (value n₀), c · g(n) will grow more (faster).
Big-O Notation
• Example: Linear vs Binary search.

• If the value to be searched is at the first index, then it would be found on the first attempt in linear search, but not in binary search.
• However, for other values (or in
general) binary search grows
slower (takes less time).
Big-O Notation
• Now suppose g(n) = n and c = 5 again, and try to put n₀ = 1.

• Here, the condition 2n + 3 ≤ 5n is true.

• It indicates that if n ≥ 1, then f(n) can be ≤ c · g(n) for c = 5.
Big-O Notation
• So, here, we would say f(n) is Big-O of g(n) = n, and n₀ was 1.
• In other words, if f(n) = 2n + 3, then its upper bound (or Big-O) will be O(n).
• We could also say that the upper bound is O(n²), because f(n) will be ≤ n² for all sufficiently large n.
Big-O Notation
• We could also say that the upper bound is O(n²), because f(n) will be ≤ n² for all sufficiently large n.

• f(n) belongs to the class O(n), so all the classes of higher order (O(n²), O(n³), and so on) can be its upper bound.
• That would also be true, but we will choose the nearest one (or the least upper bound) as the Big-O.
Big-Omega Notation
• Big-Omega represents the lower bound on the running complexity of an algorithm.
• It is mathematically defined as follows:
• Let f(n) and g(n) be two functions of n, where n denotes the input size. Then f(n) = Ω(g(n))
• if there exist positive constants c and n₀ such that

f(n) ≥ c · g(n) for all n ≥ n₀
Big-Omega Notation
• Let’s take the same function f(n) = 2n + 3 as an example.

• Now, suppose g(n) = n and c = 1. Then the condition 2n + 3 ≥ 1 · n holds true for all values of n ≥ 1.

• Try n = 1, 2, 3, . . .
Big-Omega Notation

• We could also say that log n is a lower bound of f(n), but we will select the closest one (or the greatest lower bound), which is n in this case.
• So we will say f(n) = Ω(n).
Theta Notation
• Theta notation gives the tight bound on the running complexity of an algorithm.
• Let f(n) and g(n) be two functions of n, where n denotes the input size. Then f(n) = Θ(g(n))
• if there exist positive constants c₁, c₂, and n₀ such that

c₁ · g(n) ≤ f(n) ≤ c₂ · g(n) for all n ≥ n₀


Theta Notation
• Let’s take the same function f(n) = 2n + 3 as an example.

• Now, suppose g(n) = n, c₁ = 1, and c₂ = 5. Then 1 · n ≤ 2n + 3 ≤ 5n for all n ≥ 1.


Theta Notation
• So, we would say that the tight bound of f(n) is n, or f(n) = Θ(n).

• Or in other words, f(n) grows neither slower nor faster than n.
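The Theta definition can be explored with the same kind of finite-range check as Big-O (again an illustration, not a proof; the function name and constants are assumptions):

```python
# Sketch: finite-range check of c1*g(n) <= f(n) <= c2*g(n) for n >= n0.
# A finite check can suggest, but never prove, that f is Theta(g).
def seems_theta(f, g, c1, c2, n0, upto=10_000):
    return all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, upto))

# f(n) = 2n + 3 looks like Theta(n) with c1 = 1, c2 = 5, n0 = 1:
print(seems_theta(lambda n: 2 * n + 3, lambda n: n, c1=1, c2=5, n0=1))  # True

# 2^n is not Theta(n): it escapes every upper constant c2.
print(seems_theta(lambda n: 2 ** n, lambda n: n,
                  c1=1, c2=1000, n0=1, upto=50))                        # False
```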


Big-O
• The Big O notation defines an upper bound of an algorithm.

• For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time in the worst case.

• We can safely say that the time complexity of Insertion Sort is O(n^2).
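A sketch of insertion sort instrumented to count its basic operation (the element comparison) makes the best-case/worst-case gap visible; the instrumentation is illustrative, not part of the slides:

```python
# Sketch: insertion sort counting element comparisons.
# Sorted input gives the linear best case (n - 1 comparisons);
# reverse-sorted input gives the quadratic worst case (n(n-1)/2).
def insertion_sort_comparisons(a):
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # basic operation: a[j] vs key
            if a[j] > key:
                a[j + 1] = a[j]       # shift the larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comparisons

n = 8
print(insertion_sort_comparisons(range(n))[1])         # best case: 7
print(insertion_sort_comparisons(range(n, 0, -1))[1])  # worst case: 28
```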
Big-O
• The general step-wise procedure for Big-O runtime analysis is as follows:
1. Figure out what the input is and what n represents.
2. Express the maximum number of operations the algorithm performs in terms of n.
3. Eliminate all terms except the highest-order terms.
4. Remove all the constant factors.
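The four steps can be illustrated on a hypothetical doubly nested loop whose exact operation count is C(n) = 3n² + 2n + 5; steps 3 and 4 then reduce this to O(n²). The per-step costs below are assumed purely for illustration:

```python
# Sketch: operation count of a doubly nested loop, C(n) = 3n^2 + 2n + 5.
# Dropping the lower-order terms (2n + 5) and the constant factor (3)
# leaves the Big-O class O(n^2).
def count_ops(n):
    ops = 5                      # assumed fixed setup cost
    for i in range(n):
        ops += 2                 # assumed per-iteration cost, outer loop
        for j in range(n):
            ops += 3             # assumed per-iteration cost, inner loop
    return ops

print(count_ops(10))   # 3*100 + 2*10 + 5 = 325
```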
