
Analysis of Algorithms

Algorithm

An algorithm is a finite set of instructions which, if followed, accomplishes a
particular task. Every algorithm must satisfy the following criteria:
1) Input: There are zero or more quantities which are externally supplied.
2) Output: At least one quantity is produced.
3) Definiteness: Each instruction must be clear and unambiguous.
4) Finiteness: If we trace out the instructions of an algorithm, then for
all cases the algorithm terminates after a finite number of steps.
5) Effectiveness: Every instruction must be sufficiently basic that it can, in
principle, be carried out by a person using only pencil and paper.
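
As a small illustration (a sketch, not part of the original slides; the function name find_max is chosen here for convenience), the following C code finds the maximum of n numbers and satisfies all five criteria: it takes externally supplied input, produces at least one output, every instruction is unambiguous, it terminates after a finite number of steps, and each step is basic.

#include <stdio.h>

/* Returns the largest element of a[0..n-1]; assumes n >= 1. */
int find_max(const int a[], int n)
{
    int max = a[0];                 /* start from the first supplied quantity */
    for (int i = 1; i < n; i++)     /* examine each remaining element once    */
    {
        if (a[i] > max)
            max = a[i];
    }
    return max;                     /* at least one quantity is produced      */
}

int main(void)
{
    int data[] = { 3, 7, 2, 9, 4 };
    printf("%d\n", find_max(data, 5));   /* prints 9 */
    return 0;
}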
Analysis of Algorithms

Algorithm Analysis

Efficiency of an algorithm can be analyzed at two different stages, before implementation and
after implementation:

A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency is measured by
assuming that all other factors, for example processor speed, are constant and have no effect
on the implementation.

A Posteriori Analysis − This is an empirical analysis of an algorithm. The selected algorithm
is implemented in a programming language and executed on the target machine. In this
analysis, actual statistics such as running time and space required are collected.
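
As a sketch of what a posteriori measurement can look like (the program below is an assumption for illustration, not from the slides; it times a simple sum-of-1-to-n loop), the code is run on the target machine and the actual CPU time is recorded:

#include <stdio.h>
#include <time.h>

/* Placeholder algorithm whose running time is being measured. */
long sum_to(long n)
{
    long s = 0;
    for (long i = 1; i <= n; i++)
        s += i;
    return s;
}

int main(void)
{
    long n = 100000000L;
    clock_t start = clock();                 /* CPU time before the run */
    long result = sum_to(n);
    clock_t end = clock();                   /* CPU time after the run  */

    double seconds = (double)(end - start) / CLOCKS_PER_SEC;
    printf("sum = %ld, time = %.3f s\n", result, seconds);
    return 0;
}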
Analysis of Algorithms

Algorithm Complexity

Time Complexity

Time complexity of an algorithm represents the amount of time required by the algorithm
to run to completion. Time requirements can be defined as a numerical function T(n),
where T(n) can be measured as the number of steps, provided each step consumes constant time.

Space Complexity

Space complexity of an algorithm represents the amount of memory space required by the
algorithm in its life cycle.
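
As a sketch of the difference (the function names total and prefix_sums are hypothetical, not from the slides), the first routine below keeps only a single running accumulator, so its extra space is constant, while the second allocates an array of n partial sums, so its extra space grows linearly with n:

#include <stdlib.h>

/* O(1) extra space: only one accumulator is kept. */
long total(const int a[], int n)
{
    long s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* O(n) extra space: an array of n partial sums is allocated.
   The caller is responsible for freeing the returned array. */
long *prefix_sums(const int a[], int n)
{
    long *p = malloc((size_t)n * sizeof *p);
    if (p == NULL)
        return NULL;

    long s = 0;
    for (int i = 0; i < n; i++)
    {
        s += a[i];
        p[i] = s;                   /* store every partial sum */
    }
    return p;
}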
Analysis of Algorithms
Algorithm Complexity

Time Complexity

Usually, the time required by an algorithm falls under three types −

Best Case − Minimum time required for program execution.

Average Case − Average time required for program execution.

Worst Case − Maximum time required for program execution.
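
Linear search is the usual illustration of the three cases (a sketch, not part of the original slides): the key may be found immediately, only at the very end, or somewhere in between on average.

/* Returns the index of key in a[0..n-1], or -1 if absent.
   Best case:    key == a[0], 1 comparison.
   Worst case:   key is last or missing, n comparisons.
   Average case: key present, every position equally likely,
                 about (n + 1) / 2 comparisons. */
int linear_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
    {
        if (a[i] == key)
            return i;
    }
    return -1;
}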


Analysis of Algorithms

Time Complexity

Worst Case Complexity


Let Dn be the set of inputs of size n for the problem under consideration, and let I be
an element of Dn. Let t(I) be the number of basic operations performed by the
algorithm on input I. Then the worst case complexity is

W(n) = max { t(I) | I ∈ Dn }

Average Case Complexity


Let Pr(I) be the probability that input I occurs, and let t(I) be the number of basic
operations performed by the algorithm on input I. Then the average case complexity is

A(n) = ∑ Pr(I)·t(I), where the sum is taken over all I ∈ Dn
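
For instance, applying this to linear search on n elements, when the key is present and each of the n positions is equally likely (Pr(I) = 1/n and t(I) = i when the key is at position i), the average case complexity is A(n) = ∑ i/n for i = 1 to n, which equals (n + 1)/2.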
Analysis of Algorithms

Time Complexity

Best Case Complexity


Let Dn be the set of inputs of size n for the problem under consideration, and let I be
an element of Dn. Let t(I) be the number of basic operations performed by the
algorithm on input I. Then the best case complexity is

B(n) = min { t(I) | I ∈ Dn }
Analysis of Algorithms
Frequency count

• Frequency count: the number of times each statement is executed in the program.

Consider the code:

#include <stdio.h>

void sum(int n)
{
    int sum = 0;
    for (int i = 1; i <= n; i++)
    {
        sum = sum + i;
    }
    printf("%d\n", sum);
}
Analysis of Algorithms

Frequency count

void sum(int n)
{
    int sum = 0;                     /* 1 time                     */
    for (int i = 1; i <= n; i++)     /* n+1 times (the loop test)  */
    {
        sum = sum + i;               /* n times                    */
    }
    printf("%d\n", sum);             /* 1 time                     */
}

• Total frequency count: 2n + 3
• Time complexity: O(n)
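
Applying the same counting to a doubly nested loop (the pair_count function below is a hypothetical sketch, not from the slides) shows how a frequency count leads to a quadratic bound:

#include <stdio.h>

void pair_count(int n)
{
    int count = 0;                        /* 1 time        */
    for (int i = 1; i <= n; i++)          /* n+1 times     */
    {
        for (int j = 1; j <= n; j++)      /* n*(n+1) times */
        {
            count = count + 1;            /* n*n times     */
        }
    }
    printf("%d\n", count);                /* 1 time        */
}

/* Total: 2n^2 + 2n + 3, which is O(n^2). */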
Analysis of Algorithms

Asymptotic Notations

O - Big Oh notation
Ω - Big omega notation
Θ - Theta notation
o - Little oh notation
ω - Little omega notation
Analysis of Algorithms

 Big O notation

 Big O notation is used in Computer Science to describe the performance or complexity of an algorithm.

 Big O can be used to describe the execution time required by an algorithm.
Analysis of Algorithms
 Big O notation – formal definition
Given f, g : N → R+, we say that f(n) ∈ O(g(n)) if there exist constants c > 0 and
n0 ≥ 0 such that for every n ≥ n0, f(n) ≤ c·g(n).

That is, for sufficiently large n, the growth of f is bounded by g, up to a constant
factor c. Here f and g might be arbitrary functions, or the running time or space
complexity of a program or algorithm.
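
For example, f(n) = 3n + 2 ∈ O(n): choosing c = 4 and n0 = 2 gives 3n + 2 ≤ 4n for every n ≥ 2, so the definition is satisfied.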
Analysis of Algorithms
 Big O notation (Example)

 O(1) - O(1) describes an algorithm that will always execute in the same
time (or space) regardless of the size of the input data set.

 O(log N) - The iterative halving of the data set, as in binary search, produces a
running time that grows quickly for small inputs and flattens out as the size of the
data set increases. Doubling the size of the input data set has little effect on the
running time: after a single iteration of the algorithm the data set is halved, and is
therefore on a par with an input data set half the size. This makes algorithms like
binary search extremely efficient when dealing with large data sets.
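
As a concrete sketch (assumed code, not taken from the slides), the first function below runs in O(1) time regardless of n, while binary search halves the remaining range on every iteration and therefore runs in O(log n) time on a sorted array:

/* O(1): a single array access, independent of n. */
int first_element(const int a[])
{
    return a[0];
}

/* O(log n): binary search in a sorted array a[0..n-1].
   Returns the index of key, or -1 if key is not present. */
int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of lo + hi */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;               /* discard the lower half */
        else
            hi = mid - 1;               /* discard the upper half */
    }
    return -1;
}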
Analysis of Algorithms

 Big O notation

For example, when analyzing some algorithm, one might find that the time (or the
number of steps) it takes to complete a problem of size n is given by
T(n) = 4n² − 2n + 2.

If we ignore constants (which makes sense because they depend on the particular
hardware the program runs on) and the slower-growing terms, we can say
"T(n) grows on the order of n²" and write T(n) = O(n²).
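
Concretely, for every n ≥ 1 we have 4n² − 2n + 2 ≤ 4n² + 2 ≤ 6n², so the definition of Big O is satisfied with c = 6 and n0 = 1.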
Analysis of Algorithms

Big O notation
O(nᶜ) and O(cⁿ) are very different. The latter grows much, much faster,
no matter how big the constant c is.
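
For example, at n = 20 the polynomial n³ is 8,000, whereas the exponential 2ⁿ is already 1,048,576, and the gap widens rapidly as n grows.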
Big Oh Notation, Ο

The notation Ο(n) is the formal way to express the upper bound of an algorithm's
running time. It measures the worst case time complexity, or the longest amount of
time an algorithm can possibly take to complete.
Analysis of Algorithms

Big omega notation Ω(g(n))

A function f(n) is in Ω(g(n)) when f(n) ≥ c·g(n) for some constant c > 0
(and all sufficiently large n).

Example: g(n) = n², i.e. the bound Ω(n²). Functions f(n) that grow at least as fast
as n², such as n², 3n², n³ or 2ⁿ, satisfy f(n) ≥ c·n²; slower-growing functions such
as 3n or log n do not.
Analysis of Algorithms

 Big omega notation – formal definition

Given f, g : N → R+, we say that f(n) ∈ Ω(g(n)) if there exist constants c > 0 and
n0 ≥ 0 such that for every n ≥ n0, f(n) ≥ c·g(n).
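
For example, f(n) = 3n² − n ∈ Ω(n²): choosing c = 2 and n0 = 1 gives 3n² − n ≥ 2n² for every n ≥ 1.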
Big Omega Notation, Ω

The notation Ω(n) is the formal way to express the lower bound of an algorithm's
running time. It measures the best case time complexity, or the minimum amount of
time an algorithm can possibly take to complete.
Theta Notation, θ

The notation θ(n) is the formal way to express both the lower bound and the upper
bound of an algorithm's running time. Equivalently, f(n) ∈ θ(g(n)) exactly when
f(n) is in both O(g(n)) and Ω(g(n)).
Analysis of Algorithms

Little-o notation
f(n) = o(g(n)) means for all c > 0 there exists some k > 0 such that
0 ≤ f(n) < cg(n) for all n ≥ k.

Little- ω notation
f(n) = ω (g(n)) means that for any positive constant c, there exists a
constant k, such that 0 ≤ cg(n) < f(n) for all n ≥ k.
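
For example, 2n = o(n²): for any c > 0 we have 2n < c·n² for every n > 2/c. Similarly n² = ω(n), since c·n < n² whenever n > c. Note that n² is not o(n²), because for c = 1 the strict inequality n² < n² never holds.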
Analysis of Algorithms

Thank You
