1.1 Unit 01 Part 01 Introduction


Unit 01

Part 1
Algorithms and their characteristics
Specifying Algorithms
Fundamentals of Analysis of Algorithms
Algorithms
• An algorithm is a logical sequence of well-defined discrete steps that describes a complete solution to a given problem in a finite amount of time.
• An algorithm is a finite set of instructions which, if followed, accomplishes a particular task.
• Algorithms are used to manipulate data in various ways.
Algorithms Characteristics
• Input
• Zero or more quantities are externally supplied.
• Output
• At least one quantity is produced.
• Definiteness
• Each instruction must be clear and unambiguous.
• Finiteness
• If we trace out the instructions of an algorithm using paper and pen, then for all cases the algorithm must terminate after a finite number of steps.
• Effectiveness
• Every instruction must be basic enough to be carried out exactly; in addition, the algorithm should be cost-effective, its cost being judged in terms of the time and space required to execute it.
Program
• A program is the expression of an algorithm in a programming
language.
Study of Algorithms
• How to devise algorithms
• study various existing design techniques
• practice
• How to validate algorithms
• to show that the algorithm computes the correct answer for all possible legal inputs.
• How to analyze algorithms
• refers to the task of determining how much computing time and storage an algorithm requires.
• How to test a program (phases of testing)
• Debugging
• It is the process of executing programs on sample data sets to determine whether faulty results occur and, if so, to correct them.
• “Debugging can only point to the presence of errors, but not to their absence.” — E. Dijkstra
• Profiling
• Also called performance measurement.
• It is the process of executing a correct program on data sets and measuring the time and space it takes to compute the results.
Algorithm Specification
• Can be specified using natural language such as English
• Another possibility is Graphical representation like flowchart
• Using Pseudocode
Pseudocode Conventions
• Comments //
• Blocks { }
• Statements are delimited by ;
• Identifiers begin with a letter
• Data types are not declared, but are clear by context
• Assignment :=
• Boolean values true, false
• Logical operators and, or, not
• Relational operators < <= = != >= >
• Compound data types can be formed with records, as
node := record
{
    datatype_1 d1;
    ...
    datatype_n dn;
    node *link;
}
• Array [ ]
• Array elements a[i] or a[i,j]
• while (condition) do
  {
      (statement 1);
      ...
      (statement n);
  }
• for variable := value1 to value2 step step do
  {
      (statement 1);
      ...
      (statement n);
  }
• repeat
      (statement 1);
      ...
      (statement n);
  until (condition);
• if (condition) then (statement)
• if (condition) then (statement 1) else (statement 2)
• case
  {
      :(condition 1): (statement 1)
      ...
      :(condition n): (statement n)
  }
• input read
• output write
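As an illustration (not part of the original conventions), the repeat-until and case constructs above map naturally onto Python, which has neither: repeat-until becomes a `while True` loop with `break`, and case becomes an if/elif chain. The function names below are hypothetical examples.

```python
def classify(n):
    # case { :(condition 1): (statement 1) ... } rendered as an
    # if/elif chain, tested top to bottom like the pseudocode.
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

def read_until_zero(values):
    # repeat ... until (condition) rendered as while True + break.
    # The body runs at least once; assumes a 0 sentinel is present.
    total = 0
    it = iter(values)
    while True:
        v = next(it)
        total += v
        if v == 0:      # until (v = 0)
            break
    return total
```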
• Algorithm Name (Parameter List)
e.g.
 
Algorithm Max(A, n)
// A is an array of size n
{
    Result := A[1];
    for i := 2 to n do
        if A[i] > Result then Result := A[i];
    return Result;
}
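As a quick sketch (not part of the original slides), the Max pseudocode translates directly into Python, assuming Python's 0-based lists in place of the 1-based array A:

```python
def algorithm_max(a):
    """Return the largest element of a non-empty list a.

    Mirrors Algorithm Max(A, n): the running result starts at the
    first element and is replaced whenever a larger one is found.
    """
    result = a[0]           # Result := A[1]
    for x in a[1:]:         # for i := 2 to n do
        if x > result:      # if A[i] > Result then
            result = x      #     Result := A[i]
    return result
```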
Performance evaluation of algorithms
• Performance evaluation can be divided into:
• A priori estimate (performance analysis)
• A posteriori testing (performance measurement)
Complexity Definitions
• Space Complexity
• The space complexity of an algorithm is the amount of memory it needs to
run to completion.
• Time Complexity
• The time complexity of an algorithm is the amount of computer time it needs
to run to completion.
Computing Space Complexity
• The space needed by each of these algorithms is seen to be sum of
following components
• a fixed part, which includes the instruction space, space for simple variables
and fixed size component variables, space for constants and so on.
• a variable part, which includes the space needed by component variables whose size is dependent on the particular problem instance being solved, the space for referenced variables, and the recursion stack space.
• Thus
Space requirement S(P) = c + Sp(instance characteristics)
where c is a constant accounting for the fixed part and Sp is the variable part, which depends on the instance characteristics.
Computing Space Complexity : Example 1
Algorithm compute(a, b, c)
{
    return a - b*c/a + (a % c)^b;
}

• Here space is needed only for the variables a, b and c. Hence the space complexity of the above algorithm is independent of the instance characteristics (i.e. Sp(instance characteristics) = 0).
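One point worth noting when translating this pseudocode: ^ here denotes exponentiation, whereas in C-family languages it is bitwise XOR. A minimal Python rendering (where exponentiation is written **) might look like:

```python
def compute(a, b, c):
    # a - b*c/a + (a % c)^b, reading ^ as exponentiation (** in Python).
    # Uses a fixed number of variables and operations, so both its
    # space and its time are independent of any input size.
    return a - b * c / a + (a % c) ** b
```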
Computing Space Complexity : Example 2
Algorithm SumOfArray(a, n)
{
    s := 0.0;
    for i := 1 to n do
        s := s + a[i];
    return s;
}
• The space complexity of the above algorithm is
S_SumOfArray(n) >= n + 3
(n for all elements of the array, and one each for n, s and i)
Computing Space Complexity : Example 3
Algorithm RSum(a, n)
{
    if (n <= 0) then
        return 0.0;
    else
        return RSum(a, n-1) + a[n];
}
• The recursion stack space includes space for the formal parameters, the local variables and the return address. Since the depth of recursion is n+1, the space complexity of the above algorithm is
S_RSum(n) >= 3(n+1)
(3 = space for the value of n + the return address + a pointer to a[])
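A Python sketch of RSum (an illustration, not part of the slides, with Python's 0-based indexing in place of the 1-based a[n]): each of the n+1 nested calls adds one frame holding n, a reference to a, and a return address, which is exactly the stack space counted above.

```python
def rsum(a, n):
    """Recursive array sum mirroring Algorithm RSum(a, n).

    The recursion goes n -> n-1 -> ... -> 0, so the maximum
    recursion depth is n + 1 frames.
    """
    if n <= 0:
        return 0.0
    return rsum(a, n - 1) + a[n - 1]   # a[n] in the 1-based pseudocode
```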
Computing Time Complexity
• The time taken by program i.e. T(P) is the sum of compile time and
run(or execution) time.
• Since a compiled program may run many times without recompilation, we concern ourselves only with the run time of the program, denoted by tP(instance characteristics).
Computing Time Complexity : Example 1
Algorithm compute(a, b, c)
{
    return a - b*c/a + (a % c)^b;
}
• The time complexity tP for ‘compute’ can be taken as 0, since its run time is constant and independent of the instance characteristics.
Computing Time Complexity : Example 2
Algorithm SumOfArray(a, n)
{
    s := 0.0;
    for i := 1 to n do
        s := s + a[i];
    return s;
}
• The time complexity for ‘SumOfArray’ can be
t_SumOfArray(n) = 3n + 3
[ assigning s := 0 + returning the value + assigning i, plus n iterations of (incrementing i + comparing i + assigning s + a[i] to s) ]
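The 3n + 3 tally can be checked mechanically. The sketch below (an illustration under the slide's own accounting, with hypothetical names) sums a list while counting one step each for the two initial assignments and the return, and three steps per loop iteration:

```python
def sum_of_array(a):
    """Sum a list while counting steps per the tally above:
    one step each for s := 0, i := 1 and return; three per iteration
    (compare i, assign s + a[i] to s, increment i)."""
    steps = 1            # s := 0.0
    s = 0.0
    steps += 1           # i := 1
    for x in a:
        steps += 3       # compare + assign + increment
        s += x
    steps += 1           # return s
    return s, steps
```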
Computing Time Complexity : Example 3
Algorithm RSum(a, n)
{
    if (n <= 0) then
        return 0.0;
    else
        return RSum(a, n-1) + a[n];
}
The time complexity for Example 3 can be written as the recurrence
t_RSum(n) = 2                   if n = 0
          = 2 + t_RSum(n-1)     if n > 0

t_RSum(n) = 2 + t_RSum(n-1)
          = 2 + 2 + t_RSum(n-2)
          = 2 + 2 + 2 + t_RSum(n-3)
          :
          = 2n + t_RSum(0)
          = 2n + 2
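The unrolling above can be confirmed numerically: encoding the recurrence directly as a recursive function and comparing it against the closed form 2n + 2 (a spot-check sketch, not part of the original slides):

```python
def t_rsum(n):
    # t_RSum(0) = 2;  t_RSum(n) = 2 + t_RSum(n-1) for n > 0
    if n == 0:
        return 2
    return 2 + t_rsum(n - 1)
```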
Asymptotic Notations
• Describe the asymptotic running time of an algorithm
• The complexity of any operation (algorithm) can be expressed as a function of the size of its input.
• Asymptotic notations are mathematical tools to represent time
complexity of algorithms for asymptotic analysis.
• The following three asymptotic notations are most commonly used to represent the time complexity of algorithms.
• Big O Notation
• Ω Notation
• Θ Notation
‘Big O’ Notation : (Upper Bound)
• The approximations of the functions can be expressed using a
mathematical notation called ‘Order of magnitude’ or ‘Big-O’
notation.
• The order of magnitude of a function is identified with the term in the
function that increases fastest relative to the size of the problem.
• Definition: A function f(n) is defined to be O(g(n)), that is f(n) = O(g(n)), and is said to be of order g(n) or big oh of g(n), iff there exist positive constants c and n0 such that
f(n) <= c * g(n) for all n >= n0
‘Big O’ Notation : Examples
f(n) <= c * g(n) for all n >= n0
• 3n + 2 = O(n)
• as 3n + 2 <= 4n for all n >= 2
• 3n + 3 = O(n)
• as 3n + 3 <= 4n for all n >= 3
• 1000n^2 + 100n - 6 = O(n^2)
• as 1000n^2 + 100n - 6 <= 1001n^2 for n >= 100
• 6 * 2^n + n^2 = O(2^n)
• as 6 * 2^n + n^2 <= 7 * 2^n for n >= 4
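Each of these claims comes with a concrete witness pair (c, n0), and the inequality can be spot-checked over a finite range of n (a sketch, not a proof; the helper name is hypothetical):

```python
def holds_O(f, g, c, n0, upto=1000):
    """True if f(n) <= c * g(n) for all n0 <= n <= upto.
    A finite spot-check of a Big-O witness, not a proof."""
    return all(f(n) <= c * g(n) for n in range(n0, upto + 1))

# The four examples above with their stated (c, n0) witnesses.
checks = [
    (lambda n: 3 * n + 2,                 lambda n: n,         4,   2),
    (lambda n: 3 * n + 3,                 lambda n: n,         4,   3),
    (lambda n: 1000 * n**2 + 100 * n - 6, lambda n: n**2,   1001, 100),
    (lambda n: 6 * 2**n + n**2,           lambda n: 2**n,      7,   4),
]
```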
‘Omega’ Notation : (Lower Bound)
• Definition: A function f(n) is defined to be Ω(g(n)), that is f(n) = Ω(g(n)), and is said to be of order g(n) or omega of g(n), iff there exist positive constants c and n0 such that
f(n) >= c * g(n) for all n >= n0
‘Omega’ Notation : Examples
f(n) >= c * g(n) for all n >= n0
• 3n + 2 = Ω(n)
• as 3n + 2 >= 3n for all n >= 1
• 3n + 3 = Ω(n)
• as 3n + 3 >= 3n for all n >= 1
• 10n^2 + 4n - 6 = Ω(n^2)
• as 10n^2 + 4n - 6 >= n^2 for n >= 1
• 6 * 2^n + n^2 = Ω(2^n)
• as 6 * 2^n + n^2 >= 2^n for n >= 1
‘Theta’ Notation
• Definition: A function f(n) is defined to be Θ(g(n)), that is f(n) = Θ(g(n)), and is said to be of order g(n) or theta of g(n), iff there exist positive constants c1, c2 and n0 such that
c1 * g(n) <= f(n) <= c2 * g(n) for all n >= n0
‘Theta’ Notation : Examples
c1 * g(n) <= f(n) <= c2 * g(n) for all n >= n0
• 3n + 2 = Θ(n)
• as 3n + 2 >= 3n and 3n + 2 <= 4n for all n >= 2
• so c1 = 3 and c2 = 4
• 3n + 3 = Θ(n)
• as 3n + 3 >= 3n and 3n + 3 <= 4n for all n >= 3
• 10n^2 + 4n - 6 = Θ(n^2)
• as 10n^2 + 4n - 6 >= n^2 and 10n^2 + 4n - 6 <= 11n^2 for n >= 1
• 6 * 2^n + n^2 = Θ(2^n)
• as 6 * 2^n + n^2 >= 2^n and 6 * 2^n + n^2 <= 7 * 2^n for n >= 4
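A Θ bound is just both inequalities at once, so it can be spot-checked the same way as the Big-O examples (a finite sketch with a hypothetical helper name, not a proof):

```python
def holds_theta(f, g, c1, c2, n0, upto=500):
    """Finite spot-check of c1 * g(n) <= f(n) <= c2 * g(n)
    for all n0 <= n <= upto."""
    return all(c1 * g(n) <= f(n) <= c2 * g(n)
               for n in range(n0, upto + 1))

# The Theta examples above with their stated (c1, c2, n0) witnesses.
theta_checks = [
    (lambda n: 3 * n + 2,               lambda n: n,     3,  4, 2),
    (lambda n: 3 * n + 3,               lambda n: n,     3,  4, 3),
    (lambda n: 10 * n**2 + 4 * n - 6,   lambda n: n**2,  1, 11, 1),
    (lambda n: 6 * 2**n + n**2,         lambda n: 2**n,  1,  7, 4),
]
```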
End
