DAA Module1

Uploaded by

Sayam H khabiya

Design & Analysis of Algorithms (CSE2007)

MODULE 1
Fundamentals of Algorithmic Problem Solving
What is an algorithm?
The word "algorithm" comes from the Persian author Abu Ja'far Mohammed ibn Musa al-Khowarizmi, who wrote a textbook on mathematics.
An algorithm is an unambiguous step-by-step procedure that solves a given problem in a finite number of steps, accepting a set of legitimate inputs and producing the desired output.
After producing the result, the algorithm should terminate.
Notion of Algorithm

problem → algorithm
input → "computer" → output
Importance of writing an algorithm
Writing an algorithm is like preparing a plan to solve the problem.
An algorithm gives the method of solution, not the answer itself.
The importance of writing algorithms:
We can save resources such as time, human effort and cost.
Debugging becomes easier.
Since algorithms are written in pseudocode, any technical person can understand them easily.
An algorithm can be used as a design document.
Once an algorithm is written and clearly understood, it can easily be converted into an executable program in any programming language.
Features / Properties of an Algorithm

Every algorithm must satisfy the following criteria:

Input: should accept one or more external inputs.
Output: should produce at least one output.
Effectiveness: every instruction should help transform the given input into the desired output.
Finiteness: the algorithm should terminate after a finite number of steps.
Definiteness: each instruction should be clear and unambiguous.
Algorithm Design and Analysis Process

(Figure: flowchart of the algorithm design and analysis process.)
Analysis of algorithms
• How good is the algorithm?
• correctness
• time efficiency
• space efficiency

• Does there exist a better algorithm?


• lower bounds
• optimality

Important problem types
• sorting

• searching

• string processing

• graph problems

Sorting (I)
• Rearrange the items of a given list in ascending order.
• Input: a sequence of n numbers ⟨a1, a2, …, an⟩
• Output: a reordering ⟨a′1, a′2, …, a′n⟩ of the input sequence such that a′1 ≤ a′2 ≤ … ≤ a′n.
• Why sorting?
• Help searching
• Algorithms often use sorting as a key subroutine.
• Sorting key
• A specially chosen piece of information used to guide
sorting. E.g., sort student records by names.

Sorting (II)
• Examples of sorting algorithms
• Selection sort
• Bubble sort
• Insertion sort
• Merge sort
• Heap sort …
• Evaluate sorting algorithm complexity: the number of key
comparisons.
• Two properties
• Stability: A sorting algorithm is called stable if it preserves the
relative order of any two equal elements in its input.
• In place: a sorting algorithm is in place if it does not require extra memory, except possibly for a few memory units.

Searching
• Find a given value, called a search key, in a given set.
• Examples of searching algorithms
• Sequential search
• Binary search …
Input: sorted array a[i] ≤ … ≤ a[j] and key x
while i ≤ j do
    m ← ⌊(i + j)/2⌋
    if x = a[m] then output a[m] and stop
    else if x < a[m] then j ← m − 1
    else i ← m + 1
report that x is not found

Time: O(log n)


String Processing
• A string is a sequence of characters from an alphabet.
• Text strings: letters, numbers, and special characters.
• String matching: searching for a given word/pattern in a
text.
Examples:
(i) searching for a word or phrase on WWW or in a
Word document
(ii) searching for a short read in the reference genomic
sequence

Graph Problems

• Informal definition
• A graph is a collection of points called vertices, some of
which are connected by line segments called edges.

• Graph representations and related structures
• Adjacency matrix
• Cost adjacency matrix
• Spanning tree

Graphs
• Formal definition
• A graph G = <V, E> is defined by a pair of two sets: a
finite set V of items called vertices and a set E of
vertex pairs called edges.
• Undirected and directed graphs (digraphs).
• Complete, dense, and sparse graphs
• A graph with every pair of its vertices connected by
an edge is called complete, K|V|

(Figure: an example graph on vertices 1, 2, 3, 4.)
Graph Representation
• Adjacency matrix
• n x n boolean matrix if |V| is n.
• The element on the ith row and jth column is 1 if
there’s an edge from ith vertex to the jth vertex;
otherwise 0.
• The adjacency matrix of an undirected graph is
symmetric.
• Adjacency linked lists
• A collection of linked lists, one for each vertex, that
contain all the vertices adjacent to the list’s vertex.
Example (digraph with edges 1→2, 1→3, 1→4, 2→4, 3→4):

Adjacency matrix:   Adjacency lists:
0 1 1 1             1 → 2 → 3 → 4
0 0 0 1             2 → 4
0 0 0 1             3 → 4
0 0 0 0             4
Weighted Graphs
• Weighted graphs
• Graphs or digraphs with numbers assigned to the
edges.

(Figure: a weighted graph on vertices 1, 2, 3, 4 with edge weights 5, 6, 7, 8, 9.)
Graph Properties -- Paths and Connectivity
• Paths
• A path from vertex u to v of a graph G is defined as a sequence
of adjacent (connected by an edge) vertices that starts with u
and ends with v.
• Simple paths: all edges of a path are distinct.
• Path length: the number of edges, or the number of vertices minus 1.
• Connected graphs
• A graph is said to be connected if for every pair of its vertices
u and v there is a path from u to v.
• Connected component
• A maximal connected subgraph of a given graph.

Graph Properties -- Acyclicity

• Cycle
• A simple path of positive length that starts and ends at the same vertex.
• Acyclic graph
• A graph without cycles
• DAG (Directed Acyclic Graph)

(Figure: an example DAG on vertices 1, 2, 3, 4.)
Algorithm design strategies

• Brute force
• Divide and conquer
• Decrease and conquer
• Transform and conquer
• Greedy approach
• Dynamic programming
• Backtracking and branch-and-bound
• Space and time tradeoffs
Analysis Framework
The process of finding the efficiency of an algorithm is called analysis of the algorithm.
We can find the efficiency of an algorithm in 2 ways
1) Time efficiency/complexity
2) Space efficiency/complexity
We do analysis of algorithms because of following reasons
1) To compare different algorithms for the same task
2) To predict the performance in a new environment
3) To specify the range of inputs on which algorithm works
properly

Time complexity or time efficiency
• Time complexity of an algorithm is the amount of time the program takes to run to completion.
• Factors on which time complexity of an algorithm
depends are
1) Speed of computer
2) Choice of programming language
3) Compiler used
4) Choice of algorithmic design technique
5) Number of inputs/outputs
6) Size of the inputs/outputs

Time complexity or time efficiency
• Considering the number of inputs given to the algorithm and their size, time efficiency is normally computed in terms of the basic operation.
• The statement or instruction that consumes the most time is called the basic operation.

Time complexity or time efficiency
• To find the time efficiency, we need to
1) Identify the base operation
2) Find the time taken by the basic operation to execute once
3) Find, how many times, this basic operation is executing
• Basic operation: the operation that contributes the most towards
the running time of the algorithm
T(n) ≈ cop · C(n)

where n is the input size, T(n) is the running time of the algorithm, cop is the time taken by the basic operation to execute once, and C(n) is the total number of times the basic operation is executed.
Note: different basic operations may cost differently!
Order of growth
• As the input size n increases, the time required for execution also increases; that is, the behavior of the algorithm changes as n grows. This change in behavior is called the order of growth.
• Most important: Order of growth within a constant
multiple as n→∞
• Example:
• How much faster will algorithm run on computer that is
twice as fast?
• How much longer does it take to solve problem of double
input size?
Order of growth for a few values of n (table from the slide not reproduced here).
Input size and basic operation examples

Problem: searching for a key in a list of n items
  Input size measure: number of the list's items, i.e. n
  Basic operation: key comparison

Problem: multiplication of two matrices
  Input size measure: matrix dimensions or total number of elements
  Basic operation: multiplication of two numbers

Problem: checking primality of a given integer n
  Input size measure: n's size = number of digits (in binary representation)
  Basic operation: division

Problem: a typical graph problem
  Input size measure: number of vertices and/or edges
  Basic operation: visiting a vertex or traversing an edge
Basic asymptotic efficiency classes

1        constant
log n    logarithmic
n        linear
n log n  n-log-n
n²       quadratic
n³       cubic
2ⁿ       exponential
n!       factorial
Space Complexity or Space Efficiency
• Space complexity of an algorithm is the amount of memory the program needs to run to completion.
• Total amount of space consumed by the program is calculated by
sum of the following components
• Fixed space requirements
• Variable space requirements
• Fixed space requirements are the requirements that do not depend
on the number of inputs & outputs and also size of inputs &
outputs of the program.
• Program space ( instruction space to store the code)
• Data space ( space for constants, variables, structures, etc)
• Stack space ( space for parameters, local variables, return values, etc)
Fixed space requirements = program space + data space + stack space

Space Complexity or Space Efficiency
Variable space requirement
Along with fixed space requirements, it also includes the
extra space required
1) When function uses recursion
2) Dynamically allocated arrays, structures, etc
So, the variable space required by a program P on a particular instance I, denoted SP(I), covers the space used during recursion and the space used by dynamically allocated run-time variables.
The total space S of a program P is given by
S(P) = c + SP(I)
where c is the fixed space requirement.

Asymptotic Notations
• Mathematical notations used to express the time
complexity of an algorithm is called asymptotic
notations
• The 3 different asymptotic notations are
1) O (Big-oh) for the worst case
2) Ω (Big-omega) for the best case
3) Θ (Big-theta) for the average case

O (Big-oh) notation
• Big-oh is the formal method of expressing the upper
bound of an algorithm’s running time.
• It is a measure of longest amount of time it could possibly
take for the algorithm to complete.
• "A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if there exist a positive constant c and a non-negative integer n0 such that t(n) ≤ c·g(n) for all n ≥ n0."

Big-oh
(Figure: "prior analysis" and "post analysis" curves; t(n) stays below c·g(n) for all n ≥ n0.)
Ω (Big-Omega) notation
• Big-omega (Ω) is the formal method of expressing the
lower bound of an algorithm’s running time.
• It is a measure of minimum amount of time it could
possibly take for the algorithm to complete.
• "A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if there exist a positive constant c and a non-negative integer n0 such that t(n) ≥ c·g(n) for all n ≥ n0."

Big-omega
(Figure: t(n) stays above c·g(n) for all n ≥ n0.)
Θ (Big-theta) notation
• Big-theta (Θ) is the formal method of expressing the average-case efficiency of an algorithm.
• "A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if there exist positive constants c1, c2 and a non-negative integer n0 such that c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0."

Big-theta
(Figure: t(n) lies between c2·g(n) and c1·g(n) for all n ≥ n0.)
Example: Sequential search

• Worst case: n key comparisons
• Best case: 1 comparison
• Average case: (n+1)/2 comparisons, assuming the key K is in A
Measuring running time with clock():

#include <stdio.h>
#include <time.h>

int main(void) {
    int a[] = {3, 9, 2, 7, 5};   /* sample data */
    int n = 5;
    int max = a[0];
    clock_t start, end;
    double total;

    start = clock();
    for (int i = 1; i < n; i++) {
        if (a[i] > max)
            max = a[i];
    }
    end = clock();
    total = (double)(end - start) / CLOCKS_PER_SEC;
    printf("max = %d, time = %f s\n", max, total);
    return 0;
}
Linear Search

#include <stdio.h>

int linear_search(const int a[], int n, int k) {
    int i = 0;
    while (i < n) {
        if (a[i] == k)
            return i;
        else
            i++;
    }
    printf("element not found\n");
    return -1;
}
Mathematical Analysis of non-recursive algorithms
General Plan for the Analysis of Non-recursive Algorithms
• Decide on the input parameters given to the algorithm and the measure of input size (identify the problem size).
• Identify the algorithm's basic operation.
• Decide whether we need to find only the average case (Θ) or all 3 cases separately.
  • For this, check whether the number of times the basic operation is executed depends only on the problem size.
  • If it depends only on the problem size, then we need to find only the average case Θ.
  • If it also depends on some other additional properties of the input, then we need to find all 3 cases separately.
• Find the total number of times the basic operation is executed and express it as a sum.
• Simplify the sum using standard formulas and rules of sum manipulation, and obtain its order of growth.
Example 1: Maximum element

T(n) = Σ(i=1 to n−1) 1 = n − 1 ∈ Θ(n) comparisons
Example 2: Element uniqueness problem

T(n) = Σ(i=0 to n−2) Σ(j=i+1 to n−1) 1
     = Σ(i=0 to n−2) (n − 1 − i)
     = (n − 1) + (n − 2) + … + 1
     = n(n − 1)/2 ∈ Θ(n²) comparisons
Mathematical Analysis of Recursive Algorithms
General Plan for the Analysis of Recursive Algorithms
• Decide on the input parameters given to the algorithm and the measure of input size (identify the problem size).
• Identify the algorithm's basic operation.
• Decide whether we need to find only the average case (Θ) or all 3 cases separately.
  • For this, check whether the number of times the basic operation is executed depends only on the problem size.
  • If it depends only on the problem size, then we need to find only the average case Θ.
  • If it also depends on some other additional properties of the input, then we need to find all 3 cases separately.
• Set up a recurrence relation, with an appropriate initial condition, for the number of times the basic operation is executed.
• Solve the recurrence, obtain the order of growth, and express it using asymptotic notation.
Example 3: factorial of a number
//Purpose: to compute n! recursively
//Input: A non-negative integer ‘n’
//Output: value of n!

ALGORITHM fact(n)
{
    if n == 0 then
        return 1
    else
        return n * fact(n − 1)
}
Analysis of example 3
The basic operation is the multiplication; the number of multiplications M(n) satisfies the recurrence M(n) = M(n − 1) + 1 for n > 0, with the initial condition M(0) = 0.
Solving the recurrence for M(n)
M(n) = M(n − 1) + 1, M(0) = 0

M(n) = M(n − 1) + 1
     = (M(n − 2) + 1) + 1 = M(n − 2) + 2
     = (M(n − 3) + 1) + 2 = M(n − 3) + 3
     …
     = M(n − i) + i
     …
     = M(0) + n = n

The method is called backward substitution.

Tower of Hanoi
(Animation slides: disk configurations (0,0,0), (0,0,1), (0,1,1), (0,1,0), (1,1,0), (1,1,1), (1,0,1), (1,0,0) shown in sequence.)
Example 2: The Tower of Hanoi Puzzle

Procedure Hanoi(disk, source, dest, aux)

IF disk == 1, THEN
move disk from source to dest
ELSE
Hanoi(disk - 1, source, aux, dest) // Step 1
move disk from source to dest // Step 2
Hanoi(disk - 1, aux, dest, source) // Step 3
END IF

END Procedure
Recurrence for number of moves:
M(n) = 2M(n-1) + 1
Solving the recurrence for the number of moves
M(n) = 2M(n − 1) + 1, M(1) = 1

M(n) = 2M(n − 1) + 1
     = 2(2M(n − 2) + 1) + 1 = 2²·M(n − 2) + 2 + 1
     = 2²(2M(n − 3) + 1) + 2 + 1 = 2³·M(n − 3) + 2² + 2 + 1
     …
     = 2^i·M(n − i) + 2^(i−1) + … + 2 + 1 = 2^i·M(n − i) + 2^i − 1

(The geometric series S = a(r^i − 1)/(r − 1) with a = 1, r = 2 gives 2^(i−1) + … + 2 + 1 = 2^i − 1.)

Substituting i = n − 1:
M(n) = 2^(n−1)·M(1) + 2^(n−1) − 1
     = 2^(n−1) + 2^(n−1) − 1
     = 2·2^(n−1) − 1 = 2^n − 1
Tree of calls for the Tower of Hanoi Puzzle
(Figure: the tree of recursive calls.)
