MODULE 1
THE DESIGN AND ANALYSIS OF ALGORITHMS
WHAT IS AN ALGORITHM?

An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required output for any legitimate input in a finite amount of time.

Fig: the notion of an algorithm (a problem is solved by an algorithm, which a computer applies to an input to produce the output)

CHARACTERISTICS OF AN ALGORITHM
 The non-ambiguity requirement for each step of an algorithm cannot be compromised
 The range of inputs for which an algorithm works has to be specified carefully
 The same algorithm can be represented in several different ways
 Several algorithms for solving the same problem may exist
 Algorithms for the same problem can be based on different ideas and can solve the problem with dramatically different speeds

GREATEST COMMON DIVISOR (GCD) OF TWO NON-NEGATIVE INTEGERS

Euclid's algorithm:
gcd(m, n) = gcd(n, m mod n), applied repeatedly until m mod n equals zero

gcd(60, 24) = gcd(24, 12) = gcd(12, 0) = 12

(since gcd(m, 0) = m)

Algorithm Euclid(m, n)
// computes gcd(m, n) by Euclid's algorithm
// input: two non-negative, not-both-zero integers m and n
// output: the gcd of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
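A direct C++ rendering of this pseudocode (a minimal sketch; the function name gcd_euclid is chosen here only for illustration):

#include <iostream>

// Computes gcd(m, n) by Euclid's algorithm: repeatedly replace (m, n) with (n, m mod n).
unsigned gcd_euclid(unsigned m, unsigned n) {
    while (n != 0) {
        unsigned r = m % n;   // r <- m mod n
        m = n;
        n = r;
    }
    return m;                 // gcd(m, 0) = m
}

int main() {
    std::cout << gcd_euclid(60, 24) << "\n";   // prints 12
}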

CONSECUTIVE INTEGER CHECKING ALGORITHM

Step 1: assign the value of min{m, n} to t

Step 2: divide m by t. If the remainder of this division is 0, go to step 3; otherwise go to step 4.

Step 3: divide n by t. If the remainder of this division is 0, return the value of t as the answer and stop; otherwise proceed to step 4.

Step 4: decrease the value of t by 1 and go to step 2.

Example: for 60 and 24 the algorithm will try 24 first, then 23, and so on until it reaches 12, where it stops.
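The same steps in C++ (a sketch that assumes both inputs are positive, since, as the note below points out, the method fails when one of them is zero):

#include <algorithm>
#include <iostream>

// Consecutive integer checking: try t = min(m, n), min(m, n) - 1, ... until t divides both.
unsigned gcd_consecutive(unsigned m, unsigned n) {
    unsigned t = std::min(m, n);            // step 1
    while (m % t != 0 || n % t != 0)        // steps 2-3: t must divide both m and n
        --t;                                // step 4
    return t;
}

int main() {
    std::cout << gcd_consecutive(60, 24) << "\n";   // tries 24, 23, ..., stops at 12
}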

NOTE: The above algorithm does not work correctly when one of its inputs is zero. Therefore it is necessary to specify the range of an algorithm's inputs explicitly and carefully.
MIDDLE SCHOOL PROCEDURE FOR COMPUTING GCD
Step 1: find the prime factors of m

Step 2: find the prime factors of n

Step 3: identify all the common factors in the two prime expansions found in steps 1 and 2 (if p is a common factor occurring pm and pn times in m and n respectively, it should be repeated min{pm, pn} times)

Step 4: compute the product of all the common factors and return it as the gcd.

Ex: 60 = 2 . 2 . 3 . 5

24 = 2 . 2 . 2 . 3

gcd(60, 24) = 2 . 2 . 3 = 12

How would you find the common elements in two sorted lists?

SIEVE METHOD (GENERATING CONSECUTIVE PRIMES ≤ n)

Sieve of Eratosthenes (circa 200 B.C.)

Ex:

2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25

2 3 5 7 9 11 13 15 17 19 21 23 25

2 3 5 7 11 13 17 19 23 25

2 3 5 7 11 13 17 19 23

NOTE: If P is a number whose multiples are being eliminated on the current pass, then the first multiple we should consider is P·P, because all its smaller multiples 2P, 3P, ..., (P-1)P have already been eliminated on earlier passes through the list. This observation helps us avoid eliminating the same number more than once. Obviously P·P should not be greater than n, and therefore P cannot exceed √n rounded down.

Algorithm Sieve(n)
// implements the sieve of Eratosthenes
// input: an integer n ≥ 2
// output: array L of all prime numbers ≤ n
for p ← 2 to n do A[p] ← p
for p ← 2 to ⌊√n⌋ do
    if A[p] ≠ 0            // p hasn't been eliminated
        j ← p * p
        while j ≤ n do
            A[j] ← 0
            j ← j + p
// copy the remaining elements of A to the array L of all the primes
i ← 0
for p ← 2 to n do
    if A[p] ≠ 0
        L[i] ← A[p]
        i ← i + 1
return L
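A C++ sketch of the same sieve (the use of std::vector and the function name are implementation choices, not part of the pseudocode):

#include <iostream>
#include <vector>

// Sieve of Eratosthenes: returns all primes <= n (n >= 2 assumed).
std::vector<int> sieve(int n) {
    std::vector<int> A(n + 1);
    for (int p = 2; p <= n; ++p) A[p] = p;         // candidates 2..n
    for (int p = 2; p * p <= n; ++p)               // p need not exceed sqrt(n)
        if (A[p] != 0)                             // p hasn't been eliminated
            for (int j = p * p; j <= n; j += p)    // first multiple to cross out is p*p
                A[j] = 0;
    std::vector<int> L;
    for (int p = 2; p <= n; ++p)
        if (A[p] != 0) L.push_back(A[p]);          // copy the surviving numbers (the primes)
    return L;
}

int main() {
    for (int p : sieve(25)) std::cout << p << ' ';  // 2 3 5 7 11 13 17 19 23
    std::cout << '\n';
}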

STEPS IN DESIGNING AND IMPLEMENTING AN ALGORITHM

Understand the problem

Decide on computational means
    exact vs approximate solution
    data structures
    algorithm design technique

Design an algorithm

Prove the correctness

Analyze the algorithm

Code the algorithm

WHAT DOES IT MEAN TO UNDERSTAND THE PROBLEM

 Read the problem's description carefully and ask questions if you have any doubts.
 Do a few examples by hand.
 Think about special cases and ask questions if needed.
 Check whether an existing algorithm can solve the problem; otherwise you have to design one.
 Specify the input range; an input will be an instance of the problem the algorithm solves.
DECIDING ON COMPUTATIONAL MEANS
 How will the objects be represented?
1. Random Access Machine (RAM model)
2. Parallel architecture
 How will the operations be implemented?
1. Sequential algorithms
2. Parallel algorithms
 Time, space, etc.

CHOOSING BETWEEN EXACT AND APPROXIMATE PROBLEM SOLVING
 Approximate algorithms (no exactness):
1. Extracting square roots, solving non-linear equations, evaluating definite integrals — such problems cannot be solved exactly.
 Exact algorithms (can solve exactly):
1. The travelling salesman problem, the knapsack problem, etc.
2. But exact algorithms for these are slow because of the problems' complexity, so approximation algorithms are often used instead.
DECIDING APPROPRIATE DATA STRUCTURES
 Algorithms + Data structures = Programs

DESIGN AN ALGORITHM
 Build a computational model of the solving process

“An algorithm design technique (a strategy or paradigm) is a general approach to solving problems algorithmically that is applicable to a variety of problems from different areas of computing.”

General design techniques:

Brute force, greedy, divide & conquer, dynamic programming, decrease & conquer, backtracking, and branch & bound


PROVE CORRECTNESS
 Correct output for every legitimate input in finite time
 For some algorithms this is easy; for others it is complex
 Common techniques for proving correctness:
    based on a correct mathematical formula
    by induction
ANALYSE THE ALGORITHM
Efficiency: time and space

Simplicity

Generality: range of inputs, special cases

Optimality: no other algorithm can do better


Coding
How are the objects and operations of the algorithm represented in the chosen programming language?
IMPORTANT PROBLEM TYPES
 Sorting
 Searching
 String processing
 Graph problems
 Combinatorial problems
 Geometric problems
 Numerical problems

FUNDAMENTAL DATA STRUCTURES


LINEAR DATA STRUCTURES
 Array
 Linked list
 Stack
 Queue

Operations: search, delete, insert

Implementation: static, dynamic

“A data structure can be defined as a particular scheme of organizing related data items.”

Data items can range from elementary data types to data structures.

LINEAR DATA STRUCTURES

The two most important data structures are

 Arrays
 Linked lists

ARRAY
An array is a sequence of n items of the same data type that are stored contiguously in computer memory and made accessible by specifying a value of the array's index.

Item[0]  Item[1]  …………………  Item[n-1]

Fig: array of n elements

Each and every element of an array can be accessed in the same constant amount of time regardless of where in the array the element in question is located (this differentiates arrays from linked lists).

Arrays are used to implement a variety of other data structures, such as strings (we also have bit strings).

Operations on strings: strlen(), strcmp(), etc.


LINKED LIST
A linked list is a sequence of zero or more elements called nodes. Each node contains some data and one or more links called pointers to other nodes.

Header → Item 0 → Item 1 → … → Item n-1

Fig: singly linked list of n elements

 In a singly linked list, every node except the last contains a pointer to the next node; the last node's pointer is null.
 To reach any given node we must traverse the list from the first node, so the access time depends on the node's position in the list.
 Linked lists do not require any preliminary reservation of memory.
 Insertions and deletions can be done efficiently.
 We also have doubly linked lists.

Arrays and linked lists are the two principal choices in representing a more abstract data structure called a linear list, or simply a list.

Two types of lists are: stacks and queues


STACK:
A stack is a list in which insertions and deletions can be done only at one end. This end is called the top.

1. LIFO  2. Push  3. Pop are terms we are familiar with.

Stacks are used in many applications. They are indispensable for implementing recursive algorithms.
QUEUES
A queue is a list in which we add (enqueue) elements at one end, called the rear, and remove (dequeue) them at the other end, called the front.

1. FIFO  2. Dequeue  3. Enqueue

Queues have many applications, including several algorithms for graph problems.

PRIORITY QUEUES
A priority queue implementation is based on an ingenious data structure called a heap.
GRAPHS
Informally: “a collection of points in the plane called vertices or nodes, some of them connected by line segments called edges or arcs”

Formally: “a graph G = (V, E) is defined by a pair of two sets: a finite set V of vertices and a set E of pairs of vertices, called edges”
KEY TERMS OF GRAPHS
 Adjacent vertices (u, v)
 Undirected edge (u, v)
 Directed edge (u, v) → u is the tail and v is the head

Fig (a): undirected graph on vertices a–f        Fig (b): directed graph (digraph) on the same vertices

V = {a, b, c, d, e, f}, E = {(a,c), (a,d), (b,c), (b,f), (c,e), (d,e), (e,f)} for fig (a)

E = {(a,c), (b,c), (d,a), (d,e), (e,f), (b,f), (e,c), (c,e)} for fig (b)



OTHER KEY TERMS


 Loops
 Complete graph ( every pair connected)
 Dense
 Sparse

GRAPH REPRESENTATION

Adjacency matrix for fig (a):          Adjacency lists for fig (a):
   a b c d e f
a  0 0 1 1 0 0                         a → c, d
b  0 0 1 0 0 1                         b → c, f
c  1 1 0 0 1 0                         c → a, b, e
d  1 0 0 0 1 0                         d → a, e
e  0 0 1 1 0 1                         e → c, d, f
f  0 1 0 0 1 0                         f → b, e

A[i,j] = 1 if there is an edge between vertex i and vertex j
A[i,j] = 0 if there is no edge

 The adjacency matrix of an undirected graph is symmetric, i.e., A[i,j] = A[j,i] for every 0 ≤ i, j ≤ n-1.
 The adjacency lists of a graph are a collection of linked lists, one for each vertex, that contain all the vertices adjacent to that list's vertex.
WEIGHTED GRAPH
A weighted graph is a graph with numbers assigned to its edges (weights or costs).

Weighted graphs are used in applications such as the travelling salesman problem and communication networks.

The matrix representation of such a graph is the cost matrix:

C[i,j] = the weight of the edge if there is an edge between vertices i and j
C[i,j] = 0 if there is no edge
Example: weighted graph on vertices a, b, c, d with edge weights a–b = 5, a–c = 1, b–c = 7, b–d = 4, c–d = 2

Cost matrix:                 Adjacency lists (with weights):
   a b c d
a  0 5 1 0                   a → b(5), c(1)
b  5 0 7 4                   b → a(5), c(7), d(4)
c  1 7 0 2                   c → a(1), b(7), d(2)
d  0 4 2 0                   d → b(4), c(2)

PATHS AND CYCLES

 Connectivity and acyclicity
 A path from vertex u to vertex v of a graph G can be defined as a sequence of adjacent vertices that starts with u and ends with v.
 If all the vertices in a path are distinct, it is a simple path.
 The length of a path is the total number of vertices in it minus 1 (l = |V| - 1).
 Directed path
 Connected graph
 Connected component
 Cycle: a path that starts and ends at the same vertex
 Acyclic: a graph with no cycles

TREES
A tree is a connected acyclic graph.

Forest: a graph that has no cycles but is not necessarily connected is called a forest.

In a tree, |E| = |V| - 1.

(For a connected graph this equality is also sufficient for it to be a tree; the equality fails for a forest, such as the graph in the figure below.)

Fig: a graph that is not connected

ROOTED TREES
A rooted tree has one vertex designated as the root.

Hence we have various levels.

State-space trees underlie two important algorithm design techniques: backtracking and branch-and-bound.
TERMINOLOGY OF NODES IN TREES
 Ancestors of v
 Parent of v
 Child of v
 Siblings: vertices having the same parent
 Leaf: a vertex with no children

Sets, Bags, Dictionaries


Set: unordered collection of distinct elements

Operations: membership, union, intersection

Representation: bit string; linear structure

Bag: unordered collection, elements may be repeated

Dictionary: a bag with operations search, add, delete


CONCLUSION
Algorithm classification

 By types of problems
 By design technique

Design techniques

 a general approach to solving problems



ANALYSIS OF ALGORITHMS
Why analyze an algorithm?
An algorithm can be analyzed in terms of time efficiency or space utilization. We will consider only the former right now. The running time of an algorithm is influenced by several factors:

 Speed of the machine running the program


 Language in which the program was written. For example, programs
written in assembly language generally run faster than those written
in C or C++, which in turn tend to run faster than those written in
Java.
 Efficiency of the compiler that created the program
 Method adopted in Clocking the runtime
 The size of the input: processing 1000 records will take more time
than processing 10 records.
 Organization of the input: if the item we are searching for is at the top
of the list, it will take less time to find it than if it is at the bottom.
METRIC TO MEASURE TIME
A basic operation is used as the metric, which does not depend on the extraneous factors above.

 Example: comparisons in sorting; multiplications and additions in matrix multiplication, etc.
 T(n) ≈ c_op · C(n),
 where c_op is the execution time of the basic operation on a particular computer,
 C(n) is the number of times this operation needs to be executed for this algorithm, and
 T(n) is the estimated running time of the algorithm.

If C(n) = ½ n(n-1), how much longer will the algorithm run if we double its input size?

The answer is about 4 times longer (for all but very small values of n):



C(n) = ½ n(n-1) = ½ n² - ½ n ≈ ½ n²

T(2n)/T(n) ≈ [c_op · C(2n)] / [c_op · C(n)] ≈ [½ (2n)²] / [½ n²] = 4

“It is for these reasons that the efficiency analysis framework ignores multiplicative constants and concentrates on the count's order of growth to within a constant multiple for large-size inputs.”
GENERALISING RUNNING TIME
Comparing the growth of the running time, as the input grows, to the growth of known functions:

Input size n    1    log n    n        n log n    n²       n³       2ⁿ
5               1    3        5        15         25       125      32
10              1    4        10       33         100      10³      10³
100             1    7        100      664        10⁴      10⁶      10³⁰
1000            1    10       1000     10⁴        10⁶      10⁹      10³⁰⁰
10000           1    13       10000    10⁵        10⁸      10¹²     10³⁰⁰⁰

ANALYSING THE RUNNING TIME

T(n), the running time of a particular algorithm on input of size n, is taken to be the number of times the instructions in the algorithm are executed. The pseudocode below illustrates the calculation of the mean (average) of a set of n numbers:

1. n = read input from user

2. sum = 0

3. i = 0

4. while i < n

5. number = read input from user

6. sum = sum + number

7. i = i + 1

8. mean = sum / n

The computing time for this algorithm in terms of input size n is:
T(n) = 4n + 5

Statement    Number of times executed

1            1
2            1
3            1
4            n + 1
5            n
6            n
7            n
8            1

WORST-CASE, BEST-CASE AND AVERAGE-CASE EFFICIENCIES FOR SEQUENTIAL SEARCH

i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n return i
else return -1

 Cworst(n) = n
 Cbest(n) = 1
 Cavg(n) = (n+1)/2

Average-case analysis is not so easy; it is not obtained just by taking the average of the best-case and worst-case efficiencies.
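A C++ sketch of this search (element type int and the function name are assumptions for illustration):

#include <iostream>
#include <vector>

// Sequential search: returns the index of K in A, or -1 if K is absent.
// Best case: K is A[0] (1 comparison). Worst case: K is last or absent (n comparisons).
int sequential_search(const std::vector<int>& A, int K) {
    int n = static_cast<int>(A.size());
    int i = 0;
    while (i < n && A[i] != K)
        ++i;
    return (i < n) ? i : -1;
}

int main() {
    std::vector<int> A{3, 1, 4, 1, 5};
    std::cout << sequential_search(A, 4) << ' ' << sequential_search(A, 9) << '\n';  // 2 -1
}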

Analysis of Algorithms
Asymptotic Notation
“Helps in making meaningful but inexact statements about the time complexity and space complexity of an algorithm.”

The word asymptotic means that it is the study of a function of a parameter n, as n becomes larger and larger without bound. Here we are concerned with how the running time of an algorithm increases with the size of the input. We can do an analysis of algorithms which accurately reflects the way in which the computation time increases with the size, but ignores the other details that have little effect on the total. It permits us to provide upper and lower bounds on the value of a function f for suitably large values.

The efficiency analysis framework concentrates on the order of growth of an algorithm's basic operation count as the principal indicator of the algorithm's efficiency. To compare and rank such orders of growth, computer scientists use three notations:

 O (big-oh)
 Ω (big-omega)
 Θ (big-theta)

Definition 2: Let f(n) and g(n) be two functions. We write:

f(n) = O(g(n)) or f = O(g)

if there are positive integers C and N such that f(n) <= C * g(n) for all integers n >= N.

With this definition, we can clearly see the difference between the three types of notation.

In the usual graphs of these bounds, n0 is the minimal possible value that gives a valid bound, but any greater value will work.

There is a handy theorem that relates these notations:

Theorem: For any two functions f(n) and g(n), f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).

EXAMPLE:

Show: f(n) = 3n³ + 3n - 1 = Θ(n³)

As implied by the theorem above, to show this result, we must show two properties:

(i) f(n) = O(n³)

(ii) f(n) = Ω(n³)

First, we show (i), using the same techniques we've already seen for big-Oh.

We consider N = 1, and thus we only consider n ≥ 1 to show the big-Oh result.

f(n) = 3n³ + 3n - 1
     < 3n³ + 3n + 1
     ≤ 3n³ + 3n³ + 1n³
     = 7n³

Thus, with C = 7 and N = 1 we have shown that f(n) = O(n³).

Next, we show (ii). Here we must provide a lower bound for f(n). We choose a value for N such that the highest-order term in f(n) will always dominate (be greater than) the lower-order terms.

We choose N = 2, since for n ≥ 2 we have n³ ≥ 8. This will allow n³ to be larger than the remainder of the polynomial (3n - 1) for all n ≥ 2.

So, by subtracting an extra n³ term, we will form a polynomial that will always be less than f(n) for n ≥ 2.

f(n) = 3n³ + 3n - 1
     > 3n³ - n³        since n³ > 3n - 1 for any n ≥ 2
     = 2n³

Thus, with C = 2 and N = 2, we have shown that f(n) = Ω(n³), since f(n) is shown to always be greater than 2n³.

Big-Oh Operations

Summation Rule
Suppose T1(n) = O(f1(n)) and T2(n) = O(f2(n)). Further, suppose that f2 grows no faster than
f1, i.e., f2(n) = O(f1(n)). Then, we can conclude that T1(n) + T2(n) = O(f1(n)). More generally,
the summation rule tells us O(f1(n) + f2(n)) = O(max(f1(n), f2(n))).

Proof:

Suppose that C and C' are constants such that T1(n) <= C * f1(n) and T2(n) <= C' * f2(n).

Let D = the larger of C and C'. Then,

T1(n) + T2(n) <= C * f1(n) + C' * f2(n)

              <= D * f1(n) + D * f2(n)

              = D * (f1(n) + f2(n))

which is O(f1(n) + f2(n)).

Product Rule:
Suppose T1(n) = O(f1(n)) and T2(n) = O(f2(n)). Then, we can conclude that

T1(n) * T2(n) = O(f1(n) * f2(n)).

The Product Rule can be proven using a similar strategy as the Summation Rule proof.

General Rules:
• All basic statements (assignments, reads, writes, conditional testing, library calls) run in
constant time: O(1).

• The time to execute a loop is the sum, over all times around the loop, of the time to
execute all the statements in the loop, plus the time to evaluate the condition for
termination. Evaluation of basic termination conditions is O(1) in each iteration of the
loop.

• The complexity of an algorithm is determined by the complexity of the most frequently executed statements. If one set of statements has a running time of O(n³) and the rest are O(n), then the complexity of the algorithm is O(n³). This is a result of the summation rule.

Compute the big-Oh running time of the following C++ code segment:
for (i = 2; i < n; i++) {
sum += i;
}

• The number of conditional tests of a for loop is equal to the top index of the loop minus the bottom index, plus one more test to account for the final (failing) conditional test.

• Note: if the for loop terminating condition is i <= n, rather than i < n, then the number of times the conditional test is performed is:

  ((top_index + 1) - bottom_index) + 1

• In this case, we have n - 2 + 1 = n - 1 conditional tests. The assignment in the loop is executed n - 2 times. So, we have (n - 1) + (n - 2) = 2n - 3 instructions executed = O(n).

Consider the sorting algorithm shown below. Find the number of instructions executed and the
complexity of this algorithm.
for (i = 1; i < n; i++) {
    SmallPos = i;
    Smallest = Array[SmallPos];
    for (j = i + 1; j <= n; j++)
        if (Array[j] < Smallest) {
            SmallPos = j;
            Smallest = Array[SmallPos];
        }
    Array[SmallPos] = Array[i];
    Array[i] = Smallest;
}
The total computing time is:

T(n) = n + 4(n-1) + [n(n+1)/2 - 1] + 3[n(n-1)/2]
     = n + 4n - 4 + (n² + n)/2 - 1 + (3n² - 3n)/2
     = 5n - 5 + (4n² - 2n)/2
     = 5n - 5 + 2n² - n
     = 2n² + 4n - 5
     = O(n²)

EXAMPLE :

The following program segment initializes a two-dimensional array A (which has n rows
and n columns) to be an n x n identity matrix – that is, a matrix with 1’s on the diagonal
and 0’s everywhere else. More formally, if A is an n x n identity matrix, then:

A x M = M x A = M, for any n x n matrix M.

What is the complexity of this C++ code?

1) cin >> n; // Same as: n = GetInteger();


2) for (i = 1; i <= n; i ++)
3) for (j = 1; j <= n; j ++)
4) A[i][j] = 0;
5) for (i = 1; i <= n; i ++)
6) A[i][i] = 1;
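One way to answer, counting statement executions line by line as in the T(n) = 4n + 5 example above:

line 1: 1
line 2: n + 1 (loop tests)
line 3: n(n + 1) (loop tests, over all n outer iterations)
line 4: n · n (assignments)
line 5: n + 1 (loop tests)
line 6: n (assignments)

Total: 1 + (n + 1) + n(n + 1) + n² + (n + 1) + n = 2n² + 4n + 3, so the code is O(n²).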

Here is a simple linear search algorithm that returns the index location of a value in an array.

What is the complexity of this C++ code?
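A minimal sketch of such a search (the fixed array bound, the variable names and the I/O are assumptions made for illustration):

#include <iostream>
using namespace std;

int main() {
    int A[1000], n, key, where = -1;
    cin >> n >> key;                          // read the number of elements and the search key
    for (int i = 0; i < n; i++) cin >> A[i];
    for (int i = 0; i < n; i++)               // compare successive elements with the key
        if (A[i] == key) { where = i; break; }
    cout << where << '\n';                    // index location of the value, or -1 if absent
}

If the key is equally likely to be in any of the n positions, roughly two lines are executed per element inspected, which is what the count below formalises.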


Average number of lines executed equals:

[1 + 3 + 5 + ... + (2n - 1)] / n
= [2(1 + 2 + 3 + ... + n) - n] / n

We know that 1 + 2 + 3 + ... + n = n(n + 1)/2, so the average number of lines executed is:

[2[n(n+1)/2] - n] / n
= n
= O(n)

Mathematical Analysis of Non-Recursive Algorithms

An Example:
Consider the Algorithm

Algorithm MaxElement(A[0..n-1])
// Determines the value of the largest element in a given array
// Input: an array A[0..n-1] of real numbers
// Output: the value of the largest element in A
maxval ← A[0]
for i ← 1 to n-1 do
    if A[i] > maxval
        maxval ← A[i]
return maxval

Here the comparison A[i] > maxval is the basic operation.

Therefore, C(n) = ∑ from i=1 to n-1 of 1 = n - 1 ∈ Θ(n)
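The same algorithm in C++ (a sketch; std::vector<int> and the function name are implementation choices):

#include <iostream>
#include <vector>

// Returns the value of the largest element of A; the comparison A[i] > maxval
// is the basic operation and is executed exactly n - 1 times.
int max_element_value(const std::vector<int>& A) {
    int maxval = A[0];
    for (std::size_t i = 1; i < A.size(); ++i)
        if (A[i] > maxval)
            maxval = A[i];
    return maxval;
}

int main() {
    std::vector<int> A{3, 9, 2, 7};
    std::cout << max_element_value(A) << '\n';   // 9
}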

General plan for analyzing time efficiency of Non Recursive Algorithm


1) Decide on a parameter indicating input size.
2) Identify algorithms basic operation
3) Check number of times the basic operation is executed
 Worst case
 best case
 average case
4) Set up a sum expressing the execution of Basic operation.
5) Find the closed form formula for the count

Rules of sum manipulation

• ∑ from i=l to u of c·aᵢ = c · ∑ from i=l to u of aᵢ      -------(R1)

• ∑ from i=l to u of (aᵢ + bᵢ) = ∑ from i=l to u of aᵢ + ∑ from i=l to u of bᵢ      -------(R2)

Summation formulas
• ∑ from i=l to u of 1 = u - l + 1, where l ≤ u

• ∑ from i=1 to n of i = 1 + 2 + ... + n = n(n+1)/2 ≈ ½ n² ∈ Θ(n²)

Example

Algorithm UniqueElements(A[0..n-1])
// Input: an array A[0..n-1]
// Output: returns true if all the elements are unique (distinct)
for i ← 0 to n-2 do
    for j ← i+1 to n-1 do
        if A[i] = A[j] return false
return true

Analyzing time efficiency:

The input size is n, the size of the array.
The basic operation is the comparison in the inner loop.
The worst case occurs when there are no equal elements or when the only pair of equal elements is the last two.

Cworst(n) = ∑ from i=0 to n-2 of ∑ from j=i+1 to n-1 of 1
          = ∑ from i=0 to n-2 of [(n-1) - (i+1) + 1]
          = ∑ from i=0 to n-2 of (n-1-i)
          = ∑ from i=0 to n-2 of (n-1) - ∑ from i=0 to n-2 of i
          = (n-1) ∑ from i=0 to n-2 of 1 - (n-2)(n-1)/2
          = (n-1)² - (n-2)(n-1)/2
          = n(n-1)/2
          ≈ ½ n² ∈ Θ(n²)
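In C++ (a sketch, assuming int elements):

#include <iostream>
#include <vector>

// Returns true if all elements of A are distinct; the comparison A[i] == A[j]
// is the basic operation, executed about n²/2 times in the worst case.
bool all_unique(const std::vector<int>& A) {
    for (std::size_t i = 0; i + 1 < A.size(); ++i)       // i = 0 .. n-2
        for (std::size_t j = i + 1; j < A.size(); ++j)   // j = i+1 .. n-1
            if (A[i] == A[j])
                return false;
    return true;
}

int main() {
    std::cout << all_unique({1, 2, 3}) << ' ' << all_unique({1, 2, 1}) << '\n';   // 1 0
}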
Mathematical Analysis of Recursive Algorithms
Example:
Algorithm F(n)
// Computes n! recursively
// Input: a non-negative integer n
// Output: the value of n!
if n = 0 return 1
else return F(n-1) * n

i.e., we have the function computed as


F(n) = F(n-1) * n for n > 0

If M(n) is the count of the basic operation, i.e., multiplication, then

M(n) = M(n-1) + 1 for n > 0
       [to compute F(n-1)]   [to multiply F(n-1) by n]

Here M(n) is defined implicitly, i.e., in terms of its value at another point, namely n-1. Such equations are termed recurrence relations or recurrences.

Explicit formula for the recurrence (i.e., in terms of n only)

To solve a recurrence we need an initial condition; the algorithm stops when n = 0 (if n = 0 return 1):

M(0) = 0
[the calls stop when n = 0] [no multiplication when n = 0]

Now we have:

M(n) = M(n-1) + 1, n > 0
M(n) = 0,          n = 0

We also have
F(n) = F(n-1) · n for n > 0
F(n) = 1          for n = 0
Back substitution to solve the recurrence:

M(n) = M(n-1) + 1                      substitute M(n-1) = M(n-2) + 1
     = [M(n-2) + 1] + 1 = M(n-2) + 2   substitute M(n-2) = M(n-3) + 1
     = [M(n-3) + 1] + 2 = M(n-3) + 3
     ...
     = M(n-i) + i
     ...
     = M(n-n) + n
     = M(0) + n = 0 + n                // with M(0) = 0 as the initial condition
     = n                               // closed form
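A C++ sketch that also counts the multiplications, illustrating M(n) = n (the global counter is added only for illustration and is not part of the original algorithm):

#include <iostream>

long long mults = 0;                // counts the basic operation (multiplication)

// Computes n! recursively: F(n) = F(n-1) * n, F(0) = 1.
long long factorial(int n) {
    if (n == 0) return 1;           // initial condition: no multiplication when n = 0
    ++mults;                        // one multiplication per recursive step
    return factorial(n - 1) * n;
}

int main() {
    std::cout << factorial(5) << " using " << mults << " multiplications\n";   // 120 using 5
}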

General plan for Analyzing efficiency of


Recursive Algorithm
1) Decide on a parameter indicating the input size.
2) Identify the algorithm's basic operation.
3) Check whether the number of times the basic operation is executed can vary for different inputs of the same size:
    worst case
    best case
    average case efficiency
4) Set up a recurrence relation, with an initial condition, for the count of the basic operation.
5) Solve the recurrence and ascertain its order of growth.

BRUTE FORCE
 The simplest of the design strategies.
 Brute force is a straightforward approach to solving a problem, usually directly based on the problem statement and the definitions of the concepts involved.
 The brute-force strategy is indeed the one that is easiest to apply.
 It should not be overlooked as an important algorithm design strategy.
 Unlike some others, it is applicable to a very wide variety of problems, like computing the sum of n numbers, finding the largest of n numbers, and so on.
 For some important problems like sorting, searching and matrix multiplication, it yields reasonable algorithms of at least some practical value.
 For some problems, designing a more efficient algorithm may be unjustifiable if only a few instances of the problem need to be solved; brute force can then solve them with acceptable speed.
 Even if too inefficient in general, a brute-force algorithm can still be useful for solving small-size instances of a problem.

Selection sort:
We start selection sort by scanning the entire given list to find its smallest element and exchange it with the first element, putting the smallest element in its final position in the sorted list. Then we scan the list starting with the second element to find the smallest among the last n-1 elements and exchange it with the second element, putting the second smallest element in its final position. Generally, on the i-th pass through the list, which we number from 0 to n-2, the algorithm searches for the smallest element among the last n-i elements and swaps it with A[i]:

A[0] ≤ A[1] ≤ ... ≤ A[i-1]  |  A[i], ..., A[min], ..., A[n-1]
(in their final positions)     (the last n-i elements)

After n-1 passes, the list is sorted.

Here is a pseudo code of this algorithm.

ALGORITHM SelectionSort(A[0..n-1])
// Sorts a given array by selection sort
// Input: an array A[0..n-1] of orderable elements
// Output: array A[0..n-1] sorted in ascending order
for i ← 0 to n-2 do
    min ← i
    for j ← i+1 to n-1 do
        if A[j] < A[min]
            min ← j
    swap A[i] and A[min]
As an example, the action of the algorithm on the list 89, 45, 68, 90, 29, 34, 17 is illustrated
below:

89 45 68 90 29 34 17

17 45 68 90 29 34 89

17 29 68 90 45 34 89

17 29 34 90 45 68 89

17 29 34 45 90 68 89

17 29 34 45 68 90 89

17 29 34 45 68 89 90

The analysis of selection sort is straightforward. The input's size is given by the number of elements n; the algorithm's basic operation is the key comparison A[j] < A[min]. The number of times it is executed depends only on the array size:

C(n) = ∑ from i=0 to n-2 of ∑ from j=i+1 to n-1 of 1 = ∑ from i=0 to n-2 of (n-1-i) = n(n-1)/2

Thus selection sort is a Θ(n²) algorithm on all inputs. Note, however, that the number of key swaps is only Θ(n) or, more precisely, n-1. This property distinguishes selection sort positively from many other sorting algorithms.

Bubble sort:
Another brute-force application to the sorting problem is to compare adjacent elements of the list and exchange them if they are out of order. By doing it repeatedly, we end up “bubbling up” the largest element to the last position on the list. The next pass bubbles up the second largest element, and so on until, after n-1 passes, the list is sorted. Pass i (0 ≤ i ≤ n-2) of bubble sort can be represented by the following diagram:

A[0], ..., A[j] ↔ A[j+1], ..., A[n-i-1]  |  A[n-i] ≤ ... ≤ A[n-1]

Here is the pseudocode of this algorithm.

Algorithm BubbleSort(A[0..n-1])
// Sorts a given array by bubble sort
// Input: an array A[0..n-1] of orderable elements
// Output: array A[0..n-1] sorted in ascending order
for i ← 0 to n-2 do
    for j ← 0 to n-2-i do
        if A[j+1] < A[j] swap A[j] and A[j+1]
The action of the algorithm on the list 89,45,68,90,29,34,17 is an exercise.

The number of key comparisons for the bubble sort version given above is the same for all arrays of size n; it is obtained by a sum that is almost identical to the sum for selection sort:

C(n) = ∑ from i=0 to n-2 of ∑ from j=0 to n-2-i of 1
     = ∑ from i=0 to n-2 of [(n-2-i) - 0 + 1]
     = ∑ from i=0 to n-2 of (n-1-i)
     = n(n-1)/2 ∈ Θ(n²)

The number of key swaps, however, depends on the input. For the worst case of decreasing arrays, it is the same as the number of key comparisons:

Sworst(n) = C(n) = n(n-1)/2 ∈ Θ(n²)
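A C++ sketch of this bubble sort (a vector of ints is assumed):

#include <iostream>
#include <utility>
#include <vector>

// Bubble sort: each pass i bubbles the largest remaining element to position n-1-i.
void bubble_sort(std::vector<int>& A) {
    int n = static_cast<int>(A.size());
    for (int i = 0; i <= n - 2; ++i)
        for (int j = 0; j <= n - 2 - i; ++j)
            if (A[j + 1] < A[j])
                std::swap(A[j], A[j + 1]);   // exchange adjacent out-of-order elements
}

int main() {
    std::vector<int> A{89, 45, 68, 90, 29, 34, 17};
    bubble_sort(A);
    for (int x : A) std::cout << x << ' ';   // 17 29 34 45 68 89 90
    std::cout << '\n';
}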

Sequential Search and Brute-Force String Matching

Sequential Search:
Here the algorithm simply compares successive elements of a given list with a given search key until either a match is encountered or the list is exhausted without finding a match. A simple extra trick is often employed in implementing sequential search: if we append the search key to the end of the list, the search for the key will have to be successful, and therefore we can eliminate the check for the list's end on each iteration of the algorithm. Here is the pseudocode:

Algorithm SequentialSearch2(A[0..n-1], K)
// Implements sequential search with the search key as a sentinel
// Input: an array A of n elements and a search key K
// Output: the index of the first element in A[0..n-1] whose value is
//         equal to K, or -1 if no such element is found
A[n] ← K
i ← 0
while A[i] ≠ K do
    i ← i + 1
if i < n return i
else return -1

Another straightforward improvement can be incorporated in sequential search if a given list is known to be sorted: searching in such a list can be stopped as soon as an element greater than or equal to the search key is encountered.
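A C++ sketch of the sentinel version (the array is copied so that the key can be appended as the sentinel; names and types are illustrative):

#include <iostream>
#include <vector>

// Sequential search with the key appended as a sentinel: the loop needs no
// explicit end-of-list test, because the search is guaranteed to succeed.
int sequential_search2(std::vector<int> A, int K) {   // A taken by value: we may append to it
    int n = static_cast<int>(A.size());
    A.push_back(K);                                   // A[n] <- K  (the sentinel)
    int i = 0;
    while (A[i] != K)
        ++i;
    return (i < n) ? i : -1;                          // i == n means only the sentinel matched
}

int main() {
    std::vector<int> A{3, 1, 4, 1, 5};
    std::cout << sequential_search2(A, 4) << ' ' << sequential_search2(A, 9) << '\n';  // 2 -1
}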

Brute-force string matching:

Given a string of n characters, called the text, and a string of m characters (m ≤ n), called the pattern, find a substring of the text that matches the pattern.

We want to find i, the index of the leftmost character of the first matching substring in the text, such that

t_i = p_0, ..., t_{i+j} = p_j, ..., t_{i+m-1} = p_{m-1}

t_0 ... t_i ... t_{i+j} ... t_{i+m-1} ... t_{n-1}      text T
        ↑        ↑            ↑
        p_0      p_j          p_{m-1}                  pattern P

If matches other than the first one need to be found, the string-matching algorithm can simply continue working until the entire text is exhausted.

Note: the last position in the text which can still be the beginning of a matching substring is n - m.

Algorithm BruteForceMatch(T[0..n-1], P[0..m-1])
// Implements brute-force string matching
// Input: T[0..n-1], a text of n characters,
//        and a pattern P[0..m-1] of m characters
// Output: the index of the first character of the first matching substring
//         in T, or -1 if the search is unsuccessful
for i ← 0 to n-m do
    j ← 0
    while j < m and P[j] = T[i+j] do
        j ← j + 1
    if j = m return i
return -1

In the worst case, the algorithm is in Θ(nm).
In the average case, the algorithm is in Θ(m+n) = Θ(n).
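The same algorithm in C++ (a sketch using std::string; the sample text is only an illustration):

#include <iostream>
#include <string>

// Brute-force string matching: returns the index of the first occurrence of P in T, or -1.
int brute_force_match(const std::string& T, const std::string& P) {
    int n = static_cast<int>(T.size()), m = static_cast<int>(P.size());
    for (int i = 0; i <= n - m; ++i) {         // last possible starting position is n - m
        int j = 0;
        while (j < m && P[j] == T[i + j])      // compare pattern and text character by character
            ++j;
        if (j == m) return i;                  // full match found at position i
    }
    return -1;
}

int main() {
    std::cout << brute_force_match("NOBODY_NOTICED_HIM", "NOT") << '\n';   // 7
}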

Exhaustive search:
It is simply a brute-force approach to combinatorial problems. It suggests generating each and every element of the problem's domain, selecting those of them that satisfy all the constraints, and then finding a desired element.

Travelling salesman problem:


To find the shortest tour through a given n cities that visits each city exactly once before
returning to the city where it started.

A weighted graph is used for the representation.

The problem can be stated as the problem of finding the shortest Hamiltonian circuit of the graph. A Hamiltonian circuit is defined as a cycle that passes through all the vertices of the graph exactly once.

Fig: weighted graph on vertices a, b, c, d with edge weights a–b = 2, a–c = 5, a–d = 7, b–c = 8, b–d = 3, c–d = 1

Tour                          Length

a → b → c → d → a             l = 2 + 8 + 1 + 7 = 18

a → b → d → c → a             l = 2 + 3 + 1 + 5 = 11

a → c → b → d → a             l = 5 + 8 + 3 + 7 = 23

a → c → d → b → a             l = 5 + 1 + 3 + 2 = 11

a → d → b → c → a             l = 7 + 3 + 8 + 5 = 23

a → d → c → b → a             l = 7 + 1 + 8 + 2 = 18

Observations about the problem:

 A tour is a sequence of n+1 adjacent vertices, where the first and last vertices are the same.
 Compute the length of each tour and find the shortest one.
 All tours can be obtained by generating the (n-1)! permutations of the intermediate cities.
 Some of the tours differ only in direction and can be avoided.
 The total number of permutations that must be considered is still (n-1)!/2.
 An exhaustive-search sketch is given below.
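An exhaustive-search sketch in C++ for the 4-city instance above (the cost matrix encodes the edge weights listed with the figure; std::next_permutation enumerates the (n-1)! orderings of the intermediate cities):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    // Symmetric cost matrix for cities a, b, c, d (indices 0..3), from the example above.
    int w[4][4] = {{0, 2, 5, 7},
                   {2, 0, 8, 3},
                   {5, 8, 0, 1},
                   {7, 3, 1, 0}};
    std::vector<int> rest{1, 2, 3};              // intermediate cities; city 0 is start and end
    int best = -1;
    std::vector<int> best_tour;
    do {
        int len = w[0][rest.front()] + w[rest.back()][0];
        for (std::size_t i = 0; i + 1 < rest.size(); ++i)
            len += w[rest[i]][rest[i + 1]];      // add the cost of each leg of the tour
        if (best < 0 || len < best) { best = len; best_tour = rest; }
    } while (std::next_permutation(rest.begin(), rest.end()));
    std::cout << "shortest tour: a";
    for (int c : best_tour) std::cout << " -> " << char('a' + c);
    std::cout << " -> a, length = " << best << '\n';   // length 11, as in the table above
}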

Knapsack problem:
Given n items of known weights w1, w2, ..., wn and values v1, v2, ..., vn, and a knapsack of capacity W, find the most valuable subset of the items that fits into the knapsack.

Here we come across:

 All combinations of subsets
 Feasible subsets
 Optimal subsets
 The number of subsets of n elements is 2ⁿ, which leads to an Ω(2ⁿ) algorithm no matter how efficiently individual subsets are generated.

Ex: knapsack capacity W = 10, item weights w1 = 7, w2 = 3, w3 = 4, w4 = 5

values v1 = $42, v2 = $12, v3 = $40, v4 = $25

Subset        Total weight    Total value

Φ             0               $0
{1}           7               $42
{2}           3               $12
{3}           4               $40
{4}           5               $25
{1,2}         10              $54
{1,3}         11              not feasible
{1,4}         12              not feasible
{2,3}         7               $52
{2,4}         8               $37
{3,4}         9               $65
{1,2,3}       14              not feasible
{1,2,4}       15              not feasible
{1,3,4}       16              not feasible
{2,3,4}       12              not feasible
{1,2,3,4}     19              not feasible
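An exhaustive-search sketch in C++ for this instance (each bit mask from 0 to 2ⁿ - 1 selects one subset, mirroring the table above):

#include <iostream>
#include <vector>

int main() {
    // Instance from the example: capacity 10, weights and values of the four items.
    int W = 10;
    std::vector<int> w{7, 3, 4, 5}, v{42, 12, 40, 25};
    int n = static_cast<int>(w.size());
    int best_value = 0, best_mask = 0;
    for (int mask = 0; mask < (1 << n); ++mask) {      // generate all 2^n subsets
        int weight = 0, value = 0;
        for (int i = 0; i < n; ++i)
            if (mask & (1 << i)) { weight += w[i]; value += v[i]; }
        if (weight <= W && value > best_value) {       // keep the best feasible subset
            best_value = value;
            best_mask = mask;
        }
    }
    std::cout << "best value = $" << best_value << " from items:";
    for (int i = 0; i < n; ++i)
        if (best_mask & (1 << i)) std::cout << ' ' << i + 1;
    std::cout << '\n';                                  // best value = $65 from items: 3 4
}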
