
Design and Analysis of Algorithms_part2_part1

The document discusses dynamic programming as an algorithm design technique, detailing its applications in problems such as the all pairs shortest path, traveling salesperson, and optimal binary search trees. It emphasizes the principle of optimality and outlines the steps involved in dynamic programming solutions. Additionally, it provides complexity analysis for various algorithms, highlighting their efficiency in solving specific computational problems.


Analysis: If the number of edges in the graph is |E|, then the time for Kruskal's algorithm is O (|E| log |E|).

DESIGN AND ANALYSIS OF ALGORITHMS Page 50


UNIT-III
Dynamic Programming: General method, applications- Matrix chain multiplication, Optimal
binary search trees, 0/1 Knapsack problem, All pairs shortest path problem, Traveling
salesperson problem, Reliability design.

Dynamic Programming

Dynamic programming is a name coined by Richard Bellman in 1955. Dynamic
programming, like the greedy method, is a powerful algorithm design technique that can be
used when the solution to a problem may be viewed as the result of a sequence of
decisions. In the greedy method we make irrevocable decisions one at a time, using a
greedy criterion. In dynamic programming, however, we examine the decision
sequence to see whether an optimal decision sequence contains optimal decision
subsequences.

When optimal decision sequences contain optimal decision subsequences, we can


establish recurrence equations, called dynamic-programming recurrence equations,
that enable us to solve the problem in an efficient way.

Dynamic programming is based on the principle of optimality (also coined by


Bellman). The principle of optimality states that whatever the initial state
and initial decision are, the remaining decisions must constitute an optimal
decision sequence with regard to the state resulting from the first decision. The
principle implies that an optimal decision sequence is comprised of optimal decision
subsequences. Since the principle of optimality may not hold for some formulations of
some problems, it is necessary to verify that it does hold for the problem being
solved. Dynamic programming cannot be applied when this principle does not hold.

The steps in a dynamic programming solution are:

 Verify that the principle of optimality holds

 Set up the dynamic-programming recurrence equations

 Solve the dynamic-programming recurrence equations for the value of the
optimal solution.

 Perform a trace back step in which the solution itself is constructed.

All pairs shortest paths

In the all pairs shortest path problem, we are to find a shortest path between every pair of
vertices in a directed graph G. That is, for every pair of vertices (i, j), we are to find a
shortest path from i to j as well as one from j to i. These two paths are the same when G is
undirected.
When no edge has a negative length, the all-pairs shortest path problem may be solved
by using Dijkstra’s greedy single source algorithm n times, once with each of the n vertices
as the source vertex.

The all pairs shortest path problem is to determine a matrix A such that A (i, j) is the length of a
shortest path from i to j. The matrix A can be obtained by solving n single-source problems
using the algorithm ShortestPaths. Since each application of this procedure requires O (n^2) time,
the matrix A can be obtained in O (n^3) time.

The dynamic programming solution, called Floyd’s algorithm, also runs in O (n^3) time. Floyd’s
algorithm works even when the graph has negative length edges (provided there are no
negative length cycles).

The shortest i to j path in G, i ≠ j originates at vertex i and goes through some


intermediate vertices (possibly none) and terminates at vertex j. If k is an intermediate
vertex on this shortest path, then the subpaths from i to k and from k to j must be shortest
paths from i to k and from k to j, respectively. Otherwise, the i to j path is not of minimum length.
So, the principle of optimality holds. Let Ak (i, j) represent the length of a shortest path
from i to j going through no vertex of index greater than k, we obtain:

Ak (i, j) = min {Ak-1 (i, j), Ak-1 (i, k) + Ak-1 (k, j)}, 1 ≤ k ≤ n, with A0 (i, j) = cost (i, j)

Algorithm All Paths (Cost, A, n)


// cost [1:n, 1:n] is the cost adjacency matrix of a graph with
// n vertices; A [i, j] is the cost of a shortest path from vertex
// i to vertex j. cost [i, i] = 0.0, for 1 ≤ i ≤ n.
{
for i := 1 to n do
for j:= 1 to n do
A [i, j] := cost [i, j]; // copy cost into A.
for k := 1 to n do
for i := 1 to n do
for j := 1 to n do
A [i, j] := min (A [i, j], A [i, k] + A [k, j]);
}
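The AllPaths procedure above can be sketched in runnable Python; the function name floyd_warshall is illustrative, math.inf stands in for a missing edge, and the 3-vertex cost matrix is just a small example.

```python
import math

def floyd_warshall(cost):
    # cost[i][j]: length of edge (i, j), math.inf if absent, 0 on the diagonal.
    n = len(cost)
    a = [row[:] for row in cost]          # A^0 = cost
    for k in range(n):                    # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                a[i][j] = min(a[i][j], a[i][k] + a[k][j])
    return a

INF = math.inf
cost = [[0, 4, 11],
        [6, 0, 2],
        [3, INF, 0]]
dist = floyd_warshall(cost)
print(dist[0][2])  # 6: going through the middle vertex beats the direct edge of length 11
```

Each pass over k realizes one step of the recurrence, so the three nested loops give the O (n^3) bound stated below.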


Complexity Analysis: A dynamic programming algorithm based on this recurrence
involves calculating n+1 matrices, each of size n x n. Therefore, the algorithm has a
complexity of O (n^3).


TRAVELLING SALESPERSON PROBLEM
For the following graph find minimum cost tour for the traveling sales person problem:

[Figure: a complete directed graph on the vertices 1, 2, 3 and 4, with the edge costs used below.]

Let us start the tour from vertex 1:


g (1, V – {1}) = min {c1k + g (k, V – {1, k})}, 2 ≤ k ≤ n - (1)

More generally writing:


g (i, S) = min {cij + g (j, S – {j})}, j ∈ S - (2)

Clearly, g (i, φ) = ci1, 1 ≤ i ≤ n. So,

g (2, φ) = c21 = 5
g (3, φ) = c31 = 6
g (4, φ) = c41 = 8

Using equation – (2) we obtain:

g (1, {2, 3, 4}) = min {c12 + g (2, {3, 4}), c13 + g (3, {2, 4}), c14 + g (4, {2, 3})}

g (2, {3, 4}) = min {c23 + g (3, {4}), c24 + g (4, {3})}
             = min {9 + g (3, {4}), 10 + g (4, {3})}
g (3, {4}) = min {c34 + g (4, φ)} = 12 + 8 = 20
g (4, {3}) = min {c43 + g (3, φ)} = 9 + 6 = 15

Therefore, g (2, {3, 4}) = min {9 + 20, 10 + 15} = min {29, 25} = 25

g (3, {2, 4}) = min {c32 + g (2, {4}), c34 + g (4, {2})}

g (2, {4}) = min {c24 + g (4, φ)} = 10 + 8 = 18

g (4, {2}) = min {c42 + g (2, φ)} = 8 + 5 = 13

Therefore, g (3, {2, 4}) = min {13 + 18, 12 + 13} = min {31, 25} = 25

g (4, {2, 3}) = min {c42 + g (2, {3}), c43 + g (3, {2})}
g (2, {3}) = min {c23 + g (3, φ)} = 9 + 6 = 15
g (3, {2}) = min {c32 + g (2, φ)} = 13 + 5 = 18


Therefore, g (4, {2, 3}) = min {8 + 15, 9 + 18} = min {23, 27} = 23

g (1, {2, 3, 4}) = min {c12 + g (2, {3, 4}), c13 + g (3, {2, 4}), c14 + g (4, {2, 3})} = min
{10 + 25, 15 + 25, 20 + 23} = min {35, 40, 43} = 35

The optimal tour for the graph has length 35.

The optimal tour is: 1, 2, 4, 3, 1.
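Recurrence (2) can be sketched with memoization (the Held–Karp approach); the 0-indexed cost matrix below is transcribed from the edge costs used in this example (c12 = 10, c21 = 5, and so on), and the function names are illustrative.

```python
from functools import lru_cache

# Edge costs from the worked example, 0-indexed (vertex 1 -> index 0, ...).
c = [[0, 10, 15, 20],
     [5, 0, 9, 10],
     [6, 13, 0, 12],
     [8, 8, 9, 0]]

def tsp(c):
    n = len(c)
    @lru_cache(maxsize=None)
    def g(i, s):
        # g(i, S): cost of leaving i, visiting every vertex of S, ending at vertex 0.
        if not s:
            return c[i][0]                         # g(i, φ) = c_i1
        return min(c[i][j] + g(j, s - {j}) for j in s)
    return g(0, frozenset(range(1, n)))

print(tsp(c))  # 35, matching g(1, {2, 3, 4}) above
```

The frozenset argument makes the subproblem states hashable, so lru_cache stores each g(i, S) exactly once.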

OPTIMAL BINARY SEARCH TREE


Let us assume that the given set of identifiers is {a1, . . . , an} with a1 < a2 < . . . < an.
Let p (i) be the probability with which we search for ai. Let q (i) be the probability that the
identifier x being searched for is such that ai < x < ai+1, 0 ≤ i ≤ n (assume a0 = -∞ and
an+1 = +∞). We have to arrange the identifiers in a binary search tree in a way that
minimizes the expected total access time.
In a binary search tree, the number of comparisons needed to access an element at depth 'd' is
d + 1, so if 'ai' is placed at depth 'di', then we want to minimize:

∑i=1..n P(i) * (1 + di)

Let P (i) be the probability with which we shall be searching for 'ai'. Let Q (i) be the
probability of an un-successful search. Every internal node represents a point where a
successful search may terminate. Every external node represents a point where an
unsuccessful search may terminate.

The expected cost contribution for the internal node for 'ai' is:

P (i) * level (ai ) .

Unsuccessful searches terminate at external nodes (the Ei). Hence the cost contribution
for this node is:

Q (i) * (level (Ei) - 1)

The expected cost of the binary search tree is:

∑i=1..n P(i) * level (ai) + ∑i=0..n Q(i) * (level (Ei) - 1)

Given a fixed set of identifiers, we wish to create a binary search tree organization. We
may expect different binary search trees for the same identifier set to have different
performance characteristics.
The computation of each of these c(i, j)’s requires us to find the minimum of m
quantities. Hence, each such c(i, j) can be computed in time O(m). The total time for all
c(i, j)’s with j – i = m is therefore O(nm – m^2).

The total time to evaluate all the c(i, j)’s and r(i, j)’s is therefore:



∑1≤m≤n (nm - m^2) = O (n^3)

Example 1:

Let n = 4, and (a1, a2, a3, a4) = (do, if, need, while) Let P (1: 4) = (3, 3, 1, 1) and Q (0:
4) = (2, 3, 1, 1, 1)

Solution:
Table for recording W (i, j), C (i, j) and R (i, j). Row m holds the triples (W, C, R) for the
subproblems with j - i = m; the columns give i = 0, 1, 2, 3, 4:

j-i \ i       0            1           2          3          4
  0        2, 0, 0      3, 0, 0     1, 0, 0    1, 0, 0    1, 0, 0
  1        8, 8, 1      7, 7, 2     3, 3, 3    3, 3, 4
  2       12, 19, 1     9, 12, 2    5, 8, 3
  3       14, 25, 2    11, 19, 2
  4       16, 32, 2

This computation is carried out row-wise from row 0 to row 4. Initially, W (i, i) = Q
(i) and C (i, i) = 0 and R (i, i) = 0, 0 ≤ i ≤ 4.
Solving for C (0, n):

First, computing all C (i, j) such that j - i = 1; j = i + 1 and as 0 ≤ i < 4; i = 0, 1, 2 and
3; i < k ≤ j. Start with i = 0; so j = 1; as i < k ≤ j, so the possible value for k = 1

W (0, 1) = P (1) + Q (1) + W (0, 0) = 3 + 3 + 2 = 8


C (0, 1) = W (0, 1) + min {C (0, 0) + C (1, 1)} = 8
R (0, 1) = 1 (the value of 'k' that is minimum in the above equation).
Next with i = 1; so j = 2; as i < k ≤ j, so the possible value for k = 2
W (1, 2) = P (2) + Q (2) + W (1, 1) = 3 + 1 + 3 = 7
C (1, 2) = W (1, 2) + min {C (1, 1) + C (2, 2)} = 7
R (1, 2) = 2

Next with i = 2; so j = 3; as i < k ≤ j, so the possible value for k = 3

W(2, 3) = P (3) + Q (3) + W (2, 2) = 1 + 1 + 1 = 3


C (2, 3) = W (2, 3) + min {C (2, 2) + C (3, 3)} = 3 + (0 + 0) = 3
R (2, 3) = 3

Next with i = 3; so j = 4; as i < k ≤ j, so the possible value for k = 4


W(3, 4) = P (4) + Q (4) + W (3, 3) = 1 + 1 + 1 =3



C (3, 4) = W (3, 4) + min {C (3, 3) + C (4, 4)} = 3 + (0 + 0) = 3
R (3, 4) = 4

Second, computing all C (i, j) such that j - i = 2; j = i + 2 and as 0 ≤ i < 3; i = 0, 1, 2; i <
k ≤ j. Start with i = 0; so j = 2; as i < k ≤ j, so the possible values for k = 1 and 2.
W (0, 2) = P (2) + Q (2) + W (0, 1) = 3 + 1 + 8 = 12
C (0, 2) = W (0, 2) + min {(C (0, 0) + C (1, 2)), (C (0, 1) + C (2, 2))}
         = 12 + min {(0 + 7), (8 + 0)} = 12 + 7 = 19
R (0, 2) = 1
Next, with i = 1; so j = 3; as i < k ≤ j, so the possible values for k = 2 and 3.
W (1, 3) = P (3) + Q (3) + W (1, 2) = 1 + 1 + 7 = 9
C (1, 3) = W (1, 3) + min {[C (1, 1) + C (2, 3)], [C (1, 2) + C (3, 3)]}
         = 9 + min {(0 + 3), (7 + 0)} = 9 + 3 = 12
R (1, 3) = 2
Next, with i = 2; so j = 4; as i < k ≤ j, so the possible value for k = 3 and 4.

W (2, 4) = P (4) + Q (4) + W (2, 3) = 1 + 1 + 3 = 5


C (2, 4) = W (2, 4) + min {[C (2, 2) + C (3, 4)], [C (2, 3) + C (4, 4)]}
         = 5 + min {(0 + 3), (3 + 0)} = 5 + 3 = 8
R (2, 4) = 3
Third, computing all C (i, j) such that j - i = 3; j = i + 3 and as 0 ≤ i < 2; i = 0, 1; i < k
≤ j. Start with i = 0; so j = 3; as i < k ≤ j, so the possible values for k = 1, 2 and 3.
W (0, 3) = P (3) + Q (3) + W (0, 2) = 1 + 1 + 12 = 14
C (0, 3) = W (0, 3) + min {[C (0, 0) + C (1, 3)], [C (0, 1) + C (2, 3)], [C (0, 2) + C (3, 3)]}
         = 14 + min {(0 + 12), (8 + 3), (19 + 0)} = 14 + 11 = 25
R (0, 3) = 2



Next with i = 1; so j = 4; as i < k ≤ j, so the possible values for k = 2, 3 and 4.

W (1, 4) = P (4) + Q (4) + W (1, 3) = 1 + 1 + 9 = 11

C (1, 4) = W (1, 4) + min {[C (1, 1) + C (2, 4)], [C (1, 2) + C (3, 4)], [C (1, 3) + C (4, 4)]}
         = 11 + min {(0 + 8), (7 + 3), (12 + 0)} = 11 + 8 = 19
R (1, 4) = 2

Fourth, computing all C (i, j) such that j - i = 4; j = i + 4 and as 0 ≤ i < 1; i = 0; i < k ≤ j.
Start with i = 0; so j = 4; as i < k ≤ j, so the possible values for k = 1, 2, 3 and 4.

W (0, 4) = P (4) + Q (4) + W (0, 3) = 1 + 1 + 14 = 16

C (0, 4) = W (0, 4) + min {[C (0, 0) + C (1, 4)], [C (0, 1) + C (2, 4)],
           [C (0, 2) + C (3, 4)], [C (0, 3) + C (4, 4)]}
         = 16 + min {(0 + 19), (8 + 8), (19 + 3), (25 + 0)} = 16 + 16 = 32
R (0, 4) = 2
From the table we see that C (0, 4) = 32 is the minimum cost of a binary search tree for
(a1, a2, a3, a4). The root of the tree 'T04' is 'a2'.

Hence the left subtree is T01 and the right subtree is T24. The root of T01 is a1 and the
root of T24 is a3.

The left and right subtrees of T01 are T00 and T11 respectively.

The left and right subtrees of T24 are T22 and T34 respectively. T22 is empty, and the
root of T34 is a4.
The resulting optimal binary search tree is:

                a2 (if)
               /       \
          a1 (do)    a3 (need)
                          \
                        a4 (while)
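The row-wise computation above can be carried out by a short Python sketch (obst is an illustrative name); ties for the root k are broken toward the smaller k, which matches the table.

```python
def obst(p, q):
    # p[1..n]: success probabilities P, q[0..n]: failure probabilities Q (p[0] unused).
    # Returns tables w, c, r with c[0][n] the optimal cost, r[i][j] the chosen root.
    n = len(p) - 1
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]                      # W(i, i) = Q(i); C(i, i) = R(i, i) = 0
    for m in range(1, n + 1):               # row m: subproblems with j - i = m
        for i in range(n - m + 1):
            j = i + m
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            # pick the root k, i < k <= j, minimizing C(i, k-1) + C(k, j)
            k = min(range(i + 1, j + 1), key=lambda k: c[i][k - 1] + c[k][j])
            c[i][j] = w[i][j] + c[i][k - 1] + c[k][j]
            r[i][j] = k
    return w, c, r

p = [0, 3, 3, 1, 1]   # P(1:4) for (do, if, need, while)
q = [2, 3, 1, 1, 1]   # Q(0:4)
w, c, r = obst(p, q)
print(w[0][4], c[0][4], r[0][4])  # 16 32 2: cost 32, root a2
```

Running it on P = (3, 3, 1, 1), Q = (2, 3, 1, 1, 1) reproduces the table row by row.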



0/1 – KNAPSACK

We are given n objects and a knapsack. Each object i has a positive weight wi and a positive
profit pi. The knapsack can carry a weight not exceeding m. Fill the knapsack so that the value
of the objects in the knapsack is maximized.
A solution to the knapsack problem can be obtained by making a sequence of decisions on
the variables x1, x2, . . . . , xn. A decision on variable xi involves determining which of the
values 0 or 1 is to be assigned to it. Let us assume that

decisions on the xi are made in the order xn, xn-1, . . . , x1. Following a decision on xn,
we may be in one of two possible states: the capacity remaining is m and no profit has
accrued, or the capacity remaining is m – wn and a profit of pn has accrued. It is clear
that the remaining decisions xn-1, . . . , x1 must be optimal with respect to the problem
state resulting from the decision on xn. Otherwise, xn, . . . , x1 will not be optimal.
Hence, the principle of optimality holds.
fn (m) = max {fn-1 (m), fn-1 (m - wn) + pn} -- 1

For arbitrary fi (y), i > 0, this equation generalizes to:

fi (y) = max {fi-1 (y), fi-1 (y - wi) + pi} -- 2

Equation-2 can be solved for fn (m) by beginning with the knowledge f0 (y) = 0 for all y ≥ 0
and fi (y) = -∞ for y < 0. Then f1, f2, . . . fn can be successively computed using
equation–2.

When the wi’s are integer, we need to compute fi (y) for integer y, 0 ≤ y ≤ m. Since fi (y)
= -∞ for y < 0, these function values need not be computed explicitly. Since each fi
can be computed from fi-1 in Θ (m) time, it takes Θ (m n) time to compute fn. When
the wi’s are real numbers, fi (y) is needed for real numbers y such that 0 ≤ y ≤ m. So, fi
cannot be explicitly computed for all y in this range. Even when the wi’s are integer, the
explicit Θ (m n) computation of fn may not be the most efficient computation. So, we
explore an alternative method for both cases.
Each fi (y) is an ascending step function; i.e., there are a finite number of y’s, 0 = y1 < y2
< . . . < yk, such that fi (y1) < fi (y2) < . . . < fi (yk); fi (y) = -∞, y < y1; fi (y) = fi
(yk), y ≥ yk; and fi (y) = fi (yj), yj ≤ y < yj+1. So, we need to compute only fi (yj), 1 ≤ j
≤ k. We use the ordered set Si = {(fi (yj), yj) | 1 ≤ j ≤ k} to represent fi (y). Each member
of Si is a pair (P, W), where P = fi (yj) and W = yj. Notice that S0 = {(0, 0)}. We can
compute Si+1 from Si by first computing:

S1i = {(P, W) | (P – pi, W – wi) ∈ Si}

Now, Si+1 can be computed by merging the pairs in Si and S1i together. Note that if Si+1
contains two pairs (Pj, Wj) and (Pk, Wk) with the property that Pj ≤ Pk and Wj ≥ Wk,
then the pair (Pj, Wj) can be discarded because of equation-2. Discarding or purging
rules such as this one are known as dominance rules. Dominated tuples get purged. In
the above, (Pk, Wk) dominates (Pj, Wj).
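For integer weights, equation-2 can be evaluated with a single array f indexed by capacity; iterating y downward lets f serve as both fi-1 and fi. The profit and weight values below are illustrative.

```python
def knapsack(p, w, m):
    # fi(y) = max{fi-1(y), fi-1(y - wi) + pi}; f[y] holds the current fi(y), 0 <= y <= m.
    f = [0] * (m + 1)                      # f0(y) = 0 for all y >= 0
    for pi, wi in zip(p, w):
        for y in range(m, wi - 1, -1):     # descending y keeps the fi-1 values intact
            f[y] = max(f[y], f[y - wi] + pi)
    return f[m]

print(knapsack(p=[1, 2, 5], w=[2, 3, 4], m=6))  # 6: take the objects of weight 2 and 4
```

This is the Θ(mn) tabular computation described above; the pair-merging representation avoids the dependence on m when the weights are large or real-valued.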



Reliability Design

The problem is to design a system that is composed of several devices connected in series.
Let ri be the reliability of device Di (that is, ri is the probability that device i will
function properly); then the reliability of the entire system is ∏ ri. Even if the individual
devices are very reliable (the ri’s are very close to one), the reliability of the system may
not be very good. For example, if n = 10 and ri = 0.99, 1 ≤ i ≤ 10, then ∏ ri = 0.904.
Hence, it is desirable to duplicate devices. Multiple copies of the same device type are
connected in parallel.

If stage i contains mi copies of device Di, then the probability that all mi copies
malfunction is (1 - ri)^mi. Hence the reliability of stage i becomes 1 - (1 - ri)^mi.
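The two formulas can be checked with a short sketch (function names illustrative): a stage with mi parallel copies has reliability 1 - (1 - ri)^mi, and the series system multiplies the stage reliabilities.

```python
def stage_reliability(r, m):
    # Reliability of a stage with m parallel copies of a device of reliability r.
    return 1 - (1 - r) ** m

def system_reliability(r_list, m_list):
    # Series system: the product of the stage reliabilities.
    prod = 1.0
    for r, m in zip(r_list, m_list):
        prod *= stage_reliability(r, m)
    return prod

# 10 devices with r = 0.99 in series: about 0.904, as in the text.
print(round(system_reliability([0.99] * 10, [1] * 10), 3))  # 0.904
# Duplicating each device (2 copies per stage) raises it to about 0.999.
print(round(system_reliability([0.99] * 10, [2] * 10), 3))
```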

UNIT IV:

Backtracking: General method, Applications- n-queens problem, Sum of subsets problem,


Graph coloring, Hamiltonian cycles.
Backtracking (General method)
Many problems are difficult to solve algorithmically. Backtracking makes it possible to solve at
least some large instances of difficult combinatorial problems.
Suppose you have to make a series of decisions among various choices, where
 You don’t have enough information to know what to choose
 Each decision leads to a new set of choices.
 Some sequence of choices (possibly more than one) may be a solution to your problem.

Backtracking is a methodical (logical) way of trying out various sequences of decisions, until
you find one that “works”.

Given a maze, find a path from start to finish.


 In maze, at each intersection, you have to decide between 3 or fewer choices:
 Go straight
 Go left
 Go right
 You don’t have enough information to choose correctly
 Each choice leads to another set of choices.
 One or more sequences of choices may or may not lead to a solution.
 Many types of maze problem can be solved with backtracking.

Sorting the array of integers a[1:n] is a problem whose solution is expressible as an n-tuple
(x1, . . . , xn), where xi is the index in ‘a’ of the ith smallest element.
The criterion function P is the inequality a[xi] ≤ a[xi+1] for 1 ≤ i < n.
Each Si is finite and includes the integers 1 through n.
Let mi be the size of set Si. Then m = m1 m2 . . . mn is the number of n-tuples that are
possible candidates for satisfying the function P.
The brute force approach would be to form all these n-tuples, evaluate (judge) each one with P,
and save those which yield the optimum.
The backtracking algorithm yields the same answer with far fewer than m trials.
Many of the problems we solve using backtracking require that all the solutions satisfy a
complex set of constraints.
For any problem these constraints can be divided into two categories:



 Explicit constraints.
 Implicit constraints.

Explicit constraints: Explicit constraints are rules that restrict each xi to take on values only
from a given set.
Example: xi ≥ 0, or Si = {all non-negative real numbers}
xi = 0 or 1, or Si = {0, 1}
li ≤ xi ≤ ui, or Si = {a : li ≤ a ≤ ui}
The explicit constraint depends on the particular instance I of the problem being solved. All
tuples that satisfy the explicit constraints define a possible solution space for I.
Implicit Constraints:
The implicit constraints are rules that determine which of the tuples in the solution space of I
satisfy the criterion function. Thus implicit constraints describe the way in which the xi must
relate to each other.
Applications of Backtracking:
 N Queens Problem
 Sum of subsets problem
 Graph coloring
 Hamiltonian cycles.

N-Queens Problem:
It is a classic combinatorial problem. The eight queens puzzle is the problem of placing eight
queens on an 8×8 chessboard so that no two queens attack each other; that is, no two of
them are on the same row, column, or diagonal.
The 8-queens puzzle is an example of the more general n-queens problem of placing n queens on
an n×n chessboard.

Here queens can also be numbered 1 through 8


Each queen must be on a different row
Assume queen ‘i’ is to be placed on row ‘i’
All solutions to the 8-queens problem can therefore be represented as 8-tuples (x1, x2, . . . , x8), where
xi is the column on which queen ‘i’ is placed, and
Si = {1, 2, 3, 4, 5, 6, 7, 8}, 1 ≤ i ≤ 8.
Therefore the solution space consists of 8^8 8-tuples.
The implicit constraints for this problem are that no two xi’s can be the same column and no two
queens can be on the same diagonal.
By these two constraints, the size of the solution space reduces from 8^8 tuples to 8! tuples.
For example, (4, 6, 8, 2, 7, 1, 3, 5) is one such solution.



In the same way, when n queens are to be placed on an n×n chessboard, the solution space consists
of all n! permutations of the n-tuple (1, 2, . . . , n).

Some solutions to the 8-queens problem:


Algorithm for whether a new queen can be placed:

Algorithm Place(k, i)
// Return true if a queen can be placed in the kth row and ith column;
// otherwise return false.
{
    for j := 1 to k-1 do
        if ((x[j] = i) or (Abs(x[j] - i) = Abs(j - k))) then
            return false;
    return true;
}

All solutions to the n-queens problem:

Algorithm NQueens(k, n)
// Prints all possible placements of n queens on an n×n chessboard.
{
    for i := 1 to n do
    {
        if Place(k, i) then
        {
            x[k] := i;
            if (k = n) then write (x[1:n]);
            else NQueens(k + 1, n);
        }
    }
}
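A runnable Python counterpart of Place and NQueens (0-indexed, and yielding solutions instead of printing; the names are illustrative):

```python
def place(x, k, i):
    # True if queen k can go in column i, given columns x[0:k] of queens 0..k-1.
    return all(x[j] != i and abs(x[j] - i) != abs(j - k) for j in range(k))

def nqueens(n):
    # Yield every n-queens solution as a tuple of columns (row j holds column x[j]).
    x = [0] * n
    def solve(k):
        for i in range(n):
            if place(x, k, i):
                x[k] = i
                if k == n - 1:
                    yield tuple(x)
                else:
                    yield from solve(k + 1)
    yield from solve(0)

solutions = list(nqueens(8))
print(len(solutions))  # 92 solutions for the 8-queens problem
```

For n = 8 this yields the well-known 92 solutions, confirming the pruning from 8^8 candidate tuples.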



Sum of Subsets Problem:
Given positive numbers wi, 1 ≤ i ≤ n, and m, the sum of subsets problem is finding all subsets of
the wi whose sums are m.
Definition: Given n distinct positive numbers (usually called weights), we desire to find all
combinations of these numbers whose sums are m. This is called the sum of subsets problem.
This problem can be formulated using either fixed-sized tuples or variable-sized tuples.
The backtracking solution uses the fixed-size tuple strategy.

For example:
If n=4 (w1, w2, w3, w4)=(11,13,24,7) and m=31.
Then desired subsets are (11, 13, 7) & (24, 7).
The two solutions are described by the vectors (1, 2, 4) and (3, 4).

In general all solutions are k-tuples (x1, x2, . . . , xk), 1 ≤ k ≤ n, and different solutions may have
different-sized tuples.

 Explicit constraints require xi ∈ {j | j is an integer, 1 ≤ j ≤ n}.

 Implicit constraints require:
No two xi be the same, and the sum of the corresponding wi’s be m;
i.e., (1, 2, 4) and (1, 4, 2) represent the same subset. Another constraint is xi < xi+1, 1 ≤ i < k.

wi  the weight of item i
m   the capacity of the bag (subset)
xi  the element of the solution vector; it is either one or zero, depending on whether
the weight wi is included or not.
If xi = 1 then wi is chosen; if xi = 0 then wi is not chosen.

A tuple (x1, x2, . . . , xk) cannot lead to an answer node unless the weights still to come can
reach m while the running sum does not overshoot it:

∑i=1..k wi xi + ∑i=k+1..n wi ≥ m    and    ∑i=1..k wi xi + wk+1 ≤ m

If either condition fails, the tuple cannot lead to a solution.

Recursive backtracking algorithm for the sum of subsets problem:

Algorithm SumOfSub(s, k, r)
// Find all subsets of w[1:n] that sum to m. s is the sum of the weights already
// chosen, and r is the sum of the remaining weights w[k], . . . , w[n].
// It is assumed that the w’s are in nondecreasing order, s + w[k] ≤ m and s + r ≥ m.
{
    x[k] := 1;
    if (s + w[k] = m) then write (x[1:k]); // subset found
    else if (s + w[k] + w[k+1] ≤ m) then
        SumOfSub(s + w[k], k + 1, r - w[k]);
    if ((s + r - w[k] ≥ m) and (s + w[k+1] ≤ m)) then
    {
        x[k] := 0;
        SumOfSub(s, k + 1, r - w[k]);
    }
}
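A Python sketch of SumOfSub; it collects the chosen weights in a list instead of printing x[1:k], and explicitly guards the w[k+1] accesses at the last index (the pseudocode relies on its invariants instead). Names are illustrative.

```python
def sum_of_subsets(w, m):
    # All subsets of w summing to m; w must be in nondecreasing order.
    n = len(w)
    x = [0] * n
    results = []
    def solve(s, k, r):
        x[k] = 1
        if s + w[k] == m:                                  # subset found
            results.append([w[i] for i in range(k + 1) if x[i]])
        elif k + 1 < n and s + w[k] + w[k + 1] <= m:       # extend with x[k] = 1
            solve(s + w[k], k + 1, r - w[k])
        if s + r - w[k] >= m and k + 1 < n and s + w[k + 1] <= m:
            x[k] = 0                                       # try x[k] = 0
            solve(s, k + 1, r - w[k])
    if sum(w) >= m:
        solve(0, 0, sum(w))
    return results

print(sum_of_subsets([7, 11, 13, 24], 31))  # [[7, 11, 13], [7, 24]]
```

On the example above (weights 7, 11, 13, 24 and m = 31) it finds exactly the two subsets (11, 13, 7) and (24, 7).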



Graph Coloring:
Let G be an undirected graph and ‘m’ a given positive integer. The graph coloring problem is
to assign colors to the vertices of an undirected graph with the restriction that no two adjacent
vertices are assigned the same color, yet only ‘m’ colors are used.
The optimization version calls for coloring a graph using the minimum number of colors.
The decision version, known as k-coloring, asks whether a graph is colorable using at most k
colors.
Note that, if ‘d’ is the degree of the given graph, then it can be colored with ‘d+1’ colors.
The m-colorability optimization problem asks for the smallest integer ‘m’ for which the graph G
can be colored. This integer is referred to as the “chromatic number” of the graph.
Example

 The above graph can be colored with 3 colors: 1, 2, and 3.
 The color of each node is indicated next to it.
 3 colors are needed to color this graph, and hence this graph’s chromatic number
is 3.
 A graph is said to be planar iff it can be drawn in a plane (flat) in such a way that no two
edges cross each other.
 A special case of the m-colorability decision problem is the 4-color problem for planar graphs.
 Given any map, can the regions be colored in such a way that no two adjacent regions
have the same color yet only 4-colors are needed?
 To solve this problem, graphs are very useful, because a map can easily be transformed
into a graph.
 Each region of the map becomes a node, and if two regions are adjacent, then the
corresponding nodes are joined by an edge.

Example:

The above map requires 4 colors.
 For many years, it was known that 5 colors are sufficient to color any map.
 After several hundred years, this problem was solved by a group of mathematicians with
the help of a computer. They showed that 4 colors are sufficient.
Suppose we represent a graph by its adjacency matrix G[1:n, 1:n]

Ex:

Here G[i, j]=1 if (i, j) is an edge of G, and G[i, j]=0 otherwise.


Colors are represented by the integers 1, 2, . . . , m and the solutions are given by the n-tuple (x1,
x2, . . . , xn), where
xi is the color of node i.

State space tree for n = 3 nodes and m = 3 colors:

The 1st node can be colored in 3 ways,
the 2nd node can be colored in 3 ways, and
the 3rd node can be colored in 3 ways,
so the graph can be colored in 27 possible ways.



Finding all m-colorings of a graph:

Algorithm mColoring(k)
// g[1:n, 1:n] is a boolean adjacency matrix.
// k is the index (node) of the next vertex to color.
{
    repeat
    {
        NextValue(k); // assign to x[k] a legal color
        if (x[k] = 0) then return; // no new color possible
        if (k = n) then write (x[1:n]);
        else mColoring(k + 1);
    } until (false);
}

Getting the next color:

Algorithm NextValue(k)
// x[1], . . . , x[k-1] have been assigned integer values in the range [1, m].
{
    repeat
    {
        x[k] := (x[k] + 1) mod (m + 1); // next highest color
        if (x[k] = 0) then return; // all colors have been used
        for j := 1 to n do
        {
            if ((g[k, j] ≠ 0) and (x[k] = x[j])) then break;
        }
        if (j = n + 1) then return; // new color found
    } until (false);
}

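A Python sketch combining mColoring and NextValue; instead of cycling x[k] through colors mod (m + 1), the inner loop simply tries each color 1..m and keeps the legal ones (names illustrative).

```python
def m_colorings(g, m):
    # All m-colorings of a graph given by the boolean adjacency matrix g (n x n).
    n = len(g)
    x = [0] * n                 # x[k]: color of node k, 0 = uncolored
    results = []
    def color(k):
        for c in range(1, m + 1):
            # c is legal if no neighbor of k already has color c
            if all(not g[k][j] or x[j] != c for j in range(n)):
                x[k] = c
                if k == n - 1:
                    results.append(x[:])
                else:
                    color(k + 1)
                x[k] = 0        # undo, try the next color
        return results
    return color(0)

# Triangle: 3 mutually adjacent nodes, m = 3 colors -> 3! = 6 colorings.
triangle = [[0, 1, 1],
            [1, 0, 1],
            [1, 1, 0]]
print(len(m_colorings(triangle, 3)))  # 6
```

For a triangle, 3 colors give the 3! = 6 colorings, while 2 colors give none, since its chromatic number is 3.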



Hamiltonian Cycles:
 Def: Let G = (V, E) be a connected graph with n vertices. A Hamiltonian cycle is a round-
trip path along n edges of G that visits every vertex once and returns to its starting
position.
 It is also called a Hamiltonian circuit.
 A Hamiltonian circuit is a graph cycle (i.e., a closed loop) through a graph that visits each
node exactly once.
 A graph possessing a Hamiltonian cycle is said to be a Hamiltonian graph.
Example:

 If a Hamiltonian cycle begins at some vertex v1 ∈ G and the vertices
of G are visited in the order v1, v2, . . . , vn+1 (where vn+1 = v1), then the edges (vi, vi+1)
are in E, 1 ≤ i ≤ n.

g1
The above graph g1 contains the Hamiltonian cycle: 1, 2, 8, 7, 6, 5, 4, 3, 1

The above graph contains no Hamiltonian cycles.

 There is no known easy way to determine whether a given graph contains a


Hamiltonian cycle.
 Using the backtracking method, it is possible.
 We give a backtracking algorithm that finds all the Hamiltonian cycles in a graph.
 The graph may be directed or undirected. Only distinct cycles are output.
 From graph g1 backtracking solution vector= {1, 2, 8, 7, 6, 5, 4, 3, 1}
 The backtracking solution vector (x1, x2, --- xn)
xi ith visited vertex of proposed cycle.



 Using backtracking, we need to determine how to compute the set of possible
vertices for xk when x1, x2, . . . , xk-1 have already been chosen.
If k = 1 then x1 can be any of the n vertices.
Using the “NextValue” algorithm, the recursive backtracking scheme finds all Hamiltonian
cycles.
The algorithm is started by first initializing the adjacency matrix G[1:n, 1:n], then setting x[2:n]
to zero and x[1] to 1, and then executing Hamiltonian(2).
Generating the next vertex:

Algorithm NextValue(k)
// x[1:k-1] is a path of k-1 distinct vertices.
// If x[k] = 0, then no vertex has yet been assigned to x[k].
{
    repeat
    {
        x[k] := (x[k] + 1) mod (n + 1); // next vertex
        if (x[k] = 0) then return;
        if (G[x[k-1], x[k]] ≠ 0) then
        {
            for j := 1 to k-1 do if (x[j] = x[k]) then break;
            // check for distinctness
            if (j = k) then // if true, then the vertex is distinct
                if ((k < n) or ((k = n) and G[x[n], x[1]] ≠ 0)) then
                    return;
        }
    } until (false);
}

Finding all Hamiltonian cycles:

Algorithm Hamiltonian(k)
{
    repeat
    {
        NextValue(k); // assign a legal next value to x[k]
        if (x[k] = 0) then return;
        if (k = n) then write (x[1:n]);
        else Hamiltonian(k + 1);
    } until (false);
}
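A Python sketch of the scheme: vertex 0 plays the role of x[1] = 1, and the adjacency and distinctness tests of NextValue appear as the if condition (names illustrative).

```python
def hamiltonian_cycles(g):
    # All Hamiltonian cycles starting at vertex 0, for adjacency matrix g.
    n = len(g)
    x = [0] * n                 # x[i]: ith vertex on the cycle; x[0] fixed at 0
    results = []
    def extend(k):
        for v in range(1, n):
            if g[x[k - 1]][v] and v not in x[:k]:    # adjacent and distinct
                x[k] = v
                if k == n - 1:
                    if g[v][x[0]]:                   # edge back to the start
                        results.append(x[:] + [x[0]])
                else:
                    extend(k + 1)
                x[k] = 0
    extend(1)
    return results

# 4-cycle 0-1-2-3-0: two Hamiltonian cycles, one per direction.
c4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(hamiltonian_cycles(c4))  # [[0, 1, 2, 3, 0], [0, 3, 2, 1, 0]]
```

Since the matrix is indexed directly, the same sketch works for directed or undirected graphs.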



UNIT-V
Branch and Bound: General method, applications- Travelling sales person problem, 0/1
Knapsack problem- LC branch and Bound solution, FIFO branch and Bound solution.

NP-Hard and NP-Complete Problems: Basic concepts, Non deterministic algorithms, NP-Hard
and NP Complete classes, NP-Hard problems, Cook’s theorem.
Branch & Bound
Branch & Bound (B&B) is a general algorithm (or systematic method) for finding an optimal
solution to various optimization problems, especially in discrete and combinatorial
optimization.
The B&B strategy is very similar to backtracking in that a state space tree is used to solve
a problem.
The differences are that the B&B method
 Does not limit us to any particular way of traversing the tree.
 It is used only for optimization problem
 It is applicable to a wide variety of discrete combinatorial problem.
B&B is a rather general optimization technique that applies where the greedy method and
dynamic programming fail.
It is much slower; indeed, it often leads to exponential time complexities
in the worst case.
The term B&B refers to all state space search methods in which all children of the “E-
node” are generated before any other “live node” can become the “E-node”.
 Live node is a node that has been generated but whose children have not yet been
generated.
 E-node is a live node whose children are currently being explored.
 Dead node is a generated node that is not to be expanded or explored any further. All
children of a dead node have already been expanded.

BFS and D-search (DFS) are two graph search strategies in which the exploration of a new
node cannot begin until the node currently being explored is fully explored.
Both BFS and D-search (DFS) generalize to B&B strategies.
 BFS like state space search will be called FIFO (First In First Out) search as the list of
live nodes is “First-in-first-out” list (or queue).
 D-search (DFS) Like state space search will be called LIFO (Last In First Out) search
as the list of live nodes is a “last-in-first-out” list (or stack).
In backtracking, bounding function are used to help avoid the generation of sub-trees that
do not contain an answer node.
We will use 3-types of search strategies in branch and bound
1) FIFO (First In First Out) search
2) LIFO (Last In First Out) search
