DAA(Design and Analysis of Algorithm) CSE
Algorithm-CS T44
B.Tech./ CSE/IV SEMESTER
1.1 Algorithm 1
1.2 Performance Analysis 3
1.3 Analysis 6
1.4 Asymptotic analysis 6
1.5 Analysis Of Searching 6
1.5.1 Sequential Search 6
1.5.2 Binary Search 7
1.5.3 Fibonacci Search 8
1.6 Analysis Of Sorting 9
1.6.1 Heap Sort 9
1.6.2 Radix Sort 12
1.6.3 Insertion Sort 14
1.6.4 Selection Sort 14
1.6.5 Bubble Sort 16
1.7 Recursion 17
1.8 Analysis Of Non-Recursive And Recursive Algorithms 18
1.9 Solving Recurrence Equations 19
Chapter2 29
Chapter 3 55
3.1 Dynamic Programming 55
3.1.1 Multistage Graph 55
3.1.2 All Pair Shortest Path 61
3.1.3 0/1 Knapsack Problem 64
3.1.4 Travelling Salesman Problem 66
3.1.5. Chained Matrix Multiplication 68
3.2. Basic Search and Traversal Technique 71
3.2.1 Techniques For Graphs 72
3.2.2 Breadth First Search and Traversal 73
3.2.3 Depth First Search and Traversal 74
3.2.4 Topological Sorting 74
Chapter 4 Backtracking 77
4.1 Backtracking 77
4.1.1 Sum of subsets 78
4.1.2 Hamiltonian cycles 79
4.1.3 8-Queens Problem 81
4.1.4 Graph colouring 82
4.1.5 Knapsack problem using backtracking 85
5.1 Introduction 88
5.2 Least Cost (LC) Search 89
5.2.1 Control Abstractions for LC Search 91
5.2.2 15 Puzzle Problem 92
5.3 Bounding 93
5.3.1 Example: Job Scheduling with deadlines Problem 93
5.4 Travelling Salesman Problem 95
5.5 0/1 Knapsack Problem 102
5.5.1. LC Branch and Bound 103
5.5.2. FIFO Branch and Bound 107
SYLLABUS
CS T44 DESIGN AND ANALYSIS OF ALGORITHMS
COURSE OBJECTIVE
To acquaint students with algorithm techniques when programming for the
storage and manipulation of data.
The concept of data abstraction and the problem of building implementations of
abstract data types are emphasized.
UNIT I
Algorithms: Definitions and notations: standard notations - asymptotic notations – worst
case, best case and average case analysis; big oh, small oh, omega and theta notations;
Analysis of Sorting and Searching: Heap, shell, radix, insertion, selection and bubble sort;
sequential, binary and Fibonacci search. Recursive algorithms, analysis of non-recursive
and recursive algorithms, solving recurrence equations, analyzing control structures.
UNIT II
Divide and Conquer Method: General Method – binary search –maximum and minimum –
merge sort - quick sort – Strassen’s Matrix multiplication. Greedy Method: General
method – knapsack problem – minimum spanning tree algorithms – single source shortest
path algorithm – scheduling, optimal storage on tapes, optimal merge patterns.
UNIT III
Dynamic Programming: General method – multi-stage graphs – all pair shortest path
algorithm – 0/1 Knapsack and Traveling salesman problem – chained matrix
multiplication. Basic Search and Traversal technique: Techniques for binary trees and
graphs – AND/OR graphs – biconnected components – topological sorting.
UNIT IV
Backtracking: The general method – 8-queens problem – sum of subsets – graph coloring
– Hamiltonian cycle – Knapsack problem.
UNIT V
Branch and Bound Method: Least Cost (LC) search – the 15-puzzle problem – control
abstractions for LC-Search – Bounding – FIFO Branch-and-Bound - 0/1 Knapsack
problem – Traveling Salesman Problem. Introduction to NP-Hard and NP-Completeness.
TEXT BOOKS
Ellis Horowitz, Sartaj Sahni, Sanguthevar Rajasekaran, “Fundamentals of Computer
Algorithms”, Galgotia Publications Pvt. Ltd., 2008.
REFERENCE BOOKS
Gilles Brassard and Paul Bratley, “Fundamentals of Algorithms”, PHI, 1997.
Anany Levitin, “Introduction to the Design and Analysis of Algorithms”, Pearson Education,
2005.
Chapter 1: Algorithm
1.1 Algorithm
Informal Definition:
An Algorithm is any well-defined computational procedure that takes
some value or set of values as Input and produces a set of values or some value as
output. Thus algorithm is a sequence of computational steps that transforms the i/p
into the o/p.
Formal Definition:
An Algorithm is a finite set of instructions that, if followed, accomplishes
a particular task. In addition, all algorithms should satisfy the following criteria.
Here link is a pointer to the record type node. Individual data items of a
record can be accessed with → and a period.
5. Assignment of values to variables is done using the assignment statement.
<Variable>:= <expression>;
6. There are two Boolean values TRUE and FALSE.
Logical Operators AND, OR, NOT
Relational Operators <, <=, >, >=, =, !=
7. The following looping statements are employed.
For, while and repeat-until
While Loop:
While < condition > do
{
<statement-1>
.
.
.
<statement-n>
}
For Loop:
For variable: = value-1 to value-2 step step do
{
<statement-1>
.
.
.
<statement-n>
}
repeat-until:
repeat
<statement-1>
.
.
.
<statement-n>
until<condition>
8. A conditional statement has the following forms.
If <condition> then <statement>
If <condition> then <statement-1>
Else <statement-2>
Case statement:
Case
{
:<condition-1>:<statement-1>
.
.
.
:<condition-n>:<statement-n>
: else :<statement-n+1>
}
9. Input and output are done using the instructions read & write.
The space needed by each of these algorithms is seen to be the sum of the
following components:
1. A fixed part that is independent of the characteristics (e.g. number, size) of the
inputs and outputs.
This part typically includes the instruction space (i.e. space for the code),
space for simple variables and fixed-size component variables (also called
aggregates), space for constants, and so on.
2. A variable part that consists of the space needed by component variables
whose size depends on the particular problem instance being solved, the
space needed by referenced variables (to the extent that it depends on
instance characteristics), and the recursion stack space.
The space requirement S(P) of any algorithm P may therefore be written
as,
S(P) = c + Sp(instance characteristics)
where 'c' is a constant.
Example 2:
Algorithm sum(a,n)
{
s=0.0;
for I=1 to n do
s= s+a[I];
return s;
}
Algorithm sum(a,n)
{
s:=0.0;
count:=count+1; //count is global; it is initially zero
for i:=1 to n do
{
count:=count+1; //For for
s:=s+a[i];
count:=count+1; //For assignment
}
count:=count+1; //For last time of for
count:=count+1; //For the return
return s;
}
If the count is zero to start with, then it will be 2n+3 on termination. So each
invocation of sum executes a total of 2n+3 steps.
Clearly the partial rank of T[I] does not depend on the order of the elements
in the subarray T[1..I-1].
1.3 Analysis
Best case:
This analysis places constraints on the input, other than size, resulting in
the fastest possible running time.
Worst case:
This analysis places constraints on the input, other than size, resulting in
the slowest possible running time.
Average case:
This type of analysis results in the average running time over every type of input.
Omega: the function f(n) = Ω(g(n)) iff there exist positive constants c and n0 such that
f(n) ≥ c*g(n) for all n, n ≥ n0.
Theta: the function f(n) = θ(g(n)) iff there exist positive constants c1, c2 and n0 such
that c1*g(n) ≤ f(n) ≤ c2*g(n) for all n, n ≥ n0.
1.5 Analysis Of Searching
Searching
Let us assume that we have a sequential file and we wish to retrieve an
element matching with key 'k'; then, we have to search the entire file from the
beginning till the end to check whether the element matching k is present in the file
or not.
There are a number of complex searching algorithms to serve the purpose of
searching. The linear search and binary search methods are relatively
straightforward methods of searching.
1.5.1 Sequential Search
In this method, we start to search from the beginning of the list and examine
each element till the end of the list. If the desired element is found we stop the
search and return the index of that element. If the item is not found and the list is
exhausted the search returns a zero value.
Pseudo Code Algorithm:
Algorithm sequential(a[],n,x)
{
i:=1;
while(i<=n)
{
if(x==a[i])
{
write("Element found in ", i);
write("search successful");
exit( );
}
else
{
i:=i+1;
}
}
write("search unsuccessful");
}
Analysis:
Worst Case:
The worst case occurs when the item is not found or the search item is the last
(nth) element. For both situations we must examine all n elements of the array, so the
complexity of the sequential search is O(n). The execution time is proportional
to n, i.e. the algorithm executes in linear time.
Best Case:
The best case occurs when the search item is at the first location itself. So the
time complexity is O(1).
Average Case:
The average case occurs if the search element is at the middle location. So
we must examine n/2 elements of the array. So the time complexity is O(n/2).
Example:
a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]
20 35 18 8 14 41 3 39
Best Case:
If x=20 then element is found at the first location.
T(n)=O(1)
Average Case:
If x=8 then we must compare 4 elements to find it.
T(n)=O(n/2)
Worst Case:
If x=41 or 100 then we must examine all the 8 elements of the array. So the
complexity is O(n).
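A runnable sketch of the sequential search above (a 0-based Python list, but returning the 1-based position, or 0 on failure, as in the text; the function name is ours):

```python
def sequential_search(a, x):
    """Return the 1-based position of x in list a, or 0 if x is absent."""
    for i, item in enumerate(a, start=1):
        if item == x:
            return i          # found: stop and report the position
    return 0                  # list exhausted: unsuccessful search

a = [20, 35, 18, 8, 14, 41, 3, 39]
print(sequential_search(a, 20))   # best case: found at position 1
print(sequential_search(a, 41))   # found at position 6
print(sequential_search(a, 100))  # worst case: not found -> 0
```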
the list has no more elements. On the other hand, if the mid value is less than x,
then the list is chopped off at the (mid+1)th location. The middle entry of the
right-reduced list is examined and the procedure is continued until the desired key is
found or the search interval is exhausted.
The algorithm for binary search is as follows,
Algorithm : binary search
Input : A, vector of n elements
K, search element
Output : low –index of k
Method : low=1,high=n
While(low<=high-1)
{
mid=(low+high)/2
if(k<a[mid])
high=mid
else
low=mid
if end
}
while end
if(k = A[low])
{
write("search successful")
write("k is at location ", low)
exit();
}
else
write("search unsuccessful");
if end;
algorithm ends.
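The variant above defers the equality test until the search interval has shrunk to a single element. A 0-based Python sketch of the same idea (the interval-update rule is tightened slightly so the loop always terminates; the function name is ours):

```python
def binary_search(a, k):
    """Deferred-test binary search over a sorted list a.
    Returns the 0-based index of k, or -1 if k is not present."""
    low, high = 0, len(a) - 1
    while low < high:
        mid = (low + high) // 2
        if k <= a[mid]:
            high = mid          # k, if present, lies in a[low..mid]
        else:
            low = mid + 1       # k, if present, lies in a[mid+1..high]
    # single equality test once the interval holds one element
    if a and a[low] == k:
        return low
    return -1

# the 14 entries used in the simulation above
a = [-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151]
print(binary_search(a, 151))  # 13 (successful)
print(binary_search(a, 9))    # 4  (successful)
print(binary_search(a, -14))  # -1 (unsuccessful)
```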
6. If the item is greater than entry Fk-1, discard the elements from positions 1 to
Fk-1. Renumber the remaining elements from 1 to Fk-2, set k = k - 2, and
return to step 2.
1.6 Analysis Of Sorting
[Figure: a max heap with root 25, whose children are 13 and 17; 13 has children 5 and 8, and 17 has left child 3.]
All the tree levels are completely filled except possibly the lowest
level, which is filled from the left up to a point.
The levels above the lowest level form a complete binary tree of height h-1
and contain 2^h - 1 nodes.
A heap of height h has
Maximum no. of elements = 2^(h+1) - 1 nodes (when the lowest level is
completely filled).
Minimum no. of nodes = 2^h nodes.
Height of a node:
It is defined as no. of edges on a simple downward path from a node to leaf.
Height of a tree:
It is defined as no. of edges on a simple downward path from a root node to
leaf.
To insert an element into the heap, one adds it "at the bottom" of the heap
and then compares it with its parent, grandparent, great grandparent and so on, until
it is less than or equal to one of these values.
Example:
a[1] a[2] a[3] a[4] a[5] a[6] a[7]
40 80 35 90 45 50 70
Step1:Take the first element of an array.a[1]=40.
[Tree: the single node 40.]
Step 2: Take the 2nd element, a[2]=80. Add it as the left child of 40. Since 80 > 40,
adjust: exchange 80 and 40, so 80 becomes the root with left child 40.
Step 3:Take the 3rd element .a[3]=35.Add this element to the right side of the root
node.
[Tree: root 80 with children 40 and 35.]
Step 4:Take the 4th element .a[4]=90.Add this element to the left side of the node
40.
[Trees: 90 is added as the left child of 40; after adjusting twice, 90 becomes the root with children 80 and 35, and 40 becomes the left child of 80.]
Step 5:Take the 5th element .a[5]=45.Add this element to the right side of the node
80.
[Tree: root 90 with children 80 and 35; 80 has children 40 and 45. No adjustment is needed, since 45 < 80.]
Step 6:Take the 6th element .a[6]=50.Add this element to the left side of the node
35.
[Trees: 50 is added as the left child of 35; since 50 > 35, adjust exchanges them, giving root 90 with children 80 and 50, where 80 has children 40 and 45 and 50 has left child 35.]
Step 7:Take the 7th element .a[7]=70.Add this element to the right side of the node
50.
[Trees: 70 is added as the right child of 50; since 70 > 50, adjust exchanges them, giving root 90 with children 80 and 70, where 70 has children 35 and 50.]
The figure shows one example of how insert( ) would insert a new value into
an existing heap. To do heap sort, first add the first element to the tree, then add
the second element. If the tree does not satisfy the heap property, then
adjust the nodes in the tree. Then insert the next element and adjust the nodes.
Repeat the above steps till the insertion of the last element. Finally we obtain the
elements in heap tree representation.
Procedures for Heap Sort
1.Build a Heap(Heapify)
2.Adjust
Procedure Heapify( )
Creates a heap from n elements and maintains the heap property. It uses the
Adjust( ) procedure to arrange the elements.
Procedure Adjust( )
Adjust takes as input the array a[ ] and integers i and n. It regards a[1..n] as a
complete binary tree. If the subtrees rooted at 2i and 2i+1 are max heaps, then
Adjust will rearrange the elements of a[ ] such that the tree rooted at i is also a max
heap. The maximum element of the max heap a[1..n] can be deleted by deleting
the root of the corresponding complete binary tree. The last element of the array, i.e.
a[n], is copied to the root, and finally we call Adjust(a,1,n-1).
Algorithm Heapsort(a,n)
{
Heapify(a,n);
//Transform the array into heap.
//Interchange the new max with the element at the end of the array.
for i=n to 2 step –1 do
{
t:=a[i];
a[i]:=a[1];
a[1]:=t;
Adjust(a,1,i-1);
}
}
Algorithm Heapify(a,n)
{
//Readjust the elements in a[1:n] to form a heap.
for i:= n/2 to 1 step –1 do
Adjust(a,i,n);
}
Algorithm Adjust(a,i,n)
{
//The complete binary trees with roots 2i and 2i+1 are
//combined with node i to form a heap rooted at i.
//No node has an address greater than n or less than 1.
j=2i;
item=a[i];
while (j<=n) do
{
if ((j<n) and (a[j]< a[j+1])) then
j=j+1;
//compare left and right child and let j be the larger child
if (item >= a[j]) then break;
// A position for item is found
a[j/2]=a[j];
j=2j;
}
a[j/2]=item;
}
Analysis:
Heapify( ) requires O(n) operations.
Adjust( ) requires O(log n) operations for each invocation. Adjust is called n
times by Heapsort( ), so that phase takes O(n log n) time.
Heapsort = O(n) + O(n log n) = O(n log n).
Algorithm HeapSort( ):
T(n) = O(n log n)
Algorithm Heapify( ):
T(n) = O(n)
Algorithm Adjust( ):
T(n) = O(log n) per invocation
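The three procedures above can be sketched in Python. This uses 0-based indexing, so the children of node i are 2i+1 and 2i+2 rather than 2i and 2i+1; the function names mirror the pseudocode but are otherwise ours:

```python
def adjust(a, i, n):
    """Sift a[i] down so the subtree rooted at i (within a[0:n]) satisfies
    the max-heap property, assuming both of its subtrees already do."""
    item = a[i]
    j = 2 * i + 1                          # left child of i
    while j < n:
        if j + 1 < n and a[j] < a[j + 1]:
            j += 1                         # let j index the larger child
        if item >= a[j]:
            break                          # a position for item is found
        a[(j - 1) // 2] = a[j]             # promote the larger child
        j = 2 * j + 1
    a[(j - 1) // 2] = item

def heapify(a):
    """Build a max heap from an unordered list, bottom up: O(n)."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):
        adjust(a, i, n)

def heap_sort(a):
    heapify(a)
    for end in range(len(a) - 1, 0, -1):
        a[0], a[end] = a[end], a[0]        # move the current max to the end
        adjust(a, 0, end)                  # restore the heap in a[0:end]

data = [40, 80, 35, 90, 45, 50, 70]        # the example array above
heap_sort(data)
print(data)  # [35, 40, 45, 50, 70, 80, 90]
```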
significant digit and combined in an array. Then on the second pass, the
numbers are sorted on the next significant digit and combined in an array, and so on.
Each element in the n-element array A has d digits, where digit 1 is
the lowest order digit and digit d is the highest order digit.
This algorithm is based on the properties of the keys. All the other algorithms
here are comparison sorts, but in this algorithm, instead of comparing whole
elements, we sort the elements digit by digit.
Pseudo Code Algorithm:
Algorithm radixsort(a,n,d)
{
for i:= 1 to d do
{
for j:=0 to 9 do
{
for k:=0 to 9 do
b[i][k]:=0;
for j:=1 to n do
{
k:=( a[j] % pow(10,i) - a[j] % pow(10,i-1) ) / pow(10,i-1);
b[k][0]:=b[k][0]+1;
b[k][b[k][0]]:=a[j];
}
}
}
for j:=0 to 9 do
{
for k:=2 to b[j][1] do
{
a[b++]:=b[j][k+1];
}
}
}
Algorithm pow(a,b)
{
temp:=1;
for i:= 1 to b do
temp:=temp*a;
return temp;
}
Example:
(For 329, 3 is the most significant digit (MSB) and 9 the least significant (LSB).)
Input   Pass 1   Pass 2   Pass 3
329     720      720      329
457     355      329      355
657     436      436      436
839     457      839      457
436     657      355      657
720     329      457      720
355     839      657      839
Analysis:
Running time depends on the stable sort.
Each digit is in the range of 1 to k.
Each pass over n takes θ (n+k) time.
There are d no. of passes.
Total time for radix sort is θ(dn+kd). When d is a constant and k is O(n),
the radix sort is linear in time.
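A sketch of LSD radix sort in Python. It uses ten Python lists as the buckets, rather than the b[][] count array of the pseudocode above; that representation, and the function name, are ours:

```python
def radix_sort(a, d):
    """LSD radix sort of non-negative integers with at most d decimal digits.
    Each pass distributes the numbers into 10 buckets by one digit and
    collects them back in order.  Each pass is stable, so after d passes
    (least significant digit first) the list is fully sorted."""
    for i in range(d):                         # pass i sorts on digit i
        buckets = [[] for _ in range(10)]
        for x in a:
            buckets[(x // 10 ** i) % 10].append(x)
        a = [x for b in buckets for x in b]    # collect buckets 0..9 in order
    return a

print(radix_sort([329, 457, 657, 839, 436, 720, 355], 3))
# [329, 355, 436, 457, 657, 720, 839]
```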
1.6.3 Insertion Sort
If the first few objects are already sorted, an unsorted object can be inserted
in the sorted set in proper place. This is called insertion sort.
An algorithm consider the elements one at a time, inserting each in its suitable
place among those already considered (keeping them sorted).Insertion sort is an
example of an incremental algorithm; it builds the sorted sequence one number at a
time.
Pseudo Code Algorithm:
Algorithm Insertion_Sort(A,n)
1. For j = 2 to length [A] do
2. key = A[j]
3. // Put A[j] into the sorted sequence A[1 . . j-1]
4. i ← j -1
5. while i > 0 and A[i] > key do
6. A[i+1] = A[i]
7. i = i-1
8. A[i+1] = key
Analysis:
Best-Case:
The while-loop in line 5 executed only once for each j. This happens if given
array A is already sorted.
T(n) = an + b = O(n)
T(n)=O(n)
It is a linear function of n.
Worst-Case:
The worst-case occurs, when line 5 executed j times for each j. This can
happens if array A starts out in reverse order .
T(n) = an^2 + bn + c = O(n^2)
T(n) = O(n^2)
It is a quadratic function of n.
Extra Memory
This algorithm does not require extra memory.
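The numbered pseudocode above translates almost line for line into Python (0-based indexing; the function name is ours):

```python
def insertion_sort(a):
    """In-place insertion sort: element j is inserted into the already
    sorted prefix a[0:j] by shifting larger elements one place right."""
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]    # shift the larger element right
            i -= 1
        a[i + 1] = key         # drop key into its place
    return a

print(insertion_sort([20, 35, 18, 8, 14, 41, 3, 39]))
# [3, 8, 14, 18, 20, 35, 39, 41]
```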
Pseudo Code Algorithm:
Algorithm Selection_Sort (A,n)
{
for i := 1 to n-1 do
{
min := i;
m:=a[min];
for j := i+1 to n do
{
if (a[j] < m) then
{
min := j;
m := a[j];
}
}
t := a[min];
a[min] := a[i];
a[i] := t;
}
}
Example
Consider the unsorted array, n=8
a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]
20 35 18 8 14 41 3 39
After sorting ,the resulting array should be
a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]
3 8 14 18 20 35 39 41
When i=1
Find the smallest element in the unsorted array.
Place the smallest element in position of a[1]
i.e., the smallest element in the unsorted array is 3 so exchange the values of
a[1] and a[7].
When i=2
Find the smallest from a[2] to a[8] , i.e., 8 in a[4]
Exchange the values of a[2] and a[4].
a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]
3 8 18 35 14 41 20 39
When i=3
Find the smallest from a[3] to a[8] , i.e., 14 in a[5]
Exchange the values of a[3] and a[5].
a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]
3 8 14 35 18 41 20 39
When i=4
Find the smallest from a[4] to a[8] , i.e., 18 in a[5]
Exchange the values of a[4] and a[5].
a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]
3 8 14 18 35 41 20 39
When i=5
Find the smallest from a[5] to a[8] , i.e., 20 in a[7]
Exchange the values of a[5] and a[7].
a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]
3 8 14 18 20 41 35 39
When i=6
Find the smallest from a[6] to a[8] , i.e., 35 in a[7]
Exchange the values of a[6] and a[7].
a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]
3 8 14 18 20 35 41 39
When i=7
Find the smallest from a[7] to a[8] , i.e., 39 in a[8]
Exchange the values of a[7] and a[8].
a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]
3 8 14 18 20 35 39 41
The number of moves with this technique is always of the order O(n).
Selection sort works very well for small files. It has a quite important application
because each item is actually moved at most once. It is a method of choice for
sorting files with very large objects (records) and small keys.
Analysis:
The number of moves with this technique is always of the order O(n), but the
number of comparisons is of the order O(n^2). So the time complexity is O(n^2).
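A 0-based Python sketch of selection sort as described above (the function name is ours):

```python
def selection_sort(a):
    """In-place selection sort: on pass i, find the minimum of a[i:]
    and swap it into position i.  O(n^2) comparisons, O(n) swaps."""
    n = len(a)
    for i in range(n - 1):
        m = i                        # index of the smallest seen so far
        for j in range(i + 1, n):
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]      # one swap per pass
    return a

print(selection_sort([20, 35, 18, 8, 14, 41, 3, 39]))
# [3, 8, 14, 18, 20, 35, 39, 41]
```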
7. Steps 2, 3, 4 & 5 are then repeated until no swaps have been done on
the last pass.
8. At the end of the 2nd pass the 2nd largest element is positioned in the
corresponding location.
9. For n data items, the process continues for (n-1) passes or until no exchanges
are made in a single pass.
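The steps above can be sketched in Python, including the early exit when a pass makes no exchanges (the function name is ours):

```python
def bubble_sort(a):
    """Repeatedly compare adjacent pairs and swap out-of-order ones;
    stop early when a full pass makes no exchanges."""
    n = len(a)
    for p in range(n - 1):             # at most n-1 passes
        swapped = False
        for i in range(n - 1 - p):     # the last p items are already in place
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:                # no exchanges: the list is sorted
            break
    return a

print(bubble_sort([20, 35, 18, 8, 14, 41, 3, 39]))
# [3, 8, 14, 18, 20, 35, 39, 41]
```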
1.7 Recursion
Recursion may have the following definitions:
-The nested repetition of identical algorithm is recursion.
-It is a technique of defining an object/process by itself.
-Recursion is a process by which a function calls itself repeatedly until some
specified condition has been satisfied.
When to use recursion
Recursion can be used for repetitive computations in which each action is
stated in terms of previous result. There are two conditions that must be satisfied by
any recursive procedure.
1. Each time a function calls itself it should get nearer to the solution.
2. There must be a decision criterion for stopping the process.
In making the decision about whether to write an algorithm in recursive or non-
recursive form, it is always advisable to consider a tree structure for the problem. If
the structure is simple, then use the non-recursive form. If the tree appears quite bushy,
with little duplication of tasks, then recursion is suitable.
The recursion algorithm for finding the factorial of a number is given below,
Algorithm : factorial-recursion
Input : n, the number whose factorial is to be found.
Output : f, the factorial of n
Method : if(n=0)
f=1
else
f=factorial(n-1) * n
if end
algorithm ends.
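The factorial algorithm above is a direct transcription into Python: the argument shrinks on every call, and n = 0 is the stopping criterion.

```python
def factorial(n):
    """Recursive factorial of a non-negative integer n."""
    if n == 0:
        return 1                     # termination criterion
    return factorial(n - 1) * n      # each call gets nearer to the solution

print(factorial(5))  # 120
```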
The general procedure for any recursive algorithm is as follows,
1. Save the parameters, local variables and return address.
2. If the termination criterion is reached, perform the final computation and go to step
3; otherwise, perform partial computation and go to step 1 (initiate a recursive call).
3. Restore the most recently saved parameters, local variables and return
address, and go to the latest return address.
Iteration is more of a bottom-up approach. It begins with what is known and from this
constructs the solution step by step. The iterative function obviously uses time that is
O(n), whereas the recursive function has an exponential time complexity.
It is always true that recursion can be replaced by iteration and stacks. It is also true
that a stack can be replaced by a recursive program with no stack.
Figure 1.1
T(n) = 0                if n = 0
T(n) = 3T(n ÷ 2) + n    otherwise

n     1   2   4    8    16    32
T(n)  1   5   19   65   211   665
* For instance, T(16) = 3 * T(8) + 16
= 3 * 65 + 16
= 211.
* Instead of writing T(2) = 5, it is more
useful to write T(2) = 3 * 1 + 2.
Then,
T(4) = 3 * T(2) + 4
= 3 * (3 * 1 + 2) + 4
= (3^2 * 1) + (3 * 2) + 4
* We continue in this way, writing 'n' as an explicit power of 2.
n      T(n)
1      1
2      3 * 1 + 2
2^2    3^2 * 1 + 3 * 2 + 2^2
2^3    3^3 * 1 + 3^2 * 2 + 3 * 2^2 + 2^3
2^4    3^4 * 1 + 3^3 * 2 + 3^2 * 2^2 + 3 * 2^3 + 2^4
2^5    3^5 * 1 + 3^4 * 2 + 3^3 * 2^2 + 3^2 * 2^3 + 3 * 2^4 + 2^5
Thus, for n = 2^k,
T(2^k) = Σ (i=0 to k) 3^(k-i) 2^i
       = 3^k Σ (i=0 to k) (2/3)^i
       = 3^k * ( (3^(k+1) - 2^(k+1)) / 3^(k+1) ) * 3
       = 3^(k+1) - 2^(k+1)
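The table and the closed form can be checked mechanically; a small Python sketch (the function T is coded directly from the recurrence above):

```python
def T(n):
    """The recurrence T(0) = 0, T(n) = 3*T(n // 2) + n."""
    return 0 if n == 0 else 3 * T(n // 2) + n

# closed form derived above: for n = 2^k, T(n) = 3^(k+1) - 2^(k+1)
for k in range(6):
    assert T(2 ** k) == 3 ** (k + 1) - 2 ** (k + 1)
print([T(2 ** k) for k in range(6)])  # [1, 5, 19, 65, 211, 665]
```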
Eg : 2
tn = 0                       n = 0
tn = 5                       n = 1
tn = 3t(n-1) + 4t(n-2)       otherwise

Characteristic polynomial: x^2 - 3x - 4 = 0
(x - 4)(x + 1) = 0
Roots: r1 = 4, r2 = -1
General solution: tn = C1 * 4^n + C2 * (-1)^n
From eqn (1), C1 = -C2
C2 = 5 / (-5) = -1, so C2 = -1, C1 = 1
tn = 1 * 4^n + (-1) * (-1)^n
tn = 4^n - (-1)^n
2. Homogenous Recurrences :
* We begin our study of the technique of the characteristic equation with the
resolution of homogenous linear recurrences with constant co-efficient, i.e the
recurrences of the form,
a0tn + a1tn-1 + ….. + aktn-k = 0
where the ti are the values we are looking for.
* The values of ti on 'k' values of i (usually 0 ≤ i ≤ k-1 (or) 1 ≤ i ≤ k) are needed
to determine the sequence.
* The initial conditions will be considered later.
* The equation typically has infinitely many solutions.
* The recurrence is,
linear because it does not contain terms of the form t(n-i)*t(n-j), t^2(n-i), and
so on,
homogeneous because the linear combination of the t(n-i) is equal to zero, and
with constant coefficients because the ai are constants.
* Consider for instance our now familiar recurrence for the Fibonacci sequence,
fn = f(n-1) + f(n-2)
* This recurrence easily fits the mould of the equation after obvious rewriting.
fn - f(n-1) - f(n-2) = 0
* Therefore, the Fibonacci sequence corresponds to a homogeneous linear
recurrence
with constant coefficients with k = 2, a0 = 1 and a1 = a2 = -1.
* In other words, if fn and gn satisfy the equation,
so that Σ (i=0 to k) ai f(n-i) = 0 and similarly for gn,
and we set tn = c fn + d gn for arbitrary constants c and d, then tn is also a solution
to the equation.
* This is true because,
a0 tn + a1 t(n-1) + ... + ak t(n-k)
= a0 (c fn + d gn) + a1 (c f(n-1) + d g(n-1)) + ... + ak (c f(n-k) + d g(n-k))
= c (a0 fn + a1 f(n-1) + ... + ak f(n-k)) +
d (a0 gn + a1 g(n-1) + ... + ak g(n-k))
= c * 0 + d * 0
= 0.
1) (Fibonacci) Consider the recurrence.
fn = n                    if n = 0 or n = 1
fn = f(n-1) + f(n-2)      otherwise

The characteristic equation is x^2 - x - 1 = 0, whose roots are
x = (1 ± √(1 + 4)) / 2
  = (1 ± √5) / 2
r1 = (1 + √5)/2 and r2 = (1 - √5)/2
The general solution is,
fn = C1 r1^n + C2 r2^n
when n=0, f0 = C1 + C2 = 0
when n=1, f1 = C1 r1 + C2 r2 = 1
C1 + C2 = 0           (1)
C1 r1 + C2 r2 = 1     (2)
C1 = -C2
Substitute C1 in equation (2):
-C2 r1 + C2 r2 = 1
C2 [r2 - r1] = 1
C2 [ (1 - √5)/2 - (1 + √5)/2 ] = 1
C2 [ (1 - √5 - 1 - √5)/2 ] = 1
(-C2 * 2√5) / 2 = 1
-√5 C2 = 1
C2 = -1/√5 , C1 = 1/√5
Thus,
fn = (1/√5) ((1 + √5)/2)^n - (1/√5) ((1 - √5)/2)^n
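The closed form can be checked against an iterative computation (round() compensates for floating-point error, which is adequate only for moderate n; the function names are ours):

```python
from math import sqrt

def fib(n):
    """Iterative Fibonacci, f0 = 0, f1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_closed(n):
    """Closed form from the characteristic-equation method above."""
    r1 = (1 + sqrt(5)) / 2
    r2 = (1 - sqrt(5)) / 2
    return round((r1 ** n - r2 ** n) / sqrt(5))

for n in range(20):
    assert fib(n) == fib_closed(n)
print(fib_closed(10))  # 55
```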
3. Inhomogeneous recurrence
* The solution of a linear recurrence with constant co-efficient becomes more
difficultwhen the recurrence is not homogeneous, that is when the linear combination
is not equal to zero.
* Consider the following recurrence
a0 tn + a1 t(n-1) + ... + ak t(n-k) = b^n p(n)
* The left hand side is the same as before,(homogeneous) but on the right-hand
side
we have b^n p(n), where,
b is a constant
p(n) is a polynomial in 'n' of degree 'd'.
Example(1)
Consider the recurrence,
tn - 2t(n-1) = 3^n      (A)
In this case, b=3, p(n) = 1, degree = 0.
The characteristic polynomial is,
(x – 2)(x – 3) = 0
The roots are, r1 = 2, r2 = 3
The general solution,
tn = C1 r1^n + C2 r2^n
tn = C1 2^n + C2 3^n      (1)
Example :(2)
substituting n=1 in eqn (A),
t1 - 2t0 = 6 * 3
t1 - 2t0 = 18
t1 = 18 + 2t0
substituting the t1 value in eqn (3),
2C1 + 3C2 + 3C3 = 18 + 2t0      (4)
n=0: C1 + C2 = t0               (2)
n=1: 2C1 + 3C2 + 3C3 = 18 + 2t0 (4)
n=2: 4C1 + 9C2 + 18C3 = 99 + 4t0 (5)
Sub. C2 = 9 in eqn (2),
C1 + C2 = t0
C1 + 9 = t0
C1 = t0 - 9
3C3 = 9
C3 = 9/3 = 3
Sub. C1, C2, C3, r1, r2, r3 values in eqn (1):
tn = C1 2^n + C2 3^n + C3 n 3^n
   = (t0 - 9) 2^n + 9 * 3^n + 3n * 3^n
tn = Max[ O[(t0-9) 2^n], O[9 * 3^n], O[3n * 3^n] ] = O(n * 3^n)
4. Change of variables:
* It is sometimes possible to solve more complicated recurrences by making a
change of variable.
* In the following example, we write T(n) for the term of a general recurrences,
and ti for the term of a new recurrence obtained from the first by a change of
variable.
Example: (1)
Consider the recurrence,
T(n) = 1                  if n = 1
T(n) = 3T(n/2) + n        if 'n' is a power of 2, n > 1
Replace n by 2^i and write ti for T(2^i) to obtain
ti = 3t(i-1) + 2^i
In this case,
b = 2, p(i) = 1, degree = 0
The roots are, r1 = 3, r2 = 2.
We use the fact that T(2^i) = ti, and thus T(n) = t(log2 n) when n = 2^i, to obtain
T(n) = C1 * 3^(log2 n) + C2 * 2^(log2 n)
T(n) = C1 * n^(log2 3) + C2 * n      [since a^(log2 n) = n^(log2 a)]
when 'n' is a power of 2, which is sufficient to conclude that T(n) = O(n^(log2 3)).
Example: (2)
In this eqn,
b = 4, P(n) = 1, degree = 0
Example : 3
In this case,
b = 2, P(n) = i, degree = 1
(x - 2)(x - 2)^2 = 0
The roots are, r1 = 2, r2 = 2, r3 = 2
T(n) = O(n (log2 n)^2)
5. Range Transformation:
* When we make a change of variable, we transform the domain of the recurrence.
* Instead it may be useful to transform the range to obtain a recurrence in a form
that we know how to solve
* Both transformation can be sometimes be used together.
Example: 1
Consider the following recurrence , which defines T(n). where ‗n‘ is the power
Of 2
1/3 , if n=1
T(n) =
n T2(n/2) , otherwise
The first step is a change of variable,
Let ti denote T(2i)
ti = T(2^i) = 2^i T^2(2^(i-1))
   = 2^i t(i-1)^2
* This recurrence is not in a form we know how to solve; furthermore the
coefficient 2^i is not a constant.
* To transform the range, we create another recurrence by using ui to denote
lg ti:
ui = lg ti = i + 2 lg t(i-1)
   = i + 2u(i-1)
ui - 2u(i-1) = i
(x - 2)(x - 1)^2 = 0
We use the initial condition T(1) = 1/3
to determine C1: T(1) = 2^C1 / 4 = 1/3,
which implies that C1 = lg(4/3) = 2 - lg 3.
Chapter 2
2.1 General Method
Given a function to compute on 'n' inputs, the divide-and-conquer strategy
suggests splitting the inputs into 'k' distinct subsets, 1 < k <= n, yielding 'k' sub
problems. These sub problems must be solved, and then a method must be
found to combine sub solutions into a solution of the whole.
If the sub problems are still relatively large, then the divide-and-conquer
strategy can possibly be reapplied.
Often the sub problems resulting from a divide-and-conquer design are of the
same type as the original problem.
For those cases the re application of the divide-and-conquer principle is
naturally expressed by a recursive algorithm.
DAndC (the algorithm below) is initially invoked as DAndC(P), where 'P' is the problem
to be solved.
Small(P) is a Boolean-valued function that determines whether the input size is
small enough that the answer can be computed without splitting.
If this is so, the function 'S' is invoked.
Otherwise the problem P is divided into smaller sub problems.
These sub problems P1,P2,………Pk are solved by recursive application of
DAndC.
Combine is a function that determines the solution to p using the solutions to
the ‗k‘ sub problems.
If the size of 'P' is n and the sizes of the 'k' sub problems are n1, n2, ..., nk,
respectively, then the computing time of DAndC is described by the
recurrence relation
T(n) = g(n)                                    n small
T(n) = T(n1) + T(n2) + ... + T(nk) + f(n)      otherwise
where T(n) is the time for DAndC on any input of size 'n',
g(n) is the time to compute the answer directly for small inputs, and
f(n) is the time for dividing P and combining the solutions to the sub problems.
Pseudo Code Algorithm for D&C Method:
1. Algorithm D And C(P)
2. {
3. if small(P) then return S(P);
4. else
5. {
6. divide P into smaller instances
P1,P2,…….,Pk, k>=1;
7. Apply D And C to each of these sub problems;
8. return combine (D And C(P1), D And C(P2),…….,D And C(Pk));
9. }
10. }
One of the methods for solving any such recurrence relation is called the
substitution method.
This method repeatedly makes substitution for each occurrence of the
function. T is the Right-hand side until all such occurrences disappear.
Example:
1) Consider the case in which a=2 and b=2. Let T(1)=2 & f(n)=n.
We have,
T(n) = 2T(n/2)+n
= 2[2T(n/2/2)+n/2]+n
= [4T(n/4)+n]+n
= 4T(n/4)+2n
= 4[2T(n/4/2)+n/4]+2n
= 4[2T(n/8)+n/4]+2n
= 8T(n/8)+n+2n
= 8T(n/8)+3n
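Carrying the substitution to its end: after k steps, T(n) = 2^k T(n/2^k) + kn, so for n = 2^k (with T(1) = 2) we get T(n) = n log2 n + 2n. A quick Python check, coded directly from the recurrence:

```python
def T(n):
    """The recurrence T(1) = 2, T(n) = 2*T(n // 2) + n, n a power of 2."""
    return 2 if n == 1 else 2 * T(n // 2) + n

# closed form: for n = 2^k, T(n) = n*k + 2n = n*(k + 2)
for k in range(8):
    assert T(2 ** k) == 2 ** k * (k + 2)
print(T(8))  # 40
```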
We observe that low & high are integer variables such that each time through
the loop either x is found, or low is increased by at least one, or high is
decreased by at least one.
Thus we have 2 sequences of integers approaching each other, and
eventually low becomes greater than high and causes termination in a finite no. of
steps if 'x' is not present.
Example:
1. Let us select the 14 entries,
-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151,
place them in a[1:14], and simulate the steps Binsearch goes through as it
searches for different values of 'x'.
Only the variables low, high & mid need to be traced as we simulate the
algorithm.
We try the following values for x: 151, -14 and 9,
for 2 successful searches & 1 unsuccessful search.
If low becomes greater than high, then 'x' is not present & hence the loop is exited.
Analysis
Time Complexity for Successful Search
The no. of element comparisons is (n-1).
Worst Case
The worst case occurs when the elements are in decreasing order.
The no. of element comparisons is 2(n-1).
Average Case
The average no. of element comparisons is less than 2(n-1).
On the average, a[i] is greater than max half the time, and so the average no. of
comparisons is 3(n-1)/2.
Divide & Conquer Method
A divide- and conquer algorithm for this problem would proceed as follows:
Let P=(n,a[I],……,a[j]) denote an arbitrary instance of the problem.
Here ‗n‘ is the no. of elements in the list (a[i],….,a[j]) and we are interested in
finding the maximum and minimum of the list.
If the list has more than 2 elements, P has to be divided into smaller
instances.
For example , we might divide ‗P‘ into the 2 instances,
P1=([n/2],a[1],……..a[n/2]) & P2= (n-[n/2],a[[n/2]+1],…..,a[n])
After having divided ‗P‘ into 2 smaller sub problems, we can solve them by
recursively invoking the same divide-and-conquer algorithm.
Algorithm: Recursively Finding the Maximum & Minimum
1. Algorithm MaxMin (i,j,max,min)
2. //a[1:n] is a global array, parameters i & j
3. //are integers, 1<=i<=j<=n.The effect is to
4. //set max & min to the largest & smallest value
5. //in a[i:j], respectively.
6. {
7. if(i=j) then max:= min:= a[i];
8. else if (i=j-1) then // Another case of small(p)
9. {
10. if (a[i]<a[j]) then
11. {
12. max:=a[j];
13. min:=a[i];
14. }
15. else
16. {
17. max:=a[i];
18. min:=a[j];
19. }
20. }
21. else
22. {
23. // if P is not small, divide P into subproblems.
24. // find where to split the set
25. mid:=(i+j)/2;
26. //solve the subproblems
27. MaxMin(i,mid,max,min);
28. MaxMin(mid+1,j,max1,min1);
29. //combine the solutions
30. if (max<max1) then max=max1;
31. if(min>min1) then min = min1;
32. }
33. }
The procedure is initially invoked by the statement,
MaxMin(1,n,x,y)
Suppose we simulate MaxMin on the following 9 elements
A: [1] [2] [3] [4] [5] [6] [7] [8] [9]
22 13 -5 -8 15 60 17 31 47
A good way of keeping track of recursive calls is to build a tree by adding a
node each time a new call is made.
For this algorithm, each node has 4 items of information: i, j, max & min.
Examining fig: we see that the root node contains 1 & 9 as the values of i &j
corresponding to the initial call to MaxMin.
This execution produces 2 new calls to MaxMin, where i&j have the values 1,
5 & 6 ,9 respectively & thus split the set into 2 subsets of approximately the
same size.
From the tree, we can immediately see the maximum depth of recursion is 4.
(including the 1st call)
The included numbers in the upper left corner of each node represent the
order in which max & min are assigned values.
No. of element Comparisons
If T(n) represents this number, then the resulting recurrence relation is
T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 2     n > 2
T(n) = 1                          n = 2
T(n) = 0                          n = 1
when ‗n‘ is a power of 2, n=2^k for some +ve integer ‗k‘, then
T(n) = 2T(n/2) +2
= 2(2T(n/4)+2)+2
= 4T(n/4)+4+2
  .
  .
= 2^(k-1) T(2) + 2^(k-1) + 2^(k-2) + ... + 2
= 2^(k-1) + 2^k - 2
= n/2 + n - 2
T(n) = (3n/2) - 2
Note that (3n/2) - 2 is the best-, average- and worst-case number of comparisons when
'n' is a power of 2.
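The recursive scheme above can be sketched directly in Python (a minimal sketch with 0-based indexing; the function name max_min is mine):

```python
def max_min(a, i, j):
    """Return (max, min) of a[i..j] by divide and conquer."""
    if i == j:                      # one element: no comparison needed
        return a[i], a[i]
    if i == j - 1:                  # two elements: a single comparison
        return (a[j], a[i]) if a[i] < a[j] else (a[i], a[j])
    mid = (i + j) // 2
    max1, min1 = max_min(a, i, mid)          # left half
    max2, min2 = max_min(a, mid + 1, j)      # right half
    # combine: two more comparisons
    return max(max1, max2), min(min1, min2)

A = [22, 13, -5, -8, 15, 60, 17, 31, 47]
print(max_min(A, 0, len(A) - 1))   # → (60, -8)
```

Running it on the 9-element array simulated above returns the maximum 60 and the minimum -8, matching the tree of recursive calls.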
Given a sequence of 'n' elements a[1],…,a[n], the general idea is to imagine
them split into 2 sets a[1],…,a[⌊n/2⌋] and a[⌊n/2⌋+1],…,a[n].
Each set is individually sorted, and the resulting sorted sequences are
merged to produce a single sorted sequence of 'n' elements.
Thus, we have another ideal example of the divide-and-conquer strategy, in
which the splitting is into 2 equal-sized sets and the combining operation is the
merging of 2 sorted sets into one.
23. for k := j to high do
24. {
25. b[i] := a[k];
26. i := i+1;
27. }
28. else
29. for k := h to mid do
30. {
31. b[i] := a[k];
32. i := i+1;
33. }
34. for k := low to high do
35. a[k] := b[k];
36. }
Consider the array of 10 elements a[1:10] =(310, 285, 179, 652, 351, 423,
861, 254, 450, 520)
Algorithm Mergesort begins by splitting a[] into 2 subarrays each of size five
(a[1:5] and a[6:10]).
The elements in a[1:5] are then split into 2 subarrays of size 3 (a[1:3] ) and
2(a[4:5])
Then the items in a[1:3] are split into one subarray of size 2 (a[1:2]) and one of size 1 (a[3:3]).
The 2 values in a[1:2] are split a final time into one-element subarrays, and
now the merging begins.
(310| 285| 179| 652, 351| 423, 861, 254, 450, 520)
Where vertical bars indicate the boundaries of subarrays.
Elements a[1] and a[2] are merged to yield,
(285, 310|179|652, 351| 423, 861, 254, 450, 520)
Then a[3] is merged with a[1:2] and
(179, 285, 310| 652, 351| 423, 861, 254, 450, 520)
Next, elements a[4] & a[5] are merged.
(179, 285, 310| 351, 652 | 423, 861, 254, 450, 520)
and then a[1:3] & a[4:5]
(179, 285, 310, 351, 652| 423, 861, 254, 450, 520)
Repeated recursive calls are invoked producing the following subarrays.
(179, 285, 310, 351, 652| 423| 861| 254| 450, 520)
Elements a[6] & a[7] are merged:
(179, 285, 310, 351, 652| 423, 861| 254| 450, 520)
Then a[8] is merged with a[6:7]:
(179, 285, 310, 351, 652| 254,423, 861| 450, 520)
Next a[9] &a[10] are merged, and then a[6:8] & a[9:10]
(179, 285, 310, 351, 652| 254, 423, 450, 520, 861 )
At this point there are 2 sorted subarrays& the final merge produces the fully
sorted result.
(179, 254, 285, 310, 351, 423, 450, 520, 652, 861)
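The whole splitting-and-merging process can be sketched in a few lines of Python (a top-down sketch; the function name merge_sort is mine):

```python
def merge_sort(a):
    """Sort a list by splitting it in half, sorting each half, and merging."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # merge the two sorted halves into b
    b, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            b.append(left[i]); i += 1
        else:
            b.append(right[j]); j += 1
    return b + left[i:] + right[j:]   # append whichever half has leftovers

a = [310, 285, 179, 652, 351, 423, 861, 254, 450, 520]
print(merge_sort(a))
# → [179, 254, 285, 310, 351, 423, 450, 520, 652, 861]
```

The output matches the fully sorted result of the trace above.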
Tree for calls of MergeSort
Figure 2.1: each node shows the (low, high) values of a recursive call. The root is (1,10), with children (1,5) and (6,10), continuing down to one-element ranges such as (1,1), (2,2), (6,6) and (7,7).
This represents the sequence of recursive calls that are produced by merge
sort when it is applied to 10 elements.
The pair of values in each node are the values of parameters low and high.
The splitting process is continued until each set contains a single element.
Tree for calls of Merge
Figure 2.2: each node shows the (low, mid, high) values of a call to Merge, e.g. (1,1,2), (6,6,7), (1,3,5), (6,8,10) and finally (1,5,10).
This tree represents the calls to procedure Merge made by MergeSort.
The values in each node are the values of the parameters low, mid and high.
Time Complexity
If the time for the merging operation is proportional to 'n', then the computing
time for merge sort is described by the recurrence relation
T(n) = a               for n = 1, 'a' a constant
     = 2T(n/2) + cn    for n > 1, 'c' a constant
whose solution is T(n) = O(n log n).
7. // a[k] >= t for q < k < p. q is returned.
8. // Set a[p] = +∞ (sentinel).
9. {
10. v := a[m]; i := m; j := p;
11. repeat
12. {
13. repeat
14. i := i+1;
15. until (a[i] >= v);
16. repeat
17. j := j-1;
18. until (a[j] <= v);
19. if (i < j) then Interchange(a, i, j);
20. } until (i >= j);
21. a[m] := a[j]; a[j] := v;
22. return j;
23. }
Algorithm
1. Algorithm Interchange(a, i, j)
2. // Exchange a[i] with a[j].
3. {
4. p := a[i];
5. a[i] := a[j];
6. a[j] := p;
7. }
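The Partition procedure above can be tried out in Python on the 10-element array used earlier in this chapter, with math.inf as the sentinel a[p] (a direct transcription of the pseudocode, 0-based):

```python
import math

def partition(a, m, p):
    """Partition a[m..p-1] about the pivot v = a[m]; a[p] is a +∞ sentinel.
    On return a[j] = v, elements left of j are <= v, right of j are >= v."""
    v, i, j = a[m], m, p
    while True:
        i += 1
        while a[i] < v:          # repeat i := i+1 until a[i] >= v
            i += 1
        j -= 1
        while a[j] > v:          # repeat j := j-1 until a[j] <= v
            j -= 1
        if i < j:
            a[i], a[j] = a[j], a[i]   # Interchange(a, i, j)
        else:
            break
    a[m], a[j] = a[j], v         # place the pivot in its final position
    return j

a = [310, 285, 179, 652, 351, 423, 861, 254, 450, 520, math.inf]
j = partition(a, 0, 10)
print(j, a[j])   # → 3 310
```

The pivot 310 ends at index 3 with all smaller elements to its left and all larger elements to its right.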
C(i,j) = Σ (k=1 to n) A(i,k) * B(k,j), for all 'i' and 'j' between 1 and n.
The time complexity for the matrix Multiplication is O(n^3).
Divide & Conquer Method
The divide-and-conquer method suggests another way to compute the product of
two n×n matrices.
We assume that n is a power of 2. If n is not a power of 2, then enough rows
and columns of zeros can be added to both A and B so that the resulting
dimensions are a power of two.
If n=2, then the product of the elements of A & B is computed directly using
the formulas below.
If n>2, then the matrices are partitioned into n/2 × n/2 submatrices. Since 'n' is a
power of 2, these products can be recursively computed using the same
formulas. The algorithm keeps applying itself to smaller submatrices until
'n' becomes suitably small (n=2), so that the product is computed directly.
The formulas are
C11=A11B11+A12B21
C12=A11B12+A12B22
C21=A21B11+A22B21
C22=A21B12+A22B22
Example
For the n=2 base case, take A with every entry 2 and B with every entry 1:
2 2     1 1     4 4
2 2  *  1 1  =  4 4
T(n) = b                  for n <= 2, where a & b are constants
     = 7T(n/2) + an^2     for n > 2
Applying Strassen's formulas (see below) to 2×2 matrices A and B whose entries are all 4:
P = (4+4)(4+4) = 64
Q = (4+4)4 = 32
R = 4(4-4) = 0
S = 4(4-4) = 0
T = (4+4)4 = 32
U = (4-4)(4+4) = 0
V = (4-4)(4+4) = 0
C11 = P+S-T+V = 64+0-32+0 = 32
C12 = R+T = 0+32 = 32
C21 = Q+S = 32+0 = 32
C22 = P+R-Q+U = 64+0-32+0 = 32
So the answer C(i,j) is
32 32
32 32
Since two n/2 × n/2 matrices can be added in Cn^2 time for some constant C, the overall
computing time T(n) of the resulting divide-and-conquer algorithm is given by the
recurrence
T(n) = b                  for n <= 2
     = 8T(n/2) + Cn^2     for n > 2
that is, T(n) = O(n^3).
Drawbacks
There is no improvement in the divide & conquer method when compared to the
conventional method.
Matrix multiplication is more expensive than matrix addition, and the method is
still O(n^3). We can attempt to reformulate the equations for Cij so as to have
fewer multiplications and possibly more additions.
Strassen’s Method
Strassen discovered a way to compute the Cij of equation (2) using only
7 multiplications and 18 additions or subtractions.
Strassen‘s formulas are
P = (A11+A22)(B11+B22)
Q = (A21+A22)B11
R = A11(B12-B22)
S = A22(B21-B11)
T = (A11+A12)B22
U = (A21-A11)(B11+B12)
V = (A12-A22)(B21+B22)
Cij is computed using the following formulas:
C11 = P+S-T+V
C12 = R+T
C21 = Q+S
C22 = P+R-Q+U
Time Complexity
T(n) = b                  for n <= 2
     = 7T(n/2) + an^2     for n > 2, where a & b are constants.
By the substitution method, we get
T(n) = O(n^log2 7) ≈ O(n^2.81)
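For a single 2×2 multiplication, Strassen's seven products can be written out directly (a sketch for the scalar 2×2 case; recursing on submatrices follows the same pattern):

```python
def strassen2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications (Strassen)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    p = (a11 + a22) * (b11 + b22)
    q = (a21 + a22) * b11
    r = a11 * (b12 - b22)
    s = a22 * (b21 - b11)
    t = (a11 + a12) * b22
    u = (a21 - a11) * (b11 + b12)
    v = (a12 - a22) * (b21 + b22)
    return [[p + s - t + v, r + t],
            [q + s, p + r - q + u]]

print(strassen2x2([[2, 2], [2, 2]], [[1, 1], [1, 1]]))  # → [[4, 4], [4, 4]]
```

Only 7 multiplications appear, at the cost of extra additions and subtractions.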
2.2.1 General Method
The greedy method is the most straightforward design technique.
As the name suggests, greedy algorithms are short-sighted in their approach, taking
decisions on the basis of the information immediately at hand without
worrying about the effect these decisions may have in the future.
Definition
A problem with n inputs will have some constraints. Any subset of the inputs that
satisfies these constraints is called a feasible solution.
A feasible solution that either maximizes or minimizes a given objective
function is called an optimal solution.
For a maximization problem, an optimal solution is a feasible solution with the highest profit.
Control algorithm for Greedy Method
1. Algorithm Greedy(a, n)
2. // a[1:n] contains the 'n' inputs.
3. {
4. solution := 0; // Initialise the solution.
5. for i := 1 to n do
6. {
7. x := Select(a);
8. if Feasible(solution, x) then
9. solution := Union(solution, x);
10. }
11. return solution;
12. }
Function Select( )
The function selects an input from a[] and removes it.
The selected input's value is assigned to x.
Function feasible( )
It is a Boolean-valued function that determines whether x can be included into
the solution vector or not.
Function union( )
It combines x with the solution and updates the objective function.
Function greedy( )
The function Greedy describes the essential way that a greedy algorithm will
look,once a particular problem is chosen and the functions select( ) , feasible(
) & union( ) are properly implemented.
Example
Suppose that in some country the following coins are available:
Dollars(100 cents)
Quarters(25 cents)
Dimes( 10 cents)
Nickel(5 Cents)
Pennies(1 cent)
Our aim is paying a given amount to a customer using the smallest possible
number of coins.
For example, if we must pay 276 cents, possible solutions are:
1 dollar + 7 quarters + 1 penny → 9 coins
2 dollars + 3 quarters + 1 penny → 6 coins
2 dollars + 7 dimes + 1 nickel + 1 penny → 11 coins
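The greedy choice (always take the largest coin that still fits) finds the 6-coin solution; a minimal sketch:

```python
def make_change(amount, coins=(100, 25, 10, 5, 1)):
    """Greedy coin change: repeatedly take the largest coin <= amount."""
    used = []
    for c in coins:                # coins in decreasing denomination order
        while amount >= c:
            amount -= c
            used.append(c)
    return used

print(make_change(276))  # → [100, 100, 25, 25, 25, 1]  (6 coins)
```

For this particular coin system the greedy choice happens to be optimal; for arbitrary denominations it need not be.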
Algorithm
1. Algorithm GreedyKnapsack(m, n)
2. // p[1:n] and w[1:n] contain the profits
3. // and weights respectively of the n objects, ordered
4. // such that p[i]/w[i] >= p[i+1]/w[i+1].
5. // m is the knapsack size and x[1:n] is the solution vector.
6. {
7. for i := 1 to n do x[i] := 0.0;
8. u := m;
9. for i := 1 to n do
10. {
11. if (w[i] > u) then break;
12. x[i] := 1.0; u := u - w[i];
13. }
14. if (i <= n) then x[i] := u/w[i];
15. }
Example
There is a bag that can carry only 20 kg. We have 3 objects with different
weights. We have to put objects into the bag so that the total weight does not
exceed the capacity and the profit is the maximum.
Capacity of the bag: M = 20, number of objects n = 3
(w1, w2, w3) = (18, 15, 10)
(p1, p2, p3) = (25, 24, 15)
pi/wi = 25/18 = 1.39, 24/15 = 1.6, 15/10 = 1.5
In descending order of pi/wi:
pi/wi : 1.6   1.5   1.39
pi    : 24    15    25
wi    : 15    10    18
xi    : 1     0.5   0
Σ pi·xi = 1*24 + 0.5*15 = 31.5
The optimal solution is 31.5.
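The ordering-plus-fill procedure of GreedyKnapsack can be sketched in Python (fractional knapsack; the function name is mine):

```python
def greedy_knapsack(profits, weights, m):
    """Fractional knapsack: take items in non-increasing p/w order,
    then a fraction of the first item that does not fit."""
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * len(profits)
    u = m                                  # remaining capacity
    for i in order:
        if weights[i] > u:
            x[i] = u / weights[i]          # take only a fraction
            break
        x[i] = 1.0
        u -= weights[i]
    return sum(p * xi for p, xi in zip(profits, x))

print(greedy_knapsack([25, 24, 15], [18, 15, 10], 20))  # → 31.5
```

On the example above it takes all of object 2, half of object 3 and none of object 1, giving the optimal profit 31.5.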
Note
We have to visit all the nodes.
A spanning tree of a connected graph with 'N' vertices must have exactly
N-1 edges and does not form a cycle.
Definition
A spanning tree of a graph is an undirected tree consisting of only those
edges that are necessary to connect all the vertices in the original graph.
A spanning tree has the property that for any pair of vertices there exists only
one path between them, and the insertion of any extra edge into a spanning tree
forms a unique cycle.
7. i:=0; mincost:=0.0;
8. While ((i<n-1) and (heap not empty)) do
9. {
10. Delete a minimum cost edge (u, v) from the heap and reheapify using Adjust;
11. j:=find (u);
12. k:=find (v);
13. if (j ≠ k) then
14. {
15. i:=i+1;
16. t[i,1]:=u;
17. t[i,2]:=v;
18. mincost:= mincost + cost[u,v];
19. union (j,k);
20. }
21. }
22. if (i ≠ n-1) then write("No spanning tree");
23. else return mincost;
24. }
Step 2. The edge (c, i) creates the second tree. Choose vertex c as representative
for second tree.
Step 3. Edge (g, f) is the next shortest edge. Add this edge and choose vertex g as
representative.
Step 4. Edge (a, b) creates a third tree.
Step 5. Add edge (c, f) and merge two trees. Vertex c is chosen as the
representative.
Step 6. Edge (g, i) is the next cheapest, but if we added this edge a cycle would
be created, since vertex c is already the representative of both endpoints. Discard it.
Step 9. Instead of adding edge (h, i) add edge (a, h).
Step 10. Again, if we added edge (b, c), it would create a cycle. Add edge (d, e) instead
to complete the spanning tree. In this spanning tree all subtrees are joined and vertex c is the
sole representative.
Analysis
The time complexity of minimum cost spanning tree algorithm in worst case is
O (|E|log|E|),
where E is the set of edges in G.
Suppose V = {1, 2, 3, …, n}.
The traversal starts with a set U initialized to {1}.
It then grows a spanning tree one edge at a time.
At each step, it finds a shortest edge (u, v) that connects U and V-U and then
adds the vertex in V-U to U. [The next edge (i, j) to be added is such
that i is a vertex already included in the tree, j is a vertex not yet included, and the
cost cost[i,j] is minimum among all such edges.]
Repeat the above step until U = V.
The working of Prim's algorithm is illustrated by the following diagrams.
Algorithm
procedure Prim(C: array[1..n, 1..n] of real);
{ Prim prints the edges of a minimum cost spanning tree for a graph with
  vertices {1, 2, …, n} and cost matrix C on edges }
var
  LOWCOST: array[1..n] of real;
  CLOSEST: array[1..n] of integer;
  i, j, k: integer;
  min: real;
  { i and j are indices. During a scan of the LOWCOST array, k is the index
    of the closest vertex found so far and min = LOWCOST[k] }
begin
  for i := 2 to n do begin
    { initialize with only vertex 1 in the set U }
    LOWCOST[i] := C[1, i];
    CLOSEST[i] := 1;
  end;
  for i := 2 to n do begin
    { find the closest vertex k outside of U to some vertex in U }
    min := LOWCOST[2]; k := 2;
    for j := 3 to n do
      if LOWCOST[j] < min then begin
        min := LOWCOST[j]; k := j;
      end;
    write(k, CLOSEST[k]);          { print edge }
    LOWCOST[k] := infinity;        { k is added to U }
    for j := 2 to n do             { adjust costs to U }
      if (C[k, j] < LOWCOST[j]) and (LOWCOST[j] < infinity) then begin
        LOWCOST[j] := C[k, j];
        CLOSEST[j] := k;
      end;
  end;
end; { Prim }
Step 1: Step 2:
Step 3: Step 4:
Step 5: Step 6:
Analysis
T(n) = O(n²).
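The same idea can be sketched with a priority queue instead of the LOWCOST array (a heap-based variant of Prim's algorithm, O(E log E); the 4-vertex cost dictionary below is a hypothetical example):

```python
import heapq

def prim(n, cost):
    """Prim's MST cost, growing the tree from vertex 0.
    cost is a dict {(u, v): w} of undirected edge weights."""
    adj = {v: [] for v in range(n)}
    for (u, v), w in cost.items():
        adj[u].append((w, v)); adj[v].append((w, u))
    in_tree, total = {0}, 0
    heap = list(adj[0])
    heapq.heapify(heap)
    while len(in_tree) < n:
        w, v = heapq.heappop(heap)     # cheapest edge leaving the tree so far
        if v in in_tree:
            continue                   # both ends already in the tree: skip
        in_tree.add(v); total += w
        for edge in adj[v]:
            heapq.heappush(heap, edge)
    return total

print(prim(4, {(0, 1): 1, (0, 2): 4, (1, 2): 2, (1, 3): 5, (2, 3): 3}))  # → 6
```

Here the MST uses edges (0,1), (1,2) and (2,3), for a total cost of 6.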
5. choose a vertex w in V-S such that D[w] is minimum;
6. add w to S;
7. for each vertex v in V-S do
8.   D[v] := min(D[v], D[w] + C[w, v]);
9. end
end { Dijkstra }
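The relaxation step D[v] := min(D[v], D[w] + C[w,v]) is the heart of Dijkstra's algorithm; a runnable sketch using a priority queue (the small directed graph below is a hypothetical example):

```python
import heapq

def dijkstra(adj, s):
    """Single-source shortest path lengths; adj[u] = list of (v, w) edges."""
    D = {u: float('inf') for u in adj}
    D[s] = 0
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > D[u]:
            continue                      # stale heap entry, already improved
        for v, w in adj[u]:
            if d + w < D[v]:              # relaxation: D[v] = min(D[v], D[u]+C[u,v])
                D[v] = d + w
                heapq.heappush(heap, (D[v], v))
    return D

adj = {1: [(2, 9), (3, 7)], 2: [(4, 2)], 3: [(4, 5)], 4: []}
print(dijkstra(adj, 1))  # → {1: 0, 2: 9, 3: 7, 4: 11}
```

Vertex 4 is reached via 1→2→4 with cost 11, beating the 1→3→4 path of cost 12.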
Greedy Algorithm for sequencing unit time jobs with deadlines and profits
Algorithm JS(d, J, n)
// d[i] >= 1, 1 <= i <= n are the deadlines, n >= 1.
// The jobs are ordered such that p[1] >= p[2] >= … >= p[n].
// J[i] is the ith job in the optimal solution, 1 <= i <= k.
// Also, at termination, d[J[i]] <= d[J[i+1]], 1 <= i < k.
{
d[0] := J[0] := 0; // Initialize.
J[1] := 1; // Include job 1.
k := 1;
for i := 2 to n do
{
// Consider jobs in non-increasing order of p[i];
// find a position for i and check feasibility of insertion.
r := k;
while ((d[J[r]] > d[i]) and (d[J[r]] ≠ r)) do
r := r - 1;
if ((d[J[r]] <= d[i]) and (d[i] > r)) then
{
// Insert i into J[].
for q := k to (r+1) step -1 do
J[q+1] := J[q];
J[r+1] := i;
k := k + 1;
}
}
return k;
}
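A compact way to sketch the same greedy idea is to place each job (taken in non-increasing profit order) into the latest free time slot on or before its deadline (an equivalent formulation; the latest-slot scan replaces the array insertion of JS):

```python
def job_sequencing(profits, deadlines):
    """Greedy job sequencing with deadlines; returns (chosen jobs, total profit).
    Jobs are numbered from 1."""
    n = len(profits)
    jobs = sorted(range(n), key=lambda i: -profits[i])   # by profit, descending
    slot = [None] * (max(deadlines) + 1)                 # slot[t]: job run at time t
    for i in jobs:
        for t in range(deadlines[i], 0, -1):             # latest free slot first
            if slot[t] is None:
                slot[t] = i + 1
                break
    chosen = [j for j in slot if j is not None]
    profit = sum(profits[j - 1] for j in chosen)
    return sorted(chosen), profit

print(job_sequencing([20, 15, 10, 5, 1], [2, 2, 1, 3, 3]))  # → ([1, 2, 4], 40)
```

On Example 1 below it selects jobs 1, 2 and 4 with total profit 40.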
Example
1. n=5, (p1, p2, …, p5) = (20, 15, 10, 5, 1)
(d1, d2, …, d5) = (2, 2, 1, 3, 3)

Feasible solution   Processing sequence   Value
(1)                 (1)                   20
(2)                 (2)                   15
(3)                 (3)                   10
(4)                 (4)                   5
(5)                 (5)                   1
(1,2)               (2,1)                 35
(1,3)               (3,1)                 30
(1,4)               (1,4)                 25
(1,5)               (1,5)                 21
(2,3)               (3,2)                 25
(2,4)               (2,4)                 20
(2,5)               (2,5)                 16
(1,2,4)             (1,2,4)               40
The optimal solution is J = (1, 2, 4) with value 40.
2. n=4 (P1,P2,…P4)=(100,10,15,27)
(d1,d2….d4)=(2,1,2,1)
2.2.6 Optimal Storage On Tapes
We have 'n' programs. The length of the tape is l and the length of each program
is li, 1 ≤ i ≤ n.
The objective of the problem is to store the 'n' programs on the tape subject to the
constraint that the sum of the lengths of all the programs must not exceed the
length of the tape l, i.e. all the programs can be stored on the tape iff the sum
of the lengths of the programs is at most l.
Assume that whenever a program is retrieved from the tape, the tape is initially
positioned at the front.
The time tj needed to retrieve program ij is proportional to Σ (k=1 to j) of the
lengths of the programs stored before and including it.
Program P1 having length l1 takes time t1 to retrieve.
Program P2 having length l2 takes time t2 to retrieve.
Program P3 having length l3 takes time t3 to retrieve.
We have to store programs of lengths l1, l2, l3 on the tape of length l.
If all the programs are retrieved equally often, then the
mean retrieval time (MRT) d = (1/n) Σ (j=1 to n) tj
d = (t1 + t2 + … + tn)/n
Our aim is to minimize the mean retrieval time. To minimize the MRT we have to
find the best permutation of the n programs, i.e. we must consider the order in which
the programs are stored.
Example
P1---l1---> 5
P2---l2--->10
P3---l3---> 3
Possible orderings=n!=3!=6
Ordering I    d(I)          Total
1,2,3         5+15+18       38
1,3,2         5+8+18        31
2,3,1         10+13+18      41
2,1,3         10+15+18      43
3,1,2         3+8+18        29 (min)
3,2,1         3+13+18       34
The 5th ordering 3,1,2 gives the minimum total retrieval time 29, so storing the
programs in the sequence 3,1,2 is an optimal solution.
Analysis
The greedy method simply requires storing the programs in non-decreasing
order of their lengths. This ordering can be produced in O(n log n) time using an
efficient sorting algorithm (e.g. heap sort).
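The total retrieval time for any ordering is easy to compute, and sorting by length realizes the minimum (a minimal sketch; the function name is mine):

```python
def total_retrieval_time(lengths):
    """Sum of retrieval times t_j = l_1 + ... + l_j for a given storage order."""
    total, prefix = 0, 0
    for l in lengths:
        prefix += l          # position of the read head after program l
        total += prefix      # t_j for this program
    return total

programs = [5, 10, 3]                 # l1, l2, l3 from the example
best = sorted(programs)               # greedy: non-decreasing lengths → order 3,1,2
print(total_retrieval_time(best))     # → 29
```

This reproduces the minimum total 29 found by enumerating all 3! orderings above.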
Time taken for merging these two files is O(m+n).
If we have more than 2 files,then we can merge the files in some order.
Let us take 4 files say x1,x2,x3&x4. We can merge these 4 files in any order.
For example:
1.Merge x1 & x2 => y1
Merge y1 & x3 => y2
Merge y2 & x4 => y3
2.Merge x1 & x2 => y1
Merge x3 & x4 => y2
Merge y1 & y2 => y3
Given n sorted files, we have to merge them into a single sorted file.
We can merge the files in any order, but the chosen order should take the minimum
amount of computing time.
Example 1
n=3
File x1 length=30
File x2 length=20
File x3 length=10
Way 1
Merge x1 & x2 - 50 record moves
Merge(x1 & x2)&x3 - 60 record moves
Total no of record moves -110 record moves
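The optimal merge pattern is obtained greedily by always merging the two smallest files first; a sketch using a min-heap:

```python
import heapq

def optimal_merge_cost(sizes):
    """Total record moves when repeatedly merging the two smallest files."""
    heapq.heapify(sizes)
    cost = 0
    while len(sizes) > 1:
        a, b = heapq.heappop(sizes), heapq.heappop(sizes)
        cost += a + b                  # merging files of sizes a and b moves a+b records
        heapq.heappush(sizes, a + b)   # the merged file goes back into the pool
    return cost

print(optimal_merge_cost([30, 20, 10]))   # → 90
```

For the example above the greedy pattern merges x3 and x2 first (30 moves) and then the result with x1 (60 moves), for a total of 90 record moves, better than the 110 of Way 1.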
Chapter 3
3.1 Dynamic Programming
The General Method
It is an algorithm design method that can be used when the solution to a
problem can be viewed as the result of a sequence of decisions.
The idea of dynamic programming is thus quite simple: avoid calculating the
same thing twice, usually by keeping a table of known results that is filled up as
subinstances are solved.
Divide and conquer is a top-down method. When a problem is solved by
divide and conquer, we immediately attack the complete instance, which we
then divide into smaller and smaller sub-instances as the algorithm
progresses.
Dynamic programming on the other hand is a bottom-up technique. We
usually start with the smallest and hence the simplest sub- instances. By
combining their solutions, we obtain the answers to sub-instances of
increasing size, until finally we arrive at the solution of the original instances.
The essential difference between the greedy method and dynamic
programming is that in the greedy method only one decision sequence is ever
generated. In dynamic programming, many decision sequences may be
generated. However, sequences containing suboptimal subsequences cannot
be optimal and so will not be generated.
Because of the principle of optimality, decision sequences containing
subsequences that are suboptimal are not considered. Although the total
number of different decision sequences is exponential in the number of
decisions (if there are d choices for each of the n decisions to be made then
there are d^n possible decision sequences), dynamic programming algorithms
often have a polynomial complexity.
Procedure
Maintain a cost array cost(1:n) which stores the minimum distance from each
vertex to the destination.
If a vertex has more than one outgoing path, then we choose the
minimum distance path, and the intermediate vertex which gives the minimum
distance path is stored in the decision array 'D'.
In this way we find the minimum cost path from each and every vertex.
Finally cost(1) will give the shortest distance from source to destination.
For finding the path, start from vertex 1; the decision array D(1) gives
the minimum cost neighbour vertex, which in turn gives the next nearest vertex,
and we proceed in this way till we reach the destination.
For a 'k'-stage graph, there will be 'k' vertices in the path.
In the above graph V1……V5 represent the stages. This 5 stage graph can
be solved by using forward approach as follows,
Algorithm
Algorithm FGraph (G,k,n,p)
// The input is a k-stage graph G=(V,E) with 'n' vertices indexed in order of stages.
// E is a set of edges.
// c[i,j] is the cost of <i,j>
// p[1:k] is a minimum cost path.
{
cost[n]=0.0;
for j=n-1 to 1 step-1 do
{
//compute cost[j],
Let 'r' be the vertex such that <j,r> is an edge of 'G' and c[j,r]+cost[r] is
minimum.
cost[j] := c[j,r] + cost[r];
d[j] := r;
}
// find a minimum cost path.
p[1]=1;
p[k]=n;
for j=2 to k-1 do
p[j]=d[p[j-1]];
}
Figure 3.1: A 5-stage graph with 12 vertices (stages V1–V5, source s = vertex 1, sink t = vertex 12). The edge costs are the c(i,j) values used in the trace below.
Trace Out
n=12 k=5
When j=8
cost(8) = min {C (8,10) + Cost (10), C (8,11) + Cost (11) }
= min (5 + 2, 6 + 5)
= min (7,11)
=7
cost(8) =7 d(8)=10
When j=7
cost(7) = min(c (7,9)+ cost(9),c (7,10)+ cost(10))
= min(4+4,3+2)
= min(8,5)
=5
cost(7) = 5 d(7) = 10
When j=6
cost(6) = min (c (6,9) + cost(9),c (6,10) +cost(10))
= min(6+4 , 5 +2)
= min(10,7)
=7
cost(6) = 7 d(6) = 10
When j=5
cost(5) = min (c (5,7) + cost(7),c (5,8) +cost(8))
= min(11+5 , 8 +7)
= min(16,15)
= 15
cost(5) = 15 d(5) = 8
When j=4
cost(4) = min (c (4,8) + cost(8))
= min(11+7)
= 18
cost(4) = 18 d(4) = 8
When j=3
cost(3) = min (c (3,6) + cost(6),c (3,7) +cost(7))
= min(2+7 , 7 +5)
= min(9,12)
=9
cost(3) = 9 d(3) = 6
When j=2
cost(2) = min (c (2,6) + cost(6),c (2,7) +cost(7) ,c (2,8) +cost(8))
= min(4+7 , 2+5 , 1+7 )
= min(11,7,8)
=7
cost(2) = 7 d(2) = 7
When j=1
cost(1) = min (c(1,2)+cost(2), c(1,3)+cost(3), c(1,4)+cost(4), c(1,5)+cost(5))
        = min(9+7, 7+9, 3+18, 2+15)
        = min(16, 16, 21, 17)
        = 16
cost(1) = 16 d(1) = 2
p[1]=1 p[5]=12
for j=2 to 4
When j=2 p[2]=d[p[1]]=d[1]=2
When j=3 p[3]=d[p[2]]=d[2]=7
When j=4 p[4]=d[p[3]]=d[7]=10
The path along which the shortest distance from vertex 1 to vertex 12 is achieved
is (i.e.)
1 --9--> 2 --2--> 7 --3--> 10 --2--> 12, with total cost 9+2+3+2 = 16.
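The forward trace above can be reproduced with a short Python sketch (the function name fgraph and the edge-cost dictionary representation are mine; the costs are the c(i,j) values of the example graph):

```python
def fgraph(n, edges, cost_of):
    """Forward DP on a multistage graph:
    cost[j] = min over edges (j, r) of c[j][r] + cost[r]."""
    cost, d = {n: 0}, {}
    for j in range(n - 1, 0, -1):
        best_r = min((r for (u, r) in edges if u == j),
                     key=lambda r: cost_of[(j, r)] + cost[r])
        cost[j] = cost_of[(j, best_r)] + cost[best_r]
        d[j] = best_r                       # decision: next vertex on the best path
    path, v = [1], 1
    while v != n:                           # follow the decisions from the source
        v = d[v]
        path.append(v)
    return cost[1], path

c = {(1,2):9,(1,3):7,(1,4):3,(1,5):2,(2,6):4,(2,7):2,(2,8):1,(3,6):2,(3,7):7,
     (4,8):11,(5,7):11,(5,8):8,(6,9):6,(6,10):5,(7,9):4,(7,10):3,(8,10):5,
     (8,11):6,(9,12):4,(10,12):2,(11,12):5}
print(fgraph(12, list(c), c))   # → (16, [1, 2, 7, 10, 12])
```

The result matches the trace: minimum cost 16 along the path 1, 2, 7, 10, 12.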
Analysis
The time complexity of this forward method is O(|V| + |E|).
Backward Method
If there are ‗k‘ stages in a graph then using backward approach, we will find
out the minimum cost path from destination to source (ie)[from stage k to
stage 1]
Procedure
1. It is similar to forward approach, but differs only in two or three ways.
2. Maintain a cost array to store the cost of every vertex and a decision array
to store the minimum distance vertex.
3. Find out the cost of each and every vertex starting from vertex 1 up to vertex
n.
4. To find the path, start from vertex 'k'; the decision array D(k) gives
the minimum cost neighbour vertex, which in turn gives the next nearest
neighbour vertex, and proceed till we reach the destination.
Algorithm
Algorithm BGraph (G,k,n,p)
// The input is a k-stage graph G=(V,E) with 'n' vertices indexed in order of stages.
// E is a set of edges.
// c[i,j] is the cost of<i,j>
// p[1:k] is a minimum cost path.
{
bcost[1]=0.0;
for j=2 to n do
{
//compute bcost[j],
Let 'r' be the vertex such that <r,j> is an edge of 'G' and bcost[r]+c[r,j] is
minimum.
bcost[j] := bcost[r] + c[r,j];
d[j] := r;
}
// find a minimum cost path.
p[1]=1;
p[k]=n;
for j= k-1 to 2 do
p[j]=d[p[j+1]];
}
Trace Out
n=12 k=5
bcost (1)=0 d (1)=0
for j = 2 to 12
When j=2
bcost(2) = 9 d(2)=1
When j=3
bcost(3) = 7 d(3)=1
When j=4
bcost(4) = 3 d(4)=1
When j=5
bcost(5) = 2 d(5)=1
When j=6
bcost(6) =min(c (2,6) + bcost(2),c (3,6) + bcost(3))
=min(13,9)
bcost(6) = 9 d(6)=3
When j=7
bcost(7) =min(c (3,7) + bcost(3),c (5,7) + bcost(5) ,c (2,7) + bcost(2))
=min(14,13,11)
bcost(7) = 11 d(7)=2
When j=8
bcost(8) =min(c (2,8) + bcost(2),c (4,8) + bcost(4) ,c (5,8) +bcost(5))
=min(10,14,10)
bcost(8) = 10 d(8)=2
When j=9
bcost(9) =min(c (6,9) + bcost(6),c (7,9) + bcost(7))
=min(15,15)
bcost(9) = 15 d(9)=6
When j=10
bcost(10)=min(c(6,10)+bcost(6),c(7,10)+bcost(7)),c(8,10)+bcost(8))
=min(14,14,15)
bcost(10)= 14 d(10)=6
When j=11
bcost(11) =c (8,11) + bcost(8))
bcost(11) = 16 d(11)=8
When j=12
bcost(12)=min(c(9,12)+bcost(9),c(10,12)+bcost(10),c(11,12)+bcost(11)
)
=min(19,16,21)
bcost(12) = 16 d(12)=10
p[1]=1 p[5]=12
for j=4 to 2
When j=4 p[4]=d[p[5]]=d[12]=10
When j=3 p[3]=d[p[4]]=d[10]=6
When j=2 p[2]=d[p[3]]=d[6]=3
The path through which you have to find the shortest distance from vertex 1 to vertex
12.
p[5]=12
p[4]=10
p[3]=6
p[2]=3
p[1]=1
So the minimum cost path is,
1 --7--> 3 --2--> 6 --5--> 10 --2--> 12, with total cost 7+2+5+2 = 16.
We have to perform n iterations. After iteration k, the matrix D gives the
distances between nodes with only (1, 2, …, k) allowed as intermediate nodes.
At iteration k, we check for each pair of nodes (i,j) whether or not
there exists a shorter path from i to j passing through node k.
Cost Adjacency Matrix
D0 = L =  0   5   ∞   ∞
          50  0   15  5
          30  ∞   0   15
          15  ∞   5   0
At the 0th iteration, D0 gives the direct distances between any 2 nodes:
D0 =  0   5   ∞   ∞
      50  0   15  5
      30  ∞   0   15
      15  ∞   5   0
At the 1st iteration we check each pair (i,j) for a path through node 1. If such a
path exists and is shorter than the previous value, then its length becomes the
value of D1(i,j). At the same time we record the intermediate node in the matrix
position P(i,j).
D1 =  0   5   ∞   ∞
      50  0   15  5          P[3,2] = 1
      30  35  0   15         P[4,2] = 1
      15  20  5   0
Likewise we find the values for n iterations, i.e. for n nodes.
D2 =  0   5   20  10         P[1,3] = 2
      50  0   15  5          P[1,4] = 2
      30  35  0   15
      15  20  5   0
D3 =  0   5   20  10
      45  0   15  5          P[2,1] = 3
      30  35  0   15
      15  20  5   0
D4 =  0   5   15  10         P[1,3] = 4
      20  0   10  5          P[2,3] = 4
      30  35  0   15
      15  20  5   0
Since P[1,3]=4, the shortest path from 1 to 3 passes through 4.
Looking now at P[1,4] & P[4,3], we discover that between 1 & 4 we have to go via
node 2, but that from 4 to 3 we proceed directly.
Finally we see that the trips from 1 to 2 and from 2 to 4 are also direct.
The shortest path from 1 to 3 is 1, 2, 4, 3.
Algorithm
Function Floyd(L[1..n, 1..n]): array[1..n, 1..n]
  array D[1..n, 1..n]
  D := L
  for k := 1 to n do
    for i := 1 to n do
      for j := 1 to n do
        D[i, j] := min(D[i, j], D[i, k] + D[k, j])
  return D
Analysis
This algorithm takes a time of Θ(n³).
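The triple loop translates directly into Python; running it on the cost matrix L of the example reproduces the final matrix D4:

```python
def floyd(L):
    """All-pairs shortest paths: D[i][j] = min(D[i][j], D[i][k] + D[k][j])."""
    n = len(L)
    D = [row[:] for row in L]          # work on a copy of L
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

INF = float('inf')
L = [[0, 5, INF, INF],
     [50, 0, 15, 5],
     [30, INF, 0, 15],
     [15, INF, 5, 0]]
for row in floyd(L):
    print(row)
# final matrix: [0,5,15,10], [20,0,10,5], [30,35,0,15], [15,20,5,0]
```

Each entry agrees with D4 in the trace above, e.g. the distance from 1 to 3 is 15 via the path 1, 2, 4, 3.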
The cell is filled with the profit p[i], since at most one unit of the object can be selected.
If i > 1 and j < w(i), then
T(i,j) = T(i-1, j).
The cell is filled with the profit of the previous objects, since it is not possible to
include the current object.
If i > 1 and j >= w(i), then
T(i,j) = max(T(i-1, j), p(i) + T(i-1, j-w(i))).
Since at most one unit of each object can be selected, the second term is the current
profit plus the best profit of the previous objects filling the remaining capacity of the bag.
After the table is generated, it gives the maximum profit.
Rules To Get The Combination Of Objects
Start with the last position of i and j, T[i,j], i.e. T[n,M].
If T[i,j] = T[i-1,j] then
no unit of object 'i' is required, so move up to T[i-1,j].
Example
Capacity of the bag M=11 No.of objects n=5
W1 W2 W3 W4 W5
1 2 5 6 7
P1 P2 P3 P4 P5
1 6 18 22 28
Table
     0  1  2  3  4  5   6   7   8   9   10  11
1    0  1  1  1  1  1   1   1   1   1   1   1
2    0  1  6  7  7  7   7   7   7   7   7   7
3    0  1  6  7  7  18  19  24  25  25  25  25
4    0  1  6  7  7  18  22  24  28  29  29  40
5    0  1  6  7  7  18  22  28  29  34  35  40
The selected objects are 3 and 4 (profit 18 + 22 = 40).
Analysis
A time (nM) is necessary to construct the tble V, and thst the
composition of the optimal load can be detemined in atime O(n+M).
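The table construction and the trace-back rule above can be sketched together (a minimal sketch; the function name is mine):

```python
def knapsack(p, w, M):
    """0/1 knapsack: T[i][j] = best profit using objects 1..i, capacity j.
    Returns (best profit, sorted list of selected object numbers)."""
    n = len(p)
    T = [[0] * (M + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(M + 1):
            T[i][j] = T[i - 1][j]                    # case: object i not taken
            if j >= w[i - 1]:
                T[i][j] = max(T[i][j], p[i - 1] + T[i - 1][j - w[i - 1]])
    # trace back: if T[i][j] == T[i-1][j], object i was not used
    chosen, j = [], M
    for i in range(n, 0, -1):
        if T[i][j] != T[i - 1][j]:
            chosen.append(i)
            j -= w[i - 1]
    return T[n][M], sorted(chosen)

print(knapsack([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], 11))  # → (40, [3, 4])
```

On the example (M=11, n=5) it returns the maximum profit 40 using objects 3 and 4.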
Let g(i, S) be the length of a shortest path starting at vertex i, going through all
vertices in S, and terminating at vertex 1.
g(i, S) = min over j in S of { c(i,j) + g(j, S - {j}) }          (2)
The function g(1, V - {1}) is the length of an optimal tour.
Steps To Find The Path
1. Find g(i, Ø) = c(i,1), 1 <= i <= n; then use equation (2) to obtain g(i, S) for all
S of size 1.
2. Next take |S| = 2, i.e. two vertices in the set S, and so on.
3. Proceed until |S| = n - 1.
4. For example consider the graph.
Cost Matrix
0   10  15  20
5   0   9   10
6   13  0   12
8   8   9   0
The tour starts and ends at vertex 1; the remaining vertices are V - {1} = {2, 3, 4}.
g(i,s) =min{cij+g(j,s-{j})
Step 1:
When S = Ø and i = 1 to n:
g(1, Ø) = c(1,1) = 0
g(2, Ø) = c(2,1) = 5
g(3, Ø) = c(3,1) = 6
g(4, Ø) = c(4,1) = 8
Step 2:
When |S| = 1 and i = 2 to 4:
g(2, {3}) = c(2,3) + g(3, Ø) = 9 + 6 = 15
g(2, {4}) = c(2,4) + g(4, Ø) = 10 + 8 = 18
g(3, {2}) = c(3,2) + g(2, Ø) = 13 + 5 = 18
g(3, {4}) = c(3,4) + g(4, Ø) = 12 + 8 = 20
g(4, {2}) = c(4,2) + g(2, Ø) = 8 + 5 = 13
g(4, {3}) = c(4,3) + g(3, Ø) = 9 + 6 = 15
Step 3:
When |S| = 2 and i = 2 to 4:
g(2, {3,4}) = min{c(2,3) + g(3,{4}), c(2,4) + g(4,{3})}
            = min{9+20, 10+15} = min{29, 25} = 25
g(3, {2,4}) = min{c(3,2) + g(2,{4}), c(3,4) + g(4,{2})}
            = min{13+18, 12+13} = min{31, 25} = 25
g(4, {2,3}) = min{c(4,2) + g(2,{3}), c(4,3) + g(3,{2})}
            = min{8+15, 9+18} = min{23, 27} = 23
Step 4:
When |S| = 3:
g(1,{2,3,4})=min{c12+g(2{3,4}),c13+g(3,{2,4}),c14+g(4,{2,3})}
=min{10+25,15+25,20+23}
=min{35,35,43}
=35
An optimal cost is 35
The shortest path is,
g(1,{2,3,4}) = c12 + g(2,{3,4}) => 1->2
g(2,{3,4}) = c24 + g(4,{3}) => 1->2->4
g(4,{3}) = c43 + g(3{}) => 1->2->4->3->1
So the optimal tour is 1 2 4 3 1
Analysis:
An algorithm that proceeds to find an optimal tour using the above 2 equations
requires Θ(n²·2ⁿ) time.
The space needed by this algorithm is O(n·2ⁿ). This is too large even for modest
values of n, which is the main drawback of this method.
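The g(i, S) recursion (the Held–Karp dynamic program) can be sketched directly with a dictionary keyed by (vertex, frozenset); a compact sketch with 0-based vertices:

```python
from itertools import combinations

def tsp(c):
    """Held-Karp: g(i, S) = min over j in S of c[i][j] + g(j, S - {j}).
    Returns the length of an optimal tour starting and ending at vertex 0."""
    n = len(c)
    g = {(i, frozenset()): c[i][0] for i in range(1, n)}   # g(i, Ø) = c(i,1)
    for size in range(1, n - 1):
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for i in range(1, n):
                if i in S:
                    continue
                g[(i, S)] = min(c[i][j] + g[(j, S - {j})] for j in S)
    rest = frozenset(range(1, n))
    return min(c[0][j] + g[(j, rest - {j})] for j in rest)

c = [[0, 10, 15, 20],
     [5, 0, 9, 10],
     [6, 13, 0, 12],
     [8, 8, 9, 0]]
print(tsp(c))  # → 35
```

On the 4-vertex cost matrix above it returns 35, the cost of the optimal tour 1, 2, 4, 3, 1 from the trace.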
If s=1 then m(i, i+1) = d(i-1) * d(i) * d(i+1)
for i= 1 to 3
when i=1 m12 = d0*d1*d2 =13*5*89=5785
when i=2 m23 = d1*d2*d3 =5*89*3=1335
when i=3 m34 = d2*d3*d4 =89*3*34=9078
      1    2      3      4
1     0    5785
2          0      1335
3                 0      9078   s=1
4                        0      s=0
Here there is no need to fill the lower diagonal. The solution to the original problem is
given by m(1,n) in the table. The execution time of the algorithm is Θ(n³).
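The diagonal-by-diagonal filling can be sketched as follows (the function name is mine; the s=1 values match the hand computation above, and m[1][4] is the value I obtain for the full chain under this recurrence):

```python
def matrix_chain(d):
    """m[i][j] = min cost of multiplying M_i..M_j, where M_i is d[i-1] x d[i]."""
    n = len(d) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for s in range(1, n):                      # fill diagonal s
        for i in range(1, n - s + 1):
            j = i + s
            m[i][j] = min(m[i][k] + m[k + 1][j] + d[i - 1] * d[k] * d[j]
                          for k in range(i, j))   # split M_i..M_k | M_(k+1)..M_j
    return m

m = matrix_chain([13, 5, 89, 3, 34])
print(m[1][2], m[2][3], m[3][4], m[1][4])  # → 5785 1335 9078 2856
```

The s=1 diagonal reproduces the three values computed above, and m[1][4] gives the cost of the best full parenthesization.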
INORDER:
First visit the left subtree of the root node, then visit the root node, and then the
right subtree of the root node. This process is applied recursively till we traverse
all the nodes in the tree.
Recursive formulation of inorder traversal
treenode = record
{
Type data; // Type is the data type of data.
Treenode *lchild; Treenode *rchild;
}
Algorithm InOrder(t)
//t is a binary tree.Each of t has three fields:lchild,data and rchild.
{
if t ≠ 0 then
{
InOrder(t->lchild);
Visit(t);
InOrder(t->rchild);
} }
Algorithm PreOrder(t)
//t is a binary tree.Each of t has three fields:lchild,data and rchild.
{
if t ≠ 0 then
{
Visit(t);
PreOrder(t->lchild);
PreOrder(t->rchild);
} }
Algorithm PostOrder(t)
//t is a binary tree.Each of t has three fields:lchild,data and rchild.
{
if t ≠ 0 then
{
PostOrder(t->lchild);
PostOrder(t->rchild);
Visit(t);
}}
Figure 3.2.1: A binary tree with root A; A has children B and C, B has children D and E, D has children F and G, and G has children H and I.
The above figure shows a binary tree. Visit(t) – visiting a node requires only the
printing of its data field. The outputs resulting from inorder, preorder and postorder
traversal are FDHGIBEAC, ABDFGHIEC and FHIGDEBCA respectively.
Analysis
The total time taken by the traversal is Θ(n). The additional space needed is
that for the recursion stack. If t has depth d, then this space is Θ(d). For an n-node
binary tree, d ≤ n and so S(n) = O(n).
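The three traversal algorithms translate directly into Python; building the tree of Figure 3.2.1 reproduces the three output strings:

```python
class Node:
    def __init__(self, data, lchild=None, rchild=None):
        self.data, self.lchild, self.rchild = data, lchild, rchild

def inorder(t, out):
    if t:
        inorder(t.lchild, out); out.append(t.data); inorder(t.rchild, out)

def preorder(t, out):
    if t:
        out.append(t.data); preorder(t.lchild, out); preorder(t.rchild, out)

def postorder(t, out):
    if t:
        postorder(t.lchild, out); postorder(t.rchild, out); out.append(t.data)

# the tree of Figure 3.2.1: A(B(D(F, G(H, I)), E), C)
t = Node('A',
         Node('B', Node('D', Node('F'), Node('G', Node('H'), Node('I'))), Node('E')),
         Node('C'))
for walk in (inorder, preorder, postorder):
    out = []
    walk(t, out)
    print(''.join(out))
# → FDHGIBEAC, ABDFGHIEC, FHIGDEBCA
```

Here "visiting" a node simply appends its data field to the output list.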
3.2.1.2. Techniques For Graphs
The fundamental problem concerning graphs is the reachability problem.
In its simplest form it requires us to determine whether there exists a path in the
given graph G = (V,E) such that this path starts at vertex 'v' and ends at
vertex 'u'.
A more general form is to determine, for a given starting vertex v ∈ V, all
vertices u such that there is a path from v to u.
This problem can be solved by starting at vertex ‗v‘ and systematically
searching the graph ‗G‘ for vertex that can be reached from ‗v‘.
We describe 2 search methods for this.
Breadth first Search and Traversal.
Depth first Search and Traversal.
Figure 3.2.2: A complete binary tree on the vertices 1–7 (vertex 1 at the root with children 2 and 3, which in turn have children 4, 5 and 6, 7), viewed as a graph.
u := v; // q is a queue of unexplored vertices
visited(v) := 1;
repeat
{
for all vertices 'w' adjacent from u do
{
if (visited[w] = 0) then
{
add w to q; // w is unexplored
visited[w] := 1;
}
}
if q is empty then return; // no unexplored vertex remains
delete u from q; // get the first unexplored vertex
} until (false); // i.e. until q becomes empty
}
Here the time and space required by BFT on an n-vertex, e-edge graph
are O(n+e) and O(n) respectively if adjacency lists are used. If an adjacency matrix
is used, then the bounds are O(n²) and O(n) respectively.
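The BFT scheme above, sketched with a deque as the queue and run on the tree of Figure 3.2.2 viewed as a graph:

```python
from collections import deque

def bft(adj, v):
    """Breadth-first traversal from v; returns vertices in exploration order."""
    visited = {v}
    q, order = deque([v]), []
    while q:
        u = q.popleft()                # get the first unexplored vertex
        order.append(u)
        for w in adj[u]:               # all vertices adjacent from u
            if w not in visited:
                visited.add(w)
                q.append(w)
    return order

adj = {1: [2, 3], 2: [1, 4, 5], 3: [1, 6, 7], 4: [2], 5: [2], 6: [3], 7: [3]}
print(bft(adj, 1))  # → [1, 2, 3, 4, 5, 6, 7]
```

Each level of the tree is fully explored before the next level begins.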
BFS & DFS are 2 fundamentally different search methods. In BFS, a node is
fully explored before the exploration of any other node begins. The next node to
explore is the first unexplored node remaining. In DFS, the exploration of a node is
suspended as soon as a new unexplored node is reached; the exploration of this
new node is immediately begun.
The graph shown above has many valid topological sorts, including:
7,5,3,11,8,2,10,9
7,5,11,2,3,10,8,9
3,7,8,5,11,10,9,2
3,5,7,11,10,2,8,9
In this problem, the functions are:
i) Decide whether a vertex has any predecessor.
ii) Delete a vertex together with all its incident edges.
This can be efficiently done if, for each vertex, a count of the number of its immediate
predecessors is kept.
This can be easily implemented if the network is represented by its adjacency
lists.
The following algorithm assumes that the network is represented by
adjacency lists.
Algorithm:
Procedure Topological_order(COUNT, VERTEX, LINK, n)
// The n vertices of an AOV-network are listed in topological order.
// The network is represented as a set of adjacency lists with
// COUNT(i) = the in-degree of vertex i.
1) top ← 0 // initialize stack
2) for i ← 1 to n do // create a linked stack of vertices with no predecessors
3) if COUNT(i) = 0 then [COUNT(i) ← top; top ← i]
4) end
5) for i ← 1 to n do // print the vertices in topological order
6) if top = 0 then [print('network has a cycle'); stop]
7) j ← top; top ← COUNT(top); print(j) // unstack a vertex
8) ptr ← LINK(j)
9) while ptr <> 0 do
10) // decrease the count of successor vertices of j
11) k ← VERTEX(ptr) // k is a successor of j
12) COUNT(k) ← COUNT(k) - 1 // decrease count
13) if COUNT(k) = 0 // add vertex k to stack
14) then [COUNT(k) ← top; top ← k]
15) ptr ← LINK(ptr)
16) end
17) end
18) end TOPOLOGICAL_ORDER
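The same in-degree-count idea can be sketched in Python on the DAG whose topological sorts were listed above (edges assumed to be 7→11, 7→8, 5→11, 3→8, 3→10, 11→2, 11→9, 11→10, 8→9; the plain edge-list scan keeps the sketch short at the cost of efficiency):

```python
def topological_order(vertices, edges):
    """Repeatedly output a vertex whose predecessor count has dropped to 0."""
    count = {v: 0 for v in vertices}          # COUNT(i) = in-degree of i
    for u, v in edges:
        count[v] += 1
    stack = [v for v in vertices if count[v] == 0]
    order = []
    while stack:
        j = stack.pop()                       # unstack a vertex
        order.append(j)
        for u, v in edges:                    # decrease counts of successors of j
            if u == j:
                count[v] -= 1
                if count[v] == 0:
                    stack.append(v)
    if len(order) != len(vertices):
        raise ValueError('network has a cycle')
    return order

V = [2, 3, 5, 7, 8, 9, 10, 11]
E = [(7, 11), (7, 8), (5, 11), (3, 8), (3, 10), (11, 2), (11, 9), (11, 10), (8, 9)]
print(topological_order(V, E))
```

The exact order produced depends on the stack discipline, but every edge (u, v) has u listed before v.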
Chapter 4: Backtracking
4.1 Backtracking
It is one of the most general algorithm design techniques.
Many problems which deal with searching for a set of solutions, or for an
optimal solution satisfying some constraints, can be solved using the
backtracking formulation.
To apply the backtracking method, the desired solution must be expressible as an
n-tuple (x1, …, xn), where xi is chosen from some finite set Si.
The problem is to find a vector which maximizes or minimizes a criterion
function P(x1, …, xn).
The major advantage of this method is that once we know that a partial vector
(x1, …, xi) cannot lead to an optimal solution, then the m(i+1)···m(n) possible remaining
test vectors can be ignored entirely.
Many problems solved using backtracking require that all the solutions satisfy
a complex set of constraints.
These constraints are classified as:
i) Explicit constraints.
ii) Implicit constraints.
1) Explicit constraints:
Explicit constraints are rules that restrict each Xi to take values only from
a given set.
Some examples are:
Xi ≥ 0, or Si = {all non-negative real numbers}
Xi = 0 or 1, or Si = {0, 1}
Li ≤ Xi ≤ Ui, or Si = {a : Li ≤ a ≤ Ui}
All tuples that satisfy the explicit constraints define a possible solution space
for the instance I.
2) Implicit constraints:
The implicit constraints determine which of the tuples in the solution
space of I can actually satisfy the criterion function.
Algorithm:
Algorithm Backtracking(n)
// This schema describes the backtracking procedure. All solutions are generated
// in X[1:n] and printed as soon as they are determined.
{
  k := 1;
  while (k ≠ 0) do
  {
    if (there remains an untried X[k] ∈ T(X[1], X[2], …, X[k−1])
        and Bk(X[1], …, X[k]) is true) then
    {
      if ((X[1], …, X[k]) is a path to an answer node)
        then write (X[1:k]);
      k := k + 1; // consider the next step.
    }
    else k := k − 1; // backtrack to the previous set.
  }
}
Problems commonly solved using backtracking:
1. Sum of subsets.
2. Graph coloring.
3. Hamiltonian cycle.
4. N-Queens problem.
5. Knapsack problem.
(Figure: a portion of the state space tree for a sum-of-subsets instance.
Each node is labelled (s, k, r): the sum s of the weights chosen so far, the
index k of the next weight to consider, and the sum r of the remaining weights.
The root is (0, 1, 73); taking x(1) = 1 leads to (5, 2, 68), while x(1) = 0
leads to (0, 2, 68).)
In the state space tree, edges from level ‗i‘ nodes to ‗i+1‘ nodes are labeled
with the values of Xi, which is either 0 or 1.
The left subtree of the root defines all subsets containing W1.
The right subtree of the root defines all subsets that do not include W1.
Generation of state space tree:
Maintain an array X to represent all elements in the set.
The value of Xi indicates whether the weight Wi is included or not.
Sum is initialized to 0 i.e., s=0.
We have to check, starting from the first node.
Assign X(k) ← 1.
If S + W(k) = M, then print the subset, because the required sum has been
reached.
If this condition is not satisfied, check whether S + W(k) + W(k+1) ≤ M. If so,
generate the left subtree: W(k) can be included, so the sum is incremented and
we check for the next k.
After generating the left subtree, generate the right subtree: for this, check
whether S + W(k+1) ≤ M, because W(k) is omitted and W(k+1) has to be
selected.
Repeat the process to find all possible combinations forming the subset.
Algorithm:
Algorithm SumOfSub(s, k, r)
// Find all subsets of W[1:n] that sum to m. On entry, s + W[k] ≤ m and
// s + r ≥ m, since Bk−1 is true. The weights satisfy W[j] > 0, 1 ≤ j ≤ n,
// and are in non-decreasing order.
{
  // Generate the left child.
  X[k] := 1;
  if (s + W[k] = m) then write (X[1:k]); // subset found; no recursive call needed
  else if (s + W[k] + W[k+1] ≤ m) then SumOfSub(s + W[k], k + 1, r − W[k]);
  // Generate the right child and evaluate Bk.
  if ((s + r − W[k] ≥ m) and (s + W[k+1] ≤ m)) then
  {
    X[k] := 0;
    SumOfSub(s, k + 1, r − W[k]);
  }
}
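The recursion above can be written as a runnable Python sketch. This is illustrative and not from the text: the weights must be supplied in non-decreasing order, and the test instance w = (5, 10, 12, 13, 15, 18), m = 30 is chosen only for demonstration.

```python
def sum_of_subsets(w, m):
    """Backtracking for sum of subsets; w must be in non-decreasing order.
    Returns every 0/1 vector x with sum(w[i] * x[i]) == m."""
    n, solutions = len(w), []
    x = [0] * n

    def go(s, k, r):
        # Invariant on entry: s + w[k] <= m and s + r >= m.
        x[k] = 1                                   # left child: include w[k]
        if s + w[k] == m:
            solutions.append(x[:k + 1] + [0] * (n - k - 1))
        elif k + 1 < n and s + w[k] + w[k + 1] <= m:
            go(s + w[k], k + 1, r - w[k])
        # Right child: exclude w[k], but only if a solution is still possible.
        if s + r - w[k] >= m and k + 1 < n and s + w[k + 1] <= m:
            x[k] = 0
            go(s, k + 1, r - w[k])
        x[k] = 0                                   # undo before returning

    if w and w[0] <= m and sum(w) >= m:
        go(0, 0, sum(w))
    return solutions

print(sum_of_subsets([5, 10, 12, 13, 15, 18], 30))
```

For this instance the three subsets found are {5, 10, 15}, {5, 12, 13}, and {12, 18}.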
(Figure: an 8-vertex graph with vertices 1, 2, 3, 4 in the top row and
8, 7, 6, 5 in the bottom row.)
Two Hamiltonian cycles in this graph are:
→ 1, 3, 4, 5, 6, 7, 8, 2, 1 and
→ 1, 2, 8, 7, 6, 5, 4, 3, 1.
The backtracking algorithm helps to find Hamiltonian cycle for any type of
graph.
Procedure:
1. Define a solution vector X = (x1, …, xn), where xi represents the i-th visited
vertex of the proposed cycle.
2. Create a cost adjacency matrix for the given graph.
3. Initialize the solution array to all zeros except X(1) = 1, because the cycle
should start at vertex 1.
4. Now find the second vertex to be visited in the cycle.
5. The vertices from 1 to n are included in the cycle one by one, by checking two
conditions:
   1. There should be a path from the previously visited vertex to the current
      vertex.
   2. The current vertex must be distinct and should not have been visited
      earlier.
6. When these two conditions are satisfied, the current vertex is included in the
cycle; otherwise, the next vertex is tried.
7. When the n-th vertex is visited, check whether there is a path from the n-th
vertex back to the first vertex. If there is no path, go back one step and try
the next vertex after the previously visited one.
8. Repeat the above steps to generate all possible Hamiltonian cycles.
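The procedure above can be sketched in runnable Python. This is an illustrative version, not from the text: vertices are 0-indexed, the cycle is fixed to start at vertex 0, and the graph is given as a 0/1 adjacency matrix.

```python
def hamiltonian_cycle(graph):
    """Backtracking search for a Hamiltonian cycle.
    graph is an adjacency matrix (graph[i][j] = 1 if edge i-j exists).
    Returns a vertex list starting and ending at vertex 0, or None."""
    n = len(graph)
    x = [0] * n                      # x[k] = k-th visited vertex; start at 0

    def solve(k):
        if k == n:                   # all vertices placed: need edge back home
            return graph[x[n - 1]][x[0]] == 1
        for v in range(1, n):
            # Condition 1: edge from the previously visited vertex to v.
            # Condition 2: v has not been visited earlier.
            if graph[x[k - 1]][v] == 1 and v not in x[:k]:
                x[k] = v
                if solve(k + 1):
                    return True
                x[k] = 0             # backtrack and try the next vertex
        return False

    return x + [0] if solve(1) else None

# Cycle graph on 4 vertices: 0-1-2-3-0
g = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
print(hamiltonian_cycle(g))
```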
Second, we have to check that no two queens are in the same column.
The function used to check these conditions is Place(k, i), where k is the row
(the k-th queen) and i is the proposed column position X(k).
Third, we have to check that no two queens are on the same diagonal.
Consider a two-dimensional array A[1:n, 1:n]. Every element on a diagonal that
runs from upper left to lower right has the same value of (row − column), and
every element on a diagonal that runs from lower left to upper right has the
same value of (row + column).
Suppose two queens are at positions (i, j) and (k, l). Then they lie on the same
diagonal if and only if |j − l| = |i − k|.
Algorithm NQueens(k, n)
// Using backtracking, this prints all possible placements of n queens on an
// n×n chessboard so that they are non-attacking.
{
  for i := 1 to n do
  {
    if Place(k, i) then
    {
      X[k] := i;
      if (k = n) then write (X[1:n]);
      else NQueens(k + 1, n);
    }
  }
}
Example: 4 queens.
Two possible solutions are:

Solution 1 (2 4 1 3):      Solution 2 (3 1 4 2):
. Q . .                    . . Q .
. . . Q                    Q . . .
Q . . .                    . . . Q
. . Q .                    . Q . .
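The same algorithm in runnable Python (an illustrative sketch; Place is implemented with the column and diagonal tests described above, and columns are 1-based as in the text):

```python
def n_queens(n):
    """Backtracking N-queens: returns all solutions as column tuples.
    x[k] is the column of the queen in row k."""
    solutions, x = [], [0] * n

    def place(k, i):
        # Queen k can sit in column i if no earlier queen j shares the
        # column (x[j] == i) or a diagonal (|x[j] - i| == k - j).
        return all(x[j] != i and abs(x[j] - i) != k - j for j in range(k))

    def solve(k):
        for i in range(1, n + 1):
            if place(k, i):
                x[k] = i
                if k == n - 1:
                    solutions.append(tuple(x))   # all n queens placed
                else:
                    solve(k + 1)
        return solutions

    return solve(0)

print(n_queens(4))   # the two solutions (2, 4, 1, 3) and (3, 1, 4, 2)
```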
(Figure: a 5-vertex graph with the adjacencies listed below.)
1 is adjacent to 2, 3, 4.
2 is adjacent to 1, 3, 4, 5
3 is adjacent to 1, 2, 4
4 is adjacent to 1, 2, 3, 5
5 is adjacent to 2, 4
(Figure: the same 5-vertex graph drawn with vertex 1 on top, vertices 2, 3 in
the middle row and 5, 4 in the bottom row.)

Second example: a graph with N = 4 vertices (1, 2 in the top row; 4, 3 in the
bottom row) to be colored with M = 3 colors. The graph is represented by its
adjacency matrix:

0 1 0 1
1 0 1 0
0 1 0 1
1 0 1 0
The problem is to color the given graph of 4 nodes using 3 colors.
Node 1 can take any one of the 3 colors.
The state space tree gives all possible colorings: the numbers inside the
circles are the nodes, and the number on a branch is the color assigned to the node.
State space tree
Algorithm:
Algorithm mColoring(k)
// The graph is represented by its Boolean adjacency matrix G[1:n, 1:n]. All
// assignments of 1, 2, …, m to the vertices of the graph such that adjacent
// vertices are assigned distinct integers are printed. k is the index of the
// next vertex to color.
{
  repeat
  {
    // Generate all legal assignments for X[k].
    NextValue(k); // Assign to X[k] a legal color.
    if (X[k] = 0) then return; // No new color possible.
    if (k = n) then // At most m colors have been used to color the n vertices.
      write (X[1:n]);
    else mColoring(k + 1);
  } until (false);
}
Algorithm NextValue(k)
// X[1], …, X[k−1] have been assigned integer values in the range [1, m] such that
// adjacent vertices have distinct colors. A value for X[k] is determined in the
// range [0, m]. X[k] is assigned the next-highest-numbered color while maintaining
// distinctness from the adjacent vertices of vertex k. If no such color exists,
// X[k] is 0.
{
  repeat
  {
    X[k] := (X[k] + 1) mod (m + 1); // Next highest color.
    if (X[k] = 0) then return; // All colors have been used.
    for j := 1 to n do
    {
      // Check if this color is distinct from the adjacent colors.
      if ((G[k, j] ≠ 0) and (X[k] = X[j]))
        // If (k, j) is an edge and adjacent vertices have the same color.
        then break;
    }
    if (j = n + 1) then return; // New color found.
  } until (false); // Otherwise try to find another color.
}
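mColoring and NextValue translate to runnable Python as follows. This is an illustrative, 0-indexed sketch (not from the text); the example is the 4-vertex, 3-color instance above.

```python
def m_coloring(graph, m):
    """All assignments of colors 1..m to vertices so that adjacent
    vertices get distinct colors. graph is a 0/1 adjacency matrix."""
    n = len(graph)
    x = [0] * n
    out = []

    def next_value(k):
        # Assign x[k] the next-highest color distinct from the colors of
        # all vertices adjacent to k; x[k] = 0 if no such color exists.
        while True:
            x[k] = (x[k] + 1) % (m + 1)
            if x[k] == 0:
                return
            if all(not (graph[k][j] and x[k] == x[j]) for j in range(n)):
                return

    def color(k):
        while True:
            next_value(k)
            if x[k] == 0:
                return                 # no new color possible: backtrack
            if k == n - 1:
                out.append(tuple(x))   # all n vertices colored
            else:
                color(k + 1)

    color(0)
    return out

# The 4-vertex example from the text (a 4-cycle), m = 3 colors.
G = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
print(len(m_coloring(G, 3)))   # number of proper 3-colorings
```

For a cycle on 4 vertices, the chromatic polynomial gives (m−1)^4 + (m−1) = 16 + 2 = 18 proper 3-colorings, which the sketch reproduces.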
4.1.5 Knapsack problem using backtracking
Given n positive weights Wi, n positive profits Pi, and a knapsack of capacity
m, choose Xi ∈ {0, 1} so that
    Σ Wi·Xi ≤ m   (1 ≤ i ≤ n)
and
    Σ Pi·Xi   (1 ≤ i ≤ n)
is maximized.
Algorithm:
Algorithm BKnap(k, cp, cw)
// m is the size of the knapsack; n is the number of weights and profits; W[]
// and P[] are the weights and profits, with P[i]/W[i] ≥ P[i+1]/W[i+1].
// fw is the final weight of the knapsack; fp is the final maximum profit.
// X[k] = 0 if W[k] is not in the knapsack; else X[k] = 1.
{
  // Generate the left child.
  if (cw + W[k] ≤ m) then
  {
    Y[k] := 1;
    if (k < n) then BKnap(k + 1, cp + P[k], cw + W[k]);
    if ((cp + P[k] > fp) and (k = n)) then
    {
      fp := cp + P[k]; fw := cw + W[k];
      for j := 1 to k do X[j] := Y[j];
    }
  }
  // Generate the right child only if the bound promises a better profit.
  if (Bound(cp, cw, k) ≥ fp) then
  {
    Y[k] := 0;
    if (k < n) then BKnap(k + 1, cp, cw);
    if ((cp > fp) and (k = n)) then
    {
      fp := cp; fw := cw;
      for j := 1 to k do X[j] := Y[j];
    }
  }
}
Example:
m = 6, n = 3
Wi = (2, 3, 4), Pi = (1, 2, 5)
Ordered so that Pi/Wi is non-increasing: W = (4, 3, 2), P = (5, 2, 1).

Trace:
fp = −1.
k = 1: 0 + W[1] = 4 ≤ 6, so cw = 4, cp = 5, Y(1) = 1; move to k = 2.
k = 2: cw + W[2] = 4 + 3 = 7 > 6, so Y(2) = 0.
Bound(5, 4, 2) is evaluated with b = 5, c = 4; for i = 3: c = 4 + 2 = 6 and
6 ≤ 6, so it returns 5 + (1 − (6 − 6))/(2 × 1) = 5.5, which is not less than fp.
So k = k + 1, i.e., k = 3.
k = 3: cw + W[3] = 4 + 2 = 6 ≤ 6, so cw = 6, cp = 6, Y(3) = 1; k = 4.
Since k = 4 > n = 3: fp = 6, fw = 6, and X = Y = (1, 0, 1).

The solution is Xi = (1, 0, 1):
maximum weight = 4 + 2 = 6
maximum profit = (1 × 5) + (0 × 2) + (1 × 1) = 6.
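A runnable Python sketch of BKnap (illustrative, not from the text): Bound is implemented as the usual greedy fractional upper bound, and the instance is the one traced above after ordering by profit/weight.

```python
def bknap(p, w, m):
    """Backtracking 0/1 knapsack. p and w must be sorted so that
    p[i]/w[i] >= p[i+1]/w[i+1]. Returns (best_profit, best_x)."""
    n = len(p)
    best = {"fp": -1, "fw": 0, "x": [0] * n}
    y = [0] * n

    def bound(cp, cw, k):
        # Greedy upper bound on the profit achievable from level k,
        # allowing a fractional part of the first item that does not fit.
        b, c = cp, cw
        for i in range(k, n):
            if c + w[i] <= m:
                c += w[i]
                b += p[i]
            else:
                return b + (m - c) * p[i] / w[i]
        return b

    def go(k, cp, cw):
        if k == n:                       # leaf: record a better solution
            if cp > best["fp"]:
                best["fp"], best["fw"], best["x"] = cp, cw, y[:]
            return
        if cw + w[k] <= m:               # left child: include item k
            y[k] = 1
            go(k + 1, cp + p[k], cw + w[k])
        if bound(cp, cw, k + 1) >= best["fp"]:  # right child: exclude item k
            y[k] = 0
            go(k + 1, cp, cw)
        y[k] = 0

    go(0, 0, 0)
    return best["fp"], best["x"]

# The traced instance: P = (5, 2, 1), W = (4, 3, 2), m = 6.
print(bknap([5, 2, 1], [4, 3, 2], 6))   # (6, [1, 0, 1])
```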
Chapter 5: Branch And Bound Method
5.1. Introduction
The design technique known as branch and bound is very similar to
backtracking(seen in unit 4) in that it searches a tree model of the solution space
and is applicable to a wide variety of discrete combinatorial problems.
Each node in the combinatorial tree generated in the last Unit defines a problem
state. All paths from the root to other nodes define the state space of the problem.
Solution states are those problem states 's' for which the path from the root to 's'
defines a tuple in the solution space. The leaf nodes in the combinatorial tree are the
solution states.
Answer states are those solution states 's' for which the path from the root to 's'
defines a tuple that is a member of the set of solutions (i.e.,it satisfies the implicit
constraints) of the problem.
The tree organization of the solution space is referred to as the state space tree.
A node which has been generated and all of whose children have not yet been
generated is called a live node.
The live node whose children are currently being generated is called the E-node
(node being expanded).
A dead node is a generated node, which is not to be expanded further or all of
whose children have been generated.
Bounding functions are used to kill live nodes without generating all their children.
Depth first node generation with bounding function is called backtracking. State
generation methods in which the E-node remains the E-node until it is dead lead to
branch-and-bound method.
The term branch-and-bound refers to all state space search methods in which all
children of the E-node are generated before any other live node can become the E-
node.
In branch-and-bound terminology, a breadth-first-search (BFS)-like state space
search is called FIFO (First In First Out) search, as the list of live nodes is
a first-in-first-out list (queue).
A D-search (depth search) state space search is called LIFO (Last In First
Out) search, as the list of live nodes is a last-in-first-out list (stack).
Bounding functions are used to help avoid the generation of subtrees that do not
contain an answer node.
The branch-and-bound algorithms search a tree model of the solution space to
get the solution. However, algorithms of this type are oriented more toward
optimization. An algorithm of this type specifies a real-valued cost function for
each of the nodes that appear in the search tree.
Usually, the goal here is to find a configuration for which the cost function is
minimized. Branch-and-bound algorithms are rarely simple; they tend to be quite
complicated in many cases.
Example [4-queens] Let us see how a FIFO branch-and-bound algorithm would
search the state space tree for the 4-queens problem.
Figure 5.1
Initially, there is only one live node, node1. This represents the case in which no
queen has been placed on the chessboard. This node becomes the E-node.
It is expanded and its children,nodes2,18,34,and 50,are generated.
These nodes represent a chessboard with queen1 in row 1and columns 1, 2, 3,
and 4 respectively.
The only live nodes now are 2, 18, 34, and 50. If the nodes are generated in
this order, then the next E-node is node 2.
It is expanded and the nodes 3,8, and 13 are generated. Node 3 is immediately
killed using the bounding function. Nodes 8 and 13 are added to the queue of live
nodes.
Node 18 becomes the next E-node. Nodes 19, 24, and 29 are generated. Nodes
19 and 24 are killed as a result of the bounding functions. Node 29 is added to
the queue of live nodes.
Now the E-node is node 34. Figure 5.1 shows the portion of the full state space
tree that is generated by a FIFO branch-and-bound search. Nodes that are killed
as a result of the bounding functions have a "B" under them.
Numbers inside the nodes correspond to the node numbers in the full state space
tree. Numbers outside the nodes give the order in which the nodes are generated
by FIFO branch-and-bound.
At the time the answer node, node 31, is reached, the only live nodes remaining
are nodes 38 and 54.
In the LC search method, we can speed up the search by using a ranking (cost)
function.
Figure 5.2
In the example above, when node 30 is generated, it should be obvious to the
search algorithm that this node will lead to an answer node in one move. However,
the rigid FIFO rule requires that all live nodes generated before node 30 be
expanded first.
The search for an answer node can often be speeded up by using an "intelligent"
ranking function c'(·) for live nodes. The next E-node is selected on the basis
of this ranking function.
In the 4-queens example, if we use a ranking function that assigns node 30 a
better rank than all other live nodes, then node 30 will become the E-node
following node 29. The remaining live nodes will never become E-nodes, as the
expansion of node 30 results in the generation of an answer node (node 31).
The ideal way to assign ranks would be on the basis of the additional
computational effort (or cost) needed to reach an answer node from the live
node. For any node x, this cost could be:
(1) the number of nodes in the subtree with root x that need to be generated
before any answer node is generated, or, more simply,
(2) the number of levels by which the nearest answer node (in the subtree with
root x) is separated from x.
Using cost measure (2), the cost of the root of the tree of Figure 5.1 is 4
(node 31 is four levels from node 1). The costs of nodes 18 and 34, 29 and 35,
and 30 and 38 are respectively 3, 2, and 1. The costs of all remaining nodes on
levels 2, 3, and 4 are respectively greater than 3, 2, and 1.
Using these costs as a basis to select the next E-node, the E-nodes are nodes 1,
18, 29, and 30 (in that order).The only other nodes to get generated are nodes 2, 34,
50, 19, 24, 32, and 31.
The difficulty of using the ideal cost function is that computing the cost of a
node usually involves a search of the subtree of x for an answer node. Hence, by
the time the cost of a node is determined, that subtree has been searched and
there is no need to explore x again. For this reason, search algorithms usually
rank nodes only on the basis of an estimate c'(·) of their cost.
Let g'(x) be an estimate of the additional effort needed to reach an answer
node from x. Node x is assigned a rank using a function c'(·) such that
c'(x) = f(h(x)) + g'(x), where h(x) is the cost of reaching x from the root and
f(·) is any non-decreasing function.
listnode = record
{
  listnode *next, *parent;
  float cost;
}
Algorithm LCSearch(t)
// Search tree t for an answer node.
{
  if *t is an answer node then output *t and return;
  E := t; // E-node.
  Initialize the list of live nodes to be empty;
  repeat
  {
    for each child x of E do
    {
      if x is an answer node then output the path from x to t and return;
      Add(x); // x is a new live node.
      (x → parent) := E; // Pointer for the path to the root.
    }
    if there are no more live nodes then
    {
      write ("No answer node"); return;
    }
    E := Least();
  } until (false);
}
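The LCSearch skeleton can be sketched in Python with a priority queue standing in for the Add()/Least() list of live nodes. This is illustrative only: the toy problem (reach 13 from 1 via n+1 and 2n moves) and its ranking function are invented for the example.

```python
import heapq

def lc_search(start, children, is_answer, c_hat):
    """Least-cost search: live nodes are kept in a min-heap ordered by the
    ranking function c_hat; returns the path from start to an answer node."""
    counter = 0                                  # tie-breaker for equal ranks
    heap = [(c_hat(start), counter, start, [start])]
    seen = {start}
    while heap:
        _, _, e, path = heapq.heappop(heap)      # next E-node: least c_hat
        if e is not None and is_answer(e):
            return path
        for x in children(e):                    # generate all children of E
            if x not in seen:
                seen.add(x)
                counter += 1
                heapq.heappush(heap, (c_hat(x), counter, x, path + [x]))
    return None                                  # no answer node

path = lc_search(1,
                 children=lambda n: [n + 1, 2 * n],
                 is_answer=lambda n: n == 13,
                 c_hat=lambda n: abs(13 - n))
print(path)
```

Each popped node plays the role of the E-node; pushing its children corresponds to Add(x), and heappop corresponds to Least().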
Problem State
From the initial arrangement, four moves are possible
We can move any one of the tiles numbered 2,3,5,6 to the empty spot
These arrangements are called states of the puzzle
To speed up the search, we can associate a cost c'(x) with each node x in the
state space tree:
    c'(x) = f(x) + g'(x)
where f(x) is the length of the path from the root to node x, and g'(x) is an
estimate of the length of a shortest path from x to a goal node in the subtree
with root x; here g'(x) = the number of non-blank tiles not in their goal
position.
    c'(2) = 1 + 4 = 5
    c'(3) = 1 + 4 = 5
    c'(4) = 1 + 2 = 3 (least-cost node)
    c'(5) = 1 + 4 = 5
The live node with least c' is node 4, which becomes the next E-node.
It generates all its children, 10, 11, and 12, and becomes a dead node.
The live nodes are now 2, 3, 5, 10, 11, and 12.
To determine the next E-node, c'(x) is computed for all live nodes, and the
live node with least c'(x) is selected as the next E-node:
    c'(2) = 1 + 4 = 5
    c'(3) = 1 + 4 = 5
    c'(5) = 1 + 4 = 5
    c'(10) = 2 + 1 = 3 (least-cost node)
    c'(11) = 2 + 3 = 5
The live node with least c' is node 10, which becomes the next E-node. It
generates its children 22 and 23 and becomes a dead node.
Node 23 is determined to be a goal node and the search terminates.
In this case, LC search is as efficient as using the exact function c(·); the
LC search is far more selective than the other search methods.
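The cost formula c'(x) = f(x) + g'(x) with the misplaced-tile estimate can be computed directly. An illustrative sketch (not from the text), using a 16-tuple board with 0 denoting the blank:

```python
def g_hat(state, goal):
    """Estimate of the distance to the goal: the number of non-blank
    tiles that are not in their goal position (0 denotes the blank)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def c_hat(f, state, goal):
    """c'(x) = f(x) + g'(x): path length from the root plus the estimate."""
    return f + g_hat(state, goal)

goal = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0)
# A state one move from the goal (tile 15 and the blank swapped), so f(x) = 1.
state = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 0, 15)
print(c_hat(1, state, goal))   # 1 + 1 = 2
```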
5.3 Bounding
A branch-and-bound algorithm searches the state space tree using any search
mechanism in which all the children of the E-node are generated before
another node becomes the E-node.
We assume that each answer node x has a cost c(x) associated with it and
that a minimum-cost answer node is to be found. Three common search
strategies are FIFO, LIFO, and LC.
A cost function c'(·) such that c'(x) ≤ c(x) is used to provide lower bounds on
solutions obtainable from any node x.
If upper is an upper bound on the cost of a minimum-cost solution, then all
live nodes x with c'(x) > upper may be killed, as all answer nodes reachable
from x have cost c(x) ≥ c'(x) > upper.
The starting value for upper can be set to infinity.
Each time a new answer node is found, the value of upper can be updated.
c'(x), upper, and c(x) can be computed using the formulas given below.
Compute c(x) using the following rules:
1. If x is a leaf node, c(x) is the sum of the penalties of the jobs that are
not on the path from the root to node x.
2. If x is an interior node, c(x) is the minimum penalty corresponding to any
leaf node in the subtree with root x, i.e.,
c(x) = min{c(child-1), c(child-2), …, c(child-n)}, where child-1, …, child-n
are the children of x.
Figure 8.6 corresponds to the variable tuple size formulation, while Figure 8.7
corresponds to the fixed tuple size formulation. In both figures, square nodes
represent infeasible subsets. In Figure 8.6, all non-square nodes are answer
nodes; node 9 represents an optimal solution and is the only minimum-cost answer
node. For this node, J = {2, 3} and the penalty (cost) is 8. In Figure 8.7, only
non-square leaf nodes are answer nodes; node 25 represents the optimal solution
and is also a minimum-cost answer node. This node corresponds to J = {2, 3} and
a penalty of 8.
The costs of the answer nodes of figure 8.7 are given below the nodes.
5.4 Travelling Salesman Problem
Procedure:
Step 0: Generate the cost matrix C for the given graph G.
Step 1: ROW REDUCTION: for all rows, do Step 2.
Step 2: Find the least cost in the row and subtract it from the rest of the
elements of that row.
Step 3: COLUMN REDUCTION: using the row-reduced cost matrix, for all columns,
do Step 4.
Step 4: Find the least cost in the column and subtract it from the rest of the
elements of that column.
Step 5: Preserve the cost matrix C (row-reduced first, then column-reduced) for
the i-th time.
Step 6: List all edges (i, j) having cost = 0.
Step 7: Calculate the effective cost of these edges:
        EC(i, j) = least cost in the i-th row excluding (i, j)
                 + least cost in the j-th column excluding (i, j).
Step 8: Compare all effective costs and pick the largest, l(EC). If two or more
edges have the same effective cost, arbitrarily choose any one among them.
Step 9: Delete (i, j): delete the i-th row and the j-th column, and change the
(j, i) value to infinity (to avoid forming a subtour). If (j, i) is not
present, leave it.
Step 10: Repeat Steps 1 to 9 until the resultant cost matrix is of order 2×2,
then reduce it (both row and column reduction).
Step 11: Use the preserved cost matrices Cn, Cn−1, …, C1. From each preserved
matrix in turn, choose the first edge [i, j] having value 0, and then leave
that matrix.
Step 12: Use the edges obtained in Step 11 to generate a complete tour.
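Steps 1–4 (row and column reduction) can be sketched in runnable Python. This is illustrative only; the 3-city cost matrix in the example is hypothetical, with infinity on the diagonal as in the worked example below.

```python
INF = float("inf")

def reduce_matrix(c):
    """Row reduction followed by column reduction of a TSP cost matrix.
    Subtracts each row's minimum from its row, then each column's minimum
    from its column; returns the reduced matrix and the total subtracted."""
    n = len(c)
    c = [row[:] for row in c]            # work on a copy
    total = 0
    for i in range(n):                   # row reduction (Steps 1-2)
        m = min(c[i])
        if 0 < m < INF:
            total += m
            c[i] = [v - m for v in c[i]]
    for j in range(n):                   # column reduction (Steps 3-4)
        m = min(c[i][j] for i in range(n))
        if 0 < m < INF:
            total += m
            for i in range(n):
                c[i][j] -= m
    return c, total

# A small 3-city example (hypothetical costs; INF on the diagonal).
c = [[INF, 10, 15],
     [10, INF, 20],
     [15, 20, INF]]
reduced, total = reduce_matrix(c)
print(total)   # 40: every row and column now contains a zero
```

The amount subtracted (here 40) is the reduction lower bound used by branch-and-bound TSP solvers.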
Example: Given graph G, a complete directed graph on five cities (figure
omitted; its edge costs appear in the cost matrix reductions below).
Phase 1:
Step 1: Row reduction of C.
The least elements: in R1, edge <1,2> = 25; in R2, edge <2,1> = 5;
in R3, edge <3,5> = 1; in R4, edge <4,5> = 6; in R5, edge <5,3> = 7.
Subtract these from the other elements of their rows:

        1    2    3    4    5
  1     ∞    0   15    6    2
  2     0    ∞   12   25   20
  3    18   14    ∞    5    0
  4     3   44   18    ∞    0
  5    15    1    0    3    ∞
Step 3: Column reduction of the row-reduced matrix.
The least elements: in C1, edge <2,1> = 0; in C2, edge <1,2> = 0;
in C3, edge <5,3> = 0; in C4, edge <5,4> = 3; in C5, edges <3,5> and <4,5> = 0.
Subtract these from the other elements of their columns:

        1    2    3    4    5
  1     ∞    0   15    3    2
  2     0    ∞   12   22   20
  3    18   14    ∞    2    0
  4     3   44   18    ∞    0
  5    15    1    0    0    ∞

Step 5: Preserve the above matrix as C1.
(Steps 6–9 of Phase 1 select the zero-cost edge (2,1); row 2 and column 1 are
deleted and the entry (1,2) is set to ∞.) The resulting matrix is:

        2    3    4    5
  1     ∞   15    3    2
  3    14    ∞    2    0
  4    44   18    ∞    0
  5     1    0    0    ∞

Step 10: The reduced cost matrix is of order 4×4, not 2×2; therefore go to
Step 1.
Phase 2:
Step 1: Row reduction (the row 1 minimum is 2; the other rows already contain
a 0):

        2    3    4    5
  1     ∞   13    1    0
  3    14    ∞    2    0
  4    44   18    ∞    0
  5     1    0    0    ∞

Step 3: Column reduction (the column 2 minimum is 1):

        2    3    4    5
  1     ∞   13    1    0
  3    13    ∞    2    0
  4    43   18    ∞    0
  5     0    0    0    ∞

Step 5: Preserve the above matrix as C2.
Step 6: List the zero-cost edges of C2.
Step 7: Calculation of the effective costs (as in Phase 1); the edge (4,5) is
selected, so row 4 and column 5 are deleted and (5,4) is set to ∞.
Step 10: The order of the cost matrix is 3×3, not 2×2; hence go to Step 1.
Step 1: Do RR
2 3 4
1
12 0
3 11 0
5 0 0
Step 3: Do CR
2 3 4
1
12 0
3
5
11 0
0 0
Step 5: Preserve the above in C3
2 3 4
1
0 12
11 0
0 0
Step 6: L={(1,4), (3,4), (5,2), (5,3)}
Step 7: Calculation of effective costs:
EC of edge (1,4) = 12 + 0 = 12
EC of edge (3,4) = 11 + 0 = 11
EC of edge (5,2) = 0 + 11 = 11
EC of edge (5,3) = 0 + 12 = 12
Step 8: Here two edges, (1,4) and (5,3), have the largest effective cost 12;
arbitrarily choose (1,4).
Step 9: Delete (1,4): delete row 1 and column 4, and change (4,1) to ∞ if it
exists. Now the cost matrix is:

        2    3
  3    11    ∞
  5     0    0

Do row reduction:

        2    3
  3     0    ∞
  5     0    0

Column reduction changes nothing, so

C4 =    2    3
  3     0    ∞
  5     0    0
Step 11: List C1, C2, C3 and C4.

C4:     2    3
  3     0    ∞
  5     0    0

C3:     2    3    4
  1     ∞   12    0
  3    11    ∞    0
  5     0    0    ∞

C2:     2    3    4    5
  1     ∞   13    1    0
  3    13    ∞    2    0
  4    43   18    ∞    0
  5     0    0    0    ∞

C1:     1    2    3    4    5
  1     ∞    0   15    3    2
  2     0    ∞   12   22   20
  3    18   14    ∞    2    0
  4     3   44   18    ∞    0
  5    15    1    0    0    ∞
Step 12:
Use C4: pick the first zero-cost edge, (3,2). Hence T = {(3,2)}.
Use C3: pick the zero-cost edge (1,4). Hence T = {(3,2), (1,4)}.
Use C2: pick an edge (i, j) with least cost. Here (1,5) is not possible,
because city 1 has already been chosen; then (3,5) is not possible, as that
index has already been chosen. Next choose (4,5) = 0.
Hence T = {(3,2), (1,4), (4,5)}.
Use C1: pick an edge with least cost. (1,2) is not possible; choose (2,1).
Hence T = {(3,2), (1,4), (4,5), (2,1)}.
Solution:
From the above list:
3 → 2 → 1 → 4 → 5
Thus the travelling salesperson starts the tour from city 3, visits city 2,
city 1, city 4 and city 5, and returns to the starting city 3.
Final result:
3 → 2 → 1 → 4 → 5 → 3
The cost is 15 + 5 + 31 + 6 + 7 = 64.
5.5 0/1 Knapsack Problem
Here xi = 0 or 1, 1 ≤ i ≤ n.
There are two types of formulation:
1. Fixed tuple size formulation.
2. Variable tuple size formulation.
For a minimum-cost answer node to correspond to an optimal solution, we define
    c(x) = − Σ pi·xi   (1 ≤ i ≤ n)
for every answer node x, and c(x) = ∞ for infeasible leaf nodes.
Example:
Consider the knapsack instance n = 4, (p1, p2, p3, p4) = (10, 10, 12, 18),
(w1, w2, w3, w4) = (2, 4, 6, 9), and m = 15.
There are two methods to solve the knapsack problem
LC Branch and Bound Solution
FIFO Branch and Bound Solution
Algorithm:
Algorithm Ubound(cp,cw,k,m)
// cp is the current profit total ,cw is the current weight total; k is the index of the last
removed item; and m is the knapsack size//
{
b=cp; c=cw;
for i=k+1 to n do
{
if((c+w[i])<=m) then
{
c=c+w[i]; b=b-p[i];
}
}
return b;
}
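The values u(1) = −32 and c'(1) = −38 used in the trace below can be reproduced with a short runnable sketch. This is illustrative, 0-indexed, and assumes the profit vector (10, 10, 12, 18), which is what the worked numbers −32 and −38 imply; c' differs from u only in also adding the fractional part of the first object that does not fit.

```python
def ubound(cp, cw, k, m, p, w):
    """u(x): negated profit achievable by greedily adding whole objects
    from index k on; mirrors algorithm UBound (profits enter negated)."""
    b, c = cp, cw
    for i in range(k, len(w)):
        if c + w[i] <= m:
            c += w[i]
            b -= p[i]
    return b

def c_hat(cp, cw, k, m, p, w):
    """c'(x): like u(x), but the first object that does not fit is added
    fractionally, giving a stronger (smaller) lower bound."""
    b, c = cp, cw
    for i in range(k, len(w)):
        if c + w[i] <= m:
            c += w[i]
            b -= p[i]
        else:
            return b - (m - c) / w[i] * p[i]
    return b

# The instance of the trace: p = (10, 10, 12, 18), w = (2, 4, 6, 9), m = 15.
p, w, m = [10, 10, 12, 18], [2, 4, 6, 9], 15
print(ubound(0, 0, 0, m, p, w), c_hat(0, 0, 0, m, p, w))   # -32 -38.0
```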
For any node x in the state space tree, we compute c'(x) and u(x), and maintain
upper.
For any node x, if c'(x) > upper, the node x is killed immediately.
Initially upper = u(1), and thereafter upper is updated to the minimum u(x)
value among the nodes at the current level.
u(x) is calculated using algorithm UBound: it is minus the sum of the profits
of the fully included objects.
u(1) = b (the return value of UBound)
     = −32 (objects 1, 2 and 3 are fully included, so −(p1 + p2 + p3)
       = −(10 + 10 + 12) = −32)
Node 1:
Initially, the first 3 whole objects and part of the 4th object are included in
the knapsack, i.e., (1, 1, 1, 3/9).
c'(x) = − Σ pi·xi   (1 ≤ i ≤ n)
c'(1) = −(1×10 + 1×10 + 1×12 + (3/9)×18) = −38
upper = u(1) = −32
E-node 1 generates its children, nodes 2 and 3, and becomes a dead node. Now
nodes 2 and 3 are live nodes.
To determine the next E-node, compute c'(x), u(x) and upper for the live nodes
2 and 3.
Node 2:
The first object is included (it was already included at node 1, so the same
value is obtained for c'(2)).
Status of included objects: (1, 1, 1, 3/9).
c'(2) = −(1×10 + 1×10 + 1×12 + (3/9)×18) = −38
u(2) = −32 (fully included objects 1, 2 and 3: −(p1 + p2 + p3)
       = −(10 + 10 + 12) = −32)
Node 3:
The first object is not included.
Status of included objects: (0, 1, 1, 5/9).
c'(3) = −(0×10 + 1×10 + 1×12 + (5/9)×18) = −32
u(3) = −22 (fully included objects 2 and 3: −(p2 + p3) = −(10 + 12) = −22)
upper = min{u(2), u(3)} = min{−32, −22} = −32
Since c'(2) ≤ upper and c'(3) ≤ upper, nodes 2 and 3 are not killed and remain
live nodes.
Node 2 becomes the next E-node because c'(2) = −38 < c'(3) = −32.
Node 2 generates its children, nodes 4 and 5.
Node 4:
The second object is included (objects 1 and 2 were already included at nodes 1
and 2, so the same value is obtained for c'(4)).
Status of included objects: (1, 1, 1, 3/9).
c'(4) = −(1×10 + 1×10 + 1×12 + (3/9)×18) = −38
u(4) = −32 (fully included objects 1, 2 and 3: −(10 + 10 + 12) = −32)
Node 5:
The second object is not included.
Status of included objects: (1, 0, 1, 7/9).
c'(5) = −(1×10 + 0×10 + 1×12 + (7/9)×18) = −36
u(5) = −22 (fully included objects 1 and 3: −(p1 + p3) = −(10 + 12) = −22)
upper = min{u(4), u(5)} = min{−32, −22} = −32
Since c'(4) ≤ upper and c'(5) ≤ upper, nodes 4 and 5 are not killed and remain
live nodes.
Among the live nodes 4 and 5, node 4 becomes the next E-node because
c'(4) = −38 < c'(5) = −36.
Node 4 generates its children, nodes 6 and 7.
Node 6:
The third object is included (it was already included at nodes 1, 2 and 4, so
the same value is obtained for c'(6)).
Status of included objects: (1, 1, 1, 3/9).
c'(6) = −(1×10 + 1×10 + 1×12 + (3/9)×18) = −38
u(6) = −32 (fully included objects 1, 2 and 3: −(10 + 10 + 12) = −32)
Node 7:
The third object is not included.
Status of included objects: (1, 1, 0, 1).
c'(7) = −(1×10 + 1×10 + 0×12 + 1×18) = −38
u(7) = −38 (fully included objects 1, 2 and 4: −(p1 + p2 + p4)
       = −(10 + 10 + 18) = −38)
upper = min{u(6), u(7)} = min{−32, −38} = −38
Since c'(6) = c'(7) = upper, nodes 6 and 7 are not killed and remain live
nodes.
Among the live nodes 6 and 7, node 7 becomes the next E-node because
u(7) = −38 < u(6) = −32 (c'(6) and c'(7) are both −38, so u(x) is compared
instead).
Node 7 generates its children, nodes 8 and 9.
Node 8:
The fourth object is included.
Status of included objects: (1, 1, 0, 1).
c'(8) = −(1×10 + 1×10 + 0×12 + 1×18) = −38
u(8) = −38 (fully included objects 1, 2 and 4: −(10 + 10 + 18) = −38)
Node 9:
The fourth object is not included.
Status of included objects: (1, 1, 0, 0).
c'(9) = −(1×10 + 1×10 + 0×12 + 0×18) = −20
u(9) = −20 (fully included objects 1 and 2: −(p1 + p2) = −(10 + 10) = −20)
upper = min{u(8), u(9)} = min{−38, −20} = −38
Since c'(9) = −20 > upper, node 9 is killed.
Nodes 6 and 8 are the live nodes with least cost. Regardless of which becomes
the next E-node, c'(E) ≥ upper, and the search terminates with node 8 as the
answer node.
At this time, the value of upper = −38 together with the path 8, 7, 4, 2, 1 is
printed as output.
(Figure: the LC branch-and-bound state space tree; the answer path is
1–2–4–7–8.)
5.5.2 FIFO Branch and Bound
(The root, node 1, is expanded first; its children, nodes 2 and 3, are added to
the queue.)
c'(2), c'(3) and u(2), u(3) are calculated;
upper = min(u(2), u(3)), i.e., upper = −32.
Node 2 becomes the next E-node. Its children, nodes 4 and 5, are generated and
added to the queue.
c'(4), c'(5) and u(4), u(5) are calculated;
upper = min(u(4), u(5)), i.e., upper = −32.
Node 3 becomes the next E-node. Its children, nodes 6 and 7, are generated and
added to the queue.
c'(6), c'(7) and u(6), u(7) are calculated.
Node 7 is immediately killed because c'(7) > upper (−28 > −32).
upper = min(u(6), u(7)), i.e., upper = −32.
Node 4 becomes the next E-node. Its children, nodes 8 and 9, are generated and
added to the queue.
c'(8), c'(9) and u(8), u(9) are calculated;
upper = min(u(8), u(9)), i.e., upper = −38.
Nodes 5 and 6 in the queue are the next E-nodes; they are not expanded because
c'(5) and c'(6) > upper (−36 and −32 > −38).
Node 8 becomes the next E-node. Its children, nodes 10 and 11, are generated
and added to the queue.
c'(10), c'(11) and u(10), u(11) are calculated.
Node 10 is infeasible, so it is killed. Node 11 has c'(11) > upper, so it is
killed.
Node 9 becomes the next E-node. Its children, nodes 12 and 13, are generated
and added to the queue.
c'(12), c'(13) and u(12), u(13) are calculated;
upper = min(u(12), u(13)), i.e., upper = −38.
Node 13 is killed because c'(13) > upper.
The only remaining node is node 12; it has no children, and the search
terminates.