DAA (Design and Analysis of Algorithms) - CSE

The document outlines the syllabus for the Design and Analysis of Algorithms course (CS T44) for B.Tech./CSE students, detailing various algorithm techniques and performance analysis. It covers topics such as algorithm definitions, performance analysis, sorting and searching algorithms, dynamic programming, backtracking, and branch and bound methods. Additionally, it includes course objectives, unit breakdowns, and references for further reading.

Design and Analysis of

Algorithm-CS T44
B.Tech./ CSE/IV SEMESTER

A.ALBERT ALEX, AP/CSE

Achariya College of Engineering Technology


(Approved by AICTE and affiliated to Pondicherry University)
An ISO 9001:2008 Certified Institution
Achariyapuram, Villianur, Puducherry – 605 110.
www.acet.edu.in

(For internal circulation only)


Contents

Chapter 1 Algorithm

1.1 Algorithm
1.2 Performance Analysis
1.3 Analysis
1.4 Asymptotic Analysis
1.5 Analysis of Searching
1.5.1 Sequential Search
1.5.2 Binary Search
1.5.3 Fibonacci Search
1.6 Analysis of Sorting
1.6.1 Heap Sort
1.6.2 Radix Sort
1.6.3 Insertion Sort
1.6.4 Selection Sort
1.6.5 Bubble Sort
1.7 Recursion
1.8 Analysis of Non-Recursive and Recursive Algorithms
1.9 Solving Recurrence Equations

Chapter 2 Divide and Conquer and Greedy Method

2.1 General Method
2.1.1 Binary Search
2.1.2 Finding the Maximum and Minimum
2.1.3 Merge Sort
2.1.4 Quick Sort
2.1.5 Strassen's Matrix Multiplication
2.2 Greedy Method
2.2.1 General Method
2.2.2 Knapsack Problem (Fractional Knapsack Problem)
2.2.3 Minimum-Cost Spanning Trees
2.2.3.i Kruskal's Algorithm
2.2.3.ii Prim's Algorithm
2.2.4 Single Source Shortest Paths
2.2.5 Job Scheduling with Deadlines
2.2.6 Optimal Storage on Tapes
2.2.7 Optimal Merge Patterns

Chapter 3 Dynamic Programming

3.1 Dynamic Programming
3.1.1 Multistage Graphs
3.1.2 All Pair Shortest Path
3.1.3 0/1 Knapsack Problem
3.1.4 Travelling Salesman Problem
3.1.5 Chained Matrix Multiplication
3.2 Basic Search and Traversal Technique
3.2.1 Techniques for Graphs
3.2.2 Breadth First Search and Traversal
3.2.3 Depth First Search and Traversal
3.2.4 Topological Sorting

Chapter 4 Backtracking

4.1 Backtracking
4.1.1 Sum of Subsets
4.1.2 Hamiltonian Cycles
4.1.3 8-Queens Problem
4.1.4 Graph Colouring
4.1.5 Knapsack Problem Using Backtracking

Chapter 5 Branch and Bound Method

5.1 Introduction
5.2 Least Cost (LC) Search
5.2.1 Control Abstractions for LC Search
5.2.2 15 Puzzle Problem
5.3 Bounding
5.3.1 Example: Job Scheduling with Deadlines Problem
5.4 Travelling Salesman Problem
5.5 0/1 Knapsack Problem
5.5.1 LC Branch and Bound
5.5.2 FIFO Branch and Bound
SYLLABUS
CS T44 DESIGN AND ANALYSIS OF ALGORITHMS

COURSE OBJECTIVE
 To acquaint students with algorithm techniques when programming for the
storage and manipulation of data.
 The concept of data abstraction and the problem of building implementations of
abstract data types are emphasized.

UNIT I
Algorithms: Definitions and notations: standard notations - asymptotic notations – worst
case, best case and average case analysis; big oh, small oh, omega and theta notations;
Analysis of Sorting and Searching: Heap, shell, radix, insertion, selection and bubble sort;
sequential, binary and Fibonacci search. Recursive algorithms, analysis of non-recursive
and recursive algorithms, solving recurrence equations, analyzing control structures.

UNIT II
Divide and Conquer Method: General Method – binary search –maximum and minimum –
merge sort - quick sort – Strassen’s Matrix multiplication. Greedy Method: General
method – knapsack problem – minimum spanning tree algorithms – single source shortest
path algorithm – scheduling, optimal storage on tapes, optimal merge patterns.

UNIT III
Dynamic Programming: General method – multi-stage graphs – all pair shortest path
algorithm – 0/1 Knapsack and Traveling salesman problem – chained matrix
multiplication. Basic Search and Traversal technique: Techniques for binary trees and
graphs – AND/OR graphs – biconnected components – topological sorting.

UNIT IV
Backtracking: The general method – 8-queens problem – sum of subsets – graph coloring
– Hamiltonian cycle – Knapsack problem.

UNIT V
Branch and Bound Method: Least Cost (LC) search – the 15-puzzle problem – control
abstractions for LC-Search – Bounding – FIFO Branch-and-Bound - 0/1 Knapsack
problem – Traveling Salesman Problem. Introduction to NP-Hard and NP-Completeness.

TEXT BOOKS
 Ellis Horowitz, Sartaj Sahni, Sanguthevar Rajasekaran, “Fundamentals of Computer
Algorithms”, Galgotia Publications Pvt. Ltd., 2008.
REFERENCE BOOKS
 Gilles Brassard and Paul Bratley, “Fundamentals of Algorithmics”, PHI, 1997.
 Anany Levitin, “Introduction to Design and Analysis of Algorithms”, Pearson Education,
2005.
Chapter 1: Algorithm

1.1 Algorithm
Informal Definition:
An algorithm is any well-defined computational procedure that takes
some value or set of values as input and produces some value or set of values as
output. Thus an algorithm is a sequence of computational steps that transforms the
input into the output.
Formal Definition:
An Algorithm is a finite set of instructions that, if followed, accomplishes
a particular task. In addition, all algorithms should satisfy the following criteria.

1. INPUT  Zero or more quantities are externally supplied.


2. OUTPUT  At least one quantity is produced.
3. DEFINITENESS  Each instruction is clear and unambiguous.
4. FINITENESS  If we trace out the instructions of an algorithm, then for all
cases, the algorithm terminates after a finite number of steps.
5. EFFECTIVENESS  Every instruction must be very basic so that it can be
carried out, in principle, by a person using only pencil & paper.
Issues in the study of algorithms:
 How to devise or design an algorithm  creating an algorithm.
 How to express an algorithm  definiteness.
 How to analyze an algorithm  time and space complexity.
 How to validate an algorithm  fitness.
 How to test an algorithm  checking for errors.
Algorithm Specification:
Algorithm can be described in three ways.
1. Natural language like English:
When this way is chosen, care should be taken to ensure that
each and every statement is definite.
2. Graphic representation called flowchart:
This method works well when the algorithm is small & simple.
3. Pseudo-code Method:
In this method, we typically describe algorithms as programs,
which resemble languages like Pascal and Algol.
Pseudo-Code Conventions:
1. Comments begin with // and continue until the end of line.
2. Blocks are indicated with matching braces {and}.
3. An identifier begins with a letter. The data types of variables are not explicitly
declared.
4. Compound data types can be formed with records. Here is an example,
node = record
{
data type-1 data-1;
.
.
.
data type-n data-n;
node *link;
}

Here link is a pointer to the record type node. Individual data items of a
record can be accessed with → and period.
5. Assignment of values to variables is done using the assignment statement.
<Variable>:= <expression>;
6. There are two Boolean values TRUE and FALSE.
 Logical Operators AND, OR, NOT
Relational Operators <, <=,>,>=, =,!=
7. The following looping statements are employed.
For, while and repeat-until
While Loop:
While < condition > do
{
<statement-1>
.
.
.

<statement-n>
}
For Loop:
For variable: = value-1 to value-2 step step do

{
<statement-1>
.
.
.
<statement-n>
}
repeat-until:

repeat
<statement-1>
.
.
.
<statement-n>
until<condition>
8. A conditional statement has the following forms.
 If <condition> then <statement>
 If <condition> then <statement-1>
Else <statement-1>
Case statement:
Case
{
:<condition-1>:<statement-1>
.
.
.

:<condition-n>:<statement-n>
: else :<statement-n+1>
}
9. Input and output are done using the instructions read & write.

10. There is only one type of procedure:


Algorithm, the heading takes the form,
Algorithm Name (Parameter lists)
 As an example, the following algorithm finds and returns the maximum of n
given numbers:
1. algorithm Max(A,n)
2. // A is an array of size n
3. {
4. Result := A[1];
5. for I:= 2 to n do
6. if A[I] > Result then
7. Result :=A[I];
8. return Result;
9. }
In this algorithm (named Max), A & n are procedure parameters. Result & i
are local variables.

 Next we present two examples to illustrate the process of translating a problem
into an algorithm.
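As a hedged illustration, the Max algorithm above translates directly into Python (0-based list indexing replaces the pseudocode's 1-based arrays; the function name is ours):

```python
def max_element(a):
    """Return the maximum of the non-empty list a, following Algorithm Max."""
    result = a[0]          # Result := A[1]
    for x in a[1:]:        # for i := 2 to n do
        if x > result:     # if A[i] > Result
            result = x
    return result
```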

1.2 Performance Analysis


1. Space Complexity:
The space complexity of an algorithm is the amount of memory it
needs to run to completion.
2. Time Complexity:
The time complexity of an algorithm is the amount of computer time
it needs to run to completion.
Space Complexity:
Space Complexity Example:
Algorithm abc(a,b,c)
{
return a+b+b*c+(a+b-c)/(a+b)+4.0;
}

 The space needed by each of these algorithms is seen to be the sum of the
following components.
1. A fixed part that is independent of the characteristics (e.g., number, size) of the
inputs and outputs. This part typically includes the instruction space (i.e., space
for the code), space for simple variables and fixed-size component variables
(also called aggregates), space for constants, and so on.
2. A variable part that consists of the space needed by component variables
whose size is dependent on the particular problem instance being solved, the
space needed by referenced variables (to the extent that this depends on
instance characteristics), and the recursion stack space.
 The space requirement S(P) of any algorithm P may therefore be written
as
S(P) = c + Sp(instance characteristics)
where c is a constant.
Example 2:
Algorithm sum(a,n)
{
s=0.0;
for I=1 to n do
s= s+a[I];
return s;
}

 The problem instances for this algorithm are characterized by n, the
number of elements to be summed. The space needed by n is
one word, since it is of type integer.
 The space needed by a is the space needed by variables of
type array of floating point numbers.
 This is at least n words, since a must be large enough to hold the
n elements to be summed.
 So we obtain Ssum(n) >= n+3
(n for a[ ], one each for n, i and s)
Time Complexity:
The time T(P) taken by a program P is the sum of the compile time and the
run time (execution time).
The compile time does not depend on the instance characteristics. Also
we may assume that a compiled program will be run several times without
recompilation. This run time is denoted by tp(instance characteristics).
 The number of steps any program statement is assigned depends on the
kind of statement.
For example, comments  0 steps.
Assignment statements  1 step.
[Which does not involve any calls to other algorithms]
Iterative statements such as for, while & repeat-until  steps equal to the
number of times the control part of the statement is executed.
1. We introduce a variable, count, into the program, with initial value 0.
Statements to increment count by the appropriate amount are introduced
into the program.
This is done so that each time a statement in the original program is
executed, count is incremented by the step count of that statement.
Algorithm:
Algorithm sum(a,n)
{
s := 0.0;
count := count+1;
for i := 1 to n do
{
count := count+1;
s := s+a[i];
count := count+1;
}
count := count+1; // for the last, failing test of the for loop
count := count+1; // for the return
return s;
}

 If the count is zero to start with, then it will be 2n+3 on termination. So each
invocation of sum executes a total of 2n+3 steps.

2. The second method to determine the step count of an algorithm is to build
a table in which we list the total number of steps contributed by each statement.
First determine the number of steps per execution (s/e) of the statement and the
total number of times (i.e., frequency) each statement is executed.
By combining these two quantities, the total contribution of all statements, i.e. the
step count for the entire algorithm, is obtained.

Statement                  s/e   Frequency   Total
1. Algorithm Sum(a,n)       0       -          0
2. {                        0       -          0
3. s := 0.0;                1       1          1
4. for i := 1 to n do       1      n+1        n+1
5. s := s+a[i];             1       n          n
6. return s;                1       1          1
7. }                        0       -          0
                                   Total      2n+3
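The table's 2n+3 step count can be checked by instrumenting the algorithm exactly as in the count-variable method above (a Python sketch; the function name is ours):

```python
def sum_with_count(a):
    """Sum the elements of a while counting executed steps,
    mirroring the instrumented Algorithm sum(a, n)."""
    count = 0
    s = 0.0
    count += 1          # s := 0.0
    for x in a:
        count += 1      # one successful for-loop test
        s += x
        count += 1      # s := s + a[i]
    count += 1          # final, failing for-loop test
    count += 1          # return s
    return s, count
```

For n = 3 elements this yields a count of 2*3 + 3 = 9, agreeing with the table.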

Average-Case Analysis

 Most of the time, average-case analyses are performed under the more or
less realistic assumption that all instances of any given size are equally
likely.
 For sorting problems, it is simple to assume also that all the elements to
be sorted are distinct.
 Suppose we have n distinct elements to sort by insertion and all n!
permutations of these elements are equally likely.
 To determine the time taken on average by the algorithm, we could add
the times required to sort each of the possible permutations, and then
divide by n! the answer thus obtained.
 An alternative approach, easier in this case, is to analyze directly the time
required by the algorithm, reasoning probabilistically as we proceed.
 For any i, 2 ≤ i ≤ n, consider the sub array T[1…i].
 The partial rank of T[i] is defined as the position it would occupy if the sub
array were sorted.
 For example, the partial rank of T[4] in [3,6,2,5,1,7,4] is 3 because
T[1…4] once sorted is [2,3,5,6].
 Clearly the partial rank of T[i] does not depend on the order of the elements
in sub array T[1…i-1].

1.3 Analysis
Best case:
This analysis constrains the input, other than its size, so as to result in the
fastest possible running time.
Worst case:
This analysis constrains the input, other than its size, so as to result in the
slowest possible running time.
Average case:
This type of analysis results in the average running time over every type of input.

1.4 Asymptotic Analysis

Expressing the complexity in terms of its relationship to known functions is
called asymptotic analysis.
Complexity:
Complexity refers to the rate at which the storage or time grows as a function of
the problem size.
Asymptotic notation:
Big ‘oh’: the function f(n)=O(g(n)) iff there exist positive constants c and n0 such
that f(n) ≤ c*g(n) for all n, n ≥ n0.

Omega: the function f(n)=Ω(g(n)) iff there exist positive constants c and n0 such that
f(n) ≥ c*g(n) for all n, n ≥ n0.

Theta: the function f(n)=Θ(g(n)) iff there exist positive constants c1, c2 and n0 such
that c1*g(n) ≤ f(n) ≤ c2*g(n) for all n, n ≥ n0.
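A quick numerical check of the big-oh definition; the function f and the witnesses c and n0 below are our own illustrative choices, not from the text:

```python
# f(n) = 3n + 2 is O(n): the constants c = 4 and n0 = 2 satisfy
# f(n) <= c*g(n) with g(n) = n for every n >= n0.
def f(n):
    return 3 * n + 2

c, n0 = 4, 2
ok = all(f(n) <= c * n for n in range(n0, 1000))
print(ok)
```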

1.5 Analysis of Searching
Searching
Let us assume that we have a sequential file and we wish to retrieve an
element matching a key k. Then we have to search the entire file from the
beginning to the end to check whether an element matching k is present in the file
or not.
There are a number of complex searching algorithms to serve the purpose of
searching. The linear search and binary search methods are relatively straight-
forward methods of searching.
1.5.1 Sequential Search
In this method, we start to search from the beginning of the list and examine
each element till the end of the list. If the desired element is found, we stop the
search and return the index of that element. If the item is not found and the list is
exhausted, the search returns a zero value.
Pseudo Code Algorithm:
Algorithm sequential(a[],n,x)
{
i:=1;
while(i<=n)
{

if(x==a[i])
{
write("Element found at location ", i);
write("Search successful");
exit( );
}
else
{
i:=i+1;
}
}
write("Search unsuccessful");
}

Analysis:
Worst Case:
The worst case occurs when the item is not found or the search item is the last
(nth) element. In both situations we must examine all n elements of the array, so the
complexity of sequential search is O(n). The execution time is proportional
to n, i.e., the algorithm executes in linear time.
Best Case:
The best case occurs when the search item is at the first location itself. So the
time complexity is O(1).
Average Case:
On average, about n/2 elements of the array must be examined before the
search element is found. So the time complexity is O(n/2) = O(n).
Example:
a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]

20 35 18 8 14 41 3 39

Best Case:
If x=20 then the element is found at the first location.
T(n)=O(1)
Average Case:
If x=8 then we must compare 4 elements to find it.
T(n)=O(n/2)
Worst Case:
If x=39 (the last element) or x=100 (not present), then we must examine all 8
elements of the array. So the complexity is O(n).
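The sequential search pseudocode runs as written when rendered in Python (an illustrative translation that keeps the 1-based index and zero-on-failure convention of these notes):

```python
def sequential_search(a, x):
    """Return the 1-based position of x in list a, or 0 if absent."""
    for i, item in enumerate(a, start=1):
        if item == x:
            return i        # "Element found at location i"
    return 0                # "Search unsuccessful"
```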

1.5.2 Binary Search


The binary search method is also relatively simple. For this method it is
necessary to have the vector in an alphabetical or numerically increasing order. A
search for a particular item with key X resembles the search for a word in a dictionary.
The approximate mid entry is located and its key value is examined. If the mid value
is greater than X, then the list is chopped off at the (mid-1)th location. Now the list
gets reduced to half the original list. The middle entry of the left-reduced list is
examined in a similar manner. This procedure is repeated until the item is found or

the list has no more elements. On the other hand, if the mid value is lesser than X,
then the list is chopped off at (mid+1) th location. The middle entry of the right-
reduced list is examined and the procedure is continued until desired key is found or
the search interval is exhausted.
The algorithm for binary search is as follows,
Algorithm : binary search
Input : A, sorted vector of n elements
K, search element
Output : low, index of K in A
Method : low=1, high=n
while(low < high)
{
mid=ceil((low+high)/2) // round upwards so that the low=mid branch makes progress
if(k<A[mid])
high=mid-1
else
low=mid
if end
}
while end
if(k=A[low])
{
write("search successful")
write(k is at location low)
exit();
}
else
write("search unsuccessful");
if end;
algorithm ends.
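The two-way-comparison variant above (one comparison per iteration, a single equality test at the end) can be sketched in Python; rounding mid upwards keeps the low = mid branch making progress:

```python
def binary_search(a, k):
    """Return the 1-based position of k in the sorted list a, or 0."""
    low, high = 1, len(a)
    while low < high:
        mid = (low + high + 1) // 2   # round up so low = mid progresses
        if k < a[mid - 1]:            # Python lists are 0-based
            high = mid - 1
        else:
            low = mid
    if a and a[low - 1] == k:         # single equality test at the end
        return low
    return 0
```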

1.5.3 Fibonacci Search


The Fibonacci search technique is a method of searching a sorted array
using a divide and conquer approach that narrows down the possible locations with
the aid of Fibonacci numbers. Its complexity is O(log n), like binary search; it can be
attractive on machines where division is costly, since the probe positions are
computed using only additions and subtractions.
Algorithm
Let k be an index into F, the array of Fibonacci numbers, and let n = Fm be
the array size. If the array size is not a Fibonacci number, let Fm be the smallest
number in F that is greater than n.
The array of Fibonacci numbers is defined by Fk+2 = Fk+1 + Fk for k ≥ 0,
with F1 = 1 and F0 = 0.
To test whether an item is in the list of ordered numbers, follow these steps:
1. Set k = m.
2. If k = 0, stop. There is no match; the item is not in the array.
3. Compare the item against the element at position Fk-1.
4. If the item matches, stop.
5. If the item is less than the entry at position Fk-1, discard the elements from
positions Fk-1 + 1 to n. Set k = k - 1 and return to step 2.
6. If the item is greater than the entry at position Fk-1, discard the elements from
positions 1 to Fk-1. Renumber the remaining elements from 1 to Fk-2, set
k = k - 2, and return to step 2.
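A hedged Python sketch of the probing scheme above; this iterative formulation is one common way to implement the steps, returning the 1-based position or 0 when the item is absent:

```python
def fibonacci_search(a, x):
    """Fibonacci search in the sorted list a; 1-based position or 0."""
    n = len(a)
    fib2, fib1 = 0, 1            # F(m-2), F(m-1)
    fib = fib2 + fib1            # F(m): smallest Fibonacci number >= n
    while fib < n:
        fib2, fib1 = fib1, fib
        fib = fib2 + fib1
    offset = -1                  # everything up to offset is discarded
    while fib > 1:
        i = min(offset + fib2, n - 1)   # probe position (0-based)
        if a[i] < x:                    # discard the left part, drop one index
            fib = fib1
            fib1 = fib2
            fib2 = fib - fib1
            offset = i
        elif a[i] > x:                  # discard the right part, drop two indices
            fib = fib2
            fib1 = fib1 - fib2
            fib2 = fib - fib1
        else:
            return i + 1
    if fib1 == 1 and offset + 1 < n and a[offset + 1] == x:
        return offset + 2               # one last candidate element
    return 0
```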

1.6 Analysis of Sorting

1.6.1 Heap Sort


Definition for Heap:
A heap is a complete binary tree with the property that the value at each node
is at least as large as the value at its children.
The definition of a max heap implies that one of the largest elements is at the
root of the heap. If the elements are distinct then the root contains the largest
item.The tree is completely filled on all levels except possibly lowest.

25

13 17

5 8 3

An array [25,13,17,5,8,3] is represented as a heap above. The maximum
element in the array is always at the root node.
 The indices of a node's parent, left child and right child can be computed as
PARENT(i) = floor(i/2)
LEFT(i) = 2i
RIGHT(i) = 2i + 1

 All the tree levels are completely filled except possibly the lowest
level, which is filled from the left up to a point.
 The levels above the lowest level form a complete binary tree of height h-1
and contain 2^h - 1 nodes.
 A heap of height h has
 Maximum no. of elements = 2^(h+1) - 1 nodes (when the lowest level is
completely filled).
 Minimum no. of elements = 2^h nodes.
Height of a node:
It is defined as the no. of edges on a simple downward path from the node to a leaf.
Height of a tree:
It is defined as the no. of edges on a simple downward path from the root node to
a leaf.
To insert an element into the heap, one adds it "at the bottom" of the heap
and then compares it with its parent, grandparent, great grandparent and so on, until
it is less than or equal to one of these values.
Example:
a[1] a[2] a[3] a[4] a[5] a[6] a[7]
 40   80   35   90   45   50   70

Step 1: Take the first element of the array, a[1]=40. It becomes the root.

  40

Step 2: Take the second element, a[2]=80. Insert it as the left child of 40.
Since 80 > 40, adjust by swapping:

    40          80
   /      →    /
  80          40

Step 3: Take the 3rd element, a[3]=35. Add it as the right child of the root node.
No adjustment is needed:

    80
   /  \
  40    35

Step 4: Take the 4th element, a[4]=90. Add it as the left child of the node 40.
Adjusting, 90 is swapped first with 40 and then with 80:

      80            80            90
     /  \          /  \          /  \
    40    35  →   90    35  →   80    35
   /             /             /
  90            40            40

Step 5: Take the 5th element, a[5]=45. Add it as the right child of the node 80.
No adjustment is needed:

      90
     /  \
    80    35
   / \
  40  45

Step 6: Take the 6th element, a[6]=50. Add it as the left child of the node 35.
Since 50 > 35, adjust by swapping 50 with 35:

      90                90
     /  \              /  \
    80    35    →     80    50
   / \   /           / \   /
  40 45 50          40 45 35

Step 7: Take the 7th element, a[7]=70. Add it as the right child of the node 50.
Since 70 > 50, adjust by swapping 70 with 50:

      90                  90
     /  \                /  \
    80    50     →      80    70
   / \   / \           / \   / \
  40 45 35 70         40 45 35 50

The figures show how insert( ) places a new value into an existing heap. To
build the heap, first add the first element to the tree, then add the second element;
if the tree does not satisfy the heap property, adjust the nodes in the tree. Then
insert the next element and adjust the nodes. Repeat the above steps till the
insertion of the last element. Finally we obtain all the elements arranged as a max
heap, from which heap sort repeatedly extracts the maximum.
Procedures for Heap Sort
1. Build a Heap (Heapify)
2. Adjust
Procedure Heapify( )
 Creates a heap from n elements.
 Maintains the heap property. That is why the Adjust( ) procedure is used to
arrange the elements.
Procedure Adjust( )
Adjust takes as input the array a[ ] and integers i and n. It regards a[1..n] as a
complete binary tree. If the subtrees rooted at 2i and 2i+1 are max heaps, then
Adjust will rearrange elements of a[ ] such that the tree rooted at i is also a max
heap. The maximum element from the max heap a[1..n] can be deleted by deleting
the root of the corresponding complete binary tree. The last element of the array, i.e.
a[n], is copied to the root, and finally we call Adjust(a,1,n-1).
Algorithm Heapsort(a,n)
{
Heapify(a,n);
//Transform the array into a heap.
//Interchange the new max with the element at the end of the array.
for i:=n to 2 step -1 do
{
t:=a[i];
a[i]:=a[1];
a[1]:=t;
Adjust(a,1,i-1);
}
}
Algorithm Heapify(a,n)
{
//Readjust the elements in a[1:n] to form a heap.
for i:= n/2 to 1 step -1 do
Adjust(a,i,n);
}
Algorithm Adjust(a,i,n)
{
//The complete binary trees with roots 2i and 2i+1 are
//combined with node i to form a heap rooted at i.
//No node has an address greater than n or less than 1.
j:=2i;
item:=a[i];
while (j<=n) do
{
if ((j<n) and (a[j] < a[j+1])) then
j:=j+1;
//compare left and right child and let j be the larger child
if (item >= a[j]) then break;
//a position for item is found
a[j/2]:=a[j]; //move the larger child up a level
j:=2j;
}
a[j/2]:=item;
}
Analysis:
 Heapify( ) requires O(n) operations overall.
 Adjust( ) requires O(log n) operations for each invocation. Adjust is called n
times by Heapsort( ), so this phase takes O(n log n) time.
 Heapsort = O(n) + O(n log n) = O(n log n).

Algorithm HeapSort( ):
T(n)=O(n log n)
Algorithm Heapify( ):
T(n)=O(n)
Algorithm Adjust( ):
T(n)=O(log n)
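The Heapify/Adjust scheme above can be sketched in Python (0-based list storage, with the 1-based heap arithmetic of the pseudocode kept in the index expressions):

```python
def adjust(a, i, n):
    """Sift a[i] down so the subtree rooted at position i (1-based,
    within a[1..n]) becomes a max heap; mirrors Algorithm Adjust."""
    item = a[i - 1]
    j = 2 * i
    while j <= n:
        if j < n and a[j - 1] < a[j]:   # let j index the larger child
            j += 1
        if item >= a[j - 1]:            # a position for item is found
            break
        a[j // 2 - 1] = a[j - 1]        # move the larger child up a level
        j *= 2
    a[j // 2 - 1] = item

def heapsort(a):
    """Sort list a in place and return it, as Algorithm Heapsort does."""
    n = len(a)
    for i in range(n // 2, 0, -1):      # Heapify: build the max heap
        adjust(a, i, n)
    for i in range(n, 1, -1):           # move the max to the end, re-adjust
        a[0], a[i - 1] = a[i - 1], a[0]
        adjust(a, 1, i - 1)
    return a
```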

1.6.2 Radix Sort

Radix sort performs a series of successive sorts on increasingly significant
digit positions. On the first pass the numbers are sorted based on the least
significant digit and combined in an array. On the second pass the numbers are
sorted on the next more significant digit and combined in an array, and so on.
Each element in the n-element array A has d digits, where digit 1 is
the lowest order digit and digit d is the highest order digit.
This algorithm is based on properties of the keys. Algorithms that instead
compare whole keys are called comparison sorts. In this algorithm, instead of
comparing the elements, we sort the elements digit by digit.
Pseudo Code Algorithm:
Algorithm radixsort(a,n,d)
//Sort a[1:n], where every element has at most d decimal digits.
{
for i:=1 to d do //pass i distributes on digit i (digit 1 = units)
{
for j:=0 to 9 do
count[j]:=0; //empty the ten buckets
for j:=1 to n do
{
k:=(a[j] / pow(10,i-1)) mod 10; //the ith digit of a[j]
count[k]:=count[k]+1;
b[k][count[k]]:=a[j];
}
m:=1;
for j:=0 to 9 do //collect buckets 0..9 back into a[ ]
for k:=1 to count[j] do
{
a[m]:=b[j][k];
m:=m+1;
}
}
}
Algorithm pow(a,b)
{
temp:=1;
for i:=1 to b do
temp:=temp*a;
return temp;
}
Example:
Input   Pass 1   Pass 2   Pass 3
 329     720      720      329
 457     355      329      355
 657     436      436      436
 839     457      839      457
 436     657      355      657
 720     329      457      720
 355     839      657      839
(Pass 1 sorts on the least significant digit, pass 3 on the most significant.)
Analysis:
 The running time depends on the stable intermediate sort.
 Each digit is in the range 0 to k-1 (k = 10 for decimal digits).
 Each pass over n elements takes θ(n+k) time.
 There are d passes in all.
 The total time for radix sort is θ(dn+dk); when d is constant and k is O(n),
this is θ(n).
 The radix sort is linear in time.
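The bucket-per-digit passes are easy to express in Python; this sketch uses growable lists as buckets rather than the fixed b[ ][ ] array of the pseudocode:

```python
def radix_sort(a, d):
    """LSD radix sort for non-negative integers of at most d decimal
    digits; returns a new sorted list."""
    for i in range(d):                        # pass i keys on digit i (0 = units)
        buckets = [[] for _ in range(10)]
        for x in a:
            buckets[(x // 10 ** i) % 10].append(x)
        a = [x for b in buckets for x in b]   # collect buckets 0..9 in order
    return a
```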

1.6.3 Insertion Sort
If the first few objects are already sorted, an unsorted object can be inserted
into the sorted set in its proper place. This is called insertion sort.
The algorithm considers the elements one at a time, inserting each in its suitable
place among those already considered (keeping them sorted). Insertion sort is an
example of an incremental algorithm; it builds the sorted sequence one number at a
time.
Pseudo Code Algorithm:
Algorithm Insertion_Sort(A,n)
1. for j = 2 to length[A] do
2. key = A[j]
3. //Insert A[j] into the sorted sequence A[1..j-1]
4. i = j-1
5. while i > 0 and A[i] > key do
6. A[i+1] = A[i]
7. i = i-1
8. A[i+1] = key
Analysis:
Best-Case:
The while-loop in line 5 is executed only once for each j (its test fails
immediately). This happens if the given array A is already sorted.
T(n) = an + b = O(n)
It is a linear function of n.
Worst-Case:
The worst case occurs when line 5 is executed j times for each j. This can
happen if array A starts out in reverse order.
T(n) = an^2 + bn + c = O(n^2)
It is a quadratic function of n.
Extra Memory
This algorithm does not require extra memory.
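In Python the numbered pseudocode becomes (0-based indexing; the while-loop referred to as line 5 is the inner loop here):

```python
def insertion_sort(a):
    """Sort list a in place and return it, following Insertion_Sort."""
    for j in range(1, len(a)):          # for j = 2 to length[A]
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:    # shift larger elements right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key                  # insert A[j] into its place
    return a
```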

1.6.4. Selection Sort


Selection sort is the simplest of sorting techniques . The idea in selection sort
is to find the smallest element in the array and exchange it with the element in the
first position, then find the second smallest element and exchange it with the element
in the second position, and continue in this way until the entire array is sorted.

Pseudo Code Algorithm:
Algorithm Selection_Sort(A,n)
{
for i := 1 to n-1 do
{
min := i;
m := a[min];
for j := i+1 to n do
{
if (a[j] < m) then
{
min := j;
m := a[j];
}
}
t := a[i];
a[i] := a[min];
a[min] := t;
}
}
Example
Consider the unsorted array, n=8
a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]

20 35 18 8 14 41 3 39
After sorting, the resulting array should be
a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]

3 8 14 18 20 35 39 41

When i=1
 Find the smallest element in the unsorted array.
 Place the smallest element in position of a[1]
i.e., the smallest element in the unsorted array is 3 so exchange the values of
a[1] and a[7].

a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]


3 35 18 8 14 41 20 39

When i=2
 Find the smallest from a[2] to a[8] , i.e., 8 in a[4]
 Exchange the values of a[2] and a[4].
a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]
3 8 18 35 14 41 20 39
When i=3
 Find the smallest from a[3] to a[8] , i.e., 14 in a[5]
 Exchange the values of a[3] and a[5].
a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]

3 8 14 35 18 41 20 39
When i=4
 Find the smallest from a[4] to a[8] , i.e., 18 in a[5]
 Exchange the values of a[4] and a[5].
a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]
3 8 14 18 35 41 20 39

When i=5
 Find the smallest from a[5] to a[8] , i.e., 20 in a[7]
 Exchange the values of a[5] and a[7].
a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]
3 8 14 18 20 41 35 39
When i=6
 Find the smallest from a[6] to a[8] , i.e., 35 in a[7]
 Exchange the values of a[6] and a[7].
a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]
3 8 14 18 20 35 41 39
When i=7
 Find the smallest from a[7] to a[8] , i.e., 39 in a[8]
 Exchange the values of a[7] and a[8].
a[1] a[2] a[3] a[4] a[5] a[6] a[7] a[8]
3 8 14 18 20 35 39 41
The number of moves (exchanges) with this technique is always of the order O(n).
Selection sort works very well for small files. It has a quite important application
because each item is actually moved at most once. It is a method of choice for
sorting files with very large objects (records) and small keys.
Analysis:
The number of comparisons is 1 + 2 + ... + (n-1) = n(n-1)/2, so the time
complexity is O(n^2). The number of moves, however, is only O(n).
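The pass-by-pass trace above corresponds to this Python sketch (one exchange per pass, as the analysis notes):

```python
def selection_sort(a):
    """Sort list a in place and return it, following Selection_Sort."""
    n = len(a)
    for i in range(n - 1):
        m = i                       # index of the smallest element so far
        for j in range(i + 1, n):
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]     # one exchange per pass
    return a
```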

1.6.5 Bubble Sort

Bubble sort is an elementary sorting algorithm. It is one of the oldest, slowest
and simplest methods. It sorts by comparing each adjacent pair of items in a list,
swapping the items if necessary, and repeating the passes through the list until no
swaps are done.
Steps:
1. Compare the first 2 elements.
2. If the first element is greater than the second, swap them.
3. Otherwise, leave them as they are.
4. Then the 2nd and 3rd elements are compared and swapped if needed.
5. Thus adjacent pairs of elements are compared till all the elements have been
passed through once.
6. Thus the largest element is positioned at the last position.
7. Steps 2, 3, 4 & 5 are repeated until no swaps are done on the last pass.
8. At the end of the 2nd pass the 2nd largest element is positioned in its
corresponding location.
9. For n data items, the process continues for (n-1) passes or until no exchanges
are made in a single pass.

Pseudo Code Algorithm:

Algorithm Bubblesort(A,n)
{
//This variant bubbles the smallest remaining element towards the front on each pass.
for i := 1 to n do
for j := n to i+1 step -1 do
if (a[j] < a[j-1]) then
{
t := a[j];
a[j] := a[j-1];
a[j-1] := t;
}
}
Analysis:
 The number of comparisons made is
1 + 2 + 3 + ... + (n-1) = n(n-1)/2 = O(n^2)
 In this algorithm the number of comparisons is the same irrespective of the
data set, i.e., whether the input is a best or worst case.
 Time Complexity T(n) = O(n^2)
 It does not require extra memory.
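The nine steps above, including the stop-when-no-swaps rule, can be sketched in Python (this version sinks the largest element to the end on each pass; the pseudocode variant equivalently bubbles the smallest to the front):

```python
def bubble_sort(a):
    """Sort list a in place and return it, stopping early when a
    full pass makes no swaps (step 7 above)."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):       # the largest item sinks to the end
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                  # no exchanges: already sorted
            break
    return a
```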

1.7 Recursion
Recursion may have the following definitions:
- The nested repetition of an identical algorithm is recursion.
- It is a technique of defining an object/process by itself.
- Recursion is a process by which a function calls itself repeatedly until some
specified condition has been satisfied.
When to use recursion
Recursion can be used for repetitive computations in which each action is
stated in terms of a previous result. There are two conditions that must be satisfied by
any recursive procedure:

1. Each time a function calls itself it should get nearer to the solution.
2. There must be a decision criterion for stopping the process.
In making the decision about whether to write an algorithm in recursive or non-
recursive form, it is always advisable to consider a tree structure for the problem. If
the structure is simple, then use the non-recursive form. If the tree appears quite
bushy, with little duplication of tasks, then recursion is suitable.
The recursion algorithm for finding the factorial of a number is given below,
Algorithm : factorial-recursion
Input : n, the number whose factorial is to be found.
Output : f, the factorial of n
Method : if(n=0)
f=1
else
f=factorial(n-1) * n
if end
algorithm ends.
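The factorial algorithm satisfies both conditions for recursion noted above, as this direct Python rendering shows:

```python
def factorial(n):
    """Recursive factorial of a non-negative integer n."""
    if n == 0:                      # stopping criterion
        return 1
    return factorial(n - 1) * n     # each call gets nearer to n = 0
```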
The general procedure for any recursive algorithm is as follows,
1. Save the parameters, local variables and return address.
2. If the termination criterion is reached, perform the final computation and go to
step 3; otherwise perform the intermediate computations and go to step 1
(recurse).
3. Restore the most recently saved parameters, local variables and return
address and go to the latest return address.

1.8 Analysis Of Non-Recursive And Recursive Algorithms

Demerits of recursive algorithms:

1. Many programming languages do not support recursion; hence recursive
mathematical functions must be implemented using iterative methods.
2. Even though mathematical functions can be easily implemented using
recursion, it is always at the cost of execution time and memory space. For
example, the recursion tree for generating 6 numbers of a Fibonacci series
is given in fig 1.1. A Fibonacci series is of the form 0, 1, 1, 2, 3, 5, 8, 13, ... etc.,
where each number after the second is the sum of the preceding two
numbers. It can be noticed from fig 1.1 that f(n-2) is computed twice, f(n-3)
is computed thrice, and f(n-4) is computed 5 times.
3. A recursive procedure can be called from within or outside itself, and to ensure
its proper functioning it has to save, in some order, the return addresses so
that a return to the proper location results when the return to a calling
statement is made.
4. Recursive programs need considerably more storage and take more time.
Demerits of iterative methods:
 Mathematical functions such as factorial and Fibonacci series generation are
more easily implemented using recursion than iteration.
 In iterative techniques, looping of statements is very much necessary.
Recursion is a top-down approach to problem solving. It divides the problem into
pieces or selects out one key step, postponing the rest.
Iteration is more of a bottom-up approach. It begins with what is known and from this
constructs the solution step by step. The iterative function uses time that is
O(n), whereas the recursive function has an exponential time complexity.
It is always true that recursion can be replaced by iteration and stacks. It is also true
that a stack can be replaced by a recursive program with no stack.

Figure 1.1: Recursion tree for the Fibonacci computation
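The contrast described above between the exponential recursive Fibonacci and the O(n) iterative version can be demonstrated directly (a sketch; `fib_rec` recomputes the same subproblems many times):

```python
def fib_rec(n):
    """Naive recursion: f(n-2), f(n-3), ... are recomputed repeatedly,
    giving exponential running time."""
    if n < 2:
        return n
    return fib_rec(n - 1) + fib_rec(n - 2)

def fib_iter(n):
    """Bottom-up iteration: O(n) time, each value computed exactly once."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib_iter(n) for n in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
print(fib_rec(10) == fib_iter(10))      # True
```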

1.9 Solving Recurrence Equations

Solving recurrences (a recurrence is something that happens again or repeatedly):


 The indispensable last step when analyzing an algorithm is often to solve a
recurrence equation.
 With a little experience and intuition, most recurrences can be solved by
intelligent guesswork.
 However, there exists a powerful technique that can be used to solve certain
classes of recurrences almost automatically.
 This is the main topic of this section: the technique of the characteristic equation.
1. Intelligent guesswork:
This approach generally proceeds in 4 stages.
1. Calculate the first few values of the recurrence.
2. Look for regularity.
3. Guess a suitable general form.
4. Finally, prove by mathematical induction (perhaps constructive induction)
that this form is correct.
Consider the following recurrence,

        T(n) = 0                    if n = 0
        T(n) = 3T(n ÷ 2) + n        otherwise

 The first step is to replace n ÷ 2 by n/2.
 It is tempting to restrict n to being even, since in that case n ÷ 2 = n/2, but
recursively dividing an even number by 2 may produce an odd number larger than 1.
 Therefore, it is a better idea to restrict n to being an exact power of 2.
 First, we tabulate the value of the recurrence on the first few powers of 2.

        n      1   2   4    8    16    32
        T(n)   1   5   19   65   211   665
* For instance, T(16) = 3 * T(8) + 16
                      = 3 * 65 + 16
                      = 211.
* Instead of writing T(2) = 5, it is more useful to write T(2) = 3 * 1 + 2.

Then,
        T(4) = 3 * T(2) + 4
             = 3 * (3 * 1 + 2) + 4
             = (3^2 * 1) + (3 * 2) + 4
* We continue in this way, writing n as an explicit power of 2.

        n      T(n)
        1      1
        2      3 * 1 + 2
        2^2    3^2 * 1 + 3 * 2 + 2^2
        2^3    3^3 * 1 + 3^2 * 2 + 3 * 2^2 + 2^3
        2^4    3^4 * 1 + 3^3 * 2 + 3^2 * 2^2 + 3 * 2^3 + 2^4
        2^5    3^5 * 1 + 3^4 * 2 + 3^3 * 2^2 + 3^2 * 2^3 + 3 * 2^4 + 2^5

 The pattern is now obvious.

        T(2^k) = 3^k 2^0 + 3^(k-1) 2^1 + 3^(k-2) 2^2 + ... + 3^1 2^(k-1) + 3^0 2^k

               = ∑ (i=0 to k) 3^(k-i) 2^i

               = 3^k ∑ (i=0 to k) (2/3)^i

               = 3^k * [(1 – (2/3)^(k+1)) / (1 – (2/3))]

               = 3^(k+1) – 2^(k+1)
Proposition (Geometric Series):
Let Sn be the sum of the first n terms of the geometric series a, ar, ar^2, ....
Then Sn = a(1 – r^n)/(1 – r), except in the special case r = 1, when Sn = an.

Applying this,

        T(2^k) = 3^k * [(1 – (2/3)^(k+1)) / (1 – (2/3))]

               = 3^k * [((3^(k+1) – 2^(k+1)) / 3^(k+1)) / ((3 – 2) / 3)]

               = 3^k * ((3^(k+1) – 2^(k+1)) / 3^(k+1)) * (3 / 1)

               = (3^(k+1) * (3^(k+1) – 2^(k+1))) / 3^(k+1)

               = 3^(k+1) – 2^(k+1)

* It is easy to check this formula against our earlier tabulation.
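The tabulation and the closed form 3^(k+1) – 2^(k+1) can indeed be checked mechanically (a small sketch):

```python
def T(n):
    """T(0) = 0 and T(n) = 3*T(n div 2) + n, evaluated for n a power of 2."""
    if n == 0:
        return 0
    return 3 * T(n // 2) + n

# Closed form derived above: T(2^k) = 3^(k+1) - 2^(k+1)
for k in range(10):
    assert T(2 ** k) == 3 ** (k + 1) - 2 ** (k + 1)
print([T(2 ** k) for k in range(6)])  # [1, 5, 19, 65, 211, 665]
```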

Eg: 2
        tn = 0                            if n = 0
        tn = 5                            if n = 1
        tn = 3t(n-1) + 4t(n-2)            otherwise

Written in homogeneous form:  tn – 3t(n-1) – 4t(n-2) = 0

Characteristic polynomial:  x^2 – 3x – 4 = 0
                            (x – 4)(x + 1) = 0

Roots:  r1 = 4, r2 = -1

General solution:  fn = C1 r1^n + C2 r2^n        (A)

        n = 0   C1 + C2 = 0              (1)
        n = 1   C1 r1 + C2 r2 = 5        (2)

Eqn (1)   C1 = -C2

Substituting the C1 value in Eqn (2):

        -C2 r1 + C2 r2 = 5
        C2 (r2 – r1) = 5
        C2 = 5 / (r2 – r1)
           = 5 / (-1 – 4)
           = 5 / (-5) = -1

        C2 = -1, C1 = 1

Substituting the C1, C2, r1 & r2 values in equation (A):

        fn = 1 * 4^n + (-1) * (-1)^n
        fn = 4^n – (-1)^n
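The closed form can be checked against the recurrence itself (a sketch):

```python
def t(n):
    """t_0 = 0, t_1 = 5, t_n = 3*t_(n-1) + 4*t_(n-2)."""
    if n == 0:
        return 0
    if n == 1:
        return 5
    return 3 * t(n - 1) + 4 * t(n - 2)

# Closed form obtained above: f_n = 4^n - (-1)^n
for n in range(12):
    assert t(n) == 4 ** n - (-1) ** n
print([t(n) for n in range(5)])  # [0, 5, 15, 65, 255]
```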

2. Homogeneous Recurrences:
* We begin our study of the technique of the characteristic equation with the
resolution of homogeneous linear recurrences with constant coefficients, i.e. the
recurrences of the form,
        a0 tn + a1 t(n-1) + ... + ak t(n-k) = 0
where the ti are the values we are looking for.
* The values of ti on k values of i (usually 0 ≤ i ≤ k-1 or 1 ≤ i ≤ k) are needed
to determine the sequence.
* The initial conditions will be considered later.
* The equation typically has infinitely many solutions.
* The recurrence is,
   linear because it does not contain terms of the form t(n-i) * t(n-j),
t(n-i)^2, and so on;
   homogeneous because the linear combination of the t(n-i) is equal to zero;
   with constant coefficients because the ai are constants.
* Consider for instance our now familiar recurrence for the Fibonacci sequence,
        fn = f(n-1) + f(n-2)
* This recurrence easily fits the mould of the equation after obvious rewriting:
        fn – f(n-1) – f(n-2) = 0
* Therefore, the Fibonacci sequence corresponds to a homogeneous linear
recurrence with constant coefficients, with k = 2, a0 = 1 and a1 = a2 = -1.
* In other words, suppose fn and gn both satisfy the equation,
        ∑ (i=0 to k) ai f(n-i) = 0, and similarly for gn.
If we set tn = c fn + d gn for arbitrary constants c and d, then tn is also a
solution to the equation.
* This is true because,
        a0 tn + a1 t(n-1) + ... + ak t(n-k)
      = a0(c fn + d gn) + a1(c f(n-1) + d g(n-1)) + ... + ak(c f(n-k) + d g(n-k))
      = c(a0 fn + a1 f(n-1) + ... + ak f(n-k)) + d(a0 gn + a1 g(n-1) + ... + ak g(n-k))
      = c * 0 + d * 0
      = 0.
1) (Fibonacci) Consider the recurrence,

        fn = n                          if n = 0 or n = 1
        fn = f(n-1) + f(n-2)            otherwise

We rewrite the recurrence as,
        fn – f(n-1) – f(n-2) = 0
The characteristic polynomial is,
        x^2 – x – 1 = 0
The roots are,
        x = (-(-1) ± √((-1)^2 + 4)) / 2
          = (1 ± √(1 + 4)) / 2
          = (1 ± √5) / 2

        r1 = (1 + √5)/2   and   r2 = (1 – √5)/2

The general solution is,
        fn = C1 r1^n + C2 r2^n

when n = 0,  f0 = C1 + C2 = 0
when n = 1,  f1 = C1 r1 + C2 r2 = 1

        C1 + C2 = 0             (1)
        C1 r1 + C2 r2 = 1       (2)

From equation (1), C1 = -C2.

Substituting C1 in equation (2):
        -C2 r1 + C2 r2 = 1
        C2 (r2 – r1) = 1

Substituting the r1 and r2 values:
        C2 ((1 – √5)/2 – (1 + √5)/2) = 1
        C2 ((1 – √5 – 1 – √5) / 2) = 1
        C2 (-2√5 / 2) = 1
        -√5 C2 = 1

        C2 = -1/√5,  C1 = 1/√5

Thus,
        fn = (1/√5) ((1 + √5)/2)^n + (-1/√5) ((1 – √5)/2)^n

           = (1/√5) [ ((1 + √5)/2)^n – ((1 – √5)/2)^n ]
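This closed form (Binet's formula) can be verified numerically (a sketch; rounding absorbs the floating-point error):

```python
from math import sqrt

def fib(n):
    """Fibonacci by iteration, for comparison."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def binet(n):
    """f_n = (r1^n - r2^n) / sqrt(5), with r1, r2 the roots found above."""
    r1 = (1 + sqrt(5)) / 2
    r2 = (1 - sqrt(5)) / 2
    return round((r1 ** n - r2 ** n) / sqrt(5))

assert all(binet(n) == fib(n) for n in range(20))
print([binet(n) for n in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```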

3. Inhomogeneous Recurrences:
* The solution of a linear recurrence with constant coefficients becomes more
difficult when the recurrence is not homogeneous, that is, when the linear
combination is not equal to zero.
* Consider the following recurrence,
        a0 tn + a1 t(n-1) + ... + ak t(n-k) = b^n p(n)
* The left-hand side is the same as before (homogeneous), but on the right-hand
side we have b^n p(n), where,
   b is a constant
   p(n) is a polynomial in n of degree d.

Example (1)
Consider the recurrence,
        tn – 2t(n-1) = 3^n        (A)
In this case, b = 3, p(n) = 1, degree = 0.
The characteristic polynomial is,
        (x – 2)(x – 3) = 0
The roots are, r1 = 2, r2 = 3.
The general solution,
        tn = C1 r1^n + C2 r2^n
        tn = C1 2^n + C2 3^n        (1)

when n = 0,  C1 + C2 = t0           (2)
when n = 1,  2C1 + 3C2 = t1         (3)

Substituting n = 1 in eqn (A):
        t1 – 2t0 = 3
        t1 = 3 + 2t0
Substituting t1 in eqn (3), and computing (2) * 2:
        2C1 + 2C2 = 2t0
        2C1 + 3C2 = 3 + 2t0
        -------------------------
             -C2 = -3
              C2 = 3
Substituting C2 = 3 in eqn (2):
        C1 + 3 = t0
        C1 = t0 – 3
Therefore  tn = (t0 – 3) 2^n + 3 * 3^n
              = Max[O((t0 – 3) 2^n), O(3 * 3^n)]
              = Max[O(2^n), O(3^n)]        (t0 – 3 and 3 are constants)
              = O(3^n)
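A quick numerical check of the solution t_n = (t0 – 3)·2^n + 3·3^n (a sketch):

```python
def iterate(t0, n):
    """Unroll t_n = 2*t_(n-1) + 3^n starting from a given t_0."""
    t = t0
    for k in range(1, n + 1):
        t = 2 * t + 3 ** k
    return t

# Closed form derived above: t_n = (t0 - 3)*2^n + 3*3^n
for t0 in (0, 1, 7):
    for n in range(10):
        assert iterate(t0, n) == (t0 - 3) * 2 ** n + 3 * 3 ** n
print(iterate(0, 4))  # 195
```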

Example (2)

        tn – 2t(n-1) = (n + 5) 3^n,  n ≥ 1        (A)

This is inhomogeneous.
In this case, b = 3, p(n) = n + 5, degree = 1.
So, the characteristic polynomial is,
        (x – 2)(x – 3)^2 = 0
The roots are,
        r1 = 2, r2 = 3, r3 = 3

The general equation,
        tn = C1 r1^n + C2 r2^n + C3 n r3^n        (1)

when n = 0,  t0 = C1 + C2                         (2)
when n = 1,  t1 = 2C1 + 3C2 + 3C3                 (3)

Substituting n = 1 in eqn (A):
        t1 – 2t0 = 6 * 3 = 18
        t1 = 18 + 2t0
Substituting this t1 value in eqn (3):
        2C1 + 3C2 + 3C3 = 18 + 2t0                (4)

Substituting n = 2 in eqn (1):
        4C1 + 9C2 + 18C3 = t2                     (5)
Substituting n = 2 in eqn (A):
        t2 – 2t1 = 7 * 9 = 63
        t2 = 63 + 2t1
           = 63 + 2(18 + 2t0)
           = 99 + 4t0
Substituting this t2 value in eqn (5):
        4C1 + 9C2 + 18C3 = 99 + 4t0               (5)

Solve eqns (2), (4) and (5):

        n = 0:  C1 + C2 = t0                      (2)
        n = 1:  2C1 + 3C2 + 3C3 = 18 + 2t0        (4)
        n = 2:  4C1 + 9C2 + 18C3 = 99 + 4t0       (5)

(4) * 6:  12C1 + 18C2 + 18C3 = 108 + 12t0
(5):       4C1 +  9C2 + 18C3 =  99 +  4t0
          ----------------------------------
           8C1 +  9C2        =   9 +  8t0         (6)

(2) * 8:   8C1 + 8C2 = 8t0
(6):       8C1 + 9C2 = 9 + 8t0
          ------------------------
                 -C2 = -9
                  C2 = 9

Substituting C2 = 9 in eqn (2):
        C1 + 9 = t0
        C1 = t0 – 9

Substituting C1 and C2 in eqn (4):
        2(t0 – 9) + 3(9) + 3C3 = 18 + 2t0
        2t0 – 18 + 27 + 3C3 = 18 + 2t0
        2t0 + 9 + 3C3 = 18 + 2t0
        3C3 = 9
        C3 = 3

Substituting the C1, C2, C3, r1, r2, r3 values in eqn (1):
        tn = C1 2^n + C2 3^n + C3 n 3^n
           = (t0 – 9) 2^n + 9 * 3^n + 3n * 3^n
           = Max[O((t0 – 9) 2^n), O(9 * 3^n), O(3n * 3^n)]
           = Max[O(2^n), O(3^n), O(n 3^n)]
        tn = O(n 3^n)
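Again the solution can be checked against the recurrence directly (a sketch):

```python
def iterate(t0, n):
    """Unroll t_n = 2*t_(n-1) + (n + 5)*3^n starting from a given t_0."""
    t = t0
    for k in range(1, n + 1):
        t = 2 * t + (k + 5) * 3 ** k
    return t

# Closed form derived above: t_n = (t0 - 9)*2^n + 9*3^n + 3*n*3^n
for t0 in (0, 5):
    for n in range(10):
        assert iterate(t0, n) == (t0 - 9) * 2 ** n + (9 + 3 * n) * 3 ** n
print(iterate(0, 2))  # 99
```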

4. Change of Variable:
* It is sometimes possible to solve more complicated recurrences by making a
change of variable.
* In the following examples, we write T(n) for the term of a general recurrence,
and ti for the term of a new recurrence obtained from the first by a change of
variable.

Example: (1)
Consider the recurrence,

        T(n) = 1                  if n = 1
        T(n) = 3T(n/2) + n        if n is a power of 2, n > 1

 Reconsider the recurrence we solved by intelligent guesswork in the previous
section, but only for the case when n is a power of 2.
* We replace n by 2^i.
* This is achieved by introducing a new recurrence ti, defined by ti = T(2^i).
* This transformation is useful because n/2 becomes (2^i)/2 = 2^(i-1).
* In other words, our original recurrence, in which T(n) is defined as a function of
T(n/2), gives way to one in which ti is defined as a function of t(i-1), precisely
the type of recurrence we have learned to solve.
        ti = T(2^i) = 3T(2^(i-1)) + 2^i
        ti = 3t(i-1) + 2^i
        ti – 3t(i-1) = 2^i        (A)

In this case,
        b = 2, p(i) = 1, degree = 0
So, the characteristic equation,
        (x – 3)(x – 2) = 0
The roots are, r1 = 3, r2 = 2.

The general equation,
        ti = C1 r1^i + C2 r2^i
        ti = C1 3^i + C2 2^i

We use the fact that T(2^i) = ti, and thus T(n) = t(log2 n) when n = 2^i, to obtain,
        T(n) = C1 3^(log2 n) + C2 2^(log2 n)
        T(n) = C1 n^(log2 3) + C2 n        [since i = log2 n]
When n is a power of 2, this is sufficient to conclude that,

        T(n) = O(n^(log2 3))        when n is a power of 2

Example: (2)

Consider the recurrence,
        T(n) = 4T(n/2) + n^2        (A)
where n is a power of 2, n ≥ 2.

        ti = T(2^i) = 4T(2^(i-1)) + (2^i)^2
        ti = 4t(i-1) + 4^i
        ti – 4t(i-1) = 4^i

In this equation,
        b = 4, p(i) = 1, degree = 0
The characteristic polynomial,
        (x – 4)(x – 4) = 0
The roots are, r1 = 4, r2 = 4 (a double root).
So, the general equation,
        ti = C1 4^i + C2 i 4^i                             [since i = log2 n]
           = C1 4^(log2 n) + C2 4^(log2 n) log2 n          [since 2^i = n]
           = C1 n^2 + C2 n^2 log2 n                        [since n^(log2 4) = n^2]

        T(n) = O(n^2 log n)        when n is a power of 2.

Example: (3)

        T(n) = 2T(n/2) + n log2 n

where n is a power of 2, n ≥ 2.

        ti = T(2^i) = 2T(2^(i-1)) + i 2^i        [since 2^i = n; i = log2 n]
        ti – 2t(i-1) = i 2^i

In this case,
        b = 2, p(i) = i, degree = 1
        (x – 2)(x – 2)^2 = 0
The roots are, r1 = 2, r2 = 2, r3 = 2 (a triple root).

The general solution is,
        ti = C1 2^i + C2 i 2^i + C3 i^2 2^i
        T(n) = C1 n + C2 n log2 n + C3 n log2^2 n

        T(n) = O(n log^2 n)

5. Range Transformation:
* When we make a change of variable, we transform the domain of the recurrence.
* Instead, it may be useful to transform the range to obtain a recurrence in a form
that we know how to solve.
* Both transformations can sometimes be used together.

Example: 1
Consider the following recurrence, which defines T(n) where n is a power of 2:

        T(n) = 1/3                 if n = 1
        T(n) = n T^2(n/2)          otherwise

The first step is a change of variable: let ti denote T(2^i),
        ti = T(2^i) = 2^i T^2(2^(i-1))
           = 2^i t(i-1)^2
* This recurrence is not in a form we know how to solve; furthermore, the
coefficient 2^i is not a constant.
* To transform the range, we create another recurrence by using ui to denote lg ti:
        ui = lg ti = i + 2 lg t(i-1)
           = i + 2u(i-1)

        ui – 2u(i-1) = i
        (x – 2)(x – 1)^2 = 0

The roots are, x = 2, 1, 1.

General solution,
        ui = C1 2^i + C2 1^i + C3 i 1^i

Substituting this solution into the recurrence for ui yields,
        i = ui – 2u(i-1)
          = C1 2^i + C2 + C3 i – 2(C1 2^(i-1) + C2 + C3(i – 1))
          = (2C3 – C2) – C3 i
Equating coefficients: C3 = -1 and C2 = 2C3 = -2.
        ui = C1 2^i – i – 2

This gives us the general solution for ti and T(n):
        ti = 2^ui = 2^(C1 2^i – i – 2)
        T(n) = t(lg n) = 2^(C1 n – lg n – 2)
             = 2^(C1 n) / (4n)

We use the initial condition T(1) = 1/3 to determine C1:
        T(1) = 2^C1 / 4 = 1/3
implies that C1 = lg(4/3) = 2 – lg 3.

The final solution is,
        T(n) = 2^(2n) / (4n 3^n)

Note — Newton-Raphson method (for root finding): x2 = x1 – f(x1)/f'(x1)

Chapter 2
2.1 General Method
 Given a function to compute on ‗n‘ inputs the divide-and-conquer strategy
suggests splitting the inputs into ‗k‘ distinct subsets, 1<k<=n, yielding ‗k‘ sub
problems. These sub problems must be solved, and then a method must be
found to combine sub solutions into a solution of the whole.
 If the sub problems are still relatively large, then the divide-and-conquer
strategy can possibly be reapplied.
 Often the sub problems resulting from a divide-and-conquer design are of the
same type as the original problem.
 For those cases the re application of the divide-and-conquer principle is
naturally expressed by a recursive algorithm.
 D And C(Algorithm) is initially invoked as D and C(P), where ‗p‘ is the problem
to be solved.
 Small(P) is a Boolean-valued function that determines whether the i/p size is
small enough that the answer can be computed without splitting.
 If this so, the function ‗S‘ is invoked.
 Otherwise the problem P is divided into smaller sub problems.
 These sub problems P1,P2,………Pk are solved by recursive application of
DAndC.
 Combine is a function that determines the solution to p using the solutions to
the ‗k‘ sub problems.
 If the size of ‗p‘ is n and the sizes of the ‗k‘ sub problems are n1,n2,…….nk,
respectively, then the computing time of D And C is described by the
recurrence relation.
        T(n) = g(n)                                      n small
        T(n) = T(n1) + T(n2) + ... + T(nk) + f(n)        otherwise
where T(n) = time for D And C on any input of size n,
      g(n) = time to compute the answer directly for small inputs,
      f(n) = time for dividing P and combining the solutions to sub problems.
Pseudo Code Algorithm for D&C Method:
1. Algorithm D And C(P)
2. {
3. if small(P) then return S(P);
4. else
5. {
6. divide P into smaller instances
P1,P2,…….,Pk, k>=1;
7. Apply D And C to each of these sub problems;
8. return combine (D And C(P1), D And C(P2),…….,D And C(Pk));
9. }
10. }

 The time complexity of many divide-and-conquer algorithms is given by
recurrences of the form
        T(n) = T(1)               n = 1
        T(n) = aT(n/b) + f(n)     n > 1
 where a and b are known constants.
 We assume that T(1) is known and n is a power of b (i.e., n = b^k).

 One of the methods for solving any such recurrence relation is called the
substitution method.
 This method repeatedly makes substitutions for each occurrence of the
function T on the right-hand side until all such occurrences disappear.
Example:
1) Consider the case in which a=2 and b=2. Let T(1)=2 & f(n)=n.
We have,
T(n) = 2T(n/2)+n
= 2[2T(n/2/2)+n/2]+n
= [4T(n/4)+n]+n
= 4T(n/4)+2n
= 4[2T(n/4/2)+n/4]+2n
= 4[2T(n/8)+n/4]+2n
= 8T(n/8)+n+2n
= 8T(n/8)+3n

 In general, we see that T(n) = 2^i T(n/2^i) + i n, for any i, 1 <= i <= log n.

 In particular, T(n) = 2^(log n) T(n/2^(log n)) + n log n,
corresponding to the choice i = log n.
Thus, T(n) = 2^(log n) T(n/2^(log n)) + n log n
           = n * T(n/n) + n log n
           = n * T(1) + n log n        [since log 1 = 0, 2^0 = 1]
           = 2n + n log n

2.1.1 Binary Search


1. Algorithm Binsearch(a,n,x)
2. // Given an array a[1:n] of elements in non-decreasing
3. //order, n>=0,determine whether ‗x‘ is present and
4. // if so, return ‗j‘ such that x=a[j]; else return 0.
5. {
6. low:=1; high:=n;
7. while (low<=high) do
8. {
9. mid:=(low+high)/2;
10. if (x<a[mid]) then
11. high:=mid-1;
12. else if (x>a[mid]) then
        low:=mid+1;
13. else
14. return mid;
15. }
16. return 0;
17. }
 A non-recursive (iterative) version of Binsearch is given above.
 This Binsearch has 3 inputs: a, n, and x.
 The while loop continues processing as long as there are more elements left
to check.
 At the conclusion of the procedure 0 is returned if x is not present, or ‗j‘ is
returned , such that a[j]=x.

 We observe that low and high are integer variables such that each time through
the loop either x is found, or low is increased by at least one, or high is
decreased by at least one.
 Thus we have 2 sequences of integers approaching each other, and
eventually low becomes greater than high and causes termination in a finite
number of steps if x is not present.
Example:
1. Let us select the 14 entries.
-15,-6,0,7,9,23,54,82,101,112,125,131,142,151.
 Place them in a[1:14], and simulate the steps Binsearch goes through as it
searches for different values of ‗x‘.
 Only the variables, low, high & mid need to be traced as we simulate the
algorithm.
 We try the following values for x: 151, -14 and 9,
for 2 successful searches and 1 unsuccessful search.

Table shows the traces of Bin search on these 3 steps:


X=151 low high mid
1 14 7
8 14 11
12 14 13
14 14 14
Found

x=-14 low high mid


1 14 7
1 6 3
1 2 1
2 2 2
2 1 Not found

x=9 low high mid


1 14 7
1 6 3
4 6 5
Found

Theorem: Algorithm Binsearch(a,n,x) works correctly.


Proof:
 We assume that all statements work as expected and that comparisons such
as x>a[mid] are appropriately carried out.
 Initially low =1, high= n,n>=0, and a[1]<=a[2]<=……..<=a[n].
 If n=0, the while loop is not entered and is returned.
 Otherwise we observe that each time through the loop the possible elements to
be checked for equality with x are a[low], a[low+1], ..., a[mid], ..., a[high].
 If x=a[mid], then the algorithm terminates successfully.
 Otherwise, the range is narrowed by either increasing low to (mid+1) or
decreasing high to (mid-1).
 Clearly this narrowing of the range does not affect the outcome of the search.

 If low becomes > than high, then ‗x‘ is not present & hence the loop is exited.

Analysis
Time Complexity for Successful Search

Best case :T(n)=O(1)


Average case :T(n)=O(log n)
Worst case :T(n)=O(log n)
Time Complexity for Unsuccessful Search
        T(n) = O(log n)
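The algorithm above, transcribed to Python over the 14-entry example (a sketch using 1-based positions, returning 0 when x is absent, exactly as in the traces):

```python
def binsearch(a, x):
    """Iterative binary search on a sorted list; 1-based result, 0 if absent."""
    low, high = 1, len(a)
    while low <= high:
        mid = (low + high) // 2
        if x < a[mid - 1]:        # a is 0-based, positions are 1-based
            high = mid - 1
        elif x > a[mid - 1]:
            low = mid + 1
        else:
            return mid
    return 0

a = [-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151]
print(binsearch(a, 151), binsearch(a, -14), binsearch(a, 9))  # 14 0 5
```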

2.1.2 Finding The Maximum And Minimum


 Let us consider another simple problem that can be solved by the divide-and-
conquer technique.
 The problem is to find the maximum and minimum items in a set of ‗n‘
elements.
 In analyzing the time complexity of this algorithm, we once again concentrate
on the no. of element comparisons .
 More importantly, when the elements in a[1:n] are polynomials, vectors, very
large numbers, or strings of character, the cost of an element comparison is
much higher than the cost of the other operations.
 Hence the time is determined mainly by the total cost of the element
comparison.
 There are two methods namely,
(i)Straight forward method
(ii)D &C method
Straight Forward Method
Algorithm
1. Algorithm MaxMin(a,n,max,min)
2. // set max to the maximum & min to the minimum of a[1:n]
3. {
4. max:=min:=a[1];
5. for i:=2 to n do
6. {
7. if(a[i]>max) then
8. max:=a[i];
9. if(a[i]<min) then
10. min:=a[i];
11. }
12. }
Time Complexity
o Straight forward method requires 2(n-1) element comparison in the best,
average & worst cases.
o An immediate improvement is possible by realizing that the comparison
a[i]<min is necessary only when a[i]>max is false.
o Hence we can replace the contents of the for loop by,
o if(a[i]>max) then max:=a[i];
o else if (a[i]<min) then min:=a[i];
Best Case
 The best case occurs when the elements are in increasing order.

 The no. of element comparison is (n-1).
Worst Case
 The worst case occurs when the elements are in decreasing order.
 The no. of elements comparison is 2(n-1)
Average Case
 The average no. of element comparison is < than 2(n-1)
 On the average a[i] is > than max half the time, and so, the avg. no. of
comparison is 3((n-1)/2).
Divide & Conquer Method
 A divide- and conquer algorithm for this problem would proceed as follows:
 Let P=(n,a[I],……,a[j]) denote an arbitrary instance of the problem.
 Here ‗n‘ is the no. of elements in the list (a[i],….,a[j]) and we are interested in
finding the maximum and minimum of the list.
 If the list has more than 2 elements, P has to be divided into smaller
instances.
 For example , we might divide ‗P‘ into the 2 instances,
P1=([n/2],a[1],……..a[n/2]) & P2= (n-[n/2],a[[n/2]+1],…..,a[n])

 After having divided ‗P‘ into 2 smaller sub problems, we can solve them by
recursively invoking the same divide-and-conquer algorithm.
Algorithm: Recursively Finding the Maximum & Minimum
1. Algorithm MaxMin (i,j,max,min)
2. //a[1:n] is a global array, parameters i & j
3. //are integers, 1<=i<=j<=n.The effect is to
4. //set max & min to the largest & smallest value
5. //in a[i:j], respectively.
6. {
7. if(i=j) then max:= min:= a[i];
8. else if (i=j-1) then // Another case of small(p)
9. {
10. if (a[i]<a[j]) then
11. {
12. max:=a[j];
13. min:=a[i];
14. }
15. else
16. {
17. max:=a[i];
18. min:=a[j];
19. }
20. }
21. else
22. {
23. // if P is not small, divide P into subproblems.
24. // find where to split the set
25. mid:=(i+j)/2;
26. //solve the subproblems
27. MaxMin(i,mid,max,min);
28. MaxMin(mid+1,j,max1,min1);

29. //combine the solutions
30. if (max<max1) then max:=max1;
31. if (min>min1) then min:=min1;
32. }
33. }
 The procedure is initially invoked by the statement,
MaxMin(1,n,x,y)
 Suppose we simulate MaxMin on the following 9 elements
A: [1] [2] [3] [4] [5] [6] [7] [8] [9]
22 13 -5 -8 15 60 17 31 47
 A good way of keeping track of recursive calls is to build a tree by adding a
node each time a new call is made.
 For this algorithm each node has 4 items of information: i, j, max & min.
 Examining fig: we see that the root node contains 1 & 9 as the values of i &j
corresponding to the initial call to MaxMin.
 This execution produces 2 new calls to MaxMin, where i&j have the values 1,
5 & 6 ,9 respectively & thus split the set into 2 subsets of approximately the
same size.
 From the tree, we can immediately see the maximum depth of recursion is 4.
(including the 1st call)
 The included numbers in the upper left corner of each node represent the
order in which max & min are assigned values.
No. of Element Comparisons
 If T(n) represents this number, then the resulting recurrence relation is
        T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 2        n > 2
        T(n) = 1                              n = 2
        T(n) = 0                              n = 1

 When n is a power of 2, n = 2^k for some positive integer k, then
        T(n) = 2T(n/2) + 2
             = 2(2T(n/4) + 2) + 2
             = 4T(n/4) + 4 + 2
               *
               *
             = 2^(k-1) T(2) + ∑ (1 <= i <= k-1) 2^i
             = 2^(k-1) + 2^k – 2
             = n/2 + n – 2
        T(n) = (3n/2) – 2
* Note that (3n/2) – 2 is the best, average, and worst-case number of comparisons
when n is a power of 2.
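A sketch of the recursive MaxMin that also counts element comparisons, confirming 3n/2 – 2 when n is a power of 2 (0-based indices):

```python
def max_min(a, i, j):
    """Recursive max/min of a[i..j] (0-based); returns (max, min, comparisons)."""
    if i == j:                        # one element: no comparisons
        return a[i], a[i], 0
    if i == j - 1:                    # two elements: one comparison
        return (a[j], a[i], 1) if a[i] < a[j] else (a[i], a[j], 1)
    mid = (i + j) // 2
    mx1, mn1, c1 = max_min(a, i, mid)
    mx2, mn2, c2 = max_min(a, mid + 1, j)
    # combining the two halves costs two more comparisons
    return max(mx1, mx2), min(mn1, mn2), c1 + c2 + 2

a = [22, 13, -5, -8, 15, 60, 17, 31, 47]
print(max_min(a, 0, len(a) - 1))  # (60, -8, 12)

# When n is a power of 2, the count is exactly 3n/2 - 2:
b = list(range(16))
assert max_min(b, 0, 15)[2] == 3 * 16 // 2 - 2
```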

2.1.3 Merge Sort


 As another example of divide-and-conquer, we investigate a sorting algorithm
that has the nice property that in the worst case its complexity is O(n log n).
 This algorithm is called merge sort
 We assume throughout that the elements are to be sorted in non-decreasing
order.

 Given a sequence of ‗n‘ elements a[1],…,a[n] the general idea is to imagine
then split into 2 sets a[1],…..,a[n/2] and a[[n/2]+1],….a[n].
 Each set is individually sorted, and the resulting sorted sequences are
merged to produce a single sorted sequence of ‗n‘ elements.
 Thus, we have another ideal example of the divide-and-conquer strategy, in
which the splitting is into 2 equal-sized sets and the combining operation is
the merging of 2 sorted sets into one.

Algorithm For Merge Sort


1. Algorithm MergeSort(low,high)
2. //a[low:high] is a global array to be sorted
3. //Small(P) is true if there is only one element
4. //to sort. In this case the list is already sorted.
5. {
6. if (low<high) then //if there are more than one element
7. {
8. //Divide P into subproblems
9. //find where to split the set
10. mid = (low+high)/2;
11. //solve the subproblems.
12. mergesort (low,mid);
13. mergesort(mid+1,high);
14. //combine the solutions .
15. merge(low,mid,high);
16. }
17. }

Algorithm: Merging 2 sorted subarrays using auxiliary storage.


1. Algorithm merge(low,mid,high)
2. //a[low:high] is a global array containing
3. //two sorted subsets in a[low:mid]
4. //and in a[mid+1:high].The goal is to merge these 2 sets into
5. //a single set residing in a[low:high].b[] is an auxiliary global array.
6. {
7. h=low; i=low; j=mid+1;
8. while ((h<=mid) and (j<=high)) do
9. {
10. if (a[h]<=a[j]) then
11. {
12. b[i]=a[h];
13. h = h+1;
14. }
15. else
16. {
17. b[i]= a[j];
18. j=j+1;
19. }
20. i=i+1;
21. }
22. if (h>mid) then

23. for k=j to high do
24. {
25. b[i]=a[k];
26. i=i+1;
27. }
28. else
29. for k=h to mid do
30. {
31. b[i]=a[k];
32. i=i+1;
33. }
34. for k=low to high do
35. a[k] = b[k];
36. }

 Consider the array of 10 elements a[1:10] =(310, 285, 179, 652, 351, 423,
861, 254, 450, 520)
 Algorithm Mergesort begins by splitting a[] into 2 subarrays each of size five
(a[1:5] and a[6:10]).
 The elements in a[1:5] are then split into 2 subarrays of size 3 (a[1:3] ) and
2(a[4:5])
 Then the items in a[1:3] are split into subarrays of size 2 (a[1:2]) and
size 1 (a[3:3]).
 The 2 values in a[1:2] are split a final time into one-element subarrays, and
now the merging begins.
(310| 285| 179| 652, 351| 423, 861, 254, 450, 520)
 Where vertical bars indicate the boundaries of subarrays.
Elements a[1] and a[2] are merged to yield,
(285, 310|179|652, 351| 423, 861, 254, 450, 520)
 Then a[3] is merged with a[1:2] and
(179, 285, 310| 652, 351| 423, 861, 254, 450, 520)
 Next, elements a[4] & a[5] are merged.
(179, 285, 310| 351, 652 | 423, 861, 254, 450, 520)
 and then a[1:3] & a[4:5]
(179, 285, 310, 351, 652| 423, 861, 254, 450, 520)
 Repeated recursive calls are invoked producing the following subarrays.
(179, 285, 310, 351, 652| 423| 861| 254| 450, 520)
 Elements a[6] &a[7] are merged.
Then a[8] is merged with a[6:7]
(179, 285, 310, 351, 652| 254,423, 861| 450, 520)
 Next a[9] &a[10] are merged, and then a[6:8] & a[9:10]
(179, 285, 310, 351, 652| 254, 423, 450, 520, 861 )
 At this point there are 2 sorted subarrays& the final merge produces the fully
sorted result.
(179, 254, 285, 310, 351, 423, 450, 520, 652, 861)
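The whole splitting-and-merging process above can be sketched in Python (returning a new sorted list rather than merging through the auxiliary array b[] in place):

```python
def merge_sort(a):
    """Divide-and-conquer sort mirroring MergeSort/merge above; O(n log n)."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # merge the two sorted halves into one sorted list
    b, h, j = [], 0, 0
    while h < len(left) and j < len(right):
        if left[h] <= right[j]:
            b.append(left[h]); h += 1
        else:
            b.append(right[j]); j += 1
    return b + left[h:] + right[j:]

a = [310, 285, 179, 652, 351, 423, 861, 254, 450, 520]
print(merge_sort(a))
# [179, 254, 285, 310, 351, 423, 450, 520, 652, 861]
```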

Tree for calls of MergeSort
1,10

1,5 6,10

1,3 4,5 6,8 9,10

1,2 3,3 4,4 5,5 6,7 8,8 9,9 10,10

6,6 7,7
1,1 2,2

Figure 2.1


This represents the sequence of recursive calls that are produced by merge
sort when it is applied to 10 elements.
 The pair of values in each node are the values of the parameters low and high.
 The splitting process is continued until a set contains a single element.
Tree for calls of merge

1,1,2 6,6,7

1,2,3 4,4,5 6,7,8 9,9,10

6,8,10
1,3,5

1,5,10

Figure 2.2
 This tree represents the merging of 2 nodes.
 It represents the calls to procedure merge by mergesort.
 The values in each node are the values of parameters low,mid and high.

Time Complexity
 If the time for the merging operation is proportional to n, then the computing
time for merge sort is described by the recurrence relation

        T(n) = a                   n = 1, a a constant
        T(n) = 2T(n/2) + cn        n > 1, c a constant

 When n is a power of 2, n = 2^k, we can solve this equation by successive
substitution.
        T(n) = 2(2T(n/4) + cn/2) + cn
             = 4T(n/4) + 2cn
             = 4(2T(n/8) + cn/4) + 2cn
               *
               *
             = 2^k T(1) + kcn
             = an + cn log n

 It is easy to see that if 2^k < n <= 2^(k+1), then T(n) <= T(2^(k+1)). Therefore,

        T(n) = O(n log n)

2.1.4. Quick Sort


 The divide-and-conquer approach can be used to arrive at an efficient sorting
method different from merge sort.
 In merge sort, the file a[1:n] was divided at its midpoint into sub arrays which
were independently sorted & later merged.
 In Quick sort, the division into 2 sub arrays is made so that the sorted sub
arrays do not need to be merged later.
 This is accomplished by rearranging the elements in a[1:n] such that a[i]<=a[j]
for all i between 1 & m and all j between (m+1) & n, for some m, 1<=m<=n.

 Thus the elements in a[1:m] & a[m+1:n] can be independently sorted.


 No merge is needed. This rearranging is referred to as partitioning.
 Function partition of Algorithm accomplishes an in-place partitioning of the
elements of a[m:p-1]
 It is assumed that a[p]>=a[m] and that a[m] is the partitioning element. If m=1
& p-1=n, then a[n+1] must be defined and must be greater than or equal to all
elements in a[1:n]
 The assumption that a[m] is the partition element is merely for convenience,
other choices for the partitioning element than the first item in the set are
better in practice.
 The function interchange(a,i,j) exchanges a[i] with a[j].

Algorithm: Partition the array a[m:p-1] about a[m]


1. Algorithm Partition(a,m,p)
2. //within a[m],a[m+1],…..,a[p-1] the elements
3. // are rearranged in such a manner that if
4. //initially t=a[m],then after completion
5. //a[q]=t for some q between m and
6. //p-1,a[k]<=t for m<=k<q, and

7. //a[k]>=t for q<k<p. q is returned
8. //Set a[p]=infinite.
9. {
10. v:=a[m]; i:=m; j:=p;
11. repeat
12. {
13.     repeat
14.         i:=i+1;
15.     until (a[i]>=v);
16.     repeat
17.         j:=j-1;
18.     until (a[j]<=v);
19.     if (i<j) then interchange(a,i,j);
20. } until (i>=j);
21. a[m]:=a[j]; a[j]:=v;
22. return j;
23. }

Algorithm
1. Algorithm Interchange(a,i,j)
2. //Exchange a[i] with a[j]
3. {
4.     p:=a[i];
5.     a[i]:=a[j];
6.     a[j]:=p;
7. }

Algorithm: Sorting by Partitioning


1. Algorithm Quicksort(p,q)
2. //Sort the elements a[p],….a[q] which resides
3. //is the global array a[1:n] into ascending
4. //order; a[n+1] is considered to be defined
5. // and must be >= all the elements in a[1:n]
6. {
7. if(p<q) then // If there are more than one element
8. {
9. // divide p into 2 subproblems
10. j:=partition(a,p,q+1);
11. //j is the position of the partitioning element.
12. //solve the subproblems.
13. quicksort(p,j-1);
14. quicksort(j+1,q);
15. //There is no need for combining solution.
16. }
17. }
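A Python sketch of Partition and Quicksort as given above, with the first element as the pivot (the `i < p` guard plays the role of the a[p] = ∞ sentinel assumed by the pseudocode):

```python
def partition(a, m, p):
    """Partition a[m..p-1] about v = a[m]; returns the pivot's final position."""
    v, i, j = a[m], m, p
    while True:
        i += 1
        while i < p and a[i] < v:     # i < p guard replaces the a[p] sentinel
            i += 1
        j -= 1
        while a[j] > v:
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]       # interchange(a, i, j)
    a[m], a[j] = a[j], v
    return j

def quicksort(a, p, q):
    """Sort a[p..q] in place; no combining step is needed."""
    if p < q:
        j = partition(a, p, q + 1)
        quicksort(a, p, j - 1)
        quicksort(a, j + 1, q)

data = [65, 70, 75, 80, 85, 60, 55, 50, 45]
quicksort(data, 0, len(data) - 1)
print(data)  # [45, 50, 55, 60, 65, 70, 75, 80, 85]
```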

2.1.5 Strassen's Matrix Multiplication


Conventional Method
 Let A and B be two n*n matrices. The product matrix C = AB is calculated by
using the formula,
        C(i,j) = ∑ (k=1 to n) A(i,k) * B(k,j)   for all i and j between 1 and n.
 The time complexity for this matrix multiplication is O(n^3).
Divide & Conquer Method
 Divide and conquer method suggest another way to compute the product of
n*n matrix.
 We assume that n is a power of 2 .In the case n is not a power of 2 ,then
enough rows and columns of zero can be added to both A and B .So that the
resulting dimension are the powers of two.
 If n = 2, then the product is computed directly using the formulas below for
the elements of A & B.
 If n > 2, then the matrices are partitioned into n/2 * n/2 sub matrices. Since n
is a power of 2, these products can be recursively computed using the same
formulas. The algorithm continues applying itself to smaller sub matrices until
n becomes suitably small (n = 2), so that the product is computed directly.
 The formulas are

        A11  A12       B11  B12       C11  C12
                    *              =
        A21  A22       B21  B22       C21  C22

        C11 = A11 B11 + A12 B21
        C12 = A11 B12 + A12 B22
        C21 = A21 B11 + A22 B21
        C22 = A21 B12 + A22 B22

Example
The divide and conquer method on two 4*4 matrices:

        2 2 2 2     1 1 1 1     4 4 4 4
        2 2 2 2  *  1 1 1 1  =  4 4 4 4
        2 2 2 2     1 1 1 1     4 4 4 4
        2 2 2 2     1 1 1 1     4 4 4 4

 To compute AB using the equations above, we need to perform 8 multiplications
of n/2 * n/2 matrices and 4 additions of n/2 * n/2 matrices.
 Strassen showed (see below) that P, Q, R, S, T, U, V can be computed using
7 matrix multiplications and 10 additions or subtractions, and that the Cij
then require 8 more additions or subtractions, giving the recurrence

        T(n) = b                     n <= 2, where a & b are constants
        T(n) = 7T(n/2) + an^2        n > 2

 Solving, we finally get T(n) = O(n^(log2 7)).

Example
 Let A and B both be the 2*2 matrix with every entry 4:

    4 4      4 4
         *
    4 4      4 4

P = (4+4)(4+4) = 64
Q = (4+4)4 = 32
R = 4(4-4) = 0
S = 4(4-4) = 0
T = (4+4)4 = 32
U = (4-4)(4+4) = 0
V = (4-4)(4+4) = 0
C11 = 64+0-32+0 = 32
C12 = 0+32 = 32
C21 = 32+0 = 32
C22 = 64+0-32+0 = 32

So the answer C(i,j) is    32 32
                           32 32

Since two n/2 * n/2 matrices can be added in cn^2 time for some constant c, the
overall computing time T(n) of the straightforward divide and conquer algorithm is
given by the recurrence

    T(n) = b                  n <= 2
         = 8T(n/2) + cn^2     n > 2,    where b and c are constants.

That is, T(n) = O(n^3).
Drawbacks
 There is no improvement in the divide and conquer method when compared to the
conventional method.
 Matrix multiplication (O(n^3)) is more expensive than matrix addition (O(n^2)). We
can attempt to reformulate the equations for the Cij so as to have fewer
multiplications and possibly more additions.

Strassen's Method
 Strassen discovered a way to compute the Cij of equation (2) using only
7 multiplications and 18 additions or subtractions.
 Strassen's formulas are
P=(A11+A22)(B11+B22)
Q=(A21+A22)B11
R=A11(B12-B22)
S=A22(B21-B11)
T=(A11+A12)B22
U=(A21-A11)(B11+B12)
V=(A12-A22)(B21+B22)

Cij is computed using the following formulas:
C11=P+S-T+V
C12=R+T
C21=Q+S
C22=P+R-Q+U

 Computation of P,Q,R,S,T,U,V need 7 multiplications and 10


additions/subtractions.
 Computation of Cij need 8 additions/subtractions.

Time Complexity

    T(n) = b                  n <= 2
         = 7T(n/2) + an^2     n > 2,    where a and b are constants.

By the substitution method, we get
    T(n) = O(n^(log2 7)) ≈ O(n^2.81)
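Strassen's seven products and the Cij combinations can be sketched as a recursive Python function (assuming, as above, that n is a power of 2; the helper names are illustrative):

```python
def strassen(A, B):
    # Multiply two n x n matrices (n a power of 2) using 7 recursive
    # multiplications per level instead of 8.
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    def quad(M, r, c):                 # extract an m x m quadrant
        return [row[c:c + m] for row in M[r:r + m]]
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A, 0, 0), quad(A, 0, m), quad(A, m, 0), quad(A, m, m)
    B11, B12, B21, B22 = quad(B, 0, 0), quad(B, 0, m), quad(B, m, 0), quad(B, m, m)
    P = strassen(add(A11, A22), add(B11, B22))
    Q = strassen(add(A21, A22), B11)
    R = strassen(A11, sub(B12, B22))
    S = strassen(A22, sub(B21, B11))
    T = strassen(add(A11, A12), B22)
    U = strassen(sub(A21, A11), add(B11, B12))
    V = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(P, S), T), V)
    C12 = add(R, T)
    C21 = add(Q, S)
    C22 = add(sub(add(P, R), Q), U)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]   # stitch quadrants together
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

On the worked example above, `strassen([[4, 4], [4, 4]], [[4, 4], [4, 4]])` gives the all-32 matrix.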

2.2 Greedy Method

2.2.1General Method
 The greedy method is the most straightforward design technique.
 As the name suggests, greedy algorithms are short-sighted in their approach, taking
decisions on the basis of the information immediately at hand, without
worrying about the effect these decisions may have in the future.
Definition
 A problem with N inputs will have some constraints. Any subset of the inputs that
satisfies these constraints is called a feasible solution.
 A feasible solution that either maximizes or minimizes a given objective
function is called an optimal solution.
 For a maximization problem, an optimal solution is a feasible solution with the highest profit.
Control algorithm for Greedy Method
1. Algorithm Greedy (a,n)
2.//a[1:n] contain the ‗n‘ inputs
3. {
4. Solution =0;//Initialise the solution.
5. For i=1 to n do
6. {
7. x=select(a);
8. if (feasible(solution,x))then
9. Solution=union (solution,x);
10.}
11. return solution;
12.}

Function select( )
 The function select an input from a[] and removes it.
 The select input value is assigned to x.

Function feasible( )
 It is a Boolean value function that determines whether x can be included into
the solution vector or not.
Function union( )
 It combines x with the solution and updates the objective function.
Function greedy( )
 The function Greedy describes the essential way that a greedy algorithm will
look,once a particular problem is chosen and the functions select( ) , feasible(
) & union( ) are properly implemented.

Example
 Suppose that in some country the following coins are available:
Dollars (100 cents)
Quarters (25 cents)
Dimes (10 cents)
Nickels (5 cents)
Pennies (1 cent)
 Our aim is to pay a given amount to a customer using the smallest possible
number of coins.
 For example, if we must pay 276 cents, possible solutions are:
1 dollar + 7 quarters + 1 penny  → 9 coins
2 dollars + 3 quarters + 1 penny → 6 coins
2 dollars + 7 dimes + 1 nickel + 1 penny → 11 coins
The greedy choice, always taking the largest coin that still fits, yields the 6-coin solution.
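The greedy rule can be sketched as follows (for this particular coin system the greedy choice happens to be optimal; the function name is illustrative):

```python
def make_change(amount, coins=(100, 25, 10, 5, 1)):
    # Greedy: repeatedly take the largest coin that still fits.
    # coins must be listed in decreasing value.
    used = []
    for c in coins:
        while amount >= c:
            amount -= c
            used.append(c)
    return used
```

For example, `make_change(276)` returns two dollars, three quarters and a penny: 6 coins.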

2.2.2. Knapsack Problem (Fractional knapsack problem)


 We are given n objects and knapsack or bag with capacity M.
 Each object i have a weight wi and profit pi, where i varies from 1 to n.
 The problem is we have to fill the bag with the help of n objects and the
resulting profit should be maximum.
 Total weight of all chosen objects to be almost M.
 Formally the problem can be stated as
Maximize ∑xipi
subject to ∑XiWi<=M where 1<=i<=n
Where xi is the fraction of object and it lies between 0 to 1.
 There are so many ways to solve this problem which will give many feasible
solutions for which we have to find the optimal solution.
 But in this algorithm, it will generate only one solution which is going to be
feasible as well as optimal.
 First we find the profit-to-weight ratio of each object and sort the objects
in descending order of these ratios.
 Select the object with the highest p/w ratio and check whether its weight is less
than the remaining capacity of the bag.
 If so, place 1 unit of that object and decrement the capacity of the bag by
the weight of the object you have placed.
 Repeat the above steps until the capacity of the bag becomes less than the
weight of the object you have selected. In this case, place a fraction of the
object and come out of the loop.

Algorithm
Algorithm GreedyKnapsack(m, n)
// p[1:n] and w[1:n] contain the profits and weights respectively of the
// n objects, ordered such that p[i]/w[i] >= p[i+1]/w[i+1].
// m is the knapsack size and x[1:n] is the solution vector.
{
  for i := 1 to n do x[i] := 0.0;
  u := m;
  for i := 1 to n do
  {
    if (w[i] > u) then break;
    x[i] := 1.0; u := u - w[i];
  }
  if (i <= n) then x[i] := u/w[i];
}

Example
There is a bag that can carry only 20 kg. We have 3 objects with different
weights. We have to put objects inside the bag so that the total weight does not
exceed the capacity and the profit is maximum.
Capacity M = 20, N = 3
Wi    = 18, 15, 10
Pi    = 25, 24, 15
Pi/Wi = 25/18 = 1.39,  24/15 = 1.6,  15/10 = 1.5
Descending order  Pi/Wi  1.6   1.5    1.39
                  Pi     24    15     25
                  Wi     15    10     18
                  Xi     1     5/10   0
∑PiXi = 1*24 + 0.5*15 = 31.5
The optimal solution is 31.5
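The steps above can be sketched in Python (the function returns the solution vector x and the total profit; names are illustrative):

```python
def fractional_knapsack(profits, weights, capacity):
    # Consider objects in descending order of profit/weight ratio.
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * len(profits)        # fraction of each object taken
    room = capacity
    for i in order:
        if weights[i] <= room:      # the whole object fits
            x[i] = 1.0
            room -= weights[i]
        else:                       # place a fraction and stop
            x[i] = room / weights[i]
            break
    total = sum(x[i] * profits[i] for i in range(len(profits)))
    return x, total
```

On the example above, `fractional_knapsack([25, 24, 15], [18, 15, 10], 20)` yields x = (0, 1, 0.5) and profit 31.5.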

2.2.3 Minimum- Cost Spanning Trees


 Let G = (V, E) be an undirected connected graph with vertex set V and edge set E.
 A sub-graph t = (V, E') of G is a spanning tree of G iff t is a tree.
 The problem is to find a subset E' of E such that t = (V, E') is a minimum
spanning tree of G.
 Each edge has a given non-negative length. We must connect all
the nodes with the edges in set E', and the total weight has to be minimum.

Note
 We have to visit all the nodes.
 The subset tree (i.e.) any connected graph with ‗N‘ vertices must have at least
N-1 edges and also it does not form a cycle.

Definition
 A spanning tree of a graph is an undirected tree consisting of only those
edges that are necessary to connect all the vertices in the original graph.
 A Spanning tree has a property that for any pair of vertices there exist only
one path between them and the insertion of an edge to a spanning tree form a
unique cycle.

Figure 2.3 Graph And Its Spanning Trees


Application of the spanning tree
1. Analysis of electrical circuits.
2. Shortest route problems.
Minimum cost spanning tree
 The cost of a spanning tree is the sum of cost of the edges in those trees.
 There are 2 methods to determine a minimum cost spanning tree. They are
1. Kruskal‘s Algorithm
2. Prim‘s Algorithm.

Figure 2.4 A Graph And Its Minimum Cost Spanning Tree

2.2.3.i. Kruskal’s Algorithm


In Kruskal's algorithm the selection function chooses edges in increasing order
of length, without worrying too much about their connection to previously chosen
edges, except that an edge is never chosen if it would form a cycle. The result is a
forest of trees that grows until all the trees in the forest (all the components)
merge into a single tree.
 In this algorithm, a minimum cost-spanning tree ‗T‘ is built edge by edge.
 Edge are considered for inclusion in ‗T‘ in increasing order of their cost.
 An edge is included in ‗T‘ if it doesn‘t form a cycle with edge already in T.
 To find the minimum cost spanning tree the edge are inserted to tree in
increasing order of their cost .
Algorithm
Algorithm Kruskal(E, cost, n, t)
// E: set of edges in G, which has n vertices.
// cost[u,v]: cost of edge (u,v). t: set of edges in the minimum-cost spanning
// tree. The final cost is returned.
{
  for i := 1 to n do parent[i] := -1;
  i := 0; mincost := 0.0;
  while ((i < n-1) and (heap not empty)) do
  {
    delete a minimum cost edge (u,v) from the heap and reheapify using Adjust;
    j := find(u);
    k := find(v);
    if (j ≠ k) then
    {
      i := i + 1;
      t[i,1] := u;
      t[i,2] := v;
      mincost := mincost + cost[u,v];
      union(j,k);
    }
  }
  if (i ≠ n-1) then write("No spanning tree");
  else return mincost;
}

Example: Step-by-step operation of Kruskal's algorithm.

Step 1. In the graph, the edge (g, h) is shortest. Either vertex g or vertex h could be
the representative. Let's choose vertex g arbitrarily.

Step 2. The edge (c, i) creates the second tree. Choose vertex c as representative
for the second tree.

Step 3. Edge (g, f) is the next shortest edge. Add this edge and choose vertex g as
representative.

Step 4. Edge (a, b) creates a third tree.

Step 5. Add edge (c, f) and merge two trees. Vertex c is chosen as the
representative.

Step 6. Edge (g, i) is the next cheapest, but if we add this edge a cycle would
be created, since vertex c is the representative of both endpoints.

Step 7. Instead, add edge (c, d).

Step 8. If we add edge (h, i), edge(h, i) would make a cycle.

Step 9. Instead of adding edge (h, i) add edge (a, h).

Step 10. Again, if we add edge (b, c), it would create a cycle. Add edge (d, e) instead
to complete the spanning tree. In this spanning tree all trees joined and vertex c is a
sole representative.

Analysis
 The time complexity of minimum cost spanning tree algorithm in worst case is
O (|E|log|E|),
where E is the set of edges in G.
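A compact Python sketch of Kruskal's method, using a simple union-find structure in place of the heap/Adjust machinery of the pseudocode (0-indexed vertices; the names are illustrative):

```python
def kruskal(n, edges):
    # edges: list of (cost, u, v) tuples; vertices numbered 0..n-1.
    parent = list(range(n))
    def find(x):                        # set representative, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mincost, tree = 0, []
    for cost, u, v in sorted(edges):    # edges in increasing order of cost
        ru, rv = find(u), find(v)
        if ru != rv:                    # (u, v) does not create a cycle
            parent[ru] = rv             # union the two trees
            tree.append((u, v))
            mincost += cost
    if len(tree) != n - 1:
        return None, None               # the graph was not connected
    return mincost, tree
```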

2.2.3.ii. Prim’s Algorithm


Start from an arbitrary vertex (root). At each stage, add a new branch (edge)
to the tree already constructed; the algorithm halts when all the vertices in the graph
have been reached.

 Suppose V = {1, 2, 3, …, n}.
 Traversal starts with a set U initialized to {1}.
 The algorithm then grows the spanning tree one edge at a time.
 At each step, it finds the lowest-cost edge (i, j) that connects U and V−U, and then
adds the vertex in V−U to U. [The next edge (i, j) to be added is such
that i is a vertex already included in the tree, j is a vertex not yet included, and
the cost of (i, j), cost[i,j], is minimum among all such edges.]
 Repeat the above step until U = V.
 The working of Prim's algorithm is illustrated by the diagrams below.
Algorithm
Procedure prim(c:array[1…n,1…n] of real)
{
prim print the edges of minimum cost spanning tree for a graph with vertices
{1,2,…n} and cost matrix c on edges}
var

LOWCOST: array (1…n) of real;
CLOSEST: array (1…n) of integer;
i, j, k, min: integer;
{
i and j are indices. During the scan of LOWCOST array, k is the index
of the closest vertex found so far and min=LOWCOST[k]
}
begin
  for i := 2 to n do begin
    { initialize: only vertex 1 is in the tree set U }
    LOWCOST[i] := c[1, i];
    CLOSEST[i] := 1;
  end
  for i := 2 to n do begin
    { find the closest vertex k outside U to some vertex in U }
    min := LOWCOST[2]; k := 2;
    for j := 3 to n do
      if LOWCOST[j] < min then begin
        min := LOWCOST[j]; k := j;
      end
    write(k, CLOSEST[k]); { print edge }
    LOWCOST[k] := infinity; { k is added to U }
    for j := 2 to n do { adjust costs to U }
      if (c[k, j] < LOWCOST[j]) and (LOWCOST[j] < infinity) then begin
        LOWCOST[j] := c[k, j];
        CLOSEST[j] := k;
      end
  end
end {prim}

Steps 1 through 6: (diagrams showing the tree growing by one lowest-cost edge at each step)

Analysis
T(n)=O(n2).
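The LOWCOST/CLOSEST arrays of the pseudocode translate to Python as follows (0-indexed vertices, adjacency-matrix input with INF for missing edges; a sketch, not a tuned implementation):

```python
INF = float('inf')

def prim(cost):
    # cost: n x n matrix, cost[i][j] = INF when there is no edge.
    n = len(cost)
    lowcost = [cost[0][j] for j in range(n)]   # cheapest edge from tree to j
    closest = [0] * n                          # tree endpoint of that edge
    in_tree = [False] * n
    in_tree[0] = True                          # start from vertex 0
    edges, total = [], 0
    for _ in range(n - 1):
        # pick the outside vertex with the cheapest connecting edge
        k = min((j for j in range(n) if not in_tree[j]),
                key=lambda j: lowcost[j])
        edges.append((closest[k], k))
        total += lowcost[k]
        in_tree[k] = True
        for j in range(n):                     # adjust costs to the larger tree
            if not in_tree[j] and cost[k][j] < lowcost[j]:
                lowcost[j] = cost[k][j]
                closest[j] = k
    return total, edges
```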

2.2.4 Single Source Shortest Paths


This problem is a common path-finding problem on directed graphs. We
are given a directed graph G = (V, E) in which each arc has a non-negative label,
and one vertex is specified as the source.
Our problem is to determine the cost of the shortest path from the source
to every other vertex in V, where the length of a path is the sum of the
costs of the arcs on the path. This problem is often called the single source
shortest path problem.
To solve this problem we use a greedy technique known as
Dijkstra's algorithm. The algorithm works by maintaining a set S of vertices
whose shortest distance from the source is already known. Initially S contains
only the source vertex. At each step we add to S a remaining vertex v whose
distance from the source is as short as possible.
Assuming all arcs have non-negative cost, we can always find a shortest
path from the source to v that passes only through vertices in S; call such a
path special. At each step of the algorithm we use an array D to record the
length of the shortest special path to each vertex.
Once S includes all vertices, all paths are special, so D holds the shortest
distance from the source to each vertex.
Dijkstra's algorithm
Procedure Dijkstra
{ Dijkstra computes the cost of the shortest path from vertex 1 to every vertex of a
directed graph }
begin
  S := {1};
  for i := 2 to n do
    D[i] := c[1,i]; { initialize D }
  for i := 1 to n-1 do begin
    choose a vertex w in V−S such that D[w] is minimum;
    add w to S;
    for each vertex v in V−S do
      D[v] := min(D[v], D[w] + c[w,v]);
  end
end { Dijkstra }
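A direct Python rendering of the procedure (0-indexed vertices; c is the cost matrix, with INF for missing arcs; names are illustrative):

```python
INF = float('inf')

def dijkstra(c, source=0):
    n = len(c)
    S = {source}
    D = [c[source][v] for v in range(n)]    # D[v] = shortest special path to v
    D[source] = 0
    for _ in range(n - 1):
        # choose the vertex outside S with the smallest tentative distance
        w = min((v for v in range(n) if v not in S), key=lambda v: D[v])
        S.add(w)
        for v in range(n):                  # relax the arcs leaving w
            if v not in S and D[w] + c[w][v] < D[v]:
                D[v] = D[w] + c[w][v]
    return D
```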

2.2.5 Job Scheduling With Dead Lines


We have n jobs. Their profits and deadlines are given. Our
objective is to find a subset of jobs, each of which can be completed by its deadline,
that yields the maximum total profit.
 To complete a job, one has to process it on a machine for one unit of time.
 Only one machine is available for processing jobs.
 A feasible solution for this problem is a subset J of the jobs such that each job
in this subset can be completed by its deadline.
 If we select a job at a time,
  o Since only one job can be processed on the single machine, any other job has
    to wait until the current job is completed and the machine becomes free.
  o So the waiting time plus the processing time should be less than or
    equal to the deadline of the job.

Greedy algorithm for sequencing unit-time jobs with deadlines and profits
Algorithm JS(d, p, n)
// d[i] >= 1, 1 <= i <= n are the deadlines, n >= 1.
// The jobs are ordered such that p[1] >= p[2] >= … >= p[n].
// J[i] is the ith job in the optimal solution, 1 <= i <= k.
// Also, at termination, d[J[i]] <= d[J[i+1]], 1 <= i < k.
{
  d[0] := J[0] := 0; // initialize
  J[1] := 1;         // include job 1
  k := 1;
  for i := 2 to n do
  {
    // Consider jobs in non-increasing order of p[i];
    // find the position for i and check feasibility of insertion.
    r := k;
    while ((d[J[r]] > d[i]) and (d[J[r]] ≠ r)) do
      r := r - 1;
    if ((d[J[r]] <= d[i]) and (d[i] > r)) then
    {
      // insert i into J[]
      for q := k to (r+1) step -1 do
        J[q+1] := J[q];
      J[r+1] := i;
      k := k + 1;
    }
  }
  return k;
}
Example
1. n=5  (P1,…,P5) = (20,15,10,5,1)
        (d1,…,d5) = (2,2,1,3,3)

Feasible solution   Processing sequence   Value
(1)                 (1)                   20
(2)                 (2)                   15
(3)                 (3)                   10
(4)                 (4)                   5
(5)                 (5)                   1
(1,2)               (2,1)                 35
(1,3)               (3,1)                 30
(1,4)               (1,4)                 25
(1,5)               (1,5)                 21
(2,3)               (3,2)                 25
(2,4)               (2,4)                 20
(2,5)               (2,5)                 16
(1,2,4)             (1,2,4)               40

The solution (1,2,4), with value 40, is optimal. (The subset (1,2,3) is not
feasible: jobs 1, 2 and 3 all have deadlines at most 2, so only two of them can be
completed on time.)

2. n=4 (P1,P2,…P4)=(100,10,15,27)
(d1,d2….d4)=(2,1,2,1)

Feasible solution   Processing sequence   Value

(1,2) (2,1) 110


(1,3) (1,3) 115
(1,4) (4,1) 127
(2,3) (2,3) 25
(2,4) (4,2) 37
(3,4) (4,3) 42
(1) (1) 100
(2) (2) 10
(3) (3) 15
(4) (4) 27
The solution (1,4), with value 127, is optimal
Analysis
For JS there are two possible parameters in terms of which its complexity can be
measured: n, the number of jobs, and s, the number of jobs included in the solution J.
 The total time is O(sn).
 Since s <= n, the worst-case time is O(n^2).
 If the job set has pi = di = n-i+1, 1 <= i <= n, then JS takes Θ(n^2) time to determine J.
 So the worst-case computing time for JS is Θ(n^2).
 Space needed: the array d, plus Θ(s) space for J.
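The same greedy idea can be sketched with explicit time slots instead of the sorted array J: each job, taken in decreasing profit order, is placed in the latest free slot at or before its deadline, if one exists. This slot-based formulation is equivalent to the insertion scheme of JS; the names are illustrative.

```python
def job_sequencing(profits, deadlines):
    # Consider jobs in decreasing order of profit.
    order = sorted(range(len(profits)), key=lambda i: profits[i], reverse=True)
    slots = [None] * max(deadlines)            # slots[t]: job run in time slot t+1
    for i in order:
        # try the latest free slot at or before job i's deadline
        for t in range(min(deadlines[i], len(slots)) - 1, -1, -1):
            if slots[t] is None:
                slots[t] = i
                break
    chosen = [i for i in slots if i is not None]
    return chosen, sum(profits[i] for i in chosen)
```

The function returns the selected jobs (0-indexed) together with the total profit.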

2.2.6 Optimal Storage On Tapes
 We have n programs, a tape of length l, and a length li for each program i, 1≤i≤n.
 The objective of the problem is to store the n programs on the tape with the
constraint that the sum of the lengths of all the programs must not exceed the
length of the tape l. That is, all the programs can be stored on the tape iff the sum
of the lengths of the programs is at most l.
 Assume that whenever a program is retrieved from the tape, the tape is initially
positioned at the front.
 The time tj needed to retrieve the jth stored program is proportional to
∑ lik for k = 1 to j, i.e. to the total length of the programs stored ahead of it,
including itself.
 If all the programs are retrieved equally often, then the
mean retrieval time is
    d = (1/n) ∑ tj, 1 ≤ j ≤ n
    d = (t1 + t2 + … + tn)/n
 Our aim is to minimize the mean retrieval time. To minimize the MRT we have to
find the best permutation of the n programs, i.e. we must consider the order in which
the programs are stored.
Example
P1 → l1 = 5
P2 → l2 = 10
P3 → l3 = 3
Possible orderings = n! = 3! = 6

Ordering I    d(I)          Total
1,2,3         5+15+18       38
1,3,2         5+8+18        31
2,3,1         10+13+18      41
2,1,3         10+15+18      43
3,1,2         3+8+18        29 (min)
3,2,1         3+13+18       34

The 5th ordering, 3,1,2, gives the minimum total retrieval time, 29. So if the programs
are stored in the sequence 3,1,2, this is an optimal solution.
Analysis
The greedy method simply requires to store the programs in non decreasing
order of their lengths. This ordering can be carried out in O(n log n) time using an
efficient sorting algorithm(heap sort).
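The greedy rule, storing the programs in nondecreasing order of length, can be sketched as:

```python
def optimal_tape_order(lengths):
    # Store programs in nondecreasing order of length; the retrieval time of
    # the jth stored program is the sum of the first j lengths.
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    total, prefix = 0, 0
    for i in order:
        prefix += lengths[i]       # retrieval time of program i
        total += prefix
    return order, total            # total = sum of all retrieval times
```

On the example above, `optimal_tape_order([5, 10, 3])` stores program 3 first and gives total retrieval time 29.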

2.2.7 Optimal Merge Patterns


 We have 2 sorted files containing n and m records. The file f1 contains n
records. The file f2 contains m records.
 These 2 files can be merged together, and the resultant file is also in sorted
order.

 The time taken for merging these two files is O(m+n).
 If we have more than 2 files,then we can merge the files in some order.
 Let us take 4 files say x1,x2,x3&x4. We can merge these 4 files in any order.
 For example:
1.Merge x1 & x2 => y1
Merge y1 & x3 => y2
Merge y2 & x4 => y3
2.Merge x1 & x2 => y1
Merge x3 & x4 => y2
Merge y1 & y2 => y3
 Given n sorted files, we have to merge them and store the result in a single file
in sorted order.
 We can merge the files in any order, but it should take minimum amount of
computing time.
Example 1
n=3
File x1 length=30
File x2 length=20
File x3 length=10
Way 1
Merge x1 & x2 - 50 record moves
Merge(x1 & x2)&x3 - 60 record moves
Total no of record moves -110 record moves
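The greedy rule for an optimal merge pattern is to always merge the two smallest files first; a heap-based Python sketch (cost counted in record moves):

```python
import heapq

def optimal_merge_cost(sizes):
    # Repeatedly merge the two smallest files; each merge of files of sizes
    # a and b costs a + b record moves. Returns the total number of moves.
    heap = list(sizes)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b
        heapq.heappush(heap, a + b)
    return total
```

For the three files above (30, 20, 10), this merges 10 and 20 first (30 moves) and then 30 and 30 (60 moves), for 90 moves in total, better than the 110 of Way 1.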

Chapter 3
3.1Dynamic Programming
The General Method
 It is an algorithm design method that can be used when the solution to a
problem can be viewed as the result of a sequence of decisions.
 The idea of dynamic programming is thus quite simple: avoid calculating the
same thing twice, usually by keeping a table of known results that fills up as sub-
instances are solved.
 Divide and conquer is a top-down method. When a problem is solved by
divide and conquer, we immediately attack the complete instance, which we
then divide into smaller and smaller sub-instances as the algorithm
progresses.
 Dynamic programming on the other hand is a bottom-up technique. We
usually start with the smallest and hence the simplest sub- instances. By
combining their solutions, we obtain the answers to sub-instances of
increasing size, until finally we arrive at the solution of the original instances.
 The essential difference between the greedy method and dynamic
programming is that with the greedy method only one decision sequence is ever
generated. In dynamic programming, many decision sequences may be
generated. However, sequences containing sub-optimal sub-sequences cannot
be optimal and so will not be generated.
 Because of the principle of optimality, decision sequences containing
subsequences that are suboptimal are not considered. Although the total
number of different decision sequences is exponential in the number of
decisions (if there are d choices for each of the n decisions to be made, then
there are d^n possible decision sequences), dynamic programming algorithms
often have a polynomial complexity.

3.1.1 Multistage Graph


1. A multistage graph G = (V,E) is a directed graph in which the vertices are
partitioned into k >= 2 disjoint sets Vi, 1 <= i <= k.
2. In addition, if <u,v> is an edge in E, then u ∈ Vi and v ∈ Vi+1 for some i, 1 <= i < k.
3. The sets V1 and Vk each contain a single vertex: |V1| = |Vk| = 1.
4. Let ‗s‘ and ‗t‘ be the source and destination respectively.
5. The cost of a path from source (s) to destination (t) is the sum of the costs of the
edges on the path.
6. The multistage graph problem is to find a minimum cost path from ‗s‘ to ‗t‘.
7. Each set Vi defines a stage in the graph. Every path from ‗s‘ to ‗t‘ starts in stage-
1, goes to stage-2 then to stage-3, then to stage-4, and so on, and terminates in
stage-k.
8. This Mulistage Graph problem can be solved in 2 ways.
a) Forward Method.
b) Backward Method.
Forward Method
Assume that there are ‗k‘ stages in a graph. In this Forward approach, we will
find out the minimum cost path starting from the 1st stage to k th stage

Procedure
 Maintain a cost matrix cost(n) which stores the distance from any vertex to the
destination.
 If a vertex is having more than one path, then we have to choose the
minimum distance path and the intermediate vertex which gives the minimum
distance path will be stored in the distance array ‗D‘.
 In this way we will find out the minimum cost path from each and every vertex.
 Finally cost(1) will give the shortest distance from source to destination.
 For finding the path, start from vertex-1 then the distance array D(1) will give
the minimum cost neighbour vertex which in turn give the next nearest vertex
and proceed in this way till we reach the Destination.
 For a k-stage graph, there will be k vertices in the path.
 In the graph of Figure 3.1 below, V1…V5 represent the stages. This 5-stage graph
can be solved by using the forward approach as follows,
Algorithm
Algorithm FGraph (G,k,n,p)
// The input is a k-stage graph G=(V,E) with ‗n‘ vertex indexed in order of stages.
// E is a set of edges.
// c[i,j] is the cost of <i,j>
// p[1:k] is a minimum cost path.
{
cost[n]=0.0;
for j=n-1 to 1 step-1 do
{
//compute cost[j],
Let ‗r‘ be the vertex such that <j,r> is an edge of ‗G‘ & c[j,r]+cost[r] is
minimum.
cost[j] = c[j,r] + cost[r];
d[j] =r;
}
// find a minimum cost path.
p[1]=1;
p[k]=n;
for j=2 to k-1 do
p[j]=d[p[j-1]];
}

Figure 3.1 A five-stage graph (stages V1…V5; source s = vertex 1, sink t = vertex 12; edge costs as used in the trace below)

Trace Out
n=12 k=5

cost (12)=0 d (12)=0


for j = 11 to 1
When j=11
cost (11)=5 d (11)=12
When j=10
cost (10)=2 d (10)=12
When j=9
cost ( 9)=4 d ( 9)=12
Note:
If there is more than one path from vertex j to the next-stage vertices, then we find
the closest vertex by the following formula:
 For the forward approach,
    cost(i,j) = min { c(j,l) + cost(i+1,l) },  taken over l ∈ Vi+1 with (j,l) ∈ E

When j=8
cost(8) = min {C (8,10) + Cost (10), C (8,11) + Cost (11) }
= min (5 + 2, 6 + 5)

= min (7,11)
=7
cost(8) =7 d(8)=10
When j=7
cost(7) = min(c (7,9)+ cost(9),c (7,10)+ cost(10))
= min(4+4,3+2)
= min(8,5)
=5
cost(7) = 5 d(7) = 10
When j=6
cost(6) = min (c (6,9) + cost(9),c (6,10) +cost(10))
= min(6+4 , 5 +2)
= min(10,7)
=7
cost(6) = 7 d(6) = 10
When j=5
cost(5) = min (c (5,7) + cost(7),c (5,8) +cost(8))
= min(11+5 , 8 +7)
= min(16,15)
= 15
cost(5) = 15 d(5) = 8
When j=4
cost(4) = min (c (4,8) + cost(8))
= min(11+7)
= 18
cost(4) = 18 d(4) = 8
When j=3
cost(3) = min (c (3,6) + cost(6),c (3,7) +cost(7))
= min(2+7 , 7 +5)
= min(9,12)
=9
cost(3) = 9 d(3) = 6
When j=2
cost(2) = min (c (2,6) + cost(6),c (2,7) +cost(7) ,c (2,8) +cost(8))
= min(4+7 , 2+5 , 1+7 )
= min(11,7,8)
=7
cost(2) = 7 d(2) = 7
When j=1
cost(1)= min (c (1,2)+cost(2) ,c (1,3)+cost(3) ,c (1,4)+cost(4)
c(1,5)+cost(5))

= min(9+7 , 7 +9 , 3+18 , 2+15)


= min(16,16,21,17)
= 16
cost(1) = 16 d(1) = 2

p[1]=1 p[5]=12
for j=2 to 4

When j=2 p[2]=d[p[1]]=d[1]=2
When j=3 p[3]=d[p[2]]=d[2]=7
When j=4 p[4]=d[p[3]]=d[7]=10

The path through which you have to find the shortest distance from vertex 1 to vertex
12.

(ie)

p[1] p[2] p[3] p[4] p[5]

Start from vertex - 1


D ( 1) = 2
D ( 2) = 7
D ( 7) = 10
D (10) = 12
So, the minimum-cost path is,

    1 →(9)→ 2 →(2)→ 7 →(3)→ 10 →(2)→ 12

The cost is 9+2+3+2 = 16.

Analysis
The time complexity of this forward method is O ( V + E )

Backward Method
 If there are ‗k‘ stages in a graph then using backward approach, we will find
out the minimum cost path from destination to source (ie)[from stage k to
stage 1]
Procedure
1. It is similar to forward approach, but differs only in two or three ways.
2. Maintain a cost matrix to store the cost of every vertices and a distance matrix
to store the minimum distance vertex.
3. Find out the cost of each and every vertex starting from vertex 1 up to vertex
k.
4. To find the path, start from vertex k; the distance array D(k) will give
the minimum-cost neighbour vertex, which in turn gives the next nearest
neighbour vertex, and proceed till we reach the destination.

Algorithm
Algorithm BGraph (G,k,n,p)
// The i/p is a k-stage graph G=(V,E) with ‗n‘ vertex indexed in order of stages
// E is a set of edges.
// c[i,j] is the cost of<i,j>

// p[1:k] is a minimum cost path.
{
bcost[1]=0.0;
for j=2 to n do
{
//compute bcost[j],
Let ‗r‘ be the vertex such that <r,j> is an edge of ‗G‘ &bcost[r]+c[r,j] is
minimum.
bcost[j] = bcost[r] + c[r,j];
d[j] =r;
}
// find a minimum cost path.
p[1]=1;
p[k]=n;
for j= k-1 to 2 do
p[j]=d[p[j+1]];
}
Trace Out
n=12 k=5
bcost (1)=0 d (1)=0
for j = 2 to 12
When j=2
bcost(2) = 9 d(2)=1
When j=3
bcost(3) = 7 d(3)=1
When j=4
bcost(4) = 3 d(4)=1
When j=5
bcost(5) = 2 d(5)=1
When j=6
bcost(6) =min(c (2,6) + bcost(2),c (3,6) + bcost(3))
=min(13,9)
bcost(6) = 9 d(6)=3
When j=7
bcost(7) =min(c (3,7) + bcost(3),c (5,7) + bcost(5) ,c (2,7) + bcost(2))
=min(14,13,11)
bcost(7) = 11 d(7)=2
When j=8
bcost(8) =min(c (2,8) + bcost(2),c (4,8) + bcost(4) ,c (5,8) +bcost(5))
=min(10,14,10)
bcost(8) = 10 d(8)=2
When j=9
bcost(9) =min(c (6,9) + bcost(6),c (7,9) + bcost(7))
=min(15,15)
bcost(9) = 15 d(9)=6
When j=10
bcost(10)=min(c(6,10)+bcost(6),c(7,10)+bcost(7)),c(8,10)+bcost(8))
=min(14,14,15)
bcost(10)= 14 d(10)=6
When j=11

bcost(11) =c (8,11) + bcost(8))
bcost(11) = 16 d(11)=8
When j=12
bcost(12)=min(c(9,12)+bcost(9),c(10,12)+bcost(10),c(11,12)+bcost(11)
)
=min(19,16,21)
bcost(12) = 16 d(12)=10

p[1]=1 p[5]=12
for j=4 to 2
When j=4 p[4]=d[p[5]]=d[12]=10
When j=3 p[3]=d[p[4]]=d[10]=6
When j=2 p[2]=d[p[3]]=d[6]=3

The path through which you have to find the shortest distance from vertex 1 to vertex
12.
p[5]=12
p[4]= =10
p[3]= =6
p[2]= =3
p[1]=1
So the minimum cost path is,

    1 →(7)→ 3 →(2)→ 6 →(5)→ 10 →(2)→ 12

The cost is 7+2+5+2 = 16.

3.1.2 All Pair Shortest Path


 Let G = <N,A> be a directed graph, where N is the set of nodes and A is the set of
edges.
 Each edge has an associated non-negative length.
 We want to calculate the length of the shortest path between each pair of
nodes.
 Suppose the nodes of G are numbered from 1 to n, so N = {1,2,…,n}, and
suppose the matrix L gives the length of each edge, with L(i,i) = 0 for
i = 1,2,…,n, L(i,j) >= 0 for all i and j, and L(i,j) = infinity if the edge (i,j) does not exist.
 The principle of optimality applies: if k is a node on the shortest path from i
to j, then the part of the path from i to k and the part from k to j must also be
optimal, that is, shortest.
 First, create a cost adjacency matrix for the given graph.
 Copy this matrix into matrix D, which gives the direct distance between
nodes.

 We have to perform n iterations. After iteration k, the matrix D gives the
distance between nodes with only nodes in {1,2,…,k} allowed as intermediates.
 At iteration k, we check for each pair of nodes (i,j) whether or not
there exists a shorter path from i to j passing through node k.
Cost Adjacency Matrix
D0 = L =   0   5   ∞   ∞
           50  0   15  5
           30  ∞   0   15
           15  ∞   5   0

A second example traces both the distance matrix and the corresponding paths.
The initial matrix (left) and the paths (right, "-" meaning no path yet):

    7  5  ∞  ∞        11   12   -    -
    7  ∞  ∞  2        21   -    -    24
    ∞  3  ∞  ∞        -    32   -    -
    4  ∞  1  ∞        41   -    43   -

After allowing vertex 1 as an intermediate node:

    7  5   ∞  ∞       11   12   -    -
    7  12  ∞  2       21   212  -    24
    ∞  3   ∞  ∞       -    32   -    -
    4  9   1  ∞       41   412  43   -

After vertex 2:

    7   5   ∞  7      11   12   -    124
    7   12  ∞  2      21   212  -    24
    10  3   ∞  5      321  32   -    324
    4   9   1  11     41   412  43   4124

After vertex 3:

    7   5   ∞  7      11   12   -    124
    7   12  ∞  2      21   212  -    24
    10  3   ∞  5      321  32   -    324
    4   4   1  6      41   432  43   4324

After vertex 4:

    7  5  8  7        11    12    1243  124
    6  6  3  2        241   2432  243   24
    9  3  6  5        3241  32    3243  324
    4  4  1  6        41    432   43    4324

 At the 0th iteration, D gives the direct distances between any 2 nodes:

D0 =   0   5   ∞   ∞
       50  0   15  5
       30  ∞   0   15
       15  ∞   5   0

 At the 1st iteration we check, for each pair (i,j), whether there is a path
through node 1. If there is, and it is shorter than the previous value, then the
distance through node 1 becomes the value of d1(i,j), and at the same time we
record the intermediate node in the matrix position p(i,j).

         0   5   ∞   ∞
D1 =     50  0   15  5        p[3,2] = 1
         30  35  0   15       p[4,2] = 1
         15  20  5   0

Figure 3.2 Floyd's algorithm at work (the 4-node graph corresponding to the matrix L above)

 Likewise, we find the values for all n iterations, i.e. for all n nodes.

         0   5   20  10       P[1,3] = 2
D2 =     50  0   15  5        P[1,4] = 2
         30  35  0   15
         15  20  5   0

         0   5   20  10
D3 =     45  0   15  5        P[2,1] = 3
         30  35  0   15
         15  20  5   0

         0   5   15  10       P[1,3] = 4
D4 =     20  0   10  5        P[2,1] = 4
         30  35  0   15       P[2,3] = 4
         15  20  5   0

 D4 will give the shortest distance between any pair of nodes.


 If you want the exact path then we have to refer to the matrix P (an entry of 0
means a direct path):

         0  0  4  2
P =      4  0  4  0
         0  1  0  0
         0  1  0  0

 Since,p[1,3]=4,the shortest path from 1 to3 passes through 4.
 Looking now at p[1,4]&p[4,3] we discover that between 1 & 4, we have to go to
node 2 but that from 4 to 3 we proceed directly.
 Finally we see the trips from 1 to 2, & from 2 to 4, are also direct.
 The shortest path from 1 to 3 is 1,2,4,3.
Algorithm
Function Floyd(L[1..n,1..n]): array[1..n,1..n]
  array D[1..n,1..n]

  D := L
  for k := 1 to n do
    for i := 1 to n do
      for j := 1 to n do
        D[i,j] := min(D[i,j], D[i,k] + D[k,j])
  return D

Analysis
This algorithm takes a time of Θ(n^3).
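A Python sketch of Floyd's function that also records the intermediate-vertex matrix P (0-indexed here; -1 marks a direct edge). On the matrix L of the example above it reproduces the final matrix D4.

```python
INF = float('inf')

def floyd(L):
    n = len(L)
    D = [row[:] for row in L]            # start from the direct distances
    P = [[-1] * n for _ in range(n)]     # -1 means "direct edge"
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = k          # path i..j passes through k
    return D, P
```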

3.1.3 0/1 Knapsack Problem


 This problem is similar to fractional knapsack problem. But the only difference
is : In fractional knapsack problem we may take a fraction of an object. But in
0/1 knapsack problem either the object will be fully placed (xi=1) or the object
is not placed at all. (xi =0).
 We are given n objects with weights wi and profits pi, where i varies from 1 to
n, and also a knapsack with capacity M.
 The problem is, we have to fill the bag with the help of ‗ n ‗ objects and the
resulting profit has to be maximum.
 Formally, the problem can be stated as,

    Maximize   ∑ xi pi    (i = 1 to n)
    subject to ∑ xi wi <= M    (i = 1 to n)

 where the xi are constrained to xi ∈ {0,1}, i.e. each xi is required to be 0 or
1: if the object is selected, then xi is 1; if the object is rejected, then xi is 0.
That is why it is called the 0/1 knapsack problem.
 To solve the problem by dynamic programming we form a table T[1…n,
0…M], with one row per object (n rows) and columns from 0 up to the capacity M.
 The entry T[i,j] is the maximum profit achievable using objects 1 to i with
capacity j, where i varies from 1 to n and j varies from 0 to M.
Rules To Fill The Table
 If i = 1 and j < w(i) then
    T(i,j) = 0.
 If i = 1 and j >= w(i) then
    T(i,j) = p(i).

The cell is filled with the profit p[i], since at most one unit of the object can be
selected.
 If i > 1 and j < w(i) then
    T(i,j) = T(i-1,j).
The cell is filled with the profit of the previous objects, since it is not possible to
place the current object.
 If i > 1 and j >= w(i) then
    T(i,j) = max( T(i-1,j), p(i) + T(i-1, j-w(i)) ).
This is the better of skipping object i and placing one unit of it: the current profit
plus the profit of the previous objects filling the remaining capacity of the bag.
 After the table generation, it will give the details of the profit.
Rules To Get The Combination Of Objects
 Start with the last position of i and j, T[i,j], i.e. T[n,M].
  If T[i,j] = T[i-1,j] then
    no unit of object i is required, so move up to T[i-1,j].
 After moving, check
  if T[i,j] = T[i-1, j-w(i)] + p[i] then
    one unit of object i is selected, so move up to the position T[i-1, j-w(i)].
 Repeat the same process until we reach T[i,0]; then there is nothing more to fill
the bag, so stop the process.

Example
Capacity of the bag M=11 No.of objects n=5
W1 W2 W3 W4 W5
1 2 5 6 7
P1 P2 P3 P4 P5
1 6 18 22 28

Table
      0  1  2  3  4  5   6   7   8   9   10  11
  1   0  1  1  1  1  1   1   1   1   1   1   1
  2   0  1  6  7  7  7   7   7   7   7   7   7
  3   0  1  6  7  7  18  19  24  25  25  25  25
  4   0  1  6  7  7  18  22  24  28  29  29  40
  5   0  1  6  7  7  18  22  28  29  34  35  40

The maximum profit is T[5,11] = 40; tracing back, the selected objects are 3 and 4.

Analysis
A time (nM) is necessary to construct the tble V, and thst the
composition of the optimal load can be detemined in atime O(n+M).

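The table-filling and trace-back rules above can be sketched in Python (an illustrative sketch; the function name `knapsack_01` and the 0-based lists are our own conventions, not part of the original listing):

```python
def knapsack_01(w, p, M):
    """0/1 knapsack by the table-filling rules above.

    T[i][j] = best profit using objects 1..i with capacity j.
    Returns (optimal profit, selected object numbers, 1-based).
    """
    n = len(w)
    # T has rows 0..n (row 0 is all zeros) and columns 0..M.
    T = [[0] * (M + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(M + 1):
            T[i][j] = T[i - 1][j]                      # reject object i
            if j >= w[i - 1]:                          # object i fits
                T[i][j] = max(T[i][j], p[i - 1] + T[i - 1][j - w[i - 1]])
    # Trace back from T[n][M] to recover the selected objects.
    chosen, j = [], M
    for i in range(n, 0, -1):
        if T[i][j] != T[i - 1][j]:                     # object i was taken
            chosen.append(i)
            j -= w[i - 1]
    return T[n][M], sorted(chosen)
```

On the example above (weights 1,2,5,6,7; profits 1,6,18,22,28; M=11) this returns a profit of 40 with objects 3 and 4 selected.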
3.1.4 Travelling Salesman Problem


 Let G=(V,E) be a directed graph with edge costs cij.
 cij is defined such that cij > 0 for all i and j, and cij = ∞ if <i,j> ∉ E.
 Let |V| = n and assume n > 1.
 The traveling salesman problem is to find a tour of minimum cost.
 A tour of G is a directed cycle that includes every vertex in V,
i.e. the tour is the shortest path that starts and ends at the same vertex, namely 1.
 The cost of the tour is the sum of the costs of the edges on the tour.
Application
1. Suppose we have to route a postal van to pick up mail from the mail boxes
located at 'n' different sites. An n+1 vertex graph can be used to represent the
situation. One vertex represents the post office from which the postal van starts
and returns. Edge <i,j> is assigned a cost equal to the distance from site 'i' to
site 'j'. The route taken by the postal van is a tour, and we are finding a tour of
minimum length.
Every tour consists of an edge <1,k> for some k ∈ V-{1} and a path from vertex
k to vertex 1. The path from vertex k to vertex 1 goes through each vertex in
V-{1,k} exactly once.
 Let g(i,S) be the length of a shortest path starting at vertex i, going through
all vertices in S, and terminating at vertex 1. Then
g(i,S) = min over j in S of { cij + g(j,S-{j}) }
 The function g(1,V-{1}) is the length of an optimal tour:
g(1,V-{1}) = min over 2<=k<=n of { c1k + g(k,V-{1,k}) }
Steps To Find The Path
1. Find g(i,∅) = ci1 for 1<=i<=n; hence we can use the equation above to obtain
g(i,S) for all S of size 1.
2. So we start with |S| = 1, i.e. there is only one vertex in the set S.
3. Then |S| = 2, and we proceed until |S| = n-1.
4. For example, consider the graph below.
(Figure: a complete directed graph on vertices 1-4 with the edge costs given
by the matrix below.)
Cost Matrix
        1   2   3   4
   1    0  10  15  20
   2    5   0   9  10
   3    6  13   0  12
   4    8   8   9   0

In g(i,S), i is the starting position and S is the set of nodes/vertices still
to be visited. Here the tour starts at vertex 1, so
S ⊆ V-{1} = {2,3,4} and g(i,S) = min over j in S of { cij + g(j,S-{j}) }.

Step 1:
When |S| = 0 and i = 1 to n:
g(1,∅) = c11 = 0
g(2,∅) = c21 = 5
g(3,∅) = c31 = 6
g(4,∅) = c41 = 8
Step 2:
When |S| = 1 and i = 2 to 4:
g(2,{3}) = c23 + g(3,∅) =  9 + 6 = 15
g(2,{4}) = c24 + g(4,∅) = 10 + 8 = 18
g(3,{2}) = c32 + g(2,∅) = 13 + 5 = 18
g(3,{4}) = c34 + g(4,∅) = 12 + 8 = 20
g(4,{2}) = c42 + g(2,∅) =  8 + 5 = 13
g(4,{3}) = c43 + g(3,∅) =  9 + 6 = 15
Step 3:
When |S| = 2, i ≠ 1, 1 ∉ S and i ∉ S:
g(2,{3,4}) = min{ c23+g(3,{4}), c24+g(4,{3}) } = min{9+20, 10+15} = min{29,25} = 25
g(3,{2,4}) = min{ c32+g(2,{4}), c34+g(4,{2}) } = min{13+18, 12+13} = min{31,25} = 25
g(4,{2,3}) = min{ c42+g(2,{3}), c43+g(3,{2}) } = min{8+15, 9+18} = min{23,27} = 23
Step 4:
When |S| = 3:
g(1,{2,3,4}) = min{ c12+g(2,{3,4}), c13+g(3,{2,4}), c14+g(4,{2,3}) }
             = min{10+25, 15+25, 20+23}
             = min{35, 40, 43}
             = 35
The optimal cost is 35.
The shortest path is,
g(1,{2,3,4}) = c12 + g(2,{3,4})  =>  1->2
g(2,{3,4})   = c24 + g(4,{3})    =>  1->2->4
g(4,{3})     = c43 + g(3,∅)      =>  1->2->4->3->1
So the optimal tour is 1 -> 2 -> 4 -> 3 -> 1.

Analysis:
 An algorithm that proceeds to find an optimal tour by using the above two
equations requires Θ(n²2ⁿ) time.
 The space needed by this algorithm is O(n·2ⁿ). This is too large even for
modest values of n, and is the one main drawback of this method.
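The recurrence g(i,S) above can be sketched in Python as follows (an illustrative sketch using 0-based vertex indices, so vertex 1 of the text is index 0 here; the function name `tsp_dp` is ours):

```python
from itertools import combinations

def tsp_dp(c):
    """Held-Karp dynamic programming for the TSP, following g(i,S) above.

    c is an n x n cost matrix; the tour starts and ends at vertex 0.
    Returns (optimal cost, tour as a list of vertex indices).
    """
    n = len(c)
    # g[(i, S)] = (cost, next vertex) of the shortest path that starts at i,
    # visits every vertex of frozenset S exactly once, and ends at vertex 0.
    g = {(i, frozenset()): (c[i][0], 0) for i in range(1, n)}
    for size in range(1, n - 1):                  # |S| = 1, 2, ..., n-2
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for i in range(1, n):
                if i in S:
                    continue
                g[(i, S)] = min((c[i][j] + g[(j, S - {j})][0], j) for j in S)
    full = frozenset(range(1, n))
    cost, first = min((c[0][j] + g[(j, full - {j})][0], j) for j in full)
    # Reconstruct the tour from the stored "next vertex" choices.
    tour, S = [0, first], full - {first}
    while S:
        nxt = g[(tour[-1], S)][1]
        tour.append(nxt)
        S = S - {nxt}
    return cost, tour + [0]
```

On the cost matrix of the example this yields cost 35 and the tour 1 -> 2 -> 4 -> 3 -> 1 (as indices: 0, 1, 3, 2, 0).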

3.1.5. Chained Matrix Multiplication


If we have a matrix A of size p×q and a matrix B of size q×r, the product
of these two matrices, C, is given by
cij = sum over k of aik bkj ,  1 <= i <= p, 1 <= j <= r, 1 <= k <= q.
It requires a total of pqr scalar multiplications.
Matrix multiplication is associative, i.e. (A×B)×C = A×(B×C). So we may want
to calculate the product of more than 2 matrices, M = M1 M2 … Mn. For example,
consider the multiplication of the 4 matrices A × B × C × D. The orders of the
matrices are given below.
A = 13 × 5
B = 5 × 89
C = 89 × 3
D = 3 × 34
We can multiply these 4 matrices in a number of different sequences. Now, we
will see some of the sequences:
1st Sequence: M = (((A·B)·C)·D)
No. of multiplications required = 13*5*89 + 13*89*3 + 13*3*34
                                = 10582
2nd Sequence: M = (A·B)·(C·D)
= 13*5*89 + 89*3*34 + 13*89*34
= 54201 multiplications
3rd Sequence: M = (A·(B·C))·D
= 5*89*3 + 13*5*3 + 13*3*34
= 2856 multiplications
Comparing all these sequences, the sequence (A·(B·C))·D requires the fewest
multiplications.
To find the optimal number of multiplications we use the dynamic programming
method.

Straightforward Method

Our aim is to find the total no. of scalar multiplications required to
compute the product of n matrices. Let M = (M1,M2,…,Mn) be the chain of
matrices to be multiplied. The straightforward method simply parenthesizes
the expression in every possible fashion and then counts, each time, how many
multiplications are required.
 To compute the product of the n matrices M = (M1,M2,…,Mn) in an optimal
way, we instead use dynamic programming.
 Parenthesizing the product means splitting it between the i-th and (i+1)-th
matrix:
M = (M1,M2,…,Mi) and (Mi+1,Mi+2,…,Mn)
 In dynamic programming we always start with the smallest instances and
continue till we reach the required size.
 To compute the matrix product in an optimal way using the dynamic
programming method, we construct a table mij, 1 <= i <= j <= n, where mij gives
the optimal (minimum) number of scalar multiplications for Mi…Mj.
 The sizes of all the matrices are stored in the array d[0..n].
 Build the table diagonal by diagonal;
 diagonal s contains the elements mij such that j-i = s.

Rules To Fill The Table mij

For s = 0,1,…,n-1:
 If s=0 then
m(i,i) = 0 for i = 1,2,…,n
 If s=1 then
m(i,i+1) = d(i-1)*d(i)*d(i+1) for i = 1,2,…,n-1
 If 1 < s < n then
m(i,i+s) = min{ m(i,k) + m(k+1,i+s) + d(i-1)*d(k)*d(i+s) } over i <= k < i+s,
for i = 1,2,…,n-s
Example
A => 13 × 5
B => 5 × 89
C => 89 × 3
D => 3 × 34
A single-dimension array is used to store the sizes:
d[0]=13, d[1]=5, d[2]=89, d[3]=3, d[4]=34
If s=0 then
m(1,1)=0, m(2,2)=0, m(3,3)=0, m(4,4)=0
If s=1 then m(i,i+1) = d(i-1)*d(i)*d(i+1) for i = 1 to 3:
when i=1: m12 = d0*d1*d2 = 13*5*89 = 5785
when i=2: m23 = d1*d2*d3 = 5*89*3  = 1335
when i=3: m34 = d2*d3*d4 = 89*3*34 = 9078

If s=2 then m(i,i+2) = min over i<=k<i+2 of { m(i,k)+m(k+1,i+2)+d(i-1)*d(k)*d(i+2) },
for i = 1 to 2:
when i=1 (k = 1 to 2):
m13 = min( m11+m23+d0*d1*d3 , m12+m33+d0*d2*d3 )
    = min( 0+1335+(13*5*3) , 5785+0+(13*89*3) )
    = min(1530, 9256)
    = 1530
when i=2 (k = 2 to 3):
m24 = min( m22+m34+d1*d2*d4 , m23+m44+d1*d3*d4 )
    = min( 0+9078+(5*89*34) , 1335+0+(5*3*34) )
    = min(24208, 1845)
    = 1845
If s=3 then m(1,4) = min over 1<=k<4 of { m(1,k)+m(k+1,4)+d0*d(k)*d4 }:
m14 = min( m11+m24+d0*d1*d4 , m12+m34+d0*d2*d4 , m13+m44+d0*d3*d4 )
    = min( 0+1845+2210 , 5785+9078+39338 , 1530+0+1326 )
    = min(4055, 54201, 2856)
    = 2856
The matrix table mij will look like this (diagonal s runs from the main
diagonal, s=0, up to the top-right corner, s=3):

        1     2     3     4
  1     0  5785  1530  2856
  2           0  1335  1845
  3                 0  9078
  4                       0

Here there is no need to fill the lower diagonal. The solution to the original
problem is given by m1n in the table. The execution time of the algorithm is Θ(n³).
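The diagonal-by-diagonal table filling above can be sketched in Python (an illustrative sketch; the function name `matrix_chain` is ours):

```python
def matrix_chain(d):
    """Fill the table m[i][j] diagonal by diagonal, as described above.

    d[0..n] holds the dimensions: matrix i is d[i-1] x d[i].
    Returns the minimum number of scalar multiplications for M1..Mn.
    """
    n = len(d) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]    # m[i][j], 1-based indices
    for s in range(1, n):                        # diagonal s: j - i = s
        for i in range(1, n - s + 1):
            j = i + s
            # Try every split point k between i and j.
            m[i][j] = min(m[i][k] + m[k + 1][j] + d[i - 1] * d[k] * d[j]
                          for k in range(i, j))
    return m[1][n]
```

For the example chain (13×5)(5×89)(89×3)(3×34), i.e. d = [13, 5, 89, 3, 34], this returns 2856, the value of m14 in the table above.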

3.2. Basic Search And Traversal Technique


3.2.1. The Technique
It is divided into 2 categories. They are,
1.Traversal Method 2.Search Method
Traversal Method
 Traversal means visiting all the nodes of the data object.
 Applicable to binary trees.
 It examines every node in the given data object instance.
Search Method
 In the search method, nodes of the graph are visited until the searched-for
node is found.
 Applicable to graphs.
 It need not examine all the vertices.
 If a node is examined, then it is said to be visited.
3.2.1.1.Techniques For Binary Trees
To solve many problems, we need to manipulate binary trees, graphs, or trees.
This manipulation requires determining a vertex or a subset of vertices that
satisfies a given property.
Example 1:
Let B be a binary tree consisting of many nodes. Our objective is to
find all the vertices in the binary tree with a data value less than x.
Example 2:
To find all the vertices in a graph G that can be reached from another given
vertex v.
The solution to these problems can be found by examining all the vertices in
graph/tree. This takes the form of search a data in the data object. When the search
involves the examination of every vertex in the object being searched, it is called as
a traversal.
Operations Performed on a Binary Tree:
 Traversing a tree => Visiting each node in the tree exactly once.
When traversing a tree ,we can visit the nodes in the following orders.
Pre Order - DLR
Post Order - LRD
In order - LDR

INORDER:
First visit the left node of the root node.Then visit the root node and right node
of the root node.Do this process continuously till we traverse all the nodes in the
tree.
Recursive formulation of inorder traversal
treenode = record
{
Type data; // Type is the data type of data.
Treenode *lchild; Treenode *rchild;
}
Algorithm InOrder(t)
//t is a binary tree.Each of t has three fields:lchild,data and rchild.
{
if t ≠ 0 then
{
InOrder(t->lchild);
Visit(t);
InOrder(t->rchild);
} }
Algorithm PreOrder(t)
//t is a binary tree.Each of t has three fields:lchild,data and rchild.
{
if t ≠ 0 then
{
Visit(t);
PreOrder(t->lchild);
PreOrder(t->rchild);
} }
Algorithm PostOrder(t)
//t is a binary tree.Each of t has three fields:lchild,data and rchild.
{
if t ≠ 0 then
{
PostOrder(t->lchild);
PostOrder(t->rchild);
Visit(t);
}}
(Figure 3.2.1: a binary tree with root A; A's children are B and C, B's
children are D and E, D's children are F and G, and G's children are H and I.)

The figure above shows a binary tree. Visit(t) - visiting a node requires only
the printing of its data field. The output resulting from inorder, preorder and
postorder traversal is FDHGIBEAC, ABDFGHIEC and FHIGDEBCA respectively.
Analysis
The total time taken by the traversal is Θ(n). The additional space needed is
that for the recursion stack. If t has depth d, then this space is Θ(d). For an
n-node binary tree, d <= n and so S(n) = O(n).
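The three recursive traversals can be sketched in Python on the tree of Figure 3.2.1 (an illustrative sketch; the `Node` class and function names are ours, and the tree below is reconstructed from the traversal outputs quoted above):

```python
class Node:
    """A binary tree node with data, lchild (left) and rchild (right)."""
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def inorder(t, out):                 # LDR
    if t:
        inorder(t.left, out); out.append(t.data); inorder(t.right, out)

def preorder(t, out):                # DLR
    if t:
        out.append(t.data); preorder(t.left, out); preorder(t.right, out)

def postorder(t, out):               # LRD
    if t:
        postorder(t.left, out); postorder(t.right, out); out.append(t.data)

# The tree of Figure 3.2.1, reconstructed from its traversal outputs:
tree = Node('A',
            Node('B',
                 Node('D', Node('F'), Node('G', Node('H'), Node('I'))),
                 Node('E')),
            Node('C'))
```

Running the three traversals on `tree` reproduces exactly the outputs FDHGIBEAC, ABDFGHIEC and FHIGDEBCA stated above.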
3.2.1.2. Techniques For Graphs
 The fundamental problem concerning graphs is the reachability problem.
 In it simplest from it requires us to determine whether there exist a path in the
given graph, G =(V,E) such that this path starts at vertex ‗v‘ and ends at
vertex ‗u‘.
 A more general form is to determine for a given starting vertex v €V all
vertices u such that there is a path from v to u.
 This problem can be solved by starting at vertex ‗v‘ and systematically
searching the graph ‗G‘ for vertex that can be reached from ‗v‘.
 We describe 2 search methods for this.
 Breadth first Search and Traversal.
 Depth first Search and Traversal.

3.2.2. Breadth First Search And Traversal


 In breadth first search we start at a vertex 'v' and mark it as having been
reached (visited).
 The vertex 'v' is at this time said to be unexplored.
 A vertex is said to have been explored by an algorithm when the algorithm
has visited all vertices adjacent from it.
 All unvisited vertices adjacent from 'v' are visited next. These are new
unexplored vertices.
 Vertex 'v' has now been explored. The newly visited vertices haven't been
explored and are put on the end of a list of unexplored vertices.
 The first vertex on this list is the next to be explored. Exploration continues
until no unexplored vertex is left.
 The list of unexplored vertices operates as a queue and can be represented
using any of the standard queue representations.

(Figure 3.2.2: a graph on vertices 1-8; vertex 1 is adjacent to 2 and 3,
vertex 2 to 4 and 5, vertex 3 to 6 and 7, and vertices 4, 5, 6 and 7 are all
adjacent to 8.)

Psuedo Code Algorithm for breadth first search


Algorithm BFS(v)
// A breadth first search of 'G' is carried out
// beginning at vertex v; for any node i, visited[i]=1
// if 'i' has already been visited. The graph 'G'
// and array visited[] are global; visited[] is
// initialized to zero.
{
u := v; // q is a queue of unexplored vertices, initially empty
visited[v] := 1;
repeat
{
for all vertices 'w' adjacent from u do
{
if (visited[w]=0) then
{
Add w to q; // w is a new unexplored vertex
visited[w] := 1;
}
}
if q is empty then return; // no unexplored vertex left
Delete u from q; // get the first unexplored vertex
} until (false);
}

Here the time and space required by BFT on an n-vertex, e-edge graph
are O(n+e) and O(n) respectively if adjacency lists are used. If an adjacency
matrix is used, then the bounds are O(n²) and O(n) respectively.
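The queue-based search above can be sketched in Python (an illustrative sketch; the adjacency dictionary `G` is our reconstruction of the 8-vertex graph of figure 3.2.2):

```python
from collections import deque

def bfs(adj, v):
    """Breadth first search from vertex v; adj maps a vertex to the list
    of vertices adjacent from it.  Returns the vertices in visit order."""
    visited = {v}
    order = [v]
    q = deque([v])              # queue of unexplored vertices
    while q:
        u = q.popleft()         # get the first unexplored vertex
        for w in adj[u]:
            if w not in visited:    # w reached for the first time
                visited.add(w)
                order.append(w)
                q.append(w)
    return order

# The 8-vertex graph of figure 3.2.2 as adjacency lists:
G = {1: [2, 3], 2: [1, 4, 5], 3: [1, 6, 7], 4: [2, 8],
     5: [2, 8], 6: [3, 8], 7: [3, 8], 8: [4, 5, 6, 7]}
```

Starting at vertex 1, `bfs(G, 1)` explores the graph level by level, visiting 1, 2, 3, 4, 5, 6, 7, 8 in that order.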

3.2.3. Depth First Search And Traversal


A depth first search of a graph differs from a breadth first search in that the
exploration of a vertex v is suspended as soon as a new vertex u is reached. At
that time the exploration of the new vertex u begins. When this new vertex has
been explored, the exploration of v continues. The search terminates when all
reached vertices have been fully explored. This search process is best described
recursively. A depth first search of the graph of figure 3.2.2 starting at vertex 1
results in the vertices being visited in the order 1,2,4,8,5,6,3,7.
A depth first traversal of a graph is carried out by repeatedly calling DFS,
with a new unvisited vertex as the start each time. The algorithm for this (DFT)
differs from BFT only in that the call to BFS(i) is replaced by a call to DFS(i).

Psuedo Code Algorithm for Depth First Search of a graph:


Algorithm DFS(V)
//given an undirected (directed graph)
//G=(V,E) with n vertices and an array visited[]
//initially set to zero, this algorithm of visits all the vertices reachable
//from v.
//G and visited[] are global
{
visited[v]:=1;
for each vertex w adjacent from v do
{
if(visited[w]=0) then DFS(w);
}
}

BFS & DFS are 2 fundamentally different search methods. In BFS, a node is
fully explored before the exploration of any other node begins. The next node to
explore is the first unexplored node remaining. In DFS the exploration of a node is
suspended as soon as a new unexplored node is reached. The exploration of this
new node is immediately begun.
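Algorithm DFS(v) above translates almost directly to Python (an illustrative sketch; the adjacency dictionary `G` is again our reconstruction of figure 3.2.2, whose DFS visit order matches the 1,2,4,8,5,6,3,7 quoted above):

```python
def dfs(adj, v, visited=None, order=None):
    """Recursive depth first search from v, mirroring Algorithm DFS(v)."""
    if visited is None:
        visited, order = set(), []
    visited.add(v)                    # visited[v] := 1
    order.append(v)
    for w in adj[v]:                  # each vertex adjacent from v
        if w not in visited:
            dfs(adj, w, visited, order)
    return order

# The 8-vertex graph of figure 3.2.2 as adjacency lists:
G = {1: [2, 3], 2: [1, 4, 5], 3: [1, 6, 7], 4: [2, 8],
     5: [2, 8], 6: [3, 8], 7: [3, 8], 8: [4, 5, 6, 7]}
```
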

3.2.5. Topological Sorting


In graph theory, a topological sort or topological ordering of a directed acyclic
graph (DAG) is a linear ordering of its nodes which is compatible with the partial
order R induced on the nodes where x comes before y (xRy) if there's a directed
path from x to y in the DAG. An equivalent definition is that each node comes before
all nodes to which it has outbound edges. Every DAG has one or more topological
sorts.
Example:
The canonical application of topological sorting (topological order) is in
scheduling a sequence of jobs. The jobs are represented by vertices, and there is an
edge from x to y if job x must be completed before job y can be done (for example,
washing machine must finish before we put the clothes to dry). Then, a topological
sort gives an order in which to perform the jobs. This has applications in computer
science, such as in instruction scheduling, ordering of formula cell evaluation in
spreadsheets, dependencies in makefiles, and symbol dependencies in linkers.

The example DAG (on the vertices 2, 3, 5, 7, 8, 9, 10, 11; figure not shown) has many valid topological sorts, including:
 7,5,3,11,8,2,10,9
 7,5,11,2,3,10,8,9
 3,7,8,5,11,10,9,2
 3,5,7,11,10,2,8,9
In this problem, the functions needed are:
i) Decide whether a vertex has any predecessor.
ii) Delete a vertex together with all its incident edges.

 This can be efficiently done if for each vertex a count of the no of its immediate
predecessor is kept.
 This can be easily implemented if the network is represented by its adjacency
lists.

76
The following algorithm assumes that the network is represented by
adjacency lists.
Algorithm:
Procedure Topological_order(COUNT, VERTEX, LINK, n)
// The n vertices of an AOV-network are listed in topological order.
// The network is represented as a set of adjacency lists with
// COUNT(i) = the in-degree of vertex i.
1) top ← 0 // initialize stack
2) for i ← 1 to n do // create a linked stack of vertices with no predecessors
3)   if COUNT(i) = 0 then [COUNT(i) ← top; top ← i]
4) end
5) for i ← 1 to n do // print the vertices in topological order
6)   if top = 0 then [print('network has a cycle'); stop]
7)   j ← top; top ← COUNT(top); print(j) // unstack a vertex
8)   ptr ← LINK(j)
9)   while ptr <> 0 do
10)    // decrease the count of successor vertices of j
11)    k ← VERTEX(ptr) // k is a successor of j
12)    COUNT(k) ← COUNT(k) - 1 // decrease count
13)    if COUNT(k) = 0 // add vertex k to stack
14)    then [COUNT(k) ← top; top ← k]
15)    ptr ← LINK(ptr)
16)  end
17) end
18) end TOPOLOGICAL_ORDER

The head nodes of these lists contain two fields : COUNT & LINK. The COUNT
field contains the in-degree of that vertex and LINK is a pointer to the first node on
the adjacency list. Each list node has 2 fields:VERTEX& LINK.
COUNT fields can be set at the time of input. When edge <i,j> is input, the
count of vertex j is incremented by 1. The list of vertices with zero count is
maintained as a stack.
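The same in-degree-count procedure can be sketched in Python (an illustrative sketch; here a plain list serves as the stack of zero-predecessor vertices, and the function name `topological_order` is ours):

```python
def topological_order(n, edges):
    """Topological order of vertices 1..n using in-degree counts.
    edges is a list of pairs (i, j) meaning i must precede j.
    Raises ValueError if the network has a cycle."""
    count = [0] * (n + 1)                # count[i] = in-degree of vertex i
    adj = {i: [] for i in range(1, n + 1)}
    for i, j in edges:                   # when edge <i,j> is input,
        adj[i].append(j)
        count[j] += 1                    # the count of j is incremented
    stack = [i for i in range(1, n + 1) if count[i] == 0]
    order = []
    while stack:
        j = stack.pop()                  # unstack a vertex with no predecessors
        order.append(j)
        for k in adj[j]:                 # decrease the count of j's successors
            count[k] -= 1
            if count[k] == 0:
                stack.append(k)
    if len(order) != n:
        raise ValueError("network has a cycle")
    return order
```
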
Chapter 4: Backtracking

4.1 Backtracking
 It is one of the most general algorithm design techniques.
 Many problems which deal with searching for a set of solutions, or for an
optimal solution satisfying some constraints, can be solved using the
backtracking formulation.
 To apply the backtracking method, the desired solution must be expressible as
an n-tuple (x1,…,xn) where xi is chosen from some finite set Si.
 The problem is to find a vector which maximizes or minimizes a criterion
function P(x1,…,xn).
 The major advantage of this method is that once we know a partial vector
(x1,…,xi) cannot lead to an optimal solution, the mi+1···mn possible remaining
test vectors may be ignored entirely.
 Many problems solved using backtracking require that all the solutions satisfy
a complex set of constraints.
 These constraints are classified as:
i) Explicit constraints.
ii) Implicit constraints.

1) Explicit constraints:
Explicit constraints are rules that restrict each xi to take values only from
a given set.
Some examples are,
xi >= 0          or Si = {all non-negative real nos.}
xi = 0 or 1      or Si = {0,1}
li <= xi <= ui   or Si = {a : li <= a <= ui}
 All tuples that satisfy the explicit constraints define a possible solution
space for the instance I.

2) Implicit constraints:
The implicit constraint determines which of the tuples in the solution
space I can actually satisfy the criterion functions.

Algorithm:
Algorithm Backtrack(n)
// This schema describes the backtracking procedure. All solutions are generated
// in X[1:n] and printed as soon as they are determined.
{
k := 1;
while (k ≠ 0) do
{
if (there remains an untried
X[k] ∈ T(X[1],X[2],…,X[k-1]) and Bk(X[1],…,X[k]) is true) then
{
if ((X[1],…,X[k]) is the path to an answer node)
then write(X[1:k]);
k := k+1; // consider the next step.
}
else k := k-1; // backtrack to the previous set.
}
}
 All solutions are generated in X[1:n] and printed as soon as they are
determined.
 T(X[1],…,X[k-1]) gives all possible values of X[k], given that X[1],…,X[k-1]
have already been chosen.
 Bk(X[1],…,X[k]) is a bounding function which determines the elements of
X[k] which satisfy the implicit constraints.
 Certain problems which are solved using backtracking method are,

1. Sum of subsets.
2. Graph coloring.
3. Hamiltonian cycle.
4. N-Queens problem.
5. Knapsack problem.

4.1.1 Sum of subsets


 We are given 'n' positive numbers called weights, and we have to find all
combinations of these numbers whose sum is M. This is called the sum of
subsets problem.
 If we consider the backtracking procedure using the fixed tuple strategy, the
element X(i) of the solution vector is either 1 or 0 depending on whether the
weight W(i) is included or not.
 In the state space tree of the solution, for a node at level i, the left child
corresponds to X(i)=1 and the right to X(i)=0.
Example:
 Given n=6, M=30 and W(1..6)=(5,10,12,13,15,18). We have to generate all
possible combinations of subsets whose sum is equal to the given value
M=30.
 In the state space tree of the solution, each rectangular node lists the values
of s, k, r, where s is the sum of the subset so far, 'k' is the iteration, and
'r' is the sum of the elements after 'k' in the original set.
 The state space tree for the given problem is:

(Figure: state space tree with root (s,k,r) = (0,1,73); at each level k the
left branch takes X(k)=1 and the right branch takes X(k)=0.)

1st solution is A -> 1 1 0 0 1 0
2nd solution is B -> 1 0 1 1 0 0
3rd solution is C -> 0 0 1 0 0 1

 In the state space tree, edges from level 'i' nodes to level 'i+1' nodes are
labeled with the value of Xi, which is either 0 or 1.
 The left subtree of the root defines all subsets containing W1.
 The right subtree of the root defines all subsets which do not include W1.
Generation of the state space tree:
 Maintain an array X to represent all elements in the set.
 The value of Xi indicates whether the weight Wi is included or not.
 The sum is initialized to 0, i.e. s=0.
 We have to check starting from the first node.
 Assign X(k) := 1.
 If S+W(k)=M then we print the subset, because the sum is the required output.
 If the above condition is not satisfied then we have to check
S+W(k)+W(k+1) <= M. If so, we generate the left subtree. It means W(k) can
be included, so the sum is incremented, and we check for the next k.
 After generating the left subtree we generate the right subtree; for this we
check S+r-W(k) >= M and S+W(k+1) <= M, because W(k) is omitted and W(k+1)
has to be considered.
 Repeat the process to find all the possible combinations of the subset.
Algorithm:
Algorithm SumOfSub(s,k,r)
// Generate the left child. Note s+W[k] <= m since Bk-1 is true, and
// W[j] > 0 for 1 <= j <= n.
{
X[k] := 1;
if (s+W[k] = m) then write(X[1:k]); // subset found; no recursive call here
else if (s+W[k]+W[k+1] <= m) then SumOfSub(s+W[k], k+1, r-W[k]);
// Generate the right child and evaluate Bk.
if ((s+r-W[k] >= m) and (s+W[k+1] <= m)) then
{
X[k] := 0;
SumOfSub(s, k+1, r-W[k]);
}
}
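Algorithm SumOfSub can be sketched in Python (an illustrative sketch with 0-based indices; the wrapper `sum_of_subsets` and the explicit guards against running off the end of the array are our own additions):

```python
def sum_of_subsets(w, m):
    """All 0/1 vectors x with sum(w[i]*x[i]) = m, following Algorithm
    SumOfSub above; w must be in nondecreasing order."""
    n = len(w)
    x = [0] * n
    solutions = []

    def sum_of_sub(s, k, r):
        # Invariant on entry: s + w[k] <= m and s + r >= m.
        x[k] = 1                                   # left child: include w[k]
        if s + w[k] == m:                          # subset found
            solutions.append(x[:k + 1] + [0] * (n - k - 1))
        elif k + 1 < n and s + w[k] + w[k + 1] <= m:
            sum_of_sub(s + w[k], k + 1, r - w[k])
        # Right child: omit w[k] if a solution can still exist.
        if s + r - w[k] >= m and k + 1 < n and s + w[k + 1] <= m:
            x[k] = 0
            sum_of_sub(s, k + 1, r - w[k])

    sum_of_sub(0, 0, sum(w))
    return solutions
```

On the example (W = (5,10,12,13,15,18), M = 30) this yields exactly the three solutions A, B and C listed above, in that order.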

4.1.2 Hamiltonian cycles


 Let G=(V,E) be a connected graph with 'n' vertices. A Hamiltonian cycle
is a round-trip path along 'n' edges of G which visits every vertex once and
returns to its starting position.
 If the Hamiltonian cycle begins at some vertex v1 of G and the vertices
are visited in the order v1,v2,…,vn+1, then the edges <vi,vi+1> are in E,
1<=i<=n, and the vi are distinct except for v1 and vn+1, which are equal.
 Consider an example graph G1.

(Figure: graph G1 on the eight vertices 1-8.)

The graph G1 has Hamiltonian cycles:

->1,3,4,5,6,7,8,2,1 and
->1,2,8,7,6,5,4,3,1.
 The backtracking algorithm helps to find Hamiltonian cycle for any type of
graph.
Procedure:
1. Define a solution vector X(x1,…,xn) where xi represents the i-th visited
vertex of the proposed cycle.
2. Create a cost adjacency matrix for the given graph.
3. The solution array is initialized to all zeros except X(1)=1, because the
cycle should start at vertex '1'.
4. Now we have to find the second vertex to be visited in the cycle.
5. The vertices from 1 to n are included in the cycle one by one by checking 2
conditions:
1. There should be a path from the previously visited vertex to the current
vertex.
2. The current vertex must be distinct and should not have been visited
earlier.
6. When these two conditions are satisfied the current vertex is included in the
cycle; else the next vertex is tried.
7. When the n-th vertex is visited we have to check whether there is any path
from the n-th vertex to the first vertex. If there is no path, go back one step
and alter the previously visited node.
8. Repeat the above steps to generate all possible Hamiltonian cycles.

Algorithm:(Finding All Hamiltonian Cycle)


Algorithm Hamiltonian(k)
{
repeat
{
NextValue(k);
if (x[k]=0) then return;
if (k=n) then
print(x);
else
Hamiltonian(k+1);
} until (false);
}

Algorithm NextValue(k)
{
repeat
{
X[k] := (X[k]+1) mod (n+1); // next vertex
if (X[k]=0) then return;
if (G[X[k-1], X[k]] ≠ 0) then // is there an edge?
{
for j := 1 to k-1 do if (X[j]=X[k]) then break;
// Check for distinctness.
if (j=k) then // if true, then the vertex is distinct.
if ((k<n) or ((k=n) and G[X[n], X[1]] ≠ 0))
then return;
}
} until (false);
}
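The backtracking search for all Hamiltonian cycles can be sketched in Python (an illustrative sketch; since the edge set of G1 is not fully recoverable from the figure, the tests below use small standard graphs instead, and the function name `hamiltonian_cycles` is ours):

```python
def hamiltonian_cycles(G):
    """All Hamiltonian cycles of a graph given as a 0/1 adjacency matrix
    (vertex i of the text is index i-1 here); every cycle starts and
    ends at vertex 1."""
    n = len(G)
    x = [0] * n                   # x[k] = index of the (k+1)-th visited vertex
    cycles = []

    def solve(k):
        for v in range(1, n):                      # candidate for position k
            if G[x[k - 1]][v] and v not in x[:k]:  # edge exists, v distinct
                x[k] = v
                if k == n - 1:
                    if G[v][0]:                    # edge back to vertex 1?
                        cycles.append([u + 1 for u in x] + [1])
                else:
                    solve(k + 1)

    solve(1)
    return cycles
```

For example, on the complete graph K4 every ordering of the remaining three vertices closes into a cycle, giving six Hamiltonian cycles, while a plain 4-cycle has only the two traversal directions.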

4.1.3 8-Queens Problem


This n-queens problem is to place n queens on an n×n board in such a way
that no two queens attack each other, i.e. no two queens are in the same row,
column, or diagonal.
Solution:
 The solution vector X(x1,…,xn) represents a solution in which xi is the
column of the i-th row where the i-th queen is placed.
 First, we have to check that no two queens are in the same row.
 Second, we have to check that no two queens are in the same column.
 The function used to check these two conditions works on (i, X(i)), which
gives the position of the i-th queen, where i represents the row and X(i)
represents the column position.
 Third, we have to check that no two queens are on the same diagonal.
 Consider a two-dimensional array A[1:n,1:n]; we observe that every element on
the same diagonal that runs from upper left to lower right has the same value
of (row - column).
 Also, every element on the same diagonal that runs from upper right to lower
left has the same value of (row + column).
 Suppose two queens are in positions (i,j) and (k,l); then the two queens lie
on the same diagonal if and only if |j-l| = |i-k|.

Steps to generate the solution:

 Initialize the X array to zero and start by placing the first queen, k=1, in
the first row.
 To find the column position, try the values 1 to n, where 'n' is the no. of
columns or no. of queens.
 If k=1 then X(k)=1, so (k,X(k)) gives the position of the k-th queen. Here we
have to check whether there is any queen in the same column or diagonal.
 For this, consider the previous positions which have already been found out.
Check whether
X(i)=X(k) for the same column, or |X(i)-X(k)| = |i-k| for the same diagonal.
 If any one of the conditions is true then return false, indicating that the
k-th queen can't be placed in position X(k).
 For the not-possible case, increment X(k) by one and proceed until
a position is found.
 If the position X(k) <= n and k=n then the solution is generated completely.
 If k<n, then increment the 'k' value and find the position of the next queen.
 If the position X(k) > n then the k-th queen cannot be placed, as the size of
the matrix is n×n.
 So decrement the 'k' value by one, i.e. we backtrack and alter the position
of the previous queen.
Algorithm:
Algorithm Place(k,i)
// Returns true if a queen can be placed in the k-th row and i-th column;
// otherwise it returns false. X[] is a global array whose first k-1 values
// have been set. Abs(r) returns the absolute value of r.
{
for j := 1 to k-1 do
if ((X[j]=i) // two in the same column
or (Abs(X[j]-i) = Abs(j-k))) // or in the same diagonal
then return false;
return true;
}
Algorithm NQueens(k,n)
// Using backtracking, this prints all possible placements of n queens on an
// n×n chessboard so that they are non-attacking.
{
for i := 1 to n do
{
if Place(k,i) then
{
X[k] := i;
if (k=n) then write(X[1:n]);
else NQueens(k+1,n);
}
}
}

Example: 4 queens.
Two possible solutions are:

. Q . .        . . Q .
. . . Q        Q . . .
Q . . .        . . . Q
. . Q .        . Q . .

Solution 1      Solution 2
(2 4 1 3)       (3 1 4 2)
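Algorithms Place and NQueens can be sketched in Python (an illustrative sketch with 1-based indexing kept as in the text; the function names follow the pseudocode above):

```python
def place(x, k, i):
    """True if a queen can go in row k, column i, given x[1..k-1]."""
    for j in range(1, k):
        # Same column, or same diagonal (|column diff| = |row diff|).
        if x[j] == i or abs(x[j] - i) == abs(j - k):
            return False
    return True

def n_queens(n):
    """All solutions; position k of a solution is the column of the
    queen in row k."""
    x = [0] * (n + 1)               # x[1..n]; x[0] unused
    solutions = []

    def solve(k):
        for i in range(1, n + 1):
            if place(x, k, i):
                x[k] = i
                if k == n:
                    solutions.append(x[1:])
                else:
                    solve(k + 1)

    solve(1)
    return solutions
```

For n=4 this produces exactly the two solutions (2 4 1 3) and (3 1 4 2) shown above; for n=8 there are 92 solutions.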

4.1.4 Graph coloring


 Let 'G' be a graph and 'm' a given positive integer. The question is whether
the nodes of 'G' can be colored in such a way that no two adjacent nodes have
the same color while only 'm' colors are used. This is called the
m-colorability decision problem.
 The smallest integer 'm' for which the graph G can be colored is referred to
as the chromatic number of the graph.
 A graph is said to be planar iff it can be drawn in a plane in such a way
that no two edges cross each other.
 Suppose we are given a map; then we have to convert it into a planar graph.
Consider each and every region as a node. If two regions are adjacent, then
the corresponding nodes are joined by an edge.
Consider a map with five regions and its graph.

(Figure: a map with five regions numbered 1-5.)

1 is adjacent to 2, 3, 4.
2 is adjacent to 1, 3, 4, 5.
3 is adjacent to 1, 2, 4.
4 is adjacent to 1, 2, 3, 5.
5 is adjacent to 2, 4.

(Figure: the corresponding planar graph on vertices 1-5.)

Steps to Color the Graph

 First create the adjacency matrix graph(1:n,1:n) for the graph: if there is
an edge between i and j then G(i,j) = 1, otherwise G(i,j) = 0.
 The colors are represented by the integers 1,2,…,m and the solutions are
stored in the array X(1),X(2),…,X(n); X(index) is the color, index is the
node.
 The formula used to set the next color is
X(k) = (X(k)+1) mod (m+1)
 First one color is assigned; after assigning a color to node 'k', we have to
check whether any adjacent node has got the same value; if so then we have to
assign the next value.
 Repeat the procedure until all possible combinations of colors are found.
 The function used to check whether an adjacent node has the same color is
if ((Graph(k,j) = 1) and X(k) = X(j))

Example

(Figure: a 4-node cycle graph 1-2-3-4-1.)

N = 4
M = 3

Adjacency Matrix:

0 1 0 1
1 0 1 0
0 1 0 1
1 0 1 0
 The problem is to color the given graph of 4 nodes using 3 colors.
 The state space tree gives all possible colorings: the numbers inside the
circles are nodes, and a branch labeled with a number assigns that color to
the node.
(Figure: state space tree.)

Algorithm:
Algorithm mcoloring(k)
// The graph is represented by its Boolean adjacency matrix G[1:n,1:n]. All
// assignments of 1,2,…,m to the vertices of the graph such that adjacent
// vertices are assigned distinct integers are printed. 'k' is the index of
// the next vertex to color.
{
repeat
{
// Generate all legal assignments for X[k].
NextValue(k); // Assign to X[k] a legal color.
if (X[k]=0) then return; // No new color possible.
if (k=n) then // At most 'm' colors have been used to color the 'n' vertices.
write(X[1:n]);
else mcoloring(k+1);
} until (false);
}
Algorithm NextValue(k)
// X[1],…,X[k-1] have been assigned integer values in the range [1,m] such
// that adjacent vertices have distinct integers. A value for X[k] is
// determined in the range [0,m]. X[k] is assigned the next highest numbered
// color while maintaining distinctness from the adjacent vertices of vertex
// k. If no such color exists, then X[k] is 0.
{
repeat
{
X[k] := (X[k]+1) mod (m+1); // Next highest color.
if (X[k]=0) then return; // All colors have been used.
for j := 1 to n do
{
// Check if this color is distinct from the adjacent colors.
if ((G[k,j] ≠ 0) and (X[k] = X[j]))
// If (k,j) is an edge and adjacent vertices have the same color.
then break;
}
if (j=n+1) then return; // New color found.
} until (false); // Otherwise try to find another color.
}

 The time spent by NextValue to determine the children is Θ(mn).
 The total time is Θ(n·mⁿ).
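The mcoloring/NextValue pair above can be sketched in Python (an illustrative sketch; the function name `m_coloring` is ours, and the legal-color check is written directly instead of via the mod-(m+1) stepping of NextValue):

```python
def m_coloring(G, m):
    """All colorings of graph G (0/1 adjacency matrix) with colors 1..m
    such that adjacent vertices get distinct colors."""
    n = len(G)
    x = [0] * n                       # x[k] = color of vertex k
    colorings = []

    def solve(k):
        for c in range(1, m + 1):     # try each color for vertex k in turn
            # Legal iff no already-colored neighbor j < k has color c.
            if all(not (G[k][j] and x[j] == c) for j in range(k)):
                x[k] = c
                if k == n - 1:
                    colorings.append(x[:])
                else:
                    solve(k + 1)

    solve(0)
    return colorings
```

On the 4-cycle of the example with m=3 there are 18 proper colorings, the first being (1,2,1,2).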

4.1.5Knapsack problem using backtracking


 The problem is the same zero-one (0/1) knapsack optimization problem solved
earlier by the dynamic programming algorithm.
 We are given 'n' positive weights wi and 'n' positive profits pi, and a
positive number 'm' that is the knapsack capacity. The problem calls for
choosing a subset of the weights such that

sum (1<=i<=n) of wi xi <= m   and   sum (1<=i<=n) of pi xi is maximized.

The xi constitute a zero-one valued vector.

 The solution space is the same as that for the sum of subsets problem.
 Bounding functions are needed to help kill some live nodes without expanding
them. A good bounding function for this problem is obtained by using an
upper bound on the value of the best feasible solution obtainable by
expanding the given live node.
 The profits and weights are assigned in descending order depending upon the
ratio,
i.e. pi/wi >= p(i+1)/w(i+1).

Solution:
 After assigning the profits and weights, we take the first object's weight
and check whether it is less than or equal to the capacity; if so we include
that object, i.e. the unit is 1 (X(k)=1).
 Then we go to the next object; if the object's weight exceeds the remaining
capacity, that object does not fit, so the unit of that object is 0 (X(k)=0).
 Then we apply the bounding function; this function determines an upper bound
on the best solution obtainable at level k+1.
 Repeat the process until we reach the optimal solution.
Algorithm:
Algorithm BKnap(k, cp, cw)
// 'm' is the size of the knapsack; 'n' the no. of weights & profits. W[] and
// P[] are the weights and profits, with P[i]/W[i] >= P[i+1]/W[i+1].
// fw is the final weight of the knapsack; fp is the final maximum profit.
// X[k] = 0 if W[k] is not in the knapsack, else X[k] = 1.
{
// Generate left child.
if (cw + W[k] <= m) then
{
Y[k] := 1;
if (k < n) then BKnap(k+1, cp+P[k], cw+W[k]);
if ((cp + P[k] > fp) and (k = n)) then
{
fp := cp + P[k];
fw := cw + W[k];
for j := 1 to k do X[j] := Y[j];
}
}
// Generate right child.
if (Bound(cp, cw, k) >= fp) then
{
Y[k] := 0;
if (k < n) then BKnap(k+1, cp, cw);
if ((cp > fp) and (k = n)) then
{
fp := cp;
fw := cw;
for j := 1 to k do X[j] := Y[j];
}
}
}
Algorithm for bounding function:
Algorithm Bound(cp, cw, k)
// cp → current profit total.
// cw → current weight total.
// k → the index of the last considered item.
// m → the knapsack size.
{
b := cp; c := cw;
for i := k+1 to n do
{
c := c + W[i];
if (c < m) then b := b + P[i];
else return b + (1 - (c-m)/W[i]) * P[i];
}
return b;
}

Example:
m = 6, n = 3
Wi = (2, 3, 4); Pi = (1, 2, 5)
Arranged by ratio Pi/Wi (i.e. 5/4, 2/3, 1/2): Wi = (4, 3, 2), Pi = (5, 2, 1)

Xi = (1, 0, 1)
The maximum weight is 6.
The maximum profit is (1*5) + (0*2) + (1*1) = 5 + 1 = 6.
fp = -1
k = 1: 1 ≤ 3 and 0 + 4 ≤ 6, so include object 1:
cw = 4, cp = 5, Y(1) = 1, k = k+1 = 2.
k = 2: 2 ≤ 3 but 4 + 3 = 7 > 6, so object 2 does not fit: Y(2) = 0.
Call Bound(5, 4, 2):
b = 5, c = 4
i = 3 to 3: c = 4 + 2 = 6; 6 < 6 is false,
so return 5 + (1 - (6-6)/2) * 1 = 6.
6 is not less than fp, so the right child is explored.
k = k+1 = 3: 3 ≤ 3 and 4 + 2 ≤ 6, so include object 3:
cw = 6, cp = 6, Y(3) = 1, k = 4.
Since 4 > 3 (k > n): fp = 6, fw = 6, X = (1, 0, 1).
The solution: Xi = (1, 0, 1), profit = 6, weight = 6.

Chapter 5: Branch And Bound Method

5.1. Introduction
The design technique known as branch and bound is very similar to
backtracking(seen in unit 4) in that it searches a tree model of the solution space
and is applicable to a wide variety of discrete combinatorial problems.
Each node in the combinatorial tree generated in the last Unit defines a problem
state. All paths from the root to other nodes define the state space of the problem.
Solution states are those problem states 's' for which the path from the root to 's'
defines a tuple in the solution space. The leaf nodes in the combinatorial tree are the
solution states.
Answer states are those solution states 's' for which the path from the root to 's'
defines a tuple that is a member of the set of solutions (i.e.,it satisfies the implicit
constraints) of the problem.
The tree organization of the solution space is referred to as the state space tree.
A node which has been generated and all of whose children have not yet been
generated is called a live node.
The live node whose children are currently being generated is called the E-node
(node being expanded).
A dead node is a generated node, which is not to be expanded further or all of
whose children have been generated.
Bounding functions are used to kill live nodes without generating all their children.
Depth first node generation with bounding function is called backtracking. State
generation methods in which the E-node remains the E-node until it is dead lead to
branch-and-bound method.
The term branch-and-bound refers to all state space search methods in which all
children of the E-node are generated before any other live node can become the E-
node.
In branch-and-bound terminology, a breadth first search (BFS)-like state space search will be called FIFO (First In First Out) search, as the list of live nodes is a first-in-first-out list (or queue).
A D-search (depth search) state space search will be called LIFO (Last In First Out) search, as the list of live nodes is a last-in-first-out list (or stack).
Bounding functions are used to help avoid the generation of subtrees that do not
contain an answer node.
The branch-and-bound algorithms search a tree model of the solution space to get the solution. However, this type of algorithm is oriented more toward optimization. An algorithm of this type specifies a real-valued cost function for each of the nodes that appear in the search tree.
Usually, the goal here is to find a configuration for which the cost function is minimized. The branch-and-bound algorithms are rarely simple; they tend to be quite complicated in many cases.
Example [4-queens] Let us see how a FIFO branch-and-bound algorithm would
search the state space tree for the 4-queens problem.

Figure 5.1
Initially, there is only one live node, node 1. This represents the case in which no queen has been placed on the chessboard. This node becomes the E-node.
It is expanded and its children, nodes 2, 18, 34, and 50, are generated.
These nodes represent a chessboard with queen 1 in row 1 and columns 1, 2, 3, and 4 respectively.
The only live nodes are 2, 18, 34, and 50. If the nodes are generated in this order, then the next E-node is node 2.
It is expanded and nodes 3, 8, and 13 are generated. Node 3 is immediately killed using the bounding function. Nodes 8 and 13 are added to the queue of live nodes.
Node 18 becomes the next E-node. Nodes 19, 24, and 29 are generated. Nodes 19 and 24 are killed as a result of the bounding functions. Node 29 is added to the queue of live nodes.
Now the E-node is node 34. Figure 5.1 shows the portion of the tree of Figure 7.2 that is generated by a FIFO branch-and-bound search. Nodes that are killed as a result of the bounding functions have a "B" under them.
Numbers inside the nodes correspond to the numbers in Figure 7.2. Numbers outside the nodes give the order in which the nodes are generated by FIFO branch-and-bound.
At the time the answer node, node 31, is reached, the only live nodes remaining are nodes 38 and 54.

5.2. Least Cost (LC) Search


 In the FIFO method, the selection (generation) of the next E-node is rigid: E-nodes are selected level by level.
 The nodes in the current level are expanded first before going to the next level.
 This process is repeated until the nearest answer node is found.
 In FIFO, it takes more time to reach the nearest answer node, because all nodes in each level are expanded. So we go for the Least Cost (LC) search method.

 In the LC search method, we can speed up the search by using a ranking (cost) function.

Figure 5.2
In Example 5.1, when node 30 is generated, it should be obvious to the search algorithm that this node will lead to an answer node in one move. However, the rigid FIFO rule first requires the expansion of all live nodes generated before node 30.
The search for an answer node can often be speeded up by using an "intelligent" ranking function c'(.) for live nodes. The next E-node is selected on the basis of this ranking function.
In the 4-queens example, if we use a ranking function that assigns node 30 a better rank than all other live nodes, then node 30 will become the E-node following node 29. The remaining live nodes will never become E-nodes, as the expansion of node 30 results in the generation of an answer node (node 31).
The ideal way to assign ranks would be on the basis of the additional computational effort (or cost) needed to reach an answer node from the live node. For any node x, this cost could be
(1) the number of nodes in the sub-tree x that need to be generated before any answer node is generated or, more simply,
(2) the number of levels the nearest answer node (in the sub-tree x) is from x.
Using cost measure (2), the cost of the root of the tree of Figure 5.1 is 4 (node 31 is four levels from node 1). The costs of nodes 18 and 34, 29 and 35, and 30 and 38 are respectively 3, 2, and 1. The costs of all remaining nodes on levels 2, 3, and 4 are respectively greater than 3, 2, and 1.
Using these costs as a basis to select the next E-node, the E-nodes are nodes 1, 18, 29, and 30 (in that order). The only other nodes to get generated are nodes 2, 34, 50, 19, 24, 32, and 31.
The difficulty of using the ideal cost function is that computing the cost of a node usually involves a search of the sub-tree x for an answer node. Hence, by the time the cost of a node is determined, that sub-tree has been searched and there is no need to explore x again. For this reason, search algorithms usually rank nodes only on the basis of an estimate g'(.) of their cost.

Let g'(x) be an estimate of the additional effort needed to reach an answer node from x. Node x is assigned a rank using a function c'(.) such that c'(x) = f(h(x)) + g'(x), where h(x) is the cost of reaching x from the root and f(.) is any non-decreasing function.

 A search strategy that uses the cost function c'(x) = f(h(x)) + g'(x) to select the next E-node (the live node with least c' is chosen as the next E-node) is called a Least Cost (LC) search.
h(x) – cost of reaching x from the root (number of levels from the root to node x)
g'(x) – estimated cost of reaching a nearest answer node (answer node with minimum cost) from node x (number of levels from x to a nearest answer node)
 Assigning the rank (cost) to a node requires
 computing the cost needed to reach an answer node from that live node;
 for any node x, the cost is the number of levels the nearest answer node is from x (that is, the number of levels from x to the nearest answer node).
The cost function c(.) is defined as follows: if x is an answer node, then c(x) is the cost (level, computational difficulty, etc.) of reaching x from the root of the state space tree. If x is not an answer node, then c(x) = ∞, provided the sub-tree x contains no answer node; otherwise c(x) equals the cost of a minimum-cost answer node in the sub-tree x. It should be easy to see that c'(.) with f(h(x)) = h(x) is an approximation to c(.).

5.2.1. Control Abstractions for LC Search


Let t be the state space tree and c(.) a cost function for the nodes in t.
If x is a node in t, then c(x) is the minimum cost of any answer node in the subtree with root x. Thus, c(t) is the cost of a minimum-cost answer node in t. We cannot compute c(.) easily, so the heuristic c'(.) is used.

listnode = record
{
listnode *next, *parent;
float cost;
}

Algorithm LCSearch(t)
// Search t for an answer node.
{
if *t is an answer node then output *t and return;
E := t; // E-node.
Initialize the list of live nodes to be empty;
repeat
{
for each child x of E do
{
if x is an answer node then output the path from x to t and return;
Add(x); // x is a new live node.
(x → parent) := E; // Pointer for path to root.
}
if there are no more live nodes then
{
write("No answer node"); return;
}
E := Least();
} until (false);
}

 The LCSearch algorithm uses c'(.) to find an answer node.

 Two functions, Add(x) and Least(), are used:
o Add(x) → adds the new live node x to the list of live nodes.
o Least() → finds a live node with least c'(.). This node is deleted from the list of live nodes and returned.
 It outputs the path from the answer node it finds to the root node t.
o As each x becomes a live node, a parent field is associated with it, which gives the parent of node x.
o When an answer node g is found, the path from g to t can be determined.
g - answer node
x - live node
t - root node
Algorithm explanation
1) Variable E points to the current E-node.
2) Initially the root node t is the first E-node, i.e. E = t.
3) Initially the list of live nodes is empty.
4) The for loop examines all the children of the E-node:
a. If x (one of the children) is an answer node, then output the path from x to t and exit.
b. If x is not an answer node, then x becomes a live node and is added to the list of live nodes; its parent pointer is set to E.
5) When all the children are generated, the E-node becomes a dead node.
6) If there are no more live nodes, then print "no answer node" and exit; else call Least() to choose the next E-node.
7) Repeat steps 4 to 6 until an answer node is found or the entire state space tree has been searched.
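The control abstraction can be sketched in Python, with a min-heap playing the role of Add() and Least() (illustrative: the node type, toy tree and ranking function below are ours, not the text's):

```python
import heapq

# Illustrative LC-search skeleton: Add() and Least() from the control
# abstraction become a min-heap keyed on the estimate c'(x).

def lc_search(root, children, c_hat, is_answer):
    if is_answer(root):
        return [root]
    live = [(c_hat(root), root)]        # list of live nodes, least c' first
    parent = {root: None}
    while live:
        _, e = heapq.heappop(live)      # Least(): E-node with least c'
        for x in children(e):           # generate all children of the E-node
            parent[x] = e
            if is_answer(x):            # output the path from x back to t
                path = []
                while x is not None:
                    path.append(x)
                    x = parent[x]
                return path
            heapq.heappush(live, (c_hat(x), x))   # Add(x): new live node
    return None                         # no answer node

# Toy state space: node k has children 2k and 2k+1 (cut off at 16);
# the answer node is 11, and c'(x) is simply the distance |11 - x|.
children = lambda k: [2 * k, 2 * k + 1] if k < 16 else []
path = lc_search(1, children, lambda k: abs(11 - k), lambda k: k == 11)
print(path)   # [11, 5, 2, 1]: the answer node 11, back up to the root 1
```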

5.2.2. Example : 15 Puzzle Problem


 The 15 puzzle consists of 15 numbered tiles on a square frame with a capacity of 16 tiles.
 Given an initial arrangement of the tiles, the objective of the problem is to transform this arrangement into the goal arrangement through a series of legal moves.
 The legal moves are ones in which a tile adjacent to the empty spot is moved to the empty spot.
Problem State
 From the initial arrangement, four moves are possible
 We can move any one of the tiles numbered 2,3,5,6 to the empty spot
 These arrangements are called states of the puzzle
 To speed up the search, we can associate a cost c'(x) with each node x in the state space tree.
 c'(x) = f(x) + g'(x), where f(x) is the length of the path from the root to node x, and g'(x) is an estimate of the length of a shortest path from x to a goal node in the subtree with root x
(or)
g'(x) = the number of non-blank tiles not in their goal position.

Part of the state space tree for the 15 puzzle


 An LC search starts with the root node (node 1) as the E-node.
 The E-node node 1 generates all its children 2, 3, 4 and 5.
 After E-node node 1 has generated all its children, it becomes a dead node.
 The live nodes are 2, 3, 4 and 5.
 To determine the next E-node, C'(x) is computed for all live nodes and the live node with least C'(x) is selected as the next E-node.
C'(x) = f(x) + g'(x)

C'(2) = 1 + 4 = 5
C'(3) = 1 + 4 = 5
C'(4) = 1 + 2 = 3 (least cost node)
C'(5) = 1 + 4 = 5
 The live node with least C' is node 4, which becomes the next E-node. It generates all its children 10, 11 and 12 and becomes a dead node.
 The live nodes are 2, 3, 5, 10, 11 and 12.
 To determine the next E-node, C'(x) is computed for all live nodes and the live node with least C'(x) is selected as the next E-node.
C'(x) = f(x) + g'(x)

C'(2) = 1 + 4 = 5
C'(3) = 1 + 4 = 5
C'(5) = 1 + 4 = 5
C'(10) = 2 + 1 = 3 (least cost node)
C'(11) = 2 + 3 = 5
C'(12) = 2 + 3 = 5
 The live node with least C' is node 10, which becomes the next E-node. It generates its children 22 and 23 and becomes a dead node.
 Node 23 is determined to be the goal node and the search terminates.
In this case the LC search is as efficient as using the exact function c(.). The LC search will be far more selective than the other search methods.
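The cost computation c'(x) = f(x) + g'(x) and the LC selection rule can be sketched in Python (illustrative; the start state below is a hypothetical scramble one move from the goal, chosen so the search is short):

```python
import heapq

# Illustrative LC search for the 15 puzzle with c'(x) = f(x) + g'(x):
# f(x) is the depth of x, g'(x) counts non-blank tiles out of place.

GOAL = tuple(range(1, 16)) + (0,)

def g_hat(s):
    # Number of non-blank tiles not in their goal position.
    return sum(1 for i, t in enumerate(s) if t and t != GOAL[i])

def moves(s):
    # Slide each tile adjacent to the empty spot into the empty spot.
    b = s.index(0)
    r, c = divmod(b, 4)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 4 and 0 <= nc < 4:
            j = nr * 4 + nc
            t = list(s)
            t[b], t[j] = t[j], t[b]
            yield tuple(t)

def solve(start):
    live = [(g_hat(start), 0, start)]       # (c', f(x) = depth, state)
    seen = {start}
    while live:
        _, depth, s = heapq.heappop(live)   # next E-node: least c'
        if s == GOAL:
            return depth                    # number of moves used
        for nxt in moves(s):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(live, (depth + 1 + g_hat(nxt), depth + 1, nxt))
    return None

start = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 0, 15)
print(solve(start))   # 1: sliding tile 15 left reaches the goal
```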

5.3 Bounding
 A branch -and-bound searches the state space tree using any search
mechanism in which all the children of the E-node are generated before
another node becomes the E-node.
 We assume that each answer node x has a cost c(x) associated with it and
that a minimum-cost answer node is to be found. Three common search
strategies are FIFO, LIFO, and LC.
 A cost function c'(.) such that c'(x) ≤ c(x) is used to provide lower bounds on solutions obtainable from any node x.
 If upper is an upper bound on the cost of a minimum-cost solution, then all live nodes x with c'(x) > upper may be killed, as all answer nodes reachable from x have cost c(x) ≥ c'(x) > upper.
 The starting value for upper can be set to infinity.
 Each time a new answer node is found, the value of upper can be updated.
 c'(x), upper and c(x) can be computed using formulas.

5.3.1 Example: Job Scheduling with Deadlines Problem


We are given n jobs and one processor. Each job i has associated with it a 3-tuple (Pi, di, ti), where
ti - processing time in units
di - deadline
Pi - penalty. If job i is not completed within the deadline di, then the penalty Pi is incurred.
Objective of the problem:
Select a subset J of the n jobs such that all jobs in J can be completed by their deadlines. Hence a penalty is incurred only on those jobs not in J. The subset J should have minimum penalty among all possible subsets; such a J is optimal.
Let us assume that n=4
(P1,d1,t1) = (5,1,1)
(P2,d2,t2) = (10,3,2)
(P3,d3,t3) = (6,2,1)
(P4,d4,t4) = (3,1,1)
 The solution space for this instance consists of all possible subsets of the job index set {1,2,3,4}.
job index: 1, 2, 3, 4
solution space: all possible subsets of the job index set,
i.e. {1,2,3}, {2,3}, {1,4}, {1,3,4}, ……..
 The solution space can be organized into a tree by means of two formulations:
1. variable tuple size formulation
2. fixed tuple size formulation
1. Variable Tuple Size Formulation

In the tree for this formulation (figure omitted), circular nodes are answer nodes and square nodes represent infeasible subsets.
Compute c(x) using the following rules:
1. If x is a leaf node, c(x) is the sum of the penalties of the jobs that are not on the path from the root to x.
2. If x is not a leaf node, c(x) is the minimum penalty corresponding to any node in the subtree with root x,
(or)
c(x) = min{c(child-1), c(child-2), …, c(child-n)}, where the children are those of node x.

c(x) = ∞ for infeasible (square) nodes.

Figure 8.6 corresponds to the variable tuple size formulation while figure 8.7 corresponds to the fixed tuple size formulation. In both figures square nodes represent infeasible subsets. In figure 8.6 all non-square nodes are answer nodes. Node 9 represents an optimal solution and is the only minimum-cost answer node. For this node J = {2,3} and the penalty (cost) is 8. In figure 8.7 only non-square leaf nodes are answer nodes. Node 25 represents the optimal solution and is also a minimum-cost answer node. This node corresponds to J = {2,3} and a penalty of 8.

The costs of the answer nodes of figure 8.7 are given below the nodes.
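The optimal J = {2,3} with penalty 8 can be confirmed by brute force (illustrative Python; feasibility of a subset is tested by scheduling its jobs in earliest-deadline-first order, a standard check not spelled out in the text):

```python
from itertools import combinations

# Brute-force check of the worked instance: a subset J is feasible when
# scheduling its jobs in non-decreasing deadline order meets every
# deadline; the cost of J is the total penalty of the jobs left out.

jobs = {1: (5, 1, 1), 2: (10, 3, 2), 3: (6, 2, 1), 4: (3, 1, 1)}  # i: (p, d, t)

def feasible(J):
    finish = 0
    for i in sorted(J, key=lambda i: jobs[i][1]):   # earliest deadline first
        finish += jobs[i][2]
        if finish > jobs[i][1]:
            return False
    return True

best = min(
    (sum(jobs[i][0] for i in jobs if i not in J), J)
    for r in range(len(jobs) + 1)
    for J in combinations(jobs, r)
    if feasible(J)
)
print(best)   # (8, (2, 3)): leave out jobs 1 and 4, penalty 5 + 3 = 8
```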

5.4 Travelling Salesman Problem


Introduction
This is an algorithmic procedure similar to backtracking in which a branch is chosen and bounded until a new branch is chosen for advancing. The technique is implemented here for the travelling salesman problem [TSP] with asymmetric costs (Cij ≠ Cji), for which it is an effective procedure.
 Let G = (V,E) be a directed graph defining an instance of the travelling salesman problem.
 Let Cij be the cost of edge <i,j>; Cij = ∞ if <i,j> is not in E.
 Let |V| = n.
 Without loss of generality, we can assume that every tour starts and ends at vertex 1.
 So the solution space is given by S = {(1, π, 1)},
where π is a permutation of (2, 3, …, n).
Then |S| = (n-1)!.
 The size of S can be reduced by restricting S so that (1, i1, i2, …, in-1, 1) ∈ S iff <ij, ij+1> ∈ E, 0 ≤ j ≤ n-1, and i0 = in = 1.
 In this TSP, Cij ≠ Cji.

Procedure:
Step 0: Generate the cost matrix C for the given graph G.
Step 1: ROW REDUCTION: For all rows, do step 2.
Step 2: Find the least cost in the row and subtract it from the other elements of that row.
Step 3: COLUMN REDUCTION: Use the row-reduced cost matrix. For all columns, do step 4.
Step 4: Find the least cost in the column and subtract it from the other elements of that column.
Step 5: Preserve the cost matrix [row reduced first and then column reduced] as Ci for the ith time.
Step 6: Enlist all edges (i, j) having cost = 0.
Step 7: Calculate the effective cost of the edges:
 EC(i, j) = least cost in the ith row excluding (i, j) + least cost in the jth column excluding (i, j).
Step 8: Compare all effective costs and pick up the largest, l(EC). If two or more have the same cost, arbitrarily choose any one among them.
Step 9: Delete (i, j): delete the ith row and jth column. Change the (j, i) value to infinity (to avoid infinite loop formation). If (j, i) is not present, leave it.
Step 10: Repeat steps 1 to 9 until the resultant cost matrix is of order 2×2, and reduce it (both R.R and C.R).
Step 11: Use the preserved cost matrices Cn, Cn-1, …, C1. Choose an edge [i, j] having value 0, seen for the first time in a preserved matrix, and leave that matrix.
Step 12: Use the result obtained in step 11 to generate a complete tour.
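Steps 1-4 can be cross-checked in Python (an illustrative sketch, using the example cost matrix from the worked example that follows; `float('inf')` stands for ∞). The running total of subtracted amounts is not used by the procedure above, but in standard branch-and-bound formulations it is a lower bound on the cost of any tour:

```python
# Row reduction followed by column reduction of a TSP cost matrix.

INF = float('inf')
C = [[INF, 25, 40, 31, 27],
     [  5, INF, 17, 30, 25],
     [ 19, 15, INF,  6,  1],
     [  9, 50, 24, INF,  6],
     [ 22,  8,  7, 10, INF]]

def reduce_matrix(c):
    c = [row[:] for row in c]
    total = 0
    for row in c:                                   # row reduction
        m = min(row)
        total += m
        for j in range(len(row)):
            row[j] -= m
    for j in range(len(c)):                         # column reduction
        m = min(row[j] for row in c)
        total += m
        for row in c:
            row[j] -= m
    return c, total

C1, reduction = reduce_matrix(C)
print(reduction)      # 47 = (25+5+1+6+7) from rows + 3 from column 4
print(C1[0])          # first row of the reduced matrix: [inf, 0, 15, 3, 2]
```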

Example: Consider the given graph G on the vertices {1, 2, 3, 4, 5}.

[Figure: directed graph G; its edge costs appear in the cost matrix of Step 0.]

Step 0: Generate the cost matrix C:

      1    2    3    4    5
1     ∞   25   40   31   27
2     5    ∞   17   30   25
3    19   15    ∞    6    1
4     9   50   24    ∞    6
5    22    8    7   10    ∞

Phase 1:
Step 1: Row reduction in C.
In R1, the least edge <1,2> = 25
In R2, the least edge <2,1> = 5
In R3, the least edge <3,5> = 1
In R4, the least edge <4,5> = 6
In R5, the least edge <5,3> = 7
Subtract these from the other elements of their rows:

      1    2    3    4    5
1     ∞    0   15    6    2
2     0    ∞   12   25   20
3    18   14    ∞    5    0
4     3   44   18    ∞    0
5    15    1    0    3    ∞

Step 3: Column reduction in the row-reduced matrix.
In C1, the least edge <2,1> = 0
In C2, the least edge <1,2> = 0
In C3, the least edge <5,3> = 0
In C4, the least edge <5,4> = 3
In C5, the least edge <3,5> = 0
Subtract these from the other elements of their columns:

      1    2    3    4    5
1     ∞    0   15    3    2
2     0    ∞   12   22   20
3    18   14    ∞    2    0
4     3   44   18    ∞    0
5    15    1    0    0    ∞

Step 5: Preserve the above as C1:

      1    2    3    4    5
1     ∞    0   15    3    2
2     0    ∞   12   22   20
3    18   14    ∞    2    0
4     3   44   18    ∞    0
5    15    1    0    0    ∞

Step 6: List all edges having cost = 0:

L = {(1,2), (2,1), (3,5), (4,5), (5,3), (5,4)}

Step 7: Calculate the effective cost of the edges [E.C]:

EC for <1,2> = 2 + 1 = 3
EC for <2,1> = 12 + 3 = 15
EC for <3,5> = 2 + 0 = 2
EC for <4,5> = 3 + 0 = 3
EC for <5,3> = 0 + 12 = 12
EC for <5,4> = 0 + 2 = 2
Step 8: The edge <2,1> has the largest effective cost.
Step 9: Delete (2,1) from C1 and change (1,2) to ∞ if it exists,
i.e. remove the 2nd row and 1st column and set <1,2> = ∞.
Now the cost matrix is:

      2    3    4    5
1     ∞   15    3    2
3    14    ∞    2    0
4    44   18    ∞    0
5     1    0    0    ∞

Step 10: The order of the reduced cost matrix ≠ 2×2, therefore go to step 1.

Phase 2:
Step 1: Do R.R:

      2    3    4    5
1     ∞   13    1    0
3    14    ∞    2    0
4    44   18    ∞    0
5     1    0    0    ∞

Step 3: Do C.R:

      2    3    4    5
1     ∞   13    1    0
3    13    ∞    2    0
4    43   18    ∞    0
5     0    0    0    ∞

Step 5: Preserve the above as C2:

C2 =  2    3    4    5
1     ∞   13    1    0
3    13    ∞    2    0
4    43   18    ∞    0
5     0    0    0    ∞

Step 6: L={(1,5), (3,5), (4,5), (5,2), (5,3), (5,4)}

Step 7: Calculation of E.C:

EC of edge (1,5) = 1 + 0 = 1
EC of edge (3,5) = 2 + 0 = 2
EC of edge (4,5) = 18 + 0 = 18
EC of edge (5,2) = 0 + 13 = 13
EC of edge (5,3) = 0 + 13 = 13
EC of edge (5,4) = 0 + 1 = 1
Step 8: The edge (4,5) in L has the largest effective cost.
Step 9: Delete (4,5) from C2 and set (5,4) = ∞ if it exists.
Now the cost matrix is:

      2    3    4
1     ∞   13    1
3    13    ∞    2
5     0    0    ∞

Step 10: The order of the cost matrix ≠ 2×2; hence go to step 1.

Phase 3:
Step 1: Do R.R:

      2    3    4
1     ∞   12    0
3    11    ∞    0
5     0    0    ∞

Step 3: Do C.R:

      2    3    4
1     ∞   12    0
3    11    ∞    0
5     0    0    ∞
Step 5: Preserve the above as C3:

C3 =  2    3    4
1     ∞   12    0
3    11    ∞    0
5     0    0    ∞
Step 6: L={(1,4), (3,4), (5,2), (5,3)}
Step 7: Calculation of E.C:
EC of edge (1,4) = 12 + 0 = 12
EC of edge (3,4) = 11 + 0 = 11
EC of edge (5,2) = 0 + 11 = 11
EC of edge (5,3) = 0 + 12 = 12
Step 8: Here we have two edges, (1,4) and (5,3), with cost = 12. Hence arbitrarily choose (1,4).
Step 9: Delete (1,4); change (4,1) to ∞ if it exists.
Now the cost matrix is:
      2    3
3    11    ∞
5     0    0

Step 10: The order of the reduced cost matrix = 2×2.

Do R.R:
      2    3
3     0    ∞
5     0    0

Do C.R:
      2    3
3     0    ∞
5     0    0

Therefore C4 =
      2    3
3     0    ∞
5     0    0
Step 11: List C4, C3, C2 & C1:

C4:   2    3
3     0    ∞
5     0    0

C3:   2    3    4
1     ∞   12    0
3    11    ∞    0
5     0    0    ∞

C2:   2    3    4    5
1     ∞   13    1    0
3    13    ∞    2    0
4    43   18    ∞    0
5     0    0    0    ∞

C1:   1    2    3    4    5
1     ∞    0   15    3    2
2     0    ∞   12   22   20
3    18   14    ∞    2    0
4     3   44   18    ∞    0
5    15    1    0    0    ∞

Step 12:
Use C4:
      2    3
3     0    ∞
5     0    0

Pick up an edge (i, j) = 0 having the least index. Here (3,2) = 0.
Hence, T ← (3,2).

Use C3:
      2    3    4
1     ∞   12    0
3    11    ∞    0
5     0    0    ∞

Pick up an edge (i, j) = 0 having the least index. Here (1,4) = 0.
Hence, T ← (3,2), (1,4).

Use C2:
      2    3    4    5
1     ∞   13    1    0
3    13    ∞    2    0
4    43   18    ∞    0
5     0    0    0    ∞

Pick up an edge (i, j) = 0 with the least index. Here (1,5) is not possible, because city 1 has already been chosen; (3,5) is likewise not possible, as that index has already been chosen. Next choose (4,5) = 0.
Hence, T ← (3,2), (1,4), (4,5).
Use C1:
      1    2    3    4    5
1     ∞    0   15    3    2
2     0    ∞   12   22   20
3    18   14    ∞    2    0
4     3   44   18    ∞    0
5    15    1    0    0    ∞

Pick up an edge (i, j) = 0 with the least index.
(1,2) → not possible; (2,1) → choose it.
Hence, T ← (3,2), (1,4), (4,5), (2,1).
Solution:
From the above list:
3 → 2 → 1 → 4 → 5
This means the travelling salesperson started his tour from city 3 and visited city 2, city 1, city 4, city 5 and returned to the starting city 3.
Final result:
3 → 2 → 1 → 4 → 5 → 3
Cost is 15 + 5 + 31 + 6 + 7 = 64.
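As a quick arithmetic check, the cost of the final tour can be recomputed from the original cost matrix (illustrative Python; 1-based city numbers are mapped to 0-based indices):

```python
# Cost of the final tour 3 -> 2 -> 1 -> 4 -> 5 -> 3 read off the
# original cost matrix of Step 0.

INF = float('inf')
C = [[INF, 25, 40, 31, 27],
     [  5, INF, 17, 30, 25],
     [ 19, 15, INF,  6,  1],
     [  9, 50, 24, INF,  6],
     [ 22,  8,  7, 10, INF]]

tour = [3, 2, 1, 4, 5, 3]
cost = sum(C[tour[k] - 1][tour[k + 1] - 1] for k in range(len(tour) - 1))
print(cost)   # 15 + 5 + 31 + 6 + 7 = 64
```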

5.5 0/1 Knapsack Problem


 To use the branch and bound technique to solve any problem, it is first necessary to conceive of a state space tree for the problem.
 The branch and bound technique is used to solve minimization problems (e.g. 15 puzzle, 4-queens, where the node with minimum cost is the selection criterion).
 The branch and bound technique cannot be applied directly to the knapsack problem, because the knapsack problem is a maximization problem (maximum profit is the selection criterion).
 This difficulty is easily overcome by replacing the objective function Σ PiXi by the function -Σ PiXi.
 The modified knapsack problem is stated as:
n
minimize -Σ pixi
i=1
n
subject to Σ wixi ≤ m
i=1

xi = 0 or 1, 1 ≤ i ≤ n
There are two types of formulation.
1.Fixed tuple size formulation
2.Variable tuple size formulation

Fixed tuple size formulation

 Every leaf node in the state space tree representing an assignment for which
Σ wixi ≤ m, 1 ≤ i ≤ n, is an answer node.
 All other leaf nodes are infeasible.
 For a minimum-cost answer node to correspond to an optimal solution, we define
c(x) = -Σ pixi, 1 ≤ i ≤ n, for every answer node x.
 The cost c(x) = ∞ for infeasible leaf nodes.

Example
Consider the knapsack instance n=4, (p1,p2,p3,p4) = (10,10,12,18),
(w1,w2,w3,w4) = (2,4,6,9), and m = 15.
 There are two methods to solve the knapsack problem
 LC Branch and Bound Solution
 FIFO Branch and Bound Solution

5.5.1. LC Branch and Bound


The search begins with the root as the E-node; that is, node 1 (the root) becomes the E-node.

Algorithm:
Algorithm Ubound(cp, cw, k, m)
// cp is the current profit total; cw is the current weight total;
// k is the index of the last considered item; m is the knapsack size.
{
b := cp; c := cw;
for i := k+1 to n do
{
if ((c + w[i]) <= m) then
{
c := c + w[i]; b := b - p[i];
}
}
return b;
}

Algorithm Explanation with Example:


Ubound(0, 0, 0, 15)
b = 0; c = 0;
for i = 1 to 4:
i=1:
if ((c+w[1]) <= m) → (c=0, w[1]=2, 0+2 = 2 <= 15) true
c = 0+2 = 2; b = 0-10 = -10
i=2:
if ((c+w[2]) <= m) → (c=2, w[2]=4, 2+4 = 6 <= 15) true
c = 2+4 = 6; b = -10-10 = -20
i=3:
if ((c+w[3]) <= m) → (c=6, w[3]=6, 6+6 = 12 <= 15) true
c = 6+6 = 12; b = -20-12 = -32
i=4:
if ((c+w[4]) <= m) → (c=12, w[4]=9, 12+9 = 21 <= 15) false
return b (b = -32); u(root node) = b
 For any node x in the state space tree, we compute c'(x), u(x) and upper.
 For any node x, if c'(x) > upper, the node x is killed immediately.
 Initially upper = u(1), and then upper is updated by the node with minimum u(x) value in the current level.
 u(·) for the root node is calculated using the algorithm Ubound (minus the profits of the fully included objects).
 u(1) = b (return value of the Ubound algorithm)
= -32 (objects 1, 2 and 3 are fully included, so -(p1+p2+p3) = -(10+10+12) = -32)
Node 1:
Initially, the first 3 whole objects and part of the 4th object fill the knapsack, that is (1, 1, 1, 3/9).
c'(x) = -Σ pixi (with the fractional part of the last object included), so
c'(1) = -(1*10 + 1*10 + 1*12 + (3/9)*18) = -38
upper = u(1)
= -32 (minus the profits of the fully included objects 1, 2 and 3:
-(p1+p2+p3) = -(10+10+12) = -32)
 E-node node 1 generates both its children 2 and 3 and becomes a dead node. Now nodes 2 and 3 are live nodes.
 To determine the next E-node, compute c'(x), u(x) and upper for the live nodes 2 and 3.
Node 2:
 The first object is included (the first object was already included at node 1, so the same value is obtained for c'(2)).
 Status of included objects: (1, 1, 1, 3/9)
 c'(2) = -(1*10 + 1*10 + 1*12 + (3/9)*18) = -38
 u(2) = -32 (minus the profits of the fully included objects 1, 2 and 3: -(10+10+12) = -32)

Node 3:
 The first object is not included.
 Status of included objects: (0, 1, 1, 5/9)
 c'(3) = -(0*10 + 1*10 + 1*12 + (5/9)*18) = -32
 u(3) = -22 (minus the profits of the fully included objects 2 and 3: -(10+12) = -22)
 upper = min{u(2), u(3)}
= min{-32, -22}
upper = -32

 Since c'(2) and c'(3) ≤ upper, nodes 2 and 3 are not killed and remain live nodes.
 The node 2 becomes the next E-node because c'(2) = -38 < c'(3) = -32.

 Node 2 generates its children 4 and 5.


[State space tree figure omitted.]

Node 4:
 The second object is included (objects 1 and 2 are now both included, so the same value is obtained for c'(4)).
 Status of included objects: (1, 1, 1, 3/9)
 c'(4) = -(1*10 + 1*10 + 1*12 + (3/9)*18) = -38
 u(4) = -32 (minus the profits of the fully included objects 1, 2 and 3: -(10+10+12) = -32)
Node 5:
 The second object is not included.
 Status of included objects: (1, 0, 1, 7/9)
 c'(5) = -(1*10 + 0*10 + 1*12 + (7/9)*18) = -36
 u(5) = -22 (minus the profits of the fully included objects 1 and 3: -(10+12) = -22)
 upper = min{u(4), u(5)}
= min{-32, -22}
upper = -32
 Since c'(4) and c'(5) ≤ upper, nodes 4 and 5 are not killed and remain live nodes.
 Among the live nodes 4 and 5, node 4 becomes the next E-node because c'(4) = -38 < c'(5) = -36.
 Node 4 generates its children 6 and 7.

Node 6:
 The third object is included (objects 1, 2 and 3 are now all included, so the same value is obtained for c'(6)).
 Status of included objects: (1, 1, 1, 3/9)
 c'(6) = -(1*10 + 1*10 + 1*12 + (3/9)*18) = -38
 u(6) = -32 (minus the profits of the fully included objects 1, 2 and 3: -(10+10+12) = -32)
Node 7:
 The third object is not included.
 Status of included objects: (1, 1, 0, 1)
 c'(7) = -(1*10 + 1*10 + 0*12 + 1*18) = -38
 u(7) = -38 (minus the profits of the fully included objects 1, 2 and 4: -(10+10+18) = -38)
 upper = min{u(6), u(7)}
= min{-32, -38}
upper = -38
 Since c'(6) = c'(7) = upper, nodes 6 and 7 are not killed and remain live nodes.
 Among the live nodes 6 and 7, node 7 becomes the next E-node because u(7) = -38 < u(6) = -32 (c'(6) = -38 equals c'(7) = -38, so compare u(x)).
 Node 7 generates its children 8 and 9.

Node 8:
 The fourth object is included.
 Status of included objects: (1, 1, 0, 1)
 c'(8) = -(1*10 + 1*10 + 0*12 + 1*18) = -38
 u(8) = -38 (minus the profits of the fully included objects 1, 2 and 4: -(10+10+18) = -38)

Node 9:
 The fourth object is not included.
 Status of included objects: (1, 1, 0, 0)
 c'(9) = -(1*10 + 1*10 + 0*12 + 0*18) = -20
 u(9) = -20 (minus the profits of the fully included objects 1 and 2: -(10+10) = -20)
 upper = min{u(8), u(9)}
= min{-38, -20}
upper = -38
 Since c'(9) = -20 > upper, node 9 is killed.
 Nodes 6 and 8 are the live nodes with least cost. Regardless of which becomes the next E-node, c'(E) ≥ upper and the search terminates with node 8 as the answer node.
 At this time the value of upper = -38, together with the path 8, 7, 4, 2, 1, is printed as output.
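The LC branch-and-bound walk above can be sketched with a priority queue; this is an illustrative Python reimplementation (names like `c_hat` and `lcbb` are ours, not the text's), using the same instance and the same -Σ pixi cost convention:

```python
import heapq

# LC branch-and-bound sketch for n=4, p=(10,10,12,18), w=(2,4,6,9),
# m=15, minimizing -sum(p[i]*x[i]). c_hat relaxes the remaining items
# fractionally; u packs only whole remaining items (Ubound of the text).

p, w, m = (10, 10, 12, 18), (2, 4, 6, 9), 15
n = len(p)

def c_hat(k, cp, cw):               # fractional relaxation from level k
    for i in range(k, n):
        if cw + w[i] <= m:
            cp, cw = cp - p[i], cw + w[i]
        else:
            return cp - p[i] * (m - cw) / w[i]
    return cp

def u(k, cp, cw):                   # whole items only
    for i in range(k, n):
        if cw + w[i] <= m:
            cp, cw = cp - p[i], cw + w[i]
    return cp

def lcbb():
    upper = u(0, 0, 0)
    live = [(c_hat(0, 0, 0), 0, 0, 0, ())]   # (c', level, cp, cw, choices)
    while live:
        c, k, cp, cw, x = heapq.heappop(live)
        if c > upper:                # killed: no better solution below
            continue
        if k == n:
            return -cp, x            # least-cost answer node
        if cw + w[k] <= m:           # left child: include item k
            upper = min(upper, u(k + 1, cp - p[k], cw + w[k]))
            heapq.heappush(live, (c_hat(k + 1, cp - p[k], cw + w[k]),
                                  k + 1, cp - p[k], cw + w[k], x + (1,)))
        right = c_hat(k + 1, cp, cw) # right child: exclude item k
        if right <= upper:
            upper = min(upper, u(k + 1, cp, cw))
            heapq.heappush(live, (right, k + 1, cp, cw, x + (0,)))
    return None

print(lcbb())   # (38, (1, 1, 0, 1)): max profit 38 with items 1, 2 and 4
```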

5.5.2. FIFO Branch and Bound


 For any node x in the state space tree, we compute c'(x), u(x) and upper.
 For any node x, if c'(x) > upper, the node x is killed immediately.
 Initially upper = u(1), and then upper is updated by the node with minimum u(x) value in the current level.
u(·) for the root node is calculated using the algorithm Ubound (minus the profits of the fully included objects):
u(1) = b (return value of the Ubound algorithm) = -32 (objects 1, 2 and 3 are fully included, so -(p1+p2+p3) = -(10+10+12) = -32).
 Initially the root node, node 1, is the E-node and the queue of live nodes is empty.
 Since this is not a solution node, upper is initialized to u(1).
 Assume that the children of a node are generated left to right. Nodes 2 and 3 are generated and added to the queue.

[FIFO branch-and-bound state space tree figure omitted.]
 c'(2), c'(3) and u(2), u(3) are calculated.
 upper = min(u(2), u(3)), that is upper = -32.
 Node 2 becomes the next E-node. Its children, nodes 4 and 5, are generated and added to the queue.
 c'(4), c'(5) and u(4), u(5) are calculated.
 upper = min(u(4), u(5)), that is upper = -32.
 Node 3 becomes the next E-node. Its children, nodes 6 and 7, are generated and added to the queue.
 c'(6), c'(7) and u(6), u(7) are calculated.
 Node 7 is immediately killed because c'(7) > upper (-28 > -32).
 upper remains -32.
 Node 4 becomes the next E-node. Its children, nodes 8 and 9, are generated and added to the queue.
 c'(8), c'(9) and u(8), u(9) are calculated.
 upper = min(u(8), u(9)), that is upper = -38.
 Nodes 5 and 6 in the queue are the next E-nodes, but they are not expanded because c'(5) and c'(6) > upper (-36 and -32 > -38).
 Node 8 becomes the next E-node. Its children, nodes 10 and 11, are generated and added to the queue.
 c'(10), c'(11) and u(10), u(11) are calculated.
 Node 10 is infeasible, so it is killed. Node 11 has c'(11) > upper, so it is killed.
 Node 9 becomes the next E-node. Its children, nodes 12 and 13, are generated and added to the queue.
 c'(12), c'(13) and u(12), u(13) are calculated.
 upper = min(u(12), u(13)), that is upper = -38.
 Node 13 is killed because c'(13) > upper.
 The only remaining node is node 12. It has no children, and the search terminates.

