A Genetic Algorithm For The Vehicle Routing Problem
12-1993
Recommended Citation
Wester, Vickie Dawn, "A Genetic Algorithm for the Vehicle Routing Problem." Master's Thesis, University
of Tennessee, 1993.
https://trace.tennessee.edu/utk_gradthes/4850
To the Graduate Council:
I am submitting herewith a thesis written by Vickie Dawn Wester entitled "A Genetic Algorithm
for the Vehicle Routing Problem." I have examined the final electronic copy of this thesis for
form and content and recommend that it be accepted in partial fulfillment of the requirements
for the degree of Master of Science, with a major in Management Science.
A GENETIC ALGORITHM FOR
THE VEHICLE ROUTING PROBLEM
A Thesis
Presented for the
Master of Science
Degree
The University of Tennessee, Knoxville
TABLE OF CONTENTS

INTRODUCTION

CHAPTER
I. THE GENETIC ALGORITHM AND THE VEHICLE ROUTING PROBLEM
      The Vehicle Routing Problem
      The Genetic Algorithm
      The Genetic Algorithm Related to the Vehicle Routing Problem
      Data Format
      Encoding of Solutions
      Evaluating the Solutions
      Reproduction
II. EXPERIMENTATION RESULTS
      Penalization for Infeasibility
      Sampling Procedures
      Reproduction
      Parameters
      Infeasibility
      Duplicate Solutions
      Reverse Paths
      Feasibility vs. Infeasibility
      Gene Selection Parameter
      Summary
III. ANALYSIS
      Parameter Settings
      Initial Population
      Search Space
      Elitism
      Encoding of Solutions
      Crossover Methods
      Convergence
      The Final Program
      Results
      Proposed Improvements
      Conclusion
REFERENCES
APPENDICES
      APPENDIX 1. GENETIC ALGORITHM
      APPENDIX 2. REPRESENTATION OF TOURS
VITA

LIST OF TABLES
Introduction
Genetic algorithms first appeared in theory in the early 1970's, but
John Holland is said to have founded the field of genetic algorithms in
1975. In more recent years, work on genetic algorithms has been focused
on application [Davis 1991]. Holland's original idea in developing the
algorithm was to create a program which would adapt to its environment
[Goldberg 1988]. The genetic algorithm was first used in industry to
optimize the design of a communications network [Davis 1987]. The
following is a partial list of areas in which genetic algorithm performance
has been studied:
1. Davis (1985) - Job shop scheduling
2. Glover (1987) - Keyboard configuration systems
3. Goldberg (1983) - Optimizing gas pipeline systems
4. Grefenstette (1985) - Traveling salesman problem
5. Nygard and Kadaba (1990) - Multi-vehicle routing
problem [Nygard 1992].
Atidel Ben Hadj-Alouane (1992) at the University of Michigan successfully
used a genetic algorithm to solve multiple choice integer programs with
nonlinear relaxation. The algorithm successfully solved 100% dense
problems and had computation times superior to IBM's Optimization
Subroutine Library (OSL). Hadj-Alouane noted three advantages of the
genetic algorithm when compared to OSL after running the genetic
algorithm on three facility location problems:
1. The optimal solution was found for all three, yet less time was taken
than with OSL.
2. There was a small variation in solutions for different random seeds.
3. The GA was more scalable (i.e., the time needed to solve the problem
was predictable based on the size of the problem) [Hadj-Alouane and
Bean 1992].
Genetic algorithms have been shown to work effectively on problems
such as function optimization problems, but only recently has
experimentation moved into combinatorial optimization problems, such as
the Traveling Salesman Problem (TSP) or the Vehicle Routing Problem
[Suh and Gucht 1987]. The purpose of this research is to present and test a
genetic algorithm for the vehicle routing problem. In addition, this work
examines the obstacles encountered when applying GA's to vehicle routing
problems, as well as possible methods for handling them. The first chapter
gives an overview of the vehicle routing problem and genetic algorithm
and provides a discussion of how the two are related. Chapter 2 describes
key elements of the GA which were studied in order to gain an
understanding of their impact on algorithm performance. Chapter 3
presents a final version of the GA as well as other alternatives which may
be studied in future research. The results of this version of the GA are
also given for selected data sets which have appeared in various VRP
articles. These results are compared with the best known solutions
obtained by the other methods of solving the VRP.
CHAPTER 1
THE GENETIC ALGORITHM AND THE
VEHICLE ROUTING PROBLEM
Let \(Y_{mij}\) be a variable which equals 1 if vehicle \(m\)
goes from customer i to customer j and 0 otherwise. A formulation for the
VRP is given below:
(1)  \(\min \sum_{m}\sum_{i,j} d_{ij} Y_{mij}\)
(2)  s.t.  \(\sum_{m}\sum_{i \neq j} Y_{mij} + \sum_{m}\sum_{k} Y_{mjk} = 2\)  for all \(j\)
(3)  \(\sum_{j} Y_{m0j} = 1\)  for all \(m\)
(4)  \(\sum_{i \neq j} Y_{mij} \le 1\)  for all \(m, j\)
(5)  \(\sum_{i} Y_{mi0} = 1\)  for all \(m\)
(6)  \(\sum_{i} Y_{mij} - \sum_{k} Y_{mjk} = 0\)  for all \(m, j\)
(7)  \(\sum_{i \in S}\sum_{j \in S} Y_{mij} \le |S| - 1\)  for all subsets \(S\), for all \(m\)
(8)  \(\sum_{i,j} Y_{mij} w_{j} \le w\)  for all \(m\)
(9)  \(\sum_{i,j} (d_{ij} + s) Y_{mij} \le t\)  for all \(m\)
[Noon, et al 1991]. Constraint (2) ensures that exactly one vehicle visits
and leaves each customer. Each vehicle is forced to leave the depot by
constraint (3). Constraint (4) ensures that a vehicle does not visit a
particular customer more than once and constraint (5) ensures that the
vehicle returns to the depot [Noon, et al 1991]. Flow conservation for each
vehicle tour is enforced by constraint (6). Subtours are eliminated by
constraint (7). The capacity constraint (8) ensures that the total amount of
weight picked up by the vehicle does not exceed a weight limit of w.
Constraint (9) ensures that the total time on the route for each vehicle can
not exceed a time limit of t, where the time on a route is calculated by the
distance on the route plus the sum of all the stop times on the route.
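To make constraints (8) and (9) concrete, the short sketch below shows how the weight and time limits might be checked for a single vehicle's route. It is illustrative only: the function name, the data structures, and the use of straight-line (Euclidean) distances are assumptions, not part of the thesis code.

```python
import math

def route_feasible(route, coords, weights, depot, w_limit, t_limit, stop_time):
    """Check constraints (8) and (9) for one vehicle's route.

    route   - sequence of customer numbers visited by this vehicle
    coords  - dict: customer number -> (x, y)
    weights - dict: customer number -> weight picked up at that customer
    depot   - (x, y) of the depot
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Constraint (8): total weight picked up must not exceed the limit w.
    total_weight = sum(weights[c] for c in route)

    # Constraint (9): route time = travel distance + stop time at each customer.
    points = [depot] + [coords[c] for c in route] + [depot]
    travel = sum(dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    total_time = travel + stop_time * len(route)

    return total_weight <= w_limit and total_time <= t_limit
```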
"The vehicle routing problem is a hard combinatorial problem and
to this day, only relatively small VRP instances can be solved to
optimality." [Gendreau, et al 1991 ] There are four groups in which
heuristic methods for solving the VR P can be divided. They are
5
constructive, two-phase, incomplete optimization, and improvement
algorithms. The first of these four is the constructive algorithm in which
an unrouted city is selected to be added to the tour based on some criterion
[Gendreau, et al 1991]. One of the most common of these methods is the
Clarke and Wright method in which n back and forth routes between a city
and the depot are merged according to a savings criterion. The major
disadvantage with this method is the amount of time required to find a near
optimal solution. However, data structures can be used to reduce the
amount of time to run the algorithm [Gendreau, et al 1991].
There are four types of two-phase algorithms. The first of these is
the cluster first - route second method in which each vehicle is first
assigned the customers which it must visit. The TSP is then used to
sequence each of the routes. The TSP is a combinatorial optimization
problem in which a single vehicle leaves a depot and must visit each
customer exactly once and return to the depot. The objective is to sequence
the customers in such a way that the distance traveled is minimized. The
second method is the route first - cluster second approach in which the TSP
is first used to sequence the customers. The route is then broken into
feasible segments for each vehicle available. The third method is an
integer linear programming approach using the Generalized Assignment
Problem and the TSP. This approach was developed by Fisher and
Jaikumar (1981). Finally, the fourth method of two-phase algorithms is a
Lagrangean Relaxation Approach used by Noon, Mittenthal, and Pillai
(1991) [Gendreau, et al 1991]. The method relies on solving a Traveling
Salesman Subset Tour Problem with one additional constraint. The TSSP
is a variant of the TSP in which the constraints that require each customer
to be visited are relaxed. The idea is to have a dispatcher who assigns each
vehicle an initial customer to visit and then assigns reward values to every
other customer. The objective when preassigning these customers is to
maximize the minimum distance between any two of them. Each vehicle
driver then decides which customers to visit and the corresponding
sequence of visits. The objective of the dispatcher is to assign the rewards
so that each customer will be visited by exactly one vehicle. The major
difference between this approach and that of Fisher and Jaikumar is that the
dispatcher in the Fisher method decides which customers each driver visits,
and the driver is only responsible for sequencing the route. In the
Lagrangean Relaxation approach, the driver has the additional
responsibility of deciding which customers to visit [Noon, et al 1991].
The third heuristic method is Incomplete Optimization. This
approach uses an enumerative algorithm to find a good solution by means
of an incomplete search tree [Gendreau, et al 1991].
Finally, the fourth heuristic method used is Improvement Methods
which is the category in which tabu search falls. Among the tabu search
methods that exist are one developed by Pureza and Franca (1991) in which
cities are swapped between two routes and one developed by Semet and
Taillard (1991) in which a city is moved from one route to an alternate
route [Gendreau, et al 1991]. The algorithm developed by Gendreau,
Hertz, and Laporte inserts a node into a tour from another tour using a
generalized insertion procedure (GENI). A tour improvement procedure
which was also developed by Gendreau, Hertz, and Laporte, is used to
improve each route. Once a customer is taken out of a particular vehicle's
tour, it cannot be put back into that tour for a certain number of iterations.
One difference between this method and the other tabu search methods is
that it allows infeasible solutions whereas the others do not. An advantage
to using this method, called TABROUTE, is that the risk of converging to
a local optimum is reduced in two ways. The first is by allowing infeasible
solutions through the use of a penalty function. The second is by using
GENI to perform the insertion of the customer into a different route
[Gendreau, et al 1991].
Another improvement method which has successfully been used was
developed by Ibrahim Osman at the University of Canterbury [Osman
1993] and solves the vehicle routing problem using simulated annealing and
tabu search. This method finds a route by first using a heuristic followed
by an improvement method in which a portion of one route is exchanged
with a portion of a second route. An insertion/deletion procedure is used
to recalculate the objective value, and the 2-opt arc exchange heuristic of
Lin [Osman 1993] is used to correct any paths that are crossed. There are
two selection strategies used for selecting alternative solutions: best
improvement and first improvement. The tabu search consists of a
forbidding strategy, a freeing strategy, a short-term strategy, and a
stopping criterion. The forbidding strategy keeps a list of the moves which
are forbidden. The freeing strategy removes the moves from the tabu list
after a certain number of iterations. The short-term strategy uses an
aspiration criterion to overrule the tabu list and includes two possible
selection strategies: Best Admissible (BA) and First Best Admissible
(FBA). BA selects the move resulting in the greatest improvement or the
least nonimprovement. FBA selects the first move resulting in an
improvement in the objective value if one exists; otherwise, the best
nonimproving move is selected. The tabu list sizes are calculated as a
function of population size, number of vehicles, and capacity ratio of
required demands to vehicle capacities. The stopping criterion is based on
a maximum number of iterations in which the best solution does not
improve after the best solution was found. One potential drawback of tabu
search is that quality of the final solution depends on the initial solution.
Hence, the method sometimes finds a local optimum which is not close to the
global optimum. This is the same problem which Gendreau addressed in his
method by allowing infeasible solutions. Osman's method uses simulated
annealing (SA) to overcome this problem. SA accepts a nonimprovement
move based on a certain probability which is determined by a control
parameter which decreases according to a schedule [Osman 1993].
This chromosome represents a solution in which the sequence of customer
visits is 5, 1, 4, 3, 6, 2. Each position of this chromosome is called a gene,
and the value of each gene is called an allele. For example, 5 is the allele
of the first gene [Nygard 1992]. At each generation there are a number of
methods which can be used to produce a new population of solutions.
Although all genetic algorithms use some form of reproduction, crossover,
and mutation, there are many different ways of carrying out these
operations. The next section describes some of the alternate methods.
First, there are a number of different ways to select the
chromosomes to be added to the mating pool. The challenge is to select the
parents in such a way that the good parents reproduce enough to survive,
but not so much as to cause the population to prematurely converge
[DeJong 1985]. There is still disagreement among researchers on the best
method of parent selection. Four of the most common methods are listed
below.
1. Random selection of the chromosomes.
2. Roulette sampling in which the probability of selecting a
particular chromosome increases with its fitness.
3. Rank based sampling which uses the roulette wheel to select two
chromosomes, of which the one with the best fitness is added to
the mating pool.
4. Tournament sampling in which solutions are sequentially chosen
with the one having the higher fitness being added to the mating
pool [Nygard 1992].
Next, there are several different methods of crossing over the two
parent chromosomes. There is the one-point crossover in which a point on
the chromosome is randomly selected, and the two chromosomes exchange
the genes following this point. The disadvantage of using the one-point
crossover is that if good genetic material is at both ends of the
chromosome, these two good traits will be separated during the crossover.
The two-point crossover solves the one-point crossover problem by
enabling two genes on opposite ends of the chromosome to remain on the
same chromosome after the crossover. This is accomplished since two
points are randomly selected on the chromosome, and the genes between
these two points are exchanged between the two chromosomes. However,
this still may present a problem if, for example, all of the good traits are
on one of the chromosomes [Davis 1991]. The best crossover method
seems to be the uniform crossover in which a random number (between 1
and 100) is generated for each gene. If this number is less than a certain
user defined number (which is defined at the beginning of the GA as the
gene selection parameter), the child will receive this gene from the first
parent. If the number is greater than the gene selection parameter, the
child receives the gene from the second parent. Unlike the one-point and
two-point crossovers, this crossover method has the ability to combine
good traits irrespective of where they are located on the chromosome
[Davis 1991].
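A minimal sketch of the uniform crossover just described, assuming chromosomes are plain Python lists and the gene selection parameter is an integer between 1 and 100; the function name is illustrative rather than taken from the thesis code.

```python
import random

def uniform_crossover(parent1, parent2, gene_selection=50):
    """Uniform crossover: each gene is taken from the first parent when a
    random number in 1..100 falls below the gene selection parameter, and
    from the second parent otherwise."""
    child = []
    for g1, g2 in zip(parent1, parent2):
        if random.randint(1, 100) < gene_selection:
            child.append(g1)   # gene inherited from the first parent
        else:
            child.append(g2)   # gene inherited from the second parent
    return child
```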
Also, there are differences in the methods of producing mutations.
One method is to simply mutate a single gene at a certain rate (i.e. 1 out of
every 1000 genes). However, since mutation is the main means of
producing variation in the population, mutating a single gene does not seem
to be very efficient. Another, more efficient method is to mutate the entire
chromosome for a low percentage of the chromosomes in the population
[Davis 1991]. This method of mutation is referred to as immigration [Bean
1992]. This seems to provide more diversity in the population.
Two very important links between the genetic algorithm and the problem
to be solved are the method of evaluating new solutions and the method
used to encode a solution [Davis 1991]. These two aspects are crucial
because they must be tailored to the problem being solved. The other
aspects of the GA such as parameter values, selection methods, crossover
methods, and mutation methods do not represent the problem being solved
and may be exactly the same for a variety of different problems. The
simple genetic algorithm seems to be powerful despite the lack of
knowledge of the problem to be solved [Goldberg and Richardson 1987].
The following sections describe the
format of the data set to be used by the genetic algorithm, the method of
encoding a solution, the method of evaluating a solution, and the method of
reproduction.
Data Format
The genetic algorithm code first reads VRP problem data in the
following format. The first line of any problem data set contains the
following information:
1. The number of customers + 1 (for the depot)
2. The weight limit of each vehicle
3. The number of vehicles
4. The amount of time at each stop
5. The time limit of each route
The next n lines of the data set (where n is the number of customers)
contain the following:
1. The x coordinate of the customer
2. The y coordinate of the customer
3. The amount of weight to be picked up at the customer
4. The customer number
The last line of the data set contains the x and y coordinates of the depot.
Figure 1.1 is an example data set for the 32 customer, 3 vehicle VRP. The
first line indicates that there are 32 customers plus 1 for the depot, there is
a weight limit of 38000 units per vehicle, there are 3 vehicles, there is a
stop time of 20 units of time at each stop, and there is a time limit of 1000
units per vehicle. Lines 2 through 33 consist of the x and y coordinates of
each customer, the amount of weight to be picked up at each customer, and
the customer number. The last line indicates the x and y coordinates of the
depot.
33 38000 3 20 1000
10 260 3500 1
65 248 1260 2
22 255 629 3
50 249 250 4
205 254 2267 5
275 34 447 6
269 262 1847 7
293 269 1437 8
333 212 3720 9
304 202 1115 10
286 207 273 11
288 191 5494 12
295 235 1944 13
467 67 713 14
484 179 1500 15
447 189 3585 16
215 204 140 17
313 382 25705 18
267 316 479 19
391 196 17456 20
399 122 1143 21
363 187 1919 22
355 236 826 23
378 203 3264 24
458 218 1570 25
383 181 2215 26
240 326 1239 27
273 349 580 28
278 374 5000 29
352 271 100 30
324 295 201 31
249 250 6747 32
250 200 0 DEPOT
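The sketch below shows one way a data set in this format might be read. It assumes whitespace-separated integer fields as in the listing above; the function and variable names are illustrative and are not taken from the thesis code (Appendix 1).

```python
def read_vrp_data(path):
    """Read a VRP data set in the format described above.

    Line 1: (number of customers + 1), weight limit, number of vehicles,
            stop time, time limit.
    Next n lines: x, y, weight, customer number.
    Last line: x and y coordinates of the depot.
    """
    with open(path) as f:
        lines = [ln.split() for ln in f if ln.strip()]

    n_plus_1, w_limit, n_veh, stop_time, t_limit = map(int, lines[0])
    n_customers = n_plus_1 - 1

    coords, weights = {}, {}
    for x, y, wt, cust in lines[1:1 + n_customers]:
        coords[int(cust)] = (int(x), int(y))
        weights[int(cust)] = int(wt)

    depot = (int(lines[1 + n_customers][0]), int(lines[1 + n_customers][1]))
    return n_customers, w_limit, n_veh, stop_time, t_limit, coords, weights, depot
```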
Encoding of Solutions
The algorithm uses a method of encoding a solution called random
keys which was developed by Bean [Bean 1992]. Random keys is a
technique designed to address the problem of producing infeasible solutions
during reproduction because some customers are visited more than once
while some are not visited at all [Bean 1992]. To illustrate the problem of
infeasibility during reproduction consider a problem with only one vehicle
and six customers. When the solution is encoded using the customer
number, reproduction can lead to infeasible offspring as shown in the
example below. Consider two parents whose solutions are encoded as
sequences of customer numbers.
parent 1: (5 1 4 3 6 2)
parent 2: (4 5 1 3 6 2)
In parent 1, the sequence of customer visits is 5, 1, 4, 3, 6, 2. In parent 2,
the vehicle visits customers in the order 4, 5, 1, 3, 6, 2. When these two
undergo reproduction, a child is produced by selecting a gene from each
parent with a certain pre-defined probability. For each gene, a random
number is generated. If the random number is greater than the probability
assigned to parent number 1, the gene is taken from parent number 2;
otherwise, the gene is taken from parent number 1. If, for example, the
probability of selecting a gene from parent 1 is .70 and .30 from parent 2,
the following situation might occur:
random number: .86 .55 .40 .12 .73 .23
parent number: 2 1 1 1 2 1
child: 4 1 4 3 6 2
The child produced is infeasible since customer number 4 is visited twice
during the tour and customer number 5 is not visited at all [Bean 1992].
Random keys is designed to prevent this type of reproduction
infeasibility. The idea behind random keys is to generate a random
number between 0 and 1 for each customer in a solution. The order in
which the customers are visited is represented by sorting the random
numbers in ascending order. For example, in the previous problem, parent
1 might be represented as follows:
(.31 .95 .76 .51 .15 .85)
where customer number 5 is the first visited, customer number 1 is the
second visited, etc. The two parents would then be represented as follows:
parent 1: (.31 .95 .76 .51 .15 .85)
parent 2: (.33 .83 .49 .08 .25 .71)
These two parents reproduce as follows:
random number: .86 .55 .40 .12 .73 .23
parent number: 2 1 1 1 2 1
child: .33 .95 .76 .51 .25 .85
The new child solution is now feasible, with the order in which the
customers are visited represented by the order of the random numbers
[Bean 1992].
The problem of representing multiple vehicles in a solution can also
be solved using random keys. A random integer, between 1 and the
number of vehicles, is added to each random number. The integer
represents the vehicle which visits that customer. For example, if two
vehicles are available, the solution might be represented as:
(2.31 2.95 1.76 1.51 1.15 2.85)
where vehicle 1 visits customer 5 followed by customer 4, then customer 3,
and vehicle 2 visits customers 1, 6, and 2.
According to Bean, "we have successfully generalized this approach
to the job shop with precedence, release times, sequence dependent setups,
and nonregular measures such as a sum of weighted earliness and
tardiness" [Bean 1992]. Random keys also appears to be an efficient means
of encoding solutions for the vehicle routing problem; therefore, it is the
method used for our research.
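As an illustration of the random keys idea, the sketch below generates a chromosome and decodes it into one route per vehicle. It is a hedged reconstruction of the scheme described above, not Bean's or the thesis's actual code; the function names are illustrative.

```python
import random

def random_chromosome(n_customers, n_vehicles):
    """One random key per customer: a random vehicle number (integer part)
    plus a random fraction that determines the visit order."""
    return [random.randint(1, n_vehicles) + random.random()
            for _ in range(n_customers)]

def decode(chromosome):
    """Decode a random-keys chromosome into one route per vehicle.

    Customer i (1-based) is served by the vehicle given by the integer part
    of gene i; within a vehicle, customers are visited in ascending order of
    their keys."""
    routes = {}
    for customer, key in enumerate(chromosome, start=1):
        routes.setdefault(int(key), []).append((key, customer))
    return {veh: [c for _, c in sorted(pairs)] for veh, pairs in routes.items()}

# The two-vehicle example from the text:
# decode([2.31, 2.95, 1.76, 1.51, 1.15, 2.85]) -> {2: [1, 6, 2], 1: [5, 4, 3]}
```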
Reproduction
The method of reproduction is another important aspect of the
genetic algorithm. The method which Bean uses in his algorithm keeps a
consistent number of solutions in the population throughout the algorithm.
A certain percentage of the top solutions are copied to the next generation.
This is called elitism or clonal propagation and enables the best solutions to
be preserved [Davis 1991]. Another percentage of the new generation is
produced by mutation. The method of mutation used is to randomly
generate completely new chromosomes by the same method in which the
population was initialized at the beginning of the algorithm. Since the
elitist strategy is being used, a fairly high mutation rate must be used in
order to maintain diversity in the population [Bean 1992]. The remaining
percentage of solutions in the new population are produced through
reproduction. The parent chromosomes are selected randomly with equal
probability of being selected and then the uniform crossover method is
used for best results.
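A sketch of one generation under this reproduction scheme. It assumes lower evaluation values are better and a fixed population size; the callables `new_random` and `crossover` stand in for the random-chromosome generator and uniform crossover sketched earlier, and the parameter names are illustrative.

```python
import random

def next_generation(population, evaluate, new_random, crossover, nrep, nmut):
    """One generation keeping a constant population size:
      - the nrep best solutions are copied unchanged (elitism),
      - nmut completely new random chromosomes are added (immigration),
      - the remainder come from crossover of randomly chosen parents.
    `evaluate` returns a fitness (lower is better)."""
    ranked = sorted(population, key=evaluate)
    new_pop = ranked[:nrep]                              # elitism
    new_pop += [new_random() for _ in range(nmut)]       # immigration-style mutation
    while len(new_pop) < len(population):
        p1, p2 = random.sample(population, 2)            # equal selection probability
        new_pop.append(crossover(p1, p2))
    return new_pop
```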
EXAMPLE 1.1.
In order to illustrate some of the important aspects of the genetic
algorithm which were discussed in this chapter, consider an example VRP
which consists of 5 customers and 2 vehicles with a best known solution of
63. Suppose at the beginning of the algorithm the following parameters
are defined:
5 members in the population
25 generations
1 solution copied into the next generation
1 solution mutated each generation
Gene selection parameter of 50
At the beginning of the algorithm an initial population is randomly
generated. The solutions are evaluated and ordered so that the solution
with the best evaluation is first and the solution with the worst evaluation is
last. Suppose that after the solutions are ordered, the initial population is
as follows:
Solution Evaluation
#1: (1.53 2.14 2.07 1.10 1.76) 97
#2: (1.08 1.56 2.58 2.13 2.33) 113
#3: (2.82 2.91 2.67 2.15 1.01) 125
#4: (2.61 1.50 1.39 2.43 1.59) 156
#5: (1.89 2.17 1.25 2.95 1.52) 208
Solution #1
The best solution from the initial population is copied to form one member
of the new population. (The number to be copied was defined at the
beginning of the algorithm to be 1.)
(1.53 2.14 2.07 1.10 1.76) Evaluation = 97
Solution #2
Solutions #2 and #5 are randomly selected from the initial population to
reproduce with each other. Reproduction occurs in the following manner
using uniform crossover.
Solution #3
Solutions #1 and #2 are randomly selected from the initial population to
reproduce with each other. Reproduction occurs in the following manner.
Random number: 58 64 19 36 79
Parent #1: (1.53 2.14 2.07 1.10 1.76)
Parent #2: (1.08 1.56 2.58 2.13 2.33)
Child: (1.08 1.56 2.07 1.10 2.33) Evaluation = 150
Solution #4
Solutions #3 and #1 are randomly selected to reproduce with each other.
Random number: 70 18 83 95 34
Parent #1: (2.82 2.91 2.67 2.15 1.01)
Parent #2: (1.53 2.14 2.07 1.10 1.76)
Child: (1.53 2.91 2.07 1.10 1.01) Evaluation = 90
Solution #5
This solution is formed by mutation which means a random number is
generated for each gene on the chromosome. (The number of solutions to
be mutated was defined at the beginning of the algorithm to be 1.) The
following numbers are generated:
(2.54 2.99 1.76 2.11 1.69) Evaluation = 200
The new population, ordered by evaluation, is then as follows:
Solution Evaluation
#1: (1.53 2.91 2.07 1.10 1.01) 90
#2: (1.53 2.14 2.07 1.10 1.76) 97
#3: (1.08 2.17 2.58 2.13 1.52) 128
#4: (1.08 1.56 2.07 1.10 2.33) 150
#5: (2.54 2.99 1.76 2.11 1.69) 200
After several more generations, the population might have the following
appearance.
Solution Evaluation
#1: (1.53 2.91 2.07 1.10 1.01) 90
#2: (1.53 2.91 2.07 1.10 1.01) 90
#3: (1.53 2.91 2.07 1.10 1.76) 90
#4: (1.23 1.14 2.19 2.57 1.18) 97
#5: (2.89 2.75 1.28 2.15 1.87) 210
Notice that the first two solutions are identical, and the third solution is only
different on the fifth gene. The last two chromosomes are now the only
means for diversity in this population. This example will be referred to
later in the thesis to demonstrate the dynamics of the population under
certain conditions.
CHAPTER 2
EXPERIMENTATION RESULTS
In the previous chapter the VRP and the GA were described, and an
example was given to illustrate some of the important aspects. The purpose
of chapter 2 is to describe some of the experimentation which was
performed on the GA. This chapter should provide an idea of the various
ways in which the GA can be altered and the effects these alterations may
have on the results. Figure 2.1 provides a summary of the aspects which
were altered during the experimentation.
The purpose of the experimentation was to produce a version of the
genetic algorithm which would find a nearly optimal solution for most
vehicle routing problem data sets of the format discussed in Chapter 1.
There were two data sets most commonly used for the experimentation.
The first data set was a 50 customer, 6 vehicle, VRP with a weight limit of
160 per vehicle, a time limit of 200 per vehicle and a stop time of 10. The
best known solution from the literature has a total cost (distance) of 555.43
[Gendreau, et al 1991]. The best solution known for this same data set with
only 5 vehicles and no time constraint is 524.61 [Gendreau, et al 1991].
The second data set had 32 customers and 3 vehicles with a weight limit of
38000 per vehicle, a time limit of 1000 per vehicle, and a stop time of 20.
The best known solution for this data set has a total rounded cost of 2086.
Without the time constraint, the best known solution to this problem is
2009.31 [Noon, et al 1991]. There were several procedures in the GA
which were believed to have some effect on the performance of the
algorithm; namely, calculating the fitness of the chromosome, infeasibility
of solutions, sampling, reproduction, operator fitness, format of the
solutions (duplicates, reverse paths), and the gene selection parameter value
(probability of selecting a gene from chromosome number 1).
First, the program was run using Bean's algorithm with the
exception of a few changes necessary to run the algorithm on the vehicle
routing problem rather than the machine scheduling problem. These
changes primarily involved the input of the data set and the evaluation of
solutions. Also, Bean suggested that a few of the parameter values be
changed. The parameter values were as follows:
Number of generations - 2000
Number of members in the population - 100
Number of chromosomes repeated in the next generation - 20
Number of mutations each generation - 2
Gene selection parameter value - 70.
The method of sampling and reproduction remained unaltered. The
procedure for calculating chromosome fitness was to calculate the total
distance traveled on the tour, with no penalty for infeasibility of the
solution, and assign this value as the fitness value. (Note: For the initial
experimentation, rounded solutions are given. The exact solutions are
given for the final version of the program in the results section of Chapter
3.) This version produced a solution with a cost of 555 for the 50
customer, 6 vehicle problem which was a good solution; however, it was
infeasible with 3 vehicles exceeding the time limit and one exceeding the
weight limit. The solution for the 32 customer, 3 vehicle problem was
2014; however, it was also infeasible with 1 vehicle over the time limit and
2 over the weight limit. Since the algorithm seemed to be producing good
results with the exception of infeasibility, the first task undertaken was to
develop a method of penalizing for infeasibility in order to allow the
feasible solutions to move to the top of the gene pool.
Penalization for Infeasibility
The first approach assigned the same large penalty value to every
infeasible solution. This did not allow for
effective reproduction because the same large number was assigned to each
solution regardless of its degree of infeasibility. At the start of the
algorithm virtually all solutions are infeasible since they have all been
randomly generated. Therefore, the "survival-of-the-fittest" philosophy
with this type of penalization is not very effective since no solution's fitness
is better than any other. The next method of assigning a fitness to the
solution was to add a penalty to the total distance traveled on the tour based
on type and degree of infeasibility of the solution. The weight penalty was
calculated by summing the amount each vehicle exceeded the weight limit
and raising this value to some power. The total penalty was evaluated by
adding the weight penalty to the time penalty which was calculated by
summing the amount each vehicle exceeded the time limit and raising this
value to the same power. First, a power of two was used which produced a
solution of 617 for the 50 customer problem; however, the solution was
still slightly infeasible with one vehicle being one unit over the time limit.
The solution to the 32 customer problem was 2317 and was feasible. Using
a power of three, the solution to the 50 customer problem was 654 and was
feasible, and the solution to the 32 customer problem was 2267 and was
also feasible. Finally, the solution to the two problems using a power of
four was 639 for the 50 customer problem and 2344 for the 32 customer
problem with both solutions being feasible. The next method of
penalization was the same as the previous method using a power of 2,
except that the penalty was multiplied by the number of vehicles which
were infeasible with respect to weight plus the number of vehicles which were
infeasible with respect to time. This method gave an improvement in the
solutions of both data sets. The solution to the 50 customer, 6 vehicle
problem was 601, and the solution to the 32 customer, 3 vehicle problem
was 2298.
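A sketch of the penalized fitness just described (excess weight and excess time summed across vehicles, each raised to a power, with the sum multiplied by the number of infeasible vehicles). The helper callables `route_distance`, `route_weight`, and `route_time` are assumed to exist and are not part of the thesis code.

```python
def penalized_fitness(routes, route_distance, route_weight, route_time,
                      w_limit, t_limit, power=2):
    """Fitness = total distance + penalty for infeasibility.

    The weight (time) penalty is the summed excess over the weight (time)
    limit across all vehicles, raised to `power`; the sum of both penalties
    is then multiplied by the number of vehicles violating either limit, so a
    feasible solution's fitness reduces to its total distance."""
    total_distance = sum(route_distance(r) for r in routes)

    weight_excess = sum(max(0, route_weight(r) - w_limit) for r in routes)
    time_excess = sum(max(0, route_time(r) - t_limit) for r in routes)

    n_bad = (sum(1 for r in routes if route_weight(r) > w_limit)
             + sum(1 for r in routes if route_time(r) > t_limit))

    penalty = (weight_excess ** power + time_excess ** power) * n_bad
    return total_distance + penalty
```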
Sampling Procedures
Next, the sampling procedure was changed to observe the effects this
would have on the solution. The first sampling procedure tested was the
roulette wheel sampling method [Davis 1991]. Below is a list of the steps
followed in this method (a code sketch of the steps follows the list):
1. Find the largest solution value of the n solutions in the
population, lval.
2. For each solution's value, sval(i), find fval(i) = lval -
sval(i). This is a measure of the fitness of solution i relative
to the other solutions in the population.
3. Assign each solution a range of numbers, the size of which
corresponds to the size of its relative fitness value:
\(\mathrm{Range}(i) = [\sum_{k=1}^{i-1} \mathrm{fval}(k),\ \sum_{k=1}^{i} \mathrm{fval}(k)]\)
4. Generate a random number between 1 and \(\sum_{k=1}^{n} \mathrm{fval}(k)\).
5. Pick the solution whose range contains this random number.
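A sketch of these five steps for a minimization problem. The random number is drawn over the range 0 to the total relative fitness as an implementation convenience, and all names are illustrative.

```python
import random

def roulette_select(population, evaluate):
    """Roulette wheel sampling for minimization: the relative fitness of
    solution i is lval - sval(i), where lval is the worst evaluation."""
    svals = [evaluate(sol) for sol in population]
    lval = max(svals)
    fvals = [lval - s for s in svals]            # step 2: relative fitness
    total = sum(fvals)
    if total == 0:                               # all solutions evaluate equally
        return random.choice(population)
    r = random.uniform(0, total)                 # step 4: random number
    cumulative = 0.0
    for sol, f in zip(population, fvals):
        cumulative += f                          # steps 3 and 5: locate the range containing r
        if r <= cumulative:
            return sol
    return population[-1]
```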
On the data sets tested, the performance of the roulette wheel sampling
procedure was inferior to the performance of the sampling procedure in
which all solutions had equal weight. The best solution for the 50 customer
problem was 807 and was infeasible. A tournament sampling method also
was used in which eight solutions were chosen to reproduce with each other
resulting in four solutions which reproduced to form two. These two
solutions then reproduced to form one solution which was added to the
population. This procedure was only run on the 50 customer problem due
to time limitations. The resulting solution was relatively good, with a value
of 602, and was feasible; however, the extensive amount of time it took to run
the algorithm with this sampling procedure made it very impractical.
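The bracket-style tournament described above (eight solutions crossed pairwise down to one) might be sketched as follows; `crossover` stands in for the uniform crossover sketched in Chapter 1, and the function name is illustrative.

```python
import random

def tournament_reproduce(population, crossover):
    """Eight randomly chosen solutions are crossed pairwise to give four
    children, those give two, and those give the single solution that is
    added to the population."""
    layer = random.sample(population, 8)
    while len(layer) > 1:
        layer = [crossover(layer[i], layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]
```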
Reproduction
Two alternate methods of reproduction were tested to determine
their impact on performance. These two methods, single crossover and
double crossover reproduction, were described in Chapter 1 along with
their disadvantages. Although these methods have disadvantages, the
experiments were run in order to observe the changes in the results by
using these methods. The results of the double crossover method were
significantly better than the single crossover; however, it produced a
solution of 2370 for the 32 customer problem and 849 for the 50 customer
problem, which were significantly worse than with the uniform crossover
previously used.
Parameters
Two important parameters which seem to significantly affect the
results of the algorithm are nrep and nmut. The parameter nrep controls
how many chromosomes are copied from one generation to the next. The
parameter nmut controls how many solutions will be mutated in each
generation. A key consideration for nrep is that it must be high enough to
keep the best solutions but not so high as to cause the population to
converge prematurely. For instance, consider Example 1.1. If the number
of solutions to be copied is increased from 1 to 2, the population might
have the following appearance after the 10th generation.
Solution Evaluation
#1: (1.53 2.91 2.07 1.10 1.01) 90
#2: (1.53 2.91 2.07 1.10 1.01) 90
#3: (1.53 2.91 2.07 1.10 1.01) 90
#4: (1.53 2.91 2.07 1.10 1.18) 101
#5: (2.89 2.75 1.28 2.15 1.87) 210
By setting nrep too high, the population may prematurely converge more
quickly than it would have otherwise. However, if the number to be copied
was reduced to 0, the best solution might be lost.
The mutation operator, nmut, must be high enough to induce
variation but not so high as to cause the population to converge towards
poor solutions. Consider Example 1.1 again. Suppose the number of
mutations per generation is increased from 1 to 2. After 10 generations,
the population might have the following appearance.
Solution Evaluation
#1: (1.53 2.91 2.07 1.10 1.01) 90
#2: (1.09 1.51 2.07 1.10 2.58) 102
#3: (2.54 2.32 1.14 1.39 1.21) 125
#4: (1.67 1.28 2.19 2.57 1.18) 153
#5: (2.89 2.75 1.28 2.15 1.87) 210
In this situation, the increased number of mutations provides such diversity that the
better solutions are overwhelmed by the poorer solutions which were
produced by mutation.
According to Davis, the speed of convergence of the population and
the nearness of the individuals to local optima are related to these two
parameter values [Davis 1991]. In order to prevent premature
convergence of the population, experiments were run to determine which
mixture of parameter values gave the best results for the two data sets.
The version of the GA used for these experiments included the sampling
procedure in which all solutions had an equal probability of being selected
and employed uniform crossover as the reproduction method. The method
of penalization for infeasibility for these experiments took into
consideration the amount the time and weight limits were exceeded and the
number of vehicles not meeting the constraints. After testing the operators
over a range of values, the best results were found when nrep was set at 20,
and nmut changed as a function of the generation count. The operator,
nmut, was initially set at 2 and changed to 5 at generation count 500 and
then to 8 at generation count 1000. The best solution for the 32 customer
problem was found to be 2321, and the best solution for the 50 customer
problem was 587, but was infeasible because one vehicle was over the time
limit by one unit of time. Another method tested was to change the nmut
parameter value based on the number of generations without an
improvement in the best solution. For this version, the nmut parameter
was increased by ten each time the number of generations without
improvement exceeded 100. However, the results for this version were not
an improvement over the previous results.
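The two mutation schedules described above can be sketched as follows. The starting value of 2 for the stagnation-based schedule and its exact form are assumptions; the thesis only states that nmut was increased by ten each time 100 generations passed without improvement.

```python
def nmut_schedule(generation, generations_without_improvement):
    """Two schedules for the nmut parameter.

    Schedule 1 follows the generation count: 2, then 5 from generation 500,
    then 8 from generation 1000.  Schedule 2 (assumed base of 2) grows by 10
    for every 100 generations without an improvement in the best solution."""
    if generation < 500:
        by_count = 2
    elif generation < 1000:
        by_count = 5
    else:
        by_count = 8
    by_stagnation = 2 + 10 * (generations_without_improvement // 100)
    return by_count, by_stagnation
```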
Infeasibility
Several methods for handling infeasibility were examined. One
experiment involved replacing all infeasible solutions with the best solution
when the number of infeasible solutions fell below a certain pre-defined
number. The results of the method were satisfactory but not very
encouraging because there were no consistent improvements over previous
methods. The primary reason for using this criterion for replacement was
to avoid certain problems which would be associated with other criteria.
For example, if the infeasible solutions were replaced as a result of the
generation count, all of the solutions may still be infeasible at that
particular generation. In this situation, all solutions in the population
would have been replaced with the best solution.
A second experiment allowed the program to run through the first
2000 generations and then took the best solution and replicated it 100
times, effectively replacing the current population with these replicas. The
algorithm was then run through another 2000 generations. Again, the
results of the attempt did not significantly change from the previous
results. However, one interesting discovery was made while running the
second experiment. After observing the populations of successive
generations, it became apparent that an increasing number of solutions became
identical to the best solution until the majority of the population had
converged to this solution. This would explain why replication of solutions
did not produce significantly different results. The population naturally
converges and by the 2000th generation most of the solutions are already
identical to the best solution. This discovery led to the next experiment
which removes duplicate solutions from the population. This is because
once the mating pool is dominated by a single solution, there is insufficient
variation in the population to allow improvement over the current best
solution.
Duplicate Solutions
This experiment involved removing duplicate solutions from the
population. In the first experiment, an entire chromosome was mutated if
it was found to be a replica of one which already existed in the population.
This version did not show improvement; therefore, a second version was
developed. In this version, rather than mutating the entire chromosome of
the replicates, two genes on the chromosome were randomly selected to be
mutated. This caused a significant increase in the amount of time to run
the algorithm which was a key disadvantage of removing duplicate
solutions. The program took significantly longer to run since each solution
produced must be checked to make sure it does not already exist. Because
of this time factor, the population size had to be decreased from 100 to 50.
There were no significant improvements in the solution. The only
advantage this version displayed over the other versions was for the 50
customer, 6 vehicle problem. After 4000 generations, a feasible solution
was found with a value of 580, the best feasible solution which had been
found in the experimentation up to this point.
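A sketch of the second duplicate-handling variant (re-drawing two randomly chosen genes of any replica), assuming random-keys chromosomes stored as mutable lists; the names and details are illustrative.

```python
import random

def perturb_duplicates(population, n_vehicles, genes_to_change=2):
    """Whenever a chromosome duplicates one already seen in the population,
    two randomly chosen genes are re-drawn (a new vehicle number plus a new
    fraction) until the chromosome is unique."""
    seen = set()
    for chrom in population:
        while tuple(chrom) in seen:
            for pos in random.sample(range(len(chrom)), genes_to_change):
                chrom[pos] = random.randint(1, n_vehicles) + random.random()
        seen.add(tuple(chrom))
    return population
```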
Reverse Paths
Another version of the algorithm was developed to ensure against
reverse paths. For example, problems can occur if one solution represents
a vehicle making a tour, and another solution represents the vehicle making
the same tour only the customers are visited in reverse order from the first
one. If these two solutions reproduce with each other, the child will
probably not represent an efficient solution even though both parents may
have represented good solutions. When the program was run which
checked for reverse paths, the results again did not show improvement, and
the time factor increased significantly. The population size had to again be
reduced to 50 in order to decrease the time to run the algorithm.
solutions feasible. This version produced the solutions 597 and 2321 for
the 50 and 32 customer problems respectively.
without improvement exceeded 100. The next version was to change the
gene selection parameter value by 10 each time the number of generations
without improvement exceeded 100. Neither of these versions showed
improvement.
Summary
In summary, there were some aspects of the experimentation which
provided improvements to the algorithm obtained from Bean. Following
are a few of these aspects.
1. Penalizing infeasible solutions
2. Changing the number of mutations as the number of generations
increases
3. Ensuring all feasible solutions evaluate better than all infeasible
solutions
The gene selection parameter was also observed to be an important factor
in determining the solution for a particular problem. However, no
consistency was found for using the same gene selection parameter over a
number of different problems.
CHAPTER 3
ANALYSIS
Parameter Settings
The settings of the parameter values appear to have a great influence
on the genetic algorithm for the VRP. Parameters that are commonly
known to have significant effects on the outcome of the algorithm are
population size, crossover rate and mutation rate [Schaffer, et al 1989]. In
addition to these, an additional parameter of interest in this study was the
gene selection parameter. For the 32 customer, 3 vehicle problem with
both time and weight constraints, the best solution found with the gene
selection parameter set at 50 was 2261, much larger than the current best
solution known of 2086. However, using the same algorithm with the gene
selection parameter being dropped to 40, the solution improved to 2130,
which is within about 2% of the best known solution. Lawrence Davis
developed a procedure which evaluates the effectiveness of the parameter
settings on a particular problem and changes the parameters accordingly to
produce the best results for the problem. According to Davis, "Genetic
algorithms are stochastic, and the same parameter settings used on the same
problems by the same genetic algorithm generally yield different results.
A consequence of this fact is that it can take a tremendous amount of
computer time to find good parameter settings across a number of
problems." [Davis 1989]. If a method could be developed for determining
the best parameter values for a particular problem, the performance of the
genetic algorithm should improve significantly.
There is also concern about whether the parameter values should
change during the run of the genetic algorithm and what should initiate the
change [DeJong 1985]. According to a study by DeJong: "Increasing the
population size was shown to reduce the stochastic effects [of random
sampling on a finite population] and improve long-term performance at the
expense of slower initial response..., and reducing the crossover rate
resulted in an overall improvement in performance, suggesting that
producing a generation of completely new individuals was too high a
sampling rate." [Schaffer, et al 1989].
This led to a set of experiments involving changing the parameter
values on the genetic algorithm in order to observe the eff ects these
changes would have on the results of the VRP data sets selected.
Population size was one parameter which, when altered, had a consistent
effect on the results. All previous experiments discussed in earlier chapters
used a population size of 100. Consistently, for all data sets tested,
increasing the population size to 200 gave better results and decreasing the
population size to 50 gave worse results than the population size of 100.
This seemed to be consistent for small problems as well as large
problems.
Another parameter tested was the mutation parameter. For the final
version of the program, an additional method of incorporating mutation
into the genetic algorithm was used along with the method discussed
earlier. In the earlier method, a certain number of completely new
solutions are randomly generated in each generation. This new method
works by performing a count every tenth generation to determine how
much the population has converged. For each gene of the best solution, the
allele is compared to the corresponding allele on each of the other solutions
in the population. Each time an allele is found to be identical to the allele
on the best solution, the counter is incremented by 1. If this counter
exceeds a certain number (70 was used for this program), then randomly
replace a certain number of these genes (20 was used in this case). The top
solutions, which were automatically copied into the next generation, were
excluded from this random replacement. The objective was to maintain
diversity in the population and help to prevent premature convergence. It
was observed for the 32 customer problem that after only 30 generations
there were 6 genes which were repeated on at least 70 chromosomes, and
this number continued to increase until it stabilized in the range between 28
and 32. In addition to this method of mutation, another method was used
which has previously been mentioned. This method involved increasing the
number of mutations as the number of generations increased. This method
of increasing the number of mutations did not seem to work well in
combination with the method of mutating converged alleles. For all of the
data sets for which this combination was tested, the results were either
worse than the results when the program was run without this change, or
were infeasible where the previous results had been feasible.
In experimenting with the number of best solutions copied into the
next generation, it appears that there is no consistent effect on the result by
increasing or decreasing this number. This is probably because the
solutions have such a tendency to converge that they maintain themselves
without being copied into the next generation. However, for these data sets
the results seemed to be more consistent with this number set at 20, so this
is the number used for the experimentation.
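A sketch of the converged-allele mutation described above, assuming the population is kept sorted best-first, that chromosomes are mutable lists of random keys, and that copied alleles compare exactly equal; the names and the equality test are illustrative simplifications.

```python
import random

def break_converged_alleles(population, nrep, n_vehicles,
                            threshold=70, n_replace=20):
    """Every tenth generation: for each gene of the best solution, count the
    chromosomes carrying an identical allele; if the count exceeds
    `threshold`, re-draw that gene on up to `n_replace` of them, skipping
    the top `nrep` (elite) solutions."""
    best = population[0]
    for pos, allele in enumerate(best):
        count = sum(1 for chrom in population if chrom[pos] == allele)
        if count > threshold:
            matching = [c for c in population[nrep:] if c[pos] == allele]
            for chrom in random.sample(matching, min(n_replace, len(matching))):
                chrom[pos] = random.randint(1, n_vehicles) + random.random()
    return population
```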
Initial Population
Another aspect of the genetic algorithm which significantly affects
the final solution is the generation of the initial population. Liepins, et al
studied how the initial population affected the results in their
experimentation with the crossover method for the TSP. They discovered
that by changing the initial population, a 13% to 17% variation was
observed with a conventional crossover, and an approximately 8%
variation was observed with a greedy crossover [Liepins, et al 1987]. It
appears, however, that a randomly generated initial population produces
satisfactory results since the population is heterogeneous at the beginning of
the algorithm [Davis 1991]. As observed by Liepins, there appears to be
little benefit in seeding the population with locally optimal solutions. In an
experiment, Booker was able to find better results when all initial
solutions were randomly generated [Liepins and Potter 1991]. For the
genetic algorithm used in our experiments, the initial population was
always randomly generated in order to introduce diversity into the
population. For the smaller problems with time and capacity constraints
and the larger problems with only capacity constraints, a randomly
generated initial population did not seem to present a problem. However,
for larger problems with both time and weight constraints, a feasible
solution could not be found. A possible solution to this problem would be
to place a few feasible solutions in the initial population while still
randomly generating most solutions.
Search Space
One of the problems encountered in using genetic algorithms is the
size of the search space. The search space here refers to the number of
combinations of possible solutions for the given VRP. An example of the
problem of a large search space was presented in an article by Cleveland
and Smith involving experiments they had performed on scheduling flow
shop releases [Cleveland and Smith 1989]. The Hinton and Nowlan Model
[Belew 1989] attempts to solve problems with binary solutions and claims
an improved solution if learning is combined with evolution. They refer to
a problem which has 2^L (where L is the number of genes on the
chromosome) possible combinations as a "needle-in-a-haystack" problem
and do not feel that the genetic algorithm alone would perform very well.
However, by combining the genetic algorithm with learning, the search
space could be narrowed [Belew 1989]. The difficulty of incorporating
learning is that it is not always easy to determine which criteria help
define a good solution [Belew 1989].
Vehicle routing problems have large search spaces. As an example,
for an n customer, m vehicle problem, the possible number of
combinations of only selecting which vehicles will visit which customers is
m^n. This does not include the large number of sequencing possibilities
within each route. In spite of the fact that there is such a large search
space, the genetic algorithm seems to produce results which are relatively
close to the best known solutions for the smaller problems. However, for
the larger problems, the quality of the solutions and the likelihood of
finding feasible solutions decrease.
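To give a sense of the scale of this count: for the 32 customer, 3 vehicle problem used above, the vehicle-assignment combinations alone number 3^32 = 1,853,020,188,851,841, or roughly 1.9 x 10^15, before any sequencing within routes is considered.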
In addition to changing parameter values, there are several other
aspects of the genetic algorithm which are believed to have a significant
impact on the performance of the GA. Among these are the representation
of solutions, the issue of elitism, and methods of crossover. The following
are alternate methods of dealing with these aspects which were presented in
the literature.
Elitism
Elitism is the idea of preserving the best members of the population
by copying them into future generations. An alternative to elitism is to use
a "refresh" operator which works by copying the best member of the
population to a location other than the current population. This copy is
maintained and occasionally brought back into the population [Sirag and
Weisser 1987]. It was decided for our experiments to use elitism rather
than this method in order to keep the best members in the population at all
times. An alternative method of representing the solution in the Traveling
Salesman Problem is by assigning a customer number to each gene. The
order of the tour is then the order of the customer numbers on the
chromosome [Sirag and Weisser 1987]. During crossover, a crossover
point is randomly selected and all the alleles up to this point are copied
from parent #1; the remaining alleles are copied from parent #2. One
problem with this method is the increased amount of time the crossover
takes because when copying from parent #2, each allele must be checked to
see if it has already been copied from parent #1. Since some genes on
parent #2 are skipped (only the ones which have not already been copied
from parent #1 are copied), the resulting chromosome may not be
representative of either parent [Sirag and Weisser 1987]. The problems
with this method are also increased when dealing with the VRP which has
the added requirement of representing which vehicle visits which
customers.
Encoding of Solutions
The method of encoding solutions used for this GA was the random
keys method which was discussed in Chapter 1. This appeared to be the
most efficient means of representing VRP solutions since crossovers could
be performed without the added task of ensuring that each customer was
visited exactly once. This constraint was automatically met with the
random keys representation.
Crossover Methods
There is no agreement on the best method of crossover. This section
will discuss the greedy crossover and the uniform crossover. According to
DeJong, the number of crossover points required to produce better
solutions seems to increase with the length of the chromosome [DeJong
1985]. Uniform crossover appears to be better than one-point or two-point
crossover even though, in theory, the other two crossover methods should
perform better than uniform. The reason for this is that the schema survival
rate is better for the one and two point crossover. One advantage of
uniform crossover is that it does not need to be combined with inversion
(reversing the order of the genes on a segment of the chromosome). This
is because alleles which are far apart on the chromosome have an equal
chance of staying together on the new chromosome as alleles which are
close together [Syswerda 1989].
The greedy crossover is one type of crossover which is a possible
area of exploration for future research. This crossover was developed by
Liepins, et al [Liepins, et al 1987] and uses the idea of a greedy algorithm
which, according to L. Davis, is "an optimization algorithm that proceeds
through a series of alternatives by making the best decision, as computed
locally, at each point in the series." [Davis 1991]. They compared the
performance of this crossover method with the conventional crossover on
the TSP. This crossover method is a modification of one which
Grefenstette developed. It begins by starting the tour with the same city
every time. At this point, the shortest edge is selected from the two
parents, if a cycle is not introduced. If a cycle is introduced, the edge is
selected from the other parent, unless it also causes a cycle. If the choice
43
of either parents results in a cy cle, th e tour is extended by a random city.
This process is repeated until the tour is completed. The advantage to
using the greedy crossover is that it " all ows problem specific information
to be used in the crossover operation." Greedy genetics seem to perform
better when the greedy algorithm being used is powerfu l, meaning it finds
a good solution with only one ru n of the algorith m. However,
conventional genetics perf orm s better when the greedy algorithm is weak
[ Liepins, et al 1987].
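A rough sketch of this greedy construction follows, assuming a small
symmetric distance matrix. The instance, the fixed start city, and the
tie-breaking details are illustrative assumptions, not the exact operator of
Liepins, et al.

#include <stdio.h>
#include <stdlib.h>

#define N 6

static double d[N][N];          /* distance matrix, filled in main()    */

/* successor of city c in a parent tour stored as a permutation         */
static int succ(const int *tour, int c)
{
    int i;
    for (i = 0; i < N; i++)
        if (tour[i] == c)
            return tour[(i + 1) % N];
    return -1;
}

static void greedy_crossover(const int *pa, const int *pb, int *child)
{
    int used[N] = {0}, cur = 0, k, a, b, next;

    child[0] = cur;             /* every child tour starts at city 0    */
    used[cur] = 1;
    for (k = 1; k < N; k++) {
        a = succ(pa, cur);
        b = succ(pb, cur);
        /* prefer the shorter parent edge that does not revisit a city  */
        if (!used[a] && (used[b] || d[cur][a] <= d[cur][b]))
            next = a;
        else if (!used[b])
            next = b;
        else {                  /* both parent edges close a cycle:     */
            do                  /* extend the tour with a random city   */
                next = rand() % N;
            while (used[next]);
        }
        child[k] = next;
        used[next] = 1;
        cur = next;
    }
}

int main(void)
{
    int pa[N] = {0, 1, 2, 3, 4, 5};
    int pb[N] = {0, 3, 5, 1, 4, 2};
    int child[N], i, j;

    for (i = 0; i < N; i++)     /* toy distances: |i - j|               */
        for (j = 0; j < N; j++)
            d[i][j] = (i > j) ? i - j : j - i;

    srand(11);
    greedy_crossover(pa, pb, child);
    for (i = 0; i < N; i++)
        printf("%d ", child[i]);
    printf("\n");
    return 0;
}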
In summation, for this genetic algorithm, the only method of
encoding a solution which was used in the experimentation was the random
keys representation. The method of elitism used was to copy the top 20
solutions into the next generation. The uniform crossover was preferred
over one- and two-point crossovers. This is because the uniform crossover
can produce a greater number of combinations of solutions, providing
more diversity within the population. The greedy crossover was not used
for any of the experimentation.
Convergence
A common problem with genetic algorithms is premature
convergence to a solution that is not optimal. This has been a recurring
problem when running our experiments; therefore, this section discusses
some of the causes of convergence and possible ways of preventing
premature convergence. There are two types of alleles which contribute to
this convergence: lost and converged. An allele is referred to as lost if
every member of the population has the same value for a particular gene.
When this occurs, the possible genotypes are severely restricted. An allele
is said to have converged if at least 95% of the population has the same
value for a particular gene. Two possible causes of this convergence are that
a "super individual" starts producing too many offspring or, in contrast,
the other individuals are not producing enough offspring. One solution to
this problem is to keep the population as diverse as possible [Baker 1985].
The mutation operator serves as protection against convergence by helping
to keep the population diverse [Goldberg 1988]. An example of this
problem is shown by Ackley in a comparison between a genetic algorithm
and a hillclimbing algorithm. Over a convex solution space, the genetic
algorithm took longer to run primarily because a loss of an allele caused a
long run to be necessary. The probability of this occurring was reduced by
increasing the mutation rate [Ackley 1985].
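As a small illustration of the lost and converged definitions above, the
sketch below counts, for each gene, how many population members share
the most common allele. The tiny population and the integer alleles are
illustrative assumptions; the actual chromosomes in this work use random
keys.

#include <stdio.h>

#define POP    6
#define NGENES 4

int main(void)
{
    int pop[POP][NGENES] = {
        {1, 2, 1, 3}, {1, 2, 2, 3}, {1, 1, 1, 3},
        {1, 2, 1, 3}, {1, 2, 1, 2}, {1, 2, 1, 3}
    };
    int g, i, j;

    for (g = 0; g < NGENES; g++) {
        int best = 0;                     /* largest count of one value */
        for (i = 0; i < POP; i++) {
            int count = 0;
            for (j = 0; j < POP; j++)
                if (pop[j][g] == pop[i][g])
                    count++;
            if (count > best)
                best = count;
        }
        if (best == POP)
            printf("gene %d: allele lost (all members identical)\n", g);
        else if (best * 100 >= 95 * POP)
            printf("gene %d: allele converged (at least 95%%)\n", g);
        else
            printf("gene %d: still diverse\n", g);
    }
    return 0;
}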
One cause of premature convergence is focusing too heavily on rapid
improvement, which can drive alternative genetic material out of the
population and lock onto the wrong strain. A good balance must be found:
if performance is not sufficiently emphasized, the best members of the
population can be lost [Davis 1987]. The manner in which infeasibility is
handled is a very important consideration with respect to convergence.
Most work with genetic algorithms has been performed on unconstrained
problems. Convergence is a difficulty with using the GA on constrained
VRP's [Liepins and Potter 1991]. Under a high infeasibility rate, a feasible
solution tends to drive other possibilities out of the population. This is due
to the fact that the probability of infeasible members reproducing with each
other is continuously decreasing [Davis 1987]. According to Liepins and
Potter, there are three methods of dealing with infeasibility:
1. Force feasible solutions into the population by using "specialized
recombination operators."
2. Do not allow infeasibility by repeating reproduction until a
feasible solution is generated.
3. Use a penalty function for infeasibility.
They found that of these three methods, only the first and third were
effective [Liepins and Potter 1991]. Davis dealt with infeasibility for job
shop scheduling problems by only allowing feasible solutions, selecting
the first legal action available from a list of actions for each work station
[Davis 1985]. However, since genetic algorithms function by combining
information from all members of the population, infeasible members
should remain in the population to reproduce with the feasible members
[Richardson, et al 1989].
For our version of the genetic algorithm, infeasible members were
allowed, but were penalized with a penalty function so as to give an
advantage to feasible members of the population. Two penalty functions
were tested to observe their effects on the convergence of the population.
The first penalty function adds the squared amount by which the solution
exceeds the weight limit to the squared amount by which it exceeds the time
limit, and multiplies this sum by the number of vehicles whose routes are
infeasible with respect to weight plus the number infeasible with respect to
time. The second penalty function multiplies the amount the solution
exceeds the weight limit by 0.25 and adds the amount the solution exceeds
the time limit times 0.25. The larger penalty seemed to work better overall
when used in combination with the procedure of keeping a count of
duplicate genes and randomly replacing them when necessary in order to
maintain diversity in the population. The only exception to this was the 50
customer, 6 vehicle problem, which produced a better solution with the
smaller penalty function. The advantage of the larger penalty was
particularly obvious with the larger data sets. It appeared that they need a
larger penalty function in order to be driven to feasibility. Although a
feasible solution was not found for the larger problems with a time
constraint, the solution came closer to feasibility when the larger penalty
function was used.
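The two penalty functions can be written down compactly. The sketch below
assumes the per-route weights and times have already been computed; the
route data and limits shown are illustrative only, while the first form
matches the evaluation routine in Appendix 1.

#include <stdio.h>
#include <math.h>

#define NVEH 3

int main(void)
{
    double weight[NVEH] = {160.0, 145.0, 180.0};   /* per-route load     */
    double rtime[NVEH]  = {210.0, 190.0, 230.0};   /* per-route duration */
    double maxweight = 160.0, timelimit = 200.0;
    double overweight = 0.0, overtime = 0.0;
    int infeas = 0, k;

    for (k = 0; k < NVEH; k++) {
        if (weight[k] > maxweight) { infeas++; overweight += weight[k] - maxweight; }
        if (rtime[k]  > timelimit) { infeas++; overtime  += rtime[k]  - timelimit; }
    }

    /* larger penalty: squared excesses times the number of violating routes */
    printf("penalty 1 = %f\n", (pow(overweight, 2.0) + pow(overtime, 2.0)) * infeas);

    /* smaller penalty: 0.25 times each excess */
    printf("penalty 2 = %f\n", 0.25 * overweight + 0.25 * overtime);
    return 0;
}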
When infeasible members are left in the population, it is common
practice to use some penalty function in order to give the feasible members
of the population an advantage over the infeasible members. One
alternative to the standard procedure of combining the cost function and
the penalty function into one is to treat the cost as one objective and treat
the penalty as a separate objective [Richardson, et al 1989]. According to
Richardson, Palmer, Liepins, and Hilliard, there are four guidelines for
designing a penalty function:
Another cause of convergence is genetic drift. This term refers to
one allele winning out over the others even though it has no real significant
advantage. Normally with the GA, the problem is convergence causing
unequal allele frequencies in the population. However, with genetic drift
the problem can be even greater, resulting in the bad alleles surviving
instead of the good alleles [Goldberg and Segrest 1987]. There is a theorem
which states that the best individuals will increase exponentially in the
number of times they reproduce, assuming that the population is infinitely
large [Goldberg and Richardson 1987]. This is believed to be a cause of
genetic drift.

One method of reducing the probability of premature convergence is
by incorporating the idea of niche and species into the genetic algorithm.
The concept of niche and species comes from the natural definition in
which different species have separate niches which are composed of
different environmental features. By forcing subpopulations to exist, the
probability of convergence is reduced [Goldberg and Richardson 1987].
There are several methods of incorporating this idea into the genetic
algorithm. One such method, known as preselection, was developed by
Cavicchio (1971). With preselection, the offspring replaces the parent only
if it gets a better fitness value than the parent. This maintains diversity
because offspring replace only solutions which are similar to themselves.
DeJong (1975) developed the concept known as crowding. Each member of
the population is assigned a crowding factor based on its similarity to the
other members. When an offspring is produced, it replaces the individual
which is most similar to itself in a randomly drawn subpopulation of
individuals with the same crowding factor. Goldberg and Richardson
introduced the idea of sharing to induce niche and species on members of a
population [Goldberg and Richardson 1987]. The idea of sharing is that
solutions receive a reward based on their performance; however, the
reward must be shared among all of the similar solutions. Therefore, a
solution's reward will be reduced corresponding to the number of similar
solutions [Deb and Goldberg 1989]. Goldberg and Richardson demonstrated
that a genetic algorithm with sharing maintains subpopulations around
different peaks, while without sharing, the population converges to a single
peak [Goldberg and Richardson 1987].
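A minimal sketch of the shared-fitness computation follows, assuming
one-dimensional solutions and a triangular sharing function. The positions,
raw fitness values, and niche radius are illustrative assumptions.

#include <stdio.h>
#include <math.h>

#define POP 5

int main(void)
{
    double x[POP]   = {1.0, 1.1, 1.2, 5.0, 5.1};   /* solution positions  */
    double raw[POP] = {10.0, 9.5, 9.0, 10.0, 9.8}; /* raw fitness values  */
    double sigma = 1.0;                            /* niche radius        */
    int i, j;

    for (i = 0; i < POP; i++) {
        double niche = 0.0;
        for (j = 0; j < POP; j++) {
            double dist = fabs(x[i] - x[j]);
            if (dist < sigma)
                niche += 1.0 - dist / sigma;       /* sharing function    */
        }
        /* the reward is divided among similar solutions */
        printf("solution %d: raw %.2f shared %.2f\n", i, raw[i], raw[i] / niche);
    }
    return 0;
}

Solutions crowded around one peak divide a common reward, so the second
peak remains attractive and the population does not collapse onto a single
region.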
In addition to niche and species, there are several other methods
which have been introduced for dealing with convergence. One idea
presented by Bickel and Bickel is to characterize a population as converged
if the evaluation of all the solutions is within a certain range. If it is
determined that the population has converged by this definition, then a
certain percentage are replaced with new solutions [Bickel and Bickel
1987]. Baker proposed three additional methods of solving the problem of
premature convergence. The first method is standard selection, in which
there is a limit to the maximum or the minimum offspring produced by a
particular parent. The second method is ranking. With ranking, the rank
rather than the value of the solution determines an individual's expected
number of offspring [Baker 1985]. The third method, the hybrid method,
has two alternatives. The first alternative is to use ranking during periods
of rapid convergence and to use standard selection the other times. The
second alternative is to change the number in the population in order to
reach the desirable percentage involvement (the ratio of the number of the
best members in the population to the total number of members in the
population) [Baker 1985]. The disadvantage with this second alternative is
that a super individual can still control the population. Even though other
individuals are not completely lost, their significance can be greatly
reduced because the super individual is still dominant [Baker 1985].
Eshelman and Schaffer proposed a method of preventing premature
convergence by preventing incest, using uniform crossover, and removing
duplicate solutions from the population. In this method, an evaluation is
performed to determine the difference between each of the individuals in
the population. This difference is referred to as the "Hamming distance".
Incest prevention only allows two individuals to reproduce with each other
if their "Hamming distance" is greater than a certain amount. This amount
will decrease as the population converges. Eshelman and Schaffer
produced successful results with this method. However, they determined
that it was not necessary to remove duplicate solutions in combination with
incest prevention. This was because when the two procedures were
combined, results did not significantly improve, and the run time was
increased because of excessive comparisons [Eshelman and Schaffer 1991].
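As a small illustration of the Hamming-distance test, the sketch below
compares two binary chromosomes against a mating threshold. The strings
and the threshold are illustrative assumptions, and the GA in this thesis does
not itself use a binary encoding.

#include <stdio.h>

static int hamming(const char *a, const char *b)
{
    int d = 0;
    while (*a && *b) {
        if (*a != *b)
            d++;
        a++; b++;
    }
    return d;
}

int main(void)
{
    const char *p1 = "1100101110";
    const char *p2 = "1100111010";
    int threshold = 4;             /* lowered as the population converges */

    if (hamming(p1, p2) > threshold)
        printf("parents may mate (distance %d)\n", hamming(p1, p2));
    else
        printf("mating blocked: parents too similar (distance %d)\n",
               hamming(p1, p2));
    return 0;
}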
The Final Program
Number of solutions mutated each generation - 2
Gene selection parameter value - 50.
The method of encoding solutions used was the random keys
representation. A penalty function was used during evaluation to penalize
infeasible solutions. This penalty was calculated as the square of the
amount the time limit was exceeded plus the square of the amount the
weight limit was exceeded, multiplied by the number of vehicles not meeting
the time constraint plus the number of vehicles not meeting the weight
constraint. The method of reproduction was to copy the top 20 solutions to
the next generation, mutate two complete solutions, and produce the
remaining 178 solutions by uniform crossover. In order to decrease
the problem of convergence, a particular gene was mutated for 20 of the
chromosomes if more than 70 chromosomes in the population had an
identical allele to the best member of the population for that particular
gene. This version seemed to perform better overall; however, there were
a few exceptions in which a slight modification to this version improved
performance on the problem. One exception was the 32 customer, 3
vehicle problem which performed better with a gene selection parameter of
40 rather than 50. Also, the 50 customer problem with and without the
time constraint, as well as the 100 customer, 8 vehicle problem without the
time constraint performed better with the smaller penalty function. The
smaller penalty function was the one in which the amount the constraints
were exceeded was multiplied by 0.25.
Results
Table 3.1 presents the problems analyzed. The first column of this
table lists the problem number which was assigned to each problem. If the
number is followed by " -t", this is an indication that the problem is the
same one as the previous problem only without the time constraint. The
number in brackets beside the problem number indicates the source from
which the best known solution value is reported. The next two columns
respectively list the number of customers and the number of vehicles for
the corresponding problem. The weight capacity for each vehicle is given
in column 4, and the time limit for each vehicle route is given in column 5
(where the dotted lines indicate that the problem has no time constraint).
Column 6 lists the stop times at each customer. The capacity ratio in
column 7 is calculated by dividing the total amount of weight to be picked
up by the total vehicle capacity available.
Table 3.2 presents the results of the genetic algorithm compared with
the best known solutions of the problems obtained from the literature. The
results from the GA were obtained from running the final version of the
GA, which was written in the C programming language, on a Sparc II UNIX
workstation. The first column lists the problem number from Table 3.1.
The best known solution which was obtained from the literature is given in
column 2. Column 3 gives the solution obtained using the GA. If the final
solution was infeasible, this number includes the penalty. The fourth column
lists the amount of time the algorithm took to complete the run. Column 5
gives the actual distance for the problems. If the solution was infeasible,
the number in parentheses represents the number of vehicles infeasible with
respect to time plus the number infeasible with respect to weight.
Table 3.1. Problems used for experimentation.

Problem      Customers  Vehicles  Weight    Time   Stop  Capacity
                                  Capacity  Limit  Time  Ratio
P1 [30]          22         3       4500    ----    10     .76
P2 [32]          29         3       4500    ----    10     .94
P3 [30]          32         3      38000    1000    20     .86
P3-t [30]        32         3      38000    ----    20     .86
P4 [17]          50         6        160     200    10     .80
P4-t [17]        50         5        160    ----    10     .97
P5 [32]          75        11        140     160    10     .88
P5-t [17]        75        10        140    ----    10     .97
P6 [30]          75        14        100   10000    10     .97
P7 [17]         100         9        200     230    10     .81
P7-t [17]       100         8        200    ----    10     .91
P8 [17]         100        11        200    1040    90     .82
P8-t [32]       100        10        200    ----    90     .90
P9 [30]         100        14        112   10000    10     .92
P10 [32]        120        11        200     720    50     .62
P10-t [32]      120         7        200    ----    50     .98
P11 [32]        150        14        200     200    10     .80
P11-t [17]      150        12        200    ----    10     .93
P12 [32]        199        18        200     200    10     .88
P12-t [17]      199        17        200    ----    10     .92
P12-t [32]      199        16        200    ----    10     .98
Table 3.2. Comparison of GA results with other methods of solving problems.

           Best        GA                                  % GA above
Problem    Solution    Solution     Time           Actual      best known soln
P1          568.56       569.7       512.5 sec      569.7        0.2
P2          534          548.5       699.3 sec      548.5        2.7
*P3        2086         2130.5       996.1 sec     2130.5        2.1
P3         2086         2261.9       802.2 sec     2261.9        8.4
P3-t       2009.31      2009.3       780.5 sec     2009.3        0.0
*P4         555.43       561.3      1230.5 sec      561.3        1.1
P4          555.43       587.9      1333.0 sec      587.9        5.8
*P4-t       524.61       656.8      1228.1 sec      656.8       25.2
P4-t        524.61       749.2      1236.3 sec      749.2       42.8
P5          909         6210.9      2081.8 sec     1000 (6)     INFEAS
P5-t        836.37      1380.8      2108.6 sec     1380.8       65.1
P6         1042         1722.6      2150.7 sec     1722.6       65.3
P7          865.94      6014.4      52 min         1100 (1)     INFEAS
P7-t        826.14       984.0      51 min          984.0       19.1
P8          866.37      1226.9      52 min         1226.9       41.6
P8-t        819         1254.1      52 min         1254.1       53.1
P9         1113         1697.9      54 min         1697.9       52.6
P10        1545        95344.7      1 hr. 5 min    1974 (1)     INFEAS
P10-t      1042         2960.0      1 hr. 4 min    2960.0      184.0
P11        1164       569277.9      1 hr. 21 min   1578 (6)     INFEAS
P11-t      1034.90      2256.4      1 hr. 15 min   2256.4      118.0
P12        1417      4578660.0      2 hr. 34 min   2280 (8)     INFEAS
P12-t1     1329.29      3537.4      2 hr. 30 min   3537.4      166.1
P12-t2     1334         4980.0      2 hr. 29 min   4890 (2)     INFEAS

Notes: *P3 are the results of the 32 customer, 3 vehicle problem with a gene selection
parameter of 40 instead of 50. *P4 are the results of the 50 customer problem with the
smaller penalty function rather than the larger one.
The last column presents the percentage by which the GA solution was
above the best known solution. Refer to Appendix 2 for tables showing
which customers are visited by which vehicles.

The problems can be categorized as either evenly distributed or
clustered. Problems P4, P5, P6, P7, and P9 consist of customer locations
which are evenly distributed over the region [Noon, et al 1991]. Problems
P2 and P3 share aspects of both of these two categories [Noon, et al 1991].
However, because of the way the genetic algorithm functions, the
organization of the customers should have no effect on the results.

Notice that the GA solutions to the first three problems, the 29
customer, 32 customer, and 50 customer problems, are all relatively close
to the best known solution, with the exception of the 50 customer problem
without the time constraint. The poor results for this problem could be
due to the high capacity ratio. Beginning with the 75 customer problems,
the genetic algorithm performance becomes progressively worse. The
genetic algorithm did not even find feasible solutions to the time
constrained problems with 75 customers and greater. This is probably due
to the increased size of the search space. As the search space size increases,
it becomes more and more difficult for the genetic algorithm to converge
to the optimal solution. In some instances, the solutions to these problems
were continuing to decrease as the genetic algorithm approached its 2000th
generation. Therefore, in some cases, the algorithm was allowed to run
for 3000 generations in an attempt to allow the algorithm to complete its
convergence. However, this did not significantly improve the results. The
solutions continued to decrease a small amount for a few more generations
and then converged to a solution not significantly better than the solution at
the 2000th generation.
Proposed Improvements
In the article by Whitley, Starkweather, and Fuquay it was observed
that:

The theory behind genetic algorithms is well developed for
problems that can be encoded as a binary string with no order
dependencies. However, many potential applications of
genetic algorithms involve complex ordering dependencies
similar to those found in the Traveling Salesman Problem
[Whitley, et al 1989].

Suh and Gucht list three problems to overcome in making this
transformation:
1. Representing the problem effectively.
2. Recombination operators are only effective if a heuristic is
applied. "Such operators can be found in gradient descent
algorithms, hill climbing algorithms, simulated annealing, etc."
3. Premature convergence which is caused by a super individual
who overtakes the population or a poor performance by a
recombination operator [Suh and Gucht 1987].

According to Grefenstette, in order to apply genetic algorithms to
combinatorial optimization problems, some kind of heuristic must be used.
He used a heuristic crossover operator which proved to be more effective
than the standard genetic algorithm [Suh and Gucht 1987].
Suh and Gucht introduced a method in which two operators were
used. The first operator is used to select two parents. A random city is
then selected for the beginning of the offspring tour. Subsequent genes are
selected one at a time from the parent which will produce the shortest path.
The problem is the paths may still be crossed. This problem is solved
using the second operator, the 2-opt operator. This operator randomly
selects two edges (i1, j1) and (i2, j2); if ED(i1, j1) + ED(i2, j2) >
ED(i1, i2) + ED(j1, j2), the edges are replaced with (i1, i2) and (j1, j2),
where ED is the Euclidean distance [Suh and Gucht 1987]. They were able
to produce better results with the 2-opt operator than without it [Suh and
Gucht 1987]. "It turned out that the selection of a natural representation
and the selection of heuristically motivated recombination operators is
critical in the design of robust genetic algorithms for such problems."
[Suh and Gucht 1987].
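A minimal sketch of one such 2-opt exchange is shown below, assuming
Euclidean coordinates. The points, the tour, and the chosen edges are
illustrative assumptions.

#include <stdio.h>
#include <math.h>

#define N 5

static double xc[N] = {0, 4, 4, 0, 2};
static double yc[N] = {0, 0, 3, 3, 5};

static double ed(int a, int b)           /* Euclidean distance */
{
    return sqrt(pow(xc[a] - xc[b], 2.0) + pow(yc[a] - yc[b], 2.0));
}

int main(void)
{
    int tour[N] = {0, 2, 1, 3, 4};       /* a tour with crossing edges  */
    int i = 0, j = 2;                    /* edges (tour[i],tour[i+1])
                                            and (tour[j],tour[j+1])     */
    int a = tour[i], b = tour[i + 1], c = tour[j], e = tour[j + 1];

    if (ed(a, b) + ed(c, e) > ed(a, c) + ed(b, e)) {
        /* replace the two edges and reverse the path between them */
        int lo = i + 1, hi = j, t;
        while (lo < hi) {
            t = tour[lo]; tour[lo] = tour[hi]; tour[hi] = t;
            lo++; hi--;
        }
    }
    for (i = 0; i < N; i++)
        printf("%d ", tour[i]);
    printf("\n");
    return 0;
}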
There are several methods which have been proposed to improve the
standard genetic algorithm. One method is hybridization of another
optimization algorithm with the genetic algorithm. This method can
combine the positive features of the other algorithm, such as the encoding
technique, with the best features of the genetic algorithm, crossover and
mutation [Davis 1991].

Another method is to combine simulated annealing with crossover,
mutation, and inversion by using a temperature parameter to control
diversity in the population [Sirag and Weisser 1987]. Simulated annealing
uses a single individual which is given some amount of energy (high for
inefficient solutions, low for efficient ones). When a new solution is
generated, it will replace the current solution based on some probability.
This probability is assigned according to the amount of energy the new
solution has compared to the current one [Sirag and Weisser 1987]. The
way this temperature parameter would work with the genetic algorithm
would be to select genes from the first parent until the temperature is
exceeded, then switch to the second parent until the temperature is
exceeded again. This would work similarly for inversion and mutation.
The temperature should start out high and drop fairly rapidly to a medium
temperature, then drop slowly to a low temperature [Sirag and Weisser
1987].
One additional possibility for improving the results of the genetic
algorithm is to run them in parallel. The idea of Parallel Genetic
Algorithms is that instead of having one large population, there are several
smaller subpopulations reproducing in parallel. At the end of each
generation, each subpopulation sends the best individual in its population to
the other subpopulations. There are different methods of selecting which
individuals are to be replaced by these new members. Among these are
replacing randomly, replacing the worst solution, or replacing the solution
which is most like the new one. It is undetermined at this time whether
selection of the individual based on subpopulation performance rather than
the population as a whole speeds up or slows down convergence. The
advantage of this method is that a large population size is enabled without
the unreasonable amount of time that it would take with a sequential genetic
algorithm [Pettey, et al 1987].
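The migration step can be sketched as follows, assuming a few small
subpopulations whose members are represented only by their fitness values.
Replacing the worst member is just one of the replacement rules mentioned
above, and all sizes and values here are illustrative assumptions.

#include <stdio.h>

#define NSUB 3
#define SUBSIZE 4

int main(void)
{
    /* fitness values of each subpopulation (lower is better) */
    double sub[NSUB][SUBSIZE] = {
        {12.0, 15.5, 11.2, 19.0},
        {14.3, 10.8, 16.7, 13.1},
        {18.2, 17.5, 12.9, 15.0}
    };
    double bestval[NSUB];
    int s, t, i, best, worst;

    /* each subpopulation identifies its best member */
    for (s = 0; s < NSUB; s++) {
        best = 0;
        for (i = 1; i < SUBSIZE; i++)
            if (sub[s][i] < sub[s][best])
                best = i;
        bestval[s] = sub[s][best];
    }

    /* each best member migrates to every other subpopulation,
       replacing that subpopulation's worst member              */
    for (s = 0; s < NSUB; s++)
        for (t = 0; t < NSUB; t++) {
            if (t == s)
                continue;
            worst = 0;
            for (i = 1; i < SUBSIZE; i++)
                if (sub[t][i] > sub[t][worst])
                    worst = i;
            sub[t][worst] = bestval[s];
        }

    for (s = 0; s < NSUB; s++) {
        for (i = 0; i < SUBSIZE; i++)
            printf("%5.1f ", sub[s][i]);
        printf("\n");
    }
    return 0;
}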
Conclusion
Based on our research, the genetic algorithm seemed to perform well
on problems with 50 or fewer customers. As the number of customers
increased, the results became progressively worse and the likelihood of
finding a feasible solution also decreased. Also, there was not a version
found which would consistently give the best solution for all problems
tested. Following are some thoughts Goldberg and DeJong have expressed
involving the inconsistency of genetic algorithms.

The idea of genetics is to be robust over a large domain, not to
achieve peak performance. Goldberg says, "When we change a genetic
algorithm to work better on a particular problem, we may have some
success in jazzing things up on that problem, but when we turn around and
try to use those operators elsewhere, we are likely to be disappointed."
[Goldberg 1989]. DeJong likewise believes that evolutionary systems are
not meant to be function optimizers.
References
1. Ackley, David H., "A Connectionist Algorithm for Genetic Search,"
Proceedings of the First International Conference on Genetic
Algorithms and Their Applications, Carnegie-Mellon University,
Pittsburgh, 1985.

3. Bean, James C., "Genetics and Random Keys for Sequencing and
Optimization," Department of Industrial and Operations Engineering,
Technical Report No. 92-43, University of Michigan, June 1992.

17. Gendreau, Michel, Alain Hertz and Gilbert Laporte, "A Tabu Search
Heuristic for the Vehicle Routing Problem," Publication CRT-777,
Centre de recherche sur les transports, Universite de Montreal, June
1991.

19. Goldberg, David E., "Zen and the Art of Genetic Algorithms,"
Proceedings of the Third International Conference on Genetic
Algorithms, George Mason University, San Mateo, 1989.

20. Goldberg, David E. and Jon Richardson, "Genetic Algorithms with
Sharing for Multimodal Function Optimization," Genetic Algorithms
and Their Applications: Proceedings of the Second International
Conference on Genetic Algorithms, Massachusetts Institute of
Technology, Cambridge, 1987.

24. Hadj-Alouane, Atidel Ben and James C. Bean, "A Genetic Algorithm
for the Multiple-Choice Integer Program," Department of Industrial
and Operations Engineering Technical Report No. 92-50, University
of Michigan, September 1992.

29. Liepins, Gunar E. and W.D. Potter, "A Genetic Algorithm to
Multiple-Fault Diagnosis," in Handbook of Genetic Algorithms,
Lawrence Davis, ed., Van Nostrand Reinhold, 1991.

30. Noon, Charles E., John Mittenthal and Rekha Pillai, "A TSSP+1
Decomposition Approach for the Capacity-Constrained Vehicle
Routing Problem," Management Science Program Technical Report
No. 37-91-272, University of Tennessee, June 1991.

37. Sirag and Weisser, "Toward a Unified Thermodynamic Genetic
Operator," Genetic Algorithms and Their Applications: Proceedings
of the Second International Conference on Genetic Algorithms,
Massachusetts Institute of Technology, Cambridge, 1987.
APPENDICES
APPENDIX 1
Appendix 1 consists of the final genetic algorithm which was used
for comparison of the genetic algorithm results to the best known solutions
presented in the literature. The genetic algorithm is written in the C
programming language.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define maxload 201    /* max number of loads */
#define maxveh 20      /* max number of vehicles */
#define maxpop 200     /* population size */
#define maxcount 2000  /* number of generations to run */

FILE *outfile;
FILE *test;
FILE *infile;
FILE *testout;
FILE *gengraph;
FILE *geninfs;
int time_passed;
long timestore, time_now;
float bestinfs, worstfs;
int worstchr;
main()
{
void eval(), repro();
int pick();
float urand();
void readinb(), setpop();
int i,j,stop,k,l,feasfnd,numinfeas;
float avg,z;

pptr = (struct ind *) calloc (sizeof(struct ind),302);
opptr = (struct ind *) calloc (sizeof(struct ind),302);
dptr = (struct dist *) calloc (sizeof(struct dist),302);
readinb();
feascnt = 200;
setpop();
if (do_out)
{
avg = 0.0;
for (i=1;i<=maxpop;i++)
{
for (j=1;j<=nload;j++)
printf("%.3f ",(*(pptr+i)).gene[j]);
printf("%f\n",(*(pptr+i)).val);
avg += (*(pptr+i)).val;
}
printf("%f\n",avg/maxpop);
}
stop = 0;
count = 0;
while (!stop)
{
count++;
/* change the number of chromosomes mutated each generation */
/* if (count == 500)
{
nmut = 5;
}
if (count == 1000)
{
nmut = 8;
} */
/* if (count == 1500)
{
nrep = 14;
nmut = 8;
} */
feasfnd = 0;
numinfeas = 0;
bestinfs = 1000000;
worstfs = 0;
worstchr = 0;
if (count >= maxcount)
stop = 1;
for (i=1;i<=maxpop;i++)
if ((*(pptr+i)).infeas >= 1)
numinfeas++;
feascnt = numinfeas;
fprintf(gengraph,"%d %.4f %d\n",count,(*(pptr+1)).val,numinfeas);
fprintf(geninfs,"%d %d\n",count,numinfeas);
if (count == pf*(count/pf))
{
/* printf("time = %d\n",clock(&timestore)-time_now); */
printf("%d generations, best value found is %.0f",count,
(*(pptr+1)).val);
printf(" Number infeasible is %d",numinfeas);
printf("\n");
}
repro();
if ((*(pptr+1)).val <= target) stop = 1;
if (do_out)
{
avg = 0.0;
for (i=1;i<=maxpop;i++)
{
for (j=1;j<=nload;j++)
printf("%.3f ",(*(pptr+i)).gene[j]);
printf("%f\n",(*(pptr+i)).val);
avg += (*(pptr+i)).val;
}
printf("%f\n",avg/maxpop);
}
}
time_passed = clock(&timestore)-time_now;
printf("time = %f seconds\n",time_passed/1e6);
eval(1);
i = 1;
printf("Best solution found is %.0f\n",(*(pptr+1)).val);
printf("Best solution distance is %.0f\n",(*(pptr+1)).realval);
printf("Vehicle weights and times:\n");
for (k=1;k<=nveh;k++)
printf(" Weight[%d] = %d  Time[%d] = %f\n",
k,(*(pptr+1)).weight[k],k,(*(pptr+1)).time[k]);
printf("Infeasibility = %d\n",(*(pptr+1)).infeas);
numinfeas = 0;
for (k=1;k<=maxpop;k++)
{
if ((*(pptr+k)).infeas >= 1)
numinfeas++;
}
printf("Number infeasible = %d\n",numinfeas);
printf("Generations = %d\n",count);
printf("\n");
for (j=1;j<=nload;j++)
{
printf("%.3f ",(*(pptr+i)).gene[j]);
if (j == 10*(j/10))
printf("\n");
}
printf("\n");
printf("%f\n",(*(pptr+i)).val);
i = 1;
fprintf(testout,"Best solution found is %.0f\n",(*(pptr+1)).val);
fprintf(testout,"Best solution distance is %.0f\n",(*(pptr+1)).realval);
fprintf(testout,"Vehicle weights and times:\n");
for (k=1;k<=nveh;k++)
fprintf(testout," Weight[%d] = %d  Time[%d] = %f\n",
k,(*(pptr+1)).weight[k],k,(*(pptr+1)).time[k]);
fprintf(testout,"Infeasibility = %d\n",(*(pptr+1)).infeas);
numinfeas = 0;
for (k=1;k<=maxpop;k++)
{
if ((*(pptr+k)).infeas >= 1)
numinfeas++;
}
fprintf(testout,"Number infeasible = %d\n",numinfeas);
fprintf(testout,"Generations = %d\n",count);
if (k==l)
fprintf(testout,"%d %d\n",xcoor[ii[j]],ycoor[ii[j]]);
else
{
fprintf(testout,"%d %d\n",xcoor[ii[j]],ycoor[ii[j]]);
fprintf(testout,"%d %d\n\n",xcoor[nload+1],ycoor[nload+1]);
if (l<=nveh)
{
fprintf(testout,"%d %d\n",xcoor[nload+1],ycoor[nload+1]);
}
}
fprintf(outfile,"time = %f seconds\n",(time_passed/1e6));
fprintf(outfile,"clevel, seed, count, value %d %d %d %f\n",clevel,seed,
count,(*(pptr+1)).val);
} /* end seed loop */
fclose(outfile);
fclose(gengraph);
fclose(geninfs);
}

void setpop()
{
int i,j;
void eval();
float urand();
int pick();

/*******************************************************************
* This procedure assigns a value to each solution based on the     *
* distance traveled and the penalty                                *
*******************************************************************/
void eval(m)
int m;
{
int i,j,k,l,w[maxveh+1];
float t[maxveh+1],tottime;
float f,overweight,overtime;

tottime = 0;
for (i=1;i<=nload;i++)
{
ii[i] = i;
ff[i] = (*(pptr+m)).gene[i];
}
hsort(nload);
overweight = 0.0;
overtime = 0.0;
f = 0.0;
for (k=1;k<=nveh;k++)
w[k] = 0;
for (k=1;k<=nveh;k++)
t[k] = 0.0;
for (i=1;i<=nload;i++)
{
k = ff[i];
if (i<nload)
l = ff[i+1];
else
l = nveh + 1;
if (i==1) /* if leaving depot */
{
f = (*(dptr+nload+1)).node[ii[i]];
t[k] += (*(dptr+nload+1)).node[ii[i]] + stoptime;
}
if (k == l) /* if using same vehicle for next stop */
{
f += (*(dptr+ii[i])).node[ii[i+1]];
t[k] += (*(dptr+ii[i])).node[ii[i+1]] + stoptime;
}
else /* using different vehicle for next stop */
{
f += (*(dptr+ii[i])).node[nload+1];
t[k] += (*(dptr+ii[i])).node[nload+1];
if (l <= nveh) /* next vehicle leaves the depot */
{
f += (*(dptr+nload+1)).node[ii[i+1]];
t[l] += (*(dptr+nload+1)).node[ii[i+1]] + stoptime;
}
}
w[k] += custweight[ii[i]];
}
(*(pptr+m)).infeas = 0;
for (k=1;k<=nveh;k++)
{
if (w[k] > maxweight)
{
(*(pptr+m)).infeas++;
overweight = overweight + (w[k] - maxweight);
}
if (t[k] > timelimit)
{
(*(pptr+m)).infeas++;
overtime = overtime + (t[k] - timelimit);
}
}
(*(pptr+m)).val = f + (pow(overweight,2.) + pow(overtime,2.)) *
(*(pptr+m)).infeas;
/* (*(pptr+m)).val = f + (overweight * .25) + (overtime * .25); */
(*(pptr+m)).realval = f;
if ((*(pptr+m)).infeas > 0)
if ((*(pptr+m)).val < bestinfs)
bestinfs = (*(pptr+m)).val;
if ((*(pptr+m)).infeas == 0)
if ((*(pptr+m)).val > worstfs)
{
worstfs = (*(pptr+m)).val;
worstchr = m;
}
for (i=1;i<=nveh;i++)
{
(*(pptr+m)).weight[i] = w[i];
(*(pptr+m)).time[i] = t[i];
}
}

/****************************************************************/
/* heapsorts arrays ff[n] and ii[n] in increasing order of ff   */
/****************************************************************/
void hsort(n)
int n;
{
int l,i,j,ir,rri,stop;
float rrf;
l = n/2 + 1;
ir = n;
stop = 0;
while (!stop) { /* printf("%d %d %d %d\n",l,ir,i,j);
for (i=1;i<=nrep;i++) printf("%.0f ",ff[i]);
printf("\n"); */
if (l > 1)
{
l--;
rrf = ff[l]; rri = ii[l];
}
else
{
rrf = ff[ir]; rri = ii[ir];
ff[ir] = ff[1]; ii[ir] = ii[1];
ir--;
if (ir==1)
{
ff[1] = rrf; ii[1] = rri;
stop = 1;
}
};
if (!stop)
{
i = l;
j = l+l;
while (j<=ir)
{
if (j<ir)
if (ff[j] < ff[j+1]) j++;
if (rrf < ff[j])
{
ff[i] = ff[j]; ii[i] = ii[j];
i = j;
j = j + j;
}
else j = ir + 1;
ff[i] = rrf; ii[i] = rri;
} /* while */
}
}
}

/************************************************************/
int pick(n)
int n;
{
float p;
int p1;
p = rand();
p = p*n/2147483647.0;
p1 = p + 1;
if (p1 < 1)
p1 = 1;
if (p1 > n)
p1 = n;
return p1;
}
/************************************************************/
float urand()
{
float p;
p = rand();
p = p/2147483647.0;
return p;
}

/****************************************************************
* Reproduction procedure                                        *
*****************************************************************/
void repro()
{
int i,j,k,l,split,m,stop,n,ncr;
int v,genecnt,bcnt,acnt;
int numrep;
float z;
float addconst,bestgene,bestval;

/* initialize */
for (i=1;i<=maxpop;i++)
{
for (j=1;j<=nload;j++)
(*(opptr+i)).gene[j] = (*(pptr+i)).gene[j];
(*(opptr+i)).val = (*(pptr+i)).val;
(*(opptr+i)).realval = (*(pptr+i)).realval;
(*(opptr+i)).infeas = (*(pptr+i)).infeas;
for (l=1;l<=nveh;l++)
{
(*(opptr+i)).weight[l] = (*(pptr+i)).weight[l];
(*(opptr+i)).time[l] = (*(pptr+i)).time[l];
}
ii[i] = i;
ff[i] = (*(opptr+i)).val;
}
hsort(maxpop);
/* replicate top nrep solutions */
for (i=1;i<=nrep;i++)
{
for (j=1;j<=nload;j++)
(*(pptr+i)).gene[j] = (*(opptr+ii[i])).gene[j];
(*(pptr+i)).val = (*(opptr+ii[i])).val;
(*(pptr+i)).realval = (*(opptr+ii[i])).realval;
(*(pptr+i)).infeas = (*(opptr+ii[i])).infeas;
for (l=1;l<=nveh;l++)
{
(*(pptr+i)).weight[l] = (*(opptr+ii[i])).weight[l];
(*(pptr+i)).time[l] = (*(opptr+ii[i])).time[l];
}
if ((*(pptr+i)).infeas > 0)
if ((*(pptr+i)).val < bestinfs)
bestinfs = (*(pptr+i)).val;
if ((*(pptr+i)).infeas == 0)
if ((*(pptr+i)).val > worstfs)
{
worstfs = (*(pptr+i)).val;
worstchr = i;
}
}
if (count == pf*(count/pf)) { /* check for duplicate genes */
for (i=1;i<=nload;i++)
{
genecnt = 1;
bestgene = (*(opptr+ii[1])).gene[i];
for (l=2;l<=maxpop;l++)
{
if ((*(opptr+ii[l])).gene[i] == bestgene)
{
genecnt++;
}
}
if (genecnt > 70)
{
for (j=1;j<=20;j++)
{
l = pick(maxpop-nrep);
(*(opptr+ii[l+nrep])).gene[i] = pick(nveh) + urand();
}
}
}
}
vick = 0;
/* mate maxpop-nrep random pairs */
i = nrep;
stop = maxpop - nmut;
while (i < stop)
{
i++;
j = pick(maxpop);
k = pick(maxpop);
for (m=1;m<=nload;m++)
{
n = pick(100); /* printf("clevel = %d %d\n",clevel,n); */
if (n<=clevel)
{
(*(pptr+i)).gene[m] = (*(opptr+j)).gene[m];
(*(pptr+i+1)).gene[m] = (*(opptr+k)).gene[m];
}
else
{
(*(pptr+i)).gene[m] = (*(opptr+k)).gene[m];
(*(pptr+i+1)).gene[m] = (*(opptr+j)).gene[m];
}
}
eval(i);
eval(i+1);
if ((*(pptr+i+1)).val < (*(pptr+i)).val)
{
for (m=1;m<=nload;m++)
(*(pptr+i)).gene[m] = (*(pptr+i+1)).gene[m];
(*(pptr+i)).val = (*(pptr+i+1)).val;
(*(pptr+i)).realval = (*(pptr+i+1)).realval;
(*(pptr+i)).infeas = (*(pptr+i+1)).infeas;
for (l=1;l<=nveh;l++)
{
(*(pptr+i)).weight[l] = (*(pptr+i+1)).weight[l];
(*(pptr+i)).time[l] = (*(pptr+i+1)).time[l];
}
}
}
/* create mutations */
for (i=1;i<=nmut;i++)
{
for (j=1;j<=nload;j++)
(*(pptr+i+maxpop-nmut)).gene[j] = pick(nveh) + urand();
}
}
/****************************************************************
* Read in the Data Set                                          *
*****************************************************************/
void readinb()
{
int i,j,k,l;
float diffx;
float diffy;

infile = fopen("vrp.dat","r");
test = fopen("test.tst","w");
fscanf(infile,"%d %d %d %d %d\n",&nload,&maxweight,&nveh,&stoptime,&timelimit);
nload--;
for (i=1;i<=nload;i++)
fscanf(infile,"%d %d %d %d\n",&xcoor[i],&ycoor[i],&custweight[i],&custno[i]);
fscanf(infile,"%d %d",&xcoor[nload+1],&ycoor[nload+1]);
fclose(infile);
/****************************************************************
*** Calculate distance matrix                                 ***
*****************************************************************/
for (i=1;i<=nload+1;i++)
for (j=1;j<=nload+1;j++)
{
diffx = xcoor[i]-xcoor[j];
diffy = ycoor[i]-ycoor[j];
(*(dptr+i)).node[j] = sqrt(pow(diffx,2.)+pow(diffy,2.));
}
fprintf(test,"Loads = %d, Wt Limit = %d, Veh = %d, St Time = %d, Time Limit = %d\n",
nload,maxweight,nveh,stoptime,timelimit);
for (i=1;i<=nload;i++)
fprintf(test,"%d %d %d %d\n",xcoor[i],ycoor[i],custweight[i],custno[i]);
fprintf(test,"%d %d\n",xcoor[nload+1],ycoor[nload+1]);
fprintf(test,"\n\n");
for (i=1;i<=nload+1;i++)
{
for (j=1;j<=nload+1;j++)
fprintf(test," %.1f",(*(dptr+i)).node[j]);
fprintf(test,"\n");
}
fclose(test);
}
APPENDIX 2
Appendix 2 displays the vehicle routes of the best solutions produced
by the genetic algorithm. These results are displayed for the data sets
which were presented in Table 3.1.
Table A2.1. 22 customers, 3 vehicles, without time constraint
Vehicle #   Route of Vehicle                         Wt     Time
1           0 18 19 20 22 17 14 15 16 3 2 1 6 0     2730   404.0
2           0 12 11 9 8 5 4 21 7 0                  3100   297.8
3           0 10 13 0                               4300    87.9
1 0 1 9 26 29 24 25 27 28 15 1 8 0 3950 326.7
2 0 23 1 0 1 1 1 2 1 6 1 3 7 1 7 9 8 1 4 21 0 4425 275.3
3 0 22 2 5 4 1 6 3 20 0 4375 236.6
1 0 6 23 30 3 1 1 8 29 28 27 19 0 34577 972.5
2 0 22 26 2 1 1 4 1 5 25 1 6 20 24 9 1 1 0 37358 895.4
3 0 1 7 2 4 3 1 5 32 7 8 1 3 1 0 1 2 0 26630 902.6
1 0 6 21 1 4 1 5 25 16 20 24 26 22 9 1 1 0 37805 1 076.2
2 0 7 8 19 27 28 29 1 8 3 1 30 23 0 374 14 744.7
3 0 1 7 2 4 3 1 5 32 1 3 10 12 0 23346 828.5
Table A2.5. 50 customers, 6 vehicles, with time constraint
Vehicle # Route of Vehicle Wt Time
1 0 5 49 10 39 33 45 15 44 37 1 2 0 1 55 1 99. 1
2 0 2 29 20 35 36 3 32 0 1 15 1 68.2
3 0 27 48 8 26 31 28 22 1 0 1 02 1 60.4
4 0 11 16 50 21 34 30 9 38 46 0 1 28 1 72.9
5 0 18 13 41 40 19 42 17 4 47 0 1 57 1 99. 1
6 0 6 23 7 43 24 25 14 0 1 20 1 6 1 .6
1 0 11 2 20 36 35 29 21 30 1 0 1 5 37 () 1 57 25 1 .9
2 () 5 9 34 50 16 32 8 48 27 0 1 50 1 95.7
3 0 12 47 18 25 14 6 () 1 59 1 2 1 .4
4 0 7 43 24 4 17 44 45 33 39 49 38 46 0 1 53 287 . 3
5 0 42 19 40 41 13 23 26 3 1 28 3 22 1 () 158 300.4
1 0 67 45 48 28 22 62 68 () 1 40 1 5 1 .6
2 0 12 31 10 38 65 66 () 1 37 1 66.4
3 0 5 47 36 69 21 74 30 0 1 20 1 55.6
4 0 33 73 1 63 3 44 40 17 0 1 43 1 58 .6
5 0 39 9 50 18 55 25 32 0 1 29 1 7 1 .4
6 0 15 20 70 60 71 37 0 71 1 55.6
7 0 53 8 46 34 52 27 29 75 () 153 1 58 . 3
8 0 51 16 49 24 56 23 6 0 1 14 1 61 . 8
9 0 26 72 58 11 59 14 0 1 32 1 55.6
10 0 2 61 64 42 41 43 () 1 13 1 65.2
11 0 7 35 19 54 13 57 4 () 1 12 1 50.0
Table A2.8. 75 customers, 10 vehicles, without time constraint
Vehicle # Route of Vehicle Wt Time
1 0 31 55 18 24 2 47 48 0 1 37 234.5
2 0 74 61 41 56 23 3 32 9 0 1 40 2 1 5.4
3 0 30 71 60 46 59 38 26 0 131 235 . 1
4 0 68 21 29 27 13 66 65 0 1 26 205 . 3
5 0 67 19 54 15 37 45 12 17 0 1 40 198. 1
6 0 10 11 7 57 70 69 28 0 1 40 233.5
7 0 4 8 35 14 58 72 39 25 0 1 39 1 95.2
8 0 75 5 36 20 34 40 51 0 1 39 1 93.4
9 0 44 50 49 63 73 22 1 33 6 0 1 37 206.2
10 0 53 52 62 64 42 43 16 0 1 35 2 1 4. 1
1 0 30 52 59 8 34 0 1 00 1 52.4
2 0 22 61 5 27 67 0 95 1 48.4
3 0 11 14 46 0 95 99.6
4 0 6 32 53 4 0 99 1 35.7
5 0 26 12 3 24 23 43 0 96 1 60.9
6 0 75 2 54 66 0 99 1 57.9
7 0 33 42 64 74 29 15 0 97 1 85 .4
8 0 25 18 49 28 36 13 19 0 1 00 247.2
9 0 37 70 71 69 41 51 40 () 96 221 . 1
10 0 62 9 55 10 65 35 () 99 220.7
11 0 72 31 16 63 73 20 57 0 98 245. 1
12 0 7 58 39 50 56 () 1 00 1 73 . 3
13 0 48 47 60 21 1 0 98 1 65.9
14 0 17 44 38 45 68 0 92 158.1
Table A2.10. 100 customers, 9 vehicles, with time constraint
Vehicle # Route of Vehicle Wt Time
1 0 54 4 1 5 43 1 4 3 8 44 42 1 3 0 1 34 229.7
2 0 58 2 57 87 92 98 85 6 1 8 3 46 1 72 228 . 1
48 0
3 0 52 84 1 7 45 8 82 1 8 60 5 93 1 50 2 1 5. 6
59 0
4 0 31 10 1 50 68 80 55 25 39 56 0 1 53 226. 3
5 0 88 19 49 64 63 90 32 30 70 69 1 49 227.5
27 0
6 0 7 47 36 1 1 62 20 66 7 1 65 35 1 75 300. 1
33 12 0
7 0 89 6 96 99 1 6 86 9 1 1 00 37 97 1 93 22 1 .2
95 94 28 0
8 0 76 77 3 79 78 34 9 5 1 8 1 29 1 44 223.7
24 0
9 0 53 40 21 73 72 74 22 4 1 75 23 1 88 228.2
67 26 0
1 0 28 76 77 68 80 54 1 2 26 0 1 39 1 42.4
2 0 1 3 87 37 93 85 16 61 5 60 89 0 1 96 1 74.4
3 0 69 70 30 32 20 51 3 29 24 55 1 96 320 . 3
39 67 25 4 53 0
4 0 2 1 56 23 75 74 22 41 57 1 5 43 1 97 297 .5
38 1 4 42 97 94 0
5 0 58 40 72 73 2 95 59 99 96 6 0 1 39 1 72.8
6 0 50 79 78 34 35 65 66 7 1 9 81 200 258 . 2
33 1 27 0
7 0 31 10 90 63 64 1 1 7 82 46 45 200 34 1 .6
1 7 86 44 91 1 00 98 92 0
8 0 52 8 8 62 19 49 36 47 48 8 84 191 24 1 . 3
83 1 8 0
Table A2.12. 100 customers, 11 vehicles, with time constraint
Vehicle # Route of Vehicle Wt Time
1 0 57 55 53 56 58 60 59 40 43 0 1 80 9 1 3.9
2 0 4 2 6 9 12 14 16 11 10 0 1 60 905.6
3 0 20 22 25 26 8 7 3 5 75 0 1 60 87 1 .9
4 0 46 51 31 35 32 33 36 34 29 24 0 1 90 997.7
5 0 41 42 44 45 48 50 49 27 28 2 1 0 1 30 97 1 .9
6 0 90 88 98 96 97 1 00 17 18 30 0 1 70 952.2
7 0 91 87 86 77 71 70 79 74 65 67 0 1 80 1 033.5
8 0 1 99 95 94 92 93 23 0 1 30 741 . 1
9 0 63 81 78 76 73 80 61 64 62 0 1 70 939.3
10 0 66 69 68 54 72 82 83 84 85 89 0 1 70 1 039. 1
11 0 13 15 19 38 39 37 52 47 0 1 70 860.9
1 0 20 25 26 28 1 8 1 7 1 9 1 6 1 0 0 1 80 908.8
2 0 46 45 44 42 43 23 1 3 1 1 9 8 0 1 60 1 008.9
3 0 90 89 85 82 77 78 8 1 63 0 1 80 835.7
4 0 67 65 66 59 60 58 56 53 54 55 200 1 096. 1
69 0
5 0 62 74 72 61 41 47 27 24 22 2 1 0 1 70 987.0
6 0 49 48 51 50 52 29 30 1 5 1 4 1 2 200 1 304.5
6 2 1 0
7 0 40 57 68 64 80 79 73 70 7 1 76 0 1 60 1 068.2
8 0 7 4 96 94 92 93 97 99 75 5 0 1 80 999.7
9 0 37 38 35 31 9 1 84 88 83 86 87 0 1 80 1 065.9
10 0 39 34 36 33 32 3 1 00 95 98 0 200 979.2
Table A2.14. 100 customers, 14 vehicles, without time constraint
Vehicle # Route of Vehicle Wt Time
1 0 2 41 22 74 77 33 81 51 90 63 0 1 12 247.0
2 0 95 59 38 43 42 87 0 1 02 1 56.5
3 0 18 82 7 24 29 80 12 26 0 87 1 98.8
4 0 53 54 39 56 1 62 0 98 1 92.5
5 0 52 36 49 27 68 28 0 1 12 1 93.3
6 0 40 21 72 4 50 20 30 0 1 07 1 80.5
7 0 31 11 64 32 71 9 0 1 02 1 9 1 .3
8 0 6 96 99 93 5 94 0 98 1 1 0.6
9 0 97 44 85 70 66 69 0 1 07 203.9
10 0 92 37 14 16 17 46 47 19 88 0 1 05 2 1 3.9
11 0 58 67 15 57 100 91 61 45 0 1 05 239.4
12 0 76 3 79 78 34 35 65 10 0 1 10 200.6
13 0 13 98 86 84 83 89 0 101 1 40.8
14 0 55 25 23 75 73 60 8 48 0 1 12 229.0
1 0 6 3 9 8 1 2 29 34 33 27 24 0 91 7 1 1 .4
2 0 55 56 60 6 1 65 45 43 40 59 57 191 1 025.6
62 64 54 68 72 0
3 0 99 1 00 98 97 108 5 4 1 0 1 5 1 3 1 32 684.9
1 17 0
4 0 111 2 1 7 1 1 1 4 1 9 3 5 26 20 () 1 17 698.3
5 () 82 84 1 1 3 83 90 7 3 79 7 1 74 69 1 17 7 1 3.9
1 20 0
6 0 1 15 21 23 36 3 1 30 25 22 1 6 1 09 0 1 10 684.7
7 0 70 76 78 77 66 63 58 53 52 0 1 25 67 1 .7
8 0 17 28 32 44 46 49 47 48 42 95 0 95 7 14. 1
9 0 67 75 80 5 1 50 41 37 38 39 0 151 674.8
10 () 88 87 96 1 10 1 16 1 04 107 1 06 102 92 1 25 7 1 8.8
85 1 12 86 ()
11 0 105 1 03 1 01 93 9 1 1 8 1 1 8 1 1 4 94 89 121 675.8
81 1 19 0
Table A2.16. 120 customers, 7 vehicles, without time constraint
Vehicle # Route of Vehicle Wt Time
1 0 89 46 50 51 63 80 70 111 81 1 1 96 1 222.0
5 6 17 90 1 17 88 0
2 0 95 68 72 98 1 00 96 93 1 20 86 1 8 198 1 425.5
11 53 58 62 47 28 22 31 2 0
3 0 85 92 43 45 19 8 94 1 02 1 06 1 07 1 96 1 368.0
1 04 1 03 99 1 14 20 33 36 29 1 08 0
4 0 1 19 23 26 35 32 34 27 3 10 1 12 1 99 1 462.2
1 05 74 69 101 1 13 118 59 65 1 10 0
5 0 84 83 4 9 25 37 38 1 15 52 56 1 93 1 1 8 8.2
61 60 77 71 1 16 82 0
6 0 91 1 09 30 24 16 21 44 40 64 66 1 94 1 239.7
76 73 67 97 13 14 7 0
7 0 48 49 41 42 39 79 75 78 55 54 1 99 1 054.4
57 12 15 87 0
1 0 1 10 25 95 1 4 55 1 34 67 1 3 4 1 40 191 260.4
64 87 56 0
2 0 1 33 1 32 98 23 69 1 1 4 99 43 86 61 1 97 260.4
7 27 0
3 0 32 59 2 1 00 1 26 50 1 30 30 9 38 0 1 45 1 98.7
4 0 11 1 27 1 29 29 28 22 1 20 48 1 38 0 1 15 1 9 1 .6
5 0 60 8 26 1 13 1 40 1 1 2 57 97 24 96 1 44 253.2
58 1 02 46 0
6 0 1 48 88 66 1 35 1 43 4 1 49 68 6 0 1 53 1 99.8
7 0 81 1 83 131 1 28 84 2 1 79 74 34 1 85 26 1 .7
1 04 39 54 0
8 0 17 93 19 94 1 36 1 1 1 141 1 50 1 09 0 131 1 92. 8
9 0 77 18 1 42 1 47 15 52 63 1 44 1 03 76 0 1 69 1 97.9
10 0 90 1 05 75 89 1 17 73 1 0 49 5 0 1 46 1 98.6
11 0 71 1 22 91 65 42 45 1 24 106 1 25 33 1 72 254.4
72 1 23 1 08 0
12 0 78 1 39 47 1 46 1 45 1 37 44 107 92 37 1 79 1 94.5
12 0
13 0 62 118 16 101 3 82 31 80 5 1 0 161 1 96. 1
14 0 53 20 35 85 36 1 1 5 1 2 1 1 1 6 70 1 1 9 0 1 47 2 1 7.4
Table A2.18. 150 customers, 12 vehicles, without time constraint
Vehicle # Route of Vehicle Wt Time
1 0 5 90 15 1 07 92 41 88 64 3 8 1 04 1 94 3 1 9.6
30 50 1 30 83 0
2 0 49 45 1 06 1 0 1 24 65 93 44 89 39 1 99 347 . 1
71 1 42 1 35 1 43 0
3 0 51 80 101 1 28 84 1 1 5 2 48 60 8 1 1 88 285.2
27 46 0
4 0 1 44 19 13 67 1 3 8 20 35 59 1 46 87 1 63 324.6
1 48 56 0
5 0 1 26 1 00 1 19 1 4 25 1 33 4 1 49 1 09 1 45 161 2 1 5.4
17 0
6 0 7 43 24 1 12 131 53 1 27 1 29 1 6 98 1 92 354.5
86 96 0
7 0 1 1 20 113 1 40 22 9 1 1 7 1 25 123 1 03 1 86 333.9
1 08 1 37 37 63 0
8 0 68 1 34 1 36 1 39 57 69 8 3 1 29 2 1 1 94 34 1 .7
79 74 34 76 0
9 0 42 1 50 1 47 1 02 82 1 1 4 99 23 78 9 1 191 357 . 8
72 33 0
10 0 47 54 3 121 70 28 1 1 6 36 85 1 1 8 1 92 294.6
62 0
11 0 141 40 94 66 1 1 1 1 8 1 1 0 55 52 1 22 1 97 3 1 9.4
105 75 73 0
12 0 11 26 61 1 32 97 58 95 6 32 77 178 262.5
12 0
Table A2.19. 199 customers, 18 vehicles, with time constraint
Vehicle # Route of Vehicle Wt Time
1 0 1 88 16 73 1 47 181 1 16 23 1 94 61 0 1 53 1 90.7
2 0 1 57 1 56 94 121 138 37 88 1 40 22 1 90 318 4 1 5.5
1 85 1 43 89 1 37 62 1 60 1 83 1 1 98 0
3 0 1 39 176 1 02 78 1 77 19 11 1 80 7 0 1 38 1 96.6
4 0 1 87 32 57 1 09 39 40 50 1 29 7 1 1 26 0 1 77 191.1
5 0 173 83 1 23 178 84 14 1 67 1 79 99 58 1 89 257 . 3
26 0
6 0 1 59 1 92 1 86 141 1 42 42 1 58 66 1 93 0 1 32 1 93 . 5
7 0 1 68 1 00 1 30 151 1 17 44 1 06 1 44 74 0 1 30 1 92.4
8 0 86 93 2 1 20 1 55 36 21 64 28 1 0 1 0 1 54 1 96.5
9 0 1 84 1 99 1 13 43 191 1 95 1 04 3 81 0 158 1 99.0
10 0 60 1 75 46 34 45 59 98 79 1 3 1 52 0 1 67 1 96.0
11 0 171 1 66 1 24 1 54 8 51 10 75 3 1 25 264 353.6
18 1 46 56 9 0
12 0 55 1 45 1 48 92 1 35 1 63 1 62 1 64 1 33 70 1 94 348. 3
1 28 27 4 87 0
13 0 1 27 1 25 1 53 5 48 1 74 47 82 1 72 30 0 1 83 1 9 1 .7
14 0 33 1 05 1 82 49 1 07 24 63 95 54 1 1 2 0 1 52 1 95.6
15 0 1 14 91 68 1 15 1 97 1 36 1 96 53 67 0 1 29 1 95.2
16 0 17 97 131 80 119 52 38 1 70 1 65 77 1 99 257. 7
1 50 65 0
17 0 96 6 111 1 5 20 1 22 103 29 0 1 54 1 97.9
18 0 76 12 1 69 1 6 1 7 2 1 1 8 1 10 1 89 85 1 34 1 95 30 1 .7
108 69 1 32 35 1 49 0
Table A2.20. 199 customers, 16 vehicles, without time constraint
Vehicle # Route of Vehicle Wt Time
1 0 29 21 1 14 1 22 47 79 1 32 98 1 96 1 05 200 4 1 6.7
32 1 63 0
2 0 1 56 91 68 1 0 1 27 1 70 48 73 1 25 1 53 200 509 . 3
19 72 1 47 0
3 0 40 49 1 84 90 1 6 74 25 3 8 1 68 0 1 97 279.4
4 0 88 1 02 8 1 0 1 00 1 82 1 83 1 99 62 65 200 432. 1
46 1 69 78 35 0
5 0 1 57 1 54 1 67 1 80 1 87 24 80 1 89 1 26 95 200 378.6
69 60 0
6 0 1 85 141 59 1 2 1 1 79 83 8 1 1 93 66 1 39 1 99 4 1 6.2
87 1 34 51 0
7 0 188 1 27 7 1 1 1 1 1 0 75 1 8 1 1 1 8 1 94 1 66 1 99 392. 1
1 20 2 0
8 0 1 44 1 59 1 06 198 20 1 08 50 26 6 1 62 1 99 55 1 .3
18 1 92 99 9 0
9 0 1 30 11 131 1 86 1 04 67 1 73 36 0 200 293. 1
10 0 76 71 119 41 64 1 37 1 46 1 35 39 1 09 201 522.4
1 78 58 1 50 1 64 0
11 0 37 1 24 155 1 74 94 1 75 23 1 36 77 57 1 97 476.6
61 1 28 1 23 70 4 0
12 0 34 1 1 15 1 43 86 161 1 65 1 4 44 63 0 1 96 353.3
13 0 1 33 1 17 55 1 95 158 5 1 7 1 1 7 56 3 1 203 423.0
1 60 0
14 0 1 52 1 72 82 1 5 1 97 33 1 1 2 1 42 22 1 49 1 96 422.6
176 15 191 43 0
15 0 45 13 1 97 1 48 107 96 1 03 42 53 93 200 483.9
54 84 0
16 0 12 85 52 1 38 28 89 1 40 30 1 29 1 1 6 1 99 5 87.4
113 1 90 92 1 45 3 0
Table A2.21. 199 customers, 17 vehicles, without time constraint
Vehicle # Route of Vehicle Wt Time
1 0 17 6 1 1 4 1 98 1 85 147 72 1 1 8 1 34 19 1 96 338.9
175 0
2 0 1 25 27 1 7 1 68 1 09 97 55 1 44 74 1 52 0 174 295. 5
3 0 1 1 1 1 97 90 1 83 1 04 1 37 1 99 1 1 6 1 82 1 30 1 68 304. 1
1 68 0
4 0 24 73 1 92 1 27 87 35 69 8 1 24 1 72 200 345. 9
67 0
5 0 1 87 1 06 32 25 92 1 46 50 1 80 1 33 14 1 87 267.9
7 0
6 0 1 62 1 32 1 70 1 77 98 1 55 36 1 38 20 5 1 97 333.3
54 33 0
7 0 66 62 41 42 141 1 1 5 8 1 56 22 1 36 1 93 345.4
57 1 00 1 93 1 05 0
8 0 93 1 73 1 54 1 67 1 76 21 43 23 1 95 1 42 1 92 39 1 .5
91 191 1 88 12
9 0 1 66 48 1 12 1 26 1 50 1 64 85 84 1 79 60 0 1 62 3 1 3. 4
10 0 38 80 131 65 99 1 23 1 3 1 53 79 37 1 98 340.7
1 90 1 1 5 53 1 94 0
11 0 5 1 46 83 1 57 1 02 1 69 1 6 1 86 4 0 1 94 283.4
12 0 1 1 0 39 75 1 65 52 86 101 1 40 1 2 1 94 1 99 364.5
30 29 64 28 0
13 0 8 1 1 49 71 1 19 1 29 77 1 78 174 1 39 89 1 86 359.3
1 84 1 20 0
14 0 47 1 22 82 1 48 1 35 145 1 07 63 15 88 1 83 364.0
1 03 59 0
15 0 1 17 1 59 44 9 3 95 61 1 1 3 1 43 1 60 1 95 28 1 .0
1 96 2 0
16 0 76 40 151 96 1 6 1 1 08 1 28 78 70 26 1 79 3 27.7
1 8 1 49 0
17 0 58 45 34 11 1 0 3 1 1 89 1 63 56 18 0 1 83 270. 8
VITA