Fig. 1. Diagram that illustrates the way in which our micro-GA works.
or genotypical level) could also be used as a criterion for convergence. The
third type of elitism is applied at certain intervals (defined by a parameter called
the "replacement cycle"): we take a certain number of points from all the regions
of the Pareto front generated so far and use them to fill in the replaceable
memory. Depending on the size of the replaceable memory, we choose as many
points from the Pareto front as necessary to guarantee a uniform distribution.
This process is intended to use the best solutions generated so far as the starting
point for the micro-GA, so that we can improve them (either by getting closer
to the true Pareto front or by achieving a better distribution). It also prevents
the contents of the replaceable memory from becoming homogeneous.
To keep diversity in the Pareto front, we use an approach similar to the adap-
tive grid proposed by Knowles & Corne [14]. The idea is that once the archive
that stores nondominated solutions has reached its limit, we divide the search
space that this archive covers, assigning a set of coordinates to each solution.
Each newly generated nondominated solution is then accepted only if the
geographical location to which it belongs is less populated than the most crowded
location. Alternatively, the new nondominated solution can also be accepted if
it belongs to a location outside the previously specified boundaries. In other
words, the less crowded regions are given preference, so that the spread of the
individuals along the Pareto front becomes more uniform.
The pseudo-code of the algorithm is the following:
function Micro-GA
begin
Generate starting population P of size N
and store its contents in the population memory M
/* Both portions of M will be filled with random solutions */
i=0
while i < Max do
begin
Get the initial population for the micro-GA (Pi ) from M
repeat
begin
Apply binary tournament selection
based on nondominance
Apply two-point crossover and uniform mutation
to the selected individuals
Apply elitism (retain only one nondominated vector)
Produce the next generation
end
until nominal convergence is reached
Copy two nondominated vectors from Pi
to the external memory E
if E is full when trying to insert ind_b
then adaptive grid(ind_b)
Copy two nondominated vectors from Pi to M
if i mod replacement cycle = 0
then apply second form of elitism
i=i+1
end while
end function
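To make the selection step inside the loop concrete, the following is a minimal Python sketch of a Pareto-dominance test and a binary tournament based on nondominance, assuming minimization; the function names are illustrative and do not come from our implementation.

import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def binary_tournament(population, objectives):
    """Pick two individuals at random; the dominating one wins,
    and ties are broken randomly."""
    i, j = random.sample(range(len(population)), 2)
    if dominates(objectives[i], objectives[j]):
        return population[i]
    if dominates(objectives[j], objectives[i]):
        return population[j]
    return population[random.choice((i, j))]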
The adaptive grid requires two parameters: the expected size of the Pareto
front and the number of positions into which we will divide the solution space
for each objective. The first parameter is defined by the size of the external
memory and is provided by the user. The second parameter (the number of
positions into which we will divide the solution space) also has to be provided
by the user, although we have found that our approach is not very sensitive to
it (e.g., in most of our experiments a value of 15 or 25 produced very similar
results). Determining the location of a certain individual has a low computational
cost (it is based on the values of its objectives, as indicated before). However,
when an individual is out of range, we have to relocate all the positions of the
grid. Nevertheless, this situation does not occur very often, and we allocate a
certain amount of extra room in the first and last locations of the grid to
minimize its occurrence.
When the external memory is full, we use the adaptive grid to decide which
nondominated vectors to eliminate. The adaptive grid tries to balance the
number of individuals representing each element of the Pareto set, so that we
can get a uniform spread of points along the Pareto front.
4 Comparison of Results
Several test functions were taken from the specialized literature to evaluate our
approach. In all cases, we generated the true Pareto fronts of the problems using
exhaustive enumeration (with a certain granularity) so that we could make a
graphical comparison of the quality of the solutions produced by our micro-GA.
Since the main aim of this approach has been to increase efficiency, we ad-
ditionally decided to compare running times of our micro-GA against those of
the NSGA II [6] and PAES [14]. In the following examples, the NSGA II was run
using a population size of 100, a crossover rate of 0.8 (using SBX), tournament
selection, and a mutation rate of 1/vars, where vars is the number of decision
variables of the problem. PAES was run using a depth of 5, an archive size of
100, and a mutation rate of 1/bits, where bits refers to the length of the binary
string that encodes the decision variables. The number of fitness function
evaluations was set such that the NSGA II, PAES and the micro-GA could
reasonably cover the true Pareto front of each problem.
$$\text{Minimize } f_2(x_1, x_2) = \frac{g(x_2)}{x_1} \qquad (2)$$

where:

$$g(x_2) = 2.0 - \exp\left\{-\left(\frac{x_2 - 0.2}{0.004}\right)^2\right\} - 0.8\,\exp\left\{-\left(\frac{x_2 - 0.6}{0.4}\right)^2\right\} \qquad (3)$$
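A direct Python transcription of equations (2)-(3) is shown below. The definition of f1 is not visible in this excerpt; in Deb's original formulation of this problem, f1(x1, x2) = x1 with 0.1 <= x1 <= 1.0, which the sketch assumes.

import math

def g(x2):
    # Equation (3): the narrow exponential around x2 = 0.2 holds the global
    # optimum of g; the wider one around 0.6 produces a local Pareto front.
    return (2.0 - math.exp(-((x2 - 0.2) / 0.004) ** 2)
            - 0.8 * math.exp(-((x2 - 0.6) / 0.4) ** 2))

def f1(x1, x2):
    return x1  # assumed from Deb's formulation; not shown in this excerpt

def f2(x1, x2):
    return g(x2) / x1  # equation (2); x1 > 0 in the assumed domain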
The parameters used by the micro-GA for this example were: size of the ex-
ternal memory = 100, size of the population memory = 50, number of iterations
= 1500, number of iterations of the micro-GA (to achieve nominal convergence)
= 2, number of subdivisions of the adaptive grid = 25, crossover rate = 0.7,
mutation rate = 0.029, percentage of non-replaceable memory = 0.3, population
size (of the micro-GA) = 4, replacement cycle at every 50 iterations.
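For reference, these settings can be collected into a single configuration, e.g. a Python dictionary (the key names are ours):

# Parameter settings for the first test function, transcribed from the text.
MICRO_GA_PARAMS = {
    "external_memory_size": 100,
    "population_memory_size": 50,
    "iterations": 1500,
    "micro_ga_iterations": 2,        # to achieve nominal convergence
    "grid_subdivisions": 25,
    "crossover_rate": 0.7,
    "mutation_rate": 0.029,
    "non_replaceable_fraction": 0.3,
    "micro_ga_population_size": 4,
    "replacement_cycle": 50,         # third form of elitism every 50 iterations
}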
Our first test function has a local Pareto front to which a GA can be easily
attracted. Fig. 2 shows the true Pareto front for this problem with a continuous
line, and the results found by the NSGA II, PAES and our micro-GA are shown
with points. Similar fronts were found by the three approaches. For this exam-
ple, both the NSGA II and PAES performed 12,000 evaluations of the fitness
function. The average running times of each algorithm (over 20 runs) were the
following: 2.601 seconds for the NSGA II (with a standard deviation of 0.33555913),
1.106 seconds for PAES (with a standard deviation of 0.25193672) and only 0.204
seconds for the micro-GA (with a standard deviation of 0.07764461).
$$\text{Minimize } f_1(x) = \begin{cases} -x & \text{if } x \le 1 \\ -2 + x & \text{if } 1 < x \le 3 \\ 4 - x & \text{if } 3 < x \le 4 \\ -4 + x & \text{if } x > 4 \end{cases} \qquad (4)$$

and $-5 \le x \le 10$.
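Equation (4) translates directly into Python, as in the sketch below; the second objective of this test function is not visible in this excerpt, so only f1 is shown.

def f1(x):
    # Piecewise first objective from equation (4), defined for -5 <= x <= 10.
    if x <= 1:
        return -x
    if x <= 3:
        return -2 + x
    if x <= 4:
        return 4 - x
    return -4 + x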
The parameters used for this example were: size of the external memory =
100, size of the population memory = 50, number of iterations = 150, number
of iterations of the micro-GA (to achieve nominal convergence) = 2, number of
subdivisions of the adaptive grid = 25, crossover rate = 0.7, mutation rate =
0.056 (1/L, where L=18 bits in this case), percentage of non-replaceable memory
= 0.3, population size (of the micro-GA) = 4, replacement cycle at every 25
iterations.
This problem has a Pareto front that is disconnected. Fig. 3 shows the true
Pareto front for this problem with a continuous line (the vertical line is obviously
not part of the true Pareto front, but it appears because we used linear segments
to connect every pair of nondominated points). We used points to represent the
solutions found by the NSGA II, PAES and our micro-GA.
Again, similar Pareto fronts were found by the three approaches. For this ex-
ample, both the NSGA II and PAES performed 1,200 evaluations of the fitness
function. The average running times of each algorithm (over 20 runs) were the
following: 0.282 seconds for the NSGA II (with a standard deviation of 0.00014151),
0.107 seconds for PAES (with a standard deviation of 0.13031718) and only 0.017
seconds for the micro-GA (with a standard deviation of 0.0007672).
where:

$$h(x_1, x_2) = \begin{cases} 1 - \sqrt{\dfrac{f_1(x_1, x_2)}{g(x_1, x_2)}} & \text{if } f_1(x_1, x_2) \le g(x_1, x_2) \\[4pt] 0 & \text{otherwise} \end{cases} \qquad (9)$$
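The definitions of f1 and g for this third test function are not visible in this excerpt, but h itself translates directly into code; the argument names below are illustrative only.

import math

def h(f1_val, g_val):
    # Equation (9); assumes f1_val >= 0 and g_val > 0.
    if f1_val <= g_val:
        return 1.0 - math.sqrt(f1_val / g_val)
    return 0.0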
For this example, the average running times of each algorithm (over 20
runs) were the following: 2.519 seconds for the NSGA II (with a standard de-
viation of 0.03648403), 2.497 seconds for PAES (with a standard deviation of
1.03348519) and only 0.107 seconds for the micro-GA (with a standard deviation
of 0.00133949).
Our fourth example is the so-called "unitation versus pairs" problem [9], which
involves the maximization of two functions over bit strings. The first function,
f1, is the number of pairs of adjacent complementary bits found in the string,
and the second function, f2, is the number of ones found in the string. The
Pareto front in this case is discrete. We used a string length of 28, and therefore
the true Pareto front is composed of 15 points.
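Both objectives are straightforward to state in code; the following minimal sketch computes them over a list of 0/1 values (function names are illustrative).

def pairs(bits):
    # f1: number of adjacent positions holding complementary bits (maximized).
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

def ones(bits):
    # f2: number of ones in the string (maximized).
    return sum(bits)

For a string of length 28, f1 ranges over 0..27 and f2 over 0..28; the two objectives conflict because maximizing the number of ones removes complementary pairs.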
The parameters used for this example were: size of the external memory =
100, size of the population memory = 15, number of iterations = 1250, number
of iterations of the micro-GA (to achieve nominal convergence) = 1, number
of subdivisions of the adaptive grid = 3, crossover rate = 0.5, mutation rate
= 0.035, percentage of non-replaceable memory = 0.2, population size (of the
micro-GA) = 4, replacement cycle at every 25 iterations.
Fig. 5 shows the results obtained by our micro-GA for the fourth test func-
tion. A total of 13 (out of 15) elements of the Pareto optimal set were found on
average (only occasionally was our approach able to find the 15 target elements).
PAES was also able to generate 13 elements of the Pareto optimal set on average,
and the NSGA II was only able to generate 8 elements on average.
Fig. 5. Solutions found for the fourth test function, plotted as f2 (number of ones) versus f1 (pairs of adjacent complementary bits).
For this example, both the NSGA II and PAES performed 5,000 evaluations
of the fitness function. The average running times of each algorithm (over 20 runs)
were the following: 2.207 seconds for the NSGA II, 0.134 seconds for PAES and
only 0.042 seconds for the micro-GA.
Borges & Barbosa [2] reported that they were able to find the 15 elements of the
Pareto optimal set for this problem, using a population size of 100 and 5,000
evaluations of the fitness function, although no actual running times of their
approach were reported.
4.5 Test Function 5
Our fifth example is a two-objective optimization problem defined by Kursawe
[16]:
$$\text{Minimize } f_1(\mathbf{x}) = \sum_{i=1}^{n-1} \left(-10 \exp\left(-0.2 \sqrt{x_i^2 + x_{i+1}^2}\,\right)\right) \qquad (10)$$

$$\text{Minimize } f_2(\mathbf{x}) = \sum_{i=1}^{n} \left(|x_i|^{0.8} + 5 \sin(x_i)^3\right) \qquad (11)$$

where:

$$-5 \le x_1, x_2, x_3 \le 5 \qquad (12)$$
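Equations (10)-(12) translate directly into the following Python function (a straightforward transcription; here n = 3, matching the bounds above).

import math

def kursawe(x):
    # Kursawe's two objectives (both minimized) for a decision vector x.
    f1 = sum(-10.0 * math.exp(-0.2 * math.sqrt(x[i] ** 2 + x[i + 1] ** 2))
             for i in range(len(x) - 1))
    f2 = sum(abs(xi) ** 0.8 + 5.0 * math.sin(xi) ** 3 for xi in x)
    return f1, f2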
Fig. 6 shows the true Pareto front for this problem as points. The results
obtained by the NSGA II, PAES and our micro-GA are also shown as points.
It is worth mentioning that PAES could not eliminate some of the dominated
points in the runs performed.
The parameters used for this example were: size of the external memory =
100, size of the population memory = 50, number of iterations = 3000, number
of iterations of the micro-GA (to achieve nominal convergence) = 2, number
of subdivisions of the adaptive grid = 25, crossover rate = 0.7, mutation rate
= 0.019, percentage of non-replaceable memory = 0.3, population size (of the
micro-GA) = 4, replacement cycle at every 50 iterations.
For this example, both the NSGA II and PAES performed 2,400 evaluations
of the fitness function. The average running times of each algorithm (over 20 runs)
were the following: 6.481 seconds for the NSGA II (with a standard deviation of
0.053712), 2.195 seconds for PAES (with a standard deviation of 0.25408319) and
only 0.704 seconds for the micro-GA (with a standard deviation of 0.00692099).
and a refilling process that allows us to approach the true Pareto front
progressively. We also use an adaptive grid (similar to the one used by
PAES [14]) that maintains diversity in an efficient way.
The approach still needs further validation (particularly with MOPs that have
more decision variables and constraints), and it needs to be compared with other
EMOO approaches (under similar conditions) using some of the metrics that
have been proposed in the literature (see for example [22]). We have only provided
running times produced on a PC, so a more exhaustive comparison is still
pending. However, the preliminary results presented in this paper indicate the
potential of the approach.
Other future work will be to refine parts of the process so that we can
eliminate some of the additional parameters that the approach requires. Since
some of them are not very critical (e.g., the number of grid subdivisions, or the
number of iterations needed to reach nominal convergence), we could probably
preset them automatically to reasonable values so that the user does not need
to provide them.
Finally, we are also interested in using this approach as a basis to develop a
model for incorporating preferences from the decision maker [4].
References
6. Kalyanmoy Deb, Samir Agrawal, Amrit Pratab, and T. Meyarivan. A Fast Eli-
tist Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization:
NSGA-II. KanGAL report 200001, Indian Institute of Technology, Kanpur, India,
2000.
7. G. Dozier, J. Bowen, and D. Bahler. Solving small and large scale constraint sat-
isfaction problems using a heuristic-based microgenetic algorithm. In Proceedings
of the First IEEE Conference on Evolutionary Computation, pages 306–311, 1994.
8. David E. Goldberg. Sizing Populations for Serial and Parallel Genetic Algorithms.
In J. David Schaffer, editor, Proceedings of the Third International Conference on
Genetic Algorithms, pages 70–79, San Mateo, California, 1989. Morgan Kaufmann
Publishers.
9. Jeffrey Horn, Nicholas Nafpliotis, and David E. Goldberg. A Niched Pareto Ge-
netic Algorithm for Multiobjective Optimization. In Proceedings of the First IEEE
Conference on Evolutionary Computation, IEEE World Congress on Computa-
tional Intelligence, volume 1, pages 82–87, Piscataway, New Jersey, June 1994.
IEEE Service Center.
10. Hisao Ishibuchi and Tadahiko Murata. Multi-Objective Genetic Local Search Al-
gorithm. In Toshio Fukuda and Takeshi Furuhashi, editors, Proceedings of the 1996
International Conference on Evolutionary Computation, pages 119–124, Nagoya,
Japan, 1996. IEEE.
11. Andrzej Jaszkiewicz. Genetic local search for multiple objective combinatorial op-
timization. Technical Report RA-014/98, Institute of Computing Science, Poznan
University of Technology, 1998.
12. E.G. Johnson and M.A.G. Abushagur. Micro-Genetic Algorithm Optimization
Methods Applied to Dielectric Gratings. Journal of the Optical Society of America,
12(5):1152–1160, 1995.
13. Charles L. Karr. Air-Injected Hydrocyclone Optimization via Genetic Algorithm.
In Lawrence Davis, editor, Handbook of Genetic Algorithms, pages 222–236. Van
Nostrand Reinhold, New York, 1991.
14. Joshua D. Knowles and David W. Corne. Approximating the Nondominated
Front Using the Pareto Archived Evolution Strategy. Evolutionary Computation,
8(2):149–172, 2000.
15. K. Krishnakumar. Micro-genetic algorithms for stationary and non-stationary
function optimization. In SPIE Proceedings: Intelligent Control and Adaptive Sys-
tems, pages 289–296, 1989.
16. Frank Kursawe. A variant of evolution strategies for vector optimization. In
H. P. Schwefel and R. Männer, editors, Parallel Problem Solving from Nature.
1st Workshop, PPSN I, volume 496 of Lecture Notes in Computer Science, pages
193–197, Berlin, Germany, October 1991. Springer-Verlag.
17. Geoffrey T. Parks and I. Miller. Selective Breeding in a Multiobjective Genetic
Algorithm. In A. E. Eiben, M. Schoenauer, and H.-P. Schwefel, editors, Parallel
Problem Solving From Nature — PPSN V, pages 250–259, Amsterdam, Holland,
1998. Springer-Verlag.
18. J. David Schaffer. Multiple Objective Optimization with Vector Evaluated Genetic
Algorithms. PhD thesis, Vanderbilt University, 1984.
19. N. Srinivas and Kalyanmoy Deb. Multiobjective Optimization Using Nondomi-
nated Sorting in Genetic Algorithms. Evolutionary Computation, 2(3):221–248,
Fall 1994.
20. David A. Van Veldhuizen and Gary B. Lamont. Multiobjective Evolutionary Algo-
rithms: Analyzing the State-of-the-Art. Evolutionary Computation, 8(2):125–147,
2000.
21. Fengchao Xiao and Hatsuo Yabe. Microwave Imaging of Perfectly Conducting
Cylinders from Real Data by Micro Genetic Algorithm Coupled with Determinis-
tic Method. IEICE Transactions on Electronics, E81-C(12):1784–1792, December
1998.
22. Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele. Comparison of Multiobjective
Evolutionary Algorithms: Empirical Results. Evolutionary Computation, 8(2):173–
195, Summer 2000.