A Mean-Variance Optimization Algorithm
István Erlich, Senior Member, IEEE, Ganesh K. Venayagamoorthy, Senior Member, IEEE, and
Nakawiro Worawat, Graduate Student Member, IEEE
Abstract— A new stochastic optimization algorithm, referred to by the authors as the ‘Mean-Variance Optimization’ (MVO) algorithm, is presented in this paper. MVO falls into the category of the so-called “population-based stochastic optimization techniques.” The uniqueness of the MVO algorithm lies in the strategic transformation used for mutating the offspring based on the mean and variance of the n-best dynamic population. The mapping function used transforms a uniformly distributed random variation into a new one characterized by the variance and mean of the n-best population attained so far. The search space within the algorithm is restricted to the range zero to one, which does not change after applying the transformation. The variables are therefore always treated in this band, but the function evaluation is carried out in the problem range. The performance of the MVO algorithm is demonstrated on standard benchmark optimization functions. It is shown that the MVO algorithm finds near-optimal solutions and is simple to implement. The features of MVO make it a potentially attractive algorithm for solving many real-world optimization problems.

Manuscript revised April 30, 2010. This work was supported in part by the grants NSF CAREER ECCS #0348221 and NSF EFRI 0836017.
I. Erlich is the Chair of the Department of Electrical Power Systems, University of Duisburg-Essen, Germany (e-mail: istvan.erlich@uni-due.de).
G. K. Venayagamoorthy is the Director of the Real-Time Power and Intelligent Systems Laboratory, Missouri University of Science and Technology (http://web.mst.edu/~ganeshv; e-mail: gkumar@ieee.org).
N. Worawat is a PhD candidate with the Department of Electrical Power Systems, University of Duisburg-Essen, Germany (e-mail: worawat.nakawiro@uni-due.de).

I. INTRODUCTION

Optimization problems exist in many fields of science, engineering, and business. As many real-world optimization problems become increasingly complex, better optimization algorithms are always needed. Many such problems can be precisely formulated but are difficult or impossible to solve, either analytically or through conventional numerical procedures. This is the case when the problem is non-convex and, therefore, inherently nonlinear and multimodal. Unconstrained optimization problems can be formulated as a k-dimensional minimization problem as follows:

$\min f(\mathbf{x}), \quad \mathbf{x} = [x_1, x_2, \ldots, x_k]$

where k is the number of parameters to be optimized.

Population-based stochastic search algorithms, such as particle swarm optimization, ant colony optimization, genetic algorithms, evolutionary programming, the bacterial foraging algorithm, differential evolution and artificial immune systems, have become very popular in recent years [1]-[5]. These algorithms have been applied to a range of optimization problems, from function optimization to complex real-world problems like power system stabilizer design [5], scheduling vehicle-to-grid transactions [6] and elephant movement prediction [7].

Although a large number of techniques and approaches exist in the literature to improve many of the population-based stochastic algorithms, non-convex optimization is still a challenge to the optimization community. This is mainly due to the large variability in the topological properties of the underlying objective function. Therefore, it is necessary to explore new principles allowing the resolution of a wide range of non-convex optimization problems. In addition, other desirable features for real-world applications call for algorithms with less computational overhead and fewer hardware requirements. In this spirit, a new optimization algorithm, named mean-variance optimization (MVO) by the authors, is presented in this paper.

Like differential evolution, genetic algorithms and particle swarm optimization, MVO falls into the category of the so-called “population-based stochastic optimization techniques.” However, its concepts have both similarities to and differences from other known stochastic algorithms. The similarities lie in borrowing the ideas of selection, mutation and crossover from evolutionary computation algorithms. The main distinct feature of the MVO algorithm is the strategic transformation of the mutated genes of the offspring based on the mean and variance of the n-best population.

The remaining sections of the paper are organized as follows: Section II describes the MVO algorithm. Section III presents results on sample benchmark functions. Finally, conclusions are presented in Section IV.

II. THE MVO ALGORITHM

The flowchart of the MVO algorithm is given in Fig. 1. The basic steps in the MVO algorithm are described below.

A. Initialization of MVO Algorithm and Normalization of Variables

The MVO algorithm and optimization problem parameters have to be initialized. The few MVO algorithm parameters to be initialized include:
• dynamic population size, n
• number of dimensions (problem variables), m, to be selected for mutation
• selection method used
The optimization variables themselves are normalized to the range [0, 1] for the internal search, while function evaluations are carried out in the original problem range, as sketched below.
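The normalization itself is a linear mapping of each variable from its problem range into [0, 1], with the inverse mapping applied only for fitness evaluation. The following minimal Python sketch illustrates the idea; the bound values and function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Illustrative problem bounds (e.g., the [-50, 50] range used for f4/f5 later).
x_min = np.full(30, -50.0)
x_max = np.full(30, 50.0)

def normalize(x):
    """Map a solution from the problem range into the internal [0, 1] band."""
    return (x - x_min) / (x_max - x_min)

def denormalize(x_norm):
    """Map a normalized solution back to the problem range for evaluation."""
    return x_min + x_norm * (x_max - x_min)

# MVO operates on normalized vectors; only the fitness evaluation leaves [0, 1].
x0 = np.random.uniform(0.0, 1.0, size=30)      # random initial normalized solution
fitness = float(np.sum(denormalize(x0) ** 2))  # example objective in problem range
```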
C. Dynamic Population

Fig. 4. Effects of the mean of the dynamic population on the transformation function h (mapping of x'_i to x_i for mean values 0.25, 0.5 and 0.75; shape s_i = 10).

Fig. 5. Effects of the shape scaling factor on the transformation function h (mapping of x'_i to x_i for shape values s_i1 = s_i2 = 0, 5, 10, 15 and 50; mean 0.5).

Fig. 6. Effect of different shape factors s_i1 ≠ s_i2: a) symmetrical mapping function (s_i1 = s_i2 = 10) versus asymmetrical mapping function (s_i1 = 10, s_i2 = 20); b) asymmetrical mapping function (s_i1 = 20, s_i2 = 10) versus symmetrical mapping function (s_i1 = s_i2 = 10).
As can be seen, with decreasing variance and thus increasing shape factor, the corresponding curve becomes flatter, so that the search is focused on the region near the mean value. However, all curves start and end at zero and one, respectively. Therefore, the search still covers the whole space between the min/max limits, even though the probability of selecting a value near the border is lower.
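The algebraic definition of the transformation function h falls in a part of the paper not reproduced above. The Python sketch below therefore assumes the exponential mapping used in the authors' related mean-variance mapping work; it is chosen because it reproduces exactly the properties just described (every curve passes through 0 and 1, and a larger shape factor flattens the curve around the mean). The variance-to-shape rule s_i = -ln(v_i)·f_s is likewise an assumption, consistent with "decreasing variance and thus increasing shaping factor" and with the shape scaling factor f_s named in Section III.

```python
import math

def h(x_bar, s1, s2, x):
    """Assumed exponential mapping kernel (see note above)."""
    return x_bar * (1.0 - math.exp(-x * s1)) + (1.0 - x_bar) * math.exp(-(1.0 - x) * s2)

def transform(x_prime, x_bar, s1, s2):
    """Map a uniform sample x' in [0, 1] to x in [0, 1].

    The linear correction forces the curve through (0, 0) and (1, 1),
    matching the behaviour described for Figs. 4-6.
    """
    hx = h(x_bar, s1, s2, x_prime)
    h0 = h(x_bar, s1, s2, 0.0)
    h1 = h(x_bar, s1, s2, 1.0)
    return hx + (1.0 - h1 + h0) * x_prime - h0

def shape_factor(v_i, f_s=1.0):
    """Assumed variance-to-shape rule: smaller variance -> larger shape factor."""
    return -math.log(v_i) * f_s

# Example: mean 0.25 with symmetrical shape factors of 10 (one of the Fig. 4 settings).
x_new = transform(x_prime=0.6, x_bar=0.25, s1=10.0, s2=10.0)
```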
The effect of different shape factors $s_{i1} \neq s_{i2}$ is demonstrated in Fig. 6. Obviously, by choosing different shape factors, the space to be searched can be controlled by shifting emphasis to the preferred side of the curve around the mean value. The algorithm implemented by the authors for utilizing this control alternative is as follows:

$\text{if } x_i < x_i^{best} \rightarrow s_{i2} = s_i \cdot AF;\; s_{i1} = s_i$
$\text{else if } x_i > x_i^{best} \rightarrow s_{i1} = s_i \cdot AF;\; s_{i2} = s_i$

where AF represents the asymmetry factor in the range [1, 10], as transcribed in the sketch below.
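A direct transcription of this two-branch rule into Python; the tie case $x_i = x_i^{best}$ is not specified above and is assumed here to leave the mapping symmetrical.

```python
def asymmetric_shape_factors(x_i, x_i_best, s_i, AF=1.0):
    """Skew the mapping toward the preferred side of the mean value.

    AF is the asymmetry factor in [1, 10]; AF = 1 keeps the mapping symmetrical.
    """
    if x_i < x_i_best:
        s_i1, s_i2 = s_i, s_i * AF
    elif x_i > x_i_best:
        s_i1, s_i2 = s_i * AF, s_i
    else:                      # tie case, unspecified in the paper: stay symmetrical
        s_i1, s_i2 = s_i, s_i
    return s_i1, s_i2
```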
Crossover - For the remaining un-mutated dimensions, the genes of the parent, $x_{best}$, are inherited. In other words, the values of these un-mutated dimensions are clones of the parent $x_{best}$.
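In code, the crossover step amounts to copying the parent into the offspring and overwriting only the mutated dimensions; the helper below is a sketch with illustrative names.

```python
import numpy as np

def crossover(x_best, mutated_values, mutated_idx):
    """Offspring = clone of parent x_best, overwritten in the mutated dimensions."""
    offspring = x_best.copy()
    offspring[mutated_idx] = mutated_values
    return offspring

# Example: 5-dimensional parent, one dimension (m = 1) selected for mutation.
parent = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
child = crossover(parent, mutated_values=np.array([0.55]), mutated_idx=np.array([2]))
```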
Treating zero variance - Zero variance can occur when all individuals stored in the archive do not differ with respect to particular optimization variables. In consequence, the shape factor of the mapping curve tends to zero and the optimization does not progress. The probability of zero variance increases when the number of mutated variables m is small.

One approach to solve this problem is to continue using the last non-zero variance. The authors have seen considerable improvements with the following insertion to the algorithm:

$s_{i1} = s_i;\; s_{i2} = s_i$

where $s_i$ is calculated from the last non-zero variance $v_i$:

$\text{if } s_i < s_d \rightarrow s_d = s_d \cdot k_d;\; s_{i1} = s_d$
$\text{else if } s_i > s_d \rightarrow s_d = s_d / k_d;\; s_{i1} = s_d$

The variables $s_d$ and $k_d$ are assumed common for all optimization variables and are initialized to:

$s_d = 75$
$k_d = 0.0505/k + 1.0$

For $s_d$, any value between 10 and 90 is fine, but 75 was chosen for the benchmark function optimizations in this paper. The variable $s_d$ allows tracking the mean of all $s_i$ resulting from the last non-zero variance.
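The zero-variance insertion translates almost line-for-line into code. In the sketch below, s_i_last is the shape factor computed from the last non-zero variance v_i, and the mutable state dictionary is an illustrative way of carrying s_d between iterations.

```python
def handle_zero_variance(s_i_last, state):
    """Reuse the shape factor from the last non-zero variance and let s_d track it."""
    s_i1 = s_i2 = s_i_last              # s_i1 = s_i; s_i2 = s_i
    if s_i_last < state["s_d"]:
        state["s_d"] *= state["k_d"]    # s_d = s_d * k_d
        s_i1 = state["s_d"]
    elif s_i_last > state["s_d"]:
        state["s_d"] /= state["k_d"]    # s_d = s_d / k_d
        s_i1 = state["s_d"]
    return s_i1, s_i2

k = 30                                   # number of optimization variables
state = {"s_d": 75.0, "k_d": 0.0505 / k + 1.0}
```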
III. BENCHMARK FUNCTION RESULTS

For evaluating and comparing the MVO algorithm with the best results reported in the literature using variants of PSO [7], [8], a test suite of standard benchmark optimization functions consisting of various unimodal and multimodal problems is used. Five of these functions are presented in Table I. The parameter settings for the benchmark functions are presented in Table II. The parameter settings for the MVO algorithm are as follows (collected in a configuration sketch below):
• dynamic population size, n = 2,
• randomly selected number of variables for mutation is 1, and
• the shape scaling factor, fs = 1.0, and asymmetry factor, AF = 1.0.

TABLE II
PARAMETER SETTINGS FOR BENCHMARK FUNCTIONS IN TABLE I

Function Name | Dimension | Initialization Range | Number of Evaluations
f1            | 30        | [-2.048, 2.048]      | 200000
f2            | 30        | [-5.12, 2]           | 200000
f3            | 30        | [-1.28, 1.28]        | 200000
f4            | 30        | [-50, 50]            | 200000
f5            | 30        | [-50, 50]            | 200000
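Collected in one place, these settings might look as follows; the configuration class is illustrative, not the authors' code.

```python
from dataclasses import dataclass

@dataclass
class MVOConfig:
    """MVO parameter settings used for the benchmark runs in this section."""
    n: int = 2         # dynamic population size
    m: int = 1         # randomly selected number of variables for mutation
    f_s: float = 1.0   # shape scaling factor
    AF: float = 1.0    # asymmetry factor

config = MVOConfig()
```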
TABLE III
RESULTS AVERAGED OVER 50 TRIALS

Function Name | MVO Min  | MVO Mean | MVO Max  | MVO Std  | Best PSO-based algorithms
f1            | 4.18e-05 | 4.68e-02 | 1.38e-01 | 4.12e-02 | 5.39e+00 (FDR - [8])
f2            | 1.00e-99 | 5.30e-08 | 2.51e-06 | 3.55e-07 | 4.36e-10 (CLPSO - [8])
f3            | 1.20e-03 | 4.40e-03 | 8.72e-03 | 1.76e-03 | 3.15e-03 (SADE - [9])
f4            | 9.27e-30 | 6.96e-27 | 1.67e-25 | 2.58e-26 | 6.60e-30 (SADE - [9])
f5            | 3.23e-28 | 6.21e-25 | 1.59e-23 | 2.82e-24 | 5.00e-29 (SADE - [9])

TABLE I
BENCHMARK FUNCTIONS

f1: $\sum_{i=1}^{D-1} [\,100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\,]$, $f_{min} = 0$

f2: $\sum_{i=1}^{D} [\,y_i^2 - 10\cos(2\pi y_i) + 10\,]$, where $y_i = x_i$ for $|x_i| < \tfrac{1}{2}$ and $y_i = \tfrac{\mathrm{round}(2x_i)}{2}$ for $|x_i| \ge \tfrac{1}{2}$, $i = 1, 2, \ldots, D$; $f_{min} = 0$

f3: $\sum_{i=1}^{D} i x_i^4 + N(0,1)$, $f_{min} = 0$

f4: $\frac{\pi}{D}\{\,10\sin^2(\pi y_1) + \sum_{i=1}^{D-1} (y_i - 1)^2 [\,1 + 10\sin^2(\pi y_{i+1})\,] + (y_D - 1)^2\,\} + \sum_{i=1}^{D} u(x_i, 10, 100, 4)$, where $y_i = 1 + \tfrac{1}{4}(x_i + 1)$ and
$u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k(-x_i - a)^m, & x_i < -a \end{cases}$; $f_{min} = 0$

f5: $0.1\{\,\sin^2(3\pi x_1) + \sum_{i=1}^{D-1} (x_i - 1)^2 [\,1 + \sin^2(3\pi x_{i+1})\,] + (x_D - 1)^2 [\,1 + \sin^2(2\pi x_D)\,]\,\} + \sum_{i=1}^{D} u(x_i, 5, 100, 4)$, $f_{min} = 0$

Fig. 7. Fitness graphs over 50 trials for the benchmark function f1 in Table I.
Fig. 10. Fitness graphs over 50 trials for the benchmark function f4 in Table I.

Fig. 11. Fitness graphs over 50 trials for the benchmark function f5 in Table I.

REFERENCES

[1] J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proc. IEEE Int. Conf. Neural Networks, 1995, pp. 1942-1948.
[2] Y. del Valle, G. K. Venayagamoorthy, S. Mohagheghi, J. C. Hernandez and R. G. Harley, “Particle swarm optimization: Basic concepts, variants and applications in power systems,” IEEE Trans. on Evolutionary Computation, vol. 12, no. 2, April 2008, pp. 171-195.
[3] A. Engelbrecht, Computational Intelligence: An Introduction, John Wiley & Sons, Ltd, England, ISBN 0-470-84870-7.
[4] D. Dasgupta, “Advances in Artificial Immune Systems,” IEEE Computational Intelligence Magazine, vol. 1, no. 4, Nov. 2006, pp. 40-49.
[5] T. K. Das, G. K. Venayagamoorthy and U. O. Aliyu, “Bio-inspired Algorithms for the Design of Multiple Optimal Power System Stabilizers: SPPSO and BFA,” IEEE Transactions on Industry Applications, vol. 44, no. 5, September/October 2008, pp. 1445-1457.
[6] A. Saber and G. K. Venayagamoorthy, “Intelligent Unit Commitment with V2G - A Cost-Emission Optimization,” Journal of Power Sources, vol. 195, 2010, pp. 898-911.
[7] G. K. Venayagamoorthy, “A Successful Interdisciplinary Course on Computational Intelligence,” IEEE Computational Intelligence Magazine, Special Issue on Education, vol. 4, no. 1, February 2009, pp. 14-23.
[8] J. J. Liang, A. K. Qin, P. N. Suganthan and S. Baskar, “Comprehensive Learning Particle Swarm Optimizer for Global Optimization of Multimodal Functions,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281-295, June 2006.
[9] J. Brest, S. Greiner, B. Bošković, M. Mernik and V. Žumer, “Self-Adapting Control Parameters in Differential Evolution: A Comparative Study on Numerical Benchmark Problems,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646-657, Dec. 2006.