Applied Mathematics and Computation 228 (2014) 195–219
Contents lists available at ScienceDirect
Applied Mathematics and Computation
journal homepage: www.elsevier.com/locate/amc
Global Dynamic Harmony Search algorithm: GDHS
Mohammad Khalili a,⇑, Riyaz Kharrat a, Karim Salahshoor a, Morteza Haghighat Sefat b
a
b
Petroleum University of Technology, POB 6198144471, Ahwaz, Iran
Heriot-Watt University, Edinburgh, United Kingdom
a r t i c l e
i n f o
Keywords:
Harmony Search algorithm
Dynamic Harmony Search
Meta-heuristics
Evolutionary algorithms
Optimization
a b s t r a c t
This paper presents a new modification of Harmony Search (HS) algorithm to improve its
accuracy and convergence speed and eliminates setting parameters that have to be defined
before optimization process and it is difficult to predict fixed values for all kinds of problems. The proposed algorithm is named Global Dynamic Harmony Search (GDHS). In this
modification, all the key parameters are changed to dynamic mode and there is no need
to predefine any parameters; also the domain is changed to dynamic mode to help a faster
convergence. Two experiments, with large sets of benchmark functions, are executed to
compare the proposed algorithms with other ones. In the first experiment, 15 benchmark
problems are used to compare the proposed algorithm with other similar algorithms based
on the Harmony Search method and in the second experiment, 47 benchmark problems are
used to compare the performance of the GDHS with other algorithms from different
families, including: GA, PSO, DE and ABC algorithms. Results showed that the proposed
algorithm outperforms the other algorithms, considering the point that the GDHS does
not require any predefined parameter.
Ó 2013 Elsevier Inc. All rights reserved.
1. Introduction
It’s been a few decades that many researchers are looking for new methods to solve complex and difficult problems in a
better, more accurate and faster way. Their researches led several methods in problem solving and optimization. The most
recent ones are Meta-heuristic methods that conceptualize phenomena in the nature to find the best solution. In 2001, Geem
et al. introduced a new meta-heuristic method named Harmony Search (HS) which is inspired by musical harmony [1]. The
Harmony Search algorithm has been so far applied to various optimization problems and it has been successfully used in
many fields, such as: [2–6] etc.
Since the first presentation of HS, many modifications have been proposed to the HS to reinforce its accuracy and
convergence speed. The main drawback of the original HS is that the parameters are set to fixed values, and it is difficult
to suggest values that work well with every optimization problem. Mahdavi et al. [7] developed the original HS algorithm
and proposed the Improved Harmony Search (IHS). They used dynamic values for some parameters (pitch adjustment rate
and bandwidth) to overcome the HS drawbacks. Although their suggestion was constructive and improved the HS very well;
their work had the drawback that needed some parameters to be set before the optimization process. Omran and Mahdavi
[8] presented another modification to the HS algorithm named Global-Best Harmony Search (GHS) algorithm. Their work
was just the previous version of HS algorithm, IHS, but with a difference in improvisation step. Although this variation of HS
was valuable, some parameters had to be set before process. Cobos et al. [9] presented another method named ‘‘Global-Best
⇑ Corresponding author.
E-mail addresses: khaliliput@gmail.com, khaliliput@yahoo.com (M. Khalili), kharrat@put.ac.ir (R. Kharrat), salahshoor@put.ac.ir (K. Salahshoor),
morteza.haghighat@pet.hw.ac.uk (M.H. Sefat).
0096-3003/$ - see front matter Ó 2013 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.amc.2013.11.058
196
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
Harmony Search using Learnable Evolution Models (GHS + LEM)’’. They used new machine learning techniques to generate
new populations along with the Darwinian method, applied in evolutionary computation and based on mutation and natural
selection. This method still has its own parameters in addition to HS parameters that should be set before start of process.
According to these researches, Global Dynamic Harmony Search (GDHS) algorithm is proposed, which doesn’t require
any predefined values. After the introduction of GDHS, the algorithm is tested by two experiments using the most important
benchmark problems that are often used for testing optimization problems and the results versus other methods are shown.
The other parts of this paper are as following: Section 2 introduces the algorithms HS, IHS, GHS and GHS + LEM briefly.
Section 3 presents the new algorithm, GDHS. Section 4 presents the experiments and the results of the algorithm tested with
benchmark functions. Finally Section 5 presents the conclusions.
2. Harmony Search
2.1. Harmony Search algorithm
In 2001, Geem et al. introduced a new meta-heuristic method named ‘‘Harmony Search algorithm’’ which is inspired from
musical harmony [1]. Musical performances seek a best state (fantastic harmony) determined by estimation, as the
optimization algorithms seek a best state (global optimum – minimum cost or maximum benefit or efficiency) determined
by objective function evaluation. Aesthetic estimation is determined by the set of the sounds played by joined instruments,
just as objective function evaluation is determined by the set of the values produced by component variables; the sounds for
better estimation can be improved through practice after practice, just as the values for better objective function evaluation
can be improved iteration by iteration [1].
The optimization procedure of the Harmony Search algorithm consists of five steps, as follows [10]:
Step
Step
Step
Step
Step
1.
2.
3.
4.
5.
Initialize the optimization problem and algorithm parameters.
Initialize the harmony memory (HM).
Improvise a new harmony from the HM.
Update the HM.
Repeat Steps 3 and 4 until the termination criterion is satisfied.
Here we go through the algorithm step by step with a brief description:
Step 1. Initialize the optimization problem and algorithm parameters. In each optimization problem the first step is to declare
the problem itself and algorithm parameters. The optimization generally is specified as bellow:
Minimize f ðxÞ while xi 2 X i ;
i ¼ 1; 2; . . . ; N
ð1Þ
where f(x) is the objective function; x is the set of each design variable (xi); Xi is the set of the possible range of values for each
design variable (continuous design variables), that is, Lxi 6 Xi 6 Uxi; and N is the number of design variables.
The required parameters of the HS algorithm to solve the optimization problem (i.e., Eq. (1)) are also specified in this step:
the harmony memory size (number of solution vectors in harmony memory, HMS), harmony memory considering rate
(HMCR), pitch adjusting rate (PAR), and termination criterion (maximum number of searches or the number of improvisations: NI). Here, HMCR and PAR are parameters that are used to improve the solution vector and both are defined in Step 3
[10].
Step 2. Initialize the harmony memory (HM). In this step, the ‘‘harmony memory’’ (HM) matrix, shown in Eq. (2), is filled
with randomly generated solution vectors and sorted by the values of the objective function, f(x) [10].
x11
x12
x13
6 x2
6 1
6 3
6
HM ¼ 6 x1
6 .
6 .
4 .
x22
x23
x32
x33
..
.
..
.
xHMS
2
xHMS
3
2
xHMS
1
x1D
..
.
x3D
x2D
..
.
xHMS
D
jf ðx1 Þ
3
jf ðx2 Þ 7
7
7
jf ðx3 Þ 7
7:
7
..
7
5
.
ð2Þ
jf ðxHMS Þ
Step 3. Improvise a new harmony from the HM.
In this step, a new harmony vector, X 0 ¼ ðx01 ; x02 ; . . . ; x0N Þ is generated from the HM based on memory considerations, pitch
adjustments, and randomization. For instance, the value of the first design variable ðx01 Þ for the new vector can be chosen
. Values of the other design variables (x0i ) can be chosen in the same
from any value in the specified HM range x11 xHMS
1
manner. Here, it is possible to choose the new value using the HMCR parameter, which varies between 0 and 1 as follows:
X 0i
:
(
g with probability of HMCR
x0i 2 fx1i ; x2i ; . . . ; xHMS
i
x0i 2 X i
with probability of ð1 HMCRÞ
ð3Þ
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
197
The HMCR is the probability of choosing one value from the historic values stored in the HM, and (1 HMCR) is the
probability of randomly choosing one feasible value that is not limited to those stored in the HM. A HMCR value of 1.0 is
not recommended because it eliminates the possibility that the solution may be improved by the values which are not stored
in the HM. This is similar to the reason why the genetic algorithm uses a mutation rate in the selection process [10].
Every component of the new harmony vector, X 0 ¼ ðx01 ; x02 ; . . . ; x0N Þ, is examined to determine whether it should be pitchadjusted. This procedure uses the PAR parameter that sets the rate of adjustment for the pitch chosen from the HM as follows
[10]:
Pitch adjusting rule for X 0i :
Yes with probability of PAR
No
with probability of ð1 PARÞ
ð4Þ
The Pitch adjusting process is performed only after a value is chosen from the HM. The value (1 PAR) sets the rate of doing
nothing. If the pitch adjustment decision for x0i is Yes, and x0i is assumed to be xi(k), i.e., the kth element in Xi, the
pitch-adjusted value of xi(k) is [10]:
Fig. 1. Dynamic HMCR.
Fig. 2. Dynamic PAR.
198
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
x0i ¼ x0i þ a
ð5Þ
where a is the value of bw uð1; 1Þ, bw is an arbitrary distance bandwidth for the continuous design variable, and u(1,1) is
a uniform distribution between 1 and 1. The HMCR and PAR parameters introduced in the Harmony Search help the
algorithm to find globally and locally improved solutions, respectively [10].
Step 4. Update the HM.
In this step, if the new harmony vector is better than the worst harmony in the HM in terms of the objective function
value, the new harmony is included in the HM and the existing worst harmony is excluded from the HM. The HM is then
sorted by the objective function values [10].
Step 5. Repeat Steps 3 and 4 until the termination criterion is satisfied.
In this step, the computations are terminated when the termination criterion is satisfied; if not, Steps 3 and 4 will be
repeated [10].
2.2. Improved Harmony Search
In 2007, Mahdavi et al. developed the original HS algorithm and proposed the ‘‘Improved Harmony Search (IHS)’’
algorithm which employs a novel method for generating new solution vectors that enhances accuracy and convergence rate
of Harmony Search (HS) algorithm [7].
To improve the performance of the HS algorithm and eliminate the drawbacks of specifying fixed values of PAR and bw,
IHS algorithm uses variables PAR and bw in improvisation step (Step 3). PAR and bw change dynamically with generation
number as expressed in Eqs. (6)–(8):
PARðtÞ ¼ PARmin þ
PARmax PARmin
t
NI
ð6Þ
where PAR(t) is the pitch adjusting rate for generation t, PARmin and PARmax are the minimum and maximum pitch adjusting
rates, respectively; NI is the number of solution vector generations, t is the generation number and,
bwðtÞ ¼ bwmax expðc:tÞ
c¼
bwmin
Ln bw
max
NI
ð7Þ
ð8Þ
where bw(t) is the bandwidth for generation t, bwmin and bwmax are the minimum and maximum bandwidths [7].
A major drawback of the IHS is that the user needs to specify the values for bwmin and bwmax which are difficult to guess
and problem dependent [8].
2.3. Global-Best Harmony Search
In 2008, Omran and Mahdavi presented a new modification to HS algorithm named ‘‘Global-Best Harmony Search algorithm’’ [8], inspired by the concept of swarm intelligence. Unlike the basic HS algorithm, the GHS algorithm generates a new
Fig. 3. An example of dynamic limit reduction for Ackley Test Function with ND = 30, Max. Improvisation = 5000 (Generated by MATLAB program).
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
199
best
best
harmony vector x0 by making use of the best harmony vector xbest ¼ fxbest
1 ; x2 ; . . . ; xN g in the HM. The pitch adjustment rule
is given as follows:
x0i ¼ xbest
k
ð9Þ
where k is a random integer between 1 and N. The performance of the GHS is investigated and compared with HS. The
experiments conducted showed that the GHS generally outperformed the other approaches when applied to ten benchmark
problems [11].
2.4. GHS + LEM
In 2011, Cobos et al. presented another method called: ‘‘GHS + LEM or Global-Best Harmony Search using Learnable Evolution Models’’ [9]. This method is inspired by the concept of the learnable evolution model (LEM) proposed by Michalski
[12]. In LEM, machine learning techniques are used to generate new populations along with the Darwinian method, applied
in evolutionary computation and based on mutation and natural selection. The method can determine which individuals in a
population (or set of individuals from previous populations) are better than others in performing certain tasks. This
reasoning, expressed as an inductive hypothesis, is used to generate new populations. Then, when the algorithm is run in
Darwinian evolution mode, it uses random or semi-random operations for the generation of new individuals (using
traditional mutation and/or recombination techniques). The LEM process can be summarized in the following steps [9]:1.
for each i
[1, maximum improvisation] do
max
=
( Upper limit- Lower limit )
bw den
U HM = max(max(HM (:,1: D )))
L HM = min(min(HM (:,1: D )))
U new = U HM + bw max
L new = L HM − bw max
if U new ≤ upper initial limit
U Dnew
end_if
if L new
=U
%Control the new limits not to exceed the original limits
new
≥ lower initial
limit
LDnew = L new
end_if
New Limit = [
]
⎛
⎞
HMCR = 0.9 + 0.2 × iteration −1 × ⎜⎜1 − iteration −1 ⎟⎟
max.imp. −1 ⎝ max.imp. −1
⎛
⎞
PAR = 0.85 + 0.3 × iteration −1 × ⎜⎜1 − iteration −1 ⎟⎟
max.imp. −1 ⎝ max.imp. −1
for each i
[1, N] do
U(0, 1) < HMCR then /*memory consideration*/
xi' = xij ,where j~U(1, … , HMS)
if U(0, 1) ≤ PAR then /*pitch adjustment*/
(
)
coef = 1 + ( HMS − j ) × ⎜⎜1 − iteration −1 ⎟⎟
max imp −1
⎝
⎛
⎞
xi' = xi' ± bw × coef
if xi' is not in the range of initial limits, correct it as:
xi' = xi' bw × coef
end_if
end_if
else /*random selection*/
x i' = L Dinew
+ rand × (U Dinew − L new
Di
)
end_if
done
done
Fig. 4. Improvisation step in GDHS algorithm.
200
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
Generate a population.2. Run the machine learning mode.3. Run the Darwinian learning mode.4. Alternate between the two
modes until the stop criterion is reached.
Cobos et al. resulted that in HCMR P 0.9 the algorithm of GHS + LEM generally has better efficiency. Also they suggested
that the harmony memory size is better to be between 5 and 10.
3. Proposed algorithm: Global Dynamic Harmony Search (GDHS)
In all published modifications of the Harmony Search, authors introduced bw, PAR and HMCR as the main parameters of
the algorithm and studies are focused on estimation of these parameters. In IHS a great idea was proposed, changing bw and
Table 1
Setting parameters for various algorithms.
Variable
HS
IHS
GHS
GHS + LEM
GDHS
HMS
HMCR
PAR
PARmin
PARmax
Bw
bwmin
bwmax
HLGS
RCR
RRU
5
0.9
0.3
N.A.
N.A.
0.01
N.A.
N.A.
N.A.
N.A.
N.A.
5
0.9
N.A.
0.01
0.99
N.A.
0.0001
1/(20x (UB LB))
N.A.
N.A.
N.A.
5
0.9
N.A.
0.01
0.99
N.A.
N.A.
N.A.
N.A.
N.A.
N.A.
5
0.9
N.A.
0.01
0.99
N.A.
N.A.
N.A.
[HMS/2]
0.9
0.2
5
N.A.
N.A.
N.A.
N.A.
N.A.
N.A.
N.A.
N.A.
N.A.
N.A.
Table 2
Benchmark functions used in experiment A. D: dimension, C: characteristic, U: unimodal, M: multimodal, S: separable, N: non-separable.
Function name
D
C
Global optimum
Range
1. Sphere (DeJong’s first function)
P
2
f ðxÞ ¼ D
i¼1 xi
2. Schwefel’s problem 2.22
P
QD
f ðxÞ ¼ D
i¼1 jxi j þ
i¼1 jxi j
3. Rosenbrock
P
2
2 2
f ðxÞ ¼ D1
i¼1 ½100ðxiþ1 xi Þ þ ðxi 1Þ
4. Step
P
2
f ðxÞ ¼ D
i¼1 ðbxi þ 0:5cÞ
5. Rotated hyper-ellipsoid
Pi
P
2
f ðxÞ ¼ D
i¼1 ð
j¼1 xj Þ
30
50
US
f(x) = 0 @ xi = 0, i = 1, . . ., D
5:12 6 xi 6 5:12
i = 1, . . ., D
30
50
UN
f(x) = 0 @ xi = 0, i = 1, . . ., D
10 6 xi 6 10 i = 1, . . ., D
30
50
UN
f(x) = 0 @ xi = 1, i = 1, . . ., D
30 6 xi 6 30 i = 1, . . ., D
30
50
US
f(x) = 0 @ xi = 0, i = 1, . . ., D
100 6 xi 6 100
i = 1, . . ., D
30
50
UN
f(x) = 0 @ xi = 0, i = 1, . . ., D
100 6 xi 6 100
i = 1, . . ., D
30
50
MS
f(x) = 418.9829D @
xi = 420.9687, i = 1, . . ., D
500 6 xi 6 500
i = 1, . . ., D
30
50
MS
f(x) = 0 @ xi = 0, i = 1, . . ., D
5:12 6 xi 6 5:12
i = 1, . . ., D
30
50
MN
f(x) = 0 @ xi = 0, i = 1, . . ., D
32 6 xi 6 32 i = 1, . . ., D
30
50
MN
f(x) = 0 @ xi = 0, i = 1, . . ., D
600 6 xi 6 600
i = 1, . . ., D
2
MN
5 6 xi 6 5 i = 1,2
30
50
UN
f(x⁄) = 1.0316285 @
x⁄ = (0.08983, 0.7126);
(0.08983, 0.7126)
f(x) = 0 @ xi = 0, i = 1, . . ., D
30
50
UN
f(x) = 0 @ xi = 0, i = 1, . . ., D
100 6 xi 6 100i = 1, . . ., D
30
50
MN
f(x) = 0 @ xi = 0, i = 1, . . ., D
100 6 xi 6 100
i = 1, . . ., D
30
50
MN
f(x) = 0 @ xi = 0, i = 1, . . ., D
0:5 6 xi 6 0:5 i = 1, . . ., D
30
50
U
f(x) = 0 @ xi = 0, i = 1, . . ., D
10 6 xi 6 10 i = 1, . . ., D
6. Generalized Schwefel’s problem 2.26
pffiffiffiffiffiffiffi
P
f ðxÞ ¼ D
i¼1 xi sinð jxi jÞ
7. Rastrigin
P
2
f ðxÞ ¼ D
i¼1 ½xi 10 cosð2pxi Þ þ 10
8. Ackley
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
P
PD
1
2
f ðxÞ ¼ 20 expð0:2 D1 D
i¼1 xi Þ expðD
i¼1 cosð2pxi ÞÞ þ 20 þ e
9. Griewank
1
f ðxÞ ¼ 4000
PD
2
i¼1 xi
QD
i¼1
cosðpxiffiiÞ þ 1
10. Six-Hump Camel-Back
f ðxÞ ¼ 4x21 2:1x41 þ 13 x61 þ x1 x2 4x22 þ 4x42
11. Shifted rotated high conditioned elliptic (SRHCE)
i1
P
6 ðD1Þ 2
xi
f ðxÞ ¼ D
i¼1 ð10 Þ
12. Shifted Schwefel’s problem 1.2 with noise in fitness (Schwefel’s P. 1.2
Pi
2
P
with noise) f ðxÞ ¼ D
i¼1 ð
j¼1 xj Þ ð1 þ 0:4 jNð0; 1ÞjÞ
13. Shifted rotated expanded Scaffer’s F6 (SRESF6)
pffiffiffiffiffiffiffiffiffiffi
sin2 ð x2 þy2 Þ0:5
Fðx; yÞ ¼ 0:5 þ
2
2 2
100 6 xi 6 100
i = 1, . . ., D
ð1þ0:001ðx þy ÞÞ
f ðxÞ ¼ Fðx1 ; x2 Þ þ Fðx2 ; x3 Þ þ þ FðxD1 ; xD Þ þ FðxD ; x1 Þ
14. Shifted rotated weierstrass
Pk max k
Pk max k
P
k
k
f ðxÞ ¼ D
i¼1 ð
k¼0 ½a cosð2pb ðxi þ 0:5ÞÞÞ D
k¼0 ½a cosð2pb 0:5Þ
where: a ¼ 0:5; b ¼ 3; k max ¼ 20
15. Sum of different power
P
ðiþ1Þ
f ðxÞ ¼ D
i¼1 jxi j
201
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
PAR with respect to the generation number. The parameter of PAR increases linearly with generation number (although
some paper otherwise claim with numerical simulation results [13]), while bw decreases exponentially. Given this change
in the parameters, IHS does improve the performance of HS, since it finds better solutions both globally and locally.
Here a modified version of HS algorithm named ‘‘Global Dynamic Harmony Search, GDHS’’ is proposed, which uses
dynamic formulas for bw, PAR and HMCR. Another change to the algorithm is changing the domain, dynamically. The idea
of decreasing or increasing the parameters, led us to a formula that is both decreasing and increasing during optimization
to have effects of both ideas. The suggested formulas are:
HMCR ¼ 0:9 þ 0:2
ffi
sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
iteration 1
iteration 1
1
max :imp: 1
max :imp: 1
ð10Þ
sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
ffi
iteration 1
iteration 1
PAR ¼ 0:85 þ 0:3
1
max :imp: 1
max :imp: 1
ð11Þ
At initial iterations, a low consideration rate makes the algorithm to generate more new solutions rather than choosing
from harmony memory; this provides more investigation. Meanwhile it’s better to have lower values of PAR to have both
chances of adjustment and repetition. At the same time the domain is restricted to bound the new generated numbers
and have a faster convergence.
Table 3
Results of the experiment A. Mean: mean of the best values, StdDev: standard deviation of the best values, SEM: Standard error of means (D = 30, NI = 50,000).
Function
Sphere
Schwefel’s problem 2.22
Rosenbrock
Step
Rotated hyper-ellipsoid
Generalized Schwefel’s P. 2.26
Rastrigin
Ackley
Griewank
Six-Hump Camel-Back
SRHCE
Schwefel’s P. 1.2 with noise
SRESF6
Shifted rotated weierstrass
Sum of different power
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
HS
IHS
GHS
GHS + LEM
This study
0.000684005
9.67781E05
9.67781E06
0.143656975
0.047911784
0.004791178
312.2431152
486.5124844
48.65124844
11.56
4.608555943
0.460855594
4200.19364
1319.27706
131.927706
12545.01282
9.274118296
0.9274118296
1.266797341
1.023021844
0.102302184
0.981392208
0.485630315
0.048563032
1.085396028
0.035098647
0.003509865
1.031600318
3.48248E05
3.48248E06
2641376.45
2123.26009
212.326009
10734.2988
4207.32949
420.732949
1.96830452
0.58544715
0.05854472
4.6535775
0.38824192
0.03882419
8.0773E06
7.3632E06
7.3632E07
0.017838978
0.00710319
0.000710312
0.997096357
0.200329207
0.020032921
423.9427774
330.6943507
33.06943507
11.22
3.945538332
0.394553833
4444.37734
1338.22348
133.822348
12540.34846
10.54344883
1.054344883
2.722732645
1.130249802
0.113024980
1.584674315
0.331393069
0.033139307
1.087082117
0.031926489
0.003192649
1.031628428
5.53445E09
5.53445E10
2736608.87
84608.4261
8460.84261
10822.5744
3978.9036
397.89036
2.48652905
0.52748454
0.05274845
1.96625309
0.4172199
0.0417220
0.00104216
0.00258109
0.00025811
4.0457E05
7.29366E05
7.29366E06
0.040860755
0.037067055
0.003706706
72.47196696
103.3253058
10.33253058
0
0
0
6636.76182
7763.48957
776.348957
12569.46257
0.03971048
0.00397105
0.009457309
0.014012005
0.001401200
0.024746761
0.026603311
0.002660331
0.091022469
0.192952247
0.019295225
1.031568182
8.34751E05
8.34751E06
2639715.92
2365.63557
236.563557
9606.13828
9516.94576
951.694576
3.20171518
1.57984035
0.15798404
0.34992759
0.24911012
0.02491101
6.6663E05
0.00014128
0.00001413
2.09647E10
3.60168E10
3.60168E11
5.70121E05
4.49754E05
4.49754E06
15.77537882
22.43722786
2.243722786
0
0
0
162.758505
294.309443
29.4309443
12569.48662
3.40336E11
3.40336E12
3.81653E08
6.06791E08
6.06791E09
5.83147E06
4.86194E06
4.86194E07
2.1722E11
4.5967E11
4.5967E12
1.031628452
1.45999E09
1.45999E10
2638638.74
0.00134496
0.00013450
2900.82884
3964.44543
396.444543
0.71929327
0.62934122
0.06293412
0.27805826
0.03101279
0.00310128
3.5302E11
7.426E11
7.426E12
0
0
0
0
0
0
38.155578
28.662165
2.8662165
0
0
0
0.2973795
0.2876311
0.0287631
12569.48662
1.0802E011
1.0802E012
0
0
0
0
0
0
0.00372009
0.00596625
0.000596625
1.031628452
9.2595E009
9.2595E010
0
0
0
2964.182291
1612.564548
161.2564548
0.96139267
0.41747234
0.04174723
0
0
0
0
0
0
202
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
At the middle of the iterations, we will have larger values of HMCR up to 1 to pick values from harmony memory with
more chance and also PAR goes up to 1 to enforce the selected values to have adjustments. Still domain is getting more
bounded.
At final iterations, HMCR is decreased. Reduction of HMCR helps to escape from local optima by generating new values
other than values in the harmony memory. Since the domain has been bounded; the reduction of HMCR does not affect the
accuracy and convergence of the algorithm. The Figs. 1 and 2 show a schematic of HMCR and PAR in dynamic mode.
Domain is changing dynamically as follows:
U HM ¼ maximum value of the variables in the harmony memory
ð12Þ
LHM ¼ minimum value of the variables in the harmony memory
ð13Þ
U new ¼ U HM þ bwmax
ð14Þ
Lnew ¼ LHM bwmax
ð15Þ
New limit ¼ ½Lnew ; U new ;
ð16Þ
Table 4
Results of the experiment A. Mean: mean of the best values, StdDev: standard deviation of the best values, SEM: Standard error of means (D = 50, NI = 50,000).
Function
Sphere
Schwefel’s problem 2.22
Rosenbrock
Step
Rotated hyper-ellipsoid
Generalized Schwefel’s P. 2.26
Rastrigin
Ackley
Griewank
Six-Hump Camel-Back
SRHCE
Schwefel’s P. 1.2 with noise
SRESF6
Shifted rotated weierstrass
Sum of different power
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
HS
IHS
GHS
GHS + LEM
This study
1.231713634
0.287760963
0.028776096
9.594968369
1.080214642
0.108021464
28119.05942
10535.26342
1053.526342
513.92
101.4288505
10.14288505
29509.5846
5773.08368
577.308368
20065.84929
183.3728874
18.33728874
35.35722669
4.955395824
0.495539582
5.259876609
0.384154298
0.038415430
5.701261605
1.080435525
0.108043552
1.031600318
3.48248E05
3.48248E06
4849016.47
321380.054
32138.0054
44173.8891
9303.92094
930.392094
7.53326153
0.91845325
0.09184532
13.1610808
1.03368193
0.10336819
11.8118627
14.739244
1.4739244
1.36689254
0.308892735
0.030889273
10.03102019
1.361876185
0.136187618
27416.72832
9607.338888
960.7338888
535.07
112.1655752
11.21655752
28901.7791
5338.41442
533.841442
20055.73906
187.2603852
18.72603852
45.97323267
5.414365656
0.541436566
5.382419569
0.38808497
0.03880849
5.887341775
1.108359613
0.110835961
1.031628428
5.53445E09
5.53445E10
5112265.38
477042.762
47704.2762
44978.9758
9664.43851
966.443851
7.56787837
0.73623798
0.07362379
12.7139825
1.16549521
0.11654952
9.55819903
36.363862
3.6363862
0.005550663
0.00776301
0.00077630
0.411417885
0.397066313
0.039706631
357.7255365
726.1644056
72.61644056
0.09
0.637149587
0.063714959
66423.751
22022.3762
2202.23762
20944.1766
8.953405915
0.895340591
0.407654111
0.622184354
0.062218435
0.324365569
0.444545366
0.044454536
0.700857354
0.368310151
0.036831015
1.031568182
8.34751E05
8.34751E06
4323424.84
382439.522
38243.9522
77200.6744
19279.3161
1927.93161
8.2040702
4.45743971
0.44574397
1.73347675
1.02389123
0.10238912
0.03464226
0.07923361
0.00792336
2.58528E08
5.5786E08
5.5786E09
0.000441435
0.000376589
0.000037659
39.49848422
59.9930843
5.99930843
0
0
0
9698.04483
7340.12668
734.012668
20949.14436
3.8441E09
3.8441E10
4.13742E06
8.94894E06
8.94894E07
6.09717E05
5.45021E05
5.45021E06
5.35254E09
1.198E08
1.198E09
1.031628452
1.45999E09
1.45999E10
4070199.91
0.00843488
0.00084348
23470.0405
11482.3296
1148.23296
1.27372272
1.20134079
0.12013408
1.25572551
0.1074032
0.01074032
7.1917E11
1.0876E10
1.0876E11
0
0
0
0
0
0
67.114686
42.538151
4.2538151
0
0
0
357.741914
117.986611
11.7986611
20949.14436
3.7323E11
3.7323E12
0
0
0
0
0
0
0.00221714
0.00490677
0.00049067
1.031628452
9.5882E09
9.5882E10
1.5897E08
5.7768E08
5.7768E09
25780.1068
6421.59280
642.159280
2.4822881
0.7184870
0.0718487
0
0
0
3.8403E12
1.1000E11
1.1000E12
203
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
Table 5
Results of the experiment A. Mean: mean of the best values, StdDev: standard deviation of the best values, SEM: Standard error of means (D = 30, NI = 5000).
Function
Sphere
Schwefel’s problem 2.22
Rosenbrock
Step
Rotated hyper-ellipsoid
Generalized Schwefel’s P. 2.26
Rastrigin
Ackley
Griewank
Six-Hump Camel-Back
SRHCE
Schwefel’s P. 1.2 with noise
SRESF6
Shifted rotated weierstrass
Sum of different power
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
HS
IHS
GHS
GHS + LEM
This study
1.391113338
0.442516349
0.044251635
7.343708198
1.298620467
0.129862047
44801.42361
27087.38846
2708.738846
577.17
178.0053373
17.80053373
18648.2804
4664.11977
466.411977
11723.5473
194.1725169
19.41725169
29.46075697
5.461732931
0.546173293
6.335464888
0.620693868
0.062069387
6.277931588
1.494900647
0.149490065
1.03155243
5.65151E05
5.65151E06
4949527.45
1549998.56
154999.856
25859.5942
6127.04081
612.704081
5.40555247
0.74702013
0.07470201
8.1876236
1.08444438
0.10844444
56.7816376
84.9067592
8.49067592
1.492336778
0.429375922
0.042937592
7.942415139
1.376467735
0.137646774
44335.30705
25827.36583
2582.736583
599.34
181.6569761
18.16569761
19114.2613
5576.63382
557.663382
11734.68475
182.3846445
18.23846445
36.89200558
6.233664508
0.623366451
6.480145971
0.686288345
0.068628834
6.370436585
1.816294084
0.181629408
1.031628431
7.41828E09
7.41828E10
5272015.41
2003255.08
200325.508
25706.8318
6656.33665
665.633665
5.66821959
0.57290456
0.05729046
9.14968576
1.03659385
0.10365938
70.2377894
135.452682
13.5452682
0.008490508
0.015014868
0.001501487
0.38849092
0.392227894
0.039222789
365.8780006
988.1136926
98.81136926
3.25
8.62738768
0.86273877
22845.7019
12812.2556
1281.22556
12561.42156
16.70991705
1.670991705
0.881408659
1.594135534
0.159413553
0.535233079
0.701961543
0.070196154
0.795212512
0.350711873
0.035071187
1.026283458
0.008075092
0.000807509
3257133.71
1000322.75
100032.275
33126.0056
14920.4074
1492.04074
3.73505659
2.6546381
0.2654638
1.62251403
1.09541355
0.10954135
0.04011324
0.06083117
0.00608312
5.93279E08
9.80204E08
9.80204E09
0.000681506
0.000587781
0.000058778
35.91286497
68.65861969
6.865861969
0
0
0
9742.75094
5923.66993
592.366993
12569.4597
0.248116333
0.024811633
1.17383E05
1.89959E05
1.89959E06
0.000125684
0.000103427
0.000010343
0.006467333
0.054165186
0.005416519
1.030628257
0.004327667
0.000432767
2638638.87
0.17113924
0.01711392
16884.5567
8512.95715
851.295715
1.38260315
1.1580235
0.1158024
0.9166651
0.08870444
0.00887044
8.4062E09
2.4547E08
2.4547E09
2.0616E12
2.6746E12
2.6746E13
5.5844E06
8.0947E06
8.0947E07
127.51887
219.55947
21.955947
0
0
0
6885.2165
1995.0210
199.50210
12221.5651
204.20577
20.420577
11.569186
3.4237195
0.3423720
1.2069E05
1.2343E05
1.2343E06
0.0069551
0.0073817
0.0007382
1.031628431
9.6625E09
9.6625E10
677.9420
737.0216
73.70216
23116.859
6545.2417
654.52417
4.6748659
0.9918541
0.0991854
0.0372841
0.0335924
0.0033592
4.1984E05
1.1419E04
1.1419E05
Table 6
Significance test for GDHS and HS (D = 30, NI = 50000). t: t-value of student t-test, SED: standard error of difference, p: p-value calculated for t-value, R: Rank of
p-value, I.R.: inverse rank of p-value, Sign: significance.
No
Function
t
SED
p
R
I.R.
NEW a
Sign.
11
9
14
1
5
2
6
4
8
12
13
7
15
10
3
SRHCE
Griewank
Shifted rotated weierstrass
Sphere
Rotated hyper-ellipsoid
Schwefel’s problem 2.22
Generalized Schwefel’s P. 2.26
Step
Ackley
Schwefel’s P. 1.2 with noise
SRESF6
Rastrigin
Sum of different power
Six-Hump Camel-Back
Rosenbrock
12440.193
303.8234
119.8628
70.6777
31.8348
29.9836
26.3894
25.0838
20.2086
17.2448
14.0034
12.3829
10.9698
8.0787
5.6240
212.33
0.0036
0.0388
0
131.9277
0.0048
0.9274
0.4609
0.0486
450.5773
0.0719
0.1023
0
0
48.74
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
15
14
13
12
11
10
9
8
7
6
5
4
3
2
1
0.003333
0.003571
0.003846
0.004167
0.004545
0.005000
0.005556
0.006250
0.007143
0.008333
0.010000
0.012500
0.016667
0.025000
0.050000
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
204
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
where Unew and Lnew are the new upper and lower bounds of the domain, respectively.
Maximum bandwidth is defined as:
bwmax ¼
Dðupper and lower limitsÞ
bwden
ð17Þ
Table 7
Significance test for GDHS and IHS (D = 30, NI = 50000). t: t-value of student t-test, SED: standard error of difference, p: p-value calculated for t-value, R: Rank of
p-value, I.R.: inverse rank of p-value, Sign: significance.
No
Function
t
SED
p
R
I.R.
NEW a
Sign.
9
11
2
8
14
5
4
6
1
7
13
10
12
3
15
Griewank
SRHCE
Schwefel’s problem 2.22
Ackley
Shifted rotated weierstrass
Rotated hyper-ellipsoid
Step
Generalized Schwefel’s P. 2.26
Sphere
Rastrigin
SRESF6
Six-Hump Camel-Back
Schwefel’s P. 1.2 with noise
Rosenbrock
Sum of different power
333.5559
323.4440
49.7729
47.8186
47.1275
33.2088
28.4372
27.6363
25.1140
24.0897
22.6719
22.2481
18.3040
11.6224
4.0377
0.0032
8460.8426
0.0200
0.0331
0.0417
133.8224
0.3946
1.0543
0.0007
0.1130
0.0673
0
429.3255
33.1934
0.0003
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
15
14
13
12
11
10
9
8
7
6
5
4
3
2
1
0.003333
0.003571
0.003846
0.004167
0.004545
0.005000
0.005556
0.006250
0.007143
0.008333
0.010000
0.012500
0.016667
0.025000
0.050000
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
Table 8
Significance test for GDHS and GHS (D = 30, NI = 50000). t: t-value of student t-test, SED: standard error of difference, p: p-value calculated for t-value, R: Rank
of p-value, I.R.: inverse rank of p-value, Sign: significance.
No
Function
t
SED
p
R
I.R.
NEW a
Sign.
11
14
13
2
8
5
10
12
7
6
1
15
9
3
4
SRHCE
Shifted rotated weierstrass
SRESF6
Schwefel’s problem 2.22
Ackley
Rotated hyper-ellipsoid
Six-Hump Camel-Back
Schwefel’s P. 1.2 with noise
Rastrigin
Generalized Schwefel’s P. 2.26
Sphere
Sum of different power
Griewank
Rosenbrock
Step
11158.5908
14.0471
13.7101
11.0235
9.3021
8.5483
7.2201
6.8810
6.7494
6.0563
5.5469
4.7185
4.5224
3.2003
Inf
236.56356
0.024911
0.1634068
0.0037
0.0026603
776.3490
0
965.25966
0.0014
0.0040
0
0
0.0193
10.7227
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
15
14
13
12
11
10
9
8
7
6
5
4
3
2
1
0.003333
0.003571
0.003846
0.004167
0.004545
0.005000
0.005556
0.006250
0.007143
0.008333
0.010000
0.012500
0.016667
0.025000
0.050000
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
–
Table 9
Significance test for GDHS and GHS + LEM (D = 30, NI = 50000). t: t-value of student t-test, SED: standard error of difference, p: p-value calculated for t-value, R:
Rank of p-value, I.R.: inverse rank of p-value, Sign: significance.
No.
Function
t
SED
p
R
I.R.
NEW a
Sign.
11
14
2
8
7
9
3
1
5
15
13
12
6
10
4
SRHCE
Shifted rotated weierstrass
Schwefel’s problem 2.22
Ackley
Rastrigin
Griewank
Rosenbrock
Sphere
Rotated hyper-ellipsoid
Sum of different power
SRESF6
Schwefel’s P. 1.2 with noise
Generalized Schwefel’s P. 2.26
Six-Hump Camel-Back
Step
1.962E+10
89.6592
12.6763
11.9941
6.2897
6.2352
6.1484
5.8208
5.5201
4.7538
3.2057
0.1480
0
0
Inf
0.0001
0.0031
0
0
0
0.0006
3.6400
0
29.4310
0
0.0755
427.9859
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
1
1
1
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
15
14
13
12
11
10
9
8
7
6
5
4
3
2
1
0.003333
0.003571
0.003846
0.004167
0.004545
0.005000
0.005556
0.006250
0.007143
0.008333
0.010000
0.012500
0.016667
0.025000
0.050000
GDHS
GDHS
GDHS
GDHS
GDHS
GHS + LEM
GHS + LEM
GDHS
GDHS
GDHS
GHS + LEM
–
–
–
–
205
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
where bwden is the denominator coefficient of the bandwidth and it is a fixed value throughout the optimization process:
bwden ¼ 20 abs 1 þ log10 U initial Linitial
ð18Þ
Table 10
Significance test for GDHS and HS (D = 50, NI = 50000). t: t-value of student t-test, SED: standard error of difference, p: p-value calculated for t-value, R: Rank of
p-value, I.R.: inverse rank of p-value, Sign: significance.
No
Function
t
SED
p
R
I.R.
NEW a
Sign.
11
8
14
2
7
9
4
5
6
13
1
3
12
10
15
SRHCE
Ackley
Shifted rotated weierstrass
Schwefel’s problem 2.22
Rastrigin
Griewank
Step
Rotated hyper-ellipsoid
Generalized Schwefel’s P. 2.26
SRESF6
Sphere
Rosenbrock
Schwefel’s P. 1.2 with noise
Six-Hump Camel-Back
Sum of different power
150.8811
136.9209
127.3223
88.8246
71.3510
52.7471
50.6680
50.4856
48.1693
43.3152
42.8034
26.6265
16.2707
8.0787
8.0139
32138.01
0.0384
0.1034
0.1080
0.4955
0.1080
10.1429
577.4289
18.3373
0.1166
0.0288
1053.53
1130.4857
0
1.4739
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
15
14
13
12
11
10
9
8
7
6
5
4
3
2
1
0.003333
0.003571
0.003846
0.004167
0.004545
0.005000
0.005556
0.006250
0.007143
0.008333
0.010000
0.012500
0.016667
0.025000
0.050000
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
Table 11
Significance test for GDHS and IHS (D = 50, NI = 50000). t: t-value of student t-test, SED: standard error of difference, p: p-value calculated for t-value, R: Rank of
p-value, I.R.: inverse rank of p-value, Sign: significance.
No
Function
t
SED
p
R
I.R.
NEW a
Sign.
8
14
11
7
2
5
9
13
6
4
1
3
10
12
15
Ackley
Shifted rotated weierstrass
SRHCE
Rastrigin
Schwefel’s problem 2.22
Rotated hyper-ellipsoid
Griewank
SRESF6
Generalized Schwefel’s P. 2.26
Step
Sphere
Rosenbrock
Six-Hump Camel-Back
Schwefel’s P. 1.2 with noise
Sum of different power
138.6918
109.0865
107.1658
84.9097
73.6559
53.4561
53.0971
49.4360
47.7093
47.7036
44.2514
28.4671
21.6784
16.5459
2.6285
0.0388
0.1165
47704.2762
0.5414
0.1362
533.9718
0.1108
0.1029
18.7260
11.2166
0.0309
960.7433
0
1160.3371
3.6364
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
15
14
13
12
11
10
9
8
7
6
5
4
3
2
1
0.003333
0.003571
0.003846
0.004167
0.004545
0.005000
0.005556
0.006250
0.007143
0.008333
0.010000
0.012500
0.016667
0.025000
0.050000
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
Table 12
Significance test for GDHS and GHS (D = 50, NI = 50000). t: t-value of student t-test, SED: standard error of difference, p: p-value calculated for t-value, R: Rank
of p-value, I.R.: inverse rank of p-value, Sign: significance.
No
Function
t
SED
p
R
I.R.
NEW a
Sign.
11
5
12
9
14
13
2
8
10
1
7
6
15
3
4
SRHCE
Rotated hyper-ellipsoid
Schwefel’s P. 1.2 with noise
Griewank
Shifted rotated weierstrass
SRESF6
Schwefel’s problem 2.22
Ackley
Six-Hump Camel-Back
Sphere
Rastrigin
Generalized Schwefel’s P. 2.26
Sum of different power
Rosenbrock
Step
113.0486
29.9991
25.3046
18.9671
16.9303
12.6729
10.3614
7.2966
7.2201
7.1501
6.5520
5.5485
4.3722
3.9951
1.4125
38243.952
2202.2692
2032.0652
0.0368
0.1024
0.4515
0.0397
0.0445
0
0.0008
0.0622
0.8953
0.0079
72.7409
0.0637
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
15
14
13
12
11
10
9
8
7
6
5
4
3
2
1
0.003333
0.003571
0.003846
0.004167
0.004545
0.005000
0.005556
0.006250
0.007143
0.008333
0.010000
0.012500
0.016667
0.025000
0.050000
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
206
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
Uinitial and Linitial are the problem’s initial upper and lower limits, respectively. An example of limit reduction is shown in
Fig. 3.
Minimum bandwidth is set as:
bwmin ¼ 0:001 bwmax
ð19Þ
Table 13
Significance test for GDHS and GHS + LEM (D = 50, NI = 50000). t: t-value of student t-test, SED: standard error of difference, p: p-value calculated for t-value, R:
Rank of p-value, I.R.: inverse rank of p-value, Sign: significance.
No
Function
t
SED
p
R
I.R.
NEW a
Sign.
11
14
5
2
8
13
15
1
7
9
3
12
6
10
4
SRHCE
Shifted rotated weierstrass
Rotated hyper-ellipsoid
Schwefel’s problem 2.22
Ackley
SRESF6
Sum of different power
Sphere
Rastrigin
Griewank
Rosenbrock
Schwefel’s P. 1.2 with noise
Generalized Schwefel’s P. 2.26
Six-Hump Camel-Back
Step
4.825E+09
116.9170
12.7233
11.7219
11.1870
8.6338
6.2276
4.6343
4.6234
4.5185
3.7551
1.7559
0
0
Inf
0.0008
0.0107
734.1075
0
0
0.1400
0
0
0
0.0005
7.3544
1315.6016
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
1
1
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
15
14
13
12
11
10
9
8
7
6
5
4
3
2
1
0.003333
0.003571
0.003846
0.004167
0.004545
0.005000
0.005556
0.006250
0.007143
0.008333
0.010000
0.012500
0.016667
0.025000
0.050000
GDHS
GDHS
GDHS
GDHS
GDHS
GHS + LEM
GDHS
GDHS
GDHS
GHS + LEM
GHS + LEM
GHS + LEM
–
–
–
Table 14
Significance test for GDHS and HS (D = 30, NI = 5000). t: t-value of student t-test, SED: standard error of difference, p: p-value calculated for t-value, R: Rank of pvalue, I.R.: inverse rank of p-value, Sign: significance.
No
Function
t
SED
p
R
I.R.
NEW a
Sign.
8
14
2
9
4
11
1
7
5
6
3
10
15
13
12
Ackley
Shifted rotated weierstrass
Schwefel’s problem 2.22
Griewank
Step
SRHCE
Sphere
Rastrigin
Rotated hyper-ellipsoid
Generalized Schwefel’s P. 2.26
Rosenbrock
Six-Hump Camel-Back
Sum of different power
SRESF6
Schwefel’s P. 1.2 with noise
102.0705
75.1208
56.5500
41.9486
32.4243
31.9281
31.4364
27.7556
23.1881
17.6737
16.4920
13.4479
6.6875
5.8846
3.0592
0.0621
0.1085
0.1299
0.1495
17.8005
154999.87
0.0442
0.6446
507.2881
28.1785
2708.83
0.0000
8.4907
0.1242
896.5535
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
15
14
13
12
11
10
9
8
7
6
5
4
3
2
1
0.003333
0.003571
0.003846
0.004167
0.004545
0.005000
0.005556
0.006250
0.007143
0.008333
0.010000
0.012500
0.016667
0.025000
0.050000
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
Table 15
Significance test for GDHS and IHS (D = 30, NI = 5000). t: t-value of student t-test, SED: standard error of difference, p: p-value calculated for t-value, R: Rank of
p-value, I.R.: inverse rank of p-value, Sign: significance.
No
Function
t
SED
p
R
I.R.
NEW a
Sign.
8
14
2
7
9
1
4
11
5
6
3
13
15
12
10
Ackley
Shifted rotated weierstrass
Schwefel’s problem 2.22
Rastrigin
Griewank
Sphere
Step
SRHCE
Rotated hyper-ellipsoid
Generalized Schwefel’s P. 2.26
Rosenbrock
SRESF6
Sum of different power
Schwefel’s P. 1.2 with noise
Six-Hump Camel-Back
94.4229
87.861
57.7014
35.6058
35.0352
34.7559
32.993
26.3139
20.6476
17.7826
17.1160
8.6724
5.1854
2.7744
0
0.0686
0.1037
0.1376
0.7112
0.1816
0.0429
18.1657
200325.52
592.2749
27.3796
2582.8299
0.1145
13.5453
933.5256
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
15
14
13
12
11
10
9
8
7
6
5
4
3
2
1
0.003333
0.003571
0.003846
0.004167
0.004545
0.005000
0.005556
0.006250
0.007143
0.008333
0.010000
0.012500
0.016667
0.025000
0.050000
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
–
207
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
Table 16
Significance test for GDHS and GHS (D = 30, NI = 5000). t: t-value of student t-test, SED: standard error of difference, p: p-value calculated for t-value, R: Rank of
p-value, I.R.: inverse rank of p-value, Sign: significance.
No
Function
t
SED
p
R
I.R.
NEW a
Sign.
11
7
9
6
14
5
2
8
10
15
12
1
4
13
3
SRHCE
Rastrigin
Griewank
Generalized Schwefel’s P. 2.26
Shifted rotated weierstrass
Rotated hyper-ellipsoid
Schwefel’s problem 2.22
Ackley
Six-Hump Camel-Back
Sum of different power
Schwefel’s P. 1.2 with noise
Sphere
Step
SRESF6
Rosenbrock
32.554
28.2996
22.4709
16.5874
14.4647
12.3089
9.9046
7.6246
6.6191
6.5873
6.1433
5.6547
3.7671
3.3163
2.3548
100032.3
0.3777
0.0351
20.4888
0.1096
1296.6650
0.0392
0.0702
0.0008
0.0061
1629.2905
0.0015
0.8627
0.2834
101.2213
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
15
14
13
12
11
10
9
8
7
6
5
4
3
2
1
0.003333
0.003571
0.003846
0.004167
0.004545
0.005000
0.005556
0.006250
0.007143
0.008333
0.010000
0.012500
0.016667
0.025000
0.050000
GDHS
GHS
GDHS
GHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GHS
GDHS
Table 17
Significance test for GDHS and GHS + LEM (D = 30, NI = 5000). t: t-value of student t-test, SED: standard error of difference, p: p-value calculated for t-value, R:
Rank of p-value, I.R.: inverse rank of p-value, Sign: significance.
No
Function
t
SED
p
R
I.R.
NEW a
Sign.
11
14
7
13
6
2
8
1
12
5
3
15
10
9
4
SRHCE
Shifted rotated weierstrass
Rastrigin
SRESF6
Generalized Schwefel’s P. 2.26
Schwefel’s problem 2.22
Ackley
Sphere
Schwefel’s P. 1.2 with noise
Rotated hyper-ellipsoid
Rosenbrock
Sum of different power
Six-Hump Camel-Back
Griewank
Step
35792.178
92.7107
33.7912
21.5925
17.0365
11.4985
10.9075
6.0523
5.8038
4.5717
3.9821
3.676
2.3111
0.0891
Inf
73.7022
0.0095
0.3424
0.1525
20.4206
0.0001
0
0
1073.8279
625.0598
23.0044
0
0.0004
0.0054
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
1
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
15
14
13
12
11
10
9
8
7
6
5
4
3
2
1
0.003333
0.003571
0.003846
0.004167
0.004545
0.005000
0.005556
0.006250
0.007143
0.008333
0.010000
0.012500
0.016667
0.025000
0.050000
GDHS
GDHS
GHS + LEM
GHS + LEM
GHS + LEM
GDHS
GDHS
GDHS
GHS + LEM
GDHS
GHS + LEM
GHS + LEM
GDHS
–
–
Table 18
Runtime of each test function for experiment A, with GDHS algorithm (sec. per run).
No.
Function
D = 30, NI = 50000
D = 50, NI = 50000
D = 30, NI = 5000
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
Sphere
Schwefel’s problem 2.22
Rosenbrock
Step
Rotated hyper-ellipsoid
Generalized Schwefel’s P. 2.26
Rastrigin
Ackley
Griewank
Six-Hump Camel-Back
SRHCE
Schwefel’s P. 1.2 with noise
SRESF6
Shifted rotated weierstrass
Sum of different power
2.780
2.790
2.797
2.782
3.441
3.095
2.978
3.016
3.093
3.037 (D = 2)
3.057
3.522
3.133
24.492
3.270
3.247
3.242
3.251
3.251
4.397
3.628
3.338
3.500
3.605
3.037 (D = 2)
3.699
4.402
3.683
37.891
3.921
0.281
0.289
0.287
0.288
0.350
0.321
0.297
0.318
0.327
0.313 (D = 2)
0.316
0.345
0.329
2.393
0.334
System: windows 7 ultimate.
CPU: Intel Ò Core™ 2 Duo, 2.66–2.67 GHz.
RAM: 4 G.
Language: Matlab 7.12.0.635.
Algorithm: Global Dynamic Harmony Search (GDHS).
Harmony memory size (HMS) = 5.
D: dimension, NI: number of improvisations.
208
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
100%
90%
80%
70%
60%
50%
40%
Others
30%
GDHS
20%
10%
0%
Fig. 5. Success rate of GDHS versus other algorithms in experiment A (D = 30, NI = 50,000).
100%
90%
80%
70%
60%
50%
40%
Others
30%
GDHS
20%
10%
0%
Fig. 6. Success rate of GDHS versus other algorithms in experiment A (D = 50, NI = 50,000).
100%
90%
80%
70%
60%
50%
40%
Others
30%
GDHS
20%
10%
0%
Fig. 7. Success rate of GDHS versus other algorithms in experiment A (D = 30, NI = 5000).
209
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
Table 19
Benchmark functions used in experiment B. D: dimension, C: characteristic, U: unimodal, M: multimodal, S: separable, N: non-separable.
No. Range
1
[5.12, 5.12]
D
C
5 US
Function
Stepint
2
[100, 100]
30 US
Step
3
[100, 100]
30 US
Sphere
4
[10, 10]
30 US
SumSquares
5
[1.28, 1.28]
30 US
Quartic
6
[4.5, 4.5]
2 UN
Beale
7
[100, 100]
2 UN
Easom
8
[10, 10]
2 UN
Matyas
9
[10, 10]
4 UN
Colville
Formulation
P
f ðxÞ ¼ 30 þ 5i¼1 bxi c
PD
f ðxÞ ¼ i¼1 ðbxi þ 0:5cÞ2
P
2
f ðxÞ ¼ D
i¼1 xi
P
2
ix
f ðxÞ ¼ D
i¼1 i
P
4
f ðxÞ ¼ D
ix
i¼1 i þ random½0; 1Þ
2
f ðxÞ ¼ ð1:5 x1 þ x1 x2 Þ2 þ ð2:25 x1 þ x1 x22 Þ þ ð2:625 x1 þ x1 x32 Þ
2
2
2
f ðxÞ ¼ cosðx1 Þ cosðx2 Þ expððx1 pÞ ðx2 pÞ Þ
f ðxÞ ¼ 0:26ðx21 þ x22 Þ 0:48x1 x2
2
2
f ðxÞ ¼ 100ðx21 x2 Þ þ ðx1 1Þ2 þ ðx3 1Þ2 þ 90ðx23 x4 Þ þ 10:1ððx2 1Þ2
2
10
[D2, D2]
6 UN
Trid6
11
[D2, D2]
10 UN
Trid10
12
[5, 10]
10 UN
Zakharov
13
[4, 5]
24 UN
Powell
14
[10, 10]
30 UN
Schwefel 2.22
15
[100, 100]
30 UN
Schwefel 1.2
16
[30, 30]
30 UN
Rosenbrock
17
[10, 10]
30 UN
18
[65.536, 65.536]
19
2 MS
Dixon-Price
Foxholes
[5, 10] [0,15]
2 MS
Branin
20
[100, 100]
2 MS
Bohachevsky 1
21
[100, 100]
2 MN Bohachevsky 2
þ ðx4 1Þ Þ þ 19:8ðx2 1Þðx4 1Þ
PD
PD
2
i¼1 ðxi 1Þ
i¼2 xi xi1
PD
P
2
f ðxÞ ¼ i¼1 ðxi 1Þ D
i¼2 xi xi1
PD
PD
P
2
4
2
f ðxÞ ¼ D
i¼1 xi þ ð
i¼1 0:5ixi Þ þ ð
i¼1 0:5ixi Þ
PD=4
2
f ðxÞ ¼ i¼1 ðx4i3 þ 10x4i2 Þ þ 5ðx4i1 x4i Þ2 þ ðx4i2 x4i1 Þ4 þ 10ðx4i3 x4i Þ4
P
QD
f ðxÞ ¼ D
i¼1 jxi j þ
i¼1 jxi j
2
PD Pi
f ðxÞ ¼ i¼1 ð j¼1 xj Þ
PD1
2
f ðxÞ ¼ i¼1 ½100ðxiþ1 x2i Þ þ ðxi 1Þ2
P
2
2
ið2x
f ðxÞ ¼ ðx1 1Þ2 þ D
i¼2
i xi1 Þ
f ðxÞ ¼
f ðxÞ ¼ 0:002 þ
P25
j¼1
1
ðx aij Þ6
i¼1 i
1
P2
jþ
2
2
5
þ 10 1 81p coxðx1 Þ þ 10
f ðxÞ ¼ x2 45:1
p2 x1 þ p x1 6
f ðxÞ ¼ x21 þ 2x22 0:3 cosð3px1 Þ 0:4 cosð4px2 Þ þ 0:7
f ðxÞ ¼ x21 þ 2x22 0:3 cosð3px1 Þ cosð4px2 Þ þ 0:3
22
[100, 100]
2 MN Bohachevsky 3
23
[10, 10]
2 MS
24
[5.12, 5.12]
30 MS
25
[500, 500]
30 MS
26
[0, p]
27
28
29
[100, 100]
2 MN Schaffer
30
[5, 5]
2 MN Six Hump Camel f ðxÞ ¼ 4x21 2:1x41 þ 13 x61 þ x1 x2 4x22 þ 4x42
Back
P
P
2 MN Shubert
f ðxÞ ¼ ð 5i¼1 i cosðði þ 1Þx1 þ iÞÞð 5i¼1 i cosðði þ 1Þx2 þ iÞÞ
2 MN Goldstein-Price
f ðxÞ ¼ ½1 þ ðx1 þ x2 þ 1Þ2 ð19 14x1 þ 3x21 14x2 þ 6x1 x2 þ 3x22 Þ
Booth
Rastrigin
Schwefel
2 MS
Michalewicz 2
[0, p]
5 MS
Michalewicz 5
[0, p]
10 MS
Michalewicz 10
f ðxÞ ¼ x21 þ 2x22 0:3 cosð3px1 þ 4px2 Þ þ 0:3
f ðxÞ ¼ ðx1 þ 2x2 7Þ2 þ ð2x1 þ x2 5Þ2
P
2
f ðxÞ ¼ D
i¼1 ½xi 10 cosð2pxi Þ þ 10
pffiffiffiffiffiffiffi
P
xi sinð jxi jÞ
f ðxÞ ¼ D
i¼1
2m
PD
f ðxÞ ¼ i¼1 sinðxi Þðsinðix2i =pÞÞ ; m ¼ 10
PD
2m
2
f ðxÞ ¼ i¼1 sinðxi Þðsinðixi =pÞÞ ; m ¼ 10
PD
2m
2
f ðxÞ ¼ i¼1 sinðxi Þðsinðixi =pÞÞ ; m ¼ 10
pffiffiffiffiffiffiffiffiffiffi
sin2 ð x21 þx22 Þ0:5
f ðxÞ ¼ 0:5 þ
2
2 2
ð1þ0:001ðx1 þx2 ÞÞ
31
[10, 10]
32
[2, 2]
33
[5, 5]
4 MN Kowalik
34
[0,10]
4 MN Shekel 5
35
[0,10]
4 MN Shekel 7
36
[0,10]
4 MN Shekel 10
½30 þ ð2x1 3x2 Þ2 ð18 32x1 þ 12x21 þ 48x2 36x1 x2 þ 27x22 Þ
2
2
P
x1 ðbi þbi x2 Þ
f ðxÞ ¼ 11
i¼1 ðai b2 þb x þx Þ
4
i 3
i
P5
f ðxÞ ¼ i¼1 P4 1 2
j¼1
37
[D, D]
4 MN Perm
38
[0, D]
4 MN PowerSum
39
[0, 1]
3 MN Hartman 3
40
[0, 1]
6 MN Hartman 6
41
[600, 600]
30 MN Griewank
42
[32, 32]
30 MN Ackley
f ðxÞ ¼
f ðxÞ ¼
P7
i¼1
P10
i¼1
P4
j¼1
P4
j¼1
ðxj aij Þ þci
1
ðxj aij Þ2 þci
1
ðxj aij Þ2 þci
2
PD PD k
k
k¼1 ½
i¼1 ði þ bÞððxi =iÞ 1Þ
i2
PD h PD k
f ðxÞ ¼ k¼1 ð i¼1 xi Þ bk
h P
i
P
f ðxÞ ¼ 4i¼1 ci exp 3j¼1 aij ðxj pij Þ2
h
i
P
P
f ðxÞ ¼ 4i¼1 ci exp 6j¼1 aij ðxj pij Þ2
PD 2 QD
xiffi
1
p
f ðxÞ ¼ 4000
i¼1 xi
i¼1 cosð iÞ þ 1
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
P
P
D
2 exp 1
f ðxÞ ¼ 20 exp 0:2 D1 D
i¼1 xi
i¼1 cosð2pxi Þ þ 20 þ e
D
f ðxÞ ¼
(continued on next page)
210
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
Table 19 (continued)
No. Range
D
43
[50, 50]
30 MN Penalized
C
Function
44
[50, 50]
30 MN Penalized 2
45
[0, 10]
2 MN Langerman 2
46
[0,10]
5 MN Langerman 5
47
[0,10]
10 MN Langerman 10
Formulation
n
o P
P
2
2
2
f ðxÞ ¼ Dp 10 sin2 ðpy1 Þ þ D1
þ D
i¼1 ðyi 1Þ ½1 þ 10 sin ðpyiþ1 Þ þ ðyD 1Þ
i¼1 uðxi ; 10; 100; 4Þ
8
m
< kðxi aÞ ;
xi > a
yi ¼ 1 þ 14 ðxi þ 1Þuðxi ; a; k; mÞ ¼ 0;
a 6 xi 6 a
:
kðxi aÞm ; xi < a
n
o
P
2
2
2
2
2
f ðxÞ ¼ 0:1 sin2 ðpx1 Þ þ D1
i¼1 ðxi 1Þ ½1 þ sin ð3pxiþ1 Þ þ ðxD 1Þ ½1 þ sin ð2pxD Þ þ ðyD 1Þ
P
þ D
i¼1 uðxi ; 5; 100; 4Þ
P
P5
P
2
2
f ðxÞ ¼ i¼1 ci exp p1 D
cos p D
j¼1 ðxj aij Þ
j¼1 ðxj aij Þ
P
P
P
2
2
f ðxÞ ¼ 5i¼1 ci exp p1 D
cos p D
j¼1 ðxj aij Þ
j¼1 ðxj aij Þ
P
P
P
2
2
f ðxÞ ¼ 5i¼1 ci exp p1 D
cos p D
j¼1 ðxj aij Þ
j¼1 ðxj aij Þ
So, the bandwidth is in the form of:
bw ¼ bwmax e
bw
Lnbw min
max
iteration1
Maximp 1
ð20Þ
This is reduced to:
bw ¼ bwmax e
ðLnð0:001ÞÞ
iteration1
Maximp 1
ð21Þ
For pitch adjustment, a correction coefficient is proposed as:
iteration 1
coef ¼ ð1 þ ðHMS jÞÞ 1
Maximp 1
ð22Þ
Adjustment formula is:
x0i ¼ xji bw coef
ð23Þ
x0i .
In Eq. (22), j is the selected solution vector number, j (1, . . ., HMS), related to
The ‘‘coef’’ parameter makes smaller or
larger bandwidths with respect to the quality of the selected value in the harmony memory and also state of the optimization (i.e. iteration number). The improvisation step of GDHS is shown in Fig. 4.
4. GDHS verifications tests
In this section two experiments are executed to show the performance of the proposed algorithm. In the experiment A, 15
benchmark problems are selected based on GHS + LEM [9]. The values of responses of other algorithms are obtained from the
[9], so as to compare in fair conditions, the initial values of GHS + LEM are used and the averages are obtain after 100
repetitions as in [9].
In the second experiment 47 benchmark functions are used based on Artificial Bee Colony [14] to test the performance of
the proposed algorithm against other types of algorithms from different families. Since the response values of other
algorithms are obtained from the so-called paper, the same initial and test conditions are used, so the algorithm results
are repeated 30 times to meet that condition of the [14].
4.1. Experiment A
4.1.1. Benchmark functions and parameter settings
This section shows the performance of the proposed algorithm versus other recently developed modifications of HS algorithm. In order to compare the proposed algorithm with previous works, 15 test functions are selected based on GHS + LEM
[9] which include unimodal/multimodal, separable/non-separable functions. The definition and characteristics of these test
functions are gathered from [15–17]. Table 1 shows the parameters for each algorithm, while Table 2 represents the 15
benchmark test functions and their conditions. Except the bi-dimensional Six-Hump Camel-Back, all other test functions
are multidimensional which are tested for 30 and 50 dimensions. All of the functions used for this experiment, have been
tested with the maximum number of iterations (total function evaluation number) of 5000 and 50,000 and the harmony
memory size (population size) is set at 5. The initial harmony memory is generated randomly within the ranges specified
211
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
Table 20
Results of the experiment B. D: dimension, Mean: mean of the best values, StdDev: standard deviation of the best values, SEM: standard error of means.
No.
D
Function
Min.
1
5
Stepint
0
2
30
Step
0
3
30
Sphere
0
4
30
SumSquares
0
5
30
Quartic
0
6
2
Beale
0
7
2
Easom
1
8
2
Matyas
0
9
4
Colville
0
10
6
Trid6
50
11
10
Trid10
210
12
10
Zakharov
0
13
24
Powell
0
14
30
Schwefel 2.22
0
15
30
Schwefel 1.2
0
16
30
Rosenbrock
0
17
30
Dixon-Price
0
18
2
Foxholes
0.998
19
2
Branin
0.398
20
2
Bohachevsky 1
0
21
2
Bohachevsky 2
0
22
2
Bohachevsky 3
0
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
GA
PSO
DE
ABC
This study
0
0
0
1.17E+03
76.561450
13.978144
1.11E+03
74.214474
13.549647
1.48E+02
12.4092893
2.265616
0.1807
0.027116
0.004951
0
0
0
1
0
0
0
0
0
0.014938
0.007364
0.001344
49.9999
2.25E5
4.11E06
209.476
0.193417
0.035313
0.013355
0.004532
0.000827
9.703771
1.547983
0.282622
11.0214
1.386856
0.253204
7.40E+03
1.14E+03
208.1346
1.96E+05
3.85E+04
7029.10615
1.22E+03
2.66E+02
48.564733
0.998004
0
0
0.397887
0
0
0
0
0
0.06829
0.078216
0.014280
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0.00115659
0.000276
5.04E05
0
0
0
1
0
0
0
0
0
0
0
0
50
0
0
210
0
0
0
0
0
0.00011004
0.000160
2.92E05
0
0
0
0
0
0
15.088617
24.170196
4.412854
0.6666667
E8
1.8257E09
0.99800393
0
0
0.39788736
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0.0013633
0.000417
7.61E05
0
0
0
1
0
0
0
0
0
0.0409122
0.081979
0.014967
50
0
0
210
0
0
0
0
0
2.17E7
1.36E7
2.48E08
0
0
0
0
0
0
18.203938
5.036187
0.033333
0.6666667
E9
1.8257E10
0.9980039
0
0
0.3978874
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0.0300166
0.004866
0.000888
0
0
0
1
0
0
0
0
0
0.0929674
0.066277
0.012100
50
0
0
210
0
0
0.0002476
0.000183
3.34E05
0.0031344
0.000503
9.18E05
0
0
0
0
0
0
0.0887707
0.077390
0.014129
0
0
0
0.9980039
0
0
0.3978874
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0.0005617
0.0001018
1.86E05
0
0
0
1
0
0
0
0
0
3.9911E07
5.9499E07
1.0863E07
50
0
0
210
0
0
0
0
0
0.0014455
0.0002926
5.34E05
0
0
0
2.4498E06
2.8195E06
5.1477E07
31.5794810
23.273952
4.249223
0.6666667
1.79E9
3.27E10
0.9980039
0
0
0.3978874
0
0
0
0
0
0
0
0
0
0
0
(continued on next page)
212
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
Table 20 (continued)
No.
D
Function
Min.
23
2
Booth
0
24
30
Rastrigin
0
25
30
Schwefel
12569.5
26
2
Michalewicz 2
1.8013
27
5
Michalewicz 5
4.6877
28
10
Michalewicz 10
9.6602
29
2
Schaffer
0
30
2
Six Hump Camel Back
1.03163
31
2
Shubert
186.73
32
2
Goldstein-Price
3
33
4
Kowalik
0.00031
34
4
Shekel 5
10.15
35
4
Shekel 7
10.4
36
4
Shekel 10
10.53
37
4
Perm
0
38
4
PowerSum
0
39
3
Hartman 3
3.86
40
6
Hartman 6
3.32
41
30
Griewank
0
42
30
Ackley
0
43
30
Penalized
0
44
30
Penalized 2
0
45
2
Langerman 2
1.08
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
Mean
StdDev
SEM
GA
PSO
DE
ABC
This study
0
0
0
52.92259
4.564860
0.833426
11593.4
93.254240
17.025816
1.8013
0
0
4.64483
0.097850
0.017865
9.49683
0.141116
0.025764
0.004239
0.004763
0.000870
1.03163
0
0
186.731
0
0
5.250611
5.870093
1.071727
0.005615
0.008171
0.001492
5.66052
3.866737
0.705966
5.34409
3.517134
0.642138
3.82984
2.451956
0.447664
0.302671
0.193254
0.035286
0.010405
0.009077
0.001657
3.86278
0
0
3.29822
0.050130
0.009152
10.63346
1.161455
0.212052
14.67178
0.178141
0.032524
13.3772
1.448726
0.2645
125.0613
12.001204
2.191110
1.08094
0
0
0
0
0
43.9771369
11.728676
2.141353
6909.1359
457.957783
83.611269
1.5728692
0.119860
0.021883
2.4908728
0.256952
0.046913
4.0071803
0.502628
0.091767
0
0
0
1.0316285
0
0
186.73091
0
0
3
0
0
0.00049062
0.000366
6.68E05
2.0870079
1.178460
0.215156
1.9898713
1.420602
0.259365
1.8796753
0.432476
0.078959
0.03605158
0.048927
0.008933
11.3904479
7.355800
1.342979
3.6333523
0.116937
0.021350
1.8591298
0.439958
0.080325
0.01739118
0.020808
0.003799
0.16462236
0.493867
0.090167
0.0207338
0.041468
0.007571
0.00767535
0.016288
0.002974
0.679268
0.274621
0.050139
0
0
0
11.716728
2.538172
0.463405
10266
521.849292
95.276209
1.801303
0
0
4.683482
0.012529
0.002287
9.591151
0.064205
0.011722
0
0
0
1.031628
0
0
186.7309
0
0
3
0
0
0.0004266
0.000273
4.98E05
10.1532
0
0
10.40294
0
0
10.53641
0
0
0.0240069
0.046032
0.008404
0.0001425
0.000145
2.65E05
3.862782
0
0
3.226881
0.047557
0.008683
0.0014792
0.002958
0.000540
0
0
0
0
0
0
0.0021975
0.004395
0.000802
1.080938
0
0
0
0
0
0
0
0
12569.487
0
0
1.8013034
0
0
4.6876582
0
0
9.6601517
0
0
0
0
0
1.0316285
0
0
186.73091
0
0
3
0
0
0.0004266
6.04E5
1.10E05
10.1532
0
0
10.402941
0
0
10.53641
0
0
0.0411052
0.023056
0.004209
0.0029468
0.002289
0.000418
3.8627821
0
0
3.3219952
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1.0809384
0
0
0
0
0
0
0
0
12569.487
0
0
1.8013034
0
0
4.6876582
0
0
9.6601517
0
0
0
0
0
1.0316285
0
0
186.73091
0
0
3
0
0
0.0006651
2.17E5
3.97E6
9.3310072
2.132314
0.389305
10.402941
0
0
10.536410
0
0
0.0032908
0.002523
0.0004606
0.0002550
0.000272
4.97E05
3.8627821
0
0
3.2985268
0.0484976
0.008854
0
0
0
0
0
0
0
0
0
0
0
0
1.0809384
0
0
213
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
Table 20 (continued)
No.
D
Function
Min.
46
5
Langerman 5
1.5
47
10
Langerman 10
NA
Mean
StdDev
SEM
Mean
StdDev
SEM
GA
PSO
DE
ABC
This study
0.96842
0.287548
0.052499
0.63644
0.374682
0.068407
0.5048579
0.213626
0.039003
0.0025656
0.003523
0.000643
1.499999
0
0
1.0528
0.302257
0.055184
0.938150
0.000208
3.80E05
0.4460925
0.133958
0.024457
0.9384233
0
0
0.8059999
0
0
for each function. Each of the algorithms was repeated 100 times on each of the test functions with different random seeds to
be sure that reliable average mean values and deviations are obtained.
4.1.2. Results and discussion
The mean average, standard deviation, and standard error of means of the results of this experiment are shown in the
Tables 3–5. For simplicity in notification and calculations, the values smaller than 1012 are assumed to be zero. The bold
Table 21
Significance test for GDHS and GA. t: t-value of student t-test, SED: standard error of difference, p: p-value calculated for t-value, R: rank of p-value, I.R.: inverse
rank of p-value, Sign: significance.
No
Function
t
SED
p
R
I.R.
NEW a
Sign.
42
2
3
4
24
25
44
43
41
14
5
15
13
16
17
10
12
36
11
9
37
35
28
38
29
21
34
33
47
27
32
46
40
1
6
7
8
18
19
20
22
23
26
30
31
39
45
Ackley
Step
Sphere
SumSquares
Rastrigin
Schwefel
Penalized 2
Penalized
Griewank
Schwefel 2.22
Quartic
Schwefel 1.2
Powell
Rosenbrock
Dixon-Price
Trid6
Zakharov
Shekel 10
Trid10
Colville
Perm
Shekel 7
Michalewicz 10
PowerSum
Schaffer
Bohachevsky 2
Shekel 5
Kowalik
Langerman 10
Michalewicz 5
Goldstein-Price
Langerman 5
Hartman 6
Stepint
Beale
Easom
Matyas
Foxholes
Branin
Bohachevsky 1
Bohachevsky 3
Booth
Michalewicz 2
Six Hump Camel Back
Shubert
Hartman 3
Langerman 2
451.107
83.7021
81.921
65.3244
63.5001
57.3298
57.0767
50.5754
50.1456
43.5277
36.3863
35.5539
34.3297
27.8796
25.1074
24.3432
16.1404
14.9813
14.8387
11.1103
8.4843
7.8781
6.3391
6.1219
4.8747
4.7821
4.5529
3.318
2.4787
2.3973
2.1
0.57138
0.0241
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
0.0325
13.9781
13.5496
2.2656
0.8334
17.0258
2.1911
0.2645
0.2121
0.2532
0.005
208.1346
0.28262
7029.1074
48.5647
0
0.0008
0.4477
0.035313
0.0013
0.0353
0.6421
0.0258
0.0017
0.0009
0.0143
0.80619
0.0015
0.0684
0.0179
1.0717
0.0525
0.0127
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0.000009
0.000012
0.000028
0.001570
0.016116
0.019758
0.040089
0.569950
0.980856
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
47
46
45
44
43
42
41
40
39
38
37
36
35
34
33
32
31
30
29
28
27
26
25
24
23
22
21
20
19
18
17
16
15
14
13
12
11
10
9
8
7
6
5
4
3
2
1
0.001064
0.001087
0.001111
0.001136
0.001163
0.001190
0.001220
0.001250
0.001282
0.001316
0.001351
0.001389
0.001429
0.001471
0.001515
0.001563
0.001613
0.001667
0.001724
0.001786
0.001852
0.001923
0.002000
0.002083
0.002174
0.002273
0.002381
0.002500
0.002632
0.002778
0.002941
0.003125
0.003333
0.003571
0.003846
0.004167
0.004545
0.005000
0.005556
0.006250
0.007143
0.008333
0.010000
0.012500
0.016667
0.025000
0.050000
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
214
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
font in tables shows the best answers among all algorithms (i.e. the closest to the actual optimum of the function). The values
for other methods in Tables 3–5 are extracted from GHS + LEM [9].
In order to analyze the results clearly and see whether there is any significant difference between the results of each algorithm, a t-test is performed on each pairs of algorithms. The p-values are then calculated for each function, but they are not
used directly to be compared with a to determine the significance because as Bratton mentioned in [18], the probabilistic
nature of optimization algorithms makes it possible that some results are dependent on chance, even random data generated
from the same distribution will differ significantly sometimes. In order to handle this issue, the Modified Bonferroni Correction is proposed by Bratton [18]. In this method a number of t-tests are conducted, and p-values are calculated. The p-values
are then ranked ascending (t-test values descending), and the new ranking is recorded. These ranks are then inverted, so that
the highest p-value gets an inverse-rank of 1. Then, a (=0.05) is divided by the inverse rank for each observation, and the new
a is obtained. If ‘‘p < New a’’, then there is a significant difference between algorithms on the specified function. These statistical tests are presented in Tables 6–17.
Tables 6–9 present the results of the comparison between each pairs of the algorithms, for the case of 30 dimensions and
50,000 iterations, except for the Six-Hump Camel-Back which is defined in two dimensions. From the Tables 6 and 7, it is
shown that GDHS outperforms both HS and IHS algorithms. Table 8 shows the comparison between GDHS and GHS. As it
shows, there is no significant difference between the two algorithms on Step function, but on all other functions, GDHS gives
the best results. Table 9 shows the comparison between GDHS and GHS + LEM algorithms. On 4 functions (Schwefel’s P. 1.2
Table 22
Significance test for GDHS and PSO. t: t-value of student t-test, SED: standard error of difference, p: p-value calculated for t-value, R: rank of p-value, I.R.: inverse
rank of p-value, Sign: significance.
No
Function
t
SED
p
R
I.R.
NEW a
Sign.
47
36
25
28
27
35
13
24
40
34
46
5
39
26
38
45
15
41
9
37
43
16
33
44
42
1
2
3
4
6
7
8
10
11
12
14
17
18
19
20
21
22
23
29
30
31
32
Langerman 10
Shekel 10
Schwefel
Michalewicz 10
Michalewicz 5
Shekel 7
Powell
Rastrigin
Hartman 6
Shekel 5
Langerman 5
Quartic
Hartman 3
Michalewicz 2
PowerSum
Langerman 2
Schwefel 1.2
Griewank
Colville
Perm
Penalized
Rosenbrock
Kowalik
Penalized 2
Ackley
Stepint
Step
Sphere
SumSquares
Beale
Easom
Matyas
Trid6
Trid10
Zakharov
Schwefel 2.22
Dixon-Price
Foxholes
Branin
Bohachevsky 1
Bohachevsky 2
Bohachevsky 3
Booth
Schaffer
Six Hump Camel Back
Shubert
Goldstein-Price
1249.1033
109.6359
67.6984
61.6014
46.827
32.4372
21.9336
20.5371
17.8118
16.2858
11.1163
11.0762
10.7463
10.4387
8.4813
8.0112
4.759
4.5778
3.674
3.6626
2.7386
2.6919
2.6065
2.581
1.8257
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
0.0006
0.079
83.6113
0.0918
0.0469
0.25937
0
2.1414
0.0808
0.4448
0.039
0
0.0214
0.0219
1.343
0.0501
0
0.0038
0
0.0089
0.0076
6.1261
0
0.003
0.092
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0.000013
0.000025
0.000523
0.000542
0.008182
0.009270
0.011607
0.012403
0.073045
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
47
46
45
44
43
42
41
40
39
38
37
36
35
34
33
32
31
30
29
28
27
26
25
24
23
22
21
20
19
18
17
16
15
14
13
12
11
10
9
8
7
6
5
4
3
2
1
0.001064
0.001087
0.001111
0.001136
0.001163
0.001190
0.001220
0.001250
0.001282
0.001316
0.001351
0.001389
0.001429
0.001471
0.001515
0.001563
0.001613
0.001667
0.001724
0.001786
0.001852
0.001923
0.002000
0.002083
0.002174
0.002273
0.002381
0.002500
0.002632
0.002778
0.002941
0.003125
0.003333
0.003571
0.003846
0.004167
0.004545
0.005000
0.005556
0.006250
0.007143
0.008333
0.010000
0.012500
0.016667
0.025000
0.050000
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
PSO
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
GDHS
PSO
GDHS
PSO
GDHS
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
215
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
with noise, Generalized Schwefel’s P. 2.26, Six-Hump Camel Back and Step) there is no significant difference between the two
algorithms. On 3 functions, GHS + LEM gives better results, while GDHS performs better on 8 functions.
Tables 10–13 present the results of the comparison between each pairs of the algorithms, for the case of 50 dimensions
and 50,000 iterations. From Tables 10–12, it is shown that GDHS outperforms HS, IHS and GHS algorithms. Table 13 shows
the comparison between GDHS and GHS + LEM algorithms. On 3 functions, (Generalized Schwefel’s P. 2.26, Six-Hump Camel
Back and Step), there is no significant different between the two algorithms. On 4 functions, GHS + LEM gives better results,
while GDHS performs better on 8 functions.
Tables 14–17 present the results of the comparison between each pairs of the algorithms, for the case of 30 dimensions
and 5000 iterations. Table 14 shows the comparison between GDHS and HS. On all functions, GDHS gives the best results.
Table 15 shows the comparison between GDHS and IHS. On the function of Six Hump Camel Back, there is no significant difference between the two algorithms. Among all other functions, GDHS gives the best results. Table 16 shows the comparison
between GDHS and GHS. On 3 functions, GHS performs better while GDHS gives the best results on the other 12 functions.
Table 17 shows the comparison between GDHS and GHS + LEM algorithms. On 2 functions, Griewank and Step, there are no
significant difference between the two algorithms. On 6 functions, GHS + LEM gives better results, while GDHS performs better on 7 functions.
Table 23
Significance test for GDHS and DE. t: t-value of student t-test, SED: standard error of difference, p: p-value calculated for t-value, R: rank of p-value, I.R.: inverse
rank of p-value, Sign: significance.
No
Function
t
SED
p
R
I.R.
NEW a
Sign.
46
13
24
25
5
28
40
33
15
47
16
41
44
9
37
34
38
27
1
2
3
4
6
7
8
10
11
12
14
17
18
19
20
21
22
23
26
29
30
31
32
35
36
39
42
43
45
Langerman 5
Powell
Rastrigin
Schwefel
Quartic
Michalewicz 10
Hartman 6
Kowalik
Schwefel 1.2
Langerman 10
Rosenbrock
Griewank
Penalized 2
Colville
Perm
Shekel 5
PowerSum
Michalewicz 5
Stepint
Step
Sphere
SumSquares
Beale
Easom
Matyas
Trid6
Trid10
Zakharov
Schwefel 2.22
Dixon-Price
Foxholes
Branin
Bohachevsky 1
Bohachevsky 2
Bohachevsky 3
Booth
Michalewicz 2
Schaffer
Six Hump Camel Back
Shubert
Goldstein-Price
Shekel 7
Shekel 10
Hartman 3
Ackley
Penalized
Langerman 2
30758768
27.0545
25.284
24.1769
10.2285
5.8863
5.7773
4.77
4.759
4.4723
3.0766
2.739
2.7386
2.7334
2.4613
2.1119
1.9991
1.8257
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
0
0
0.4634
95.2762
0
0.0117
0.0124
0
0
0.0552
4.3476
0.0005
0.0008
0.015
0.0084
0.3893
0
0.0023
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0.000013
0.000013
0.000037
0.003193
0.008173
0.008182
0.008297
0.016840
0.039013
0.050291
0.073045
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
47
46
45
44
43
42
41
40
39
38
37
36
35
34
33
32
31
30
29
28
27
26
25
24
23
22
21
20
19
18
17
16
15
14
13
12
11
10
9
8
7
6
5
4
3
2
1
0.001064
0.001087
0.001111
0.001136
0.001163
0.001190
0.001220
0.001250
0.001282
0.001316
0.001351
0.001389
0.001429
0.001471
0.001515
0.001563
0.001613
0.001667
0.001724
0.001786
0.001852
0.001923
0.002000
0.002083
0.002174
0.002273
0.002381
0.002500
0.002632
0.002778
0.002941
0.003125
0.003333
0.003571
0.003846
0.004167
0.004545
0.005000
0.005556
0.006250
0.007143
0.008333
0.010000
0.012500
0.016667
0.025000
0.050000
DE
DE
GDHS
GDHS
GDHS
GDHS
GDHS
DE
DE
DE
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
216
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
Comparing the Tables 3 and 4, and the statistical results of them, Tables 6–13, it can be concluded that with increasing the
number of dimensions, while the number of iterations is kept constant, GDHS certainly outperforms the original HS, IHS and
GHS algorithms. Between algorithms GDHS and GHS + LEM, increasing the number of dimensions, causes the GDHS to fail on
the function of ‘‘Schwefel’s P. 1.2 with noise’’, which is a unimodal/non-separable function.
Comparing the Tables 3 and 5, and the statistical results obtained from them, Tables 6–9 and 14–17, it can be concluded
that with decreasing the number of iterations, while the number of dimensions is kept constant, GDHS certainly outperforms
the original HS and IHS algorithms. Between GDHS and GHS, with decreasing iterations, GHS gives better result on 3 functions that are multimodal (separable and non-separable), while GDHS performs better on 12 functions. Between GDHS and
GHS + LEM, with decreasing iterations, on the function of Six Hump Camel Back, GDHS performs better with lower iterations.
On the other hand, decreasing the number of iterations causes the GDHS to fail on 4 functions (Generalized Schwefel’s P.
2.26, Rastrigin, Schwefel’s P. 1.2 with noise and Sum of different power), which are multimodal/unimodal kind of functions.
Also GDHS outperforms GHS + LEM on 7 functions. Table 18 shows the runtime of each test function, for different cases, with
the GDHS algorithm. The graphs of success rate of GDHS versus other algorithms in this experiment are shown in Figs. 5–7.
Generally speaking, comparing the results will lead us to the point that although the GDHS algorithm does not give the
best results on some ‘‘non-separable’’ functions, it performs the best on some other functions of this kind. This contradiction
Table 24
Significance test for GDHS and ABC. t: t-value of student t-test, SED: standard error of difference, p: p-value calculated for t-value, R: rank of p-value, I.R.: inverse
rank of p-value, Sign: significance.
No
Function
t
SED
p
R
I.R.
NEW a
Sign.
17
5
33
13
47
37
9
16
12
46
38
15
40
34
1
2
3
4
6
7
8
10
11
14
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
35
36
39
41
42
43
44
45
Dixon-Price
Quartic
Kowalik
Powell
Langerman 10
Perm
Colville
Rosenbrock
Zakharov
Langerman 5
PowerSum
Schwefel 1.2
Hartman 6
Shekel 5
Stepint
Step
Sphere
SumSquares
Beale
Easom
Matyas
Trid6
Trid10
Schwefel 2.22
Foxholes
Branin
Bohachevsky 1
Bohachevsky 2
Bohachevsky 3
Booth
Rastrigin
Schwefel
Michalewicz 2
Michalewicz 5
Michalewicz 10
Schaffer
Six Hump Camel Back
Shubert
Goldstein-Price
Shekel 7
Shekel 10
Hartman 3
Griewank
Ackley
Penalized
Penalized 2
Langerman 2
2.04E+09
33.1475
20.354
15.8967
14.7158
8.93
7.6829
7.4109
7.4107
7.1968
6.3961
4.759
2.6505
2.1119
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
Inf
0
0.0009
0
0.0001
0.0245
0.0042
0.0121
4.2492
0
0
0.0004
0
0.0088
0.3893
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0.000013
0.010343
0.039013
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
47
46
45
44
43
42
41
40
39
38
37
36
35
34
33
32
31
30
29
28
27
26
25
24
23
22
21
20
19
18
17
16
15
14
13
12
11
10
9
8
7
6
5
4
3
2
1
0.001064
0.001087
0.001111
0.001136
0.001163
0.001190
0.001220
0.001250
0.001282
0.001316
0.001351
0.001389
0.001429
0.001471
0.001515
0.001563
0.001613
0.001667
0.001724
0.001786
0.001852
0.001923
0.002000
0.002083
0.002174
0.002273
0.002381
0.002500
0.002632
0.002778
0.002941
0.003125
0.003333
0.003571
0.003846
0.004167
0.004545
0.005000
0.005556
0.006250
0.007143
0.008333
0.010000
0.012500
0.016667
0.025000
0.050000
ABC
GDHS
ABC
GDHS
GDHS
GDHS
GDHS
ABC
GDHS
GDHS
GDHS
ABC
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
–
217
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
in performance of GDHS on non-separable functions shows that it can be better if some parameters are changed in order to
prepare more investigation and escape from the local optima.
4.2. Experiment B
4.2.1. Benchmark functions and parameter settings
In this experiment 47 benchmark problems are selected based on [14] to test the performance of the proposed algorithm
against other types of algorithms from different families. This set of test functions includes many problems such as unimodal, multimodal, separable, non-separable and multidimensional. The functions used in this experiment are: GA, PSO,
Table 25
Runtime of each test function for experiment B, with GDHS algorithm.
No.
Range
D
C
Function
Time (second/run)
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
[5.12, 5.12]
[100, 100]
[100, 100]
[10, 10]
[1.28, 1.28]
[4.5, 4.5]
[100, 100]
[10, 10]
[10, 10]
[D2, D2]
[D2, D2]
[5, 10]
[4, 5]
[10, 10]
[100, 100]
[30, 30]
[10, 10]
[65.536, 65.536]
[5, 10] [0, 15]
[100, 100]
[100, 100]
[100, 100]
[10, 10]
[5.12, 5.12]
[500, 500]
[0, p]
[0, p]
[0, p]
[100, 100]
[5, 5]
[10, 10]
[2, 2]
[5, 5]
[0, 10]
[0, 10]
[0, 10]
[D, D]
[0, D]
[0, 1]
[0, 1]
[600, 600]
[32, 32]
[50, 50]
[50, 50]
[0, 10]
[0, 10]
[0, 10]
5
30
30
30
30
2
2
2
4
6
10
10
24
30
30
30
30
2
2
2
2
2
2
30
30
2
5
10
2
2
2
2
4
4
4
4
4
4
3
6
30
30
30
30
2
5
10
US
US
US
US
US
UN
UN
UN
UN
UN
UN
UN
UN
UN
UN
UN
UN
MS
MS
MS
MN
MN
MS
MS
MS
MS
MS
MS
MN
MN
MN
MN
MN
MN
MN
MN
MN
MN
MN
MN
MN
MN
MN
MN
MN
MN
MN
Stepint
Step
Sphere
SumSquares
Quartic
Beale
Easom
Matyas
Colville
Trid6
Trid10
Zakharov
Powell
Schwefel 2.22
Schwefel 1.2
Rosenbrock
Dixon-Price
Foxholes
Branin
Bohachevsky 1
Bohachevsky 2
Bohachevsky 3
Booth
Rastrigin
Schwefel
Michalewicz 2
Michalewicz 5
Michalewicz 10
Schaffer
Six Hump Camel Back
Shubert
Goldstein-Price
Kowalik
Shekel 5
Shekel 7
Shekel 10
Perm
PowerSum
Hartman 3
Hartman 6
Griewank
Ackley
Penalized
Penalized 2
Langerman 2
Langerman 5
Langerman 10
24.773
36.484
37.554
35.601
38.775
32.973
32.560
32.206
24.209
25.425
27.009
27.421
34.336
35.661
41.004
35.501
35.468
32.040
32.521
32.550
32.541
32.459
32.701
36.383
38.769
34.253
26.579
28.763
34.225
32.970
33.328
32.469
24.934
25.515
26.581
27.543
29.023
28.480
24.974
26.264
38.621
37.999
37.856
36.765
35.953
27.183
28.679
System: windows 7 ultimate.
CPU: Intel Ò Core™ 2 Duo, 2.66–2.67 GHz.
RAM: 4G.
Language: Matlab 7.12.0.635.
Algorithm: Global Dynamic Harmony Search (GDHS).
Harmony memory size (HMS) = 50.
Number of improvisations = 500,000.
D: dimension, C: characteristic, U: unimodal, M: multimodal, S: separable, N: non-separable.
218
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
DE and ABC, which all of them are used in the original form (i.e. no modification), as described in [14]. Test functions used for
this experiment with their characteristics are listed in Table 19. These functions can be found in [14–16,19,20].
Like previous experiment, the initial harmony memory is generated randomly within the ranges specified for each function. Each of the algorithms was repeated 30 times on each of the test functions with different random seeds to be sure that
reliable average mean values and deviations are obtained. In all of the functions used for this experiment, the maximum
number of iterations (total function evaluation number) is set at 500,000 and the initial harmony memory size (population
size) is set at 50.
4.2.2. Results and discussion
The mean average, standard deviation, and standard error of means of the results of this experiment are shown in the
Table 20. The values for the algorithms GA, PSO, DE and ABC are extracted from [14]. Again, for simplicity in notification
and calculations, the values smaller than 1012 are supposed to be zero. The values with bold font in the table show the best
answer among all algorithms (i.e. the closest to the actual optimum of the function).
To compare the results in a proper manner, the same procedure of the previous experiment, t-test table, is used. The statistical test results of this experiment are presented in Tables 21–24.
Table 21 shows the statistical results and comparison between GDHS and GA. As it is seen from the results, on 19 functions there are no significant differences between the two algorithms, while on the other 28 functions, GDHS outperforms
GA. Table 22 presents the comparison between GDHS and PSO. On 27 functions there are no significant differences between
the two algorithms. On 3 functions, Powell, Schwefel 1.2, Colville; PSO gives better results while GDHS outperforms PSO on
17 functions. Table 23 shows the comparison between GDHS and DE. As the table shows, in this case there are no significant
differences between the two algorithms on 37 functions. On 5 functions DE performs better and GDHS performs better on the
other 5 functions. Table 24 shows the comparison between GDHS and ABC algorithm. On 35 functions, there are no significant differences between the two algorithms. On four functions (Dixon-Price, Kowalik, Rosenbrock and Schwefel 1.2) ABC
gives better results but GDHS performs better on 8 functions.
From Tables 21–24, one can see that, for all algorithms, there is no significance on Stepint, Beale, Easom, Matyas, Foxholes,
Branin, Bohachevsky1, Bohachevsky3, Booth, Six Hump Camel Back, Shubert, Goldstein-Price functions; although GA could
not find the global optimum for Goldstein-Price function.
Comparing the statistical results, Tables 21–24, it can be concluded that GDHS does not give the best results on some
‘‘Non-Separable’’ functions. The functions that GDHS is not the best algorithm on them are listed below with a brief
description:
Schwefel 1.2: Unimodal, non-separable, continuous, differentiable, scalable
Powell: Unimodal, non-separable, continuous, differentiable, scalable
Kowalik: Multimodal, non-separable; the global minimum is located very close to the local minima
Rosenbrock: Unimodal, non-separable, continuous, differentiable, scalable; the global optimum lays inside a long, narrow, parabolic shaped flat valley
Dixon-Price: Unimodal, non-separable, continuous, differentiable, scalable
Langerman: Multimodal, non-separable, continuous, differentiable, scalable; the local minima are unevenly distributed
Colville: Multimodal, non-separable, continuous, differentiable, non-scalable
Although GDHS does not give the best results on the above functions, it can easily find the global optimum of the other
non-separable functions: Beale, Easom, Matyas, Trid 6, Trid 10, Zakharov, Schwefel 2.22, Bohachevsky 2, Schaffer, Shekel 7,
Shekel 10, Perm, Hartman 3, Griewank, Ackley, Penalized, Penalized 2, Langerman 2 (18 functions). Table 25 shows the run-
100%
90%
80%
70%
60%
Others
50%
GDHS
40%
30%
20%
10%
0%
GDHS-GA
GDHS-PSO
GDHS-DE
GDHS-ABC
Fig. 8. Success rate of GDHS versus other algorithms in experiment B.
M. Khalili et al. / Applied Mathematics and Computation 228 (2014) 195–219
219
time of each test function, for different cases, with the GDHS algorithm. The graph of success rate of GDHS versus other
algorithms in this experiment is shown in Fig. 8.
7. Conclusions
This paper presented a new modification of Harmony Search algorithm called GDHS. In the proposed algorithm all the key
parameters was changed to dynamic mode to make them case independent. This modification allows the algorithm to perform efficiently in both unimodal and multimodal functions.
In this work, two experiments were executed. In the first one, the proposed algorithm was compared with 4 other algorithms from the same family, including: original HS, IHS, GHS, GHS + LEM. The statistical results showed that the GDHS algorithm outperforms all its ancestors. In this experiment, it was shown that with increasing the number of dimensions, while
the number of iterations is kept constant, GDHS outperforms the original HS, IHS, GHS and GHS + LEM. Also, from the statistical results, it can be concluded that with decreasing the number of iterations, while the number of dimensions is kept
constant, GDHS performs better than or similar to that of these algorithms, considering the point that the proposed method
uses less setting parameters than the others.
In the second experiment, the GDHS algorithm was compared with 4 algorithms from different families, including: GA,
PSO, DE and ABC, on a large set of unconstrained test functions. The results showed that the proposed algorithm performs
better than or similar to these algorithms, knowing the issue that in the proposed algorithm most of the parameters are
changing dynamically and there is only one parameter (harmony memory size) to be predefined. Changing the parameters
dynamically helps in various problems that the user is not familiar with the nature of the problem and the dynamic mode
helps to have the best investigation and convergence attributes.
References
[1] Z.W. Geem, J.H. Kim, G.V. Loganathan, A new heuristic optimization algorithm: harmony search, Simulation 76 (2) (2001) 60–68.
[2] K.S. Lee, Z.W. Geem, A new structural optimization method based on the harmony search algorithm, Comput. Struct. 82 (9) (2004) 781–798.
[3] J.H. Kim, Z.W. Geem, E.S. Kim, Parameter estimation of the nonlinear Muskingum model using harmony search, J. Am. Water Resour. Assoc. 37 (5)
(2001) 1131–1138.
[4] Z.W. Geem, J.-H. Kim, S.-H. Jeong, Cost efficient and practical design of water supply network using harmony search, Afr. J. Agric. Res. 6 (13) (2011)
3110–3116.
[5] Z.Z.W. Geem, J.C.J. Williams, Harmony search and ecological optimization, Int. J. Energy Environ. 1 (2) (2007) 150–154.
[6] Z.W. Geem, W.E. Roper, Various continuous harmony search algorithms for web-based hydrologic parameter optimisation, Int. J. Math. Model. Numer.
Optim. 1 (3) (2010) 213–226.
[7] M. Mahdavi, M. Fesanghary, E. Damangir, An improved harmony search algorithm for solving optimization problems, Appl. Math. Comput. 188 (2)
(2007) 1567–1579.
[8] M.G.H. Omran, M. Mahdavi, Global-best harmony search, Appl. Math. Comput. 198 (2) (2008) 643–656.
[9] C. Cobos, D. Estupiñán, J. Pérez, GHS + LEM: global-best harmony search using learnable evolution models, Appl. Math. Comput. 218 (6) (2011) 2558–
2578.
[10] K.S. Lee, Z.W. Geem, A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice, Comput. Meth.
Appl. Mech. Eng. 194 (36–38) (2005) 3902–3933.
[11] L. Zhang, Y. Xu, Y. Liu, An elite decision making harmony search algorithm for optimization problem, J. Appl. Math. 2012 (2012) 1–15.
[12] R.S. Michalski, Learnable evolution model: evolutionary processes guided by machine learning, Mach. Learn. 38 (1) (2000) 9–40.
[13] Z.W. Geem, K.-B. Sim, Parameter-setting-free harmony search algorithm, Appl. Math. Comput. 217 (8) (2010) 3881–3889.
[14] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Appl. Math. Comput. 214 (1) (2009) 108–132.
[15] M. Molga, C. Smutnicki, Test functions for optimization needs. Available at: <www.zsd.ict.pwr.wroc.pl/files/docs/functions.pdf>, 2005.
[16] P.N. Suganthan, N. Hansen, J.J. Liang, K. Deb, Y.P. Chen, A. Auger, S. Tiwari, Problem definitions and evaluation criteria for the CEC 2005 special session
on real-parameter optimization, 2005.
[17] X. Yao, Y. Liu, G. Lin, Evolutionary programming made faster, IEEE Trans. Evol. Comput. 3 (2) (1999) 82–102.
[18] D. Bratton, J. Kennedy, Defining a standard for particle swarm optimization, in: Swarm Intelligence Symposium, 2007. SIS 2007, IEEE, 2007, pp. 120–
127.
[19] M. Jamil, X.S. Yang, A literature survey of benchmark functions for global optimisation problems, Int. J. Math. Model. Numer. Optim. 4 (2) (2013) 150–
194.
[20] J.J. Liang, B.Y. Qu, P.N. Suganthan, A.G. Hernández-Díaz, Problem definitions and evaluation criteria for the CEC 2013 special session on real-parameter
optimization, 2013.