
Arab J Sci Eng (2014) 39:4683–4697

DOI 10.1007/s13369-014-1156-x

RESEARCH ARTICLE - COMPUTER ENGINEERING AND COMPUTER SCIENCE

Autonomous Particles Groups for Particle Swarm Optimization


Seyedali Mirjalili · Andrew Lewis ·
Ali Safa Sadiq

Received: 6 December 2012 / Accepted: 15 June 2013 / Published online: 4 June 2014
© King Fahd University of Petroleum and Minerals 2014

Abstract  In this paper, a modified particle swarm optimization (PSO) algorithm called autonomous groups particles swarm optimization (AGPSO) is proposed to further alleviate the two problems of trapping in local minima and slow convergence rate when solving high-dimensional problems. The main idea of the AGPSO algorithm is inspired by individuals' diversity in bird flocking or insect swarming. In natural colonies, individuals are not all alike in terms of intelligence and ability, but they all do their duties as members of the colony, and each individual's ability can be useful in a particular situation. In this paper, a mathematical model of diverse particle groups called autonomous groups is proposed: different functions with diverse slopes, curvatures, and interception points are employed to tune the social and cognitive parameters of the PSO algorithm, giving particles different behaviors as in natural colonies. The results show that PSO with autonomous groups of particles outperforms the conventional and some recent modifications of PSO in terms of escaping local minima and convergence speed. The results also indicate that dividing particles into groups and allowing them to have different individual and social thinking can improve the performance of PSO significantly.

Keywords  PSO · Social behavior · Social coefficient · Cognitive coefficient · Function optimization · Autonomous particles groups

S. Mirjalili (B) · A. Lewis
School of Information and Communication Technology, Griffith University, Nathan, Brisbane, QLD 4111, Australia
e-mail: seyedali.mirjalili@griffithuni.edu.au

A. Lewis
e-mail: a.lewis@griffith.edu.au

A. S. Sadiq
Faculty of Computing, Universiti Teknologi Malaysia, 81310 UTM Skudai, Johor, Malaysia
e-mail: alisafa09@gmail.com

1 Introduction

Particle swarm optimization (PSO) is one of the most widely used evolutionary algorithms, inspired by the social behavior of animals [1,2]. Its simplicity and inexpensive computational cost make the algorithm very popular. Owing to these advantages, PSO has been applied to many domains such as medical detection [3], grid scheduling [4], robot path planning [5], video abstraction [6], airfoil design [7], and neural networks [8–10]. In spite of these advantages, trapping in local minima and slow convergence rate are two unavoidable problems [11], which are common to all types of evolutionary algorithms [12–22]. Both problems deteriorate with increased problem dimensionality.


There are many methods in the literature to combat these problems. Some of them focus on hybridizing PSO with other algorithms, such as PSO-Genetic Algorithm (GA) [23], PSO-Gravitational Search Algorithm (GSA) [9,24], and PSO-Ant Colony Optimization (ACO) [25]. Other studies manipulate the interaction neighborhood topology of PSO [26–28]. Regardless of their promising results, increased computational cost is the main drawback of these methods.

Dynamic parameter tuning is a method that increases the performance of PSO without incurring high computational cost [29–34]. The main parameters of PSO are the weighting factor (w), the cognitive coefficient (c1), and the social coefficient (c2). What these approaches have in common is that the parameters are tuned with the same strategy for all particles. Therefore, all the particles follow the same pattern in their social and individual behaviors; in other words, the particles are obliged to search without any self-determination or intelligence. In this paper, we propose a new approach that uses autonomous groups to give particles a degree of independence, with the purpose of increasing performance.

The rest of the paper is organized as follows: Section 2 describes related work. Section 3 discusses the basic principles of the PSO algorithm. The proposed method is explained in Section 4, and the experimental results are presented in Section 5. Finally, Section 6 concludes the work and suggests some directions for future research.

2 Related Works

To improve the PSO algorithm's performance, several modified algorithms have recently been proposed. In 2009, Cai [29] proposed a new modified PSO based on the black stork foraging process. He defined two types of particles inspired by the foraging behavior of adult and infant black storks. These two types of particles have different cognitive coefficients (c1) that are a function of the best fitness value in the current iteration, the worst fitness value in the entire swarm, and the current fitness value. The results show that the modified PSO has better performance than the conventional PSO when dealing with high-dimensional, multimodal optimization problems.

Cai et al. also proposed a new setting for the social factor (c2) to improve the convergence speed [30,31]. Again, the social coefficient is a function of the best fitness value in the current iteration, the worst fitness value in the entire swarm, and the current fitness value. This method can be considered a PSO algorithm with N different particles in terms of how they follow the social consensus. The algorithm suffers from trapping in local minima more than the conventional PSO; for this reason, the authors equipped it with a mutation strategy.

Some studies have used time-varying strategies for both the cognitive and social coefficients. In 2009, Ziyu and Dingxue [32] introduced an exponentially time-varying acceleration function for adjusting both coefficients, in order to control the global search ability and the convergence to the global best solution. Also in 2009, Bao and Mao suggested an asymmetric time-varying acceleration coefficient adjustment strategy [33], using linear time-varying functions to adjust the social and cognitive coefficients and thereby balance local and global search. In 2008, Cui et al. [34] employed three non-linear time-varying cognitive adjustment strategies as well as a time-varying social coefficient adjustment strategy in which the social factor was a function of the cognitive factor. The authors tried to find effective non-linear time-varying strategies for c1 and c2 to solve complex function optimization problems. The results showed that PSO with the proposed time-varying adjustment strategy was superior to the conventional PSO.

Due to the complex nature of optimization problems, constant and linearly time-varying values for the cognitive and social factors may not work well in many cases. A non-linear time-varying coefficient could yield better performance in some cases; however, a single non-linear time-varying strategy for all particles may not lead to a general optimizer with good performance. In this paper, we propose autonomous groups of particles for PSO which have different social and individual behaviors, to improve local minima avoidance and convergence speed.

3 Overview of the PSO Algorithm

PSO is an evolutionary computation technique proposed by Kennedy and Eberhart [1,2]. It was inspired by the social behavior of bird flocking, and it uses a number of individuals (particles) flying around the search space to find the best solution. The particles trace the best locations (best solutions) found along their paths over the course of iterations. In other words, particles are influenced by the best locations they have found themselves as well as by the best solution obtained by the whole swarm. These concepts have been mathematically modeled [1] using a position vector $x$ and a velocity vector $v$ of length $D$, where $D$ indicates the dimension (number of variables) of the problem. Over the course of iterations, a particle adjusts its position and velocity as follows:

$$v_i^{t+1} = w\,v_i^{t} + c_1 \times \mathrm{rand} \times (pbest_i - x_i^{t}) + c_2 \times \mathrm{rand} \times (gbest - x_i^{t}) \qquad (1)$$

$$x_i^{t+1} = x_i^{t} + v_i^{t+1} \qquad (2)$$

where w is the inertia weight, which is responsible for controlling the stability of the PSO algorithm and usually lies in [0.4, 0.9]; c1 is the cognitive coefficient, conventionally selected in (0, 2], which controls the influence of the individual memory of good solutions found; c2 is the social factor, also conventionally chosen from (0, 2], which controls the extent to which a particle's motion is influenced by the best solution found by the whole swarm; rand is a random number between 0 and 1 which gives PSO a more randomized search ability; and pbest and gbest store the best solutions obtained so far by each particle and by the whole swarm, respectively. As can be observed, there are three main coefficients: w, c1, and c2. Dynamic tuning of these parameters is a way to give particles different behaviors as the algorithm proceeds. In this work, c1 and c2 are targeted to increase the performance of PSO.


Fig. 1  a Some samples of all possible functions for updating c1 and c2, b specific functions for updating c1, and c specific functions for updating c2, where L is the upper bound of c1 and c2 and each horizontal axis runs from 0 to the maximum iteration (plots omitted; the curve families include linear, log, inverse log, nth root, and nth power shapes)
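To make the function families of Fig. 1 concrete, the short sketch below evaluates a few candidate c1/c2 schedules over the iteration counter; the constants T and L and the exponent n = 3 are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

T = 2000                 # maximum number of iterations (assumed)
L = 2.5                  # upper bound of c1 and c2 (assumed)
t = np.arange(1, T + 1)  # iteration counter

# Descending candidates for c1 (cf. Fig. 1b); all stay inside [0, L]
c1_linear   = L * (1 - t / T)               # negative linear
c1_nth_pow  = L * (1 - (t / T) ** 3)        # L minus nth power, n = 3
c1_nth_root = L * (1 - (t / T) ** (1 / 3))  # L minus nth root, n = 3

# Ascending candidates for c2 (cf. Fig. 1c)
c2_identity = L * (t / T)                   # identity (linear)
c2_nth_pow  = L * (t / T) ** 3              # nth power
c2_log      = L * np.log(t) / np.log(T)     # logarithmic
```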

Table 1  Updating strategies of the autonomous groups (t is the current iteration and T the maximum number of iterations)

AGPSO1
  Group 1:  c1 = (−2.05/T)t + 2.55        c2 = (1/T)t + 1.25
  Group 2:  c1 = (−2.05/T)t + 2.55        c2 = (2t³/T³) + 0.5
  Group 3:  c1 = (−2t³/T³) + 2.5          c2 = (1/T)t + 1.25
  Group 4:  c1 = (−2t³/T³) + 2.5          c2 = (2t³/T³) + 0.5

AGPSO2
  Group 1:  c1 = 2.5 − (2 log t)/(log T)      c2 = (2 log t)/(log T) + 0.5
  Group 2:  c1 = (−2t³/T³) + 2.5              c2 = (2t³/T³) + 0.5
  Group 3:  c1 = 0.5 + 2 exp[−(4t/T)²]        c2 = 2.2 − 2 exp[−(4t/T)²]
  Group 4:  c1 = 2.5 + 2(t/T)² − 2(2t/T)      c2 = 0.5 − 2(t/T)² + 2(2t/T)

AGPSO3
  Group 1:  c1 = 1.95 − 2t^(1/3)/T^(1/3)      c2 = 2t^(1/3)/T^(1/3) + 0.05
  Group 2:  c1 = (−2t³/T³) + 2.5              c2 = (2t³/T³) + 0.5
  Group 3:  c1 = 1.95 − 2t^(1/3)/T^(1/3)      c2 = (2t³/T³) + 0.5
  Group 4:  c1 = (−2t³/T³) + 2.5              c2 = 2t^(1/3)/T^(1/3) + 0.05

4 Proposed Method

4.1 Motivation of Proposed Method

Finding the global minimum is a common, challenging task among all minimization methods [35]. In population-based optimization methods, generally, the desirable way to converge towards the global minimum can be divided into two basic phases. In the early stages of the optimization, the individuals should be encouraged to scatter throughout the entire search space; in other words, they should try to explore the whole search space instead of clustering around local minima. In the later stages, the individuals have to exploit the information gathered in order to converge on the global minimum. In PSO, by fine-tuning the parameters c1 and c2, we can balance these two phases and find the global minimum with fast convergence.

Considering these points, we propose the autonomous groups concept as a modification of the conventional PSO. In this method, each group of particles autonomously searches the problem space with its own strategy, based on its tuning of c1 and c2. The groups' strategies can involve constant, linear time-varying, exponential, or logarithmic time-varying values for c1 and c2, as shown in Fig. 1.

Fig. 2  Mathematical models of autonomous groups for AGPSO1 (c1 and c2 versus iteration for Groups 1 to 4; plots omitted)

Fig. 3  Mathematical models of autonomous groups for AGPSO2 (c1 and c2 versus iteration for Groups 1 to 4; plots omitted)

Fig. 4  Mathematical models of autonomous groups for AGPSO3 (c1 and c2 versus iteration for Groups 1 to 4; plots omitted)

Create and initialize a D-dimensional PSO
Divide particles randomly into autonomous groups
Repeat
    Calculate particles' fitness, Gbest, and Pbest
    For each particle:
        Extract the particle's group
        Use its group strategy to update c1 and c2
        Use c1 and c2 to update velocities, Eq. (1)
        Use new velocities to define new positions, Eq. (2)
    End for
Until the stopping condition is satisfied

Fig. 5  Pseudo-code of the proposed modification of the PSO algorithm (AGPSO)
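A minimal Python sketch of the procedure in Fig. 5, assuming the AGPSO3 strategies from Table 1, is given below. The function names, the uniform random assignment of particles to groups, and the position clipping are our assumptions; fitness bookkeeping is simplified, and per-dimension random draws are used rather than the scalar rand of Eq. (1).

```python
import numpy as np

# AGPSO3 strategies from Table 1: each entry maps (t, T) to (c1, c2).
AGPSO3_GROUPS = [
    lambda t, T: (1.95 - 2 * (t / T) ** (1 / 3), 2 * (t / T) ** (1 / 3) + 0.05),  # Group 1
    lambda t, T: (-2 * (t / T) ** 3 + 2.5,       2 * (t / T) ** 3 + 0.5),         # Group 2
    lambda t, T: (1.95 - 2 * (t / T) ** (1 / 3), 2 * (t / T) ** 3 + 0.5),         # Group 3
    lambda t, T: (-2 * (t / T) ** 3 + 2.5,       2 * (t / T) ** (1 / 3) + 0.05),  # Group 4
]

def agpso3(f, dim, bounds, n_particles=100, T=2000, seed=0):
    """Minimize f over [bounds[0], bounds[1]]^dim with AGPSO3-style groups."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # random initial positions
    v = np.zeros((n_particles, dim))
    group = rng.integers(0, 4, n_particles)       # random division into 4 groups
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest_val.argmin()
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    for t in range(1, T + 1):
        w = 0.9 - 0.5 * t / T                     # inertia decreased from 0.9 to 0.4
        for i in range(n_particles):
            c1, c2 = AGPSO3_GROUPS[group[i]](t, T)
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] = np.clip(x[i] + v[i], lo, hi)   # simple boundary handling
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i].copy(), val
                if val < gbest_val:
                    gbest, gbest_val = x[i].copy(), val
    return gbest, gbest_val
```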


4.2 Autonomous Groups and the AGPSO Algorithm

The concept of autonomous groups is inspired by the individuals' diversity in animal flocking or insect swarming. In any gathering, individuals are not all alike in terms of intelligence and ability, but they all do their duties as members of the group, and each individual's ability can be useful in a particular situation. In a termite colony, for instance, there are four types of termites: soldiers, workers, babysitters, and the queen. They all have diverse abilities, but these differences are necessary for the survival of the colony. Soldiers have greater bulk, with giant jaws to fight enemies. Workers are smaller than soldiers, so they can move around very fast to find and provide food for the colony; they also have the ability to excavate and build the nest. The queen and the babysitters reproduce and raise the young. These four types of termite can be considered as four autonomous groups with the common goal of promoting the colony's survival.

Table 2  Unimodal benchmark functions

$F_1(x) = \sum_{i=1}^{n} x_i^2$;  Dim 300;  Range [−100, 100];  $f_{\min} = 0$
$F_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$;  Dim 300;  Range [−10, 10];  $f_{\min} = 0$
$F_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$;  Dim 300;  Range [−100, 100];  $f_{\min} = 0$
$F_4(x) = \max_i \{ |x_i|,\ 1 \le i \le n \}$;  Dim 300;  Range [−100, 100];  $f_{\min} = 0$
$F_5(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$;  Dim 300;  Range [−30, 30];  $f_{\min} = 0$
$F_6(x) = \sum_{i=1}^{n} ([x_i + 0.5])^2$;  Dim 300;  Range [−100, 100];  $f_{\min} = 0$
$F_7(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$;  Dim 300;  Range [−1.28, 1.28];  $f_{\min} = 0$

Table 3  Multimodal benchmark functions

$F_8(x) = \sum_{i=1}^{n} -x_i \sin(\sqrt{|x_i|})$;  Dim 300;  Range [−500, 500];  $f_{\min} = -418.9829 \times 300$
$F_9(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$;  Dim 300;  Range [−5.12, 5.12];  $f_{\min} = 0$
$F_{10}(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i) \right) + 20 + e$;  Dim 300;  Range [−32, 32];  $f_{\min} = 0$
$F_{11}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$;  Dim 300;  Range [−600, 600];  $f_{\min} = 0$
$F_{12}(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m & x_i > a \\ 0 & -a < x_i < a \\ k(-x_i - a)^m & x_i < -a \end{cases}$;  Dim 300;  Range [−50, 50];  $f_{\min} = 0$
$F_{13}(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_i + 1) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2\pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$;  Dim 300;  Range [−50, 50];  $f_{\min} = 0$

Table 4  Fixed-dimension multimodal benchmark functions

$F_{14}(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}$;  Dim 2;  Range [−65, 65];  $f_{\min} = 1$
$F_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$;  Dim 4;  Range [−5, 5];  $f_{\min} = 0.00030$
$F_{16}(x) = 4x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$;  Dim 2;  Range [−5, 5];  $f_{\min} = -1.0316$
$F_{17}(x) = \left( x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8\pi} \right) \cos x_1 + 10$;  Dim 2;  Range [−5, 5];  $f_{\min} = 0.398$
$F_{18}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2) \right] \times \left[ 30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2) \right]$;  Dim 2;  Range [−2, 2];  $f_{\min} = 3$
$F_{19}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right)$;  Dim 3;  Range [1, 3];  $f_{\min} = -3.86$
$F_{20}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right)$;  Dim 6;  Range [0, 1];  $f_{\min} = -3.32$
$F_{21}(x) = -\sum_{i=1}^{5} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$;  Dim 4;  Range [0, 10];  $f_{\min} = -10.1532$
$F_{22}(x) = -\sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$;  Dim 4;  Range [0, 10];  $f_{\min} = -10.4028$
$F_{23}(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$;  Dim 4;  Range [0, 10];  $f_{\min} = -10.5363$
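As a sketch of how the entries above translate into code, two of the benchmarks can be written directly from their definitions (F1 from Table 2 and F9 from Table 3); the function names f1 and f9 are ours, not from the paper.

```python
import numpy as np

def f1(x):
    """F1 from Table 2: sphere function; global minimum 0 at x = 0."""
    return np.sum(x ** 2)

def f9(x):
    """F9 from Table 3: sum of x_i^2 - 10 cos(2 pi x_i) + 10; minimum 0 at x = 0."""
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)
```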


In conventional PSO, all particles behave the same in terms of local and global search, so the particles can be considered as a single group with one strategy. However, using diverse autonomous groups with a common goal in any population-based optimization algorithm could, in theory, yield a search that is simultaneously more randomized and more directed. In this paper, we mathematically model the autonomous groups, utilizing different strategies for updating c1 and c2; in other words, the groups behave differently in the extent to which they follow individual and social leads.

The updating strategies of the autonomous groups could be implemented with any continuous function whose range lies in the interval [0, L]. Figure 1 represents some of the functions that can be used for updating the cognitive and social factors. These functions include ascending and descending linear and polynomial functions, as well as exponential and logarithmic functions. In Fig. 1, the blue and red curves can be used for updating c1 and c2, respectively. As may be observed, c1 is decreased over the iterations, whereas c2 is increased. Particles tend to have higher local search ability when c1 is greater than c2; in contrast, particles search the space more globally when c2 is greater than c1. Finding a good balance between c1 and c2 while treating them as dynamic coefficients is what this study investigates.

We define four groups based on termite colonies, each with its own pattern for searching the problem space locally and globally, and we develop three different versions of PSO with different autonomous groups, named AGPSO1, AGPSO2, and AGPSO3. The dynamic coefficients of these algorithms are presented in Table 1 and Figs. 2, 3 and 4. In Table 1, T indicates the maximum number of iterations and t is the current iteration.

Table 5  Updating strategies of the algorithms compared

Algorithm      c1                             c2
SPSO [2]       2                              2
TACPSO [32]    0.5 + 2 exp[−(4t/T)²]          2.2 − 2 exp[−(4t/T)²]
MPSO [33]      (−2.05/T)t + 2.55              (1/T)t + 1.25
IPSO [34]      2.5 + 2(t/T)² − 2(2t/T)        0.5 − 2(t/T)² + 2(2t/T)

Table 6 Comparison results among all algorithms on unimodal benchmark functions


Test Function AGPSO1 AGPSO2 AGPSO3 SPSO MPSO TACPSO IPSO

F1
Average best so far 1.35E+04 1.63E+04 4.15E+03 8.69E+05 8.93E+04 3.53E+04 3.93E+04
Median best so far 1.3393E+04 1.5894E+04 4.2138E+03 8.7488E+05 8.6431E+04 3.4401E+04 3.7548E+04
SD best so far 3.3699E+03 3.7457E+03 1.1488E+03 1.9307E+04 1.3441E+04 6.4825E+03 6.3760E+03
F2
Average best so far 5.38E+04 3.47E+02 1.10E+02 7.21E+09 1.58E+06 1.21E+05 8.59E+03
Median best so far 3.9440E+04 3.4272E+02 1.1068E+02 2.7663E+08 3.5944E+05 5.3477E+04 3.5306E+03
SD best so far 4.6654E+04 6.4717E+01 1.2805E+01 3.3309E+10 6.1156E+06 1.4437E+05 1.3154E+04
F3
Average best so far 3.23E+05 2.74E+05 2.56E+05 5.03E+06 4.60E+05 2.40E+05 3.21E+05
Median best so far 2.9637E+05 2.6753E+05 2.4995E+05 5.0515E+06 4.5800E+05 2.3158E+05 3.1431E+05
SD best so far 6.1328E+04 6.4660E+04 4.6873E+04 1.7152E+06 7.4540E+04 5.7444E+04 8.0702E+04
F4
Average best so far 5.93E+01 6.28E+01 6.26E+01 9.78E+01 9.22E+01 5.88E+01 5.93E+01
Median best so far 5.9099E+01 6.2104E+01 6.2783E+01 9.7744E+01 9.6449E+01 5.8866E+01 5.9099E+01
SD best so far 3.4682E+00 3.3038E+00 2.4881E+00 4.5566E-01 8.8241E+00 2.4359E+00 3.4682E+00
F5
Average best so far 2.74E+06 2.80E+06 4.63E+05 3.82E+09 1.19E+08 2.54E+07 2.74E+06
Median best so far 2.5478E+06 2.4039E+06 4.6453E+05 3.8447E+09 1.2626E+08 2.4604E+07 2.5478E+06
SD best so far 9.5801E+05 1.4135E+06 1.4947E+05 1.6960E+08 3.0795E+07 7.6467E+06 9.5801E+05
F6
Average best so far 1.36E+04 1.64E+04 4.26E+03 8.72E+05 9.34E+04 3.54E+04 3.82E+04
Median best so far 1.3293E+04 1.6556E+04 4.0308E+03 8.7876E+05 9.2104E+04 3.5586E+04 3.7781E+04
SD best so far 2.2733E+03 3.4075E+03 1.2613E+03 2.3473E+04 1.8672E+04 5.2585E+03 6.5413E+03
F7
Average best so far 2.3529E+01 3.7165E+01 1.1886E+01 1.8920E+04 4.9963E+02 1.4398E+02 2.1712E+02
Median best so far 2.2634E+01 3.4481E+01 1.0926E+01 1.9000E+04 4.8630E+02 1.3566E+02 2.0407E+02
SD best so far 6.5380E+00 1.4691E+01 3.4267E+00 9.5780E+02 1.3928E+02 5.2471E+01 6.9669E+01


Table 7 Comparison results among all algorithms on multimodal benchmark functions


Test function AGPSO1 AGPSO2 AGPSO3 SPSO MPSO TACPSO IPSO

F8
Average best so far −6.662E+04 −5.90E+04 −6.83E+04 −3.83E+04 −4.81E+04 −6.63E+04 −6.39E+04
Median best so far −6.719E+04 −5.819E+04 −6.896E+04 −3.785E+04 −4.707E+04 −6.674E+04 −6.377E+04
SD best so far 5.0885E+03 4.7799E+03 4.2049E+03 3.7701E+03 4.3025E+03 2.3123E+03 2.9516E+03
F9
Average best so far 1.4010E+03 1.4983E+03 1.3493E+03 3.1436E+03 2.3469E+03 1.3428E+03 1.6243E+03
Median best so far 1.3944E+03 1.4794E+03 1.3457E+03 3.1595E+03 2.3642E+03 1.3181E+03 1.3944E+03
SD best so far 7.8790E+01 1.2466E+02 1.0394E+02 1.6415E+02 9.5798E+01 9.4062E+01 7.8790E+01
F10
Average best so far 1.6990E+01 1.7313E+01 1.7089E+01 1.9966E+01 1.9356E+01 1.5317E+01 1.8093E+01
Median best so far 1.6962E+01 1.7429E+01 1.7103E+01 1.9966E+01 1.9367E+01 1.5267E+01 1.8166E+01
SD best so far 4.2914E−01 5.1447E−01 2.4298E−01 1.9365E−04 9.8243E−02 8.9630E−01 2.9006E−01
F11
Average best so far 1.9974E+02 1.7726E+02 5.5658E+01 2.8277E+03 1.2274E+03 2.5232E+02 2.7665E+02
Median best so far 2.0386E+02 1.6590E+02 4.2906E+01 2.7726E+03 1.2170E+03 2.5856E+02 2.0386E+02
SD best so far 6.7637E+01 5.3999E+01 3.6911E+01 2.9022E+02 1.7422E+02 5.6008E+01 6.7637E+01
F12
Average best so far 2.3642E+03 2.3793E+04 5.6427E+01 1.8896E+09 5.7484E+08 2.2419E+06 6.4022E+06
Median best so far 3.7962E+02 1.6973E+04 4.1806E+01 1.8026E+09 5.4422E+08 1.9938E+06 5.2430E+06
SD best so far 4.6017E+03 2.3723E+04 3.8456E+01 5.1837E+08 2.8703E+08 1.5327E+06 3.9894E+06
F13
Average best so far 3.7934E+05 1.2170E+06 3.4986E+04 3.6421E+09 1.1519E+09 2.2574E+07 5.3217E+07
Median best so far 2.2776E+05 7.8352E+05 2.5203E+04 3.6631E+09 1.0375E+09 2.1180E+07 3.8380E+07
SD best so far 3.9792E+05 1.3654E+06 3.5129E+04 6.7487E+08 5.0529E+08 1.1331E+07 7.9149E+07

We try to use a diverse range of functions to investigate their effects on the performance of PSO. These functions have been chosen with different slopes, curvatures, and interception points in order to investigate how these characteristics improve the performance of PSO. For instance, the particles of AGPSO1 in group 1 tend to follow social behavior earlier than the other groups, followed by group 2. In contrast, the particles in group 4 prefer to search individually for the majority of the iterations, since the intersection point of c1 and c2 is close to the last iterations.

It may be observed that AGPSO1 uses two linear functions for group 1, linear and cubic functions for groups 2 and 3, and two cubic functions for group 4. AGPSO2 employs two logarithmic, two cubic, two exponential, and two quadratic functions for groups 1 to 4, respectively. It is worth mentioning that these functions have different patterns of change over the course of the iterations. For instance, the particles in group 1 of AGPSO2 tend to switch between global and local search ability much earlier than those of group 2. AGPSO3 utilizes two principal third-root functions for group 1 and two cubic functions for group 2, as well as one principal third-root and one cubic function each for groups 3 and 4.

In PSO with autonomous groups (AGPSO), all particles are first placed randomly in the problem search space. After that, the particles are randomly divided into predefined autonomous groups. At each iteration, gBest, pBest, and the fitness of the particles are computed. For each particle, the coefficients c1 and c2 are updated using its group's strategy, and the updated c1 and c2 are then used to update the particle's velocity and position via Eqs. (1) and (2). Figure 5 shows the pseudo-code of AGPSO.

To see why autonomous groups are effective in AGPSO, some points may be noted:

• Autonomous groups have different strategies for updating c1, so particles can explore the search space locally with different capabilities than in the conventional PSO.
• Autonomous groups have different strategies for updating c2, so particles can follow social behavior more autonomously than in the conventional PSO.
• Dynamic and diverse patterns of c1 and c2 balance local and global search over the course of the iterations.
• Autonomous groups contain non-linear patterns, such as exponential and logarithmic functions for c1 and c2, so they could be more effective than the conventional PSO in solving complex optimization problems.
• PSO with autonomous groups has diverse strategies for updating c1 and c2, so it could perhaps be more adaptable than the conventional PSO in solving a wider range of optimization problems.


Table 8 Comparison results among all algorithms on fixed-dimension benchmark functions


Test function AGPSO1 AGPSO2 AGPSO3 SPSO MPSO TACPSO IPSO

F14
Average best so far 9.98E−01 9.98E−01 9.98E−01 9.98E−01 9.98E−01 9.98E−01 9.98E−01
Median best so far 9.98E−01 9.98E−01 9.98E−01 9.98E−01 9.98E−01 9.98E−01 9.98E−01
SD best so far 3.3876E−16 3.3876E−16 3.3876E−16 3.3876E−16 3.3876E−16 3.3876E−16 3.3876E−16
F15
Average best so far 3.3824E−04 3.9905E−04 3.6853E−04 1.0108E−03 4.4773E−04 4.1489E−04 4.1489E−04
Median best so far 3.0749E−04 3.0749E−04 3.0749E−04 7.8266E−04 3.0749E−04 3.0749E−04 3.0749E−04
SD best so far 1.6714E−04 2.7940E−04 2.3232E−04 4.6281E−04 2.7642E−04 2.8739E−04 2.8739E−04
F16
Average best so far −1.0316 −1.0316 −1.0316 −1.0316 −1.0316 −1.0316 −1.0316
Median best so far −1.0316 −1.0316 −1.0316 −1.0316 −1.0316 −1.0316 −1.0316
SD best so far 0 0 0 0 0 0 0
F17
Average best so far 0.3979 0.3979 0.3979 0.3979 0.3979 0.3979 0.3979
Median best so far 0.3979 0.3979 0.3979 0.3979 0.3979 0.3979 0.3979
SD best so far 0 0 0 0 0 0 0
F18
Average best so far 3 3 3 3 3 3 3
Median best so far 3 3 3 3 3 3 3
SD best so far 0 0 0 0 0 0 0
F19
Average best so far −3.8628 −3.8628 −3.8628 −3.8628 −3.8628 −3.8628 −3.8628
Median best so far −3.8628 −3.8628 −3.8628 −3.8628 −3.8628 −3.8628 −3.8628
SD best so far 0 0 0 0 0 0 0
F20
Average best so far −3.2625 −3.2586 −3.2824 −3.2361 −3.2704 −3.2665 −3.2665
Median best so far −3.2625 −3.2031 −3.3220 −3.2031 −3.3220 −3.3220 −3.3220
SD best so far 6.0463E−02 6.0328E−02 5.7005E−02 7.5065E−02 6.0063E−02 6.0328E−02 6.0328E−02
F21
Average best so far −8.4675 −9.3111 −9.1412 −8.1262 −7.8784 −8.6360 −8.7238
Median best so far −10.1532 −10.1532 −10.1532 −10.1532 −10.1532 −10.1532 −10.1532
SD best so far 2.4247 1.9151 2.0586 2.5251 2.6820 2.3573 2.4471
F22
Average best so far −9.5225 −10.0513 −10.0513 −9.6984 −9.5212 −9.8755 −9.8742
Median best so far −10.4029 −10.4029 −10.4029 −10.4029 −10.4029 −10.4029 −10.4029
SD best so far 2.0023 1.3381 1.3381 1.8271 2.0054 1.6093 1.6135
F23
Average best so far −10.3577 −9.4643 −10.0003 −9.4627 −10.3561 −10.0003 −10.5364
Median best so far −10.5364 −10.5364 −10.5364 −10.5364 −10.5364 −10.5364 −10.5364
SD best so far 0.9787 2.1810 1.6357 2.1842 0.9873 1.6357 0



Fig. 6  Convergence curves of the algorithms on unimodal benchmark functions F1 to F7 (average best-so-far versus iteration; plots omitted)

These points could, in theory, give AGPSO the potential for high performance. In the following section, the effectiveness of AGPSO is investigated.

5 Experimental Results and Discussions

5.1 Selected Benchmark Functions

As shown in Tables 2, 3 and 4, twenty-three standard benchmark functions are employed to test the performance of AGPSO [9,36–41]. The objective is to find the global minimum. These benchmark functions can be divided into three groups: unimodal, multimodal, and fixed-dimension multimodal. In these tables, Dim indicates the dimension of the function, Range gives the boundaries of the search space, and f_min is the minimum value of the function. Note that unimodal and multimodal functions with 300 dimensions have been chosen to examine the performance of the proposed method in dealing with problems of high dimensionality. In addition, ten fixed-dimension benchmark functions have been selected to provide a comprehensive study. A detailed description of the benchmark functions is available in the Appendix.

To validate the results of AGPSO, it is compared with the conventional PSO and with some recent modifications of PSO with time-varying accelerators: TACPSO [32], MPSO [33], and IPSO [34]. Please note that the source code of the AGPSO algorithm can be found at http://www.alimirjalili.com/Projects.html.

5.2 Parameter Setting

The coefficients of SPSO, TACPSO, MPSO, and IPSO are listed in Table 5. The inertia weight w for all the algorithms AGPSO1 to AGPSO3 is decreased linearly from 0.9 to 0.4. There are 100 particles, and the maximum number of iterations is 2000.
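Assuming the illustrative agpso3 and f1 sketches given earlier in this article, a run under the settings of this section might look as follows; this is our hypothetical harness, not the authors' released code (see the URL in Sect. 5.1), and the 30-run loop mirrors the protocol of Sect. 5.3.

```python
import numpy as np

# F1 with the Table 2 settings: 300 dimensions, search range [-100, 100];
# 30 independent runs, seeded per run for reproducibility.
results = [agpso3(f1, dim=300, bounds=(-100, 100),
                  n_particles=100, T=2000, seed=run)[1]
           for run in range(30)]
print(np.mean(results), np.median(results), np.std(results))
```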


Fig. 7  Convergence curves of the algorithms on multimodal benchmark functions F8 to F13 (average best-so-far versus iteration; plots omitted)

5.3 Performance Analysis

To compare the performance of all the algorithms, the results are collected over 30 independent runs. The average, median, and standard deviation of the best solution in the last iteration are reported in Tables 6, 7 and 8. The best results are indicated in bold type.

Table 6 shows the results for the unimodal functions. As may be seen from this table, AGPSO3 has the best results on five out of the seven unimodal benchmark functions. In general, the results of AGPSO1 to AGPSO3 are much better than those of the other algorithms, which shows that autonomous groups can improve the performance of the PSO algorithm on these benchmark functions. Figure 6 illustrates the convergence curves of the algorithms. As can be seen from these curves, AGPSO3 has the best convergence rates on most of the benchmark functions, followed by AGPSO1 and AGPSO2. It is worth noting that unimodal benchmark functions have only one global minimum and no local minima in the search space, so they are quite suitable for benchmarking the convergence ability of algorithms. Consequently, the results indicate that autonomous groups can improve the convergence ability of the PSO algorithm significantly. The reason for the superior results is that the particles have diversity in the population and are able to exploit knowledge of the location of near-optimal solutions effectively.

The results for the multimodal benchmark functions are provided in Table 7. In contrast to the unimodal functions, these benchmark functions have many local minima, and their number increases exponentially with problem dimensionality; they are therefore suitable for benchmarking the capability of algorithms to avoid local minima. As the results show, AGPSO3 performs better than the other algorithms on most of the multimodal benchmark functions. The only benchmark function on which AGPSO3 is not able to outperform TACPSO is F10, and even there the results of the two algorithms are very close. In general, the AGPSO algorithms have the best results, and the results of Table 7 show that the autonomous groups increased the performance of the PSO algorithm in terms of avoiding local minima. As may be observed in Fig. 7, and similar to the results on the unimodal benchmark functions, the convergence rate of the AGPSO algorithms is better than that of the other algorithms, with AGPSO3 having the best convergence rates of the three. The reason for the improved ability to avoid local minima is that the autonomous groups give AGPSO a more randomized search than the conventional and recent modifications of the PSO algorithm, so the particles are not easily trapped in local minima.

In contrast to the multimodal functions, the fixed-dimension multimodal benchmark functions have few local minima. As shown in Table 8, the results of all the algorithms are equal on five of these functions.


Fig. 8  Convergence curves of the algorithms on fixed-dimension multimodal benchmark functions F14 to F23 (average best-so-far versus iteration; plots omitted)

However, the AGPSO algorithms outperform the others on F15, F20, F21, and F22, and AGPSO3 has the best results on three of these functions. Figure 8 illustrates the convergence behavior of the algorithms on the fixed-dimension functions. All the algorithms have close convergence curves, with the AGPSO algorithms slightly better. The similarity of the results and convergence curves is due to the low-dimensional character of these benchmark functions; the effect of autonomous groups is more observable on high-dimensional problems.

To summarize, the results show that the proposed method benefits the PSO algorithm not only in avoiding local minima but also in improving the convergence rate. Statistically speaking, the AGPSO3 algorithm has the best results on seventeen of the twenty-three benchmark functions, more than half. The AGPSO1 and AGPSO2 algorithms also show the best results on 5 and 6 benchmark functions, respectively.


This shows a significant superiority of the AGPSO3 algorithm over the others. As can be seen in the tables, the results of the AGPSO algorithms are generally much better than those of the SPSO algorithm. The SPSO algorithm provides good results on 5 out of the 23 test functions, mostly the low-dimensional ones, whereas the AGPSO algorithms show much better results on the high-dimensional unimodal and multimodal test functions. The results on the unimodal functions revealed that SPSO fails to provide fast convergence on high-dimensional problems, whereas the proposed approach allows AGPSO to achieve high convergence rates because of the different cognitive behaviors of the particles. In addition, the results on the high-dimensional multimodal functions indicated the poor ability of SPSO to avoid local optima. The results of the AGPSO algorithms show that the proposed autonomous groups allow particles to follow the social behavior of the whole swarm in different patterns, resulting in higher local optima avoidance capability.

Among the three proposed variants, the third shows much better results. As can be inferred from Figs. 2, 3 and 4, group 1 of AGPSO3 has the most local search ability because the intersection point of c1 and c2 is close to the start of the iterations, whereas group 2 better allows particles to search globally because c1 intersects c2 after almost three quarters of the allowed iterations. Groups 3 and 4 provide a smooth transition between local and global search ability. This combination prevents particles from easily becoming trapped in local optima and is the main reason for the superior results of the proposed autonomous groups (especially those of AGPSO3). It should be noted that this remarkable improvement is achieved just by dividing the particles into autonomous groups and utilizing the new mathematical functions; there is no extra computational cost for the proposed method. The results support the contention that the proposed approach has merit for solving high-dimensional problems.

6 Conclusion

In this paper, a new modification of PSO called AGPSO was proposed, utilizing the concept of autonomous groups inspired by the diversity of individuals in natural colonies. Three versions of AGPSO with different autonomous groups were introduced. To evaluate their performance, twenty-three benchmark functions were employed, and the results were compared with the conventional PSO and some recent modifications of PSO. The results show that AGPSO has merit compared to the other algorithms in terms of avoiding local minima and improving convergence speed, particularly for problems of higher dimensionality. The results also showed that dividing particles into groups and allowing them to have different individual and social behaviors can improve the performance of PSO significantly without any extra computational burden.

For future studies, it would be interesting to apply AGPSO to real-world optimization problems to evaluate its efficiency in practice. Increasing the number of autonomous groups is also worthy of investigation. Moreover, employing different types of function with a greater variety of slopes, curvatures, and interception points is recommended for future study.

Appendix

Tables 9, 10, 11, 12, 13, 14, 15 and 16 contain the details of the benchmark functions.

Table 9  $a_{ij}$ in F14

$a_{ij} = \begin{pmatrix} -32, & -16, & 0, & 16, & 32, & -32, & \ldots, & 0, & 16, & 32 \\ -32, & -32, & -32, & -32, & -16, & \ldots, & 32, & 32, & 32 \end{pmatrix}$


Table 10 ai and bi in F15


i 1 2 3 4 5 6 7 8 9 10 11

ai 0.1957 0.1947 0.1735 0.1600 0.0844 0.0627 0.0456 0.0342 0.0342 0.0235 0.0246
$b_i^{-1}$  0.25  0.5  1  2  4  6  8  10  12  14  16

Table 11 aij and ci in F19


i ai1 ai2 ai3 ci

1 3 10 30 1
2 0.1 10 35 1.2
3 3 10 30 3
4 0.1 10 30 3.2

Table 12 pi j in F19
i pi1 pi2 pi3

1 0.3689 0.1170 0.2673


2 0.4699 0.4387 0.7470
3 0.1091 0.8732 0.5547
4 0.03815 0.5743 0.8828

Table 13 ai j and ci in F20


i ai1 ai2 ai3 ai4 ai5 ai6 ci

1 10 3 17 3.5 1.7 8 1
2 0.05 10 17 0.1 8 14 1.2
3 3 3.5 1.7 10 17 8 3
4 17 8 0.05 10 0.1 14 3.2

Table 14 pi j in F20
i pi1 pi2 pi3 pi4 pi5 pi6

1 0.131 0.169 0.556 0.012 0.828 0.588


2 0.232 0.413 0.830 0.373 0.100 0.999
3 0.234 0.141 0.352 0.288 0.304 0.665
4 0.404 0.882 0.873 0.574 0.109 0.038

Table 15  $a_{ij}$ and $c_i$ in F21, F22, and F23

i   ai1   ai2   ai3   ai4   ci

1 4 4 4 4 0.1
2 1 1 1 1 0.2
3 8 8 8 8 0.2
4 6 6 6 6 0.4
5 3 7 3 7 0.4
6 2 9 2 9 0.6
7 5 6 3 3 0.3
8 8 1 8 1 0.7
9 6 2 6 2 0.5
10 7 3.6 7 3.6 0.5


Table 16  Best solutions for fixed-dimension multimodal functions

F     X output     f_min
F14 (−32, 32) 1
F15 (0.1928, 0.1908, 0.1231, 0.1358) 0.00030
F16 (0.089, −0.712), (−0.089, 0.712) −1.0316
F17 (−3.14, 12.27), (3.14, 2.275), (9.42, 2.42) 0.398
F18 (0, −1) 3
F19 (0.114, 0.556, 0.852) −3.86
F20 (0.201, 0.15, 0.477, 0.275, 0.311, 0.657) −3.32
F21 5 local minima in ai j , i = 1, 2, 3, 4, 5 −10.1532
F22 7 local minima in ai j ,i = 1, 2, 3, 4, 5, 6, 7 −10.4028
F23 10 local minima in ai j , i = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 −10.5363

References

1. Eberhart, R.C.; Kennedy, J.: A new optimizer using particle swarm theory. In: Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, pp. 39–43 (1995)
2. Eberhart, R.C.; Kennedy, J.: Particle swarm optimization. In: IEEE International Conference on Neural Networks, Perth, Australia, pp. 1942–1948 (1995)
3. Chandra, S.; Bhat, R.; Singh, H.: A PSO based method for detection of brain tumors from MRI. In: Nature & Biologically Inspired Computing, Coimbatore, pp. 666–671 (2009)
4. Mathiyalagan, M.; Dhepthie, U.; Sivanandam, S.: Grid scheduling using enhanced PSO algorithm. vol. 2, pp. 140–145 (2010)
5. Masehian, E.; Sedighizadeh, D.: A multi-objective PSO-based algorithm for robot path planning. In: IEEE International Conference on Industrial Technology, Viña del Mar, pp. 465–470 (2010)
6. Fayk, M.; El Nemr, H.; Moussa, M.: Particle swarm optimisation based video abstraction. J. Adv. Res. 1, 163–167 (2010)
7. Mirjalili, S.; Rawlins, T.; Hettenhausen, J.; Lewis, A.: A comparison of multi-objective optimisation metaheuristics on the 2D airfoil design problem. ANZIAM J. 54, C345–C360 (2013)
8. Mirjalili, S.; Sadiq, A.S.: Magnetic optimization algorithm for training multi layer perceptron. In: 2011 IEEE 3rd International Conference on Communication Software and Networks (ICCSN), pp. 42–46 (2011)
9. Mirjalili, S.; Mohd Hashim, S.Z.; Moradian Sardroudi, H.: Training feedforward neural networks using hybrid particle swarm optimization and gravitational search algorithm. Appl. Math. Comput. 218, 11125–11137 (2012)
10. Mirjalili, S.; Mirjalili, S.M.; Lewis, A.: Let a biogeography-based optimizer train your Multi-Layer Perceptron. Inform. Sci. 269, 188–209 (2014)
11. Mirjalili, S.; Lewis, A.: S-shaped versus V-shaped transfer functions for binary particle swarm optimization. Swarm Evol. Comput. 9, 1–14 (2013)
12. Mirjalili, S.; Lewis, A.: Adaptive gbest-guided gravitational search algorithm. Neural Comput. Appl. (2014). doi:10.1007/s00521-014-1640-y
13. Saremi, S.; Mirjalili, S.; Lewis, A.: Biogeography-based optimisation with chaos. Neural Comput. Appl. 1–21 (2014). doi:10.1007/s00521-014-1597-x
14. Wang, G.; Guo, L.; Wang, H.; Duan, H.; Liu, L.; Li, J.: Incorporating mutation scheme into krill herd algorithm for global numerical optimization. Neural Comput. Appl. 1–19 (2012)
15. Wang, G.-G.; Gandomi, A.H.; Alavi, A.H.: A chaotic particle-swarm krill herd algorithm for global numerical optimization. Kybernetes 42, 9–9 (2013)
16. Wang, G.-G.; Gandomi, A.H.; Alavi, A.H.: An effective krill herd algorithm with migration operator in biogeography-based optimization. Appl. Math. Model. (2013)
17. Wang, G.-G.; Gandomi, A.H.; Alavi, A.H.: Stud krill herd algorithm. Neurocomputing 128, 363–370 (2014)
18. Wang, G.-G.; Gandomi, A.H.; Alavi, A.H.; Hao, G.-S.: Hybrid krill herd algorithm with differential evolution for global numerical optimization. Neural Comput. Appl. 1–12 (2013). doi:10.1007/s00521-013-1485-9
19. Wang, G.-G.; Guo, L.; Gandomi, A.H.; Hao, G.-S.; Wang, H.: Chaotic krill herd algorithm. Inform. Sci. 274, 17–34 (2014)
20. Guo, L.; Wang, G.-G.; Gandomi, A.H.; Alavi, A.H.; Duan, H.: A new improved krill herd algorithm for global numerical optimization. Neurocomputing 138, 392–402 (2013)
21. Saremi, S.; Mirjalili, S.: Integrating chaos to biogeography-based optimization algorithm. Int. J. Comput. Commun. Eng. 2, 655–658 (2013)
22. Saremi, S.; Mirjalili, S.M.; Mirjalili, S.: Chaotic krill herd optimization algorithm. Procedia Technol. 12, 180–185 (2014)
23. Premalatha, K.; Natarajan, A.M.: Hybrid PSO and GA for global maximization. Int. J. Open Probl. Comput. Sci. Math. 2, 597–608 (2009)
24. Mirjalili, S.; Mohd Hashim, S.Z.: A new hybrid PSOGSA algorithm for function optimization. In: International Conference on Computer and Information Application (ICCIA 2010), pp. 374–377 (2010)
25. Shuang, B.; Chen, J.; Li, Z.: Study on hybrid PS-ACO algorithm. Appl. Intell. 34, 64–73 (2011)
26. Toscano-Pulido, G.; Reyes-Medina, A.; Ramirez-Torres, J.: A statistical study of the effects of neighborhood topologies in particle swarm optimization. Comput. Intell. 343, 179–192 (2011)
27. Matsushita, H.; Nishio, Y.: Network-structured particle swarm optimizer with various topology and its behaviors. Adv. Self-Organ. Maps 5629, 163–171 (2009)
28. Mo, S.; Zeng, J.; Tan, Y.: Particle swarm optimisation based on self-organisation topology driven by different fitness rank. Int. J. Comput. Sci. Eng. 6, 24–33 (2011)
29. Cai, X.: A new modified PSO based on black stork foraging process. In: 8th IEEE International Conference on Cognitive Informatics, Kowloon, Hong Kong, pp. 509–513 (2009)
30. Cai, X.; Cui, Z.; Zeng, J.; Tan, Y.: Dispersed particle swarm optimization. Inform. Process. Lett. 105, 231–235 (2008)
31. Cai, X.; Cui, Y.; Tan, Y.: Predicted modified PSO with time-varying accelerator coefficients. Int. J. Bio-Inspired Comput. 1, 50–60 (2009)
32. Ziyu, T.; Dingxue, Z.: A modified particle swarm optimization with an adaptive acceleration coefficient. In: Asia-Pacific Conference on Information Processing, Shenzhen, pp. 330–332 (2009)

33. Bao, G.Q.; Mao, K.F.: Particle swarm optimization algorithm with asymmetric time-varying acceleration coefficients. In: IEEE International Conference on Robotics and Biomimetics, Guilin, pp. 2134–2139 (2009)
34. Cui, Z.; Zeng, J.; Yin, Y.: An improved PSO with time-varying accelerator coefficients. In: Eighth International Conference on Intelligent Systems Design and Applications, Kaohsiung, pp. 638–643 (2008)
35. Fukuyama, Y.; Yoshida, H.: A particle swarm optimization for reactive power and voltage control in electric power systems. In: IEEE Congress on Evolutionary Computation, Seoul, Korea, pp. 87–93 (2001)
36. Molga, M.; Smutnicki, C.: Test functions for optimization needs (2005)
37. Yang, X.-S.: Test problems in optimization. In: Yang, X.-S. (ed.) Engineering Optimization: An Introduction with Metaheuristic Applications. Wiley, New York (2010)
38. Yao, X.; Liu, Y.; Lin, G.: Evolutionary programming made faster. IEEE Trans. Evol. Comput. 3, 82–102 (1999)
39. Digalakis, J.G.; Margaritis, K.G.: On benchmarking functions for genetic algorithms. Int. J. Comput. Math. 77, 481–506 (2001)
40. Mirjalili, S.; Mirjalili, S.M.; Lewis, A.: Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61 (2014)
41. Mirjalili, S.; Mirjalili, S.M.; Yang, X.-S.: Binary bat algorithm. Neural Comput. Appl. 1–19 (2013). doi:10.1007/s00521-013-1525-5
