1 Introduction

Population-based meta-heuristics have become the dominant methods for finding optimal or near-optimal solutions to many complex optimization problems in reasonable time [22]. The popularity of meta-heuristics has increased considerably due to their key role in learning and knowledge discovery within the emerging fields of big data and machine learning. These meta-heuristics derive their inspiration from intelligent processes arising in nature and are commonly classified into two categories: evolutionary (EA) and swarm intelligence algorithms [13]. The most popular EAs are the genetic algorithm (GA) [17] and differential evolution (DE) [47]. The swarm intelligence category includes particle swarm optimization (PSO) [47], cuckoo search (CS) [58], the whale optimization algorithm [32], monarch butterfly optimization (MBO) [53], the moth search algorithm (MSA) [52], and Harris hawks optimization (HHO) [19], among others.

Although the development of meta-heuristics has witnessed tremendous progress in recent years, there is still much room for improvement since no single algorithm can solve all problems efficiently, as per the “No Free Lunch” theorem [55]. Recently, a new population-based meta-heuristic called the slime mould algorithm (SMA) was proposed by Li et al. [27]. SMA is inspired by the slime mould Physarum polycephalum, an organism that can live freely as a single cell but can also form communicating aggregates when foraging for food. To find food, the slime mould starts the search process with a randomly distributed population. Once food concentration has been identified during the random search, the slime mould approaches and wraps the food and secretes enzymes to digest it, while retaining some exploration capability to search for better food sources. To imitate the slime mould’s exploration and exploitation behaviors, the authors in [27] proposed a bio-oscillating policy that separates the population into two groups according to fitness. The first group, called the positive group, consists of the individuals with the best fitness, whereas the other, labeled the negative group, consists of those with the worst fitness. The better-fitness group is exploited to find the best possible solution, whereas the lower-fitness group is used to explore outward regions. In addition, a vibration parameter based on a sigmoid function is introduced to simulate the food-grabbing behavior of slime mould.

Despite SMA being a recent algorithm, it has demonstrated excellent performance compared to state-of-the-art meta-heuristics in many fields. However, one of its key disadvantages lies in its relatively long run time and high computational cost. Since SMA has been applied successfully in multiple fields, in this work we investigate enhancing the algorithm by augmenting it with a population size adaptation method that can reduce its long run time. Population size plays a very important role in both the run-time efficiency and the optimization effectiveness of meta-heuristics; thus, by balancing the exploration and exploitation characteristics of SMA, its performance and computational cost can be improved [27]. To balance the exploration and exploitation phases of an algorithm, population size adaptation schemes automatically adjust population size according to population diversity during the search, thereby enhancing performance and reducing run time. Population size adaptation has been widely studied for genetic algorithms [5, 25], differential evolution [40, 50], artificial bee colony optimization [9], swarm intelligence [7, 12, 41], and recently the sine cosine algorithm [3]. However, to the best of the authors’ knowledge, no such work has been reported for SMA.

To fill this research gap, we propose an adaptive fluctuant population size SMA algorithm called FP-SMA. Unlike the original SMA, where population size is fixed in every epoch, FP-SMA adaptively changes the population size to enhance the run-time characteristics of both the exploitation and exploration phases of the algorithm. The population adaptation concept used in the proposed algorithm is a cluster-based approach borrowed from the K-means clustering algorithm. The threshold used to start the adaptation process is a scaled sigmoid function that decreases smoothly at first, then sharply midway, and smoothly again near the end of the algorithm's execution. Once population diversity falls outside the threshold bounds, population size increases or decreases following a sine wave pattern (for randomization). The key contributions of this study can be summarized as follows:

  • Propose an adaptive fluctuant population size slime mould algorithm (FP-SMA) that automatically adjusts population size during the search process according to population diversity, effectively balancing the exploitation and exploration characteristics of the conventional SMA algorithm.

  • FP-SMA performance is analyzed over several benchmark functions, including 13 standard and 30 IEEE CEC2014 benchmark functions.

  • Simulation results revealed that FP-SMA can achieve significant reductions in the number of function evaluations as compared to conventional SMA without impacting solution quality.

The remainder of the paper is organized as follows. Section 2 reviews the SMA algorithm, the related literature, and population diversity adaptation techniques. Section 3 provides details of the proposed algorithm. Experimental results are reported and analyzed in Sect. 4. Finally, conclusions and some future directions are presented in Sect. 5.

2 Background

In this section, we introduce the SMA algorithm's mathematical model, followed by a short literature review of SMA and a brief discussion of the population diversity-based adaptation techniques [56] that motivated this work.

2.1 SMA introduction

In [27], a new stochastic optimizer called the slime mould algorithm (SMA) was proposed. The algorithm models the morphological changes of the slime mould Physarum polycephalum when foraging. Slime mould is a eukaryote whose plasmodium stage is the main phase in which it seeks food. Once a food source is identified, the slime mould surrounds it and secretes enzymes to digest it. The foraging process of the slime mould is divided into three phases: finding the food source, wrapping it, and grabbing it. The mathematical model for these phases is described as follows [27]:

$$\begin{aligned} \overrightarrow{X}(t+1) = \begin{cases} rand \cdot (UB-LB)+LB, & rand<z \\ \overrightarrow{X_b}(t) + \overrightarrow{vb} \cdot \left( \overrightarrow{W} \cdot \overrightarrow{X_A}(t) - \overrightarrow{X_B}(t) \right) , & r < p \\ \overrightarrow{vc} \cdot \overrightarrow{X}(t), & r \ge p \end{cases} \end{aligned}$$
(1)

where rand and r are random values \(\in \) [0,1], UB/LB represent the upper/lower bounds of the search space, and z is used to balance exploration and exploitation. \(\overrightarrow{X}\) and \(\overrightarrow{X_{b}}\) represent the locations of the current individual and of the best-fitness individual at iteration t, respectively, where the best fitness corresponds to the individual with the highest odor concentration. \(\overrightarrow{X_{A}}\) and \(\overrightarrow{X_{B}}\) are two randomly selected individuals from the slime mould population. Parameter \(\overrightarrow{vc}\) decreases linearly from 1 to 0, whereas \(\overrightarrow{vb}\) is a variable \(\in \) [-a,a] where a is calculated using [27]:

$$\begin{aligned} a = arctanh(-(\frac{t}{T})+1) \end{aligned}$$
(2)

where T represents the maximum number of iterations. Parameter \(\overrightarrow{W}\) is a vector representing the slime mould weight, described mathematically as [27]:

$$\begin{aligned} \overrightarrow{W}(SmellIndex(i)) = \begin{cases} 1+r\cdot \log \left( \frac{F_{b}-S(i)}{F_{b}-F_{w}}+1 \right) , & condition \\ 1-r\cdot \log \left( \frac{F_{b}-S(i)}{F_{b}-F_{w}}+1 \right) , & otherwise \end{cases} \end{aligned}$$
(3)
$$\begin{aligned} SmellIndex = sort(S) \end{aligned}$$
(4)

where \(F_{b}\) and \(F_{w}\) represent the best and worst fitness values at the current iteration, respectively, and r is a random number \(\in \) [0,1]. Moreover, S(i) represents the individual's fitness, and condition indicates that S(i) ranks in the first half of the population (i.e., the positive group). In Eq. (4), the term SmellIndex denotes the result of sorting S in ascending order. Parameter p in Eq. (1) is calculated using [27]:

$$\begin{aligned} p = \tanh (|S(i)-DF|) \end{aligned}$$
(5)

where DF is the overall global best fitness and S(i) represents the individual’s fitness.

A flowchart of the SMA algorithm is depicted in Fig. 1. The algorithm starts by initializing the parameters D, P, T, LB, UB, z, and a random population \(\overrightarrow{X_i}(t=0)\). In each iteration, SMA calculates the individuals' fitness and finds the best individual of the current iteration. The population is then updated according to Eq. (1) and the conditional weighting parameter W. This iterative process is repeated until the maximum number of iterations is reached, at which point the best fitness and its solution are returned as the final result.

Fig. 1 SMA flowchart

In Eq. (1), exploration is guaranteed with a probability of at least z, while exploitation is performed with a probability of at least \(1-p\). When rand is less than z, SMA takes a random walk within the boundaries defined by LB and UB. When the random number r is larger than or equal to p, SMA performs exploitation and searches in the neighbourhood of the current location. When r is less than p, SMA wraps around the current best position, \(\overrightarrow{X_b}(t)\), with the wrapping direction and radius depending on the current position's fitness. This behavior becomes more evident when substituting the definition of W from Eq. (3) into Eq. (1) for the case \(r<p\), resulting in [27]:

$$\begin{aligned} \overrightarrow{X}(t+1) = \overrightarrow{X_b}(t)+\overrightarrow{vb} \cdot \left( \overrightarrow{X_A}(t)-\overrightarrow{X_B}(t)\pm r\cdot \log \left( \frac{F_{b}-S(i)}{F_{b}-F_{w}}+1 \right) \right) \end{aligned}$$
(6)

The SMA algorithm wraps the food in two directions depending on the current position's fitness, with a radius depending on the amplitude of \(\overrightarrow{vb}\) and the population variance. In Eq. (1), \(\overrightarrow{vb}\) and \(\overrightarrow{vc}\) are two tuning parameters that oscillate toward 0 over the iterations, imitating the food-grabbing behaviour and hence exploitation around the best food source.
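To make this update step concrete, the following is a minimal NumPy sketch of one SMA position update following Eqs. (1)–(5). It is an illustrative rendering under stated assumptions rather than the reference implementation: the global best fitness DF is approximated by the best fitness of the current iteration, the logarithm is taken in base 10 as in common SMA implementations, \(\overrightarrow{vc}\) is drawn from a range that shrinks linearly from [-1, 1] toward 0, and out-of-bound positions are simply clipped to [LB, UB].

```python
import numpy as np

def sma_step(X, fitness, t, T, lb, ub, z=0.03):
    """One SMA position update (Eqs. (1)-(5)); a simplified sketch, not the reference code."""
    P, D = X.shape
    order = np.argsort(fitness)                      # SmellIndex: ascending fitness (Eq. (4))
    Fb, Fw = fitness[order[0]], fitness[order[-1]]   # best/worst fitness this iteration
    Xb = X[order[0]]                                 # best individual X_b(t)
    DF = Fb                                          # stand-in for the overall best fitness

    # Weight W (Eq. (3)): '+' branch for the better half (positive group), '-' otherwise.
    r = np.random.rand(P, D)
    ratio = np.log10((Fb - fitness[order]) / (Fb - Fw - 1e-300) + 1).reshape(-1, 1)
    W = np.ones((P, D))                              # small offset above guards against Fb == Fw
    W[order[: P // 2]] = 1 + r[: P // 2] * ratio[: P // 2]
    W[order[P // 2:]] = 1 - r[P // 2:] * ratio[P // 2:]

    a = np.arctanh(1 - (t + 1) / T)                  # Eq. (2), shifted by one to avoid arctanh(1)
    vb = np.random.uniform(-a, a, (P, D))            # vb in [-a, a]
    b = 1 - t / T
    vc = np.random.uniform(-b, b, (P, D))            # vc range shrinks linearly toward 0
    p = np.tanh(np.abs(fitness - DF))                # Eq. (5)

    X_new = np.empty_like(X)
    for i in range(P):
        if np.random.rand() < z:                     # random relocation (exploration)
            X_new[i] = lb + np.random.rand(D) * (ub - lb)
        elif np.random.rand() < p[i]:                # wrap around the best position
            A, B = np.random.randint(P, size=2)
            X_new[i] = Xb + vb[i] * (W[i] * X[A] - X[B])
        else:                                        # exploit near the current position
            X_new[i] = vc[i] * X[i]
    return np.clip(X_new, lb, ub)
```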

2.2 Literature review

SMA and its variants were successfully applied to many problems such as image segmentation [34, 61], breast cancer detection [29, 36], COVID-19 early detection [4, 46], parameters estimation of photovoltaic cells [14, 31, 33], medical data classification [54], feature selection [23], proportional-integral-derivative (PID) motor speed controllers [11, 43], power systems [38, 45], bearings defect identification [51], travelling salesman problem [30], and machine learning models parameter tuning for support vector machine [8] and artificial neural network [63] to name a few.

However, SMA may suffer from shortcomings such as a slow convergence rate due to being trapped in local minima and an imbalance between its exploitation and exploration phases. Therefore, to further improve SMA's performance, researchers have introduced a variety of stochastic operators such as the Levy distribution [4, 10], cosine functions for controlling parameters [16, 18], quantum rotation gates [59], opposition-based learning [35, 45, 54], and chaotic operators [31]. In addition, SMA has been hybridized with other metaheuristics such as Harris hawks optimization [60], whale optimization [1], particle swarm optimization [15], differential evolution [20, 29], and the arithmetic optimization algorithm [62] to efficiently solve specific optimization problems. Furthermore, variants of SMA for discrete binary [2, 26] and multi-objective optimization problems [21, 24, 44] have been proposed. However, none of these works has considered population size adaptation to enhance the performance of SMA.

2.3 Population adaptation

Population size adaptation has become prevalent in many population-based metaheuristic algorithms (MAs). The choice of a proper population size can substantially enhance the efficiency of many meta-heuristics, including SMA. In a typical linear population size reduction scheme, the population initially contains a large number of individuals to enhance exploration. As the population evolves, its size is decreased linearly until it reaches the smallest allowed size at the end of execution, allowing for proper exploitation. However, for more complex objective functions, it is also possible to increase population size towards the end of the search to avoid premature convergence or stagnation. A common criterion for controlling the direction of population size change is population diversity (i.e., population distribution). For example, in [6, 39, 42, 56, 57], the authors proposed using population diversity to start and stop the population adaptation process in differential evolution. The following reviews the population diversity adaptation technique presented in [56]. The parameters and variables associated with this technique are listed in Table 1.

Table 1 Population diversity parameters and variables [56]

Population diversity is measured using the Euclidean distances of the individuals from the population centroid over all dimensions, described as follows:

$$\begin{aligned} DI_t = \sqrt{\frac{1}{P_t}\sum _{i=1}^{P_t}\sum _{j=1}^{D}(x_{i,j}-{\bar{x}}_j)^2} \end{aligned}$$
(7)

where value \({\bar{x}}_j\) is calculated as:

$$\begin{aligned} {\bar{x}}_j = \frac{1}{P_t}\sum _{i=1}^{P_t}x_{i,j} \end{aligned}$$
(8)

During the evolution process, a relative measure of population diversity is calculated using:

$$\begin{aligned} RD_t = \frac{DI_t}{DI_{t=0}} \end{aligned}$$
(9)

where the lower and upper bounds for \(RD_t\) are \(0.9 \times \gamma R_{FES,t}\) and \(1.1 \times \gamma R_{FES,t}\), respectively, with \(\gamma \) a scaling factor and \(R_{FES,t}\) the fraction of consumed function evaluations given by:

$$\begin{aligned} R_{FES,t} = \frac{\text {current number of function evaluations}}{\text {number of function evaluations allowed}} \end{aligned}$$
(10)

When \(RD_t\) drops below the lower bound (i.e., \(0.9 \times \gamma R_{FES,t}\)), \(P_t\) is increased by 1, whereas when it exceeds the upper bound (i.e., \(1.1 \times \gamma R_{FES,t}\)) it is decreased by 1. This technique results in a linearly fluctuant population driven by Euclidean-distance-based population diversity.
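As an illustration, a minimal sketch of the diversity measure of Eqs. (7)–(9) and the ±1 rule described above (function names are ours, and the scaling factor \(\gamma \) is a problem-dependent input):

```python
import numpy as np

def population_diversity(X):
    """Eq. (7): diversity as the root-mean-square deviation from the population centroid."""
    centroid = X.mean(axis=0)                          # Eq. (8)
    return float(np.sqrt(np.sum((X - centroid) ** 2) / len(X)))

def adapt_by_one(P_t, DI_t, DI_0, fes, max_fes, gamma):
    """+/-1 population-size adaptation driven by relative diversity (Eqs. (9)-(10))."""
    RD_t = DI_t / DI_0                                 # Eq. (9)
    R_fes = fes / max_fes                              # Eq. (10)
    if RD_t < 0.9 * gamma * R_fes:                     # below the lower bound -> grow
        return P_t + 1
    if RD_t > 1.1 * gamma * R_fes:                     # above the upper bound -> shrink
        return P_t - 1
    return P_t
```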

In [48], the authors proposed a pseudo-randomization technique to change population size, where the population size \(P_t\) at the t-th function evaluation is calculated using:

$$\begin{aligned} P_t = P_{min}+\left[ \frac{M_t \times D - P_{min}}{2} \times \left( \cos \left( \frac{t\times \pi }{M_t\times D^2}\right) +1 \right) \right] \end{aligned}$$
(11)

where “\([~] \)” is a rounding operator, \(P_{min}\) is minimum population size, D is the dimension of the function to be optimized, and \(M_t\) is a linear reduction parameter defined as:

$$\begin{aligned} M_t = \frac{(M-1)\times (T-t)}{T}+1 \end{aligned}$$
(12)

with T being the maximum number of function evaluations and t the index of the current function evaluation. Parameter M is a function of the initial population size and the problem dimension, calculated as:

$$\begin{aligned} M = \frac{P_{init}}{D} \end{aligned}$$
(13)
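For clarity, a compact sketch of the schedule in Eqs. (11)–(13), with t taken as the index of the current function evaluation and T as the evaluation budget (names are illustrative):

```python
import numpy as np

def cosine_population_size(t, T, D, P_init, P_min):
    """Cosine population-size schedule of [48] (Eqs. (11)-(13)); a sketch."""
    M = P_init / D                                          # Eq. (13)
    M_t = (M - 1) * (T - t) / T + 1                         # Eq. (12): linear reduction
    P_t = P_min + round((M_t * D - P_min) / 2
                        * (np.cos(t * np.pi / (M_t * D ** 2)) + 1))  # Eq. (11)
    return int(P_t)
```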

In this paper, we propose to use population diversity to start and stop population adaptation similarly to Poláková et al. [39], but cluster the population using the K-means algorithm. As pointed out in [39], as the best food sources are gradually grabbed during the SMA process, the population is expected to gradually contract around the food sources with the best fitness, and hence \(DI_t\) gradually decreases toward 0. By tracking the value of \(DI_t\), the SMA convergence rate can be estimated and the population size can be adapted accordingly. If \(DI_t\) is high, the population is too diverse and its size can be reduced. If \(DI_t\) is low, the population has contracted; if this happens during the initial phase of SMA, more individuals should be added to enhance exploration. However, if SMA is approaching its last few iterations, population size should instead be reduced to save computation time. If \(DI_t\) decreases dramatically to a small value during the evolution process, resetting the population can also be considered to avoid getting stuck at a local optimum. The proposed algorithm is based on this concept and its details are described in the next section.

3 FP-SMA and analysis

In this section, the proposed algorithm, called FP-SMA, is described in detail. Suppose that at the t-th epoch there are \(x_{i}, i=1,...,P_t\) individuals in the population. By applying the K-means algorithm, each individual \(x_i\) is associated with a cluster centre \(x_{c_i}\), yielding the population diversity at the t-th iteration as:

$$\begin{aligned} DI_t = \frac{1}{P_t}\sum _{i=1}^{P_t}(x_i-x_{c_i})^2 \end{aligned}$$
(14)
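A minimal sketch of this cluster-based measure using scikit-learn's KMeans, interpreting the squared term in Eq. (14) as the squared Euclidean distance between an individual and its assigned cluster centre; the number of clusters K is an assumed parameter not fixed by the text:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_diversity(X, n_clusters=3, seed=0):
    """Eq. (14): mean squared distance of each x_i to its K-means centre x_{c_i}."""
    k = min(n_clusters, len(X))                     # KMeans needs at least k samples
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    centres = km.cluster_centers_[km.labels_]       # centre x_{c_i} assigned to each x_i
    return float(np.mean(np.sum((X - centres) ** 2, axis=1)))
```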

The initial population diversity, \(DI_{init}=DI_{t=0}\), is stored as a reference for defining the relative diversity (\(RD_t\)) during the evolution process, as in Eq. (9), with the relative expected population diversity \(R_{EP}\) calculated as:

$$\begin{aligned} R_{EP} = 1-\gamma \frac{1+\tanh (\beta (x-0.5))}{2} \text { where } x=\frac{t}{T}\in [0,1] \end{aligned}$$
(15)

where t is the current iteration index, T is the total number of iterations, and \(\beta \) is a scaling factor with a default value of 10. \(\gamma \in [0,1]\) is the fraction of the relative population diversity expected to be consumed during the SMA process, and hence \(R_{EP}\) can be treated as the expected relative population diversity at the current iteration. Initially, \(R_{EP}\approx 1\), whereas towards the end of the evolution process \(R_{EP}\approx 1-\gamma \). \(R_{EP}\) decreases smoothly while \(R_{EP}\approx 1\), then abruptly halfway through the process, and then smoothly again until \(R_{EP}\approx 1-\gamma \).

The value \(R_{EP}\) is used as a reference to trigger population adaptation. During the optimization, if the current relative population diversity \(RD_t\) is less than \(\upsilon _{low}= 0.9\times R_{EP}\) or larger than \(\upsilon _{high}= 1.1\times R_{EP}\), population adaptation starts. Optionally, if \(RD_t < \epsilon \) and \(P_t<P_{min}\), the population is reset to its initial size. The term \(P_{min}\) is the minimum population size required to guarantee a minimum amount of exploration. Typical values are \(P_{min}=\frac{P_{init}}{2}\) and \(\epsilon =0.1\).
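These thresholds can be sketched as follows; this is a simplified rendering of Eq. (15) and of the 0.9/1.1 bounds, with the defaults \(\beta =10\) and \(\gamma =1\) taken from the text:

```python
import numpy as np

def expected_relative_diversity(t, T, gamma=1.0, beta=10.0):
    """Eq. (15): expected relative diversity R_EP, a tanh-shaped decay from ~1 to ~1-gamma."""
    x = t / T
    return 1.0 - gamma * (1.0 + np.tanh(beta * (x - 0.5))) / 2.0

def adaptation_trigger(RD_t, R_EP):
    """Return 'grow', 'shrink' or None according to the v_low/v_high thresholds."""
    if RD_t < 0.9 * R_EP:                           # below v_low: diversity too small
        return "grow"
    if RD_t > 1.1 * R_EP:                           # above v_high: diversity too large
        return "shrink"
    return None
```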

Once population adaptation has started, \(\Delta _t\) individuals are added to or removed from the current population, with \(\Delta _t\) defined as

$$\begin{aligned} \Delta _t = \max \left( \left[ \frac{M_t \times D}{2} \times \left( \sin ^2\left( \frac{t\times \pi }{\Lambda } \right) + 1 \right) \right] , 1\right) \end{aligned}$$
(16)

The definition of \(\Delta _t\) is similar to that of \(P_t\) in Eq. (11) except for parameter \(\Lambda \), a problem-specific parameter that controls the fluctuation period. In the original definition of \(P_t\), the fluctuation period is \(2\times M_t\times D^2\), which may be too slow for practical engineering design problems. Note that if \(\Lambda = \infty \), then \(\Delta _t\) is fixed to one, rolling back to the approach proposed in [48]. The population size change can therefore be summarized as:

$$\begin{aligned} P_{t+1} = \begin{cases} P_{init}, & \text {if } RD_t< \epsilon \text { and } P_t< P_{min} \\ P_t-\Delta _t, & \text {if } RD_t< \epsilon \text { and } P_t> P_{min} \\ P_t+\Delta _t, & \text {if } \epsilon< RD_t< \upsilon _{low} \text { and } P_t < P_{init} \\ P_t-\Delta _t, & \text {if } RD_t> \upsilon _{high} \text { and } P_t > P_{min} \\ P_t, & otherwise \end{cases} \end{aligned}$$
(17)
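A sketch of the resulting size update, combining Eqs. (16) and (17); \(M_t\) is reused from Eq. (12), and no clamping beyond the conditions of Eq. (17) is applied here:

```python
import numpy as np

def delta_t(t, T, D, P_init, Lambda=100):
    """Eq. (16): fluctuating adaptation step (Lambda controls the fluctuation period)."""
    M_t = (P_init / D - 1) * (T - t) / T + 1                 # Eq. (12)
    step = M_t * D / 2 * (np.sin(t * np.pi / Lambda) ** 2 + 1)
    return max(int(round(step)), 1)

def next_population_size(P_t, RD_t, R_EP, t, T, D, P_init, P_min, eps=0.1, Lambda=100):
    """Eq. (17): fluctuant population-size update of FP-SMA (a simplified sketch)."""
    d = delta_t(t, T, D, P_init, Lambda)
    if RD_t < eps and P_t < P_min:
        return P_init                                        # reset after stagnation
    if RD_t < eps and P_t > P_min:
        return P_t - d                                       # contract toward P_min
    if eps < RD_t < 0.9 * R_EP and P_t < P_init:             # below v_low: grow
        return P_t + d
    if RD_t > 1.1 * R_EP and P_t > P_min:                    # above v_high: shrink
        return P_t - d
    return P_t
```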

As depicted in Fig. 2, the solid green line shows the expected \(R_{EP}\) as a function of t. When the actual \(RD_t\) moves outside the region bounded by the dashed blue lines, the population size changes by \(\Delta _t\). Furthermore, if \(RD_t\) keeps dropping below \(\epsilon \) when the population size is already at its minimum level (i.e., \(P_t=P_{min}\)), the population size can be reset.

Fig. 2 \(R_{EP}, \upsilon _{low}, \upsilon _{high}\) and \(\epsilon \) against \(\frac{t}{T}\)

The FP-SMA algorithm is illustrated in Algorithm 1. The input to the algorithm is the fitness function to evaluate, \({\mathcal {F}}\). In lines 2–6, the parameters and the population are initialized, and the population diversity is calculated using Eq. (14) before the iterations start. In lines 8–13, the fitness of each individual in the population is evaluated and sorted, and the population is divided into two groups, positive and negative. Each individual then has its position updated according to Eq. (1). Lines 14 and 15 are the newly proposed steps, where \(DI_t\), \(RD_t\) and \(R_{EP}\) are updated accordingly. Finally, in line 16, the population size for the next epoch is updated as defined by the fluctuation strategy described in Eq. (17).

Algorithm 1 FP-SMA
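Tying the pieces together, the following is a simplified, high-level rendering of this loop that reuses the helper functions sketched earlier (sma_step, cluster_diversity, expected_relative_diversity, next_population_size). How new individuals are generated or removed when the population grows or shrinks, and the safety clamp on the population size, are our assumptions and are not taken from Algorithm 1:

```python
import numpy as np

def fp_sma(F, D, lb, ub, P_init=200, T=1000, z=0.03,
           Lambda=100, eps=0.1, gamma=1.0, n_clusters=3):
    """High-level FP-SMA sketch built from the helpers defined above."""
    P_t, P_min = P_init, P_init // 2
    X = lb + np.random.rand(P_t, D) * (ub - lb)              # random initial population
    DI_0 = cluster_diversity(X, n_clusters)                  # reference diversity (Eq. (14))
    best_x, best_f = None, np.inf

    for t in range(T):
        fit = np.array([F(x) for x in X])                    # evaluate fitness
        if fit.min() < best_f:
            best_f, best_x = fit.min(), X[fit.argmin()].copy()
        X = sma_step(X, fit, t, T, lb, ub, z)                # SMA position update (Eq. (1))

        RD_t = cluster_diversity(X, n_clusters) / DI_0       # relative diversity (Eq. (9))
        R_EP = expected_relative_diversity(t, T, gamma)      # expected diversity (Eq. (15))
        P_next = next_population_size(P_t, RD_t, R_EP, t, T, D,
                                      P_init, P_min, eps, Lambda)
        P_next = max(min(P_next, P_init), 2)                 # safety clamp used in this sketch

        if P_next > P_t:                                     # grow: add random individuals
            extra = lb + np.random.rand(P_next - P_t, D) * (ub - lb)
            X = np.vstack([X, extra])
        elif P_next < P_t:                                    # shrink: drop random individuals
            X = X[np.random.choice(len(X), P_next, replace=False)]
        P_t = P_next
    return best_x, best_f
```

In this sketch, shrinking by dropping random individuals keeps the budget at one fitness call per individual per epoch; dropping the worst individuals instead would require re-evaluating the updated positions, which is a design choice left open here.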

The time complexity of FP-SMA depends on the number of iterations (T), the population size (P), and the function dimension (D), and is bounded by the computation performed within the while loop (lines 7–17). A simple analysis of the main compute-intensive processes executed during the while loop therefore gives FP-SMA's time complexity. For each iteration, the cost is dominated by fitness evaluation and sorting (line 8), which can be performed in \({\mathcal {O}}(P\log {}P)\), and by the weight update (line 9) and position update (lines 11–13), both of which can be performed in \({\mathcal {O}}(P*D)\). K-means clustering (lines 14–16) for population size adaptation can be performed in \({\mathcal {O}}(P*K)\) for a fixed number of iterations and attributes, where K is the number of clusters. Therefore, the overall time complexity of FP-SMA is \({\mathcal {O}}(T*((P\log {}P)+(P*D)+(P*K)))\), which is comparable to that of the original SMA. However, in our case the average value of P is lower, due to the adaptation at each epoch, than in SMA, which uses a fixed value for all iterations.

4 Experiment results

In this section, we apply FP-SMA to the Ackley benchmark function to validate the relationship between relative population diversity and population size, in addition to its convergence characteristics. Moreover, the performance of FP-SMA compared to the original SMA on a set of 13 standard benchmarks and the CEC2014 functions [27, 28, 37] is discussed. The performance metric used to compare solution quality is the fitness value, whereas run time is compared using the number of function evaluations. All results were obtained using Python 3.7 running on an Intel® Core™2 Quad CPU Q8400 @ 2.66 GHz with 8 GB RAM and a 64-bit OS.

4.1 Convergence analysis

In the convergence analysis experiment, FP-SMA was applied to the Ackley benchmark function, which is widely used as a multivariate test function for optimization problems [49]. It is described mathematically by Eq. (18) and plotted in Fig. 3.

$$\begin{aligned} {\mathcal {F}}(X) = -a \exp \left( -b\sqrt{\frac{1}{D}\displaystyle \sum _{i=1}^{D}X_i^2} \right) -\exp \left( \frac{1}{D}\displaystyle \sum _{i=1}^{D}\cos (c X_i) \right) + a+\exp (1) \end{aligned}$$
(18)

The Ackley function is characterized by a nearly flat outer region and a global optimum at the center (\(X^*=0\)) with many local optima nearby. The recommended parameter values for the Ackley benchmark are \(a=20, b=0.2, c=2\pi \). The parameters used in the FP-SMA implementation are: D=100, \(P_{init}\)=200, T=1000, z=0.03, \(\Lambda \)=100, \(\epsilon \)=0.1, \(\gamma \)=1. These parameters are used throughout the remainder of this discussion.
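For reference, a direct implementation of Eq. (18) with the recommended parameter values; at the global optimum, ackley(np.zeros(D)) evaluates to approximately 0:

```python
import numpy as np

def ackley(X, a=20.0, b=0.2, c=2 * np.pi):
    """Ackley benchmark (Eq. (18)); global minimum F(0) = 0."""
    X = np.asarray(X, dtype=float)
    D = X.size
    term1 = -a * np.exp(-b * np.sqrt(np.sum(X ** 2) / D))
    term2 = -np.exp(np.sum(np.cos(c * X)) / D)
    return term1 + term2 + a + np.e
```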

Fig. 3 Ackley function with \(a=20,b=0.2,c=2\pi ,D=2\)

Figure 4a shows the population size \(P_t\) and relative population diversity \(RD_t\) during FP-SMA execution, whereas Fig. 4b shows the evolution of the best fitness. In the first tens of epochs, \(RD_t\) decreases dramatically from 1 to below 0.5, with the best fitness improving to around \(10^0\), indicating convergence toward a local optimum. However, to continue the exploration process, the population size remains fixed at its initial value of \(P_{init}=200\). During the first 500 epochs, the population size changes slightly due to \(RD_t\) fluctuations, maintaining the balance between exploration and exploitation. After 500 iterations, \(R_{EP}\) drops considerably, as seen in Fig. 4a, leading to a sharp decrease in population size down to \(P_{min}=100\). Keeping this minimum population size is sufficient to reach the global minimum at around the 540th iteration.

Fig. 4 FP-SMA performance on the Ackley benchmark

Figure 5 shows the population distribution in the first two dimensions during the optimization process. At the beginning, when \(t=0\), the population is randomly distributed between the lower and upper bounds; however, as execution continues, the population gradually concentrates around multiple centers. As a result, population size can be decreased gradually to save computation without affecting exploitation. At \(t=750\), the algorithm converges toward two centers, indicating that the minimum population size is sufficient for exploitation.

Fig. 5 Population distribution

Table 2 Results for the standard benchmarks

4.2 Benchmarks comparisons

FP-SMA was also evaluated using 13 standard benchmark functions (see Table 5 in the appendix) commonly used to evaluate optimization algorithms and an additional 30 benchmarks from CEC2014 [28]. The results are the average values over 30 independent runs of the algorithms on each benchmark. Tables 2 and 3 show the best fitness on the standard and CEC2014 benchmarks, respectively. In the tables, column \(f_{min_{x}}\) gives the fitness value followed by the standard deviation (\(\sigma \)). Column \(\Upsilon \) indicates whether the fitness of SMA is better than, equal to, or worse than that of FP-SMA using the symbols "−", "\(=\)" and "\(+\)", respectively. Moreover, a comparison between SMA and FP-SMA in terms of function evaluations is given in column \(\delta (\%)\), which represents the percentage decrease in the number of function evaluations calculated using:

$$\begin{aligned} \delta = \frac{\#\text { function evaluations (SMA)} - \#\text { function evaluations (FP-SMA)}}{\#\text { function evaluations (SMA)}} \end{aligned}$$

Tables 2 and 3 compare the best fitness attained by the SMA and FP-SMA algorithms on the standard and CEC2014 benchmarks, respectively. The tables show that FP-SMA reduces the number of function evaluations by approximately 28% on average for the standard benchmarks and by 24% for the CEC2014 ones. In 8 out of 13 standard benchmarks and more than half of the CEC2014 benchmarks, FP-SMA achieves fitness equivalent to or better than the original SMA. FP-SMA achieves the same fitness on 5 CEC2014 benchmarks while having worse fitness on 10 benchmarks, with an average fitness loss of about 9%, which is much smaller than the 26% fitness improvement found on the other benchmarks. In fact, if \(f_{8}\) and \(f_{19}\) are excluded from the fitness results, the overall loss in fitness becomes negligible (i.e., 0.8%) for the benchmarks with worse fitness.

Table 3 Results for CEC2014 benchmarks

To assess the run-time enhancement rather than just the reduction in the number of function evaluations, Table 4 shows the run-time characteristics of both SMA and FP-SMA on the standard benchmarks, with values given in seconds. Column \(\delta \) gives the percentage reduction in run time when using FP-SMA compared to SMA. On average, FP-SMA provides a 25% reduction in run time. The same behavior is observed for the CEC2014 benchmarks (i.e., a 22% reduction in run time) but is not shown to keep the discussion concise.

Table 4 Standard benchmarks run-time comparison

Surprisingly, FP-SMA saves 64.3% of the computational cost on \(f_{26}\) compared to the original SMA. By plotting the population size (\(P_t\)) and relative diversity (\(RD_t\)) in Fig. 6a and the best fitness in Fig. 6b, it can be observed that the relative diversity decreases dramatically after about 100 epochs, indicating algorithm stagnation. Population size fluctuation can no longer help the algorithm escape this stagnation, and after 250 epochs the relative diversity \(RD_t\) drops to zero and the population size is set to its minimum value. This results in a considerable reduction in the number of function evaluations, as depicted in Table 3. Since the algorithm is able to identify this condition, it is possible to use it to trigger early termination, resulting in a further reduction in function evaluations. Note that in these tables we only report the savings in the number of function evaluations without changing the termination condition (i.e., the maximum number of iterations). A possible further enhancement of the proposed algorithm is to detect this case and terminate early to boost the reduction in overall execution time.

Fig. 6 FP-SMA performance on the \(f_{26}\) benchmark

In summary, the following observations can be made from the experimental results:

  • Population size adaptation based on population diversity played an important role in both run-time efficiency and optimization effectiveness of FP-SMA as compared with SMA.

  • Experimental results on 13 standard and 30 CEC2014 benchmark functions in Tables 2 and 3 have revealed that FP-SMA can achieve 20–30% savings in function evaluations on average while maintaining good solution quality when compared to SMA.

  • As depicted in Table 4, the FP-SMA showed a 25% reduction in run-time on standard benchmarks on average when compared to SMA and thus demonstrating its balanced exploration and exploitation capabilities.

5 Conclusion

This paper proposed an acceleration strategy for the SMA algorithm, named FP-SMA, that adaptively changes the population size during algorithm execution. A cluster-based population diversity measure derived from K-means is used as an indicator for changing population size to balance the exploration and exploitation phases of the algorithm. The population diversity thresholds that trigger population size changes are dynamically tuned to appropriate levels during the different stages of the algorithm. Once population diversity crosses a threshold, population size is modified using a sine-wave pattern. Simulation results on 13 standard and 30 IEEE CEC2014 benchmark functions reveal that the FP-SMA algorithm can achieve approximately a 20% reduction in computational cost while maintaining good solution quality. The performance gain can be attributed to the flexibility FP-SMA offers to switch between exploration and exploitation phases and to its adaptable population size. The proposed algorithm can be found on GitHub at https://github.com/e6la3banoh/FP-SMA. As future work, we would like to study the parallelization of FP-SMA and its extension to multi-objective SMA [21, 24, 44].