A novel nature-inspired algorithm for optimization: Squirrel search algorithm
Keywords: Nature-inspired algorithm; Unconstrained optimization; Squirrel search algorithm

Abstract: This paper presents a novel nature-inspired optimization paradigm, named the squirrel search algorithm (SSA). This optimizer imitates the dynamic foraging behaviour of southern flying squirrels and their efficient way of locomotion known as gliding. Gliding is an effective mechanism used by small mammals for travelling long distances. The present work mathematically models this behaviour to realize the process of optimization. The efficiency of the proposed SSA is evaluated using statistical analysis, convergence rate analysis, Wilcoxon's test and ANOVA on classical as well as modern CEC 2014 benchmark functions. An extensive comparative study is carried out to exhibit the effectiveness of SSA over other well-known optimizers in terms of optimization accuracy and convergence rate. The proposed algorithm is implemented on a real-time Heat Flow Experiment to check its applicability and robustness. The results demonstrate that SSA provides more accurate solutions with a high convergence rate as compared to other existing optimizers.
* Corresponding author.
E-mail addresses: [Link]@[Link] (M. Jain), vijaydee@[Link] (V. Singh), ashansit@[Link] (A. Rani).
[Link]
Received 26 June 2017; Received in revised form 30 November 2017; Accepted 23 February 2018
Available online XXX
2210-6502/© 2018 Elsevier B.V. All rights reserved.
Please cite this article in press as: M. Jain, et al., A novel nature-inspired algorithm for optimization: Squirrel search algorithm, Swarm and
Evolutionary Computation (2018), [Link]
M. Jain et al. Swarm and Evolutionary Computation xxx (2018) 1–28
Table 1
Brief literature review on nature-inspired optimization algorithms.
Algorithm Inspiration Year
Genetic algorithm (GA) [4] Evolution 1975
Simulated annealing (SA) [5] Annealing process in metallurgy 1983
Particle swarm optimization (PSO) [6] Intelligent social behaviour of bird flock 1995
Artificial fish-swarm algorithm (AFSA) [23] Collective intelligence of fish swarm 2003
Termite algorithm [24] Termite colony 2006
Ant colony optimization (ACO) [7] Ant colony 2006
Artificial bee colony (ABC) [8] Honey Bee 2006
Imperialist competitive algorithm (ICA) [25] Imperialistic competition 2007
Monkey search (MS) [26] Monkey climbing process on trees while looking for food 2007
Group search optimizer (GSO) [27] Animal searching (foraging) behaviour 2009
Firefly algorithm (FF) [28] Social behaviour of fireflies 2009
Gravitational search algorithm (GSA) [20] Law of gravity and mass interactions 2009
Bat algorithm (BA) [29] Echolocation behaviour of bats 2010
Flower pollination algorithm (FPA) [30] Pollination process of flowering species 2012
Fruit fly optimization algorithm (FFOA) [31] Fruit foraging behaviour of fruit fly 2012
Krill herd (KH) [32] Herding behaviour of krill individuals in nature 2012
Mine blast algorithm (MBA) [33] Mine bomb explosion 2013
Dolphin echolocation (DE) [34] Echolocation ability of dolphins 2013
Lightning search algorithm (LSA) [35] Natural phenomenon of lightning 2015
Dragonfly algorithm (DA) [18] Static and dynamic swarming behaviours of dragonflies 2015
Artificial algae algorithm (AAA) [36] Living behaviours of microalgae 2015
Ant lion optimizer (ALO) [37] Hunting mechanism of antlions in nature 2015
Shark smell optimization (SSO) [38] Ability of shark in finding its prey by smell sense 2016
Dolphin swarm optimization algorithm (DSOA) [39] Mechanism of dolphins in detecting, chasing and preying on swarms of sardines 2016
Virus colony search [40] Virus infection and diffusion strategies 2016
Whale optimization algorithm (WOA) [41] Social behaviour of humpback whales 2016
Multi-verse optimizer (MVO) [21] Multi-verse theory 2016
Crow search algorithm (CSA) [42] Intelligent food hiding behaviour of crows 2016
Salp swarm algorithm [43] Swarming behaviour of salps during navigating and foraging in oceans 2017
Grasshopper optimization algorithm [44] Swarming behaviour of grasshoppers 2017
Selfish herd optimizer (SHO) [45] Hamilton’s selfish herd theory 2017
Electro-search algorithm [46] Orbital movement of the electrons around the atomic nucleus 2017
Thermal exchange optimization [47] Newton’s law of cooling 2017
Mouth brooding fish algorithm [48] Life cycle of mouth brooding fish 2017
Weighted superposition attraction (WSA) [49] Superposition principle 2017
Spotted hyena optimizer [50] Social behaviour of spotted hyenas 2017
Butterfly-inspired algorithm [51] Mate searching mechanism of butterfly 2017
Lightning attachment procedure optimization [52] Lightning attachment process 2017
eration. PSO, ACO and ABC can be described as representative algorithms in SIs. Some of the recent SIs are cuckoo search (CS) [16], grey wolf optimizer (GWO) [17], dragonfly algorithm (DA) [18] and many more [19]. The physics-based algorithms are inspired by basic physical laws that exist in the universe. Some of the prevailing methods of this category are SA, gravitational search algorithm (GSA) [20], multi-verse optimizer (MVO) [21] and charged system search (CSS) [22]. A brief literature review on nature-inspired algorithms is presented in Table 1.

Apart from this, various modifications have recently been proposed to the basic versions of existing nature-inspired algorithms for solving complex optimization problems. As an illustration, the slow convergence rate of the ABC algorithm is improved in the variant IABC (improved ABC) [53] and tested on several reliability optimization problems. The same issue is resolved by incorporating an improved global-best guiding mechanism in ABC [54], where improved exploitation capability is also achieved through an adaptive limit mechanism. Higher accuracy with a quick convergence characteristic is claimed by covariance-guided ABC [55] while solving the portfolio optimization problem. Likewise, the performance of basic DE is improved for large-scale optimization problems by embedding a simple switching mechanism for two control parameters of DE [56]. The issue of low convergence efficiency of the basic cuckoo search algorithm is resolved by integrating a chaos mechanism, and the resulting improved cuckoo search (ICS) is successfully applied to the optimization problem of visible light communications (VLC) in smart homes [57]. The successful application of fuzzy logic in diversified fields of science has attracted significant attention from metaheuristic developers, and hence various optimization algorithms are improved by utilizing the advantages of fuzzy logic principles. In this context, some of the relevant developments are the fuzzy harmony search algorithm [58], the fuzzy imperialist competitive algorithm with dynamic parameter adaptation [59], the fuzzy-based water cycle algorithm [60], the hierarchical GWO algorithm [61], the interval type-2 fuzzy logic based bat algorithm [62] and the type-2 fuzzy logic based ant colony optimization algorithm [63]. In a similar fashion, principles of quantum computing are incorporated in metaheuristic design, and improved performance is claimed through various studies. The quantum-inspired binary grey wolf optimizer [64], hybrid quantum-inspired genetic algorithm (HQIGA) [65] and quantum-inspired particle swarm optimization (QPSO) [66] are some examples of research in this area. All nature-inspired algorithms possess some common characteristics: (i) they imitate some natural phenomenon; (ii) they do not demand gradient information; (iii) they employ random variables; and (iv) they contain various parameters which must be defined adequately to solve a problem [40]. Each algorithm offers distinctive benefits from the perspective of robustness and performance in the presence of uncertainty and unknown search spaces [40].

The advancement of technology also leads to several complex optimization problems. As an illustration, with the increased usage of social networking websites, huge data volumes are generated every second, which presents a new optimization problem of effectively handling user-generated big data [67]. Another crucial optimization problem is the time-dependent pollution-routing problem [68], which arises from the implementation of new environmental legislation in cities suffering from congestion. Pollution levels increase with congestion due to the increased emission of greenhouse gases by the commercial vehicles of freight companies. In spite of the existence of many prominent optimization algorithms in the literature, the scientific community is still developing new optimization techniques for solving new and more complex optimization problems under the ideology of continuous improvement
in order to achieve better design. Moreover, in accordance with the "no free lunch" (NFL) theorem, there is no single nature-inspired optimization technique which can optimally solve all optimization problems [69]. This means that an optimization algorithm may be competent for solving a certain set of problems but ineffective on another class of problems [70]. The NFL theorem certainly keeps this domain of research open and allows researchers to improve existing algorithms or propose new algorithms for better optimization. Hence, the present study proposes a new, simple and powerful nature-inspired algorithm called the squirrel search algorithm (SSA) for unconstrained numerical optimization problems. This algorithm simulates the dynamic foraging strategy of southern flying squirrels and their efficient way of locomotion known as gliding. The main contributions of the proposed work are as follows:

1. A novel nature-inspired squirrel search optimization algorithm is proposed. The foraging behaviour of flying squirrels is studied thoroughly and modelled mathematically, including each and every feature of their food search.
2. The proposed algorithm is validated on 33 classical as well as modern CEC 2014 benchmark functions.
3. A rigorous comparative study is performed with existing nature-inspired optimization algorithms using statistical analysis, convergence rate analysis, Wilcoxon's test and ANOVA.
4. The robustness and effectiveness of SSA as well as other optimizers are investigated for optimization of a two-degree-of-freedom proportional and integral (2DOFPI) controller for precise temperature control of a heat flow experiment (HFE).

The rest of this paper is organized as follows: Section 2 discusses the motivation of the proposed algorithm. In Section 3 the basic concepts of SSA and its assumptions are discussed. Section 4 presents the details of the implementation of SSA. Section 5 presents the comparison of SSA with existing optimizers. Section 6 provides the comparative statistical analysis of results on standard benchmark functions. Section 7 provides the real-time application of the proposed algorithm. Finally, the work is concluded in Section 8.

2. Inspiration

Flying squirrels are a diversified group of arboreal and nocturnal rodents that are exceptionally adapted for gliding locomotion. Currently, 15 genera and 44 species of flying squirrels are identified, and the majority of them are found in the deciduous forest areas of Europe and Asia, particularly South-eastern Asia. The only species found outside Eurasia, and the most studied, is Glaucomys volans, known as the southern flying squirrel [71]. The flying squirrels are considered to be the most aerodynamically sophisticated, having a parachute-like membrane (patagia) which helps the squirrel in gliding from one tree to the other and makes them capable of modifying lift and drag [72]. The most interesting fact about flying squirrels is that they do not fly; instead they use a special method of locomotion, i.e. "gliding", which is considered to be energetically cheap, allowing small mammals to cover large distances quickly and efficiently [72]. The literature suggests that predator avoidance, optimal foraging and the cost of foraging are the primary causes of the evolution of gliding [73]. Fig. 1a shows a real image of a flying squirrel while gliding, and Fig. 1b shows slow-motion sequences of a flying squirrel before landing on a tree. The squirrels can optimally use food resources by showing a dynamic foraging behaviour [76,77]. For instance, to meet nutritional requirements in autumn, they prefer to eat acorns (a mast nut) as they are available in abundance, while storing other nuts such as hickories in nests, other cavities, and sometimes the ground. During winter, when nutritional demands are higher due to low temperature, hickory nuts are eaten promptly at the site of discovery during foraging and are also taken out from reserve food stores. Therefore, selectively eating some nuts and storing others depending upon the nutritional demands allows optimum utilization of both types of available mast nuts [77]. This intelligent dynamic foraging behaviour of the southern flying squirrel is the main source of motivation for the proposed SSA. In this work, the dynamic foraging strategy and gliding mechanism of flying squirrels are modelled mathematically to design SSA for optimization.

3. Squirrel search algorithm (SSA)

The search process begins when flying squirrels start foraging. During warm weather (autumn) the squirrels search for food resources by gliding from one tree to the other. While doing so, they change their location and explore different areas of the forest. As the climatic conditions are hot enough, they can meet their daily energy needs more quickly on a diet of acorns available in abundance, and hence they consume acorns immediately upon finding them. After fulfilling their daily energy requirement, they start searching for the optimal food source for winter (hickory nuts). The storage of hickory nuts will help them maintain their energy requirements in extremely harsh weather, reduce costly foraging trips, and therefore increase the probability of survival. During winter, a loss of leaf cover in deciduous forests results in an increased risk of predation, and hence they become less active but do not hibernate. At the end of the winter season, flying squirrels again become active. This is a repetitive process that continues throughout the lifespan of a flying squirrel and forms the foundation of SSA. The following assumptions are considered for simplification of the mathematical model:

1. There are n flying squirrels in a deciduous forest, and one squirrel is assumed to be on one tree.
2. Every flying squirrel individually searches for food and optimally utilizes the available food resources by exhibiting a dynamic foraging behaviour.
3. In the forest, only three types of trees are available: normal trees, oak trees (acorn nut food source) and hickory trees (hickory nut food source).
4. The forest region under consideration is assumed to contain three oak trees and one hickory tree.

In the present study, the number of squirrels n is considered to be 50. Four nutritious food resources (Nfs) are considered, with 1 hickory nut tree and 3 acorn nut trees, whereas 46 trees have no food source. That is, 92% of the total population of squirrels is on normal trees, while the remainder is on food sources. However, the number of food resources can be varied as per the constraint 1 < Nfs < n, where Nfs ∈ ℤ>0, with one optimal winter food source.

4.3. Sorting, declaration and random selection

After storing the fitness values of each flying squirrel's location, the array is sorted in ascending order. The flying squirrel with the minimal fitness value is declared to be on the hickory nut tree. The next three best flying squirrels are considered to be on the acorn nut trees, and they are assumed to move towards the hickory nut tree. The remaining flying squirrels are supposed to be on normal trees. Further, through random selection, some squirrels are considered to move towards the hickory nut tree, assuming that they have fulfilled their daily energy requirements. The remaining squirrels will proceed to acorn nut trees (to meet their daily energy needs). This foraging behaviour of flying squirrels is always affected by the presence of predators. This natural behaviour is modelled by employing the location updating mechanism with a predator presence probability (Pdp).
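The sorting and declaration step can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the 50/50 split used for the random selection is an assumption (the paper only states that the selection is random), and all names are illustrative.

```python
import random

def declare_squirrels(fitness, n_food_sources=4):
    """Sort squirrels by fitness (minimization) and assign them to trees.

    Returns the index of the squirrel on the hickory tree, the squirrels on
    the acorn trees, and two groups of normal-tree squirrels split by the
    random selection described in Section 4.3.
    """
    order = sorted(range(len(fitness)), key=lambda i: fitness[i])
    hickory = order[0]                  # best squirrel -> hickory nut tree
    acorn = order[1:n_food_sources]     # next three best -> acorn nut trees
    normal = order[n_food_sources:]     # remaining squirrels -> normal trees

    # Random selection: some normal-tree squirrels head towards the hickory
    # tree (daily energy needs assumed met), the rest towards acorn trees.
    # The 0.5 split probability here is an illustrative assumption.
    towards_hickory = [i for i in normal if random.random() < 0.5]
    towards_acorn = [i for i in normal if i not in towards_hickory]
    return hickory, acorn, towards_hickory, towards_acorn
```

The predator presence probability (Pdp) then perturbs the individual position updates, which are given by Eq. (4ab)–(6ab) in the paper.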
Fig. 2. Conceptual model of flying squirrel moving from one tree to another using gliding locomotion.
Fig. 3. (a) Flying squirrel gliding at equilibrium (b) Approximated model of gliding behaviour.
4.5. Aerodynamics of gliding

The gliding mechanism of flying squirrels is described by equilibrium glide, in which the sum of the lift (L) and drag (D) forces produces a resultant force (R) whose magnitude is equal and opposite to the direction of the flying squirrel's weight (Mg). Thus, R provides a linear gliding path (Fig. 3a) to the flying squirrel at constant velocity (V) [72,78]. In this work an approximated model of gliding behaviour (Fig. 3b) is utilized in the design of the optimization algorithm. A flying squirrel gliding at steady speed always descends at an angle 𝜙 to the horizontal, with the lift-to-drag ratio, or glide ratio, defined as follows [79]:

L/D = 1/tan 𝜙 (7)

The flying squirrels can increase their glide-path length by making a smaller glide angle (𝜙), which increases the lift-to-drag ratio. Here, the lift results from the downward deflection of air passing over the wings and is defined as:

L = (1/2) 𝜌 CL V² S (8)
Fig. 4. (a) Simulated gliding distance (b) Scaling down the gliding distance using sf = 18.
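The simulated gliding distance of Fig. 4 follows directly from Eqs. (7)–(11) of Section 4.5. The sketch below is illustrative, not the authors' code; it uses the parameter values reported in the paper. Note that in this model dg reduces to hg·CL/CD, which reproduces the 9–20 m range and, after division by sf = 18, the interval [0.5, 1.11].

```python
import math
import random

RHO = 1.204   # density of air (kg m^-3)
V = 5.25      # glide speed (m s^-1)
S = 0.0154    # body surface area (m^2), i.e. 154 cm^2
HG = 8.0      # loss in height per glide (m)
CD = 0.60     # frictional drag coefficient (held fixed)
SF = 18.0     # scaling factor chosen in the paper

def gliding_distance(cl):
    """Eqs. (8)-(11): lift, drag, steady-state glide angle, gliding distance."""
    lift = 0.5 * RHO * cl * V ** 2 * S      # Eq. (8)
    drag = 0.5 * RHO * V ** 2 * S * CD      # Eq. (9)
    phi = math.atan(drag / lift)            # Eq. (10)
    return HG / math.tan(phi)               # Eq. (11)

# Random variation of the lift coefficient, as in the paper's simulations.
cl = random.uniform(0.675, 1.5)
dg = gliding_distance(cl)       # falls in roughly [9, 20] m (Fig. 4a)
dg_scaled = dg / SF             # falls in roughly [0.5, 1.11] (Fig. 4b)
```

Because the drag-to-lift ratio collapses to CD/CL, the air density, speed and surface area cancel out of dg; they are kept here only to mirror Eqs. (8) and (9).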
Table 2
The effect of scaling factor (sf ) on the performance of SSA on four benchmark functions.a
Function Parameter sf = 10 sf = 15 sf = 18 sf = 30 sf = 40 sf = 50 sf = 100
TF1 Mean 2.2000E+01 0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00 3.3333E-02
SD 6.6664E+01 0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00 1.8257E-01
TF5 Mean 1.0522E-02 2.1527E-11 3.9454E-22 1.8899E-25 2.9617E-07 2.3469E-06 1.4112E-05
SD 8.5168E-03 7.1976E-11 2.0493E-21 8.4211E-25 1.0297E-06 5.5454E-06 2.7179E-05
TF13 Mean 2.0433E+00 2.4120E-10 0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00 2.9231E-07
SD 2.0746E+00 5.3536E-10 0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00 9.3018E-07
TF23 Mean −182.1364 −186.73 −186.73 −186.73 −186.73 −186.73 −186.73
SD 4.3904E+00 1.1788E-07 1.9029E-14 2.2392E-14 2.4831E-11 1.2022E-05 2.5781E-04
a. See Tables 4, 6, 8 and 10 for benchmark functions.
where 𝜌 (=1.204 kg m⁻³) is the density of air, CL is the lift coefficient, V (=5.25 m s⁻¹) is the speed and S (=154 cm²) is the surface area of the body [79]. The frictional drag is defined as:

D = (1/2) 𝜌 V² S CD (9)

where CD is the frictional drag coefficient. At slow speed this drag component is very high, while at high speed it becomes smaller. Hence, from Eq. (7), the glide angle at steady state is determined as:

𝜙 = arctan(D/L) (10)

The approximated gliding distance (dg) is calculated (Fig. 3b) as follows:

dg = hg / tan 𝜙 (11)

where hg (=8 m) is the loss in height after gliding. All the parametric values, including CL and CD, required to compute dg are taken from real data [72,78,80]. Thus a flying squirrel may vary its glide-path length dg by simply changing the lift-to-drag ratio as per the desired landing location. The simulations are performed by incorporating random variations in CL in the range 0.675 ≤ CL ≤ 1.5, with CD fixed at 0.60.

Flying squirrels generally travel a horizontal gliding distance ranging from 5 to 25 m in a single glide [72]. The gliding distance in the proposed model is considered to be in the range of 9–20 m, which is validated through Fig. 4a. This value of dg is quite large and may introduce large perturbations in Eq. (4ab), Eq. (5ab) and Eq. (6ab), which may cause unsatisfactory performance of the algorithm. The value of dg is therefore scaled down to achieve acceptable performance: dg is divided by a suitable non-zero value called the scaling factor (sf), obtained through rigorous experimentation on benchmark functions. Table 2 presents some recorded results of this experimentation. It is observed that the value of the scaling factor sf may be varied from 16 to 37 in order to achieve the desired level of accuracy without affecting the stability of the algorithm. In the present work, sf = 18 provides a sufficient perturbation range of dg in the interval [0.5, 1.11] (Fig. 4b), and satisfactory performance is achieved for most of the benchmark functions. Thus sf helps to achieve the desired balance between the exploration and exploitation phases, which is a mandatory requirement for designing an efficient metaheuristic.

4.6. Seasonal monitoring condition

Seasonal changes significantly affect the foraging activity of flying squirrels [81]. They suffer significant heat loss at low temperatures, as they possess a high body temperature and small size, which makes foraging costly as well as risky due to the presence of active predators. Climatic conditions force them to be less active in winter as compared to autumn [77]. Thus the movement of flying squirrels is affected by weather changes, and the inclusion of such behaviour may provide a more realistic approach towards optimization. Therefore a seasonal monitoring condition is introduced in SSA, which prevents the proposed algorithm from being trapped in local optimal solutions. The following steps are involved in modelling this behaviour:

a. First calculate the seasonal constant (Sc) using Eq. (12):

Sc^t = sqrt( Σ_{k=1}^{d} (FS_{at,k}^t − FS_{ht,k}^t)² ) (12)

where t = 1, 2, 3.

b. Check the seasonal monitoring condition, i.e. Sc^t < Smin, where Smin is the minimum value of the seasonal constant, computed as:

Smin = 10E−6 / (365)^{t/(tm/2.5)} (13)

where t and tm are the current and maximum iteration values respectively. The value of Smin affects the exploration and exploitation capabilities of the proposed method: a larger value of Smin promotes exploration, while smaller values of Smin enhance the exploitation capability of the algorithm. For any effective metaheuristic there must be a proper balance between these two phases [82]. Although this balance is maintained by the gliding constant Gc (Eq. (4ab), Eq. (5ab) and Eq. (6ab)), it may be improved by adaptively changing the value of Smin during the course of iterations.

c. If the seasonal monitoring condition is found to be true (i.e. the winter season is over), randomly relocate those flying squirrels which could not explore the forest for the optimal winter food source.

4.7. Random relocation at the end of winter season

As discussed previously, the end of the winter season makes flying squirrels active due to low foraging cost. The flying squirrels which could not explore the forest for an optimal food source in winter, and still survived, may forage in new directions. The incorporation of this behaviour in the model may enhance the exploration capability of the proposed algorithm. It is assumed that only those squirrels which could not reach the hickory nut food source, and still survived, will move in different directions in order to find a better food source. The relocation of such flying squirrels is modelled through the following equation:

FS_{nt}^{new} = FSL + Lévy(n) × (FSU − FSL) (14)

where the Lévy distribution encourages better and more efficient search-space exploration. Lévy flight is a powerful mathematical tool used by researchers for improving the global exploration capability of various metaheuristic algorithms [16,83–87]. Lévy flights help to find new candidate solutions far away from the current best solution. A Lévy flight is a kind of random walk in which the step length is drawn from a Lévy distribution. This distribution is often expressed by the power-law formula L(s) ∼ |s|^{−1−𝛽}, where 0 < 𝛽 ≤ 2 is an index. The Lévy distribution is stated mathematically as fol-
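The seasonal monitoring quantities of Eqs. (12) and (13) can be sketched as follows. This is an illustrative reading, assuming the FS terms in Eq. (12) are d-dimensional position vectors and taking the "10E−6" constant of Eq. (13) literally as 10 × 10⁻⁶; neither detail is spelled out in the excerpt above.

```python
import math

def seasonal_constant(fs_at, fs_ht):
    """Eq. (12): distance between an acorn-tree squirrel (fs_at) and the
    squirrel on the hickory nut tree (fs_ht), both d-dimensional vectors."""
    return math.sqrt(sum((a - h) ** 2 for a, h in zip(fs_at, fs_ht)))

def s_min(t, t_max):
    """Eq. (13): iteration-dependent threshold, with 10E-6 read as 10 * 10**-6."""
    return 10e-6 / 365 ** (t / (t_max / 2.5))

# Seasonal check (step b): winter is declared over when Sc drops below Smin,
# which then triggers the random relocation of step c.
sc = seasonal_constant([0.1, 0.2, 0.3], [0.1, 0.2, 0.3])   # 0.0 here
winter_over = sc < s_min(t=1, t_max=100)
```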
direction towards and near to the global optimum point. If R1 < Pdp, i.e. predator presence is felt by a flying squirrel, then it may glide in a random direction or may move to a nearby safe location. A few random locations are shown through vectors in Fig. 5b. In the terminology of optimization algorithms, this random orientation of location vectors may enhance the exploration phase of the proposed algorithm. Likewise, the other cases of the proposed technique illustrated in Fig. 5c–f may be analyzed. It is thus inferred that the proposed technique may explore the complete search space quite efficiently.

It is observed in many studies that "new" algorithms lack rigorous theoretical background, and most of these algorithms are basically previous algorithms under new clothes and interpretation [88]. Hence, the criterion for releasing a new algorithm becomes much higher. There must be significant differences between a new algorithm and the others in order to release it [89]. Therefore, in the present study, SSA is compared on conceptual grounds with other similar metaheuristics.

5. Conceptual comparative analysis of SSA with other metaheuristic algorithms

Broadly, all nature-inspired metaheuristics imitate two distinct features of nature, i.e. adaptability and choice of the fittest, which gives them a similar appearance superficially. Most of the algorithms utilize the concept of a pattern matrix, which is constructed through randomly generated solutions of the optimization problem under consideration [90]. The pattern matrix is recursively updated at each iteration through a suitable updating mechanism. This mechanism injects new attributes or patterns into the pattern matrix while maintaining diversity in the solutions. Metaheuristic algorithms are generally differentiated on the basis of their solution updating strategy. In this work, SSA is compared with particle swarm optimization (PSO), artificial bee colony (ABC), bat algorithm (BA) and firefly algorithm (FF).

5.1. Particle swarm optimization algorithm

PSO imitates the social behaviour of a flock of birds. It starts optimization using randomly generated solutions commonly known as artificial particles. Each particle in the swarm has an associated randomly generated velocity. If Xi is the initial position of the ith particle in the swarm with velocity Vi, then the position updating mechanism of PSO can be defined as follows [91]:

Xi(t + 1) = Xi(t) + Vi(t + 1) (18)

Vi(t + 1) = wVi(t) + C1 ri1 (Pbesti − Xi(t)) + C2 ri2 (Gbest − Xi(t)) (19)

where w is the inertia weight, and C1 and C2 are the cognitive and social constants respectively [91,92]. The ri1 and ri2 are uniformly distributed random numbers in the interval [0, 1] [93], Pbesti is the best previous position (local best solution) of the ith particle, and Gbest is the best position among all the particles (global best solution).

5.1.1. SSA versus PSO

SSA also initiates the optimization process, similar to PSO, by movement of search agents in the search space; however, the movement mechanism is entirely different. Some of the major differences are described as follows:

1. In PSO, the new direction for movement of the ith particle is obtained from Pbesti and Gbest, i.e. the cumulative effect of both is considered. However, SSA uses sorted information and initially divides the pattern matrix into three regions: the global optimum solution (FStht), near-optimal solutions (FStat) and random solutions (FStnt). The random solutions (FStnt) are further randomly bifurcated to redirect the search towards the globally optimum solution (FStht) and near-optimal solutions (FStat). Thus new patterns are injected in three phases. In the first phase, new directions for movement of the search agents (FStat) close to optimal solutions are obtained using the globally best solution (FStht) (Eq. (4ab)). In the second phase, one part of the randomly selected search agents (FStnt) is promoted to move towards near-optimal solutions (FStat) (Eq. (5ab)). In the last phase, the remaining search agents (FStnt) are moved towards the global optimum solution (FStht) (Eq. (6ab)). In other words, PSO updates all solutions in the pattern matrix by a single strategy, whereas SSA employs three strategies in different regions of the pattern matrix. Thus the position updating mechanism of SSA differs from PSO and its variants, i.e. cognitive-only PSO and social-only PSO [91].
2. In PSO, the random numbers (ri1 and ri2) are obtained from a uniform distribution in the interval [0, 1], while SSA uses behaviourally inspired random variations in gliding distance (dg) in the interval [0.5, 1.11].
3. Flying squirrel movement is affected by predator presence, which is modelled using probabilistic behaviour. The inclusion of the predator presence probability suddenly redirects the location of any flying squirrel and hence improves the exploration capability of the algorithm (Eq. (4ab), Eq. (5ab) and Eq. (6ab)), whereas PSO does not utilize a probabilistic phenomenon.
4. The simulation of flying squirrel behaviour provides an opportunity to introduce a seasonal condition in SSA. This invokes the algorithm several times to initiate the search in different directions, and thus the algorithm does not get stuck in local optimal solutions. This feature is not present in PSO due to the natural behaviour of the swarm.

5.2. Artificial bee colony algorithm

The ABC algorithm mimics the foraging process of honey bees. A honey bee colony is composed of three kinds of bees:

• Employed bee: Each employed bee is connected to a nectar-rich food source and strives to search for new, enriched food sources of nectar in the neighbourhood of its present food source. It memorizes the location of a new food source only if it finds that the amount of nectar is more than at its associated present food source. An employed bee also shares this information with onlooker bees near the dancing region of the hive, through a special dance.
• Onlooker bee: Onlooker bees judge the information received from employed bees and move towards new food sources, due to which the probability of finding a better food source also increases.
• Scout bee: An employed bee is converted into a scout bee if its associated food source is fully exploited, and it then searches for a new enriched food source randomly.

In the basic ABC algorithm, food sources are considered as solutions of an optimization problem. The amount of nectar present in a food source directly indicates the quality of the food source and hence the quality of the solution. It is also considered that 50% of the total population consists of employed bees (or food sources) and the remaining 50% are onlooker bees. The location of the ith food source (or employed bee) is represented by a vector Xi = (Xi1, Xi2, …, Xid), and all food sources (SN) can be randomly located initially by Eq. (20) [94]:

Xij = Xjmin + randj(0, 1)(Xjmax − Xjmin) (20)

where i = 1, …, SN and j = 1, …, d, where d is the problem dimension, and Xjmin and Xjmax are the minimum and maximum values of the jth dimension of the problem respectively. The movement of employed bees towards new food sources is modelled through Eq. (21):

Vij = Xij + 𝜙ij(Xij − Xkj) (21)

where k ∈ (1, 2, …, SN) and j are randomly selected indices provided k ≠ i. The 𝜙ij is a uniformly distributed random number within [−1, 1]. The movement of onlooker bees is also decided as per Eq. (21), but utilizes probabilistic information calculated from the roulette wheel method
[94]. In the employed and onlooker bee phases, every bee tries to discover a more qualified food source until a predefined number of runs named the "Limit" is exceeded [95]. If the quality of the solution (fitness value) is not elevated before the predefined Limit, the corresponding bee is declared a scout bee and its location is randomly reinitialized using Eq. (20). This process continues until the stopping criterion is satisfied.

5.2.1. SSA versus ABC

ABC and SSA apparently look quite similar; however, technically both present several differences in their formulation and updating mechanisms.

1. Both ABC and SSA work on the effective division of labour, i.e. the pattern matrix is divided into various regions. In ABC, half of the pop-
Table 3
Parametric settings of algorithms.
Name of Parameter GA PSO BA FF MVO KH SSA
Crossover fraction 0.8 – – – – – –
Selection Tournament – – – – – –
Crossover Arithmetic – – – – – –
Mutation Adaptive feasible – – – – – –
C1 and C2 – 2 – – – – –
Inertia weight (w) – 0.9 – – – – –
Loudness – – 0.5 – – – –
Pulse rate – – 0.5 – – – –
fmin , fmax – – 0, 2 – – – –
𝛼 – – – 0.25 – – –
𝛽 – – – 0.20 – – –
𝛾 – – – 1 – – –
WEPmax , WEPmin – – – – 1, 0.2 – –
Vf – – – – – 0.02 –
Dmax – – – – – 0.005 –
Nmax – – – – – 0.01 –
Nfs – – – – – – 4
Gc – – – – – – 1.9
Pdp – – – – – – 0.1
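Returning to the ABC mechanics above, Eqs. (20) and (21) can be sketched as follows. This is an illustrative reconstruction in Python, not the authors' code; the uniform initialization and the single-dimension perturbation follow Eqs. (20)–(21) as described.

```python
import random

def init_food_sources(SN, d, x_min, x_max):
    """Eq. (20): random initial placement of SN food sources in d dimensions."""
    return [[x_min + random.random() * (x_max - x_min) for _ in range(d)]
            for _ in range(SN)]

def employed_bee_candidate(X, i):
    """Eq. (21): perturb one randomly chosen dimension j of solution i
    towards/away from a randomly chosen neighbour k != i."""
    SN, d = len(X), len(X[0])
    k = random.choice([idx for idx in range(SN) if idx != i])
    j = random.randrange(d)
    phi = random.uniform(-1.0, 1.0)          # phi_ij in [-1, 1]
    V = list(X[i])
    V[j] = X[i][j] + phi * (X[i][j] - X[k][j])
    return V
```

Note that the candidate V differs from X[i] in at most one coordinate, which is the probabilistic neighbourhood selection contrasted with SSA's directed moves in Section 5.2.1.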
Table 4
The description of classical unimodal and separable benchmark functions.
Function Name Expression d Range Fmin
TF1 Step TF1(x) = Σ_{j=1}^{d} (x_j + 0.5)^2 30 [-5.12, 5.12] 0
TF2 Sphere TF2(x) = Σ_{j=1}^{d} x_j^2 30 [-100, 100] 0
TF3 Sum Squares TF3(x) = Σ_{j=1}^{d} j x_j^2 30 [-10, 10] 0
TF4 Quartic TF4(x) = Σ_{j=1}^{d} j x_j^4 + rand 30 [-1.28, 1.28] 0
ulation belongs to employed bees and the remaining half is treated as onlooker bees. On the contrary, SSA initially sorts the pattern matrix in ascending order of fitness and then divides it in a user-controlled manner. In the present work, the first 8% of the population belongs to flying squirrels on food sources and the remaining squirrels are considered to be on normal trees. However, this percentage may vary, depending upon the available food sources, and hence it is modelled as a user-defined variable.
2. In ABC, a new solution is generated by probabilistic neighbourhood selection. The updating mechanism of ABC (Eq. (21)) basically replaces one randomly selected component of the ith solution vector by an arithmetic recombination of that component and the corresponding component from another solution vector. This mechanism shows a conceptual resemblance with the binomial crossover of the DE algorithm [95]. On the other hand, the updating strategy of SSA is a kind of directed search, i.e. new solutions are forced to move towards the best solutions, and therefore a predefined neighbourhood selection is considered. The best solution (FS_ht^t) and a sorted series of locally best solutions (FS_at^t) are considered (Eq. (4ab), Eq. (5ab) and Eq. (6ab)) while altering the solution vector. Apart from this, SSA also uses a probabilistic random location updating strategy which randomly alters the current solution vector (Eq. (4ab),
Table 5
Statistical results obtained by GA, PSO, BA, FF, MVO, KH and SSA through 30 independent runs on classical unimodal and separable benchmark functions.
Function GA PSO BA FF MVO KH SSA
TF1 Best 0 0 2.3690E+03 0 0 0 0
Worst 1 8.000E+01 1.7712E+04 0 2 0 0
Mean 1.3333E-01 8.0333E+00 8.9976E+03 0 4.6667E-01 0 0
SD 3.4575E-01 1.4571E+01 3.5579E+03 0 5.7135E-01 0 0
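The sorted division of the pattern matrix described in the SSA-versus-ABC comparison (best squirrel on the hickory tree, the next few on acorn trees, the rest on normal trees) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; n_fs = 4 food sources (the Nfs value in Table 3) for a population of 50 reproduces the 8% split quoted above.

```python
def partition_population(positions, fitness, n_fs=4):
    """Sort flying squirrels by fitness (ascending, for minimization) and
    split them: best squirrel -> hickory tree (FS_ht), next n_fs - 1 ->
    acorn trees (FS_at), remaining squirrels -> normal trees (FS_nt)."""
    order = sorted(range(len(fitness)), key=lambda i: fitness[i])
    ranked = [positions[i] for i in order]
    fs_ht = ranked[0]          # globally best solution
    fs_at = ranked[1:n_fs]     # near-optimal solutions
    fs_nt = ranked[n_fs:]      # remaining (random) solutions
    return fs_ht, fs_at, fs_nt
```

With 50 squirrels and n_fs = 4, the 4 squirrels on food sources amount to 8% of the population, matching the division described above.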
Table 6
The description of classical unimodal and non-separable benchmark functions.
Function Name Expression d Range Fmin
TF5 Beale TF5(x) = (1.5 − x1 + x1x2)^2 + (2.25 − x1 + x1x2^2)^2 + (2.625 − x1 + x1x2^3)^2 2 [-4.5, 4.5] 0
TF6 Easom TF6(x) = −cos(x1)cos(x2) exp(−(x1 − π)^2 − (x2 − π)^2) 2 [-100, 100] −1
TF7 Matyas TF7(x) = 0.26(x1^2 + x2^2) − 0.48x1x2 2 [-10, 10] 0
TF8 Colville TF8(x) = 100(x1^2 − x2)^2 + (x1 − 1)^2 + (x3 − 1)^2 + 90(x3^2 − x4)^2 + 10.1(x2 − 1)^2 + (x4 − 1)^2 + 19.8(x2 − 1)(x4 − 1) 4 [-10, 10] 0
TF9 Zakharov TF9(x) = Σ_{j=1}^{d} x_j^2 + (Σ_{j=1}^{d} 0.5 j x_j)^2 + (Σ_{j=1}^{d} 0.5 j x_j)^4 10 [-5, 10] 0
TF10 Schwefel 2.22 TF10(x) = Σ_{j=1}^{d} |x_j| + Π_{j=1}^{d} |x_j| 30 [-10, 10] 0
TF11 Schwefel 1.2 TF11(x) = Σ_{j=1}^{d} (Σ_{k=1}^{j} x_k)^2 30 [-100, 100] 0
TF12 Dixon-Price TF12(x) = (x1 − 1)^2 + Σ_{j=2}^{d} j(2x_j^2 − x_{j−1})^2 30 [-10, 10] 0
Table 7
Statistical results obtained by GA, PSO, BA, FF, MVO, KH and SSA through 30 independent runs on classical unimodal and non-separable benchmark functions.
Function GA PSO BA FF MVO KH SSA
TF5 Best 2.1186E-15 1.3624E-14 2.8649E-11 1.2360E-11 9.2179E-09 4.4607E-13 2.1832E-29
Worst 1.0484E-06 4.9376E-01 7.6207E-01 3.9985E-09 7.6207E-01 1.5103E-09 2.7633E-20
Mean 1.5004E-07 6.2888E-02 1.5241E-01 8.4259E-10 5.0805E-02 1.7322E-10 9.5584E-22
SD 2.9899E-07 1.6315E-01 3.1004E-01 7.7095E-10 1.9334E-01 3.0481E-10 5.0400E-21
TF6 Best −1 −1 −1 −1 −1 −1 −1
Worst −1 −1 0 0 0 0 −1
Mean −1 −1 −3.3347E-02 −7.3333E-01 −9.6664E-01 −9.6666E-01 −1
SD 7.9114E-12 2.8316E-11 1.8257E-01 4.4978E-01 1.8257E-01 1.8257E-01 0
Eq. (5ab) and Eq. (6ab)) to improve the exploration phase of the algorithm.
3. The scout bee phase of the ABC algorithm randomly reinitializes a completely exploited food source (Eq. (20)) on the basis of a uniform distribution. Similarly, SSA also incorporates the concept of randomly relocating those flying squirrels which could not explore the forest for the optimal winter food source, but this relocation is based on a Lévy distribution.
4. In ABC, the scout bee phase is initiated if a predefined number of runs is exceeded. Apparently, this seems similar to the seasonal monitoring condition in SSA, but the SSA condition is behaviourally inspired and hence adaptively updated at run-time.

5.3. Bat algorithm

BA simulates the echolocation behaviour of bats [29]. In BA, n virtual bats (solutions) are represented by a position vector X_i, a velocity vector V_i and a frequency F_i in a d-dimensional search space. Initially the bats are randomly distributed. The position of the ith bat in the population is updated as follows:

X_i(t + 1) = X_i(t) + V_i(t + 1)    (22)

V_i(t + 1) = V_i(t) + (X_i(t) − Gbest) F_i    (23)
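A minimal sketch of the BA move in Eqs. (22) and (23) is given below. This is an illustration, not the authors' implementation; the frequency rule F_i = f_min + (f_max − f_min)β is the standard BA one, with the f_min, f_max values taken from Table 3.

```python
import random

def bat_step(x, v, gbest, f_min=0.0, f_max=2.0):
    """One BA move: draw the frequency F_i from beta ~ U[0, 1], pull the
    velocity towards the global best (Eq. (23)), update the position (Eq. (22))."""
    beta = random.random()
    f = f_min + (f_max - f_min) * beta       # frequency F_i
    v_new = [vj + (xj - gj) * f for vj, xj, gj in zip(v, x, gbest)]
    x_new = [xj + vj for xj, vj in zip(x, v_new)]
    return x_new, v_new
```

A bat already sitting at Gbest keeps its velocity unchanged, which illustrates the point below that the pull towards Gbest is the only directional term in BA.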
Table 8
The description of classical multimodal and separable benchmark functions.
Function Name Expression d Range Fmin
TF13 Bohachevsky1 TF13(x) = x1^2 + 2x2^2 − 0.3cos(3πx1) − 0.4cos(4πx2) + 0.7 2 [-100, 100] 0
TF14 Booth TF14(x) = (x1 + 2x2 − 7)^2 + (2x1 + x2 − 5)^2 2 [-10, 10] 0
TF15 Michalewicz2 TF15(x) = −Σ_{j=1}^{d} sin(x_j)(sin(j x_j^2 / π))^20 2 [0, π] −1.8013
TF16 Michalewicz5 TF16(x) = −Σ_{j=1}^{d} sin(x_j)(sin(j x_j^2 / π))^20 5 [0, π] −4.6877
TF17 Michalewicz10 TF17(x) = −Σ_{j=1}^{d} sin(x_j)(sin(j x_j^2 / π))^20 10 [0, π] −9.6602
TF18 Rastrigin TF18(x) = Σ_{j=1}^{d} (x_j^2 − 10cos(2πx_j) + 10) 30 [-5.12, 5.12] 0
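TF18 (Rastrigin) from Table 8 translates directly into code, which makes its claimed global minimum easy to verify (an illustrative sketch, not tied to any particular optimizer):

```python
import math

def rastrigin(x):
    """TF18: sum_j (x_j^2 - 10*cos(2*pi*x_j) + 10); global minimum 0 at x = 0."""
    return sum(xj * xj - 10.0 * math.cos(2.0 * math.pi * xj) + 10.0 for xj in x)
```

The cosine term creates a regular grid of local minima near integer coordinates, which is what makes TF18 a demanding exploration test for the algorithms compared here.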
Table 9
Statistical results obtained by GA, PSO, BA, FF, MVO, KH and SSA through 30 independent runs on classical multimodal and separable benchmark functions.
Function GA PSO BA FF MVO KH SSA
TF13 Best 0 4.4298E-14 1.6438E+00 4.9443E-08 1.0021E-05 2.8890E-08 0
Worst 2.0161E-09 1.3643E-09 4.3439E+02 9.1103E-06 1.5693E-03 1.4369E-06 0
Mean 1.8113E-10 1.4898E-10 6.9719E+01 3.4682E-06 3.5952E-04 2.2989E-07 0
SD 4.4508E-10 2.7559E-10 1.1194E+02 2.6636E-06 3.7081E-04 2.9457E-07 0
F_i = f_min + (f_max − f_min) β    (24)

where β is a uniformly distributed random number in the range [0, 1]. The exploration capability of the algorithm is enhanced by employing a local search strategy: if a solution satisfies a certain condition (rand > pulse rate r), then a new solution is generated through a random walk [29].

5.3.1. SSA versus BA

1. BA generates the new direction of movement of the ith bat by considering the best position (Gbest) obtained by any bat so far. BA is basically a balanced combination of PSO and local search [96]. In SSA, the movements of flying squirrels are directed by the globally best flying squirrel
Table 10
The description of classical multimodal and non-separable benchmark functions used in experimental test 4.
Function Name Expression d Range Fmin
TF19 Schaffer TF19(x) = 0.5 + (sin^2(sqrt(x1^2 + x2^2)) − 0.5) / (1 + 0.001(x1^2 + x2^2))^2 2 [-100, 100] 0
TF20 Six Hump Camel Back TF20(x) = 4x1^2 − 2.1x1^4 + (1/3)x1^6 + x1x2 − 4x2^2 + 4x2^4 2 [-5, 5] −1.03163
TF21 Bohachevsky2 TF21(x) = x1^2 + 2x2^2 − 0.3cos(3πx1)cos(4πx2) + 0.3 2 [-100, 100] 0
TF22 Bohachevsky3 TF22(x) = x1^2 + 2x2^2 − 0.3cos(3πx1 + 4πx2) + 0.3 2 [-100, 100] 0
TF23 Shubert TF23(x) = (Σ_{j=1}^{5} j cos((j + 1)x1 + j)) (Σ_{j=1}^{5} j cos((j + 1)x2 + j)) 2 [-10, 10] −186.73
TF24 Rosenbrock TF24(x) = Σ_{j=1}^{d−1} [100(x_{j+1} − x_j^2)^2 + (x_j − 1)^2] 30 [-30, 30] 0
TF25 Griewank TF25(x) = (1/4000) Σ_{j=1}^{d} (x_j − 100)^2 − Π_{j=1}^{d} cos((x_j − 100)/√j) + 1 30 [-600, 600] 0
TF26 Ackley TF26(x) = −20 exp(−0.2 sqrt((1/d) Σ_{j=1}^{d} x_j^2)) − exp((1/d) Σ_{j=1}^{d} cos(2πx_j)) + 20 + e 30 [-32, 32] 0
Fig. 6. Variation in global optimization results for benchmark functions (a) TF1, (b) TF2, (c) TF3 and (d) TF4.
(FS_ht^t) as well as a few locally found best flying squirrels (FS_at^t).
2. In BA, exploration is enhanced by randomly updating a bat's location when the condition rand > pulse rate (r) holds. The exploration phase of SSA, however, is enhanced by relocating those flying squirrels which could not explore the forest for the optimal winter food source. In BA, the random walk is generally implemented on the basis of a normal distribution, whereas better exploration is achieved in SSA using a Lévy distribution.
3. In the position updating mechanism of BA (Eq. (23)), the difference (X_i(t) − Gbest) is multiplied by a random number F_i; in SSA, a proper balance between exploration and exploitation is achieved in Eq. (4ab), Eq. (5ab) and Eq. (6ab) by additionally incorporating the gliding constant (Gc).

5.4. Firefly algorithm

The FF algorithm proposed by Yang [82] is based on the concept of flashing light production by fireflies. Fireflies produce light through the phenomenon of bioluminescence to attract partners for mating. These flashes are also used to attract prey or to warn off predators. The FF algorithm assumes that the interaction of fireflies is governed by the following rules [97]:

1. One firefly will be attracted by all other fireflies regardless of their sex.
2. Attraction is directly proportional to light intensity (or brightness); thus a less bright firefly will tend to move towards a brighter one. Both the brightness and the attractiveness are inversely proportional to the distance between two fireflies, and hence both decrease as their distance increases. If there is no brighter firefly, the firefly in question moves in a random direction.
3. The brightness of any firefly is determined from the value of the fitness function.

The movement of the ith firefly towards the more attractive jth firefly is determined by Eq. (25) [82]:

X_i = X_i + β0 exp(−γ r_ij^2)(X_j − X_i) + α ε_i    (25)

where α is the randomization parameter, ε_i is a random number drawn from a Gaussian distribution, γ is the fixed light absorption coefficient, β0 is the attractiveness at r = 0, and r_ij is the Euclidean distance between fireflies i and j, calculated as follows:

r_ij = ||X_i − X_j|| = sqrt(Σ_{l=1}^{d} (X_il − X_jl)^2)    (26)

Eq. (25) is basically composed of three terms: the first term represents the present location of the ith firefly, the second denotes the attraction of the ith firefly towards the more attractive jth firefly, and the last is a random walk.

5.4.1. SSA versus FF

Both FF and SSA are population-based techniques; the differences between them are as follows:
Fig. 7. Convergence rate comparison for benchmark functions (a) TF1, (b) TF2, (c) TF3 and (d) TF4.
Fig. 8. Variation in global optimization results for benchmark functions (a) TF5, (b) TF6, (c) TF7 (d) TF8 (e) TF9, (f) TF10, (g) TF11 and (h) TF12.
Fig. 9. Convergence rate comparison for benchmark functions (a) TF5, (b) TF6, (c) TF7 (d) TF8 (e) TF9, (f) TF10, (g) TF11 and (h) TF12.
Fig. 10. Variation in global optimization results for benchmark functions (a) TF13, (b) TF14, (c) TF15, (d) TF16, (e) TF17 and (f) TF18.
Fig. 11. Convergence rate comparison for benchmark functions (a) TF13, (b) TF14, (c) TF15, (d) TF16, (e) TF17 and (f) TF18.
As discussed previously, an efficient metaheuristic must possess a proper balance between exploration and exploitation. However, there is no rule of thumb [98] to achieve this. Minor differences in the solution updating strategy and the random distributions used may have a huge impact on the performance of the designed algorithm [90]. Consequently, SSA becomes a good competitor to existing metaheuristics.

6. Experimental study

The performance of the proposed SSA is analyzed by carrying out rigorous experimentation on classic and modern numerical optimization problems. Initially, experimentation is conducted on 26 well-known classic benchmark test functions [99,100]. These functions are variously continuous, discontinuous, linear, non-linear, unimodal, multimodal, convex, non-convex, separable and non-separable. However, for testing and validation of a new algorithm, functional features like dimensionality, modality and separability are relatively more significant. The difficulty of a problem is considered to increase with the number of function dimensions, as the search space grows exponentially [101]. The modality of a function is the number of ambiguous peaks in the function surface; a function having two or more ambiguous peaks is called multimodal. An algorithm that encounters these peaks during its search may get trapped in these local optimum solutions. This can affect the search process adversely and divert it in a direction far away from the optimal region. Separability, on the other hand, refers to the difficulty level offered by different benchmark test functions. In general, separable functions are easier to solve than their non-separable counterparts, as each variable of a separable function is independent of the other variables. In the present study, the 26 classic benchmark functions are classified on the basis of their modality and separability (Tables 4, 6, 8 and 10), and four experimental tests are performed to evaluate the performance of SSA. The first and second experimental studies confirm the exploitation capability of the proposed algorithm, while the third and fourth check its exploration capabilities. The fifth experimental study is based on 7 modern numerical optimization problems taken from the IEEE CEC 2014 special session and competition on single-objective real-parameter numerical optimization. These benchmark functions have several novel features: they include novel basic problems together with shifted, rotated, expanded, and combined variants of the most complicated mathematical optimization problems presented in the literature [102].

In the last decade a large number of metaheuristics has been proposed by various authors. Some are modified or improved versions of basic algorithms, while others are based on entirely new concepts. It is observed from the literature that newly developed or modified metaheuristics are often not explained in detail [103]. Further, the techniques are documented with partial parametric settings, due to which exact replication of experiments and results becomes almost impossible. Moreover, there is no uniformity in the selection of benchmark test suites, and experimental conditions are not identical to the original ones [103,104]. Therefore, in the present work, basic versions of commonly used optimization algorithms are implemented and used for comparative analysis. In each experimental study, the performance of the proposed SSA is compared with six nature-inspired optimization algorithms, namely GA, PSO, BA, FF, MVO and KH. For a fair comparison, the population size and maximum number of iterations are set to 50 and 500, respectively, in the first four experimental tests, while for the fifth experimental study the maximum number of iterations is 6000 to obtain 300,000 function evaluations (NFEs) as per the CEC 2014 recommendation [102]. The commonly used parametric settings of all algorithms are listed in Table 3. 30 independent runs of each algorithm are considered for every benchmark function in each experiment, and the best results are boldfaced throughout the paper.
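The pairwise significance comparison reported later (Table 22) uses Wilcoxon's test at α = 0.05 over the 30 runs. One way to reproduce such a +/− decision is a rank-sum test under the normal approximation; the hand-rolled sketch below is illustrative (no tie correction in the variance, and the authors' exact test procedure is not published):

```python
import math

def rank_sum_p(sample_a, sample_b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation
    (reasonable for 30 runs per algorithm; ties get average ranks)."""
    n1, n2 = len(sample_a), len(sample_b)
    pooled = sorted(sample_a + sample_b)
    rank_of = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + j + 1) / 2.0   # average 1-based rank
        i = j
    w = sum(rank_of[v] for v in sample_a)        # rank sum of sample A
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))    # two-sided p-value
```

A "+" would then be recorded when the p-value falls below α = 0.05, mirroring the win marks in Table 22.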
Fig. 12. (a) Perspective view of Rastrigin function (TF18), (b) Position of flying squirrels after 1st iteration, (c) 5th iteration, (d) 10th iteration and (e) 15th iteration.
6.1. Experimental test 1

This experiment evaluates the effectiveness and accuracy of SSA in solving benchmark functions with unimodal and separable characteristics (Table 4). The best, worst, mean and standard deviation (SD) of the results obtained from each algorithm after 30 independent trials are recorded in Table 5. It is clear from the results that SSA succeeds in finding the global optimum on TF1, TF2 and TF3. None of the algorithms could find the global optimum solution for TF4, but the results of FF are better than those of all the other methods. For TF1, the performance of SSA is found to be identical to FF and KH but far better than the other methods. Only SSA could reach the global optimum region while solving TF2 and TF3, where the other methods fail. To analyze the performance of the proposed algorithm on unimodal functions, an ANOVA test (Fig. 6) and a convergence rate analysis (Fig. 7) are also performed. The data obtained over 30 independent runs are plotted (Fig. 6), and it is observed that the performance of SSA is satisfactory for all functions, because the 25th and 75th percentiles of the samples collected for SSA decline towards the minimum solution within a narrow interquartile range. The comparison of convergence rates shown in Fig. 7 reveals that SSA converges faster than the other algorithms and hence possesses superior convergence capability for such optimization problems.
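The best/worst/mean/SD entries reported in Table 5 (and the later result tables) can be computed from the final objective values of each algorithm's 30 runs. A small sketch follows; it uses the sample standard deviation (n − 1), which is an assumption, since the paper does not state which variant it uses:

```python
import math

def run_statistics(final_values):
    """Best, worst, mean and standard deviation over independent runs,
    as tabulated for each algorithm/function pair (e.g. Table 5)."""
    n = len(final_values)
    mean = sum(final_values) / n
    var = sum((v - mean) ** 2 for v in final_values) / (n - 1)  # sample variance
    return min(final_values), max(final_values), mean, math.sqrt(var)
```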
Table 11
Statistical results obtained by GA, PSO, BA, FF, MVO, KH and SSA through 30 independent runs on classical multimodal and non-separable benchmark functions.
Function GA PSO BA FF MVO KH SSA
TF19 Best 0 0 9.7159E-03 4.6615E-03 2.1517E-06 1.2317E-07 0
Worst 1.6601E-11 9.7159E-03 3.7329E-01 3.8682E-02 9.7159E-03 2.0569E-05 9.7159E-03
Mean 1.6854E-12 3.5625E-03 1.4118E-01 1.1809E-02 1.3113E-03 3.3604E-06 9.7159E-04
SD 4.1693E-12 4.7621E-03 1.0699E-01 7.3682E-03 3.3529E-03 4.1334E-06 2.9646E-03
6.2. Experimental test 2

This experimental test is conducted to observe the performance and consistency of SSA in solving unimodal but non-separable functions (Table 6). The difficulty level of this test is a bit higher than that of Test 1, as the functions under consideration have non-separable characteristics. The statistical results of 30 independent runs, obtained by SSA and the other optimization methods, are presented in Table 7. It is clear from the results that SSA outperforms the other techniques, as it could find the global minimum or a near-global minimum for all eight benchmark functions. To analyze the consistency and overall performance of SSA, the data obtained from the 30 runs are used for an ANOVA test, whose results are plotted in Fig. 8. The performance of SSA is found satisfactory and consistent for all functions because the 25th and 75th percentiles of the samples collected for SSA decline towards the minimum solution within a narrow interquartile range. Further, the convergence rate of SSA (Fig. 9) is found to be better than that of the other optimization algorithms for each benchmark function. It is observed from these two experimental studies that the performance of SSA is accurate as well as consistent for unimodal functions. This is due to the realistic modelling of the flying squirrels' ability to select optimal food sources: the flying squirrels search around the neighbourhood of previously visited solutions, which provides SSA with adequate exploitation capability.

6.3. Experimental test 3

The purpose of this experimental test is to check the exploration capability of the proposed SSA, as the functions under consideration have multimodal and separable characteristics. The details of these functions are provided in Table 8. The recorded results of statistical analysis over 30 independent runs of each algorithm while solving these multimodal functions are presented in Table 9. It is evident from the results that SSA is superior on TF13, TF14, TF15 and TF18, whereas its performance is comparable to the other methods on TF16. GA defeats SSA and the other methods on TF17, as it succeeds in finding the best solution. Variance analysis (Fig. 10) of all algorithms also depicts that SSA has a lower median value (marked by a red "-") than the other optimization methods for TF13, TF14, TF15 and TF18. However, the results of SSA deviate slightly from those of the other methods for TF16 and TF17 (Fig. 10d–e).

Fig. 11 shows the recorded convergence characteristics of all algorithms while solving the multimodal and separable functions. The results reveal that SSA offers a better convergence rate than the other six optimization algorithms for TF13, TF14, TF15 and TF18. The KH algorithm defeats SSA in the case of TF16 by providing a very fast convergence response. KH and SSA have good convergence responses for TF17 but could find only a near-optimal solution, whereas GA found the best solution with an average convergence response. Further, the search history of SSA is also recorded (Fig. 12b–e) while solving the very complicated Rastrigin function (Fig. 12a). It is observed from Fig. 12 that the flying squirrels explored the search space in the beginning, but within 15 iterations most of them had moved towards the optimal winter food source. Hence SSA offers promising performance even for a complex multimodal function.

6.4. Experimental test 4

This test is designed with the highest difficulty level in comparison to the previous experimental studies. The functions under consideration have multimodal as well as non-separable characteristics. Table 10 presents
Fig. 13. Variation in global optimization results for benchmark functions (a) TF19, (b) TF20, (c) TF21, (d) TF22, (e) TF23, (f) TF24, (g) TF25 and (h) TF26.
Fig. 14. Convergence rate comparison for benchmark functions (a) TF19, (b) TF20, (c) TF21, (d) TF22, (e) TF23, (f) TF24, (g) TF25 and (h) TF26.
the details of 8 such benchmark functions with both low and high dimensions. The recorded statistical results for the 8 benchmark functions are given in Table 11. The results reveal that SSA outperforms the other methods on 7 out of the 8 benchmark functions. SSA found the best solution for TF19 but could not maintain consistency, as its standard deviation (SD) is much higher than that of GA. All the methods achieve the global optimal solution in the case of TF20 and TF23, but SSA is found to be the most consistent as it offers the lowest SD. The results of the ANOVA test (Fig. 13) also confirm the satisfactory performance of the proposed algorithm. Further, the convergence rate comparison of the optimization algorithms (Fig. 14) reveals that SSA has superior convergence behaviour, with sufficient accuracy, compared to the other optimization methods on these highly complex multimodal and non-separable benchmark functions. The results of experimental tests 3 and 4 reveal that SSA provides better exploration capability than the algorithms under consideration. This is due to the incorporation of the attributes regarding predator presence and seasonal conditions into the foraging behaviour of the flying squirrels.

6.5. Experimental test 5

The aim of this test is to evaluate the effectiveness and robustness of SSA. Therefore the most intensely investigated benchmark functions, those used in IEEE CEC 2014, are considered for this purpose. CEC 2014 was
Table 12
The brief description of CEC 2014 benchmark functions.
Function Name d Type Range Fmin
TF27 Rotated High Conditioned Elliptic Function (CEC1) 30 U, N [-100, 100] 100
TF28 Rotated Bent Cigar Function (CEC2) 30 U, N [-100, 100] 200
TF29 Shifted and Rotated Rosenbrock’s Function (CEC4) 30 M, N [-100, 100] 400
TF30 Hybrid Function 1 (CEC17) 30 – [-100, 100] 1700
TF31 Composition Function 1 (CEC23) 30 – [-100, 100] 2300
TF32 Composition Function 2 (CEC24) 30 – [-100, 100] 2400
TF33 Composition Function 3 (CEC25) 30 – [-100, 100] 2500
Table 13
Statistical results obtained by GA, PSO, BA, FF, MVO, KH and SSA through 30 independent runs on CEC 2014 benchmark functions with 30 dimensions.
Function GA PSO BA FF MVO KH SSA
TF27 Best 3.2121E+08 2.3069E+07 7.2988E+07 5.1533E+05 9.9798E+05 5.6081E+06 9.1044E+04
Worst 4.6388E+08 4.0159E+08 2.5523E+09 5.6161E+06 5.9024E+06 9.2581E+07 1.7503E+06
Mean 3.8257E+08 1.2539E+08 1.0102E+09 2.1259E+06 2.8959E+06 1.7788E+07 8.1899E+05
SD 3.5194E+07 7.9595E+07 6.1849E+08 1.0427E+06 1.1257E+06 1.6125E+07 4.0164E+05
TF28 Best 5.3189E+10 4.4691E+09 3.2169E+10 1.4579E+03 5.8171E+03 7.6562E+03 2.1599E+02
Worst 6.1164E+10 3.5536E+10 9.2830E+10 2.8959E+04 3.8972E+04 2.3140E+05 2.8185E+04
Mean 5.7641E+10 1.4965E+10 6.6295E+10 1.2959E+04 1.7855E+04 5.3419E+04 1.0049E+04
SD 1.9557E+09 8.4581E+09 1.5227E+10 9.0248E+03 1.0593E+04 4.6499E+04 9.8268E+03
Fig. 15. Variation in global optimization results for benchmark functions (a) TF27, (b) TF28, (c) TF29, (d) TF30, (e) TF31, (f) TF32 and (g) TF33.
Fig. 16. Convergence rate comparison for benchmark functions (a) TF27, (b) TF28, (c) TF29, (d) TF30, (e) TF31, (f) TF32 and (g) TF33.
Table 14
The effect of variation of loudness (A) keeping pulse rate (r = 0.5) on the performance of BA.
Function Parameter A=2 A = 1.5 A=1 A = 0.5 A = 0.35 A = 0.25 A = 0.1
TF1 Mean 7.0930E+03 5.1973E+03 7.3436E+03 8.0254E+03 9.3928E+03 1.0877E+04 1.0908E+04
SD 2.7636E+03 1.8015E+03 2.5153E+03 2.9201E+03 3.4583E+03 3.4485E+03 3.7797E+03
TF9 Mean 7.1232E+01 5.6507E+01 7.7221E+01 5.6136E+01 6.4776E+01 7.8141E+01 6.6976E+01
SD 7.3551E+01 5.5139E+01 8.5643E+01 5.4799E+01 4.8650E+01 7.2201E+01 7.6153E+01
TF13 Mean 5.9455E+01 7.4631E+01 6.7279E+01 1.0399E+02 1.8798E+02 9.0527E+01 1.1965E+02
SD 1.1565E+02 1.3289E+02 9.6873E+01 1.3961E+02 2.2600E+02 1.2093E+02 1.6878E+02
TF19 Mean 1.7442E-01 1.3640E-01 1.3837E-01 1.7202E-01 1.6560E-01 2.0144E-01 1.7363E-01
SD 1.1084E-01 1.0372E-01 1.0675E-01 1.0729E-01 1.0390E-01 1.2805E-01 1.1782E-01
a special session and competition on single-objective real-parameter numerical optimization problems. These modern benchmark functions are especially equipped with various novel characteristics, such as basic problems with shifting and rotation. Several hybrid and composite test problems are designed by extracting features dimension-wise from various problems, graded levels of linkages, rotated trap problems, and so on [102]. In this experimental study, seven CEC 2014 functions are considered, with at least one function from each category; the details are provided in Table 12. The results obtained from each algorithm after 30 independent runs are recorded in Table 13. As mentioned previously, the CEC 2014 functions are specially developed with complex features; consequently, all the algorithms can hardly find the global optimum for
Table 15
The effect of variation of pulse rate (r) keeping loudness (A = 0.5) on the performance of BA.
Function Parameter r = 0.9 r = 0.7 r = 0.6 r = 0.4 r = 0.35 r = 0.25 r = 0.1
TF1 Mean 1.0321E+04 1.0506E+04 9.0349E+03 7.6793E+03 8.2458E+03 7.7267E+03 7.9055E+03
SD 4.4886E+03 3.8773E+03 3.7198E+03 3.2272E+03 2.9998E+03 2.7651E+03 2.4840E+03
TF9 Mean 1.6041E+02 7.3076E+01 8.7196E+01 5.6924E+01 4.1761E+01 3.5332E+01 1.6734E+01
SD 1.1298E+02 5.3355E+01 9.6869E+01 6.3834E+01 5.8968E+01 5.1145E+01 2.7489E+01
TF13 Mean 2.8402E+02 1.9413E+02 1.2746E+02 3.8388E+01 9.6727E+01 3.9832E+01 1.9349E+01
SD 3.1375E+02 1.8680E+02 1.8827E+02 6.4789E+01 1.7199E+02 1.0640E+02 4.4570E+01
TF19 Mean 2.1917E-01 1.6084E-01 2.2677E-01 1.9497E-01 1.5760E-01 1.9243E-01 1.5051E-01
SD 1.4373E-01 1.0701E-01 1.3385E-01 1.0420E-01 1.2137E-01 1.1164E-01 1.0732E-01
Table 16
The effect of variation of cognitive constant (C1 ) keeping social constant (C2 = 2) and inertial weight (w = 0.9) on the performance of PSO.
Function Parameter C1 = 0.1 C1 = 0.3 C1 = 0.5 C1 = 1 C1 = 1.3 C1 = 1.5 C1 = 2
TF13 Mean 5.7648E-08 9.6235E-09 2.4484E-09 3.6733E-10 3.9848E-10 2.1737E-10 1.6065E-10
SD 2.5309E-07 4.5132E-08 9.0398E-09 1.3841E-09 6.9483E-10 4.8836E-10 3.4677E-10
Table 17
The effect of variation of 𝛼 keeping 𝛽 = 0.20 and 𝛾 = 1 on the performance of FF.
Function Parameter 𝛼 = 0.1 𝛼 = 0.25 𝛼 = 0.3 𝛼 = 0.5 𝛼 = 0.6 𝛼 = 0.8 𝛼 = 0.9
TF24 Mean 3.9836E+01 5.3053E+01 1.3151E+02 8.2012E+01 2.2691E+02 7.3650E+01 8.7297E+01
SD 2.6186E+01 7.4206E+01 2.3776E+02 8.6548E+01 4.5404E+02 8.2723E+01 1.0847E+02
Table 18
The effect of variation of WEPmax keeping WEPmin = 0.2 on the performance of MVO.
Function Parameter WEPmax = 0.3 WEPmax = 0.6 WEPmax = 0.9 WEPmax = 1.1 WEPmax = 1.3 WEPmax = 1.5
TF14 Mean 1.1116E-06 5.4784E-07 6.1269E-07 6.4180E-07 5.0591E-07 4.6799E-07
SD 1.2520E-06 6.8009E-07 5.1326E-07 6.9295E-07 4.6183E-07 2.9343E-07
Table 19
The effect of variation of Vf keeping Dmax = 0.005 and Nmax = 0.01 on the performance of KH algorithm.
Function Parameter Vf = 0.05 Vf = 0.1 Vf = 0.3 Vf = 0.6 Vf = 0.8 Vf = 1 Vf = 1.2
TF21 Mean 4.8877E-08 8.8259E-08 1.4895E-06 7.4618E-06 2.1242E-05 3.9623E-05 6.1378E-05
SD 7.2770E-08 8.2842E-08 1.7252E-06 9.0189E-06 2.8757E-05 5.8044E-05 6.5387E-05
Table 20
The effect of different types of Crossover on the performance of GA while keeping Crossover fraction = 0.8, Tournament
selection and Adaptive feasible mutation.
Function Parameter Crossover = Scattered Crossover = Single point Crossover = Two points
TF18 Mean 9.2654E+00 1.1424E+01 1.0583E+01
SD 3.3379E+00 2.9959E+00 2.9161E+00
Table 21
The effect of different types of selection operators on the performance of GA while keeping Crossover fraction = 0.8, Arithmetic
crossover and Adaptive feasible mutation.
Function Parameter Selection = Stochastic uniform Selection = Uniform Selection = Roulette
TF18 Mean 1.1026E+02 1.9581E+02 1.0917E+02
SD 3.0096E+01 3.0510E+01 2.8615E+01
Table 22
Results of Wilcoxon’s test for SSA against other six algorithms for each benchmark function with 30 independent runs (𝛼 = 0.05).
Function GA vs SSA SPSO vs SSA BA vs SSA FF vs SSA MVO vs SSA KH vs SSA
p-value win p-value win p-value win p-value win p-value win p-value win
TF1(x) 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF2(x) 8.2702E-10 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF3(x) 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF4(x) 5.3430E-08 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF5(x) 4.8415E-13 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF6(x) 3.0495E-05 + 4.9499E-14 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF7(x) 3.1E-03 + 7.8683E-05 + 1.6911E-17 + 3.5480E-14 + 1.8614E-08 + 1.0603E-07 +
TF8(x) 9.2268E-14 + 1.7411E-10 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF9(x) 9.2268E-14 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF10(x) 5.2425E-16 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF11(x) 1.6911E-17 + 4.9940E-01 – 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF12(x) 1.6911E-17 + 7.6236E-14 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF13(x) 6.125E-01 – 2.2395E-06 + 3.5555E-09 + 5.039E-01 – 8.2025E-04 + 5.32E-02 –
TF14(x) 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF15(x) 1.0858E-10 + 6.438E-01 – 5.9588E-08 + 9.1611E-08 + 9.357E-01 – 5.4264E-04 +
TF16(x) 7.9701E-04 + 1.6911E-17 + 1.6911E-17 + 1.000E+00 – 1.2362E-04 + 1.000E+00 –
TF17(x) 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF18(x) 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF19(x) 1.5721E-13 + 3.1900E-02 + 1.6911E-17 + 3.2131E-16 + 1.7733E-05 + 1.5474E-14 +
TF20(x) 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF21(x) 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF22(x) 3.8938E-13 + 1.6911E-17 + 1.6911E-17 + 1.1331E-15 + 1.6911E-17 + 1.6911E-17 +
TF23(x) 6.7645E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF24(x) 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF25(x) 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF26(x) 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF27(x) 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 9.3016E-11 + 2.3507E-15 + 1.6911E-17 +
TF28(x) 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 6.333E-01 – 8.2025E-04 + 7.5007E-09 +
TF29(x) 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 5.91E-02 – 1.42E-02 + 4.6453E-07 +
TF30(x) 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.4295E-09 + 4.8415E-13 + 1.6911E-17 +
TF31(x) 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF32(x) 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
TF33(x) 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 + 1.6911E-17 +
+/- 32/1 31/2 33/0 29/4 32/1 31/2
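The '+/-' totals in the last row are a per-column tally: for each competitor, a '+' is counted when the p-value falls below α = 0.05 (SSA significantly better) and a '-' otherwise. A minimal sketch of that tally, using an illustrative subset of one column of Table 22:

```python
alpha = 0.05
# Illustrative subset of p-values from one competitor column (not the full table)
p_values = [1.6911e-17, 4.9940e-01, 2.2395e-06, 6.438e-01]
signs = ['+' if p < alpha else '-' for p in p_values]
tally = f"{signs.count('+')}/{signs.count('-')}"
print(tally)  # -> 2/2
```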
Table 23
Average error rates offered by various algorithms, including SSA, for the 33 benchmark functions.
Function GA SPSO BA FF MVO KH SSA
TF1(x) 1.5004E-07 6.2888E-02 1.5241E-01 8.4259E-10 5.0805E-02 1.7322E-10 9.5584E-22
TF2(x) 0 0 9.6665E-01 2.6667E-01 3.3359E-02 3.334E-02 0
TF3(x) 1.2012E-06 8.4738E-13 2.9659E-11 3.1396E-10 1.3148E-08 8.8694E-12 1.542E-25
TF4(x) 1.8113E-10 1.4898E-10 6.9719E+01 3.4682E-06 3.5952E-04 2.2989E-07 0
TF5(x) 9.8814E-11 8.7865E-11 6.7996E-10 5.4966E-09 5.8026E-07 1.9913E-10 9.5859E-25
TF6(x) 0 0 1.851E-01 0 0 0 0
TF7(x) 1.6854E-12 3.5625E-03 1.4118E-01 1.1809E-02 1.3113E-03 3.3604E-06 9.7159E-04
TF8(x) 0 0 8.162E-02 0 0 0 0
TF9(x) 1.3099E-01 8.7221E-11 8.1426E+01 2.6897E-06 2.7308E-04 3.0532E-08 0
TF10(x) 9.8046E-02 4.5608E-11 8.6329E+01 2.2231E-06 2.0969E-04 2.2472E-08 0
TF11(x) 5.8943E+01 0 4.2382E+01 7.2509E-01 0 0 0
TF12(x) 2.9713E-02 1.3576E+00 1.1779E+02 6.9498E-02 1.3897E-02 1.4707E+00 1.4309E-09
TF13(x) 1.783E-01 6.661E-01 1.1902E+00 1.1710E-01 6.996E-01 5.381E-01 3.398E-01
TF14(x) 9.3216E-03 2.8145E+00 5.7976E+01 1.9991E-05 3.3653E-04 5.7794E-01 5.2215E-09
TF15(x) 7.697E-01 2.5608E+00 3.9254E+00 1.1805E+00 2.2744E+00 1.8256E+00 2.0702E+00
TF16(x) 1.3333E-01 8.0333E+00 8.9976E+03 0 4.6667E-01 0 0
TF17(x) 4.4609E+00 1.3576E+03 3.9384E+04 1.1597E-02 7.8582E-01 5.7558E-02 4.1689E-08
TF18(x) 6.4872E+01 1.9168E+02 2.9497E+03 6.5045E-01 6.1321E-01 5.7491E-02 1.5201E-07
TF19(x) 2.9543E+00 1.0084E+00 1.8324E+01 3.2152E-02 2.6035E-01 5.2810E-02 5.0192E-01
TF20(x) 1.0964E+01 1.8341E+01 6.3387E+05 3.7229E-01 4.2299E+00 4.5339E+01 5.1849E-04
TF21(x) 6.2471E+01 1.3279E+04 5.4751E+05 2.3487E+01 5.6701E+01 7.6901E+00 1.6925E-05
TF22(x) 1.0939E+01 9.1204E+05 6.6428E+07 1.6842E+02 4.5532E+02 1.0894E+02 9.4919E-01
TF23(x) 1.3711E+00 1.1892E+03 1.8201E+05 3.0817E+00 5.1183E+00 1.7567E+00 2.2412E-01
TF24(x) 6.3527E+01 1.0309E+02 1.2192E+02 2.5069E+01 1.1867E+02 1.2391E+01 4.9059E-07
TF25(x) 1.9528E-01 1.5221E+01 3.4748E+02 5.6221E-03 7.4609E-01 3.9617E-02 3.435E-06
TF26(x) 3.2409E+00 9.8587E+00 1.9834E+01 5.3031E-02 1.5634E+00 1.4531E+00 1.3915E-04
TF27(x) 3.8257E+08 1.2538E+08 1.0102E+09 2.1258E+06 2.8958E+06 1.7788E+07 8.1889E+05
TF28(x) 5.7641E+10 1.4965E+10 6.6295E+10 1.2759E+04 1.7655E+04 5.3219E+04 9.8490E+03
TF29(x) 1.5112E+04 1.2374E+03 1.2185E+04 7.7080E+01 9.6110E+01 9.7230E+01 5.7170E+01
TF30(x) 1.0728E+07 9.8506E+05 5.0363E+07 1.2221E+05 1.8251E+05 1.3206E+06 2.4451E+04
TF31(x) 2.54E+02 3.7640E+02 7.6260E+02 3.1530E+02 3.1550E+02 3.1610E+02 2.00E+02
TF32(x) 2.0530E+02 2.6290E+02 3.8010E+02 2.0790E+02 2.2370E+02 2.2290E+02 2.00E+02
TF33(x) 2.0090E+02 2.26E+02 2.4990E+02 2.0510E+02 2.0480E+02 2.0550E+02 2.00E+02
Table 24
Ranking of algorithms using MAE.
Algorithm MAE Rank
SSA 2.5874E+04 1
FF 6.8539E+04 2
MVO 9.3862E+04 3
KH 5.8069E+05 4
SPSO 4.5734E+08 5
GA 1.7586E+09 6
BA 2.0431E+09 7
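The MAE in Table 24 is simply the mean of each algorithm's error column in Table 23, with algorithms ranked by ascending MAE. A sketch of that computation, using only the TF1–TF3 rows of Table 23 for three of the algorithms (not the full 33-function data, so the MAE values differ from Table 24):

```python
# Per-function error rates taken from the TF1-TF3 rows of Table 23
errors = {
    "SSA": [9.5584e-22, 0.0, 1.542e-25],
    "FF":  [8.4259e-10, 2.6667e-01, 3.1396e-10],
    "BA":  [1.5241e-01, 9.6665e-01, 2.9659e-11],
}
mae = {alg: sum(vals) / len(vals) for alg, vals in errors.items()}
ranking = sorted(mae, key=mae.get)  # ascending MAE: rank 1 is most accurate
for rank, alg in enumerate(ranking, start=1):
    print(rank, alg, f"{mae[alg]:.4e}")
```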
Table 25
The effect of the number of food sources (Nfs) on the performance of SSA with fixed population size n = 50.
Function Parameter Nfs = 5% Nfs = 8% Nfs = 15% Nfs = 30% Nfs = 40% Nfs = 60% Nfs = 80%
TF5 Best 4.5368E-27 1.6006E-28 4.1323E-30 0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00
Worst 1.5066E-19 2.3501E-21 6.5759E-22 3.3398E-23 2.4831E-24 2.0901E-26 1.9083E-28
Mean 5.2319E-21 9.9573E-23 3.7590E-23 1.2447E-24 9.9097E-26 1.0914E-27 1.2975E-29
SD 2.7485E-20 4.3201E-22 1.3303E-22 6.0955E-24 4.5557E-25 3.8835E-27 4.3389E-29
TF18 Best 0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00
Worst 9.9506E-01 2.4154E-03 6.6716E-05 1.0149E-05 5.9740E-06 1.2212E-08 2.1005E-07
Mean 3.3318E-02 8.2779E-05 6.0602E-06 3.8840E-07 2.5481E-07 1.4247E-09 7.4923E-09
SD 1.8165E-01 4.4060E-04 1.5405E-05 1.8467E-06 1.0969E-06 3.2659E-09 3.8335E-08
TF26 Best 1.3483E-10 4.4079E-10 1.1789E-10 3.4897E-12 2.5757E-14 6.1284E-14 8.8818E-16
Worst 2.5999E-03 5.7313E-04 1.3847E-03 2.8901E-05 5.4130E-04 1.1424E-05 3.6429E-06
Mean 1.0417E-04 2.6091E-05 7.6173E-05 4.2343E-06 3.4202E-05 1.2524E-06 2.3699E-07
SD 4.7571E-04 1.0590E-04 2.6303E-04 8.5141E-06 1.0745E-04 2.8913E-06 7.0631E-07
Table 26
The effect of population size (n) on the performance of SSA with Nfs = 8%.
Function Parameter n = 10 n = 20 n = 30 n = 40 n = 60 n = 70
TF5 Best 1.1003E-19 1.6719E-25 2.1846E-28 1.1229E-28 7.3426E-29 3.1382E-29
Worst 1.5457E-07 3.0109E-18 1.8408E-17 1.2910E-20 2.6412E-21 3.4424E-23
Mean 5.1643E-09 2.6981E-19 6.6253E-19 7.2629E-22 8.9792E-23 1.6093E-24
SD 2.8219E-08 6.4466E-19 3.3557E-18 2.4352E-21 4.8192E-22 6.2747E-24
TF18 Best 4.4906E-12 0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00
Worst 1.3421E+02 2.9849E+01 2.9849E+01 2.6864E-04 5.9108E-07 5.6132E-07
Mean 2.4862E+01 5.9748E+00 1.0007E+00 1.3586E-05 3.2866E-08 3.0694E-08
SD 3.1152E+01 1.2141E+01 5.4486E+00 4.9998E-05 1.2477E-07 1.0906E-07
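Sensitivity tables such as Tables 25 and 26 are produced by re-running the optimizer over a grid of parameter settings and recording Best/Worst/Mean/SD across repeated runs. The bookkeeping can be sketched as follows, with a random stand-in for a single optimizer run, since the SSA implementation itself lies outside this excerpt:

```python
import random
import statistics

def one_run(nfs_frac, pop_size, seed):
    # Hypothetical stand-in: the real study would execute one full SSA run
    # here and return the best objective value found.
    rng = random.Random(seed)
    return rng.random() * 1e-3 / (nfs_frac * pop_size)

def sweep(nfs_fractions, pop_size, runs=30):
    """One row of statistics per Nfs setting, over `runs` independent runs."""
    table = {}
    for frac in nfs_fractions:
        vals = [one_run(frac, pop_size, seed) for seed in range(runs)]
        table[frac] = {
            "Best": min(vals),
            "Worst": max(vals),
            "Mean": statistics.mean(vals),
            "SD": statistics.stdev(vals),
        }
    return table

table = sweep([0.05, 0.08, 0.15, 0.30], pop_size=50)
```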
However, Wilcoxon’s test, the most frequently used non-parametric statistical test, is considered for the present work and the results are summarized in Table 22. The test is performed by considering the best solution obtained by each algorithm for each benchmark function over 30 independent runs at a 95% significance level (α = 0.05). In Table 22, a ‘+’ sign indicates that the reference algorithm performed better than the compared algorithm, and a ‘-’ sign indicates that the reference algorithm is inferior to the compared one. The results of the last row show that SSA has a large number of ‘+’ counts as compared to the other optimization algorithms. This confirms that SSA demonstrates statistically significant and superior performance over the six compared algorithms in the Wilcoxon test at the 95% level of confidence.
Fig. 19. Block diagram of 2DOFPI control scheme for temperature control of HFE.
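The pairwise comparison can be sketched with a stdlib-only implementation; the rank-sum form of Wilcoxon's test with a normal approximation is assumed here, since the paper does not state which variant or implementation it used. Two well-separated sets of run results yield p below α:

```python
import math

def average_ranks(values):
    """1-based ranks; tied values share their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    n1, n2 = len(a), len(b)
    w = sum(average_ranks(list(a) + list(b))[:n1])  # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return math.erfc(abs(w - mu) / sigma / math.sqrt(2))  # 2*(1 - Phi(|z|))

# Illustrative best-run values over 30 runs (hypothetical numbers)
ga = [1e-7 * (k + 1) for k in range(30)]
ssa = [1e-21 * (k + 1) for k in range(30)]
p = rank_sum_p(ga, ssa)
print(p < 0.05)  # -> True
```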
Table 27
Statistical analysis of algorithms for objective function J.
Function GA PSO BA FF MVO KH SSA
J Mean 4.7209E+01 5.0446E+01 5.1236E+01 6.5932E+01 4.6276E+01 4.0149E+01 4.0149E+01
SD 2.7214E+01 5.2425E+00 2.6044E+01 4.0365E+01 2.7573E+01 3.6604E+00 3.6604E+00
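The cost in Table 27 follows Eq. (30): each candidate [Kp, Ki, β] is scored by simulating the identified plant of Eq. (29) under the 2DOF PI law of Eq. (28) and combining IAE, rise time and settling time. The sketch below uses forward-Euler integration and common 10-90% rise-time and 2% settling-band definitions; the horizon, step size and metric conventions are assumptions, as the paper does not spell out its simulation settings:

```python
def evaluate_J(Kp, Ki, beta, r=1.0, dt=0.01, T=400.0):
    """Score a 2DOF PI candidate against P(s) = (-0.2405 s + 1.721)/(s^2 + 1.17 s + 0.2)."""
    # Plant in controllable canonical form:
    #   x1' = x2,  x2' = -0.2*x1 - 1.17*x2 + u,  y = 1.721*x1 - 0.2405*x2
    x1 = x2 = integ = 0.0
    y_hist = []
    for _ in range(int(T / dt)):
        y = 1.721 * x1 - 0.2405 * x2
        e = r - y
        integ += e * dt
        u = Kp * (beta * r - y) + Ki * integ            # 2DOF PI law, Eq. (28)
        x1, x2 = x1 + dt * x2, x2 + dt * (-0.2 * x1 - 1.17 * x2 + u)
        y_hist.append(y)
    iae = sum(abs(r - y) for y in y_hist) * dt          # integral absolute error
    t10 = next(k * dt for k, y in enumerate(y_hist) if y >= 0.1 * r)
    t90 = next(k * dt for k, y in enumerate(y_hist) if y >= 0.9 * r)
    ts = max((k * dt for k, y in enumerate(y_hist) if abs(y - r) > 0.02 * r),
             default=0.0)                               # 2% settling time
    return 0.30 * iae + 0.25 * (t90 - t10) + 0.45 * ts, y_hist[-1]

J, y_final = evaluate_J(Kp=0.7524, Ki=0.6978, beta=0.9972)
print(f"J = {J:.2f}, final output = {y_final:.4f}")
```

With the paper's tuned values the loop is stable and the output settles at the setpoint, though the numerical value of J depends on the assumed horizon and metric definitions.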
Moreover, SSA reached the global optimum solution 475 times out of 990 runs.
Fig. 20. Convergence rate comparison for objective function J (Fig. 17).
Fig. 21. Performance comparison of designed controllers for set point tracking.
Fig. 22. IAE comparison of designed controllers for set point tracking.
well as stability of the algorithm. An increased percentage of Nfs leads to more points in the search space around which the search is focused. Thus new solutions are generated and better exploration of the search space is achieved. Hence, Nfs is an attribute of SSA which provides the flexibility to vary the exploration capability of the algorithm. However, there is no rule of thumb for the selection of Nfs and it depends on the nature of the problem. Similarly, an increase in population size also optimizes the problem more accurately. However, higher values of n provide accuracy at the cost of computational effort, while lower values of n lead to unsatisfactory performance of the algorithm. Thus, proper selection of Nfs and n is necessary for satisfactory performance of SSA. The experimentation results reveal the robustness of the proposed algorithm, as both classical and CEC 2014 functions are considered for optimization. Further, the efficiency and consistency of the developed SSA are proved using standard procedures, i.e. convergence rate analysis, Wilcoxon’s test and ANOVA. The practical applicability of the proposed algorithm is also tested by implementing it on a real-time system.

7. Real-time experimental study

In this study, a common problem of the process industry known as “controller tuning” is considered and SSA is employed for the purpose. The results obtained are compared statistically with the existing optimization algorithms. Finally, the performance of the SSA-optimized controller is validated and compared with the conventional controller on real-time hardware known as the Heat Flow Experiment.

7.1. 2DOFPI control scheme for heat flow experiment

The Quanser Heat Flow Experiment (HFE) is an excellent platform for researchers to design and validate a new control strategy. It consists of a blower followed by a coil-based heater at one end and three equidistant temperature sensors in a duct with the other end open (Fig. 18a). The apparatus is enclosed by a solid Plexiglass chamber and the objective is to ensure a constant temperature profile inside the duct area. Fig. 18b shows the laboratory setup of HFE. The plant can be controlled using the MATLAB/Simulink environment installed on a personal computer, while WinCon 5.2 software facilitates the real-time control of HFE. Analog signals ranging from 0 to 5 V are used to control the power of the heater and the speed of the fan. These control signals are generated by the controller and applied to the HFE apparatus through a data acquisition (DAQ) board. The temperature variation inside the chamber depends on the magnitude of the applied input voltage signals and is measured at three different points along the duct by the temperature sensors. The output of the sensors is available on three analog input channels of the DAQ board. As one side of the duct is open, changes in the ambient environment directly affect the temperature inside the plant. Hence, it becomes difficult to maintain a constant temperature profile inside the chamber with a conventional one degree of freedom proportional and integral (1DOFPI) controller. In the present work, a two degree of freedom PI (2DOFPI) control scheme (Fig. 19) is employed to control the temperature of HFE. The output (Uc) of the controller is generally expressed as follows [107,108]:

Uc(s) = Kp[β R(s) − Y(s) + (1/(Ti s)){R(s) − Y(s)}] = Kp{β R(s) − Y(s)} + (Ki/s){R(s) − Y(s)}   (28)

In Eq. (28), there are three unknown parameters: proportional gain (Kp), integral gain (Ki = Kp/Ti) and set-point weighting factor (β), known as the controller parameters. Appropriate values of these parameters lead to precise control action and stable performance of the plant. The situation presents a combinatorial optimization problem whose optimal solution may be obtained heuristically. In the present study, SSA is applied for tuning of the 2DOFPI controller, which leads to the SSA2DOFPI controller. The transfer function of the HFE under consideration is identified as follows:

P(s) = (−0.2405 s + 1.721)/(s^2 + 1.17 s + 0.2)   (29)

The controller parameters are tuned to meet the following design objective:

J = 0.30 IAE + 0.25 tr + 0.45 ts   (30)

where IAE is the integral absolute error, tr is the rise time and ts is the settling time. The aim is to find a solution set [Kp, Ki, β] while optimizing J.

Apart from SSA, other existing algorithms are also employed for tuning of the 2DOFPI controller, and a statistical analysis over 30 independent runs is presented in Table 27. The common parameters of the algorithms are kept the same for a fair comparison; for example, the same parametric search range is considered (0 ≤ Kp ≤ 5, 0 ≤ Ki ≤ 5 and 0 ≤ β ≤ 2) and a maximum of 800 function evaluations is allowed. It is observed from Table 27 that SSA outperforms all other algorithms except KH. SSA and KH performed equally well on the defined combinatorial optimization problem on statistical grounds. However, SSA provides accelerated convergence (Fig. 20) in comparison to KH as well as the other techniques. The parameters obtained after tuning by SSA, i.e. Kp = 0.7524, Ki = 0.6978 and β = 0.9972, are used in the 2DOFPI controller for real-time control of HFE. The experimental results are compared with a conventional Tyreus-Luyben tuned 1DOFPI controller. It is revealed from Fig. 21a that the SSA2DOFPI controller provides more precise and tight temperature control of HFE. The obvious reason is that the SSA2DOFPI controller makes more precise variations in the control signal (Fig. 21b) in comparison to the conventional controller. Quantitative analysis based on IAE (Fig. 22) also confirms the superiority of the proposed technique. The successful implementation of SSA on a real-time system proves the robustness and suitability of the algorithm for complex optimization problems.

8. Conclusion

A novel nature-inspired squirrel search algorithm is designed for unconstrained optimization problems. The foraging behaviour of southern flying squirrels is studied and modelled mathematically, including each feature of their food search, for the desired optimization. The proposed algorithm is tested using several classical and modern unconstrained benchmark functions. It is observed from the comparative statistical analysis that SSA achieves the global optimum solutions with remarkable convergence behaviour in comparison to the other reported optimizers. Further, in the case of the modern, highly complex CEC 2014 benchmark functions, all the algorithms can hardly find the global optimum solution, but the performance of SSA is found to be accurate and consistent. Moreover, SSA is successfully applied to design a 2DOFPI controller for temperature control of HFE. Hence it is concluded that SSA offers quite competitive results in comparison to other reported optimizers for numerical optimization as well as real-time problems. The present work provides a basic framework of SSA for low-dimensional optimization problems, which may be further extended to large-scale and constrained optimization problems. In future, SSA may also be used for multi-objective optimization problems. The proposed method may also be applied to solve NP-hard combinatorial optimization problems found in the real world.

References

[1] I. BoussaïD, J. Lepagnot, P. Siarry, A survey on optimization metaheuristics, Inf. Sci. 237 (2013) 82–117.
[2] A. Gogna, A. Tayal, Metaheuristics: review and application, J. Exp. Theor. Artif. Intell. 25 (4) (2013) 503–526.
[3] E. Talbi, Metaheuristics: from Design to Implementation, Wiley Series on Parallel and Distributed Computing, Wiley, 2009.
[4] J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI, 1975.
[5] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, Optimization by simulated annealing, Science 220 (4598) (1983) 671–680.
[6] R. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of the Sixth International Symposium on Micro Machine and Human Science, IEEE, 1995, pp. 39–43.
[7] M. Dorigo, M. Birattari, T. Stutzle, Ant colony optimization, IEEE Comput. Intell. Mag. 1 (4) (2006) 28–39.
[8] B. Basturk, D. Karaboga, An artificial bee colony (ABC) algorithm for numeric function optimization, in: IEEE Swarm Intelligence Symposium, vol. 8, 2006, pp. 687–697.
[9] A. Noshadi, J. Shi, W.S. Lee, P. Shi, A. Kalam, Optimal PID-type fuzzy logic controller for a multi-input multi-output active magnetic bearing system, Neural Comput. Appl. 27 (7) (2016) 2031–2046.
[10] P. Nguyen, J.-M. Kim, Adaptive ECG denoising using genetic algorithm-based thresholding and ensemble empirical mode decomposition, Inf. Sci. 373 (2016) 499–511.
[11] R. Arnay, F. Fumero, J. Sigut, Ant colony optimization-based method for optic cup segmentation in retinal images, Appl. Soft Comput. 52 (2017) 409–417.
[12] K.Z. Gao, P.N. Suganthan, T.J. Chua, C.S. Chong, T.X. Cai, Q.K. Pan, A two-stage artificial bee colony algorithm scheduling flexible job-shop scheduling problem with new job insertion, Expert Syst. Appl. 42 (21) (2015) 7652–7663.
[13] A. Gandomi, X. Yang, S. Talatahari, A. Alavi, Metaheuristic Applications in Structures and Infrastructures, Elsevier Science, 2013.
[14] X. Yang, A. Gandomi, S. Talatahari, A. Alavi, Metaheuristics in Water, Geotechnical and Transport Engineering, Elsevier Science, 2012.
[15] S. Das, P.N. Suganthan, Differential evolution: a survey of the state-of-the-art, IEEE Trans. Evol. Comput. 15 (1) (2011) 4–31.
[16] X.S. Yang, S. Deb, Cuckoo search via Lévy flights, in: World Congress on Nature & Biologically Inspired Computing (NaBIC), 2009, pp. 210–214.
[17] S. Mirjalili, S.M. Mirjalili, A. Lewis, Grey wolf optimizer, Adv. Eng. Software 69 (2014) 46–61.
[18] S. Mirjalili, Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems, Neural Comput. Appl. 27 (4) (2016) 1053–1073.
[19] X. Yang, Z. Cui, R. Xiao, A. Gandomi, M. Karamanoglu, Swarm Intelligence and Bio-inspired Computation: Theory and Applications, Elsevier Insights, Elsevier Science, 2013.
[20] E. Rashedi, H. Nezamabadi-Pour, S. Saryazdi, GSA: a gravitational search algorithm, Inf. Sci. 179 (13) (2009) 2232–2248.
[21] S. Mirjalili, S.M. Mirjalili, A. Hatamlou, Multi-verse optimizer: a nature-inspired algorithm for global optimization, Neural Comput. Appl. 27 (2) (2016) 495–513.
[22] A. Kaveh, S. Talatahari, A novel heuristic optimization method: charged system search, Acta Mech. 213 (3) (2010) 267–289.
[23] X. Li, A new intelligent optimization-artificial fish swarm algorithm, Doctoral thesis, Zhejiang University, Zhejiang, China.
[24] R. Martin, W. Stephen, Termite: a swarm intelligent routing algorithm for mobile wireless Ad-Hoc networks, in: Stigmergic Optimization, Springer, 2006, pp. 155–184.
[25] E. Atashpaz-Gargari, C. Lucas, Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition, in: 2007 IEEE Congress on Evolutionary Computation, 2007, pp. 4661–4667.
[26] A. Mucherino, O. Seref, O.E. Kundakcioglu, P. Pardalos, Monkey search: a novel metaheuristic search for global optimization, in: AIP Conference Proceedings, vol. 953, AIP, 2007, pp. 162–173.
[27] S. He, Q.H. Wu, J.R. Saunders, Group search optimizer: an optimization algorithm inspired by animal searching behavior, IEEE Trans. Evol. Comput. 13 (5) (2009) 973–990.
[28] X.-S. Yang, Firefly algorithms for multimodal optimization, in: International Symposium on Stochastic Algorithms, Springer, 2009, pp. 169–178.
[29] X.-S. Yang, A new metaheuristic bat-inspired algorithm, in: Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), Springer, 2010, pp. 65–74.
[30] X.-S. Yang, Flower pollination algorithm for global optimization, in: International Conference on Unconventional Computing and Natural Computation, Springer, 2012, pp. 240–249.
[31] W.-T. Pan, A new fruit fly optimization algorithm: taking the financial distress model as an example, Knowl. Base Syst. 26 (2012) 69–74.
[32] A.H. Gandomi, A.H. Alavi, Krill herd: a new bio-inspired optimization algorithm, Commun. Nonlinear Sci. Numer. Simulat. 17 (12) (2012) 4831–4845.
[33] A. Sadollah, A. Bahreininejad, H. Eskandar, M. Hamdi, Mine blast algorithm: a new population based algorithm for solving constrained engineering optimization problems, Appl. Soft Comput. 13 (5) (2013) 2592–2612.
[34] A. Kaveh, N. Farhoudi, A new optimization method: dolphin echolocation, Adv. Eng. Software 59 (2013) 53–70.
[35] H. Shareef, A.A. Ibrahim, A.H. Mutlag, Lightning search algorithm, Appl. Soft Comput. 36 (2015) 315–333.
[36] S.A. Uymaz, G. Tezel, E. Yel, Artificial algae algorithm (AAA) for nonlinear global optimization, Appl. Soft Comput. 31 (2015) 153–171.
[37] S. Mirjalili, The ant lion optimizer, Adv. Eng. Software 83 (2015) 80–98.
[38] O. Abedinia, N. Amjady, A. Ghasemi, A new metaheuristic algorithm based on shark smell optimization, Complexity 21 (5) (2016) 97–116.
[39] W. Yong, W. Tao, Z. Cheng-Zhi, H. Hua-Juan, A new stochastic optimization approach: dolphin swarm optimization algorithm, Int. J. Comput. Intell. Appl. 15 (02) (2016) 1650011.
[40] M.D. Li, H. Zhao, X.W. Weng, T. Han, A novel nature-inspired algorithm for optimization: virus colony search, Adv. Eng. Software 92 (2016) 65–88.
[41] S. Mirjalili, A. Lewis, The whale optimization algorithm, Adv. Eng. Software 95 (2016) 51–67.
[42] A. Askarzadeh, A novel metaheuristic method for solving constrained engineering optimization problems: crow search algorithm, Comput. Struct. 169 (2016) 1–12.
[43] S. Mirjalili, A.H. Gandomi, S.Z. Mirjalili, S. Saremi, H. Faris, S.M. Mirjalili, Salp swarm algorithm: a bio-inspired optimizer for engineering design problems, Adv. Eng. Software.
[44] S. Saremi, S. Mirjalili, A. Lewis, Grasshopper optimisation algorithm: theory and application, Adv. Eng. Software 105 (2017) 30–47.
[45] F. Fausto, E. Cuevas, A. Valdivia, A. González, A global optimization algorithm inspired in the behavior of selfish herds, Biosystems 160 (2017) 39–55.
[46] A. Tabari, A. Ahmad, A new optimization method: electro-search algorithm, Comput. Chem. Eng. 103 (2017) 1–11.
[47] A. Kaveh, A. Dadras, A novel meta-heuristic optimization algorithm: thermal exchange optimization, Adv. Eng. Software 110 (2017) 69–84.
[48] E. Jahani, M. Chizari, Tackling global optimization problems with a novel algorithm - mouth brooding fish algorithm, Appl. Soft Comput.
[49] A. Baykasoğlu, Ş. Akpinar, Weighted superposition attraction (WSA): a swarm intelligence algorithm for optimization problems - part 1: unconstrained optimization, Appl. Soft Comput. 56 (2017) 520–540.
[50] G. Dhiman, V. Kumar, Spotted hyena optimizer: a novel bio-inspired based metaheuristic technique for engineering applications, Adv. Eng. Software.
[51] X. Qi, Y. Zhu, H. Zhang, A new meta-heuristic butterfly-inspired algorithm, J. Comput. Sci.
[52] A.F. Nematollahi, A. Rahiminejad, B. Vahidi, A novel physical based meta-heuristic optimization method known as Lightning Attachment Procedure Optimization, Appl. Soft Comput. 59 (2017) 596–621.
[53] S. Ghambari, A. Rahati, An improved artificial bee colony algorithm and its application to reliability optimization problems, Appl. Soft Comput., https://[Link]/10.1016/[Link].2017.10.040.
[54] F. Zhong, H. Li, S. Zhong, A modified ABC algorithm based on improved-global-best-guided approach and adaptive-limit strategy for global optimization, Appl. Soft Comput. 46 (2016) 469–486.
[55] D. Kumar, K. Mishra, Portfolio optimization using novel co-variance guided artificial bee colony algorithm, Swarm Evol. Comput. 33 (2017) 119–130.
[56] A. Ghosh, S. Das, S.S. Mullick, R. Mallipeddi, A.K. Das, A switched parameter differential evolution with optional blending crossover for scalable numerical optimization, Appl. Soft Comput. 57 (2017) 329–352.
[57] G. Sun, Y. Liu, M. Yang, A. Wang, S. Liang, Y. Zhang, Coverage optimization of VLC in smart homes based on improved cuckoo search algorithm, Comput. Network. 116 (2017) 63–78.
[58] C. Peraza, F. Valdez, M. Garcia, P. Melin, O. Castillo, A new fuzzy harmony search algorithm using fuzzy logic for dynamic parameter adaptation, Algorithms 9 (4) (2016) 69.
[59] E. Bernal, O. Castillo, J. Soria, F. Valdez, Imperialist competitive algorithm with dynamic parameter adaptation using fuzzy logic applied to the optimization of mathematical functions, Algorithms 10 (1) (2017) 18.
[60] E. Méndez, O. Castillo, J. Soria, A. Sadollah, Fuzzy dynamic adaptation of parameters in the water cycle algorithm, in: Nature-inspired Design of Hybrid Intelligent Systems, Springer, 2017, pp. 297–311.
[61] L. Rodríguez, O. Castillo, J. Soria, P. Melin, F. Valdez, C.I. Gonzalez, G.E. Martinez, J. Soto, A fuzzy hierarchical operator in the grey wolf optimizer algorithm, Appl. Soft Comput. 57 (2017) 315–328.
[62] J. Perez, F. Valdez, O. Castillo, P. Melin, C. Gonzalez, G. Martinez, Interval type-2 fuzzy logic for dynamic parameter adaptation in the bat algorithm, Soft Comput. 21 (3) (2017) 667–685.
[63] F. Olivas, F. Valdez, O. Castillo, C.I. Gonzalez, G. Martinez, P. Melin, Ant colony optimization with dynamic parameter adaptation based on interval type-2 fuzzy logic systems, Appl. Soft Comput. 53 (2017) 74–87.
[64] K. Srikanth, L.K. Panwar, B. Panigrahi, E. Herrera-Viedma, A.K. Sangaiah, G.-G. Wang, Meta-heuristic framework: quantum inspired binary grey wolf optimizer for unit commitment problem, Comput. Electr. Eng., [Link]/10.1016/[Link].2017.07.023.
[65] D. Konar, S. Bhattacharyya, K. Sharma, S. Sharma, S.R. Pradhan, An improved hybrid quantum-inspired genetic algorithm (HQIGA) for scheduling of real-time task in multiprocessor system, Appl. Soft Comput. 53 (2017) 296–307.
[66] R. Logesh, V. Subramaniyaswamy, V. Vijayakumar, X.-Z. Gao, V. Indragandhi, A hybrid quantum-induced swarm intelligence clustering for the urban trip recommendation in smart city, Future Gener. Comput. Syst., [Link]/10.1016/[Link].2017.08.060.
[67] F. Pulgar-Rubio, A. Rivera-Rivas, M.D. Pérez-Godoy, P. González, C.J. Carmona, M. del Jesus, MEFASD-BD: multi-objective evolutionary fuzzy algorithm for subgroup discovery in big data environments - a MapReduce solution, Knowl. Base Syst. 117 (2017) 70–78.
[68] A. Franceschetti, E. Demir, D. Honhon, T. Van Woensel, G. Laporte, M. Stobbe, A metaheuristic for the time-dependent pollution-routing problem, Eur. J. Oper. Res. 259 (3) (2017) 972–991.
[69] D.H. Wolpert, W.G. Macready, No free lunch theorems for optimization, IEEE Trans. Evol. Comput. 1 (1) (1997) 67–82.
[70] S. Mirjalili, Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm, Knowl. Base Syst. 89 (2015) 228–249.
[71] B.S. Arbogast, A brief history of the new world flying squirrels: phylogeny, biogeography, and conservation genetics, J. Mammal. 88 (4) (2007) 840–849.
[72] K. Vernes, Gliding performance of the northern flying squirrel (glaucomys sabrinus) in mature mixed forest of eastern Canada, J. Mammal. 82 (4) (2001) 1026–1033.
[73] S. Jackson, P. Schouten, Gliding Mammals of the World, CSIRO Publishing, 2012.
[74] [Link]
[75] [Link]
[76] Animal facts: flying squirrel, [Link]animal-facts-flying-squirrel.
[77] R.B. Thomas, P.D. Weigl, Dynamic foraging behavior in the southern flying squirrel (glaucomys volans): test of a model, Am. Midl. Nat. 140 (2) (1998) 264–270.
[78] J.W. Bahlman, S.M. Swartz, D.K. Riskin, K.S. Breuer, Glide performance and aerodynamics of non-equilibrium glides in northern flying squirrels (glaucomys sabrinus), J. R. Soc. Interface 10 (80).
[79] U.M. Norberg, Evolution of vertebrate flight: an aerodynamic model for the transition from gliding to active flight, Am. Nat. 126 (3) (1985) 303–327.
[80] K.L. Bishop, The relationship between 3-d kinematics and gliding performance in the southern flying squirrel, glaucomys volans, J. Exp. Biol. 209 (4) (2006) 689–701.
[81] P. Stapp, P.J. Pekins, W.W. Mautz, Winter energy expenditure and the distribution of southern flying squirrels, Can. J. Zool. 69 (10) (1991) 2548–2555.
[82] I. Fister, I. Fister Jr., X.-S. Yang, J. Brest, A comprehensive review of firefly algorithms, Swarm Evol. Comput. 13 (2013) 34–46.
[83] L. Guo, G.-G. Wang, A.H. Gandomi, A.H. Alavi, H. Duan, A new improved krill herd algorithm for global numerical optimization, Neurocomputing 138 (2014) 392–402, [Link].
[84] X.-S. Yang, Firefly Algorithm, Lévy Flights and Global Optimization, Springer, London, 2010, pp. 209–218.
[85] H. Hakli, H. Uğuz, A novel particle swarm optimization algorithm with Lévy flight, Appl. Soft Comput. 23 (2014) 333–345.
[86] R. Jensi, G.W. Jiji, An enhanced particle swarm optimization with Lévy flight for global optimization, Appl. Soft Comput. 43 (2016) 248–261.
[87] J. Xie, Y. Zhou, H. Chen, A novel bat algorithm based on differential operator and Lévy flights trajectory, Comput. Intell. Neurosci. 2013 (2013) 453812, [Link]/10.1155/2013/453812.
[88] I. Fister Jr., U. Mlakar, J. Brest, I. Fister, A new population-based nature-inspired algorithm every month: is the current era coming to the end? in: Proceedings of the 3rd Student Computer Science Research Conference, University of Primorska Press, 2016, pp. 33–37.
[89] K. Sörensen, Metaheuristics - the metaphor exposed, Int. Trans. Oper. Res. 22 (1) (2015) 3–18.
[90] P. Civicioglu, E. Besdok, A conceptual comparison of the cuckoo-search, particle swarm optimization, differential evolution and artificial bee colony algorithms, Artif. Intell. Rev. 39 (2013) 315–346.
[91] J. Yan, W. He, X. Jiang, Z. Zhang, A novel phase performance evaluation method for particle swarm optimization algorithms using velocity-based state estimation, Appl. Soft Comput. 57 (2017) 517–525.
[92] B. Akay, A study on particle swarm optimization and artificial bee colony algorithms for multilevel thresholding, Appl. Soft Comput. 13 (6) (2013) 3066–3091.
[93] S. Das, A. Abraham, A. Konar, Particle swarm optimization and differential evolution algorithms: technical analysis, applications and hybridization perspectives, in: Y. Liu et al. (Eds.), Advances of Computational Intelligence in Industrial Systems, Studies in Computational Intelligence, Springer, Berlin, Heidelberg, 2008, pp. 1–38.
[94] C. Ozturk, E. Hancer, D. Karaboga, A novel binary artificial bee colony algorithm based on genetic operators, Inf. Sci. 297 (2015) 154–170.
[95] A. Rajasekhar, N. Lynn, S. Das, P. Suganthan, Computing with the collective intelligence of honey bees - a survey, Swarm Evol. Comput. 32 (2017) 25–48.
[96] S. Mirjalili, S.M. Mirjalili, X.-S. Yang, Binary bat algorithm, Neural Comput. Appl. 25 (3–4) (2014) 663–681.
[97] X.-S. Yang, Nature-inspired Metaheuristic Algorithms, Luniver Press, 2010.
[98] G.I. Sayed, A.E. Hassanien, A.T. Azar, Feature selection via a novel chaotic crow search algorithm, Neural Comput. Appl. (2017) 1–18.
[99] M.-Y. Cheng, D. Prayogo, Symbiotic organisms search: a new metaheuristic optimization algorithm, Comput. Struct. 139 (2014) 98–112.
[100] M.-Y. Cheng, L.-C. Lien, Hybrid artificial intelligence-based PBA for benchmark functions and facility layout design optimization, J. Comput. Civ. Eng. 26 (5) (2012) 612–624.
[101] M. Jamil, X.-S. Yang, A literature survey of benchmark functions for global optimisation problems, Int. J. Math. Model. Numer. Optim. 4 (2) (2013) 150–194.
[102] J. Liang, B. Qu, P. Suganthan, Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization, Technical Report, Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China and Nanyang Technological University, Singapore.
[103] N. Veček, M. Mernik, M. Črepinšek, A chess rating system for evolutionary algorithms: a new method for the comparison and ranking of evolutionary algorithms, Inf. Sci. 277 (2014) 656–679.
[104] A.E. Eiben, M. Jelasity, A critical note on experimental research methodology in EC, in: Proceedings of the Congress on Evolutionary Computation, vol. 1, 2002, pp. 582–587.
[105] J. Derrac, S. García, D. Molina, F. Herrera, A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm Evol. Comput. 1 (1) (2011) 3–18.
[106] E. Nabil, A modified flower pollination algorithm for global optimization, Expert Syst. Appl. 57 (2016) 192–203.
[107] V.M. Alfaro, R. Vilanova, Model-reference robust tuning of 2DoF PI controllers for first- and second-order plus dead-time controlled processes, J. Process Contr. 22 (2) (2012) 359–374.
[108] N. Pachauri, V. Singh, A. Rani, Two degree of freedom PID based inferential control of continuous bioreactor for ethanol production, ISA (Instrum. Soc. Am.) Trans. 68 (2017) 235–250.