Received: 20 November 2020 | Revised: 7 April 2021 | Accepted: 15 May 2021

DOI: 10.1002/int.22524

RESEARCH ARTICLE

A range‐free localization algorithm for IoT


networks

Saeid Barshandeh1 | Mohammad Masdari2 | Gaurav Dhiman3 | Vahid Hosseini4 | Krishna K. Singh5

1Afagh Higher Education Institute, Urmia, Iran
2Computer Engineering Department, Urmia Branch, Islamic Azad University, Urmia, Iran
3Department of Computer Science, Government Bikram College of Commerce, Patiala, India
4Computer Engineering Department, Payame Noor University, Urmia, Iran
5Faculty of Engineering & Technology, Jain (Deemed-to-be University), Bengaluru, India

Correspondence
Saeid Barshandeh, Afagh Higher Education Institute, Urmia, Iran.
Email: saeid_barshandeh@[Link]

Abstract
Internet of things (IoT) is a ubiquitous network that helps the system monitor and organize the world by collecting, processing, and analyzing the data produced by IoT objects. The accurate localization of IoT objects is indispensable for most IoT applications, especially healthcare monitoring. Utilizing GPS as the positioning system is not cost-efficient and does not apply to some environments (e.g., deep forests, oceans, and the insides of buildings). Hence, numerous position estimation approaches have been developed in the literature. Among the range-free approaches, distance vector-Hop (DV-Hop) is the most widely used algorithm because it is straightforward to apply and can estimate the position of unknown objects that are far from the anchors. Owing to its low accuracy, however, various techniques have been proposed to improve the basic DV-Hop; in the most recent approaches, meta-heuristic algorithms were used with promising results. In the present paper, the tunicate swarm algorithm and Harris hawks optimization were first hybridized. The resulting hybrid algorithm was then enhanced by appending a new phase and was finally combined with the DV-Hop algorithm. In the first set of experiments, the proposed hybrid algorithm was evaluated on 50 test functions using average, SD, box-plot, and p-value criteria. In the second part, the proposed localization algorithm's efficiency was investigated in twenty-eight different settings using the node localization error, average localization error, and localization error variance metrics. The effectiveness of the contributions is evident from the experimental results.

KEYWORDS
DV-Hop, hybrid optimization, internet of things, localization, localization error

Int J Intell Syst. 2021;1–44. [Link]/journal/int © 2021 Wiley Periodicals LLC

1 | INTRODUCTION

Nowadays, the Internet of Things (IoT) possesses a significant share in real‐world intelligent
applications such as monitoring, automating, controlling, managing, data gathering, and track-
ing. The IoT comprises a group of devices, also called smart devices or smart objects.1 These
objects are embedded with electronics, sensors, actuators module, software, and communication
components, which allow the objects to connect and communicate with other objects or a base
station. With wireless communications advancements such as the Internet, radio frequency
identification (RFID), Bluetooth, Zigbee, GPRS, 3G, and so on, IoT has attracted extensive at-
tention from researchers in the past few years. In general, the IoT's main aim is setting up a
network of the objects, which leads to better performance and utilization of the existing systems.2
The IoT devices can be dispersed in small places such as warehouses, campuses, inside
buildings, or scattered over large areas such as industrial zones, smart cities, forests, deserts,
and deep oceans.3 The IoT devices' accurate location is essential for most IoT networks'
information‐based applications comprising healthcare monitoring, traffic management, object
tracking, measurement of the environmental conditions, event detection, disaster detection,
security problems, routing, and so on. In other words, IoT devices' data will be useless without
knowing the location of the information in most applications. For instance, Figure 1 demon-
strates the necessity and importance of localization in a healthcare application.
According to Figure 1, discovering the devices' exact location is a crucial challenge in IoT
applications. Discovering the location of the devices by global positioning system (GPS) is
prevalent; however, installing GPS modules in IoT devices is not recommended since it is not
practical in all situations due to the following reasons:

• GPS modules are expensive, and the cost is not reasonable for motionless devices (in immovable devices, localization needs to be done only once).
• GPS significantly increases energy consumption, while the energy of the devices is very limited.
• Most devices are small, so a GPS module cannot be installed.
• GPS does not usually work in harsh environments (e.g., underwater, in deep forests, inside buildings), because the satellite signals cannot be received.
• GPS positioning accuracy is often insufficient in indoor, mine-tunnel, and canyon environments.

FIGURE 1 Indispensability of the localization in healthcare applications

Due to the above-mentioned shortcomings, researchers began to develop innovative localization algorithms using the interactions and connectivity information between the objects.4 In the localization algorithms, the positions of several objects, called anchors or beacons, are known through GPS or other techniques such as manual configuration. The remaining objects, named unknown objects or devices, are initially unaware of their geographic position. The localization process aims to estimate the position of all unknown objects using the anchors' positions.5 The non-GPS localization algorithms are categorized into two main groups, namely, range-based and range-free algorithms. The range-based algorithms use device-to-device, direct connections between anchors and unknown nodes to estimate the objects' locations. The range-based algorithms are appropriate for single-hop environments, in which objects communicate only with their immediate neighbor objects.
Range-free localization algorithms instead derive connectivity information in the form of the number of hops between each pair of objects in the network. The range-free algorithms utilize ad hoc concepts and are suitable for large multihop systems.
Time of arrival (TOA), received signal strength indicator (RSSI), time-difference of arrival (TDOA), and angle of arrival (AOA) are among the well-known range-based localization algorithms. Approximate point in triangulation (APIT), the centroid localization algorithm, the amorphous localization algorithm, distance vector-Hop (DV-Hop), improvised tetrahedron localization, the HiRLoc localization algorithm, localization based on sphere intersections, localization based on Voronoi regions, dynamic triangulation localization, and target tracking localization are the widely used range-free localization algorithms. The localization algorithms are summarized in Figure 2.
Among the range-free localization algorithms, the distance vector-Hop (DV-Hop) algorithm is a renowned and widely used algorithm for positioning IoT objects; it is described in Section 3. DV-Hop uses a multihop approach to estimate the locations of objects in the network that are far away from the anchors.5 The multihop approach has some advantages: it estimates locations using a smaller number of anchors and locates objects even if they are geographically multiple hops away from the anchors. Nevertheless, most multihop localization algorithms require robust connectivity among the objects and high density within the network to provide accurate location estimates.5 Although DV-Hop, compared to the other localization algorithms, can be implemented effortlessly at a low cost, it

FIGURE 2 Classification of localization algorithms. AOA, angle of arrival; APIT, approximate point in triangulation; DV-Hop, distance vector-Hop; RSSI, received signal strength indicator; TDOA, time-difference of arrival; TOA, time of arrival

has low positioning accuracy that is unacceptable in some applications. For this reason, researchers
have conducted numerous studies with the intention of increasing the accuracy of the DV‐Hop
algorithm. In this regard, nature‐inspired optimization algorithms have achieved promising results
among the various proposed solutions.6
Recently, innovative sets of algorithms called nature-inspired optimization algorithms have been developed by drawing inspiration from natural phenomena. Seagull optimization algorithm (SOA),7 rat swarm optimizer (RSO),8 dart game optimizer (DGO),9 spring search algorithm (SSA),10 group teaching optimization algorithm (GTOA),11 emperor penguins colony (EPC),12 sooty tern optimization algorithm (STOA),13 tunicate swarm algorithm (TSA),14 emperor penguin optimizer (EPO),15 equilibrium optimizer (EO),16 black widow optimization algorithm (BWO),17 and mayfly optimization algorithm (MA)18 are instances of the most recently developed nature-inspired optimization algorithms. These algorithms have been applied widely to address IoT-related issues.19–21
In the current paper, a novel enhanced hybrid optimization algorithm was initially proposed. Thereafter, the resulting algorithm was embedded into the DV-Hop algorithm to improve its positioning accuracy. The proposed localization algorithm can be exploited to estimate the location of various IoT devices, including smart objects, people, animals, sensors, and so on. The critical contributions of the paper can be summarized as follows:

• A novel hybrid optimization algorithm called TSHH is introduced.
• A new phase was added to TSHH to enhance its efficiency.
• The TSHH was tested on classical and CEC 2017 test functions from different categories.
• The hybrid algorithm was statistically compared with state‐of‐the‐art algorithms in terms of
average and standard deviation. It was also visually compared using box plots.
• The Wilcoxon signed‐rank test validated the results.
• An innovative and efficient localization algorithm was proposed based on DV‐Hop and the
hybrid optimization algorithm.
• The proposed localization algorithm was tested on 28 different environments.

• The performance of the proposed localization algorithm was investigated by the node lo-
calization error (NLE), average localization error (ALE), and localization error variance
(LEV) metrics.
• The results of the proposed localization algorithm were compared with those of other meta‐
heuristic embedded DV‐Hop algorithms.

The rest of the paper is organized as below: In Section 2, the literature on the localization
problem is presented. In Section 3, the background algorithms are discussed. In Section 4, the
details of the hybrid optimization algorithm and the proposed localization algorithm are stated.
In Section 5, experimental results are provided. Finally, in Section 6, a summary and future
perspectives are given.

2 | RELATED WORKS

Recently, numerous nature-inspired range-free localization algorithms have been proposed by scholars. In this section, prominent studies in the localization scope are discussed. For instance, in Reference [22], Kanwar and Kumar presented a DV-Hop-based range-free localization
algorithm enhanced by the runner‐root optimization algorithm. They also introduced a novel
correction factor to correct hop size in the DV‐Hop algorithm. They evaluated their algorithm on
wireless sensor networks of different sizes. The experiments' results demonstrated the efficiency
of the correction factor and the runner‐root optimization algorithm's effectiveness.
Furthermore, in Reference [23], Han et al. improved the DV‐hop algorithm's accuracy using
the differential evolution algorithm. They used random mutation operators to increase the
population's diversity and prevent the differential evolution algorithm's premature con-
vergence. Besides, the particle swarm optimization algorithm's social learning aspect was
embedded in the crossover operation to accelerate the convergence speed. Han et al. applied
their algorithms to the two‐dimensional wireless sensor networks and achieved a higher
accuracy.
Additionally, Chai et al. used an enhanced version of the whale optimization algorithm to
improve the DV‐Hop algorithm.24 They used two information exchange approaches to enhance
the whale optimization algorithm's search capability and population diversity. The enhanced
whale optimization algorithm was adopted to optimize the wireless sensor network
localization.
Moreover, in Reference [25], Bhat and Venkata provided a range‐free localization algorithm
based on DV‐Hop and Harris hawks optimization (HHO) algorithm. The proposed localization
algorithm was applied to a heterogeneous wireless sensor network. The algorithm used dif-
ferent coverage ranges of wireless sensor nodes to classify neighbor nodes into two incoming
and outgoing neighbor nodes. Moreover, in the proposed algorithm, the area minimization
technique was exploited to reduce the search area. The performance of the proposed locali-
zation algorithm was evaluated on two‐ and three‐dimensional networks.
Meng et al. presented a novel localization algorithm based on K‐value Collinearity opti-
mized by an improved version of the grey wolf algorithm.26 The proposed algorithm had two
phases. In the first stage, the initial location of the sensor nodes was found by the K‐value. In
the second phase, the enhanced gray wolf optimization algorithm improves the initial locations
to reduce the localization error.

In a similar vein, in Reference [6], Wang et al. proposed an enhanced DV‐Hop using the
multiobjective NSGA‐II. They enhanced the DV‐Hop accuracy by introducing an augmented
constraint strategy based on all anchors. They applied the proposed localization algorithm to
the internet of things based on a wireless sensor network, and the proposed algorithm illu-
strated a better precision compared to similar algorithms.
Besides, in Reference [27], Song et al. presented a novel localization algorithm. In the presented algorithm, the glowworm swarm optimization algorithm was initially enhanced by chaotic mutation and a chaotic inertial weight. Subsequently, the proposed chaotic glowworm algorithm was used to control the glowworms' movements once they got trapped in local optima. Thereafter, the hybrid algorithm was combined with the DV-Hop localization algorithm to estimate the wireless sensors' locations.
Likewise, Kaur et al. presented a novel localization algorithm based on grey wolf optimi-
zation and the DV‐Hop algorithm.28 The grey wolf optimization algorithm was exploited to
optimize the average distance per hop in the first step. Another version of the grey wolf
optimization algorithm (a weighted version) was then applied to unknown nodes to get a
weighted average distance per hop to consider all anchors' impact. The proposed algorithm was
applied to two‐dimensional (2D) and three‐dimensional (3D) wireless networks and achieved
promising results.
Besides, in Reference [29], Zhou et al. provided a DV‐Hop localization algorithm using the
bacterial foraging optimization algorithm. They tested the proposed localization algorithm on a
wireless multimedia sensor network, and the simulation results indicated the success of their
contributions.
Additionally, Shi et al. presented an enhanced localization algorithm based on particle
swarm optimization and path matching algorithms. The path matching algorithm was used to
find the shortest path of optimal anchor‐anchor and calculate the average hop distance between
unknown nodes and nearest anchors. Subsequently, a modified particle swarm optimization
algorithm was used to reduce the localization error by optimizing the nodes' initial location.
Furthermore, Sharma and Kumar designed a localization algorithm based on an enhanced DV-Hop and the teaching-learning-based optimization algorithm.30 In their work, the hop sizes were modified by a correction factor, and the collinearity concept was introduced to decrease localization error. Thereafter, the teaching-learning-based optimization algorithm was utilized to increase the localization accuracy.
In Reference [31], Li et al. proposed a new optimization algorithm and an enhanced DV-Hop-based positioning algorithm. In their paper, the cat swarm optimization algorithm was improved with three different strategies in a compact scheme. The improved parallel compact cat swarm algorithm was tested on the CEC 2013 test functions, and the results indicated its superiority over competitor algorithms. Afterward, the proposed cat swarm optimization algorithm was embedded in the basic DV-Hop algorithm to increase localization accuracy. The performance of the proposed DV-Hop-based localization algorithm was investigated on various wireless sensor networks, and the results demonstrated the effectiveness of the proposed positioning algorithm.
Moreover, in Reference [32], the authors presented a novel wireless sensor node localization algorithm based on an enhanced A⁎ algorithm and the 3D DV-Hop algorithm. In the presented model, the basic A⁎ and DV-Hop algorithms are enhanced using strategies adopted from NSGA-II. Besides, hop-count values are corrected, and genetic operators, including crossover and mutation, are employed to make the Pareto front more desirable. The proposed localization algorithm was evaluated on wireless sensor networks and compared with similar algorithms, and the results demonstrated the effectiveness of the contributions.

Likewise, Kanwar and Kumar proposed a DV-Hop-based localization algorithm enhanced using the particle swarm optimization algorithm.33 In the proposed localization algorithm, the sensor nodes' energy consumption is reduced by shortening the communications between unknown sensor nodes and beacon nodes. Also, the execution time is minimized by repositioning only the sensor nodes displaced from their initial positions. Additionally, a promotion concept is utilized to enhance the prepositioning. Eventually, the particle swarm optimization-based DV-Hop algorithm was tested in various WSN environments, and the obtained results demonstrated the superiority of the algorithm. The above-mentioned state-of-the-art schemes and some other prominent approaches are summarized in Table 1.

TABLE 1 State-of-the-art range-free localization algorithms based on optimization algorithms

No.  Approach              Optimized by  Network  Environment  Year  Ref.
1    DV-Hop                TLBO          WSN      2D           2017  [30]
2    DV-Hop                GA            WSN      3D           2017  [34]
3    DV-Hop                PSO           WSN      2D           2017  [35]
4    DV-Hop                PSO           IoT      2D           2018  [36]
5    DV-Hop                SA            WSN      2D           2018  [37]
6    DV-Hop                BFO           WMSN     2D           2018  [29]
7    DV-Hop                PSO           WSN      2D           2018  [38]
8    DV-Hop                ACO-PSO       WSN      2D           2018  [39]
9    DV-Hop                DE            WSN      2D           2018  [40]
10   DV-Hop                GWO           WSN      3D           2018  [28]
11   DV-Hop                PSO           WSN      2D           2018  [41]
13   DV-Hop                AIA           WSN      2D           2019  [42]
14   DV-Hop                GSO           WSN      2D           2019  [27]
15   DV-Hop                COA           WSN      3D           2019  [43]
16   DV-Hop                NSGA-II       IoT      2D           2019  [6]
19   K-value collinearity  GWO           WSN      2D           2020  [26]
20   DV-Hop                HHO           WSN      3D           2020  [25]
22   DV-Hop                WOA           WSN      2D           2020  [24]
24   DV-Hop                RRO           WSN      2D           2020  [22]
25   DV-Hop                DE-PSO        WSN      2D           2020  [23]
26   DV-Hop                CSO           WSN      2D           2021  [31]
27   DV-Hop                A⁎            WSN      3D           2021  [32]
28   DV-Hop                PSO           WSN      2D           2021  [33]

Abbreviations: DV-Hop, distance vector-Hop; GWO, grey wolf optimizer; HHO, Harris hawks optimization; IoT, internet of things; WOA, whale optimization algorithm.

3 | PRELIMINARIES

3.1 | Tunicate swarm algorithm

Tunicate swarm algorithm (TSA) is a robust nature-inspired optimization algorithm developed by Kaur et al., inspired by the navigation and foraging behavior of tunicates in nature.44 Tunicates can find food sources in the deep ocean using jet propulsion. The jet propulsion behavior is based on three conditions, namely avoiding conflicts between tunicates, converging towards the best tunicate, and remaining close to the best tunicate,44 which are described below.

3.1.1 | Avoiding conflicts among tunicates

In this phase, the vector $\vec{A}$ is used to avoid conflicts while calculating a new position. The vector $\vec{A}$ is modeled as:

$$\vec{A} = \frac{\vec{G}}{\vec{M}} \tag{1}$$

$$\vec{G} = c_2 + c_3 - \vec{WF} \tag{2}$$

$$\vec{WF} = 2 \cdot c_1 \tag{3}$$

$$\vec{M} = \lfloor P_{\min} + c_1 (P_{\max} - P_{\min}) \rfloor \tag{4}$$

where $\vec{G}$ is the gravity force, $\vec{WF}$ is the water flow advection in the deep ocean, $\vec{M}$ is the social force between tunicates, $c_1$, $c_2$, and $c_3$ are random numbers in [0,1], and $P_{\min}$ and $P_{\max}$ are the initial and subordinate speeds of social interaction, respectively.

3.1.2 | Movement towards the direction of the best neighbor

This movement of the tunicates is modeled by Equation (5):

$$\vec{Dis} = |\vec{FS} - r_a \cdot \vec{x}(t)| \tag{5}$$

where $\vec{Dis}$ is the distance between the tunicate and the food source, $\vec{FS}$ is the food source (the most optimal tunicate), $\vec{x}(t)$ is the position of the tunicate in the $t$th iteration, and $r_a$ is a random number in [0,1].

3.1.3 | Movement towards the best tunicate

This phase of the algorithm models the tunicates' movement towards the best tunicate (i.e., the tunicate nearest to the food source). This behavior is modeled as follows:

$$\vec{x}(t+1) = \begin{cases} \vec{FS} + \vec{A} \cdot \vec{Dis}, & r_a \ge 0.5 \\ \vec{FS} - \vec{A} \cdot \vec{Dis}, & \text{otherwise} \end{cases} \tag{6}$$

where $\vec{x}(t+1)$ is the new position of the tunicate. It is worth noting that in the TSA, the first two tunicates update their position using Equation (6).

3.1.4 | Swarm intelligence behavior

As mentioned above, the first two tunicates update their position according to the food source
position. The remaining tunicates follow the swarm and update their position using Equation
(7), which models the tunicates' swarm intelligence behavior.

$$\vec{x}(t+1) = \frac{\vec{x}(t+1) + \vec{x}(t)}{2 + c_1} \tag{7}$$

The TSA algorithm's pseudocode is illustrated in Algorithm 1 (more details are presented in Reference [44]).

Algorithm 1. Pseudocode of the TSA algorithm


% Setting parameters
Specify the number of tunicates and stopping criteria
Set the parameter values
% Initialization
Initialize the position of tunicates in the search space randomly
Calculate the fitness of the tunicates
Find the best tunicate (food source) from the initial tunicates
% Main loop
while stopping criteria not satisfied
for i = 1: number of tunicates
Calculate c1, c2, c3, and r1
if i < 3
Update position of ith tunicate using Equation (6)
else
Update position of ith tunicate using Equation (7)
end if
Calculate the fitness of the new tunicate
Update position of the best tunicate (food source)
end for
end while
Return the best tunicate
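The update loop of Algorithm 1 can be condensed into a few lines. The following Python fragment is a minimal sketch of one TSA iteration under Equations (1)–(7); it assumes the common parameter values P_min = 1 and P_max = 4, and its function and variable names are illustrative, not taken from the authors' implementation.

```python
import numpy as np

def tsa_step(positions, food_source, p_min=1.0, p_max=4.0):
    """One TSA iteration: `positions` is an (n, d) array of tunicates,
    `food_source` the best position found so far."""
    n, d = positions.shape
    new_positions = positions.copy()
    for i in range(n):
        c1, c2, c3 = np.random.rand(3)
        wf = 2.0 * c1                                   # Equation (3): water flow advection
        g = c2 + c3 - wf                                # Equation (2): gravity force
        m = np.floor(p_min + c1 * (p_max - p_min))      # Equation (4): social force
        a = g / m                                       # Equation (1)
        ra = np.random.rand()
        dis = np.abs(food_source - ra * positions[i])   # Equation (5)
        # Equation (6): jet-propulsion candidate position.
        if ra >= 0.5:
            candidate = food_source + a * dis
        else:
            candidate = food_source - a * dis
        if i < 2:
            # The first two tunicates move by Equation (6) directly.
            new_positions[i] = candidate
        else:
            # Remaining tunicates follow the swarm: Equation (7) combines
            # the jet-propulsion candidate with the current position.
            new_positions[i] = (candidate + positions[i]) / (2.0 + c1)
    return new_positions
```

With a fitness function available, the food source would be refreshed after each call to the best-scoring row, as in Algorithm 1.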

3.2 | HHO algorithm

The HHO is another potent bioinspired optimization algorithm developed by Heidari et al.45
The HHO is inspired by the hunting strategies of Harris hawks in nature. These strategies are
mathematically modeled in two parts: exploration and exploitation, described as follows (for
details, see Reference [45]).

3.2.1 | Exploration

In this part, the position of the Harris hawks is updated using Equation (8):

$$X(t+1) = \begin{cases} X_{rand}(t) - r_1 |X_{rand}(t) - 2 r_2 X(t)|, & q \ge 0.5 \\ (X_{rabbit}(t) - X_m(t)) - r_3 (lb + r_4 (ub - lb)), & q < 0.5 \end{cases} \tag{8}$$

where $t$ is the current iteration, $X(t)$ is the position of the Harris hawk, $X(t+1)$ is its updated position, $X_{rabbit}(t)$ is the position of the rabbit, $X_{rand}(t)$ is the position of a hawk selected randomly from the current population, $lb$ and $ub$ are the lower and upper bounds of the problem space, $r_1$, $r_2$, $r_3$, $r_4$, and $q$ are random numbers in [0,1], and $X_m(t)$ is the average position of the Harris hawks, calculated as below:

$$X_m(t) = \frac{1}{N} \sum_{i=1}^{N} X_i(t) \tag{9}$$

3.2.2 | Exploitation

The exploitation part models the soft besiege, hard besiege, soft besiege with progressive rapid dives, and hard besiege with progressive rapid dives behaviors of the Harris hawks. In HHO, the movement equation is selected based on two factors, $r$ and $E$. The $r$ is a random number in [0,1], and $E$ is calculated as follows:

$$E = 2 E_0 \left(1 - \frac{t}{T}\right) \tag{10}$$

where $E_0$ is the initial energy of the prey in (−1,1), and $T$ is the maximum number of iterations. It is worth mentioning that when $|E| \ge 1$, the Harris hawks update their position using the exploration part; otherwise, they update their position according to the exploitation equations.

Soft besiege
When $r \ge 0.5$ and $|E| \ge 0.5$, the Harris hawk uses this behavior to update its position:

$$X(t+1) = \Delta X(t) - E |J X_{rabbit}(t) - X(t)| \tag{11}$$

$$\Delta X(t) = X_{rabbit}(t) - X(t) \tag{12}$$

where $\Delta X(t)$ is the distance between the Harris hawk and the rabbit, and $J$ is the random jump strength of the rabbit, calculated using Equation (13):

$$J = 2(1 - r_5) \tag{13}$$

where $r_5$ is a random number in (0,1).



Hard besiege
When $r \ge 0.5$ and $|E| < 0.5$, the Harris hawk updates its position using Equation (14):

$$X(t+1) = X_{rabbit}(t) - E |\Delta X(t)| \tag{14}$$

In this scenario, the prey is exhausted and has very low escaping energy.

Soft besiege with progressive rapid dives
When $|E| \ge 0.5$ and $r < 0.5$, the Harris hawk uses Equation (15) to move to a new position:

$$X(t+1) = \begin{cases} Y, & Fit(Y) < Fit(X(t)) \\ Z, & Fit(Z) < Fit(X(t)) \end{cases} \tag{15}$$

$$Y = X_{rabbit}(t) - E |J X_{rabbit}(t) - X(t)| \tag{16}$$

$$Z = Y + S \times LF(D) \tag{17}$$

where $D$ is the dimension of the problem, $S$ is a random vector of size $1 \times D$, and $LF$ is the levy-flight function, modeled as follows:

$$LF(x) = 0.01 \times \frac{u \times \sigma}{|v|^{1/\beta}} \tag{18}$$

$$\sigma = \left( \frac{\Gamma(1+\beta) \times \sin(\pi \beta / 2)}{\Gamma\!\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{(\beta-1)/2}} \right)^{1/\beta} \tag{19}$$

where $u$ and $v$ are random numbers drawn from a normal distribution in (0,1), and $\beta$ is a constant with the value 1.5.
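Equations (18) and (19) translate directly into code. The sketch below follows the formulation above, with u and v drawn from a standard normal distribution (an assumption, since the text only states "normal distribution in (0,1)"); it is not the authors' implementation.

```python
import math
import numpy as np

def levy_flight(dim, beta=1.5):
    """Levy-flight step vector per Equations (18)-(19)."""
    # Equation (19): scale factor sigma for the given beta.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(size=dim)   # assumed: standard normal draws
    v = np.random.normal(size=dim)
    # Equation (18): heavy-tailed step of length `dim`.
    return 0.01 * (u * sigma) / np.abs(v) ** (1 / beta)
```

Because the denominator |v|^(1/β) can be very small, individual steps are occasionally large, which is exactly the long-jump behavior the rapid-dive phases rely on.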

Hard besiege with progressive rapid dives
When $|E| < 0.5$ and $r < 0.5$, the Harris hawk uses the last strategy to update its position, modeled by Equation (20):

$$X(t+1) = \begin{cases} Y, & Fit(Y) < Fit(X(t)) \\ Z, & Fit(Z) < Fit(X(t)) \end{cases} \tag{20}$$

$$Y = X_{rabbit}(t) - E |J X_{rabbit}(t) - X_m(t)| \tag{21}$$

$$Z = Y + S \times LF(D) \tag{22}$$

Algorithm 2. The pseudocode of the HHO algorithm.

% Setting parameters
Specify the number of Harris hawks and stopping criteria
Set the parameter values
% Initialization
Initialize the position of Harris hawks in the search space randomly
Calculate the fitness of the initial Harris hawks
Find the best Harris hawk (rabbit) from the initial population
% Main loop
while stopping criteria not satisfied
foreach Harris hawk
Update E0 , E and J
if |E| ≥ 1
Update the position of Harris hawk using Equation (8)
else
if r ≥ 0.5 and |E| ≥ 0.5
Update position of the Harris hawk using Equation (11)
else if r ≥ 0.5 and |E| < 0.5
Update position of the Harris hawk using Equation (14)
else if r < 0.5 and |E| ≥ 0.5
Update position of the Harris hawk using Equation (15)
else if r < 0.5 and |E| < 0.5
Update position of the Harris hawk using Equation (20)
end if
end if
Calculate the fitness of the Harris hawk
end foreach
Update position of the best Harris hawk (rabbit)
end while
Return the best Harris hawk
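The branching in Algorithm 2 can be condensed into a short sketch of one HHO iteration, where `levy` and `fit` are supplied by the caller (e.g., a Levy-flight function per Equation (18) and the problem's objective). This is an illustrative reading of Equations (8)–(22), not the reference implementation; the names are hypothetical.

```python
import numpy as np

def hho_step(hawks, rabbit, t, T, lb, ub, levy, fit):
    """One HHO iteration: `hawks` is (n, d), `rabbit` the best position,
    `fit` a minimization objective, `levy` returns a length-d step vector."""
    n, d = hawks.shape
    xm = hawks.mean(axis=0)                                  # Equation (9)
    out = hawks.copy()
    for i in range(n):
        x = hawks[i]
        e = 2 * np.random.uniform(-1, 1) * (1 - t / T)       # Equation (10)
        j = 2 * (1 - np.random.rand())                       # Equation (13)
        r = np.random.rand()
        if abs(e) >= 1:                                      # exploration, Equation (8)
            if np.random.rand() >= 0.5:
                xr = hawks[np.random.randint(n)]
                out[i] = xr - np.random.rand() * np.abs(xr - 2 * np.random.rand() * x)
            else:
                out[i] = (rabbit - xm) - np.random.rand() * (lb + np.random.rand() * (ub - lb))
        elif r >= 0.5 and abs(e) >= 0.5:                     # soft besiege, Eqs (11)-(12)
            out[i] = (rabbit - x) - e * np.abs(j * rabbit - x)
        elif r >= 0.5:                                       # hard besiege, Equation (14)
            out[i] = rabbit - e * np.abs(rabbit - x)
        else:                                                # rapid dives, Eqs (15)-(22)
            base = x if abs(e) >= 0.5 else xm                # soft dive uses X(t), hard dive Xm
            y = rabbit - e * np.abs(j * rabbit - base)       # Equations (16)/(21)
            z = y + np.random.rand(d) * levy(d)              # Equations (17)/(22)
            if fit(y) < fit(x):
                out[i] = y
            elif fit(z) < fit(x):
                out[i] = z
    return out
```

As in Algorithm 2, the rabbit would be refreshed after each iteration to the best hawk found so far.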

3.3 | DV‐Hop algorithm

DV-Hop is the most widely used algorithm among the range-free localization algorithms. Its main aim is to estimate the location of unknown objects using other objects that possess prior knowledge of their own location. The DV-Hop algorithm has four main steps, described as follows.
In the first step, the anchors broadcast a packet containing their location and a hop counter, which is initially set to 1. Each object that receives the packet stores the information, adds one to the hop-counter value, and rebroadcasts the packet to its neighbors. An object that receives more than one packet from the same anchor keeps the packet with the minimum hop-counter value and drops the other packets from that anchor. Through this mechanism, all objects obtain minimum hop-count values from the anchors, as illustrated in Figure 3.
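The packet-flooding mechanism described above is equivalent to running a breadth-first search from each anchor, since keeping only the packet with the smallest hop counter discards every non-shortest route. A minimal sketch (the identifiers are illustrative, not from the paper):

```python
from collections import deque

def min_hop_counts(adjacency, anchors):
    """Step 1 of DV-Hop: minimum hop counts from each anchor to every node.

    `adjacency` maps a node id to a list of neighbor ids; `anchors` lists
    the anchor node ids. Returns hops[a][n] = min hop count from anchor a to node n.
    """
    hops = {a: {a: 0} for a in anchors}
    for a in anchors:
        queue = deque([a])
        while queue:
            node = queue.popleft()
            for nb in adjacency[node]:
                if nb not in hops[a]:            # first (shortest) packet wins
                    hops[a][nb] = hops[a][node] + 1
                    queue.append(nb)
    return hops
```

Nodes never reached by an anchor's packets simply do not appear in that anchor's table, mirroring a disconnected network.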
FIGURE 3 The process of packet broadcasting and calculating the hop-counts

In the second step, each anchor's average hop size is calculated from the minimum hop counts and the anchors' actual distances, using Equation (23). In Equation (23), $AvgHopSize_i$ is the average hop size of the $i$th anchor, $N_a$ is the number of anchors, $(X_{A_i}, Y_{A_i})$ is the position of the $i$th anchor, $(X_{A_Q}, Y_{A_Q})$ is the position of the $Q$th anchor, and $MinHopCount_{i,Q}$ is the minimum hop count between the $i$th and $Q$th anchors.
In the third step, each unknown object estimates its distance from the anchors using Equation (24). In Equation (24), $Dist_{K,i}$ is the estimated distance between the $k$th unknown object and the $i$th anchor, $AvgHopSize_i$ is the average hop size of the $i$th anchor, and $MinHopCount_{k,i}$ is the minimum hop count between the $k$th unknown object and the $i$th anchor.

$$AvgHopSize_i = \frac{\sum_{Q=1}^{N_a} \sqrt{(X_{A_i} - X_{A_Q})^2 + (Y_{A_i} - Y_{A_Q})^2}}{\sum_{Q=1}^{N_a} MinHopCount_{i,Q}} \tag{23}$$

$$Dist_{K,i} = AvgHopSize_i \times MinHopCount_{k,i} \tag{24}$$

$$\begin{cases} (X_{A_1} - X_{O_k})^2 + (Y_{A_1} - Y_{O_k})^2 = Dist_{K,1}^2 \\ (X_{A_2} - X_{O_k})^2 + (Y_{A_2} - Y_{O_k})^2 = Dist_{K,2}^2 \\ \quad \vdots \\ (X_{A_j} - X_{O_k})^2 + (Y_{A_j} - Y_{O_k})^2 = Dist_{K,j}^2 \end{cases} \tag{25}$$

$$\begin{cases} X_{A_1}^2 - X_{A_j}^2 - 2(X_{A_1} - X_{A_j}) X_{O_k} + Y_{A_1}^2 - Y_{A_j}^2 - 2(Y_{A_1} - Y_{A_j}) Y_{O_k} = Dist_{K,1}^2 - Dist_{K,j}^2 \\ X_{A_2}^2 - X_{A_j}^2 - 2(X_{A_2} - X_{A_j}) X_{O_k} + Y_{A_2}^2 - Y_{A_j}^2 - 2(Y_{A_2} - Y_{A_j}) Y_{O_k} = Dist_{K,2}^2 - Dist_{K,j}^2 \\ \quad \vdots \\ X_{A_{j-1}}^2 - X_{A_j}^2 - 2(X_{A_{j-1}} - X_{A_j}) X_{O_k} + Y_{A_{j-1}}^2 - Y_{A_j}^2 - 2(Y_{A_{j-1}} - Y_{A_j}) Y_{O_k} = Dist_{K,j-1}^2 - Dist_{K,j}^2 \end{cases} \tag{26}$$

$$A = \begin{bmatrix} 2(X_{A_1} - X_{A_j}) & 2(Y_{A_1} - Y_{A_j}) \\ 2(X_{A_2} - X_{A_j}) & 2(Y_{A_2} - Y_{A_j}) \\ \vdots & \vdots \\ 2(X_{A_{j-1}} - X_{A_j}) & 2(Y_{A_{j-1}} - Y_{A_j}) \end{bmatrix} \tag{27}$$

$$B = \begin{bmatrix} X_{A_1}^2 - X_{A_j}^2 + Y_{A_1}^2 - Y_{A_j}^2 - Dist_{K,1}^2 + Dist_{K,j}^2 \\ X_{A_2}^2 - X_{A_j}^2 + Y_{A_2}^2 - Y_{A_j}^2 - Dist_{K,2}^2 + Dist_{K,j}^2 \\ \vdots \\ X_{A_{j-1}}^2 - X_{A_j}^2 + Y_{A_{j-1}}^2 - Y_{A_j}^2 - Dist_{K,j-1}^2 + Dist_{K,j}^2 \end{bmatrix} \tag{28}$$

$$X = \begin{bmatrix} X_{O_k} \\ Y_{O_k} \end{bmatrix} \tag{29}$$

$$X = (A^T A)^{-1} A^T B \tag{30}$$

In the fourth step, the location of each unknown object is estimated using the trilateration method, which uses the expressions stated in Equation (25). In Equation (25), $(X_{O_k}, Y_{O_k})$ is the position of the $k$th unknown object, $(X_{A_1}, Y_{A_1})$ is the position of the first anchor, and $Dist_{K,1}$ is the estimated distance between the $k$th unknown object and the first anchor. Equation (26) depicts the expansion of Equation (25) and can be written in matrix form as $AX = B$, where $A$, $B$, and $X$ are given by Equations (27), (28), and (29), respectively.
In the end, the least-squares estimation approach, expressed in Equation (30), is used to estimate the location of the $k$th unknown object ($X$).
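Steps two through four can be sketched as follows, assuming NumPy. The function and argument names are illustrative, not the authors' code, and the last anchor is used as the reference anchor j when linearizing Equations (26)–(28).

```python
import numpy as np

def dvhop_estimate(anchor_xy, anchor_hops, hops_to_anchors):
    """Estimate one unknown object's position per Equations (23)-(30).

    `anchor_xy`: (Na, 2) anchor coordinates; `anchor_hops`: (Na, Na) matrix of
    minimum anchor-to-anchor hop counts; `hops_to_anchors`: the unknown
    object's minimum hop count to each anchor.
    """
    # Equation (23): each anchor's average hop size.
    pair_dist = np.linalg.norm(anchor_xy[:, None, :] - anchor_xy[None, :, :], axis=2)
    avg_hop_size = pair_dist.sum(axis=1) / anchor_hops.sum(axis=1)
    # Equation (24): estimated distance to each anchor.
    dist = avg_hop_size * hops_to_anchors
    # Equations (26)-(28): linearize against the last anchor.
    xj, yj, dj = anchor_xy[-1, 0], anchor_xy[-1, 1], dist[-1]
    A = 2 * (anchor_xy[:-1] - anchor_xy[-1])                       # Equation (27)
    B = (anchor_xy[:-1, 0] ** 2 - xj ** 2
         + anchor_xy[:-1, 1] ** 2 - yj ** 2
         - dist[:-1] ** 2 + dj ** 2)                               # Equation (28)
    # Equation (30): least-squares solution X = (A^T A)^{-1} A^T B.
    return np.linalg.lstsq(A, B, rcond=None)[0]
```

`np.linalg.lstsq` computes the same pseudo-inverse solution as Equation (30) but is numerically more stable than forming $(A^T A)^{-1}$ explicitly.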
The prominent notations and their description are provided in Table 2 to improve readability.

4 | PROPOSED LOCALIZATION METHOD (TSHH-DV-HOP)

This section presents the details of the proposed localization algorithm, which is based on
DV-Hop and an enhanced hybrid optimization algorithm. As mentioned earlier, the DV-Hop
algorithm has drawbacks that make it inaccurate for most real-world uses, such as industrial
and healthcare applications. Utilizing nature-inspired optimization algorithms is one of the
most efficient approaches for increasing the DV-Hop algorithm's accuracy and performance.
However, the optimization algorithms also have their own downsides that should be considered,
among them low exploitation, insufficient exploration, weak convergence, and stagnation in
local optima.
Each meta-heuristic algorithm has a unique movement strategy by which the location of
solutions is updated, and each of these strategies has its own strengths and weaknesses. The
proposed hybrid optimization algorithm combines the strategies of the TSA and HHO
algorithms. The TSA algorithm has a strong exploitation capability; however, its search ability
is limited, and it quickly gets stuck in local optima. The HHO algorithm has acceptable
exploitation and exploration capabilities, yet it has a low convergence rate and is also prone to
falling into the trap of local optima. One proven method of eliminating such weaknesses is to
use multi-algorithm strategies.46,47 In each iteration of the proposed hybrid optimization
algorithm, one of the algorithms' strategies is randomly selected to update the solutions. In
this way, the solutions do not follow a particular pattern and search in all directions.
Additionally, applying multiple randomly selected movement strategies helps trapped
solutions escape local optima. This combination of the TSA and HHO algorithms eliminates
their limitations and augments their advantages.

TABLE 2 Nomenclature

Notation   Description
DV-Hop     Distance vector-hop algorithm
HHO        The Harris hawks optimization
TSA        The tunicate swarm algorithm
D          Problem's dimension
G⃗          The gravity force in TSA
WF⃗         The water flow advection in the deep ocean
SF⃗         The social force between tunicates
Pmin       The initial speed of social interactions
Pmax       The subordinate speed of social interactions
FS⃗         The best solution in TSA
t          Current iteration
T          Maximum number of iterations
x(t)       The current position
Xrand      A randomly selected solution in HHO
lb         The lower bound of the problem space
ub         The upper bound of the problem space
Xm         The average position of the Harris hawks
N          Number of Harris hawks
E          Energy
E0         Initial energy
ΔX         The distance between the Harris hawk and the rabbit
J          The jump strength of the rabbit
LF         The Levy-flight function
ALE        The average localization error
NLE        The node location error
LEV        The localization error variance
Nu         Number of unknown objects
R          Communication range
Piest      The estimated position of the ith unknown object
Piact      The actual position of the ith unknown object
Furthermore, another enhancement approach is added to the resulting hybrid algorithm
to extend its searching capability as much as possible. The approach is appended to the hybrid
optimization algorithm as a new phase and, like the TSA and HHO phases, is randomly
selected in each iteration. Equation (31) models the enhancement approach.

Xi (t + 1) = Xi (t ) + δ (Xi (t ) − Xj (t )) (31)

where δ is a random number in [−1, 1] and j ∈ {1, 2, …, N}. Additionally, the greedy selection
mechanism is used to increase the convergence rate of the TSHH algorithm. Algorithm 3
presents the pseudocode of the proposed hybrid optimization algorithm (TSHH). TSHH is
then embedded into the basic DV-Hop algorithm to find more accurate IoT object positions in
real-world applications. First, the unknown objects' positions are estimated using
Equation (30) of the DV-Hop algorithm. The estimated positions are then delivered to the
TSHH algorithm, which strives to correct them and increase their accuracy. The average
localization error (ALE) is a widely used criterion that has been exploited as the objective
function in numerous localization algorithms; it is calculated using Equation (32).

Algorithm 3. Pseudocode of the proposed hybrid optimization algorithm (TSHH)

Input: Specifications of the problem, number of solutions (nSol ), and stopping criteria
Output: Optimal solution for the problem

Set the parameter values of the algorithms


Initialize solutions ( X ) randomly within the problem space
Calculate the fitness (F ) of the initial solutions
Find the best solution (BestX ) and its fitness (BestF )

while (stopping criteria not reached)


for i = 1: nSol
r = rand;
if r < γ1
Update Xi using HHO algorithm
else if r < γ2
Update Xi using TSA algorithm
else
Update Xi using Equation (31)
end if
Check the feasibility of the new Xi
Calculate the fitness of new Xi
Apply greedy selection mechanism
end for
Update BestX and BestF
end while
return BestX and BestF
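The loop of Algorithm 3 can be sketched in code. This is a hedged illustration rather than the authors' implementation: the true HHO and TSA position-update equations are not reproduced here, so `hho_move` and `tsa_move` are simplified placeholder moves toward the current best solution; the phase thresholds γ1 = 0.35 and γ2 = 0.7 follow Table 3.

```python
import random

def tshh_minimize(f, lb, ub, dim, n_sol=30, iters=200, g1=0.35, g2=0.7):
    """Skeleton of the TSHH loop: for each solution, pick one of three
    update strategies at random, then keep the new position only if it
    improves the fitness (greedy selection)."""
    X = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_sol)]
    F = [f(x) for x in X]
    b = min(range(n_sol), key=lambda i: F[i])
    best_x, best_f = X[b][:], F[b]

    def clip(x):                      # keep candidates inside [lb, ub]
        return [min(max(v, lb), ub) for v in x]

    # Placeholder moves; the real algorithm uses the HHO and TSA equations.
    def hho_move(x):
        return [v + random.gauss(0, 1) * (w - v) for v, w in zip(x, best_x)]

    def tsa_move(x):
        return [w + random.uniform(-1, 1) * abs(w - v) for v, w in zip(x, best_x)]

    for _ in range(iters):
        for i in range(n_sol):
            r = random.random()
            if r < g1:
                cand = hho_move(X[i])
            elif r < g2:
                cand = tsa_move(X[i])
            else:                     # enhancement phase, Equation (31)
                j = random.randrange(n_sol)
                d = random.uniform(-1, 1)
                cand = [a + d * (a - c) for a, c in zip(X[i], X[j])]
            cand = clip(cand)
            fc = f(cand)
            if fc < F[i]:             # greedy selection
                X[i], F[i] = cand, fc
                if fc < best_f:
                    best_x, best_f = cand[:], fc
    return best_x, best_f

# Usage: minimize the sphere function in 5 dimensions
x, fx = tshh_minimize(lambda v: sum(t * t for t in v), -10.0, 10.0, dim=5)
```

In the localization setting, `f` would be the ALE objective of Equation (33) and each solution would encode candidate (x, y) coordinates.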

ALE = \frac{\sum_{i=1}^{N_u} \left| P_i^{est} - P_i^{act} \right|}{N_u \times R}   (32)

In Equation (32), Nu is the number of unknown objects, R is the communication range, Piest
is the estimated position of the ith unknown object, and Piact is the actual position of the ith object.
Therefore, the objective function of localization can be expressed by Equation (33):

f(x, y) = \min \left[ \frac{\sum_{i=1}^{N_u} \sqrt{(X_i^{est} - X_i^{act})^2 + (Y_i^{est} - Y_i^{act})^2}}{N_u \times R} \right]   (33)

In Equation (33), (Xiest, Yiest) is the estimated position of the ith object and (Xiact, Yiact) is the actual
position of the ith object. The flowchart of the proposed TSHH-DV-Hop localization algorithm is
illustrated in Figure 4. An illustrative example of the localization process and the proposed
algorithm's place in it is presented in Figure 5.

FIGURE 4 Flowchart of the proposed localization algorithm (TSHH‐DV‐Hop)

FIGURE 5 An illustrative example of the proposed localization process

5 | EXPERIMENTAL RESULTS

Numerous experiments have been conducted to substantiate the superiority of the proposed
localization algorithm. The experiments are divided into two main categories. In the first set of
experiments, the proposed hybrid optimization algorithm (TSHH) was evaluated on 50 widely
used test functions comprising classical test functions and CEC 2017 test functions; the results
of these experiments are provided in Appendix A. In the second set of experiments, the
efficiency of the proposed localization algorithm (TSHH-DV-Hop) was investigated in various
scenarios. In both sets of experiments, the results of the proposed algorithm were compared
with the outcomes of HHO,45 TSA,44 the grey wolf optimizer (GWO),48 the sine cosine
algorithm (SCA),49 the whale optimization algorithm (WOA),50 and the crow search algorithm
(CSA).51 The algorithms' parameter values, taken from their authors' suggestions, are
presented in Table 3. Furthermore, to make reliable comparisons, the number of solutions in
all algorithms was set to 30, and the stopping criterion was defined as reaching 500 iterations.
Likewise, all experiments were conducted in the same environment, with the specifications
shown in Table 4.

TABLE 3 The parameter values of the algorithms


Algorithm Parameter Value
TSHH γ1 0.35

γ2 0.7

TSA Pmin 1
Pmax 4
GWO α 2 to 0
r1,r2 rand
CSA AP 0.2
fl 2
ri rand
WOA α 2 to 0
r, p rand
A, l [−1,1]
SCA r1, r2 , r3 , and r4 rand
HHO r1, r2 , r3 , r4 , and q rand
E0 (−1,1)
J 2 × (1 − rand )
Note: rand is a random number inside [0,1].
Abbreviations: CSA, crow search algorithm; GWO, grey wolf optimizer; HHO, Harris hawks optimization; TSA, tunicate swarm
algorithm; WOA, whale optimization algorithm.

TABLE 4 Running environment specifications

Name                Value
Hardware
CPU                 Core i5
Frequency           3.1 GHz
RAM                 8 GB
Hard drive          1 TB
Software
Operating system    Windows 10
Language            MATLAB R2017a

5.1 | Evaluation metrics

The following sections provide a detailed analysis of the proposed localization algorithm
(TSHH-DV-Hop) in terms of various metrics, comprising the node location error (NLE), average
localization error (ALE), and localization error variance (LEV). The NLE can be calculated as follows35:

NLE_i = \sqrt{(X_i^{est} - X_i^{act})^2 + (Y_i^{est} - Y_i^{act})^2}   (34)

where NLEi is the location error of the ith unknown object, (Xiest, Yiest) is the estimated position
of the ith object, and (Xiact, Yiact) is the actual position of the ith object. Additionally, the ALE is
formulated as below35:

ALE = \frac{\sum_{i=1}^{N_u} \sqrt{(X_i^{est} - X_i^{act})^2 + (Y_i^{est} - Y_i^{act})^2}}{N_u \times R}   (35)

Here, Nu is the number of unknown objects, and R is the communication range. The LEV is
calculated by Equation (36).35

LEV = \frac{\sum_{i=1}^{N_u} \left( \sqrt{(X_i^{est} - X_i^{act})^2 + (Y_i^{est} - Y_i^{act})^2} - ALE \times R \right)^2}{N_u \times R^2}   (36)
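The three metrics can be computed directly from matched estimated/actual positions. The sketch below is illustrative; the outer square in the LEV sum reflects the usual variance definition and is an assumption about Equation (36).

```python
import math

def localization_metrics(est, act, comm_range):
    """Per-node NLE, ALE, and LEV for matched lists of (x, y) positions.

    est, act: lists of (x, y) tuples of estimated and actual positions
    comm_range: communication range R
    """
    nle = [math.dist(e, a) for e, a in zip(est, act)]   # Equation (34)
    n = len(nle)
    ale = sum(nle) / (n * comm_range)                   # Equation (35)
    # Equation (36): spread of node errors around the mean error ALE * R
    lev = sum((d - ale * comm_range) ** 2 for d in nle) / (n * comm_range ** 2)
    return nle, ale, lev

# Illustrative values: two nodes, each 5 m off, R = 25 m
est = [(10.0, 10.0), (20.0, 20.0)]
act = [(13.0, 14.0), (23.0, 24.0)]
nle, ale, lev = localization_metrics(est, act, 25.0)
print(nle, ale, lev)   # [5.0, 5.0] 0.2 0.0
```

Because both nodes have the same error here, the variance term LEV is zero while ALE normalizes the 5 m mean error by R = 25 m.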

TABLE 5 Simulation parameters

Parameter              Value
Deployment area        100 × 100 to 400 × 400 m²
Unknown objects        100-400 objects
Anchors                15-50
Communication range    10-40 m
Number of solutions    50
Number of iterations   200

In the experiments, the algorithms' performance has been evaluated in environments with
different deployment sizes, numbers of unknown objects, anchor densities, and communication
ranges. The simulation parameters are stated in Table 5. Due to the randomness of localization
algorithms, each algorithm was run 20 times independently, and the NLE, ALE, and LEV of the
best execution were analyzed in all scenarios. It is worth mentioning that the results of the
TSHH-DV-Hop algorithm were compared with those of the basic DV-Hop, Harris hawks
optimization-based DV-Hop (HHO-DV-Hop), tunicate swarm algorithm-based DV-Hop
(TSA-DV-Hop), grey wolf optimizer-based DV-Hop (GWO-DV-Hop), sine cosine algorithm-based
DV-Hop (SCA-DV-Hop), whale optimization algorithm-based DV-Hop (WOA-DV-Hop), and
crow search algorithm-based DV-Hop (CSA-DV-Hop).
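A scenario from Table 5 can be set up by scattering objects uniformly at random. This sketch is illustrative (the `deploy` helper and its defaults of 150 unknown objects and 30 anchors are not from the paper's code); seeding the generator reproduces the same environment for every algorithm, which the fair-comparison protocol above requires.

```python
import random

def deploy(area, n_unknown=150, n_anchors=30, seed=None):
    """Scatter unknown objects and anchors uniformly in an area x area
    square, mirroring the random deployments used in the simulations."""
    rng = random.Random(seed)
    point = lambda: (rng.uniform(0, area), rng.uniform(0, area))
    unknowns = [point() for _ in range(n_unknown)]
    anchors = [point() for _ in range(n_anchors)]
    return unknowns, anchors

# A fixed seed lets every algorithm be evaluated on the same environment.
unknowns, anchors = deploy(150.0, seed=42)
```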

5.2 | Experiments on environments with different sizes

This section assesses the superiority of the TSHH-DV-Hop localization algorithm in environments
of different sizes and investigates the influence of the deployment area's size on the localization
algorithms' performance. In these simulations, a fixed set of 150 unknown objects, 30 anchors, and a
25 m communication range are considered, while the size of the deployment area is varied from
100 × 100 m² to 400 × 400 m². Furthermore, in all simulation areas, the unknown objects and
anchors are scattered randomly, and the same environment is used for all algorithms.
Figure 6 represents the NLE of the algorithms in different‐size environments.
With the increase in the size of the deployment area, the algorithms' NLE also increases.
However, it can be deduced from the graphs of Figure 6 that the NLE of TSHH-DV-Hop is
much lower than that of the competitor algorithms in all environments. Therefore, it can be
argued that in both small-scale and large-scale environments, the proposed algorithm estimates
the location of objects with a minor error. Also, to further investigate the algorithms'
performance in environments with different sizes, the algorithms' ALE and LEV are calculated
in these environments. Table 6 statistically presents the algorithms' ALE and LEV results, and
Figure 7 illustrates the ALE and LEV line graphs in all environments.

FIGURE 6 Node location error of the algorithms in environments with different sizes. (A) 100 m², (B) 200 m², (C) 300 m², and (D) 400 m²

TABLE 6 Obtained results by the algorithms in areas with different sizes

Area     Criterion  HHO-DV-Hop  TSA-DV-Hop  GWO-DV-Hop  SCA-DV-Hop  WOA-DV-Hop  CSA-DV-Hop  TSHH-DV-Hop
100 m²   ALE        0.9323      0.9907      0.7198      0.9573      1.3853      1.4768      0.1169
         LEV        1.1798      1.5671      1.1184      1.4384      2.3163      2.7396      0.0188
150 m²   ALE        1.4049      1.5217      1.1526      1.5465      2.1072      2.2570      0.2015
         LEV        2.8715      3.6863      3.0711      3.7912      5.4510      6.0901      0.0999
200 m²   ALE        2.0486      2.1697      1.3862      2.1259      2.7538      2.9856      0.2777
         LEV        6.0869      7.4070      3.8816      6.9209      9.3757      11.8069     0.1602
250 m²   ALE        2.3199      2.3952      1.6771      2.5645      3.2882      3.5513      0.3360
         LEV        7.8042      9.0407      6.4726      11.0291     13.4878     15.1393     0.1931
300 m²   ALE        2.9904      2.9180      1.9629      2.8927      4.0911      4.3636      0.4290
         LEV        12.8135     12.7098     8.5252      13.1789     20.1715     24.4767     0.3094
350 m²   ALE        3.3448      3.3280      2.1626      3.2949      4.6271      4.8912      0.4584
         LEV        15.9819     16.7954     10.7799     17.0251     25.3845     31.7606     0.4332
400 m²   ALE        3.3448      3.3280      2.1626      3.2949      4.6271      4.8912      0.4584
         LEV        15.9819     16.7954     10.7799     17.0251     25.3845     31.7606     0.4332
Note: The best results are presented in bold.
Abbreviations: ALE, average localization error; DV-Hop, distance vector-hop; GWO, grey wolf optimizer; HHO, Harris hawks optimization; LEV, localization error variance; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.

FIGURE 7 ALE and LEV line graphs of the algorithms on environments with different sizes. ALE, average localization error; LEV, localization error variance

According to the statistical results of Table 6, it can be seen that the TSHH-DV-Hop
algorithm outperformed the other algorithms by achieving lower ALE and LEV values in
all environments. Furthermore, the increase in ALE and LEV values with increasing area size
is evident in the results of Table 6. The line graphs of Figure 7 also compare the ALE
and LEV results of the algorithms in environments of different sizes, indicating the significant
superiority of the proposed algorithm.

5.3 | Experiments on environments with different numbers of unknown objects

This section evaluates the influence of the total number of unknown objects on NLE, ALE, and
LEV of localization algorithms. In these simulations, the deployment area, number of anchors,
and communication range were fixed to 150 × 150m2, 30, and 25 m, respectively, while the
number of unknown objects varied from 100 to 400. Figure 8 indicates the node localization
error of the algorithms with different numbers of unknown objects.
Considering the graphs of Figure 8, it can be observed that the TSHH-DV-Hop algorithm
outperformed the other algorithms and achieved promising results in all scenarios. For a
more in-depth evaluation, the algorithms' ALE and LEV results are statistically stated in
Table 7 and visually represented in Figure 9.
It is evident from the statistical results of Table 7 and the line graphs of Figure 9 that the
proposed localization algorithm surpassed competitor algorithms in the simulations with
various numbers of unknown objects.

FIGURE 8 Node location error of the algorithms with different numbers of unknown objects. (A) 100, (B) 200, (C) 300, and (D) 400 unknown objects

TABLE 7 Obtained results by the algorithm with different number of unknown objects
No. of HHO‐ TSA‐ GWO‐ SCA‐ WOA‐ CSA‐ TSHH‐
objects Criterion DV‐Hop DV‐Hop DV‐Hop DV‐Hop DV‐Hop DV‐Hop DV‐Hop
100 ALE 1.1248 1.1850 0.7999 1.1572 1.8168 2.0111 0.0508
LEV 1.8923 2.4108 1.8269 2.0147 4.0029 4.8940 0.0021
150 ALE 1.3486 1.4741 1.0632 1.4497 1.8833 2.1207 0.1939
LEV 2.5430 3.3479 2.7113 3.2030 4.4930 5.3215 0.0834
200 ALE 1.5360 1.5939 1.1329 1.6111 2.1179 2.2105 0.3439
LEV 3.1547 3.8009 2.6997 3.8993 5.4679 5.8750 0.2278
250 ALE 1.7290 1.8472 1.3191 1.8434 2.1757 2.2664 0.5783
LEV 3.8519 4.9721 3.2702 4.6408 5.6231 6.2614 0.6270
300 ALE 1.7312 1.8437 1.3437 1.8331 2.1289 2.1894 0.7135
LEV 3.9440 4.7586 3.2256 4.8512 5.4027 5.5669 0.8565
350 ALE 1.8818 1.9221 1.4400 1.8847 2.2482 2.3437 0.8629
LEV 4.6229 5.1953 3.5020 5.0809 5.8986 6.4902 1.2929
400 ALE 1.8857 2.0853 1.5446 2.0879 2.2512 2.3060 0.9451
LEV 4.4813 5.7804 3.8735 5.8365 5.7847 6.0665 1.4803
Note: The best results are presented in bold.
Abbreviations: ALE, average localization error; DV‐Hop, distance vector‐Hop; GWO, grey wolf optimizer; HHO, Harris hawks
optimization; LEV, localization error variance; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.

FIGURE 9 ALE and LEV line graphs of the algorithms with different numbers of unknown objects. ALE, average localization error; LEV, localization error variance

5.4 | Experiments on environments with different numbers of anchors

In this section, the effect of different numbers of anchors on the localization error has been
investigated. In the simulations, the deployment area is set to 150 × 150 m², the number of
unknown objects is set to 150, the communication range is considered 25 m, and the number of
anchors is varied from 15 to 50. Figure 10 depicts the influence of different numbers of anchors
on the algorithms' performance in terms of NLE.
Considering the localization algorithms' results illustrated in Figure 10, it is apparent that
the TSHH-DV-Hop algorithm outdid the peer algorithms. Moreover, it can be seen that the
number of anchors had no significant effect on the error of the localization algorithms.
Likewise, the algorithms' results are expressed in Table 8 in terms of the ALE and LEV criteria
and plotted in Figure 11.

FIGURE 10 Node location error of the algorithms with different numbers of anchors. (A) 20 anchors, (B) 30 anchors, (C) 40 anchors, and (D) 50 anchors

TABLE 8 Obtained results by the algorithms with different numbers of anchors

No. of anchors  Criterion  HHO-DV-Hop  TSA-DV-Hop  GWO-DV-Hop  SCA-DV-Hop  WOA-DV-Hop  CSA-DV-Hop  TSHH-DV-Hop
15              ALE        1.4392      1.5211      1.0285      1.5528      1.9851      2.1957      0.2110
                LEV        2.8819      3.5224      2.3113      4.0153      4.9872      5.7929      0.0685
20              ALE        1.3514      1.4998      1.0847      1.4171      1.9510      2.0824      0.1888
                LEV        2.4742      3.5880      2.6777      3.0774      4.5598      5.4710      0.0719
25              ALE        1.4781      1.4663      1.0141      1.4716      1.9907      2.2817      0.1963
                LEV        3.3192      3.3265      2.2171      3.3712      4.9642      6.4196      0.0812
30              ALE        1.4234      1.4230      0.9434      1.4087      1.9455      2.1275      0.2011
                LEV        2.8573      3.1190      2.0840      3.1118      4.6803      5.6610      0.0827
35              ALE        1.3649      1.4525      0.9398      1.3855      2.0119      2.2206      0.1781
                LEV        2.7419      3.2451      2.0819      2.9543      4.8814      6.1972      0.0449
40              ALE        1.4258      1.5465      1.1597      1.6252      1.9947      2.2069      0.2123
                LEV        2.9171      3.7515      3.1056      3.9663      4.9989      6.0572      0.0641
50              ALE        1.3925      1.5202      1.0857      1.5365      1.9444      2.1298      0.1728
                LEV        2.8732      3.7805      2.8002      4.0295      4.6367      5.2348      0.0425
Note: The best results are presented in bold.
Abbreviations: ALE, average localization error; DV-Hop, distance vector-hop; GWO, grey wolf optimizer; HHO, Harris hawks optimization; LEV, localization error variance; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.

FIGURE 11 ALE and LEV line graphs of the algorithms with different numbers of anchors. ALE, average localization error; LEV, localization error variance
The results of Table 8 and Figure 11 demonstrate the superiority of the proposed localization
algorithm in all simulations. Furthermore, the results confirm that a large number of anchors
brings no accuracy benefit; therefore, it is recommended to use the minimum number of
anchors to reduce real-world implementation costs.

5.5 | Experiments with different communication ranges

This subsection investigates the influence of different communication ranges on the NLE, ALE,
and LEV of the localization algorithms. In these simulations, the communication range is varied
from 10 to 40 m, while the deployment area, number of unknown objects, and number of anchors
are set to 150 × 150 m², 150, and 30, respectively. Figure 12 represents the NLE of the
algorithms with various communication ranges.
The line graphs of Figure 12 indicate that the proposed localization algorithm outperformed
the competing algorithms and estimated the locations with fewer errors. Moreover, the
algorithms' NLE results imply that the communication range does not impact the accuracy of
the estimated locations; accordingly, it can be declared that the DV-Hop-based localization
algorithms are range-free approaches. Besides, the ALE and LEV obtained by the algorithms
are presented in Table 9 and Figure 13.
The algorithms' results in the different environments affirm that the proposed localization
algorithm is beneficial and can estimate unknown IoT objects' locations with negligible error.

FIGURE 12 Node location error of the algorithms with different communication ranges. (A) 10 m, (B) 20 m, (C) 30 m, and (D) 40 m

TABLE 9 Obtained results by the algorithm with different communication ranges


Communication HHO‐ TSA‐ GWO‐ SCA‐ WOA‐ CSA‐ TSHH‐
range Criterion DV‐Hop DV‐Hop DV‐Hop DV‐Hop DV‐Hop DV‐Hop DV‐Hop
10 ALE 3.7271 3.7482 2.6194 3.8012 5.1821 5.4483 0.5043
LEV 19.9377 21.3383 14.7026 21.0831 32.7485 35.2364 0.5329
15 ALE 2.3082 2.5860 1.6046 2.5053 3.4487 3.6196 0.3350
LEV 7.9117 9.7405 5.2655 9.5678 14.8174 16.2304 0.1951
20 ALE 1.8054 1.8859 1.2841 1.8382 2.5010 2.7267 0.2674
LEV 4.7999 5.4899 3.6751 4.9041 7.7174 8.8848 0.1174
25 ALE 1.5059 1.4976 0.9898 1.4919 1.9452 2.1506 0.1984
LEV 3.2964 3.4873 2.2341 3.2889 4.5994 5.6672 0.0739
30 ALE 1.2179 1.2550 0.7827 1.2568 1.6786 1.8095 0.1697
LEV 2.0434 2.2960 1.4535 2.3992 3.3904 4.1529 0.0547
35 ALE 0.9742 1.0641 0.7578 1.1334 1.4757 1.5584 0.1536
LEV 1.2966 1.7280 1.2386 2.0501 2.5819 3.0493 0.0395
40 ALE 0.9160 0.9905 0.5490 0.9532 1.2420 1.3234 0.1321
LEV 1.1587 1.4678 0.6773 1.3874 1.8909 2.0772 0.0248
Note: The best results are presented in bold.
Abbreviations: ALE, average localization error; DV‐Hop, distance vector‐Hop; GWO, grey wolf optimizer; HHO, Harris hawks
optimization; LEV, localization error variance; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.

FIGURE 13 ALE and LEV line graphs of the algorithms with different communication ranges. ALE, average localization error; LEV, localization error variance

The results obtained in the scenarios of Section 5.2 reveal that, in environments of different
sizes, the proposed algorithm estimates the location of unknown objects more accurately than
the existing methods. Moreover, the results of the scenarios of Section 5.3 indicate that the
proposed algorithm provides more precise estimates for different numbers of unknown objects.
Furthermore, according to Section 5.4, with different numbers of anchors, the proposed
algorithm performed better than similar position-estimation methods. Finally, Section 5.5
asserts that the proposed algorithm can optimally estimate the position of objects with different
communication ranges. Consequently, it can be deduced that the proposed algorithm estimates
IoT objects' locations in various real-world environments more accurately.

6 | CONCLUSION

This paper provides a novel hybrid swarm‐based optimization algorithm called TSHH and a
unique localization algorithm named TSHH‐DV‐Hop. In TSHH, the TSA and HHO are hy-
bridized to resolve their deficiencies. Next, a new phase was added to the TSHH, aiming to
enhance the convergence rate and exploration capability. Furthermore, the greedy mechanism
was utilized to augment exploitation ability. In the TSHH‐DV‐Hop, the proposed TSHH al-
gorithm is integrated with the basic DV‐Hop algorithm.
The TSHH algorithm is tested on twenty‐three classical test functions comprising unimodal,
multimodal, and fix‐dimension functions, along with twenty‐seven test functions from CEC
2017, which included shifted rotated, hybrid, and composite test functions. The results of TSHH
are compared with those of six state‐of‐the‐art algorithms using five widely‐used statistical
metrics, box plots, and convergence graphs. Besides, the obtained results are inferentially
corroborated using the Wilcoxon signed-rank test. The experimental results revealed the
predominance of the TSHH algorithm over the competitor algorithms.
The TSHH‐DV‐Hop localization algorithm is applied to twenty‐eight environments with different
specifications. The simulation environments are divided into four categories, in each of which the
value of a property in the environment was changed, while the others remained intact. In the
simulations, the proposed algorithm is compared with the competitor algorithms in terms of NLE,

average localization error (ALE), and localization error variance (LEV). The comparison results
asserted the superiority of the proposed algorithm.
Compared with some models, the proposed algorithm requires more memory and CPU
time. For future studies, the proposed localization algorithm could also be extended to localize
mobile objects. Moreover, it could be adopted to estimate the location of the objects in 3D
environments.

ORCID
Saeid Barshandeh [Link]
Mohammad Masdari [Link]
Gaurav Dhiman [Link]
Vahid Hosseini [Link]
Krishna K. Singh [Link]

REFERENCES
1. Masdari M, Ahmadzadeh S. Comprehensive analysis of the authentication methods in wireless body area
networks. Security Commun Netw. 2016;9(17):4777‐4803.
2. Šenk I, Tarjan L, Oros D, Stankovski S, Ostojić G. A model for indoor product localization based on the internet
of things. In: Proceedings of XVII International Scientific Conference on Industrial Systems; 2017:86‐91.
3. Masdari M, Ahmadzadeh S, Bidaki M. Key management in wireless body area network: challenges and
issues. J Network Comput Appl. 2017;91:36‐51.
4. Guo X, Elikplim NR, Ansari N, Li L, Wang L. Robust WiFi localization by fusing derivative fingerprints of
RSS and multiple classifiers. IEEE Trans Industr Inform. 2019;16(5):3177‐3186.
5. Cheikhrouhou O, Bhatti GM, Alroobaea R. A hybrid DV‐hop algorithm using RSSI for localization in
large‐scale wireless sensor networks. Sensors. 2018;18(5):1469.
6. Wang P, Xue F, Li H, Cui Z, Xie L, Chen J. A multi‐objective DV‐Hop localization algorithm based on
NSGA‐II in internet of things. Mathematics. 2019;7(2):184.
7. Dhiman G, Kumar V. Seagull optimization algorithm: theory and its applications for large‐scale industrial
engineering problems. Knowl‐Based Syst. 2019;165:169‐196.
8. Dhiman G, Garg M, Nagar A, Kumar V, Dehghani M. A novel algorithm for global optimization: rat swarm
optimizer. J Ambient Intell Human Comput. 2020:1‐26.
9. Dehghani M, Montazeri Z, Givi H, Guerrero JM, Dhiman G. Darts game optimizer: a new optimization
technique based on darts game. Int J Intell Eng Syst. 2020;13:286‐294.
10. Dehghani M, Montazeri Z, Dhiman G, et al. A spring search algorithm applied to engineering optimization
problems. Appl Sci. 2020;10(18):6173.
11. Zhang Y, Jin Z. Group teaching optimization algorithm: a novel metaheuristic method for solving global
optimization problems. Expert Syst Appl. 2020;148:113246.
12. Harifi S, Khalilian M, Mohammadzadeh J, Ebrahimnejad S. Emperor penguins colony: a new metaheuristic
algorithm for optimization. Evol Intell. 2019;12(2):211‐226.
13. Dhiman G, Kaur A. STOA: A bio‐inspired based optimization algorithm for industrial engineering pro-
blems. Eng Appl Artif Intell. 2019;82:148‐174.
14. Kaur S, Awasthi LK, Sangal AL, Dhiman G. Tunicate swarm algorithm: a new bio‐inspired based meta-
heuristic paradigm for global optimization. Eng Appl Artif Intell. 2020;90:103541.
15. Dhiman G, Kumar V. Emperor penguin optimizer: a bio‐inspired algorithm for engineering problems.
Knowl‐Based Syst. 2018;159:20‐50.
16. Faramarzi A, Heidarinejad M, Stephens B, Mirjalili S. Equilibrium optimizer: a novel optimization algo-
rithm. Knowl‐Based Syst. 2020;191:105190.
17. Hayyolalam V, Kazem AAP. Black widow optimization algorithm: a novel meta‐heuristic approach for
solving engineering optimization problems. Eng Appl Artif Intell. 2020;87:103249.
18. Zervoudakis K, Tsafarakis S. A mayfly optimization algorithm. Comput Industr Eng. 2020;145:106559.

19. Masdari M, Gharehpasha S, Ghobaei‐Arani M, Ghasemi V. Bio‐inspired virtual machine placement


schemes in cloud computing environment: taxonomy, review, and future research directions. Cluster
Comput. 2019;23:1‐31.
20. Gharehpasha S, Masdari M, Jafarian A. The placement of virtual machines under optimal conditions in
cloud datacenter. Inform Technol Control. 2019;48(4):545‐556.
21. Masdari M, Barshande S, Ozdemir S. CDABC: chaotic discrete artificial bee colony algorithm for multi‐
level clustering in large‐scale WSNs. J Supercomput. 2019;75(11):7174‐7208.
22. Kanwar V, Kumar A. DV‐Hop‐based range‐free localization algorithm for wireless sensor network using
runner‐root optimization. J Supercomput. 2020;77:1‐18.
23. Han D, Yu Y, Li K‐C, de Mello RF. Enhancing the sensor node localization algorithm based on improved
DV‐Hop and DE algorithms in wireless sensor networks. Sensors. 2020;20(2):343.
24. Chai Q‐W, Chu S‐C, Pan J‐S, Hu P, Zheng W‐M. A parallel WOA with two communication strategies
applied in DV‐Hop localization method. EURASIP J Wireless Commun Network. 2020;2020(1):1‐10.
25. Bhat SJ, Santhosh K. An optimization based localization with area minimization for heterogeneous
wireless sensor networks in anisotropic fields. Comput Networks. 2020;179:107371.
26. Meng Y, Zhi Q, Zhang Q, Lin E. A two‐stage wireless sensor grey wolf optimization node location algo-
rithm based on K‐value collinearity. Math Prob Eng. 2020;2020.
27. Song L, Zhao L, Ye J. DV‐hop node location algorithm based on GSO in wireless sensor networks. J Sensors.
2019;2019.
28. Kaur A, Kumar P, Gupta GP. Nature inspired algorithm‐based improved variants of DV‐Hop algorithm for
randomly deployed 2D and 3D wireless sensor networks. Wireless Personal Commun. 2018;101(1):567‐582.
29. Zhou C, Yang Y, Wang Y. DV‐Hop localization algorithm based on bacterial foraging optimization for
wireless multimedia sensor networks. Multimedia Tool Appl. 2019;78(4):4299‐4309.
30. Sharma G, Kumar A. Improved DV‐Hop localization algorithm using teaching learning based optimization
for wireless sensor networks. Telecommun Syst. 2018;67(2):163‐178.
31. Li J, Gao M, Pan J‐S, Chu S‐C. A parallel compact cat swarm optimization and its application in DV‐Hop
node localization for wireless sensor network. Wireless Network. 2021;2021:1‐21.
32. Huang X, Han D, Cui M, Lin G, Yin X. Three‐dimensional localization algorithm based on improved A*
and DV‐Hop algorithms in wireless sensor network. Sensors. 2021;21(2):448.
33. Kanwar V, Kumar A. DV‐Hop localization methods for displaced sensor nodes in wireless sensor network
using PSO. Wireless Network. 2021;27(1):91‐102.
34. Sharma G, Kumar A. Improved range‐free localization for three‐dimensional wireless sensor networks
using genetic algorithm. Comput Electr Eng. 2018;72:808‐827.
35. Singh SP, Sharma S. A PSO based improved localization algorithm for wireless sensor network. Wireless
Personal Commun. 2018;98(1):487‐503.
36. Shi Q, Xu Q, Zhang J. An improved DV‐Hop scheme based on path matching and particle swarm opti-
mization algorithm. Wireless Personal Commun. 2019;104(4):1301‐1320.
37. Wang H, Zhang L. An Improved Simulated Annealing Localization Algorithm for WSN. In: Proceedings of
2018 IEEE 3rd International Conference on Communication and Information Systems (ICCIS); 2018:93‐96.
38. Singh SP, Sharma SC. Implementation of a PSO based improved localization algorithm for wireless sensor
networks. IETE J Res. 2019;65(4):502‐514.
39. Hai Y. Improved DV‐Hop algorithm based on ant colony algorithm and particle swarm optimization for
wireless sensor network location problem. DEStech Trans Comput Sci Eng. 2018.
40. Cui L, Xu C, Li G, Ming Z, Feng Y, Lu N. A high accurate localization algorithm with DV‐Hop and
differential evolution for wireless sensor network. Appl Soft Comput. 2018;68:39‐52.
41. Zhang H, Wang C. Improved wireless sensor location algorithm based on combined particle swarm‐quasi‐
newton with threshold N. Int J Online Biomed Eng. 2018;14(05):31‐41.
42. Pang, M, Feng, Z, Bai, W. DV‐Hop localization algorithm based on RSSI hop number correction and
improved artificial immune algorithm optimization. In: Proceedings of 2019 International Conference on
Robots & Intelligent System (ICRIS); 2019:501‐504.
43. Li T, Yan W, Ping L, Fang P. A WSN positioning algorithm based on 3D discrete chaotic mapping.
EURASIP J Wireless Commun Networking. 2019;2019(1):126.
44. Kaur S, Awasthi LK, Sangal A, Dhiman G. Tunicate swarm algorithm: a new bio‐inspired based meta-
heuristic paradigm for global optimization. Eng Appl Artif Intell. 2020;90:103541.
45. Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H. Harris hawks optimization: algorithm and
applications. Future Generation Comput Syst. 2019;97:849‐872.
46. Barshandeh S, Haghzadeh M. A new hybrid chaotic atom search optimization based on tree‐seed algorithm
and Levy flight for solving optimization problems. Eng Comput. 2020:1‐44.
47. Barshandeh S, Piri F, Sangani SR. HMPA: an innovative hybrid multi‐population algorithm based on
artificial ecosystem‐based and Harris hawks optimization algorithms for engineering problems. Eng
Comput. 2020:1‐45.
48. Mirjalili S, Mirjalili SM, Lewis A. Grey wolf optimizer. Adv Eng Software. 2014;69:46‐61.
49. Mirjalili S. SCA: a sine cosine algorithm for solving optimization problems. Knowl‐Based Syst. 2016;96:120‐133.
50. Mirjalili S, Lewis A. The whale optimization algorithm. Adv Eng Software. 2016;95:51‐67.
51. Askarzadeh A. A novel metaheuristic method for solving constrained engineering optimization problems:
crow search algorithm. Comput Struct. 2016;169:1‐12.
52. Wu G, Mallipeddi R, Suganthan P. Problem definitions and evaluation criteria for the CEC 2017 compe-
tition and special session on constrained single objective real‐parameter optimization; 2016.

How to cite this article: Barshandeh S, Masdari M, Dhiman G, Hosseini V, Singh KK.
A range‐free localization algorithm for IoT networks. Int J Intell Syst. 2021;1‐44.
[Link]

APPENDIX A: EXPERIMENTS ON THE PROPOSED HYBRID OPTIMIZATION ALGORITHM (TSHH)

In this section, the performance of the proposed hybrid algorithm (TSHH) is evaluated on fifty well‐
known standard test functions. These test functions represent real‐world problems and assess the
algorithms from different aspects. Owing to the random nature of metaheuristic optimization
algorithms, each algorithm is run 30 times independently on every test function, and the results are
recorded. Subsequently, the results of the algorithms are compared in two respects: (a) statistically,
in terms of the best, median, average, worst, and standard deviation of the independent runs; and
(b) visually, in terms of box plots of the obtained results. Furthermore, the Wilcoxon signed‐rank test
is exploited to demonstrate the superiority of the TSHH inferentially.
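The per‐algorithm summary described above (30 independent runs reduced to best, median, average, worst, and standard deviation) can be sketched as follows; the five sample values are illustrative placeholders, not values from the reported experiments.

```python
import statistics

def run_statistics(results):
    """Summarize one algorithm's independent runs on a minimization
    problem by the five statistics reported in this appendix."""
    ordered = sorted(results)
    return {
        "best": ordered[0],                 # lowest objective value found
        "median": statistics.median(ordered),
        "average": statistics.mean(ordered),
        "worst": ordered[-1],               # highest objective value found
        "std": statistics.stdev(ordered),   # sample standard deviation
    }

# Hypothetical objective values from five independent runs.
summary = run_statistics([3.0, 1.0, 2.0, 5.0, 4.0])
print(summary)  # best 1.0, median 3.0, average 3.0, worst 5.0
```

In the tables below, only the mean and standard deviation columns of this summary are reported for each algorithm.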
A.1. Experiments on classical test functions
The first twenty‐three test functions are selected from the classical test functions, comprising
unimodal (UM), multimodal (MM), and fixed‐dimension (FD) functions. The UM test functions
have only one optimal point, while the MM test functions have more than one optimum point.
Moreover, the dimension of the UM and MM functions can be varied, whereas the dimension of the
FD functions is fixed. The classical test functions challenge the exploration and exploitation
capabilities of the algorithms, as well as their ability to locate optimal points. A brief description of
these test functions is given in Table A.1 (details are provided in Reference [46]). Besides, the statistical
results of the algorithms on these test functions are provided in Table A.2.
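Two of the classical benchmarks of Table A.1 can be written down directly. The sketch below, assuming the standard textbook definitions of the Sphere (TF1) and Rastrigin (TF9) functions, illustrates the UM/MM distinction: Sphere has a single smooth basin, while Rastrigin's cosine term creates a grid of local minima around the same global optimum at the origin.

```python
import math

# Standard textbook definitions; both have global minimum f(x*) = 0 at the
# origin, matching the Fmin column of Table A.1 for TF1 and TF9.

def sphere(x):        # TF1, unimodal (UM): one smooth basin
    return sum(xi ** 2 for xi in x)

def rastrigin(x):     # TF9, multimodal (MM): many regularly spaced local minima
    return sum(xi ** 2 - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

d = 30
origin = [0.0] * d
print(sphere(origin), rastrigin(origin))  # both evaluate to 0.0 at the optimum
```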
As is evident from the statistical results of Table A.2, the TSHH algorithm attains the most optimal
average and standard deviation of the independent runs among all compared algorithms on every
classical test function. For a more detailed comparison, the box plots of the algorithms' results are
plotted in Figure A.1.
The box plots of Figure A.1 exhibit that the proposed TSHH algorithm achieved highly
coherent outcomes in all test functions. In contrast, the results of competitor algorithms were
TABLE A.1 Details of the classical test functions

       Type   Name               dim   Range               Fmin
TF1    UM     Sphere             30    [−100,100]d         0
TF2    UM     Schwefel 2.22      30    [−10,10]d           0
TF3    UM     Cigar              30    [−100,100]d         0
TF4    UM     Schwefel 2.21      30    [−100,100]d         0
TF5    UM     Rosenbrock         30    [−30,30]d           0
TF6    UM     Step               30    [−100,100]d         0
TF7    UM     Zakharov           10    [−5,10]d            0
TF8    MM     Schwefel 2.26      30    [−500,500]d         −12569.5
TF9    MM     Rastrigin          30    [−5.12,5.12]d       0
TF10   MM     Ackley             30    [−32,32]d           0
TF11   MM     Griewank           30    [−600,600]d         0
TF12   MM     Penalized 1        30    [−50,50]d           0
TF13   MM     Penalized 2        30    [−50,50]d           0
TF14   FD     Foxholes           2     [−65.53,65.53]d     0.9980
TF15   FD     Kowalik            4     [−5,5]d             3.074e−04
TF16   FD     Six‐hump camel     2     [−5,5]d             −1.0316
TF17   FD     Branin             2     [−5,10] × [0,15]    0.3979
TF18   FD     Goldstein price    2     [−2,2]d             3.0000
TF19   FD     Hartman 3          3     [0,1]d              −3.86278
TF20   UM     Brown              10    [−1,4]d             0
TF21   FD     Shekel 5           4     [0,10]d             −10.1532
TF22   FD     Shekel 7           4     [0,10]d             −10.4028
TF23   FD     Shekel 10          4     [0,10]d             −10.5363

scattered and random. Therefore, it can be argued that the TSHH algorithm exhibits less randomness
and a lower dependence on the initial solutions. Additionally, the Wilcoxon signed‐rank test was
administered at a 5% significance level to test the TSHH algorithm's superiority. Table A.3 presents
the Wilcoxon signed‐rank test results.
In Table A.3, p‐values less than 0.05 indicate a significant difference between the TSHH algorithm
and the corresponding competitor. Column R expresses the outcome of the test: the sign "=" indicates
no significant difference between the algorithms, while the signs "+" and "−" indicate a significant
difference in favor of or against TSHH, respectively. According to the results of Table A.3, there is no
significant difference between TSHH and HHO on TF2 and TF13, between TSHH and GWO on TF11,
and between TSHH and CSA on TF8. Furthermore, the test could not establish a significant difference
between TSHH and HHO on TF9‐TF11 or between TSHH and WOA on TF9 and TF11. In the
remaining 129 cases, TSHH achieved a significantly better result than the competitor algorithms.
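The mapping from a test outcome to the R column can be sketched as below. The function name and the tie‐breaking by sample means are illustrative assumptions (the original comparison is based on the Wilcoxon signed‐rank statistic over the paired runs), and a p‐value of None stands in for the cases marked "–" in the table, where the test is undefined.

```python
def r_label(p_value, mean_tshh, mean_rival, alpha=0.05):
    """Assign the R symbol used in Table A.3: '=' when the difference is
    not significant (or the test is undefined because the paired results
    coincide), otherwise '+'/'-' for a difference in favor of or against
    TSHH (minimization, so a lower mean is better)."""
    if p_value is None or p_value >= alpha:
        return "="
    return "+" if mean_tshh < mean_rival else "-"

# TF1, TSHH vs. SCA: p = 6.1035E-05 and TSHH's mean is far lower.
print(r_label(6.1035e-05, 1.97e-147, 6.85e+01))   # "+"
# TF8, TSHH vs. CSA: p = 8.0396E-01, no significant difference.
print(r_label(8.0396e-01, -1.25e+04, -1.24e+04))  # "="
```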
TABLE A.2 Statistical results of the algorithms on the classical test functions (each cell gives Mean/STD over the 30 independent runs)

      HHO | TSA | GWO | SCA | WOA | CSA | TSHH
F1    3.21E−138/1.24E−137 | 6.03E−22/7.60E−22 | 3.49E−27/1.72E−27 | 6.85E+01/3.28E+01 | 8.20E−75/2.31E−75 | 3.62E−01/3.10E−01 | 1.97E−147/5.28E−148
F2    2.90E−87/5.11E−87 | 1.24E−13/2.28E−13 | 4.00E−17/7.43E−17 | 1.37E−02/1.17E−02 | 7.97E−50/2.27E−50 | 3.03E−01/3.23E−01 | 1.60E−88/8.77E−89
F3    7.15E−81/2.77E−80 | 6.26E−16/1.45E−15 | 1.07E−20/3.66E−21 | 1.12E+07/6.44E+06 | 6.75E−66/2.53E−66 | 2.59E+05/1.94E+05 | 0.00E+00/2.00E−193
F4    1.77E−42/4.81E−42 | 2.56E−01/3.94E−01 | 8.41E−07/8.37E−07 | 1.15E+01/3.20E+01 | 2.73E+01/5.26E+01 | 7.96E−02/9.90E−02 | 1.12E−48/3.33E−49
F5    5.19E−02/5.78E−02 | 3.39E+01/2.08E+01 | 6.99E−01/2.71E+01 | 3.06E+04/2.09E+04 | 4.32E−01/2.82E+01 | 1.53E+01/1.25E+01 | 1.42E−06/9.40E−07
F6    6.36E−04/6.14E−04 | 3.86E+00/4.22E−01 | 2.90E−01/8.96E−01 | 1.12E+01/1.32E+01 | 1.71E−01/4.45E−01 | 2.69E−01/2.76E−01 | 1.59E−31/9.00E−32
F7    2.57E−60/9.96E−60 | 4.98E−32/9.87E−32 | 3.97E−34/1.64E−34 | 5.27E−05/1.43E−05 | 1.22E+01/1.03E+01 | 5.53E−03/3.95E−03 | 0.00E+00/1.91E−199
F8    −1.05E+04/1.85E+03 | −6.20E+03/6.30E+02 | −6.06E+03/1.07E+03 | −3.73E+03/2.07E+02 | −1.04E+04/1.91E+03 | −1.24E+04/9.13E+01 | −1.25E+04/1.55E+02
F9    0.00E+00/0.00E+00 | 2.05E+02/4.13E+01 | 4.25E+00/4.34E+00 | 2.93E+01/3.40E+01 | 1.47E−14/3.79E−15 | 1.73E−01/1.01E−01 | 0.00E+00/0.00E+00
F10   8.88E−16/0.00E+00 | 2.02E+00/1.50E+00 | 1.48E−14/1.10E−13 | 9.68E+00/8.88E+00 | 2.69E−15/4.44E−15 | 5.67E−01/2.42E−01 | 0.00E+00/8.88E−16
F11   0.00E+00/0.00E+00 | 8.46E−03/1.12E−02 | 4.96E−03/2.37E−03 | 3.01E−01/8.37E−01 | 0.00E+00/0.00E+00 | 2.16E−01/3.44E−01 | 0.00E+00/0.00E+00
F12   3.60E−05/3.27E−05 | 9.12E+00/4.32E+00 | 2.90E−02/5.78E−02 | 1.75E+04/4.63E+03 | 4.37E−02/3.30E−02 | 3.51E−03/2.54E−03 | 6.16E−34/1.66E−32
F13   2.85E−02/2.93E−02 | 2.99E+00/4.17E−01 | 1.97E−01/5.53E−01 | 2.09E+06/6.85E+05 | 2.13E−01/6.01E−01 | 2.41E−02/2.05E−02 | 2.06E−32/1.46E−32
F14   1.06E+00/2.57E−01 | 7.12E+00/4.69E+00 | 4.78E+00/4.73E+00 | 2.45E+00/2.50E+00 | 2.52E+00/1.62E+00 | 9.98E−01/4.02E−16 | 9.98E−01/0.00E+00
F15   3.28E−04/2.24E−05 | 1.41E−02/2.58E−02 | 4.39E−03/8.27E−03 | 8.82E−04/2.70E−04 | 1.27E−03/1.13E−03 | 3.08E−04/1.61E−07 | 3.07E−04/1.48E−19
F16   −1.03E+00/3.31E−14 | −1.03E+00/5.24E−07 | −1.03E+00/1.97E−08 | −1.03E+00/3.46E−05 | −1.03E+00/1.27E−09 | −1.03E+00/6.29E−13 | −1.03E+00/0.00E+00
F17   3.98E−01/3.35E−07 | 3.98E−01/4.91E−05 | 3.98E−01/3.20E−06 | 4.00E−01/2.19E−03 | 3.98E−01/6.93E−06 | 3.98E−01/1.61E−12 | 3.98E−01/0.00E+00
F18   3.00E+00/9.97E−08 | 1.38E+01/2.24E+01 | 3.00E+00/4.49E−05 | 3.00E+00/1.19E−04 | 3.00E+00/2.37E−04 | 3.00E+00/1.03E−11 | 3.00E+00/5.44E−16
F19   −3.86E+00/2.81E−08 | −3.86E+00/5.25E−05 | −3.86E+00/3.03E−03 | −3.85E+00/2.20E−03 | −3.85E+00/1.83E−02 | −3.86E+00/8.11E−12 | −3.86E+00/1.84E−15
F20   3.14E−94/8.97E−94 | 3.24E−42/7.67E−42 | 4.21E−59/2.23E−59 | 4.41E−15/1.39E−15 | 1.62E−81/7.42E−82 | 1.30E−06/1.18E−06 | 0.00E+00/0.00E+00
F21   −6.75E+00/2.49E+00 | −5.52E+00/3.35E+00 | −1.02E+01/9.38E−04 | −1.96E+00/1.75E+00 | −8.43E+00/2.47E+00 | −1.02E+01/2.00E−08 | −1.02E+01/1.84E−15
F22   −6.51E+00/2.43E+00 | −6.91E+00/3.64E+00 | −1.04E+01/7.20E−04 | −3.85E+00/1.99E+00 | −7.77E+00/3.36E+00 | −1.04E+01/2.88E−08 | −1.04E+01/2.01E−15
F23   −6.21E+00/2.24E+00 | −8.92E+00/2.70E+00 | −1.05E+01/9.23E−04 | −4.24E+00/1.40E+00 | −6.75E+00/3.81E+00 | −1.05E+01/1.28E−08 | −1.05E+01/2.01E−15

Note: The best results are shown in bold.
Abbreviations: CSA, crow search algorithm; GWO, grey wolf optimizer; HHO, Harris hawks optimization; SCA, sine cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.
FIGURE A.1 Box plots of the results of the algorithms on the classical test functions. CSA, crow search algorithm; GWO, grey wolf optimizer; HHO, Harris hawks optimization; SCA, sine cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm [Color figure can be viewed at [Link]]
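Each box in such a plot summarizes one algorithm's 30 run results by a five‐number summary. A minimal sketch of that computation, using linear interpolation between order statistics (one common quantile convention), is:

```python
def five_number_summary(results):
    """Minimum, lower quartile, median, upper quartile, and maximum of a
    sample: the quantities a box plot draws for one algorithm's runs."""
    xs = sorted(results)
    n = len(xs)

    def quantile(q):
        # Linear interpolation between adjacent order statistics.
        pos = q * (n - 1)
        lo = int(pos)
        frac = pos - lo
        return xs[lo] if frac == 0.0 else xs[lo] + frac * (xs[lo + 1] - xs[lo])

    return (xs[0], quantile(0.25), quantile(0.5), quantile(0.75), xs[-1])

# A tightly clustered sample (a short box) versus a scattered one (a tall box).
print(five_number_summary([1.0, 1.1, 1.2, 1.3, 1.4]))
print(five_number_summary([0.0, 2.0, 4.0, 6.0, 8.0]))
```

A narrow box and short whiskers, as TSHH shows here, indicate coherent results across the independent runs.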

Given the statistical results of Table A.2, the box plots of Figure A.1, and the Wilcoxon signed‐rank
test results provided in Table A.3, it can be inferred that the proposed TSHH algorithm outperforms
the competitor algorithms on all classical test functions.
TABLE A.3 Wilcoxon signed‐rank test results of the TSHH versus the competitor algorithms with a 5% significance level on the classical test functions ("–" marks cases where the test could not be computed)

        TSHH vs. HHO       TSHH vs. TSA       TSHH vs. GWO       TSHH vs. SCA       TSHH vs. WOA       TSHH vs. CSA
        p‐value        R   p‐value        R   p‐value        R   p‐value        R   p‐value        R   p‐value        R
TF1     2.5574E−02     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF2     2.2931E−01     =   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF3     6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF4     2.0142E−03     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF5     1.2207E−04     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF6     6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF7     6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF8     6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   2.0142E−03     +   8.0396E−01     =
TF9     –              =   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   –              =   6.1035E−05     +
TF10    –              =   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   9.7656E−04     +   6.1035E−05     +
TF11    –              =   7.8125E−03     +   2.5000E−01     =   6.1035E−05     +   –              =   6.1035E−05     +
TF12    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF13    8.3252E−02     =   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF14    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF15    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF16    7.8125E−03     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF17    2.4414E−04     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF18    1.2207E−04     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF19    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF20    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF21    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF22    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF23    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +

Abbreviations: CSA, crow search algorithm; GWO, grey wolf optimizer; HHO, Harris hawks optimization; SCA, sine cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.

A.2. Experiments on CEC 2017 test functions

The second part of the evaluation of the hybrid optimization algorithm is dedicated to the CEC 2017
test functions. The CEC 2017 suite consists of 29 test functions, two of which have been removed here.
In addition to the exploitation and exploration capabilities, these test functions also assess the ability
to escape from local optima. Therefore, to achieve the desired results, algorithms must maintain a
proper balance between exploitation and exploration, along with the ability to exit or avoid

TABLE A.4 Brief description of CEC 2017 test functions

       Type        Function                                                dim   Range          Fmin
TF24   UM          Shifted and Rotated Bent Cigar Function                 30    [−100,100]d    100
TF25   MM          Shifted and Rotated Rosenbrock's Function               30    [−100,100]d    300
TF26   MM          Shifted and Rotated Rastrigin's Function                30    [−100,100]d    400
TF27   MM          Shifted and Rotated Expanded Schaffer's F6 Function     30    [−100,100]d    500
TF28   MM          Shifted and Rotated Lunacek Bi‐Rastrigin Function       30    [−100,100]d    600
TF29   MM          Shifted and Rotated Noncontinuous Rastrigin's Function  30    [−100,100]d    700
TF30   MM          Shifted and Rotated Levy Function                       30    [−100,100]d    800
TF31   MM          Shifted and Rotated Schwefel's Function                 30    [−100,100]d    900
TF32   Hybrid      Hybrid Function 1 (N=3)                                 30    [−100,100]d    1000
TF33   Hybrid      Hybrid Function 2 (N=3)                                 30    [−100,100]d    1100
TF34   Hybrid      Hybrid Function 3 (N=3)                                 30    [−100,100]d    1200
TF35   Hybrid      Hybrid Function 4 (N=4)                                 30    [−100,100]d    1300
TF36   Hybrid      Hybrid Function 5 (N=4)                                 30    [−100,100]d    1400
TF37   Hybrid      Hybrid Function 6 (N=4)                                 30    [−100,100]d    1500
TF38   Hybrid      Hybrid Function 6 (N=5)                                 30    [−100,100]d    1600
TF39   Hybrid      Hybrid Function 6 (N=5)                                 30    [−100,100]d    1700
TF40   Hybrid      Hybrid Function 6 (N=5)                                 30    [−100,100]d    1800
TF41   Hybrid      Hybrid Function 6 (N=6)                                 30    [−100,100]d    1900
TF42   Composite   Composition Function 1 (N=3)                            30    [−100,100]d    2000
TF43   Composite   Composition Function 2 (N=3)                            30    [−100,100]d    2100
TF44   Composite   Composition Function 4 (N=4)                            30    [−100,100]d    2300
TF45   Composite   Composition Function 5 (N=5)                            30    [−100,100]d    2400
TF46   Composite   Composition Function 6 (N=5)                            30    [−100,100]d    2500
TF47   Composite   Composition Function 7 (N=6)                            30    [−100,100]d    2600
TF48   Composite   Composition Function 8 (N=6)                            30    [−100,100]d    2700
TF49   Composite   Composition Function 9 (N=3)                            30    [−100,100]d    2800
TF50   Composite   Composition Function 10 (N=3)                           30    [−100,100]d    2900


TABLE A.5 Statistical results of the algorithms on the CEC 2017 test functions (each cell gives Mean/STD over the 30 independent runs)

      HHO | TSA | GWO | SCA | WOA | CSA | TSHH
F24   6.11E+07/1.80E+07 | 1.87E+10/6.77E+09 | 2.83E+09/2.07E+09 | 2.00E+10/3.63E+09 | 4.73E+09/1.45E+09 | 5.04E+08/1.71E+08 | 3.97E+03/3.93E+03
F25   5.33E+04/8.38E+03 | 5.90E+04/1.30E+04 | 6.06E+04/1.14E+04 | 8.64E+04/1.53E+04 | 2.33E+05/6.67E+04 | 4.50E+04/1.13E+04 | 5.56E+03/2.06E+03
F26   1.06E+03/8.54E+02 | 4.62E+03/2.04E+03 | 6.16E+02/6.14E+01 | 3.09E+03/8.04E+02 | 1.55E+03/6.15E+02 | 6.83E+02/5.62E+01 | 4.55E+02/3.88E+01
F27   7.65E+02/3.92E+01 | 8.59E+02/5.43E+01 | 6.30E+02/3.09E+01 | 8.28E+02/2.37E+01 | 8.34E+02/4.03E+01 | 6.85E+02/6.12E+01 | 5.96E+02/2.33E+01
F28   6.68E+02/6.33E+00 | 6.80E+02/1.29E+01 | 6.11E+02/4.48E+00 | 6.65E+02/7.61E+00 | 6.78E+02/1.04E+01 | 6.41E+02/1.01E+01 | 6.00E+02/8.55E−02
F29   1.30E+03/4.86E+01 | 1.32E+03/8.46E+01 | 9.05E+02/4.01E+01 | 1.25E+03/6.24E+01 | 1.32E+03/8.77E+01 | 9.89E+02/6.68E+01 | 8.36E+02/2.54E+01
F30   9.80E+02/2.02E+01 | 1.13E+03/6.59E+01 | 9.12E+02/1.71E+01 | 1.09E+03/2.34E+01 | 1.09E+03/6.19E+01 | 9.70E+02/2.07E+01 | 8.81E+02/1.65E+01
F31   7.76E+03/1.15E+03 | 1.47E+04/4.22E+03 | 2.36E+03/7.51E+02 | 9.14E+03/2.03E+03 | 1.05E+04/3.06E+03 | 3.69E+03/1.27E+03 | 9.95E+02/1.21E+02
F32   5.76E+03/7.76E+02 | 7.73E+03/6.40E+02 | 5.33E+03/1.31E+03 | 8.98E+03/2.88E+02 | 7.73E+03/7.83E+02 | 5.17E+03/5.85E+02 | 4.44E+03/5.97E+02
F33   1.46E+03/3.09E+02 | 5.31E+03/1.79E+03 | 2.76E+03/1.20E+03 | 3.67E+03/1.07E+03 | 8.15E+03/3.31E+03 | 1.59E+03/1.99E+02 | 1.16E+03/3.67E+01
F34   3.75E+08/5.64E+08 | 4.57E+09/2.13E+09 | 3.97E+07/3.87E+07 | 2.65E+09/6.40E+08 | 4.13E+08/2.18E+08 | 1.26E+08/9.46E+07 | 6.58E+04/4.45E+04
F35   9.14E+07/2.39E+08 | 3.53E+09/4.60E+09 | 2.11E+07/7.51E+07 | 1.20E+09/5.15E+08 | 1.62E+07/2.25E+07 | 1.11E+05/5.68E+04 | 1.16E+04/8.26E+03
F36   1.07E+06/8.10E+05 | 8.81E+05/7.67E+05 | 8.29E+05/1.13E+06 | 8.87E+05/6.50E+05 | 2.17E+06/1.72E+06 | 1.57E+04/1.98E+04 | 7.15E+03/3.30E+03
F37   8.36E+04/4.23E+04 | 2.03E+08/5.71E+08 | 2.09E+06/4.32E+06 | 5.78E+07/3.32E+07 | 3.63E+06/4.03E+06 | 2.79E+04/9.59E+03 | 5.44E+03/3.73E+03
F38   3.74E+03/6.21E+02 | 3.80E+03/4.46E+02 | 2.66E+03/3.34E+02 | 4.12E+03/3.07E+02 | 4.18E+03/5.54E+02 | 3.07E+03/2.79E+02 | 2.40E+03/1.90E+02
F39   2.48E+03/3.03E+02 | 2.68E+03/3.42E+02 | 2.06E+03/1.83E+02 | 2.80E+03/1.75E+02 | 2.81E+03/1.65E+02 | 2.20E+03/1.93E+02 | 1.88E+03/1.44E+02
F40   1.97E+06/3.55E+06 | 1.77E+07/2.09E+07 | 1.44E+06/1.28E+06 | 1.70E+07/1.10E+07 | 2.39E+07/1.65E+07 | 2.56E+05/2.93E+05 | 1.33E+05/1.35E+05
F41   7.44E+05/6.71E+05 | 4.96E+08/6.88E+08 | 2.89E+06/7.65E+06 | 1.03E+08/6.99E+07 | 2.26E+07/1.54E+07 | 8.81E+05/9.61E+05 | 5.33E+03/2.84E+03
F42   2.84E+03/2.08E+02 | 2.75E+03/1.75E+02 | 2.63E+03/2.27E+02 | 3.00E+03/1.81E+02 | 2.91E+03/1.57E+02 | 2.62E+03/1.58E+02 | 2.28E+03/1.39E+02
F43   2.57E+03/2.65E+03 | 2.65E+03/2.77E+03 | 2.41E+03/2.48E+03 | 2.61E+03/2.69E+03 | 2.68E+03/2.81E+03 | 2.46E+03/2.54E+03 | 2.38E+03/2.46E+03
F44   3.17E+03/2.00E+02 | 3.26E+03/9.47E+01 | 2.78E+03/3.39E+01 | 3.08E+03/4.79E+01 | 3.15E+03/8.85E+01 | 2.96E+03/8.63E+01 | 2.74E+03/2.13E+01
F45   3.38E+03/1.18E+02 | 3.41E+03/1.19E+02 | 2.97E+03/6.42E+01 | 3.25E+03/4.41E+01 | 3.27E+03/1.07E+02 | 3.18E+03/9.31E+01 | 2.94E+03/4.17E+01
F46   2.98E+03/2.00E+01 | 3.59E+03/2.90E+02 | 3.02E+03/5.96E+01 | 3.52E+03/1.43E+02 | 3.21E+03/8.27E+01 | 3.03E+03/4.33E+01 | 2.89E+03/1.01E+01
F47   7.32E+03/1.34E+03 | 8.25E+03/1.53E+03 | 4.96E+03/9.77E+02 | 7.80E+03/3.94E+02 | 8.56E+03/8.31E+02 | 5.69E+03/1.95E+03 | 4.15E+03/1.03E+03
F48   3.78E+03/2.61E+02 | 3.70E+03/2.83E+02 | 3.26E+03/2.40E+01 | 3.53E+03/7.71E+01 | 3.46E+03/1.49E+02 | 3.49E+03/8.07E+01 | 3.23E+03/1.57E+01
F49   3.70E+03/4.17E+02 | 4.85E+03/1.07E+03 | 3.49E+03/1.25E+02 | 4.60E+03/3.95E+02 | 3.80E+03/1.50E+02 | 3.51E+03/1.04E+02 | 3.16E+03/5.34E+01
F50   4.80E+03/4.89E+02 | 5.62E+03/6.93E+02 | 3.92E+03/2.17E+02 | 5.22E+03/3.41E+02 | 5.67E+03/7.12E+02 | 4.44E+03/2.37E+02 | 3.60E+03/1.46E+02

Note: The best results are shown in bold.
Abbreviations: CSA, crow search algorithm; GWO, grey wolf optimizer; HHO, Harris hawks optimization; SCA, sine cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.
FIGURE A.2 Box plots of the results of the algorithms on the CEC 2017 test functions. CSA, crow search algorithm; GWO, grey wolf optimizer; HHO, Harris hawks optimization; SCA, sine cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm [Color figure can be viewed at [Link]]
TABLE A.6 Wilcoxon signed‐rank test results of the TSHH versus the competitor algorithms with a 5% significance level on the CEC 2017 test functions

        TSHH vs. HHO       TSHH vs. TSA       TSHH vs. GWO       TSHH vs. SCA       TSHH vs. WOA       TSHH vs. CSA
        p‐value        R   p‐value        R   p‐value        R   p‐value        R   p‐value        R   p‐value        R
TF24    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF25    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF26    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF27    6.1035E−05     +   6.1035E−05     +   1.1597E−03     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF28    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF29    6.1035E−05     +   6.1035E−05     +   1.2207E−04     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF30    6.1035E−05     +   6.1035E−05     +   2.0142E−03     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF31    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF32    3.0518E−04     +   6.1035E−05     +   8.3252E−02     =   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF33    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF34    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF35    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF36    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF37    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF38    6.1035E−05     +   6.1035E−05     +   4.1260E−02     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF39    3.0518E−04     +   1.2207E−04     +   8.3618E−03     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF40    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF41    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF42    6.1035E−05     +   1.2207E−04     +   1.2207E−04     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF43    6.1035E−05     +   6.1035E−05     +   2.1545E−02     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF44    6.1035E−05     +   6.1035E−05     +   2.0142E−03     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF45    6.1035E−05     +   6.1035E−05     +   1.5143E−01     =   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF46    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF47    1.8311E−04     +   6.1035E−05     +   2.1545E−02     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF48    6.1035E−05     +   6.1035E−05     +   3.0518E−04     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF49    6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +
TF50    6.1035E−05     +   6.1035E−05     +   1.1597E−03     +   6.1035E−05     +   6.1035E−05     +   6.1035E−05     +

Abbreviations: CSA, crow search algorithm; GWO, grey wolf optimizer; HHO, Harris hawks optimization; SCA, sine cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.
local optima. A brief description of these test functions is given in Table A.4 (the details are
available in Reference [52]). The statistical results of the algorithms on the CEC 2017 test
functions are shown in Table A.5.
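The "shifted and rotated" construction named in Table A.4 composes a base function with an affine transformation of the search point and adds a bias equal to the Fmin column. The sketch below uses an illustrative 2‐D shift, rotation, and bias rather than the official CEC 2017 data.

```python
def shifted_rotated(base, shift, rotation, bias):
    """Wrap a base function f as f(M (x - o)) + bias, the pattern behind
    the 'Shifted and Rotated' entries of Table A.4."""
    def f(x):
        z = [xi - oi for xi, oi in zip(x, shift)]                        # shift by o
        zr = [sum(m * zi for m, zi in zip(row, z)) for row in rotation]  # rotate by M
        return base(zr) + bias
    return f

def sphere(x):
    return sum(xi ** 2 for xi in x)

# Illustrative data: identity rotation, shift o = (3, -2), bias 100
# (the bias plays the role of the Fmin column).
identity = [[1.0, 0.0], [0.0, 1.0]]
f = shifted_rotated(sphere, shift=[3.0, -2.0], rotation=identity, bias=100.0)
print(f([3.0, -2.0]))  # at the shifted optimum the value equals the bias
```

The shift moves the optimum away from the origin and the rotation couples the variables, which is what makes these suites harder than the plain classical functions.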
Considering the statistical results of Table A.5, it can be seen that the TSHH algorithm
outperforms the other competing algorithms in terms of average and standard deviation on all
CEC 2017 test functions. Therefore, it can be argued that the TSHH algorithm statistically
outperforms the competitor algorithms.
The box plots of the algorithms on the CEC 2017 test functions are plotted in Figure A.2 to enable
a more thorough comparison. According to the graphs of Figure A.2, the results of the TSHH
algorithm are less scattered on all CEC 2017 test functions, except TF27 and TF32, where the
results of SCA were more coherent. Additionally, on TF47, the results of the GWO, SCA, and
WOA algorithms were more consistent than those of TSHH. Therefore, it can be concluded that
the randomness of the TSHH algorithm is lower than that of the other competitor algorithms.
Moreover, to conduct rigorous comparisons and verify the superiority of the proposed algorithm,
the Wilcoxon signed‐rank test is applied again, and the results are provided in Table A.6.
A detailed examination of the results of Table A.6 shows that only for TF32 and TF45 is there no
significant difference between the TSHH and GWO algorithms. In all of the remaining tests, the
proposed algorithm shows a significant positive difference from the other compared algorithms.
Therefore, it can be concluded that the proposed algorithm clearly surpasses the other competing
algorithms.
Besides, concerning the experimental results of Sections A.1 and A.2, it can be asserted that the
TSHH algorithm performed remarkably well and can efficiently solve various real‐life problems.
