Wolf Algorithm

The Grey Wolf Optimization (GWO) algorithm is a meta-heuristic technique inspired by the hunting behavior of grey wolves, organized into a hierarchy of Alpha, Beta, Delta, and Omega roles. The algorithm involves initializing a population of wolves, evaluating their fitness, and iteratively updating their positions based on the best solutions found. Key steps in the optimization process include searching for prey, tracking, encircling, and attacking, which are mathematically modeled to guide the wolves towards optimal solutions.

Chapter 5

Grey Wolf Optimizer

5.1 Introduction

The Grey Wolf Optimization (GWO) algorithm is a swarm- or population-based meta-heuristic technique. The algorithm was developed by drawing motivation from the hunting pattern and the strict social hierarchy of grey wolves (GW) [1]. Each individual in a wolf pack is assigned to one of four distinct levels of the hierarchy, ordered from highest to lowest as Alpha (α), Beta (β), Delta (δ), and Omega (ω) (Fig. 5.1). At the top of the hierarchy, the Alpha is the leading individual in charge of the pack and is responsible for orders and decisions. The Beta supports the Alpha in decisions concerning the pack and is involved in other activities as well: Betas command lower-ranked individuals while obeying the orders of the Alpha, act as counselors to the Alpha, and maintain the pack's discipline. The Omega represents the individual with the lowest rank in the hierarchy and is pictured as loyal to all individuals at the levels above; in the absence of Omegas, pack activities such as cub-caring, fighting, and hunting could suffer. Finally, the Delta is distinct from the other three classes (Omega, Alpha, and Beta): Deltas are submissive and loyal to the Alphas and Betas, while they dominate the Omegas. They usually take on roles such as scout, sentinel, hunter, elder, and caretaker [2].
Fig. 5.1 Grey wolf hierarchy

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. M. O. Okwu and L. K. Tartibu, Metaheuristic Optimization: Nature-Inspired Algorithms Swarm and Computational Intelligence, Theory and Applications, Studies in Computational Intelligence 927, https://doi.org/10.1007/978-3-030-61111-8_5

The flowchart representing the GWO optimization procedure is shown in Fig. 5.2 [3]. As the flowchart shows, the first step of the optimization process is the random initialization of the grey wolves. This is followed by the selection of the chaotic input parameters of the grey wolf optimization and the mapping of the chaotic membership values to the respective algorithm, together with the initialization of the process variables and the chaotic number [4]. Afterward, the Chaotic Grey Wolf Optimization (CGWO) parameters of the process are adjusted and tuned. The fitness of every grey wolf in the search space is then estimated using several standard benchmark functions, and the wolves are ranked by fitness. The first wolf after ranking is taken as the Alpha, and the second and third wolves as the Beta and Delta, respectively. The fitness evaluation thus identifies the best three wolves. Next, the chaotic sequence is iterated, updating the chaotic map along with the positions of all grey wolves. This is followed by another fitness evaluation and the replacement of the worst-fit grey wolf by the best-fit one. The final step is to check whether the stopping criteria are met before returning the best output solution (Fig. 5.2).
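The chaotic sequence mentioned above is typically produced by a simple one-dimensional chaotic map. Which map is used, and which GWO parameter it drives, varies between CGWO implementations, so the following Python sketch is purely illustrative: the logistic map, the initial value, and the choice to drive the C coefficient are our own assumptions, not a prescription from the references.

```python
def logistic_map(x0, n):
    """Generate n values of the logistic map x_{k+1} = 4 x_k (1 - x_k)."""
    seq, x = [], x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        seq.append(x)
    return seq

# Illustrative use: scale the chaotic values to [0, 2] and use them in
# place of the random coefficient C = 2 * r2 of standard GWO.
chaotic = logistic_map(0.7, 5)
C_values = [2.0 * c for c in chaotic]
```

Because the map is deterministic, the same initial value always reproduces the same "random-looking" sequence, which is what allows the chaotic map to be updated and replayed at each iteration of the flowchart.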

5.2 Fundamental Theory of the Grey Wolf Optimizer (GWO)

The leadership and the hunting mechanism of grey wolves are the inspiration behind
the Grey Wolf Optimization algorithm. The main steps of Grey wolf hunting can be
summarized as follows [1]:

Fig. 5.2 Optimization procedure flowchart of GWO [1]



1. Searching for the prey;
2. Tracking, chasing, and approaching the prey;
3. Pursuing, encircling, and harassing the prey until it stops moving;
4. Attacking the prey.

The steps thus consist of searching, pursuing, encircling, and attacking. To convert this into a mathematical model, the fittest solution is considered the Alpha, the second and third fittest solutions are the Beta and Delta wolves, respectively, and the Omega wolves follow these three.
The mathematical model of encircling the prey can be described as follows:

D = |C · Xp(t) − X(t)| (5.1)

X(t + 1) = Xp(t) − A · D (5.2)

where
t = current iteration;
Xp = position vector of the prey;
X = position vector of the grey wolf;
A, C = coefficient vectors.
The A and C vectors are calculated as

A = 2a · r1 − a (5.3)

C = 2 · r2 (5.4)

where r1 and r2 are random vectors in the range [0, 1], and the component a decreases linearly from 2 to 0 over the course of the iterations.
These equations can be used to update a grey wolf's position according to the position of the prey. Several positions around the best search agent can be reached from the current position by adjusting the values of A and C. The random vectors r1 and r2 allow the wolves to reach any position between two specific points.
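Eqs. (5.1)–(5.4) translate directly into code. The following Python sketch (function and variable names are our own) performs one encircling update for a single wolf:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def encircle(prey, wolf, a):
    """One encircling update of a single wolf, Eqs. (5.1)-(5.4)."""
    r1 = rng.random(wolf.shape)       # random vector in [0, 1]
    r2 = rng.random(wolf.shape)       # random vector in [0, 1]
    A = 2 * a * r1 - a                # Eq. (5.3)
    C = 2 * r2                        # Eq. (5.4)
    D = np.abs(C * prey - wolf)       # Eq. (5.1)
    return prey - A * D               # Eq. (5.2)

# The component a decreases linearly from 2 to 0 across the iterations:
# a = 2 - 2 * t / max_iter
```

Note that with a = 0 the coefficient A vanishes and the update lands exactly on the prey, which is the "attacking" limit described below.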
The mathematical model of hunting can be described as follows. In general, the hunting process is guided by the Alpha. The process assumes that the Alpha, Beta, and Delta have better knowledge of the location of the prey (i.e., of the optimal solution). The other wolves update their positions based on the positions of the Alpha, Beta, and Delta [1, 5].

Dα = |C1 · Xα − X(t)| (5.5)

Dβ = |C2 · Xβ − X(t)| (5.6)

Dδ = |C3 · Xδ − X(t)| (5.7)

X1 = Xα − A1 · Dα (5.8)

X2 = Xβ − A2 · Dβ (5.9)

X3 = Xδ − A3 · Dδ (5.10)

The position of the grey wolves is then updated using the following equation:

X(t + 1) = (X1 + X2 + X3) / 3 (5.11)

where Xα, Xβ, and Xδ are the position vectors of α, β, and δ, and A1, A2, and A3 are the coefficient vectors.
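Eqs. (5.5)–(5.11) can be sketched in Python as follows (the function and variable names are our own; each leader gets an independently drawn coefficient pair, as in [1]):

```python
import numpy as np

rng = np.random.default_rng(1)

def hunt_update(wolf, alpha, beta, delta, a):
    """Update one wolf from the three leaders, Eqs. (5.5)-(5.11)."""
    candidates = []
    for leader in (alpha, beta, delta):
        r1, r2 = rng.random(wolf.shape), rng.random(wolf.shape)
        A = 2 * a * r1 - a                  # Eq. (5.3)
        C = 2 * r2                          # Eq. (5.4)
        D = np.abs(C * leader - wolf)       # Eqs. (5.5)-(5.7)
        candidates.append(leader - A * D)   # Eqs. (5.8)-(5.10)
    return np.mean(candidates, axis=0)      # Eq. (5.11)
```

In the limit a = 0, every A vanishes and the wolf moves to the centroid of the Alpha, Beta, and Delta positions, which is why the pack converges on the leaders late in the run.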
The mathematical model for attacking the prey can be described as follows. As the prey stops moving, the wolves attack it to finalize the hunting process. Mathematically, this is expressed by the decrease of a from 2 to 0 during the iterations. As a decreases, A decreases as well. A value of |A| < 1 forces the wolf to launch the attack toward the prey, whereas |A| > 1 makes the wolf search for better prey (Fig. 5.3) [1]. The components of the C vector are random values in the interval [0, 2]. The C vector adds a weight to the prey and can make it harder for the wolves to locate it: when C > 1 the importance of the prey is emphasized, and when C < 1 it is reduced.

Fig. 5.3 Grey wolf positioning for search agent [1]
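The switch between searching (|A| > 1) and attacking (|A| < 1) follows from the linear decay of a. The short Python sketch below (the loop structure and seed are our own) records when each regime occurs; since |A| ≤ a by Eq. (5.3), exploration is only possible while a > 1, i.e. during the first half of the run.

```python
import random

random.seed(2)
max_iter = 100
explore_steps = []
for t in range(max_iter):
    a = 2 - 2 * t / max_iter          # linear decay from 2 toward 0
    A = 2 * a * random.random() - a   # Eq. (5.3), scalar case
    if abs(A) > 1:
        explore_steps.append(t)       # |A| > 1: search for better prey
# Every exploring step falls in the first half of the run, because
# for t >= max_iter / 2 we have a <= 1 and hence |A| <= 1.
```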

5.3 Application of Grey Wolf Optimization with a Numerical Example

Minimization of the Korn function:

f(x1, x2) = min[(x1 − 5)² + (x2 − 2)²]
Initialize the grey wolf population: n = 12, with

A = 2a · r1 − a

C = 2 · r2

a = 2 − 2 (iter / max_iter)
We assume that the Alpha, Beta, and Delta have better knowledge about the location of the prey (i.e., the optimal solution). Hence, suppose the randomly initialized positions and corresponding fitness values are:

No. x1 x2 f(x1, x2)
1 6.16 4.41 7.1537
2 6.21 4.09 5.8322
3 7.42 8.38 46.5608
4 2.89 0.87 5.729
5 6.10 3.72 4.1684
6 6.34 3.21 3.2597
7 7.56 6.14 23.6932
8 6.24 4.04 5.6992
9 6.99 4.58 10.6165
10 4.73 3.30 1.7629
11 4.81 3.49 2.2562
12 5.94 3.44 2.9572
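The fitness column of the table above can be checked directly with a small Python verification script (the function and the coordinate list simply restate the table):

```python
# Recompute the fitness column of the table above for
# f(x1, x2) = (x1 - 5)^2 + (x2 - 2)^2
def f(x1, x2):
    return (x1 - 5) ** 2 + (x2 - 2) ** 2

wolves = [(6.16, 4.41), (6.21, 4.09), (7.42, 8.38), (2.89, 0.87),
          (6.10, 3.72), (6.34, 3.21), (7.56, 6.14), (6.24, 4.04),
          (6.99, 4.58), (4.73, 3.30), (4.81, 3.49), (5.94, 3.44)]
fitness = [round(f(x1, x2), 4) for x1, x2 in wolves]

# Ranking by fitness selects wolves 10, 11, and 12 of the table
# (0-based indices 9, 10, 11) as Alpha, Beta, and Delta.
ranked = sorted(range(len(wolves)), key=fitness.__getitem__)
```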

And the corresponding (best) values of X∝ , Xβ and Xδ taken from the previous
table are:

f(x1 ,x2 )
X∝ = [4.73, 3.30] 1.7629
Xβ = [4.81, 3.49] 2.2562
Xδ = [5.94, 3.44] 2.9572

We can now update the position of the grey wolves. For the first iteration (iter = 1, max_iter = 3):

a = 2 − 2 · (1/3) = 1.3333

Dα = |C1 · Xα − X(t)| = |(2 · r2) × [4.73, 3.30] − [6.16, 4.41]|
Dβ = |C2 · Xβ − X(t)| = |(2 · r2) × [4.81, 3.49] − [6.16, 4.41]|
Dδ = |C3 · Xδ − X(t)| = |(2 · r2) × [5.94, 3.44] − [6.16, 4.41]|
X1 = Xα − A1 · Dα = [4.7372, 3.3048] − (2 × 1.3333 × r1 − 1.3333) × Dα
X2 = Xβ − A2 · Dβ = [4.8148, 3.4931] − (2 × 1.3333 × r1 − 1.3333) × Dβ
X3 = Xδ − A3 · Dδ = [5.9444, 3.4433] − (2 × 1.3333 × r1 − 1.3333) × Dδ

where r1 and r2 are random numbers in [0, 1].
The new position of the first grey wolf is

X(t + 1) = (X1 + X2 + X3) / 3 = [4.04, 2.60]
The new updated positions and corresponding fitness values are:

No. x1 x2 f(x1, x2)
1 4.04 2.60 1.2816
2 4.64 3.04 1.2112
3 5.46 3.66 2.9672
4 5.60 3.59 2.8881
5 4.65 3.03 1.1834
6 4.74 3.33 1.8365
7 4.24 2.66 1.0132
8 4.90 3.24 1.5476
9 4.52 2.95 1.1329
10 5.39 3.54 2.5237
11 4.11 2.53 1.073
12 5.09 3.15 1.3306

And the corresponding (best) values of X∝ , Xβ and Xδ taken from the previous
table are:

f(x1 ,x2 )
X∝ = [4.24, 2.66] 1.0132
Xβ = [4.11, 2.53] 1.073
Xδ = [4.65, 3.03] 1.1834

We can now update the position of the grey wolves. For the second iteration (iter = 2):

a = 2 − 2 · (2/3) = 0.667

Dα = |C1 · Xα − X(t)| = |(2 · r2) × [4.24, 2.66] − [4.04, 2.60]|
Dβ = |C2 · Xβ − X(t)| = |(2 · r2) × [4.11, 2.53] − [4.04, 2.60]|
Dδ = |C3 · Xδ − X(t)| = |(2 · r2) × [4.65, 3.03] − [4.04, 2.60]|
X1 = Xα − A1 · Dα = [4.24, 2.66] − (2 × 0.667 × r1 − 0.667) × Dα
X2 = Xβ − A2 · Dβ = [4.11, 2.53] − (2 × 0.667 × r1 − 0.667) × Dβ
X3 = Xδ − A3 · Dδ = [4.65, 3.03] − (2 × 0.667 × r1 − 0.667) × Dδ

where r1 and r2 are random numbers in [0, 1].
The new position of the first grey wolf is

X(t + 1) = (X1 + X2 + X3) / 3 = [4.48, 2.78]
The new updated positions and corresponding fitness values are:

No. x1 x2 f(x1, x2)
1 4.48 2.78 0.8788
2 4.56 2.82 0.866
3 4.58 2.83 0.8653
4 4.74 2.94 0.9512
5 4.63 2.86 0.8765
6 4.59 2.84 0.8737
7 4.58 2.84 0.882
8 4.57 2.83 0.8738
9 4.57 2.83 0.8738
10 4.57 2.83 0.8738
11 4.57 2.82 0.8573
12 4.56 2.82 0.866

And the corresponding (best) values of X∝ , Xβ and Xδ taken from the previous
table are:

f(x1 ,x2 )
X∝ = [4.57, 2.82] 0.8573
Xβ = [4.56, 2.82] 0.866
Xδ = [4.56, 2.82] 0.866

In the previous steps, the fitness values have been calculated. The fitness function (or evaluation function) is used to evaluate how close a given solution is to the optimum of the problem at hand; this function is problem-dependent [5]. The fitness of every wolf, including the Alpha, Beta, and Delta, is calculated at each iteration. After the last iteration, the position of the Alpha wolf is taken as the optimal solution to the problem.
This model was implemented in MATLAB using the Grey Wolf Optimizer
(GWO) as described by Mirjalili et al. [1]. To illustrate the approach, 30 search
agents were considered (n = 30) and the maximum number of iterations was set to
1000. The best solution obtained by GWO is: [4.9996 2.0003] and the best optimal
value of the objective function found by GWO is 2.4264e-07. The parameter space
and the objective space are shown in Figs. 5.4 and 5.5.

Fig. 5.4 Parameter space

Fig. 5.5 Objective space
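The MATLAB experiment above can be reproduced in outline with a compact Python implementation of GWO. This is our own sketch following Eqs. (5.1)–(5.11), not the authors' code; the search bounds, random seed, and iteration count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def f(x):
    """Objective from Sect. 5.3: f(x1, x2) = (x1 - 5)^2 + (x2 - 2)^2."""
    return (x[0] - 5) ** 2 + (x[1] - 2) ** 2

def gwo(obj, dim=2, n=30, max_iter=500, lb=-10.0, ub=10.0):
    X = rng.uniform(lb, ub, size=(n, dim))            # random initial pack
    for t in range(max_iter):
        fit = np.array([obj(x) for x in X])
        order = np.argsort(fit)                       # rank the pack
        alpha, beta, delta = (X[i].copy() for i in order[:3])
        a = 2 - 2 * t / max_iter                      # linear decay 2 -> 0
        for i in range(n):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a                    # Eq. (5.3)
                C = 2 * r2                            # Eq. (5.4)
                D = np.abs(C * leader - X[i])         # Eqs. (5.5)-(5.7)
                new += leader - A * D                 # Eqs. (5.8)-(5.10)
            X[i] = np.clip(new / 3, lb, ub)           # Eq. (5.11)
    fit = np.array([obj(x) for x in X])
    best = X[np.argmin(fit)]
    return best, obj(best)

best, best_val = gwo(f)
```

With a fixed seed the pack converges close to the true optimum [5, 2], consistent with the MATLAB result reported above.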

References

1. Mirjalili, S., S.M. Mirjalili, and A. Lewis. 2014. Grey wolf optimizer. Advances in Engineering Software 69: 46–61.
2. Shuyu, D., D. Niu, and Y. Li. 2018. Daily peak load forecasting based on complete ensemble empirical mode decomposition with adaptive noise and support vector machine optimized by modified grey wolf optimization algorithm. Energies 11 (1): 163–175. https://doi.org/10.3390/en11010163.
3. Kohli, M., and S. Arora. 2018. Chaotic grey wolf optimization algorithm for constrained optimization problems. Journal of Computational Design and Engineering 5 (4): 458–472.
4. Gandomi, A.H., and X.-S. Yang. 2014. Chaotic bat algorithm. Journal of Computational Science 5 (2): 224–232.
5. Mirjalili, S. 2015. How effective is the grey wolf optimizer in training multi-layer perceptrons. Applied Intelligence 43 (1): 150–161.
