Hill-climbing search:
The hill-climbing search algorithm (steepest-ascent version) is simply a loop that continually
moves in the direction of increasing value—that is, uphill.
It terminates when it reaches a “peak” where no neighbor has a higher value.
The algorithm does not maintain a search tree, so the data structure for the current node need
only record the state and the value of the objective function.
Hill climbing does not look ahead beyond the immediate neighbors of the current state. This
resembles trying to find the top of Mount Everest in a thick fog while suffering from amnesia.
PSEUDOCODE:
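A minimal Python sketch of steepest-ascent hill climbing. The problem interface used here (neighbors and value) is illustrative, not taken from any particular library:

def hill_climbing(problem, start):
    # Steepest-ascent hill climbing: always move to the best neighbor,
    # stopping when no neighbor improves on the current state.
    current = start
    while True:
        neighbors = problem.neighbors(current)
        if not neighbors:
            return current
        best = max(neighbors, key=problem.value)
        if problem.value(best) <= problem.value(current):
            return current   # a peak: no neighbor has a higher value
        current = best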
Hill climbing is sometimes called greedy local search because it grabs a good neighbor state
without thinking ahead about where to go next.
Hill climbing often makes rapid progress toward a solution because it is usually quite easy to
improve a bad state.
Unfortunately, hill climbing often gets stuck for the following reasons:
● Local maxima: a local maximum is a peak that is higher than each of its neighboring
states but lower than the global maximum. Hill-climbing algorithms that reach the vicinity
of a local maximum will be drawn upward toward the peak but will then be stuck with
nowhere else to go.
● Ridges: ridges result in a sequence of local maxima that is very difficult for greedy
algorithms to navigate.
● Plateaux: a plateau is a flat area of the state-space landscape. It can be a flat local
maximum, from which no uphill exit exists, or a shoulder, from which progress is
possible.
In each case, the algorithm reaches a point at which no progress is being made.
Simulated annealing:
Simulated annealing is an algorithm that combines hill climbing with a random walk in a way
that yields both efficiency and completeness. In metallurgy, annealing is the process used to
temper or harden metals and glass by heating them to a high temperature and then gradually
cooling them, thus allowing the material to reach a low-energy crystalline state.
To explain simulated annealing, we switch our point of view from hill climbing to gradient
descent (i.e., minimizing cost) and imagine the task of getting a ping-pong ball into the deepest
crevice in a bumpy surface. If we just let the ball roll, it will come to rest at a local minimum.
If we shake the surface, we can bounce the ball out of the local minimum. The trick is to shake
just hard enough to bounce the ball out of local minima but not hard enough to dislodge it from
the global minimum. The simulated-annealing solution is to start by shaking hard (i.e., at a high
temperature) and then gradually reduce the intensity of the shaking (i.e., lower the temperature).
The innermost loop of the simulated-annealing algorithm is quite similar to hill climbing.
Instead of picking the best move, however, it picks a random move. If the move improves the
situation, it is always accepted. Otherwise, the algorithm accepts the move with some probability
less than 1.
The probability decreases exponentially with the “badness” of the move, the amount ΔE by
which the evaluation is worsened: a worsening move is accepted with probability e^(ΔE/T),
where ΔE < 0. The probability also decreases as the “temperature” T goes down: “bad” moves
are more likely to be allowed at the start, when T is high, and become less likely as T
decreases. If the schedule lowers T slowly enough, the algorithm will find a global optimum
with probability approaching 1.
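A minimal Python sketch of this loop, assuming the same illustrative problem interface (neighbors, value) as above and a user-supplied cooling schedule:

import math
import random

def simulated_annealing(problem, start, schedule):
    # schedule(t) maps time step t to a temperature T; the search stops
    # when T reaches 0.
    current = start
    t = 0
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        nxt = random.choice(problem.neighbors(current))
        delta_e = problem.value(nxt) - problem.value(current)
        # Improving moves are always accepted; worsening moves are
        # accepted with probability e^(dE/T), which shrinks as T cools.
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = nxt
        t += 1

For example, schedule = lambda t: max(0.0, 1.0 - 0.001 * t) cools linearly over 1000 steps.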
Local beam search:
The local beam search algorithm keeps track of k states rather than just one. It begins with k
randomly generated states.
At each step, all the successors of all k states are generated. If any one is a goal, the algorithm
halts. Otherwise, it selects the k best successors from the complete list and repeats.
At first sight, a local beam search with k states might seem to be nothing more than running k
random restarts in parallel instead of in sequence.
In fact, the two algorithms are quite different.
In a random-restart search, each search process runs independently of the others.
In a local beam search, useful information is passed among the parallel search threads.
In effect, the states that generate the best successors say to the others, “Come over
here, the grass is greener!” The algorithm quickly abandons unfruitful searches and
moves its resources to where the most progress is being made.
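A minimal Python sketch of this loop, again with an illustrative problem interface (random_state, neighbors, is_goal, value):

def local_beam_search(problem, k):
    # Begin with k randomly generated states.
    states = [problem.random_state() for _ in range(k)]
    while True:
        pool = [s2 for s in states for s2 in problem.neighbors(s)]
        for s in pool:
            if problem.is_goal(s):
                return s
        # Keep the k best successors from the pooled list; states that
        # generate good successors win slots from unfruitful ones.
        states = sorted(pool, key=problem.value, reverse=True)[:k]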
In its simplest form, local beam search can suffer from a lack of diversity among the k
states—they can quickly become concentrated in a small region of the state space, making
the search little more than an expensive version of hill climbing.
A variant called stochastic beam search, analogous to stochastic hill climbing, helps alleviate
this problem.
Instead of choosing the best k from the pool of candidate successors, stochastic beam search
chooses k successors at random, with the probability of choosing a given successor being an
increasing function of its value.
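This value-weighted selection can be sketched with Python's standard library, assuming values are positive:

import random

def stochastic_select(pool, k, value):
    # Each successor is chosen with probability proportional to its value,
    # so better states are favored but weaker ones still have a chance.
    weights = [value(s) for s in pool]
    return random.choices(pool, weights=weights, k=k)

Note that random.choices samples with replacement; a fuller implementation might sample without replacement.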
Stochastic beam search bears some resemblance to the process of natural selection, whereby
the “successors” (offspring) of a “state” (organism) populate the next generation according to its
“value” (fitness).
Genetic algorithms:
A genetic algorithm (or GA) is a variant of stochastic beam search in which successor states are
generated by combining two parent states rather than by modifying a single state.
Like beam searches, GAs begin with a set of k randomly generated states, called the
population. Each state, or individual, is represented as a string over a finite alphabet—most
commonly, a string of 0s and 1s.
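A minimal GA sketch in Python over bit strings, with illustrative parameters; the fitness function is assumed to return positive values so that the weighted parent selection works:

import random

def genetic_algorithm(fitness, k, length, generations=100, p_mutate=0.01):
    # Population of k random bit strings.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(k)]
    for _ in range(generations):
        weights = [fitness(ind) for ind in pop]
        new_pop = []
        for _ in range(k):
            # Fitness-proportional selection of two parents.
            x, y = random.choices(pop, weights=weights, k=2)
            cut = random.randrange(1, length)   # one-point crossover
            child = x[:cut] + y[cut:]
            # Each bit flips with a small mutation probability.
            child = [b ^ 1 if random.random() < p_mutate else b
                     for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)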
Particle Swarm Optimization (PSO)
Introduction to Optimization
Optimization refers to the process of finding the best solution or outcome among a set of
possible solutions. It is widely used in various fields such as engineering, economics, and
computer science to solve problems where you need to maximize or minimize an objective
function.
Introduction to Particle Swarm Optimization (PSO)
Particle Swarm Optimization (PSO) is a population-based optimization technique inspired by the
social behavior of birds flocking or fish schooling. It was developed by James Kennedy and
Russell Eberhart in 1995. PSO is used to find an optimal or near-optimal solution to a problem
by iteratively improving a candidate solution with respect to a given measure of quality.
Key Concepts of PSO
1. Particles: Each potential solution to the optimization problem is considered a particle in
the swarm. Particles move through the solution space to find the optimal solution.
2. Swarm: A collection of particles.
3. Position: The current solution represented by a particle.
4. Velocity: The rate at which a particle changes its position.
5. Fitness: A measure of how good the current solution (position) is according to the
objective function.
PSO Algorithm
The PSO algorithm can be summarized in the following steps:
1. Initialization: Initialize a swarm of particles with random positions and velocities. Each
particle has a memory of its best position (personal best) and knows the best position
found by the swarm (global best).
2. Evaluation: Evaluate the fitness of each particle using the objective function.
3. Update Personal and Global Bests: For each particle, update its personal best if the
current position is better. Update the global best if any particle's current position is better
than the global best.
4. Update Velocity: Update the velocity of each particle based on its personal best and the
global best. The standard velocity update formula is:
v_i(t+1) = w · v_i(t) + c1 · r1 · (pbest_i − x_i(t)) + c2 · r2 · (gbest − x_i(t))
where w is the inertia weight, c1 and c2 are the cognitive and social acceleration
coefficients, and r1, r2 are random numbers drawn uniformly from [0, 1].
5. Update Position: Update the position of each particle using the updated velocity:
x_i(t+1) = x_i(t) + v_i(t+1)
6. Termination: Repeat steps 2 to 5 until a stopping criterion is met, such as a maximum
number of iterations or a satisfactory fitness level.
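A minimal PSO sketch in Python implementing the steps above, minimizing a function f over an illustrative box [-1, 1]^dim; the inertia weight w and coefficients c1, c2 are typical textbook values, not prescribed constants:

import random

def pso(f, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-1, 1) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=f)[:]                # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

For example, pso(lambda x: sum(v * v for v in x), dim=5) converges toward the origin.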
Grey Wolf Optimization (GWO)
Introduction to Grey Wolf Optimization (GWO)
Grey Wolf Optimization (GWO) is a nature-inspired optimization algorithm developed by
Seyedali Mirjalili, Seyed Mohammad Mirjalili, and Andrew Lewis in 2014. It mimics the
leadership hierarchy and hunting mechanism of grey wolves in nature. GWO is used for solving
continuous and discrete optimization problems.
Key Concepts of GWO
1. Grey Wolves: In GWO, the population of candidate solutions is referred to as grey
wolves.
2. Leadership Hierarchy: Grey wolves have a strict social hierarchy consisting of four
levels:
○ Alpha (α): The leader of the pack, responsible for decision-making.
○ Beta (β): The second in command, assists the alpha.
○ Delta (δ): The third level, which advises the alpha and beta.
○ Omega (ω): The lowest ranking wolves, which follow the orders of the other
three levels.
3. Hunting Behavior: The optimization process mimics the hunting behavior of grey
wolves, including encircling prey, hunting, and attacking prey.
GWO Algorithm
The GWO algorithm can be summarized in the following steps:
1. Initialization: Initialize the population of grey wolves with random positions in the search
space. Initialize the alpha, beta, and delta positions as the best three solutions.
2. Encircling Prey: Grey wolves encircle the prey during the hunt. The encircling behavior
is modeled as:
D = |C · X_p(t) − X(t)|,  X(t+1) = X_p(t) − A · D
where X_p is the position of the prey, X is the position of a wolf, and the coefficient
vectors are A = 2a · r1 − a and C = 2 · r2, with r1 and r2 random vectors in [0, 1].
3. Hunting: Update the positions of grey wolves based on the three best solutions (alpha,
beta, and delta), which are taken as estimates of the prey's location. The position update
formula for each wolf is:
X1 = X_α − A1 · |C1 · X_α − X|,  X2 = X_β − A2 · |C2 · X_β − X|,  X3 = X_δ − A3 · |C3 · X_δ − X|
X(t+1) = (X1 + X2 + X3) / 3
4. Attacking Prey: As the wolves get closer to the prey, the coefficient a decreases linearly
from 2 to 0 over the iterations, which reduces exploration and focuses the search on
exploitation.
5. Termination: Repeat steps 2 to 4 until a stopping criterion is met, such as a maximum
number of iterations or a satisfactory fitness level.
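A minimal GWO sketch in Python implementing the steps above, minimizing f over an illustrative box [-1, 1]^dim; a full implementation would also clamp positions to the search bounds:

import random

def gwo(f, dim, n_wolves=20, iters=100):
    X = [[random.uniform(-1, 1) for _ in range(dim)]
         for _ in range(n_wolves)]
    best = min(X, key=f)[:]
    for t in range(iters):
        X.sort(key=f)
        alpha, beta, delta = X[0], X[1], X[2]   # three best wolves
        if f(alpha) < f(best):
            best = alpha[:]
        a = 2 - 2 * t / iters                   # a decreases linearly 2 -> 0
        for i in range(n_wolves):
            new = []
            for d in range(dim):
                cands = []
                for leader in (alpha, beta, delta):
                    r1, r2 = random.random(), random.random()
                    A = 2 * a * r1 - a
                    C = 2 * r2
                    D = abs(C * leader[d] - X[i][d])
                    cands.append(leader[d] - A * D)
                new.append(sum(cands) / 3)      # X(t+1) = (X1 + X2 + X3) / 3
            X[i] = new
    return min(X + [best], key=f)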
BAT Algorithm Optimization
Introduction to BAT Algorithm Optimization
The BAT Algorithm is a nature-inspired optimization algorithm developed by Xin-She Yang in
2010. It is inspired by the echolocation behavior of microbats. Bats use echolocation to detect
prey, avoid obstacles, and locate their roosting crevices in the dark. The BAT algorithm
leverages this echolocation capability to navigate the search space and find optimal solutions.
Key Concepts of BAT Algorithm
1. Bats: In the BAT algorithm, each candidate solution is considered a bat.
2. Echolocation: Bats emit sound pulses and listen for the echoes that bounce back from
obstacles or prey. This behavior is simulated to explore the search space.
3. Frequency: The frequency of the emitted pulse determines the range and resolution of
the search.
4. Loudness and Pulse Rate: Loudness controls the intensity of the search, while the
pulse rate determines the probability of a bat performing a local search around its current
best solution.
BAT Algorithm
The BAT algorithm can be summarized in the following steps:
1. Initialization: Initialize the population of bats with random positions and velocities.
Assign random frequencies, loudness, and pulse rates to each bat.
2. Echolocation: Update the frequency, velocity, and position of each bat based on its
echolocation.
3. Local Search: Perform a local search if a random number is greater than the pulse rate.
Adjust the bat's position slightly around its current best solution.
4. Global Search: Update the global best solution if a bat's new position is better than the
current global best.
5. Loudness and Pulse Rate Update: As a bat finds better solutions, gradually decrease its
loudness and increase its pulse rate, shifting the search from exploration toward
exploitation as the algorithm progresses.
6. Termination: Repeat steps 2 to 5 until a stopping criterion is met, such as a maximum
number of iterations or a satisfactory fitness level.
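A minimal BAT-algorithm sketch in Python implementing the steps above, minimizing f over an illustrative box [-1, 1]^dim; the frequency range, loudness decay alpha, and pulse-rate constant gamma are typical example values:

import math
import random

def bat_algorithm(f, dim, n_bats=20, iters=100,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9):
    x = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    A = [1.0] * n_bats                          # loudness per bat
    r0 = [random.random() for _ in range(n_bats)]
    r = r0[:]                                   # pulse rate per bat
    best = min(x, key=f)[:]
    for t in range(1, iters + 1):
        for i in range(n_bats):
            freq = f_min + (f_max - f_min) * random.random()
            for d in range(dim):
                v[i][d] += (x[i][d] - best[d]) * freq
            cand = [x[i][d] + v[i][d] for d in range(dim)]
            if random.random() > r[i]:
                # Local search: small random walk around the global best.
                cand = [best[d] + 0.01 * random.gauss(0, 1)
                        for d in range(dim)]
            if random.random() < A[i] and f(cand) < f(x[i]):
                x[i] = cand
                A[i] *= alpha                              # loudness falls
                r[i] = r0[i] * (1 - math.exp(-gamma * t))  # pulse rate rises
            if f(x[i]) < f(best):
                best = x[i][:]
    return best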
Glowworm Swarm Optimization (GSO)
Introduction to Glowworm Swarm Optimization (GSO)
Glowworm Swarm Optimization (GSO) is a nature-inspired optimization algorithm developed by
K.N. Krishnanand and D. Ghose in 2005. GSO is inspired by the behavior of glowworms, which
use a luminescent chemical (luciferin) to attract mates or prey. The algorithm mimics this
behavior to find optimal solutions in the search space.
Key Concepts of GSO
1. Glowworms: In GSO, each candidate solution is considered a glowworm.
2. Luciferin: A quantity associated with each glowworm that represents the quality of the
solution. It is analogous to the glowworm's brightness.
3. Local-decision range: The range within which a glowworm can perceive other
glowworms. This range adapts dynamically.
4. Movement: Glowworms move towards other glowworms that have higher luciferin
values.
GSO Algorithm
The GSO algorithm can be summarized in the following steps:
1. Initialization: Initialize the population of glowworms with random positions and assign
initial luciferin values. Set the parameters for the algorithm.
2. Luciferin Update: Update the luciferin values based on the fitness of each glowworm's
position.
3. Movement: Move each glowworm towards a neighboring glowworm that has a higher
luciferin value within its local-decision range.
4. Local-decision Range Update: Adjust the local-decision range of each glowworm to
control the number of neighbors it can perceive.
5. Termination: Repeat steps 2 to 4 until a stopping criterion is met, such as a maximum
number of iterations or a satisfactory fitness level.
PSEUDOCODE:
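A minimal Python sketch of the GSO loop described above, maximizing f over an illustrative box [-1, 1]^dim; rho (luciferin decay), gamma (luciferin gain), and s (step size) are example values, and the adaptive range update is simplified to a fixed range for brevity:

import math
import random

def gso(f, dim, n=30, iters=100, rho=0.4, gamma=0.6, s=0.03, r_init=1.0):
    x = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    luciferin = [5.0] * n
    r = [r_init] * n                  # local-decision range per glowworm
    for _ in range(iters):
        # Luciferin update: decay plus a gain proportional to fitness.
        luciferin = [(1 - rho) * l + gamma * f(xi)
                     for l, xi in zip(luciferin, x)]
        for i in range(n):
            # Neighbors: brighter glowworms within the decision range.
            nbrs = [j for j in range(n) if j != i
                    and luciferin[j] > luciferin[i]
                    and math.dist(x[i], x[j]) < r[i]]
            if not nbrs:
                continue
            j = random.choice(nbrs)   # full GSO chooses probabilistically
            d = math.dist(x[i], x[j])
            if d > 0:
                # Take a step of size s towards the brighter neighbor.
                x[i] = [a + s * (b - a) / d for a, b in zip(x[i], x[j])]
            # A full implementation also adapts r[i] toward a target
            # neighbor count (step 4 above); omitted here for brevity.
    return max(x, key=f)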
Question no. 3 Part 2
In Grey Wolf Optimization (GWO), a nature-inspired optimization algorithm, the hierarchy among
wolves plays a crucial role in guiding the search process. The hierarchy is structured as follows:
1. Alpha (α): The leader of the pack, responsible for decision-making and guiding the hunt.
2. Beta (β): The second-in-command, assisting the alpha and stepping in as a leader if
necessary.
3. Delta (δ): The subordinate to alpha and beta, guiding the lower-ranking wolves
(omegas) and also aiding the alpha and beta.
4. Omega (ω): The lowest-ranking wolves, following the directives of the higher-ranking
wolves.
Interaction of Delta (δ) with Alpha (α), Beta (β), and Omega (ω)
1. Interaction with Alpha (α):
● Guidance and Direction: The delta wolves follow the alpha's guidance and assist in
maintaining the structure of the pack. They ensure that the alpha's decisions and
strategies are implemented effectively within the pack.
● Support: Deltas provide support to the alpha by helping to control and organize the pack
during hunting and other activities. They help maintain order and ensure that the alpha's
commands are executed efficiently.
2. Interaction with Beta (β):
● Coordination and Assistance: Delta wolves work closely with beta wolves to support
the alpha. They assist in coordinating the pack’s activities, ensuring that the alpha’s
decisions are followed.
● Communication: They act as intermediaries between the beta and omega wolves,
ensuring that instructions and feedback flow smoothly throughout the pack. This helps in
maintaining the overall cohesion and effectiveness of the pack.
● Strategic Role: During hunts, deltas help betas in planning and executing strategies to
corner and capture prey. They take on tasks that require more experience and
coordination, complementing the beta’s role.
3. Interaction with Omega (ω):
● Supervision and Control: Delta wolves supervise omega wolves, ensuring that they
follow the pack’s structure and the directives from alpha and beta wolves. They help
keep the omegas in line and manage their activities during hunts and other pack
functions.
● Training and Guidance: Deltas play a mentoring role for the omega wolves, helping
them learn the pack's ways and improve their skills. This training helps in enhancing the
overall efficiency and capabilities of the pack.
● Execution of Tasks: Deltas assign specific tasks to omegas, ensuring that the
lower-ranking wolves contribute effectively to the pack’s objectives. They oversee the
execution of these tasks, providing feedback and corrections as necessary.
Role of Delta (δ) in GWO Algorithm
In the context of the Grey Wolf Optimization algorithm, the delta wolves (δ) play a significant role
in the optimization process. The GWO algorithm simulates the social hierarchy and hunting
behavior of grey wolves to perform optimization.
● Position Updates: The positions of delta wolves are updated based on the positions of
alpha (α) and beta (β) wolves. The algorithm uses the positions of α, β, and δ wolves to
guide the search for optimal solutions. This multi-leader guidance helps in exploring the
search space more effectively.
● Balance Exploration and Exploitation: The inclusion of δ wolves, along with α and β,
helps in balancing exploration and exploitation during the optimization process. While α
primarily drives the search towards promising areas, β and δ assist in refining the search
and avoiding local optima.
● Information Sharing: Delta wolves share information about the search space with alpha
and beta, contributing to a more informed and collective decision-making process. This
information exchange enhances the algorithm’s ability to converge to the optimal
solution.
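In the standard GWO position update, this role is explicit: every wolf moves to the average of the three leader-guided candidate positions, X(t+1) = (X1 + X2 + X3) / 3, so the delta's estimate carries the same one-third weight as the alpha's and the beta's.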