Local Search Algorithms in AI

Artificial Intelligence (AI) is revolutionizing how we solve complex problems and make decisions. One crucial aspect of AI is local search algorithms, which play a significant role in finding good, often near-optimal, solutions in various domains.

What is Local Search in AI?

Local search in AI refers to a family of optimization algorithms used to find a good solution within a given search space. Unlike global search methods that explore the entire solution space, local search algorithms focus on making incremental changes to improve a current solution until they reach a locally optimal or satisfactory solution. This approach is useful in situations where the solution space is vast, making an exhaustive search impractical.

Working of a Local Search Algorithm

The basic working principle of a local search algorithm involves the following
steps:

 Initialization: Start with an initial solution, which can be generated randomly or through some heuristic method.

 Evaluation: Evaluate the quality of the initial solution using an objective function or a fitness measure. This function quantifies how close the solution is to the desired outcome.

 Neighbor Generation: Generate a set of neighboring solutions by making minor changes to the current solution. These changes are typically referred to as "moves."

 Selection: Choose one of the neighboring solutions based on a criterion, such as the improvement in the objective function value. This step determines the direction in which the search proceeds.

 Termination: Continue the process iteratively, moving to the selected neighboring solution and repeating steps 2 to 4 until a termination condition is met. This condition could be a maximum number of iterations, reaching a predefined threshold, or finding a satisfactory solution.
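
To make these steps concrete, here is a minimal sketch of the loop in Python. The helper names (neighbors, evaluate), the iteration limit, and the maximization convention are illustrative assumptions rather than a fixed API:

def local_search(initial, neighbors, evaluate, max_iters=1000):
    # Generic local search: repeatedly move to a better neighboring solution.
    # neighbors(s) should return a list of candidate solutions close to s;
    # evaluate(s) should return a score where higher is better.
    current = initial
    current_score = evaluate(current)
    for _ in range(max_iters):                      # Termination: iteration budget
        candidates = neighbors(current)             # Neighbor generation
        if not candidates:
            break
        best = max(candidates, key=evaluate)        # Selection: pick the best neighbor
        best_score = evaluate(best)
        if best_score <= current_score:             # No improvement: local optimum reached
            break
        current, current_score = best, best_score   # Move and continue
    return current, current_score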
Local Search Algorithms

Several local search algorithms are commonly used in AI and optimization problems. Let's explore a few of them:

1. Hill Climbing

Hill climbing is a straightforward local search algorithm that starts with an initial solution and iteratively moves to the best neighboring solution that improves the objective function.

The Hill Climbing Analogy

 To grasp the essence of Hill Climbing, think of it as climbing a hill or mountain. Imagine you're placed on a hilly landscape, and your objective is to reach the highest peak (representing the optimal solution).

 You start at an initial position and make small steps, always moving in the direction of increasing elevation (improving the solution). Your goal is to keep ascending until you can't ascend any further.
Local Search Nature

 Hill Climbing is a local search algorithm: it explores only the immediate neighborhood of the current solution in search of a better one.

 Because it cannot look beyond its current neighborhood, Hill Climbing may get stuck in local optima (suboptimal solutions).

Hill Climbing Algorithm in AI


Different regions in the state space landscape:

Local Maximum: A state that is better than its neighboring states, but not as good as some other state elsewhere in the landscape.

Global Maximum: The best possible state in the state space landscape; it has the highest value of the objective function.

Current state: The state in the landscape where the agent is currently located.

Flat local maximum: A flat region of the landscape where all the neighboring states of the current state have the same value.

Shoulder: A plateau region that has an uphill edge.

Hill Climbing algorithm:

 Initialization: Begin with an initial solution, often generated randomly or using a heuristic method.

 Evaluation: Calculate the quality of the initial solution using an objective function or fitness measure.

 Neighbor Generation: Generate neighboring solutions by making small changes (moves) to the current solution.

 Selection: Choose the neighboring solution that results in the most significant improvement in the objective function.

 Termination: Continue this process until a termination condition is met (e.g., reaching a maximum number of iterations or finding a satisfactory solution).

Hill climbing has a limitation in that it can get stuck in local optima, which are
solutions that are better than their neighbors but not necessarily the best
overall solution. To overcome this limitation, variations of hill climbing
algorithms have been developed, such as stochastic hill climbing and
simulated annealing.
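
To see this behavior concretely, here is a small Python example of hill climbing on a made-up one-dimensional landscape with two peaks; the landscape f(x), the step size, and the starting points are purely illustrative. Starting on the left slope, the search stops at the lower peak (a local maximum); starting on the right slope, it reaches the global maximum.

def f(x):
    # Two peaks: a local maximum at x = 2 (value 0) and a global maximum at x = 8 (value 9).
    return -(x - 2) ** 2 if x < 5 else 9 - (x - 8) ** 2

def hill_climb(x, step=1, max_iters=100):
    for _ in range(max_iters):
        # Compare the current point with its two immediate neighbors and keep the best.
        best = max([x - step, x, x + step], key=f)
        if f(best) <= f(x):        # No neighbor improves: a (local) maximum is reached
            return x
        x = best
    return x

print(hill_climb(0))   # Starts on the left slope -> stops at x = 2 (local maximum)
print(hill_climb(6))   # Starts on the right slope -> reaches x = 8 (global maximum)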
2. Local Beam Search

Local beam search is a parallelized adaptation of hill climbing, designed to reduce the risk of getting trapped in local optima. Instead of starting with a single initial solution, local beam search begins with multiple solutions, maintaining a fixed number of them (the "beam width") simultaneously. The algorithm explores the neighbors of all these solutions and keeps only the best candidates among them.

 Initialization: Start with multiple initial solutions.

 Evaluation: Evaluate the quality of each initial solution.

 Neighbor Generation: Generate neighboring solutions for all the current solutions.

 Selection: Choose the top solutions based on the improvement in the objective function.

 Termination: Continue iterating until a termination condition is met.

Local beam search is less likely to get stuck in local optima than plain hill climbing because it maintains some diversity in the solutions it explores. However, it requires more memory, since multiple solutions must be stored simultaneously.
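
One plausible Python sketch of this loop is shown below. The beam width k, the iteration limit, and the helper functions neighbors and evaluate are illustrative assumptions, with the objective again assumed to be maximized.

import heapq

def local_beam_search(initial_states, neighbors, evaluate, k=5, max_iters=100):
    # Keep the k best candidates at each step (maximizing `evaluate`).
    beam = heapq.nlargest(k, initial_states, key=evaluate)

    for _ in range(max_iters):
        # Pool the current beam together with every neighbor of every candidate.
        pool = list(beam)
        for state in beam:
            pool.extend(neighbors(state))

        new_beam = heapq.nlargest(k, pool, key=evaluate)
        if new_beam == beam:          # No candidate improved: stop.
            break
        beam = new_beam

    return max(beam, key=evaluate)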
3. Simulated Annealing

Simulated Annealing is a probabilistic optimization technique inspired by the annealing process in metallurgy, where materials are heated and then slowly cooled to remove defects and achieve a stable crystalline structure. It is commonly used for problems where finding the globally optimal solution is challenging, such as traveling salesman problems, scheduling, or complex systems optimization.

 Initialization: Start with an initial solution.

 Evaluation: Evaluate the quality of the initial solution.

 Neighbor Generation: Generate neighboring solutions.

 Selection: Choose a neighboring solution based on the improvement in the objective function and the probability of acceptance.

 Termination: Continue iterating until a termination condition is met.

The key to simulated annealing's success is the "temperature" parameter, which controls the likelihood of accepting worse solutions. Initially, the temperature is high, allowing for more exploration. As the algorithm progresses, the temperature decreases, reducing the acceptance probability and allowing the search to converge towards a better solution.
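
The sketch below illustrates this in Python for a minimization objective. The geometric cooling schedule, the starting temperature, and the Metropolis-style acceptance rule exp(-delta / T) are common but illustrative choices; neighbor and cost are placeholders supplied by the caller.

import math
import random

def simulated_annealing(initial, neighbor, cost,
                        t_start=1.0, t_min=1e-3, cooling=0.95, steps_per_t=100):
    # Minimize `cost` starting from `initial`; `neighbor(s)` returns one random move.
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t_start

    while t > t_min:
        for _ in range(steps_per_t):
            candidate = neighbor(current)
            delta = cost(candidate) - current_cost
            # Always accept improvements; accept worse moves with probability exp(-delta/t).
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        t *= cooling          # Geometric cooling: the temperature slowly decreases.

    return best, best_cost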

Travelling Salesman Problem

The Travelling Salesman Problem (TSP) is a classic optimization problem in which a salesman must find the shortest possible route that visits a set of cities exactly once and returns to the starting city. TSP is NP-hard, meaning that finding an exact solution for large instances becomes computationally infeasible.

Local search algorithms, including hill climbing and simulated annealing, are often used to find approximate solutions to the TSP. In this context, the candidate tours (orderings of the cities) form the solution space, and the objective is to minimize the total distance traveled.

These algorithms iteratively explore different routes, making incremental changes to improve the tour's length. While they may not guarantee the absolute optimal solution, they often find high-quality solutions in a reasonable amount of time, making them practical for solving TSP instances.
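
As a concrete illustration, the following Python sketch applies a simple greedy local search to a small random TSP instance, using segment reversal (a 2-opt style move) as the neighborhood. The random city coordinates and the iteration budget are illustrative assumptions.

import math
import random

def tour_length(tour, cities):
    # Total length of a closed tour over 2-D city coordinates.
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def improve_tour(cities, max_iters=10_000):
    tour = list(range(len(cities)))
    random.shuffle(tour)                      # Random initial tour
    best_len = tour_length(tour, cities)

    for _ in range(max_iters):
        i, j = sorted(random.sample(range(len(tour)), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # Reverse a segment
        cand_len = tour_length(candidate, cities)
        if cand_len < best_len:               # Greedy: keep only improving moves
            tour, best_len = candidate, cand_len

    return tour, best_len

# Example: 20 cities at random coordinates in the unit square.
cities = [(random.random(), random.random()) for _ in range(20)]
tour, length = improve_tour(cities)
print(f"Tour length after local search: {length:.3f}")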
