BECE309L AI&ML Module 2
Terminologies Module-2; Topic-2: Search Algorithms
• Search: Searching is a step-by-step procedure to solve a search problem in a given search space. In a problem-solving context, the state space represents all possible states that a system or agent can be in; each state represents a particular configuration, situation, or condition.
• States: In the context of a state space, a "state" refers to a specific configuration or situation that the problem-solving agent can occupy. States can represent a wide range of conditions, depending on the problem.
• A search problem can have three main factors:
• Search Space: Search space represents a set of possible solutions, which a system may
have.
• Initial State: The state from which the agent begins the search.
• Goal test/state: The desired state that the system or agent aims to reach. The goal state is
typically defined in terms of certain criteria or conditions.
• Search tree: A tree representation of a search problem is called a search tree. The root of the search tree corresponds to the initial state.
• Actions: It gives the description of all the available actions to the agent.
• Path: A path is a sequence of actions or operators that leads from the initial state to a particular
state in the search space.
• Transition model: A description of what each action does is called a transition model.
• Search Algorithm: Forward state space search is guided by a search algorithm that
systematically explores the search space, searching for a solution path from the initial state to a
goal state.
• Path Cost: It is a function which assigns a numeric cost to each path.
• Solution: It is an action sequence which leads from the start node to the goal node.
• Optimal Solution: A solution that has the lowest path cost among all solutions.
• Heuristic Functions: Informed search algorithms may employ heuristic functions to guide the
search more efficiently by estimating the cost or distance to the goal state from a given state.
• Informed and Uninformed Search: Informed search algorithms (e.g., A*) use heuristic
information, while uninformed search algorithms (e.g., BFS, DFS) operate without heuristic
guidance.
• Complex Problem-Solving: State space search aids in solving complex problems, optimizing
processes, and improving decision-making in various real-world scenarios.
Module-2; Topic-1: State Space
State Space
• State-space consists of all the
possible states together with
actions to solve a specific
problem.
• In the graphical representation of
state space (such as trees), the
states are represented by nodes
while the actions are represented
by arcs.
• Any state space has an initial
‘Start’ node and an ending ‘Goal’ • The path from the initial start node to the
node or multiple Goal States. final goal node is known as the ‘Solution’ for
the particular problem.
A problem may have multiple goals as well as Source:
paths. [Link]
e/state-space-search-in-artificial-intelligence
Module-2; Topic-2: Search Algorithms
Properties of Search Algorithms
• Completeness: A search algorithm is said to be complete if it is guaranteed to return a solution whenever at least one solution exists for the given input.
• Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all possible solutions, it is said to be an optimal solution.
• Time Complexity: Time complexity is a
measure of time for an algorithm to
complete its task.
• Space Complexity: The maximum storage space required at any point during the search; it grows with the complexity of the problem.
Types of search algorithms Module-2; Topic-2: Search Algorithms
• Based on the search problem, we can classify search algorithms into uninformed (blind) search and informed (heuristic) search algorithms.
Uninformed/Blind Search Module-2; Topic-2: Search Algorithms
Uniform-cost Search Algorithm Module-2; Topic-2: Search Algorithms
Algorithm
• Set a variable NODE to the initial state, i.e.,
the root node and expand it.
• After expanding the root node, select one
node having the lowest path cost and
expand it further. (Remember, the selection
of the node should give an optimal path
cost.)
• If the goal node is reached with the optimal path cost, return the goal state; otherwise, continue the search.
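The steps above can be sketched in Python with a priority queue keyed on path cost g(n). The graph, node names, and edge costs below are illustrative, not taken from the slides:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Always expand the frontier node with the lowest path cost g(n).
    `graph` maps each node to a list of (neighbor, step_cost) pairs."""
    frontier = [(0, start, [start])]              # (path cost, node, path so far)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:                          # goal test at expansion time => optimal
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))
    return None                                   # no path exists

# hypothetical weighted graph
g = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)], 'B': [('G', 1)]}
```

Note that the goal test is applied when a node is popped from the queue, not when it is generated; testing earlier could return a suboptimal path.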
Module-2; Topic-2: Search Algorithms
Depth limited search Algorithm
• Set a variable NODE to the initial state, i.e.,
the root node.
• Set a variable GOAL which contains the
value of the goal state.
• Loop each node by traversing deeply in one
direction/path in search of the goal node.
• While performing the looping, start
removing the elements from the stack in
LIFO order.
• If the goal state is found, return the goal state; otherwise backtrack and expand nodes in another direction.
DFS works by starting from the initial node A (the root node), traversing deeply in one direction till node I, and then backtracking to B, and so on. Therefore, the sequence will be A->B->D->I->E->C->F->G.
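The traversal above can be sketched as a recursive depth-limited DFS; the tree below encodes the slide's example (A is the root, with children as in the figure), and the depth limit ℓ is a parameter:

```python
def depth_limited_search(graph, node, goal, limit, path=None):
    """DFS that refuses to expand nodes below depth `limit` (the cutoff)."""
    if path is None:
        path = [node]
    if node == goal:
        return path
    if limit == 0:                      # cutoff reached: backtrack
        return None
    for child in graph.get(node, []):
        if child not in path:           # avoid cycles along the current path
            result = depth_limited_search(graph, child, goal, limit - 1, path + [child])
            if result is not None:
                return result
    return None

# the tree from the slide's DFS example
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'D': ['I'], 'C': ['F', 'G']}
```

With limit 2 the goal G (at depth 2) is found; with limit 1 the search fails, which illustrates the incompleteness noted in the performance measures below.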
Module-2; Topic-2: Search Algorithms
Depth limited Search Algorithm
The performance measure of DLS
• Completeness: The DLS algorithm is complete if the solution lies within the depth limit ℓ.
• Time Complexity: The time complexity of DLS is O(b^ℓ).
• Space Complexity: The space complexity of DLS is O(b×ℓ).
• Optimal: Depth-limited search can be viewed as a special case of DFS, and it is likewise not optimal, even when ℓ > d.
• Advantages:
• Depth-limited search is memory efficient.
• Disadvantages:
• Depth-limited search has the disadvantage of incompleteness.
• It may not be optimal if the problem has more than one solution.
Iterative Deepening Depth-First Search (IDDFS) Module-2; Topic-2: Search Algorithms
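Since this slide relies on a figure, here is a minimal sketch: IDDFS runs depth-limited DFS with limits 0, 1, 2, … until the goal appears, combining DFS's low memory use with BFS-like completeness. The tree below is illustrative:

```python
def iterative_deepening_search(graph, start, goal, max_depth=20):
    """Run depth-limited DFS with limits 0, 1, 2, ... until the goal is found."""
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:                   # cutoff: give up on this branch
            return None
        for child in graph.get(node, []):
            if child not in path:        # avoid cycles along the current path
                found = dls(child, limit - 1, path + [child])
                if found:
                    return found
        return None

    for limit in range(max_depth + 1):   # deepen the limit one level at a time
        result = dls(start, limit, [start])
        if result:
            return result
    return None

# illustrative tree: A is the root, G the goal at depth 2
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
```

Shallow nodes are re-expanded on every iteration, but since most nodes of a tree live at the deepest level, the repeated work changes only the constant factor, not the O(b^d) time bound.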
Bidirectional Search Algorithm Module-2; Topic-2: Search Algorithms
Informed Search/ Heuristic Search in AI Module-2; Topic-2: Search Algorithms
• Informed search algorithms use knowledge such as how far we are from the goal, the path cost, how to reach the goal node, etc.
• Heuristic is a function which is used in Informed Search,
and it finds the most promising path. Heuristic function
estimates how close a state is to the goal.
• The heuristic function is represented by h(n); it estimates the cost of an optimal path between a pair of states.
• It takes the current state of the agent as its input and
produces the estimation of how close agent is from the goal.
• The heuristic method might not always give the best solution, but it is guaranteed to find a good solution in reasonable time.
Best-first Search (Greedy search) Module-2; Topic-2: Search Algorithms
• Greedy best-first search expands the node that appears closest to the goal, using the evaluation function f(n)=h(n); A* search instead uses f(n)=g(n)+h(n).
• Here h(n) denotes the actual heuristic cost value and h'(n) the estimated heuristic cost value that the algorithm actually computes.
Module-2; Topic-2: Search Algorithms
A* Search Algorithm
S is the root node, and G is the goal node.
Starting from the root node S and moving
towards its next successive nodes A and B. In
order to reach the goal node G, calculate the f(n)
value of node S, A and B using the evaluation
equation i.e.
f(n)=g(n)+h(n)
Calculation of f(n) for node S:
f(S)=(distance from node S to S) + h(S)
0+10=10.
Calculation of f(n) for node A:
f(A)=(distance from node S to A)+h(A)
2+12=14
Calculation of f(n) for node B:
f(B)=(distance from node S to B)+h(B)
3+14=17
Therefore, node A has the lowest f(n) value. Hence, node A will be explored to its next-level nodes C and D, and the lowest f(n) value is calculated again. After calculating, the sequence we get is S->A->D->G with f(n)=13 (lowest value).
Module-2; Topic-2: Search Algorithms
A* Search Algorithm
• Struct Node: This defines a node structure that represents a grid cell. It contains the x and y coordinates of the node, the cost g from the starting node to that node, the heuristic value h (estimated cost from that node to the destination node), and a pointer to the parent node on the path.
• Calculate heuristic: This function calculates a heuristic using the Euclidean distance between a node and the target node.
• AStarSearch: This function runs the A* search algorithm. It
takes the start and destination coordinates, a grid, and returns
a vector of pairs representing the coordinates of the shortest
path from start to finish.
• Primary: The program's main function takes input grids,
origin, and target coordinates from the user. It then calls
AStarSearch to find the shortest path and prints the result.
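The description above refers to a C++ program; a compact Python analogue is sketched below. The grid contents are made up, but the structure follows the description: cells as nodes, g as the cost from the start, and straight-line (Euclidean) distance as h:

```python
import heapq
import math

def a_star_grid(grid, start, goal):
    """A* on a 2D grid; grid[y][x] == 0 is free, 1 is an obstacle.
    Uses Euclidean distance to the goal as the heuristic h(n)."""
    def h(cell):
        return math.dist(cell, goal)

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start, [start])]      # (f, g, cell, path)
    best_g = {start: 0}                              # cheapest known g per cell
    while open_heap:
        f, g, cell, path = heapq.heappop(open_heap)  # lowest f = g + h first
        if cell == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = cell[0] + dx, cell[1] + dy
            if 0 <= ny < rows and 0 <= nx < cols and grid[ny][nx] == 0:
                ng = g + 1                           # unit step cost
                if ng < best_g.get((nx, ny), float('inf')):
                    best_g[(nx, ny)] = ng
                    heapq.heappush(open_heap,
                                   (ng + h((nx, ny)), ng, (nx, ny), path + [(nx, ny)]))
    return None

# a small hypothetical grid with a wall in the middle row
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
```

Each popped cell is the open-list entry with the lowest f = g + h; the `best_g` dictionary plays the role of the closed/visited bookkeeping.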
Module-2; Topic-2: Search Algorithms
A* Search Algorithm
• The performance measure of A* search
• Completeness: A* search is complete; it is guaranteed to reach the goal node on finite graphs with positive step costs.
• Optimality: A heuristic that never overestimates the true cost (an admissible heuristic) always yields an optimal solution.
• Space and time complexity: A* search has O(b^d) space and time complexity in the worst case.
Advantage of A* search
• A* is efficient if an efficient and acceptable
heuristic function is used.
• A* can be used to find optimal solutions
efficiently as long as a meaningful heuristic
can be defined.
Disadvantage of A* search
• A* often runs out of memory on large problems, since it keeps all generated nodes in memory.
Module-2; Topic-2: Search Algorithms
A* Search Algorithm-Applications
• Pathfinding in Games: A* is often used in video games for character movement, enemy AI navigation, and finding the shortest path from one location to another on the game map.
• Robotics and Autonomous Vehicles: A* is used in robotics and autonomous vehicle navigation to plan an optimal route for robots to reach a destination, avoiding obstacles and considering terrain costs. This is crucial for efficient and safe movement in natural environments.
• Maze solving: A* can efficiently find the shortest path through a maze, making it valuable in many maze-solving applications, such as solving puzzles or navigating complex structures.
• Route planning and navigation: In GPS systems and mapping applications, A* can be used to find the optimal route between two points on a map, considering factors such as distance, traffic conditions, and road network topology.
• Puzzle-solving: A* can solve various puzzles, such as sliding puzzles, Sudoku, and the 8-puzzle problem.
• Resource Allocation: In scenarios where resources must be optimally allocated, A* can help find the most efficient allocation path, minimizing cost and maximizing efficiency.
Module-2; Topic-2: Search Algorithms
A* Search Algorithm-Applications
• Network Routing: A* can be used in computer networks to find the most efficient
route for data packets from a source to a destination node.
• Natural Language Processing (NLP): In some NLP tasks, A* can generate coherent and contextual responses by searching over possible word sequences based on their likelihood and relevance.
• Path planning in robotics: A* can be used to plan the path of a robot from one
point to another, considering various constraints, such as avoiding obstacles or
minimizing energy consumption.
• Game AI: A* is also used to make intelligent decisions for non-player characters (NPCs), such as determining the best way to reach an objective or coordinating movements in a team-based game.
Module-2; Topic-3: Search in Complex Environments
Search in Complex Environments
• In search algorithms, when a goal is found, the path to that goal constitutes a solution to the problem.
• However, in many problems, the path to the goal
is irrelevant; the goal state itself is the solution.
• Local search algorithms operate by searching
from a start state to neighboring states, without
keeping track of the paths, nor the set of states
that have been reached. Local search algorithms
can also solve optimization problems, in which
the aim is to find the best state according to an
objective function.
• They are not systematic – they might never
explore a portion of the search space where a
solution actually resides.
• Two advantages: they use very little memory, and they often find reasonable solutions in large or infinite state spaces (where systematic algorithms may be unsuitable).
Module-2; Topic-3: Search in Complex Environments
Different regions in the State
Space Diagram
• Local maximum: A state that is better than its neighboring states, although there exists a state that is better still (the global maximum). It is "better" because the value of the objective function there is higher than at its neighbors.
• Global maximum: It is the best possible
state in the state space diagram. This is
because, at this stage, the objective function
has the highest value.
• Current state: The region of the state space diagram where we are currently present during the search.
• Plateau/flat local maximum: A flat region of the state space where neighboring states have the same value.
• Shoulder: A plateau that has an uphill edge.
• Ridge: A region that is higher than its neighbors but itself has a slope. It is a special kind of local maximum.
Module-2; Topic-3: Search in Complex Environments
Hill Climbing search algorithm
• In Hill Climbing, the algorithm starts with an initial
solution and then iteratively makes small changes to
it in order to improve the solution.
• Hill-climbing search keeps track of one current state
and on each iteration moves to the neighboring state
with highest value—that is, it heads in the direction
that provides the steepest ascent.
• These changes are based on a heuristic function that
evaluates the quality of the solution. The algorithm
continues to make these small changes until it
reaches a local maximum, meaning that no further
improvement can be made with the current set of
moves.
• It terminates when it reaches a "peak" where no neighbor has a higher value.
• Hill climbing does not look ahead beyond the immediate neighbors of the current state.
• Note that one way to use hill-climbing search is to use the negative of a heuristic cost function as the objective function; that will climb locally to the state with the smallest heuristic distance to the goal.
Module-2; Topic-3: Search in Complex Environments
Hill Climbing search algorithm
Key Features:
• Generate and Test Approach: This feature involves
generating neighboring solutions and evaluating their
effectiveness, always aiming for an upward move in
the solution space.
• Greedy Local Search: The algorithm uses a cheap
strategy, opting for immediate beneficial moves that
promise local improvements.
• No Backtracking: Unlike other algorithms, Hill
Climbing Algorithm in AI does not revisit or
reconsider previous decisions, persistently moving
forward in the quest for the optimal solution.
Module-2; Topic-3: Search in Complex Environments
Hill Climbing search algorithm
Advantages of Hill Climbing algorithm:
• Hill Climbing is a simple and intuitive algorithm that is easy to understand and implement.
• It can be used in a wide variety of optimization problems, including those with a large search
space and complex constraints.
• Hill Climbing is often very efficient in finding local optima, making it a good choice for problems
where a good solution is needed quickly.
• The algorithm can be easily modified and extended to include additional heuristics or constraints.
Disadvantages of Hill Climbing algorithm:
• Hill Climbing can get stuck in local optima, meaning that it may not find the global optimum of
the problem.
• The algorithm is sensitive to the choice of initial solution, and a poor initial solution may result in
a poor final solution.
• Hill Climbing does not explore the search space very thoroughly, which can limit its ability to find
better solutions.
• It may be less effective than other optimization algorithms, such as genetic algorithms or
simulated annealing, for certain types of problems.
Module-2; Topic-3: Search in Complex Environments
Simple Hill Climbing
• Simple hill climbing is the simplest way to implement a
hill climbing algorithm.
• Evaluate the initial state. If it is a goal state then stop and
return success. Otherwise, make the initial state as the
current state.
• Loop until the solution state is found or there are no new
operators present which can be applied to the current
state.
• Select a state that has not yet been applied to the current state and apply it to produce a new state.
• Evaluate the new state:
• If the new state is a goal state, then stop and return success.
• If it is better than the current state, then make it the current state and proceed further.
• If it is not better than the current state, then continue in the loop until a solution is found.
• Exit from the function.
The search algorithm evaluates only one neighbor node state at a time and selects the first one that improves on the current cost, setting it as the current state. It checks only one successor state; if that successor is better than the current state it moves there, otherwise it stays in the same state. The features are:
• Less time consuming
• Less optimal solution, and the solution is not guaranteed
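The loop above can be sketched as follows; the 1-D objective f(x) = -(x - 3)² and the ±1 neighbor operator are illustrative, not from the slides:

```python
def simple_hill_climbing(objective, neighbors, state):
    """Move to the FIRST neighbor that improves the objective; stop when none does."""
    while True:
        improved = False
        for candidate in neighbors(state):
            if objective(candidate) > objective(state):   # first better neighbor wins
                state = candidate
                improved = True
                break                                     # don't examine the rest
        if not improved:
            return state                                  # local maximum reached

# hypothetical landscape: maximize f(x) = -(x - 3)^2 over the integers
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
```

Because the first improving neighbor is taken immediately, the algorithm is fast but may settle for a worse local maximum than a variant that inspects all neighbors.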
Steepest-Ascent hill climbing Module-2; Topic-3: Search in Complex Environments
• The steepest-Ascent algorithm is a variation of simple
hill climbing algorithm.
• This algorithm examines all the neighboring nodes of the
current state and selects one neighbor node which is
closest to the goal state.
• Step 1: Evaluate the initial state; if it is the goal state, return success and stop. Otherwise, make the initial state the current state.
• Step 2: Loop until a solution is found or the current state
does not change.
• Let SUCC be a state such that any successor of the
current state will be better than it.
• For each operator that applies to the current state:
• Apply the new operator and generate a new state.
• Evaluate the new state.
• If it is goal state, then return it and quit, else compare it
to the SUCC.
• If it is better than SUCC, then set new state as SUCC.
• If the SUCC is better than the current state, then set
current state to SUCC.
• Step 5: Exit.
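The SUCC bookkeeping above reduces to taking the best of all neighbors; a minimal sketch, using the same illustrative 1-D objective with a wider ±2 neighborhood:

```python
def steepest_ascent_hill_climbing(objective, neighbors, state):
    """Examine ALL neighbors, keep the best one (SUCC), and move only if it
    beats the current state."""
    while True:
        succ = max(neighbors(state), key=objective)   # best neighbor plays SUCC
        if objective(succ) <= objective(state):
            return state                              # no neighbor is better: stop
        state = succ

# hypothetical landscape: maximize f(x) = -(x - 3)^2 over the integers
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 2, x - 1, x + 1, x + 2]
```

Compared with simple hill climbing, each iteration costs more (all neighbors are evaluated) but each move is the steepest available ascent.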
Module-2; Topic-3: Search in Complex Environments
Stochastic hill climbing
• It does not examine all the neighboring nodes before deciding which
node to select. It just selects a neighboring node at random and
decides (based on the amount of improvement in that neighbor)
whether to move to that neighbor or to examine another.
Algorithm
• Evaluate the initial state. If it is a goal state then stop and return
success. Otherwise, make the initial state the current state.
Repeat these steps until a solution is found or the current state does not
change.
Select a state that has not been yet applied to the current state.
Apply the successor function to the current state and generate all
the neighbor states.
Among the generated neighbor states which are better than the
current state choose a state randomly (or based on some
probability function).
If the chosen state is the goal state, then return success; otherwise make it the current state and repeat from step 2.
Exit from the function.
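The random choice among improving neighbors can be sketched as below; the seeded RNG and the 1-D objective are illustrative:

```python
import random

def stochastic_hill_climbing(objective, neighbors, state, rng=random.Random(0)):
    """Pick uniformly at random among the neighbors that improve on the
    current state; stop when no neighbor improves."""
    while True:
        better = [n for n in neighbors(state) if objective(n) > objective(state)]
        if not better:
            return state                 # no improving neighbor: local maximum
        state = rng.choice(better)       # a random, not necessarily best, uphill move

# hypothetical landscape: maximize f(x) = -(x - 3)^2 over the integers
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
```

Randomizing the uphill move converges more slowly than steepest ascent but explores the landscape more broadly, which can help on plateaus and ridges.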
Module-2; Topic-3: Search in Complex Environments
Problems in different regions in
Hill climbing
1. Local Maximum: A local maximum is a peak state in the landscape that is better than each of its neighboring states, but another state exists that is higher than the local maximum.
Solution: Backtracking can be a solution to the local-maximum problem in the state-space landscape. Maintain a list of promising paths so that the algorithm can backtrack and explore other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state contain the same value; because of this, the algorithm cannot find a best direction in which to move. A hill-climbing search may get lost in the plateau area.
Solution: The solution for a plateau is to take big steps (or very small steps) while searching. Randomly select a state far away from the current state, so that the algorithm may land in a non-plateau region.
3. Ridges: A ridge is a special form of local maximum. It is an area higher than its surrounding areas, but it has a slope of its own and cannot be climbed in a single move.
Solution: By using bidirectional search, or by moving in different directions, we can mitigate this problem.
Simulated Annealing Module-2; Topic-3: Search in Complex Environments
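This slide relies on a figure; as a rough sketch, simulated annealing extends hill climbing by sometimes accepting a worse neighbor with probability exp(Δ/T), where the temperature T decays over time, letting the search escape local maxima early and behave like hill climbing late. The landscape, cooling schedule, and parameters below are all illustrative:

```python
import math
import random

def simulated_annealing(objective, neighbor, state, t0=10.0, cooling=0.95,
                        steps=500, rng=random.Random(0)):
    """Accept an improving move always, and a worsening move with probability
    exp(delta / T); as T cools, behavior approaches plain hill climbing."""
    t = t0
    best = state
    for _ in range(steps):
        candidate = neighbor(state, rng)
        delta = objective(candidate) - objective(state)
        if delta > 0 or rng.random() < math.exp(delta / t):
            state = candidate                         # move, possibly downhill
        if objective(state) > objective(best):
            best = state                              # remember the best state seen
        t = max(t * cooling, 1e-6)                    # geometric cooling schedule
    return best

# hypothetical landscape: a local maximum at x = 0, global maximum at x = 10
f = lambda x: -abs(x - 10) + (3 if x == 0 else 0)
step = lambda x, rng: x + rng.choice([-1, 1])
```

Plain hill climbing started at x = 0 would stop immediately at the local maximum; annealing can accept the downhill move to x = 1 while T is still high and then climb toward x = 10.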
Breadth-first search (BFS), Depth-first search (DFS), Depth-limited search (DLS), Iterative Deepening Depth-First Search (IDDFS), Bidirectional Search (BDS), and Uniform-cost Search (UCS).
Conclusion Module-2-Conclusion