Fundamentals of Artificial
Intelligence (Course Code:
4350705)
Prof. Asha Prajapati
Computer Engineering Department-Diploma
S.P.B. Patel Engineering College. Linch, Mehsana
UNIT - 2
STATE SPACE SEARCH AND
HEURISTIC TECHNIQUE
CONTENT….
2.1 Solving problems as state space search
2.2 Production system
2.3 Problem characteristics
2.4 Depth First Search
2.5 Breadth-First Search
2.6 Heuristic function
2.7 Hill climbing
2.8 Best First Search
Introduction
Problem solving is a big part of AI. It means finding
solutions based on given information.
To solve a problem in AI, we need to create a
method that can come up with the right answer.
Here are the four steps to build such a system:
1. Define the Problem Precisely
2. Analyze the Problem
3. Isolate and Represent Necessary Knowledge
4. Choose and Apply the Best Technique
1. Define the Problem Precisely:
● Clearly describe what the problem is.
● Specify the starting point (input) and what the solution
should look like (output).
2. Analyze the Problem:
● Identify important details that affect how we might
solve the problem.
● Understand which features of the problem are most
crucial.
3. Isolate and Represent Necessary Knowledge:
● Gather and organize the information needed to solve
the problem.
4. Choose and Apply the Best Technique:
● Select the most suitable problem-solving technique and apply it to the problem.
2.1
Solving problems as state
space search
1. Defining Problem & Search
A problem is defined by:
1. State Space: List all possible setups of objects.
2. Initial States: Identify where the problem starts.
3. Goal States: Identify where you want to end up.
4. Rules: Outline the actions you can take.
Defining State & State Space
● STATE - A state shows the problem's situation
at a specific time.
● STATE SPACE - State space is all possible states
you can reach from the start state.
● It looks like a graph with states as nodes and
actions as arcs.
● A path is a series of states linked by actions.
● The problem’s solution is a path through this state space from the initial state to a goal state.
Example: Consider Water Jug problem
The Problem:
You have two jugs:
● A 4-gallon jug.
● A 3-gallon jug.
● You also have a pump with unlimited water and the
ground to pour water on. You need to get exactly 2
gallons of water in the 4-gallon jug.
Initial and Goal States:
Initial State: (0, 0) - Both jugs are empty.
Goal State: (2, n) - The 4-gallon jug has 2
gallons, and the 3-gallon jug can have any
amount.
State Space Representation:
A state is represented as (x, y) where:
x is the amount of water in the 4-gallon jug (0 ≤
x ≤ 4).
y is the amount of water in the 3-gallon jug (0 ≤
y ≤ 3).
Assumptions:
● You can fill a jug from the pump.
● You can pour water out of a jug onto the
ground.
● You can pour water from one jug into the
other.
● There are no measuring markings on the
jugs.
Solution: Here’s one possible sequence of steps to solve the problem:
(0, 0) → (0, 3) → (3, 0) → (3, 3) → (4, 2) → (0, 2) → (2, 0)
1. Fill the 3-gallon jug.
2. Pour it into the 4-gallon jug.
3. Fill the 3-gallon jug again.
4. Pour into the 4-gallon jug until it is full (this leaves 2 gallons in the 3-gallon jug).
5. Empty the 4-gallon jug.
6. Pour the 2 gallons into the 4-gallon jug.
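The solution steps above can also be found automatically. Below is a minimal Python sketch (not part of the course material) that finds a shortest move sequence by breadth-first search over (x, y) states; the function name `water_jug_bfs` and the exact rule encoding are illustrative assumptions:

```python
from collections import deque

def water_jug_bfs(capacity=(4, 3), goal=2):
    """BFS over (x, y) states; x = 4-gallon jug, y = 3-gallon jug."""
    cx, cy = capacity
    start = (0, 0)
    parent = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == goal:
            # reconstruct the path from start to goal
            path, s = [], (x, y)
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        successors = [
            (cx, y), (x, cy),                          # fill a jug from the pump
            (0, y), (x, 0),                            # empty a jug onto the ground
            (x - min(x, cy - y), y + min(x, cy - y)),  # pour 4-gal -> 3-gal
            (x + min(y, cx - x), y - min(y, cx - x)),  # pour 3-gal -> 4-gal
        ]
        for s in successors:
            if s not in parent:
                parent[s] = (x, y)
                queue.append(s)
    return None

print(water_jug_bfs())
```

Because BFS explores level by level, the returned path uses the minimum number of moves (six, matching the hand-worked solution).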
8 Puzzle Problem
2.2
Production system
Production System
A production system helps structure AI programs to
facilitate search processes. Here’s how it works:
Components of a Production System:
1) Rules
2) Knowledge/Databases
3) Control Strategy
4) Rule Applier
1. Rules:
● Each rule has two parts:
● Left side: Determines when the rule can be
applied.
● Right side: Describes what to do when the
rule is applied.
2. Knowledge/Databases:
● Contains all the information needed for the task.
● Some information is permanent, while other information changes as the problem is solved.
Cont.
3. Control Strategy:
● Specifies the order in which rules are checked
against the database.
● Resolves conflicts when multiple rules can be
applied at once.
4. Rule Applier:
● Implements the control strategy.
● Applies the rules to solve the problem.
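As an illustration only, the four components above can be sketched in a few lines of Python. The rules here (a hypothetical "cooking" task) are invented for demonstration; the structure is what matters: rules with a left side (condition) and right side (action), a database, and a control strategy applied by a rule applier.

```python
# Each rule is a (condition, action) pair over a working-memory database (a set).
rules = [
    (lambda db: "raw" in db,    lambda db: (db - {"raw"}) | {"washed"}),
    (lambda db: "washed" in db, lambda db: (db - {"washed"}) | {"cooked"}),
]

def run(database, goal):
    """Control strategy: apply the first applicable rule until the goal holds."""
    while goal not in database:
        for condition, action in rules:
            if condition(database):          # left side: when the rule applies
                database = action(database)  # right side: what the rule does
                break
        else:
            return None                      # no rule applies: failure
    return database

print(run({"raw"}, "cooked"))
```

Swapping in a different control strategy (e.g. picking the last applicable rule, or resolving conflicts by priority) changes the search behaviour without touching the rules, which is the modularity benefit described below.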
Steps to Solve a Problem:
1. Define the Problem:
● Specify the start and goal states.
● Define operators to move between states.
2. Search for a Solution:
● Find a path from the initial state to the goal state.
● Model this process as a production system.
Benefits of a Production System:
1. Structured AI Programs:
Helps organize the search process efficiently.
2. Modularity:
Rules can be added, removed, or changed independently.
3. Understandable Rules:
Rules are written in a natural and clear way.
Characteristics of Production Systems:
1. Monotonic Production System:
● Applying one rule doesn’t prevent applying
another rule later.
● Rules are independent.
2. Non-Monotonic Production System:
● Applying one rule might prevent another rule
from being applied.
Cont.
3. Partially Commutative Production System:
● If a sequence of rules transforms state x to state y,
then any allowable order of these rules also
transforms state x to state y.
4. Commutative Production System:
● Both monotonic and partially commutative.
Control Strategies
Control strategies help us choose the next step when searching
for a solution to a problem.
A good control strategy should:
● Lead to progress.
● Be organized and methodical.
Types of Control Strategies:
1. Uninformed/Blind Search
2. Informed/Directed Search
1. Uninformed/Blind Search:
No extra information beyond the problem is used.
The entire search space is explored.
Examples:
BFS (Breadth First Search)
DFS (Depth First Search)
DLS (Depth Limited Search).
2. Informed/Directed Search:
Uses some information to guide the search.
Helps focus on the most promising areas.
Examples:
Best First Search
A*
Problem Decomposition
Means-End Analysis.
2.3
Problem characteristics
1. Is the problem decomposable into a set of
independent smaller or easier sub-problems?
1. Decomposable Problems:
● Can be broken into smaller, independent parts.
● Easier to solve by tackling each part separately.
● Example: To solve ∫ (x² + 3x + sin²x cos²x) dx:
○ Break it into three parts: ∫ x² dx, ∫ 3x dx, and ∫
sin²x cos²x dx.
○ Solve each part, then add the results.
Cont.
2. Non-Decomposable Problems:
● Cannot be broken into smaller parts.
● Must be solved in a specific sequence.
● Example: Blocks World Problem:
○ Move blocks step-by-step to reach the goal state.
○ Each step depends on the previous one.
Summary:
Decomposable Problems: Break into smaller parts to
solve.
Non-Decomposable Problems: Solve in a sequence
with interdependent steps.
2. Can Solution Steps Be Ignored or Undone?
When solving problems, we need to know if we can ignore
or undo steps. Problems fall into three categories:
1. Ignorable:
● Steps that don’t affect the outcome and can be ignored.
● Example: Theorem proving:
○ If a step is useless, we can skip it without any
problem.
Cont.
2. Recoverable:
● Steps that can be undone if needed.
● Example: 8-puzzle problem:
○ If a wrong move is made, we can
backtrack and fix it by taking extra
steps.
Cont.
3. Irrecoverable:
● Steps that can’t be ignored or undone.
● Example: Chess game:
○ A wrong move can’t be taken back, so we
must make the best of the situation.
Cont.
Control Structures:
● Ignorable Problems: Simple control, no need to
backtrack.
● Recoverable Problems: Control allows backtracking to
undo mistakes.
● Irrecoverable Problems: Careful decision-making since
moves are final.
Understanding these types helps in choosing the right
problem-solving method.
3. Is the Problem's Universe Predictable?
Types of Problems:
1. Certain Outcome Problems:
● The result is predictable.
● Examples:
○ Eight puzzles, water jug problem.
● Benefits:
○ You can plan a sequence of steps that
guarantees a solution.
○ Planning helps avoid unnecessary steps.
2. Uncertain Outcome Problems:
● The result is not predictable.
● Examples:
○ Playing cards.
● Challenges:
○ Planning can only suggest steps that might work,
not guarantee success.
○ Finding a solution can be very difficult and
expensive because there are many possible
paths to explore.
○ These problems are hard to solve because you
can't always predict the next step.
Summary:
● Certain Outcome Problems: Predictable results,
easier to plan and solve.
● Uncertain Outcome Problems: Unpredictable
results, harder to plan and solve, with many
possible paths.
4. Is a Good Solution Obvious Without Comparing All Possible Solutions?
There are two types of problems: Any Path Problems and
Best Path Problems.
1. Any Path Problems:
● Examples: Water Jug Problem, 8 Puzzle Problem.
● We are happy with any solution, no matter the path
taken.
● We use heuristic methods to find a solution quickly
without checking all options.
● These problems can be solved in a reasonable time.
2. Best Path Problems:
● Example: Traveling Salesman Problem (finding the shortest path).
● We need the best solution, not just any solution.
● We explore all possible paths using an exhaustive search to find the best one.
● These problems are harder and take more time to solve.
Summary:
● Any Path Problems: Any solution is fine; solved
quickly with heuristics.
● Best Path Problems: Only the best solution is
acceptable; need to check all paths.
5. Is the Desired Solution a State or a Path to a State?
1. State of the World:
● The solution is a specific situation or outcome.
● Example: In natural language processing, the goal
is to understand the meaning of the sentence “The
bank president ate a dish of pasta salad with the
fork.”
○ We need the correct interpretation of the
sentence.
○ We don’t need the details of how we found this interpretation.
2. Path to a State:
● The solution is the sequence of steps to
reach a specific outcome.
● Example: In the water jug problem, we
need to show how we reached the final
state (2, 0).
○ It’s not enough to say we solved it.
○ We need to show the steps we took to reach the final state.
Summary:
State of the World: The solution is just the final
outcome.
Path to a State: The solution includes the steps
taken to reach the final outcome
6. What is the Role of Knowledge?
Knowledge plays a key role in solving problems, even with
unlimited computing power. Here’s why:
Knowledge Base: The amount of knowledge you have can
greatly affect how well you solve a problem.
Example 1: Chess
● Knowing the basic rules for making legal moves is
important.
● Extra knowledge about strategies and tactics helps find
better moves faster and makes the game more
realistic.
Example 2: Predicting Political Trends
● Requires a lot of knowledge to make accurate
predictions.
● Without enough information, finding a good
solution is very difficult.
Summary:
More knowledge helps improve problem-solving by
making the search process faster and more effective.
7. Does the Task Require Interaction with a Person?
Problems can be divided into two types based on
whether they need interaction with a person:
1. Solitary Tasks:
● The computer works on the problem on its own.
● The computer gets a problem description and
provides an answer without needing to
communicate with a person.
● Example: Simple theorem proving. The computer
uses rules and laws to prove a theorem without
needing further input.
Cont.
2. Conversational Tasks:
● The computer and a person interact during the
process.
● This interaction might be to help the computer,
provide extra information to the user, or both.
● Example: Medical diagnosis. People want to
understand how the computer reached its
conclusion, so they interact with it for
explanations.
Summary:
Solitary Tasks: No need for communication; the
computer solves the problem on its own.
Conversational Tasks: Requires communication
between the person and the computer for
explanations or additional information.
Problem Classification
When we look at problems closely, we can see that they
fit into different types or categories. Here’s a simple
way to understand it:
Problem Classification:
Problems can be grouped into broad categories based
on their characteristics.
By asking the right questions, we can figure out which
category a problem belongs to.
Summary:
Problems can be classified into different types.
Understanding these types helps us solve them
more effectively.
Control Strategies
Control strategies help us choose the next
step when solving a problem. A good
strategy should:
1. Move Forward: Help us make progress.
2. Be Systematic: Follow a clear and
organized method
There are two main types:
1. Uninformed (Blind) Search:
● No extra information is used beyond the
problem itself.
● Searches through all possibilities.
● Examples: Breadth First Search (BFS), Depth
First Search (DFS), Depth Limited Search (DLS).
1. Informed (Directed) Search:
● Uses extra information to find solutions more
efficiently.
● Examples: Best First Search, A*, Problem Decomposition.
2.4
Depth First Search
Depth First Search
● Recursive Algorithm.
● Start from root node and follows each
path to its greatest depth node before
moving to next path.
● Implemented using STACK. (LIFO)
DFS - Algorithm
1. Push the root node onto the stack (PUSH operation).
2. While the stack is not empty:
a. Remove the top node (POP operation).
i. If node = goal → Stop.
ii. Otherwise, push all children of the node onto the stack.
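The algorithm above can be sketched in Python with an explicit stack; the example graph below is a hypothetical one chosen for illustration:

```python
def dfs(graph, start, goal):
    """Iterative DFS using an explicit stack (LIFO)."""
    stack = [start]
    visited = set()
    while stack:
        node = stack.pop()              # POP the most recently pushed node
        if node == goal:
            return True
        if node in visited:
            continue
        visited.add(node)
        # reversed() so the leftmost child is explored first
        for child in reversed(graph.get(node, [])):
            stack.append(child)
    return False

# hypothetical graph for demonstration
graph = {"S": ["A", "C"], "A": ["B"], "B": ["D"], "C": ["G"]}
print(dfs(graph, "S", "G"))   # True
```

The `visited` set prevents the infinite-loop problem noted under the disadvantages below whenever the state space contains cycles.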
Example
Example - 2
Path
S→A→B→D→E→C→G
Advantages of DFS
● Requires less memory.
● Takes less time to reach the goal node if it explores the right path first.
Disadvantages of DFS
● No guarantee of finding a solution.
● Can get stuck in an infinite loop on cyclic or infinite state spaces.
Time and Space Complexity of DFS
TC → O(b^d)
SC → O(b·d) (only the current path and its siblings are stored)
b → Branching Factor
d → Depth
2.5
Breadth-First Search
BFS
● Explores all the nodes at a given depth before
proceeding to the next level.
● Uses a QUEUE to implement. (FIFO)
BFS - Algo.
1. Enter the starting node on the queue.
2. If the queue is empty, then return fail and stop.
3. If the first element on the queue is the GOAL node, then return success and stop.
Else
4. Remove and expand the first element from the QUEUE and place its children at the end of the queue.
5. Go to step 2.
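These steps can be sketched in Python with a FIFO queue; the example graph is a hypothetical one for illustration:

```python
from collections import deque

def bfs(graph, start, goal):
    """BFS using a FIFO queue; explores the graph level by level."""
    queue = deque([start])
    visited = {start}
    while queue:
        node = queue.popleft()          # remove the first element
        if node == goal:
            return True
        for child in graph.get(node, []):
            if child not in visited:    # place unseen children at the end
                visited.add(child)
                queue.append(child)
    return False

# hypothetical graph for demonstration
graph = {"S": ["A", "B"], "A": ["C"], "B": ["D"], "D": ["G"]}
print(bfs(graph, "S", "G"))   # True
```

The only structural difference from the DFS sketch is the data structure: a queue (FIFO) here instead of a stack (LIFO), which is exactly what makes the search level-by-level.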
EX-2
Advantages of BFS
● Finds a solution if one exists (complete).
● Finds the minimal solution, i.e. the one with the least number of steps.
Disadvantages of BFS
● Requires more memory.
● Needs a lot of time when the solution is far from the root node.
Time and Space Complexity of BFS
TC & SC → O(b^d)
b → Branching Factor
d → Depth
2.6
Heuristic function
Heuristic Search
● It is a simple searching technique that tries
to optimize a problem using Heuristic
function.
● Optimization means that we try to solve a
problem in minimum number of steps or
cost.
Heuristic Function h(n)
● It is a function h(n) that gives an estimate of the cost of getting from node ‘n’ to the ‘goal’ state.
● It helps in selecting the optimal node for expansion.
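As one concrete example of a heuristic function, a commonly used h(n) for the 8-puzzle (mentioned earlier in this unit) counts the misplaced tiles; this particular choice of heuristic is an assumption for illustration, not part of the slide:

```python
def misplaced_tiles(state, goal):
    """h(n): number of tiles out of place (blank = 0 is not counted).
    Estimates how far the current board is from the goal board."""
    return sum(1 for s, g in zip(state, goal) if s != g and s != 0)

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 5, 6, 0, 7, 8)   # tiles 7 and 8 are out of place
print(misplaced_tiles(state, goal))   # 2
```

Each misplaced tile needs at least one move, so this h(n) never overestimates the true cost, which makes it admissible in the sense defined below.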
Ex.
Two routes from Mehsana to Surat (figure omitted):
● Route R1: total distance 170 km (legs of 100 km and 70 km), heuristic value h = 100.
● Route R2: total distance 120 km (legs of 100 km and 20 km), heuristic value h = 20.
The lower heuristic value guides the search toward R2, the shorter route.
Heuristic Function h(n)
Types of Heuristic Function h(n)
There are two types of h(n):
1. Admissible
2. Non - Admissible
Admissible Heuristic Function h(n)
● A heuristic function is admissible if it never overestimates the cost of reaching the goal.
h(n) ≤ h*(n)
Here,
h(n) → heuristic (estimated) cost
h*(n) → actual cost
So, the heuristic cost should be less than or equal to the actual cost.
Non-Admissible Heuristic Function h(n)
● A non-admissible heuristic may overestimate the cost of reaching the goal.
h(n) > h*(n)
Here,
h(n) → heuristic (estimated) cost
h*(n) → actual cost
So, the heuristic cost may be greater than the actual cost.
Total cost = search cost +
path cost
Ex - Admissible Heuristic Function h(n)
Heuristic cost:
h(B) = 3, h(C) = 4, h(D) = 5
f(n) = h(n) + g(n)
Cost = Heuristic cost + Actual cost:
B = 1 + 3 = 4
C = 1 + 4 = 5
D = 1 + 5 = 6
Actual cost from A to G = 1 + 3 + 5 + 2 = 11
h(B) = 3, so h(n) = 3 and h*(n) = 11
h(n) ≤ h*(n) since 3 ≤ 11, so the heuristic is admissible.
Ex - Non-Admissible Heuristic Function h(n)
Heuristic cost:
h(D) = 5
f(n) = h(n) + g(n)
Cost = Heuristic cost + Actual cost
Actual cost from A to G = 1 + 3 = 4
h(D) = 5, so h(n) = 5 and h*(n) = 4
h(n) > h*(n) since 5 > 4, so the heuristic is non-admissible.
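The two worked examples can be verified with a small helper function; `is_admissible` is an illustrative name, and the h and h* values below are taken from the examples above:

```python
def is_admissible(h, h_star):
    """h(n) is admissible iff it never overestimates the true cost h*(n)."""
    return all(h[n] <= h_star[n] for n in h)

# values from the two worked examples
print(is_admissible({"B": 3}, {"B": 11}))   # True  (3 <= 11: admissible)
print(is_admissible({"D": 5}, {"D": 4}))    # False (5 > 4: non-admissible)
```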
Heuristic Search Techniques
1. Best First Search: Explores the most promising node first,
based on a given heuristic.
2. A* Search: Combines the cost to reach a node and an estimated cost to reach the goal using the formula: f(n) = g(n) + h(n).
3. Greedy Best First Search: Chooses the next node based
solely on the heuristic function (ignores cost so far).
4. Hill Climbing: Moves towards the highest value or closest
goal, making decisions locally.
5. Simulated Annealing: Similar to hill climbing but allows some
"bad" moves to avoid getting stuck in local maxima.
2.7
Hill climbing
Hill climbing algo.
● Local search algorithm.
● Greedy approach.
● No backtracking
Hill climbing algo.
● It is a variant of generate and test method in
which feedback from test procedure is used to
help generator decide which direction to move in
search space.
● Always moves in single direction.
● It is like DFS.
Hill climbing algo.
● Start with the Current State.
● Generate a New State from it.
● If the new state is better than the current state, the new state becomes the current state (CS = NS).
● Repeat until no better new state can be generated, then end.
Algorithm
Step 1: Evaluate the initial state, if it is goal state then return
success and Stop.
Step 2: Loop Until a solution is found or there is no new operator
left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check new state:
a. If it is goal state, then return success and quit.
b. Else if it is better than the current state then assign new
state as a current state.
c. Else if not better than the current state, then return to step 2.
Step 5: Exit.
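The steps above can be sketched as a short Python function; the objective function f and the neighbour generator here are toy assumptions chosen so the loop has a clear peak:

```python
def hill_climb(start, neighbors, value):
    """Simple hill climbing: repeatedly move to a better neighbour;
    stop when no neighbour improves on the current state."""
    current = start
    while True:
        candidates = neighbors(current)           # apply operators (Step 3)
        if not candidates:
            return current
        best = max(candidates, key=value)         # best-looking move (greedy)
        if value(best) <= value(current):         # no improvement: stop,
            return current                        # possibly at a local maximum
        current = best                            # new state becomes current

# toy objective: maximise f(x) = -(x - 3)^2, peak at x = 3
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climb(0, step, f))   # 3
```

Note that the loop stops as soon as no neighbour improves the value, which is exactly how the algorithm gets trapped on local maxima and plateaus (see the limitations below).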
Features of Hill Climbing:
● Generate and Test: Hill Climbing is a type of the "Generate and
Test" method. It creates new options and tests them to decide
the next move.
● Greedy Approach: It always chooses the move that looks best
right now, aiming to improve the solution step-by-step.
● No Backtracking: Once it makes a move, it doesn't go back to
previous states. It only moves forward.
● Deterministic: Given the same starting point and problem, it will
always give the same result. There’s no randomness involved.
● Local Search: It looks for better solutions by making small changes in the current area. It might not find the best overall (global) solution.
State-space Diagram for Hill Climbing:
● Local Maximum: Local maximum is a state which is
better than its neighbor states, but there is also
another state which is higher than it.
● Global Maximum: Global maximum is the best
possible state of state space landscape. It has the
highest value of objective function.
● Current state: It is a state in a landscape diagram
where an agent is currently present.
● Flat local maximum: It is a flat space in the
landscape where all the neighbor states of current
states have the same value.
● Shoulder: It is a plateau region which has an uphill edge, so the search can eventually escape it.
Types of Hill Climbing Algorithm:
1. Simple hill Climbing
2. Steepest-Ascent hill-climbing
3. Stochastic hill Climbing
1. Simple hill Climbing
Simple Hill Climbing is a basic way to find a better solution. It
works like this:
● Look at one neighbor of the current state.
● If this neighbor is better, move to this neighbor.
● If not, stay where you are.
Features:
● Takes less time.
● Might not always find the best solution.
2. Steepest-Ascent hill-climbing
● The Steepest-Ascent algorithm is a type of hill-climbing
algorithm. It looks at all the nearby options from the
current position and chooses the one that is closest to the
goal.
● This method takes more time because it checks many
options before deciding.
3. Stochastic hill Climbing
● Stochastic hill climbing is a search method that doesn't
check all nearby options before making a move.
● Instead, it picks one neighbor at random and decides
whether to move to that neighbor or look at another
option.
Limitation
1. Local Maxima
2. Plateau
3. Ridge
1. Local Maxima
A local maximum is like a peak in a landscape that is
higher than the areas around it, but there might be an
even higher peak somewhere else.
Solution:
● The backtracking technique can help find a better
peak. It keeps track of good paths so the algorithm
can go back and try different ones.
2. Plateau / Flat maxima
A plateau is a flat part of the search space where all nearby
options have the same value. Because of this, the algorithm can't
find the best direction to move. Hill-climbing search can get stuck
in this flat area.
Solution:
● To get out of a plateau, the algorithm can take either big or
small steps while searching. Another solution is to randomly
choose a state far from the current one, which may help the
algorithm find a non-flat area.
3. Ridges
A ridge is a special form of the local maximum. It has an
area which is higher than its surrounding areas, but itself
has a slope, and cannot be reached in a single move.
Solution:
● With the use of bidirectional search, or by moving in
different directions, we can improve this problem.
Advantages of Hill Climbing:
● Simplicity: Easy to understand and use for
optimization problems.
● Low Memory Usage: Uses less RAM compared to
tree search methods.
● Quick Results: Often finds a good solution quickly,
even if it’s not the best possible.
Disadvantages of Hill Climbing:
● Local Optima: Can get stuck at local peaks,
missing the best overall solution.
● Superficial Search: Only looks at nearby solutions,
not exploring further.
● Sensitive to Start: Results can vary a lot based on
the initial starting point.
Application of Hill Climbing
1. Machine Learning
2. Robotics
3. Network Design
4. Game Playing
5. NLP
2.8
Best First Search
Best First Search
● Uses a method to decide which nearby node
looks the best, then explores it.
● Category of heuristic or informed search.
● Priority queue is used to store cost of nodes.
● Greedy search algorithm.
● Combination of BFS and DFS.
Algorithm
Create a priority queue ‘PQ’ containing the initial state
Loop
If PQ = empty return fail
Else
Node ← remove_first (PQ)
If node = goal
Return path from initial to NODE
Else
Generate all successor of node And insert newly
generated node into PQ according to cost value
End Loop
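The algorithm above can be sketched with Python’s `heapq` as the priority queue; the graph edges and straight-line distances below follow the worked example on the next slides:

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Greedy best-first search: always expand the node with the
    smallest heuristic value h(n)."""
    pq = [(h[start], start, [start])]       # priority queue ordered by h(n)
    closed = set()
    while pq:
        _, node, path = heapq.heappop(pq)   # remove_first(PQ)
        if node == goal:
            return path                     # path from initial state to goal
        if node in closed:
            continue
        closed.add(node)
        for child in graph.get(node, []):
            if child not in closed:         # insert successors by cost value
                heapq.heappush(pq, (h[child], child, path + [child]))
    return None                             # PQ empty: fail

# graph and straight-line distances matching the worked example
graph = {"A": ["B", "C", "D"], "C": ["E", "F"], "F": ["G"]}
h = {"A": 40, "B": 32, "C": 25, "D": 35, "E": 19, "F": 17, "G": 0}
print(best_first_search(graph, h, "A", "G"))   # ['A', 'C', 'F', 'G']
```

The expansion order reproduces the OPEN/CLOSE trace in the example: A is expanded first, then C (h = 25 beats B and D), then F (h = 17), and finally G.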
Example
Example
Straight line distance (Given)
A → G = 40
B → G = 32
C → G = 25
D → G = 35
E → G = 19
F → G = 17
G→G=0
H → G = 10
OPEN CLOSE
[A] []
[C,B,D] [A]
[F,E,B,D] [A,C]
[G,E,B,D] [A,C,F]
[A,C,F,G]
PATH = A → C → F → G (44)
Time and Space Complexity of Best First Search
TC & SC → O(b^d)
b → Branching Factor
d → Depth of the tree
Ex - 2
OPEN CLOSE
[S] []
[B,A] [S]
[F,E,A] [S,B]
[G,I,E,A] [S,B,F]
[S,B,F,G]
PATH = S → B → F → G
Hill climbing reference: https://javatpoint.com/hill-climbing-algorithm-in-ai