Chapter Three: Searching and Planning
Objectives
Problem-solving agent
Problem cont'd
[Figure: components of a problem – states, actions, a start state, a goal state, and the solution path from start to goal.]
Problem Solving Agent cont'd
– In problem solving by searching, a solution can be described in two ways:
– as a state sequence or as an action sequence.
– For example, consider the vacuum cleaner world with the initial state shown below.
– Solution as a state sequence:
[Figure: the sequence of vacuum-world states visited when the agent applies Suck and then Move Right.]
In general, an agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best sequence.
Problem solving agent cont'd
Example: Road map of Ethiopia
[Figure: road map of Ethiopia – cities (Aksum, Mekele, Gondar, Lalibela, Bahr Dar, Dessie, Awasa) connected by roads labelled with distances such as 80, 100, 110, 150, 170, 180, 200 and 250 km.]
Problem Solving agent cont'd
Example: Road map of Ethiopia
- Current position of the agent (initial state): Awasa
- Needs to arrive at: Gondar
- Formulate goal: be in Gondar
- Formulate problem:
  states: various cities
  actions: drive between cities
- Find solution:
  sequence of cities, e.g., Awasa, Adama, Addis Ababa, Dessie, Gondar
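To make the formulation concrete, here is a minimal Python sketch of the route-finding problem. The distances in the map figure are not readable in this handout, so the connectivity and numbers below are illustrative assumptions, not the actual map data.

```python
# Illustrative road map as an undirected graph: city -> {neighbour: distance}.
# NOTE: connections and distances are placeholders, not the figures from the slide's map.
ROADS = {
    "Awasa":       {"Adama": 210},
    "Adama":       {"Awasa": 210, "Addis Ababa": 100},
    "Addis Ababa": {"Adama": 100, "Dessie": 400, "Bahr Dar": 560},
    "Dessie":      {"Addis Ababa": 400, "Gondar": 470},
    "Bahr Dar":    {"Addis Ababa": 560, "Gondar": 180},
    "Gondar":      {"Dessie": 470, "Bahr Dar": 180},
}

INITIAL_STATE = "Awasa"          # current position of the agent
GOAL_STATE = "Gondar"            # formulated goal: be in Gondar

def actions(state):
    """Actions available in a city: drive to any neighbouring city."""
    return list(ROADS[state])

def result(state, action):
    """Driving to a neighbouring city puts the agent in that city."""
    return action

def goal_test(state):
    return state == GOAL_STATE

def step_cost(state, action):
    """Path cost: distance driven between the two cities."""
    return ROADS[state][action]
```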
Types of Problems
Four types of problems exist in real situations:
1. Single-state problem
 – The environment is deterministic and fully observable
 – Out of the possible state space, the agent knows exactly which state it will be in; the solution is a sequence
2. Sensorless problem (conformant problem)
 – The environment is non-observable
 – It is also called a multi-state problem
 – The agent may have no idea where it is; the solution is a sequence
Types of Problems cont'd
3. Contingency problem
 – The environment is nondeterministic and/or partially observable
 – It is not possible to know the effect of the agent's actions in advance
 – Percepts provide new information about the current state
4. Exploration problem
 – The environment is partially observable
 – It is also called an unknown state space problem
Problem types: a summary
Vacuum Cleaner World Example cont'd
Example: vacuum world
Single-state
– The starting state is known, say state #5.
– What is the solution?
Vacuum Cleaner World Example cont'd
Example: vacuum world
- Single-state, start in #5. Solution? [Right, Suck]
Vacuum Cleaner World Example cont'd
Example: vacuum world
Sensorless
– The agent doesn't know what the current state is
– So the start state could be any of {1,2,3,4,5,6,7,8}
– What is the solution?
Vacuum Cleaner World Example cont'd
Example: vacuum world
Sensorless
• Right goes to {2,4,6,8}
• Solution: [Right, Suck, Left, Suck]
Vacuum Cleaner World Example cont'd
Example: vacuum world
Contingency
• Nondeterministic:
 - Suck may dirty a clean carpet
• Partially observable:
 - Hence we have only partial information
• Assume the current percept is [L, Clean], i.e. start in #5 or #7
• What is the solution?
Vacuum Cleaner World Example cont'd
Example: vacuum world
Contingency
• Solution: [Right, if dirt then Suck]
Vacuum Cleaner World Example cont'd
• Real-world problems to be solved by searching algorithms
 – We have seen two such problems: the road map problem and the vacuum cleaner world problem
 – Others include:
  • Route finding
  • Touring problems
  • VLSI layout
  • Robot navigation
  • Automatic assembly sequencing
  • Drug design
  • Internet searching
Vacuum Cleaner World Example cont'd
Example: vacuum world
• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??
Vacuum Cleaner World Example cont'd
Example: vacuum world
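The answers on the original slide are not reproduced in this handout. The sketch below shows one standard formulation of the two-square vacuum world (a state is the agent's location plus the dirt status of each square, with unit step cost); treat the details as assumptions, not the slide's own answers.

```python
from itertools import product

# State: (agent_location, dirt_in_A, dirt_in_B); 2 * 2 * 2 = 8 states in total.
STATES = list(product(("A", "B"), (True, False), (True, False)))
ACTIONS = ("Left", "Right", "Suck")
INITIAL_STATE = ("A", True, True)      # any state can serve as the initial state

def result(state, action):
    """Deterministic successor function for the two-square vacuum world."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        return (loc,
                False if loc == "A" else dirt_a,
                False if loc == "B" else dirt_b)
    return state

def goal_test(state):
    """Goal: no dirt in either square."""
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b

STEP_COST = 1   # path cost: one unit per action
```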
Example: 8-puzzle
• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??
01/24/2025 AI/COSC 32
Example: 8-puzzle cont'd
Initial state:      Goal state:
8 2 _               1 2 3
3 4 7               4 5 6
5 1 6               7 8 _
Example: 8-puzzle cont'd
[Figure: part of the 8-puzzle state space – the initial state (8 2 _ / 3 4 7 / 5 1 6) and the successor states produced by sliding a tile into the blank.]
Example: 8-puzzle cont'd
Size of the state space = 9!/2 = 181,440
At 10 million states/sec, the entire space can be examined in a small fraction of a second.
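A quick numeric check of the figures on this slide (the 10 million states/second rate is the slide's own assumption):

```python
import math

# Only half of the 9! tile permutations are reachable (parity argument).
reachable_states = math.factorial(9) // 2
print(reachable_states)                                   # 181440
print(reachable_states / 10_000_000, "seconds")           # ~0.018 s at 10 million states/sec
```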
Example: 8-queens problem
[Figure: chessboard illustrations of the 8-queens problem.]
Example: 8-queens cont'd
Formulation #1:
• States: any arrangement of 0 to 8 queens on the board
• Initial state: 0 queens on the board
• Actions: add a queen to any square
• Goal test: 8 queens on the board, none attacked
• Path cost: none
→ 64^8 states with 8 queens
Example: 8-queens cont'd
Formulation #2:
• States: any arrangement of k = 0 to 8 queens in the k leftmost columns with none attacked
• Initial state: 0 queens on the board
• Successor function: add a queen to any square in the leftmost empty column such that it is not attacked by any other queen
• Goal test: 8 queens on the board
→ 2,067 states
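A minimal sketch of formulation #2's successor function, representing a state as a tuple of queen rows for the k leftmost columns (this particular representation is an assumption; the slide does not fix one):

```python
def attacks(row1, col1, row2, col2):
    """True if queens at (row1, col1) and (row2, col2) attack each other."""
    return row1 == row2 or abs(row1 - row2) == abs(col1 - col2)

def successors(state):
    """state = tuple of rows of the queens already placed in columns 0..k-1.
    Add a queen to the leftmost empty column so that it attacks no other queen."""
    col = len(state)
    if col == 8:
        return []
    return [state + (row,)
            for row in range(8)
            if not any(attacks(r, c, row, col) for c, r in enumerate(state))]

def goal_test(state):
    return len(state) == 8        # 8 non-attacking queens on the board

print(successors(()))             # 8 ways to place the first queen
```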
Example: robot assembly
• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??
Searching For Solution (Tree search algorithms)
– A state is a (representation of) a physical configuration
– A node is a data structure constituting part of a search tree
– It includes:
 • state,
 • parent node,
 • action,
 • depth, and
 • one or more costs [such as path cost g(x), heuristic cost h(x), and evaluation function cost f(x)]
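A minimal sketch of the node data structure just described. The field names are illustrative, and taking f(x) = g(x) + h(x) is an assumption (the slide only names the three costs):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                         # the state this node represents
    parent: Optional["Node"] = None    # the node this one was generated from
    action: Any = None                 # action applied to the parent to reach this node
    depth: int = 0                     # number of steps from the root
    g: float = 0.0                     # path cost g(x) from the root
    h: float = 0.0                     # heuristic cost h(x) towards the goal

    @property
    def f(self) -> float:
        """Evaluation function cost f(x), here taken as g(x) + h(x)."""
        return self.g + self.h

    def path(self):
        """Recover the sequence of states from the root to this node."""
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))
```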
Searching For Solution (Tree search algorithms)
• A search process can be viewed as building a search tree over the state space.
• A search tree is a tree structure defined by the initial state and a successor function.
• The root node of the search tree represents the initial state and has no parent.
• A child node is a node adjacent to its parent node, obtained by applying an operator or rule.
Tree search example cont'd
Tree search example
[Figure: partial search tree for the road-map problem, rooted at Awasa and expanding towards Gondar through cities such as Gambela, Bahr Dar, Addis Ababa (AA), Lalibela and Debre M.]
General tree search cont’d
Implementation: general tree search
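The implementation figure from the original slide is not reproduced in this handout, so the following is only a hedged sketch of generic tree search, written in the same "queue of paths" style as the later pseudocode slides. The way new paths are inserted into the frontier determines the concrete strategy.

```python
from collections import deque

def tree_search(initial_state, successors, goal_test, insert):
    """Generic tree search over paths.
    successors(state) returns the states reachable in one step;
    insert(frontier, new_paths) decides where new paths go
    (front -> depth-first, back -> breadth-first, ordered by cost -> uniform-cost)."""
    frontier = deque([[initial_state]])          # paths, each starting at the root
    while frontier:
        path = frontier.popleft()                # take the first path
        state = path[-1]
        if goal_test(state):
            return path                          # solution = sequence of states
        new_paths = [path + [s] for s in successors(state) if s not in path]
        insert(frontier, new_paths)              # strategy-specific insertion
    return None                                  # frontier empty -> failure
```

For example, `insert = lambda frontier, new: frontier.extend(new)` gives breadth-first behaviour, while `insert = lambda frontier, new: frontier.extendleft(reversed(new))` gives depth-first behaviour.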
Search strategies cont'd
– A search strategy is defined by picking the order of node expansion.
– Strategies are evaluated along the following dimensions:
  completeness: does it always find a solution if one exists?
  time complexity: number of nodes generated
  space complexity: maximum number of nodes in memory
  optimality: does it always find a least-cost solution?
– Time and space complexity are measured in terms of
  b: maximum branching factor of the search tree
  d: depth of the least-cost solution
  m: maximum depth of the state space (may be ∞)
– Generally, search strategies can be classified into two groups: uninformed and informed search strategies.
Uninformed search (blind search) strategies
Uninformed search strategies (Blind Search)
– use only the information available in the problem definition
– They have no information about the number of steps or the path cost from the current state to the goal
– They can distinguish the goal state from other states
– They are still important because there are problems with no additional information.
Six kinds of such search strategies will be discussed, each depending on the order of expansion of successor nodes:
1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening search
6. Bidirectional search
Uninformed search (blind search) strategies
[Figure: a search tree with branching factor b, maximum depth m, and a goal node G.]
Generating action sequences – search trees
Breadth-first search
Breadth first search cont'd
– In general, all the nodes are expanded at a given depth in the search tree before any nodes at the next level are expanded.
– That is, BFS expands all nodes at level d before expanding nodes at level d+1.
– It checks all paths of a given length before moving to any longer path.
– It expands the shallowest node first.
Breadth first search cont'd
[Figure: BFS on an example search tree – move downwards, level by level, until the goal G is reached.]
Breadth first search cont’d
3. IF goal reached
THEN success;
ELSE failure;
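Steps 1–2 of the slide's pseudocode are not reproduced in this handout; the sketch below follows the same "queue of paths" scheme, so treat it as an illustration rather than the slide's exact algorithm: put the root path in a FIFO queue, repeatedly extend the first path by all successors, and stop when the goal is reached (step 3 above).

```python
from collections import deque

def breadth_first_search(root, successors, goal_test):
    """BFS: expand all paths of length d before any path of length d+1."""
    queue = deque([[root]])                      # queue holding the root path
    while queue:                                 # extend paths level by level
        path = queue.popleft()
        state = path[-1]
        if goal_test(state):
            return path                          # goal reached -> success
        for nxt in successors(state):
            if nxt not in path:                  # avoid trivial loops along a path
                queue.append(path + [nxt])
    return None                                  # queue empty -> failure
```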
Breadth first search cont’d
Properties of breadth-first search
Depth-first Search (DFS)
Depth-first Search cont'd
Depth-first search – chronological backtracking
[Figure: DFS on an example tree rooted at S, exploring one branch (A, B, C, D, …) as deep as possible before backtracking.]
• Select a child
 • convention: left-to-right, or maybe alphabetical order
• Repeatedly go to the next child, as long as possible.
• Return to left-over alternatives (higher up) only when needed.
Depth-first search cont'd
Depth-first search (LIFO) algorithm
1. STACK <-- path only containing the root;
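Only step 1 of the slide's pseudocode appears in this handout; the following is a hedged sketch of the rest, using a stack (LIFO) of paths:

```python
def depth_first_search(root, successors, goal_test):
    """DFS: always extend the most recently generated path first."""
    stack = [[root]]                             # 1. STACK holds the root path
    while stack:                                 # 2. extend the last path first
        path = stack.pop()
        state = path[-1]
        if goal_test(state):
            return path                          # goal reached -> success
        # Push children in reverse so the leftmost child is explored first.
        for nxt in reversed(successors(state)):
            if nxt not in path:                  # avoid cycling along this path
                stack.append(path + [nxt])
    return None                                  # stack empty -> failure
```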
Depth-first search cont'd
• Complete: Yes, if the state space is finite;
            No, if the state space contains infinite paths or loops
• Time: O(b^m)
• Space: O(bm), i.e. at most bm+1 nodes (linear space)
• Optimal: No
Thus the worst-case time complexity is O(b^m).
However, for very deep (or infinite, due to cycles) trees this search may spend a lot of time (possibly forever) searching down the wrong branch.
Backtracking search uses even less memory: one successor instead of all b.
Depth-first search cont'd
• Time requirements of depth-first search
 – It is also more likely to return a solution path that is longer than the optimal one.
 – Because it may not find a solution if one exists, this search strategy is not complete.
 – Remarks: avoid DFS for large or infinite maximum depths.
Depth-Limited Strategy (depth-first search with cutoff)
• Depth-first with a depth cutoff k (nodes below the cutoff depth are not expanded)
Depth-Limited Strategy (depth-first search with cutoff)
DFS evaluation:
• DFS is the method of choice when there is a known (and reasonable) depth bound, and finding any solution is sufficient.
1. Depth-first search:
   IF the search space contains very deep branches without a solution, THEN depth-first may waste much time in them.
2. Breadth-first search:
   is VERY demanding on memory!
Solution?
   Iterative deepening:
   the order of expansion of states is similar to BFS, except that some states are expanded multiple times.
Iterative Deepening Search l = 0
• Limit = 0
Iterative Deepening Search l = 1
• Limit = 1
Iterative Deepening Search l = 2
• Limit = 2
Iterative Deepening Search l = 3
• Limit = 3
Iterative Deepening Search l = 1 to l = 4
Iterative Deepening Search (IDS)
Iterative Deepening Search Algorithm cont'd
1. DEPTH <-- 1
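Only the first line of the slide's algorithm is shown in this handout; below is a hedged sketch of iterative deepening built on a depth-limited DFS (the `max_depth` safety bound is an assumption for the example, not part of the slide):

```python
def depth_limited_search(root, successors, goal_test, limit):
    """Depth-first search that never extends a path beyond `limit` actions."""
    stack = [[root]]
    while stack:
        path = stack.pop()
        state = path[-1]
        if goal_test(state):
            return path
        if len(path) - 1 < limit:                # cutoff: do not expand deeper
            for nxt in reversed(successors(state)):
                if nxt not in path:
                    stack.append(path + [nxt])
    return None

def iterative_deepening_search(root, successors, goal_test, max_depth=50):
    depth = 1                                    # 1. DEPTH <-- 1
    while depth <= max_depth:                    # 2. retry with a larger cutoff
        result = depth_limited_search(root, successors, goal_test, depth)
        if result is not None:
            return result                        # goal found at minimal depth
        depth += 1
    return None
```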
Iterative Deepening Search
• Completeness
 – It is complete
 – It finds a solution if one exists
• Optimality
 – It is optimal (when step costs are all equal)
 – Finds the shortest path (like breadth-first)
 • Guarantees the shortest path
 • Guarantees a goal node of minimal depth
Uniform-cost search
• Expand the least-cost unexpanded node
• Implementation:
 – fringe = queue ordered by path cost
• Equivalent to breadth-first search if step costs are all equal
• Consider the problem of moving from node S to G
[Figure: example graph – S–A cost 1, S–B cost 5, S–C cost 15, A–G cost 10, B–G cost 5, C–G cost 5. Uniform-cost search first expands S (children A with g=1, B with g=5, C with g=15), then A (giving G with g=11), then B (giving G with g=10); the cheaper path S–B–G with cost 10 is returned.]
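A minimal sketch of uniform-cost search on the example above, using a priority queue ordered by path cost g (the edge costs are the ones reconstructed from the figure, so treat them as an interpretation):

```python
import heapq

GRAPH = {                                        # reconstructed example edge costs
    "S": {"A": 1, "B": 5, "C": 15},
    "A": {"G": 10},
    "B": {"G": 5},
    "C": {"G": 5},
    "G": {},
}

def uniform_cost_search(start, goal):
    frontier = [(0, start, [start])]             # (path cost g, state, path)
    while frontier:
        g, state, path = heapq.heappop(frontier) # least-cost unexpanded node
        if state == goal:
            return path, g
        for nxt, cost in GRAPH[state].items():
            heapq.heappush(frontier, (g + cost, nxt, path + [nxt]))
    return None, float("inf")

print(uniform_cost_search("S", "G"))             # (['S', 'B', 'G'], 10)
```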
Bi-directional Search cont'd
[Figure: bidirectional search on an example tree rooted at S – a forward search from the start and a backward search from the goal G meet in the middle.]
Bi-directional Search cont'd
Bi-directional Search
* Completeness: yes
* Optimality: yes
* Time complexity: O(b^(d/2)) – each of the two searches only needs to reach depth d/2
* Space complexity: O(b^(d/2))
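A hedged sketch of bidirectional breadth-first search on an undirected graph: one frontier grows forward from the start, another backward from the goal, and the search stops when they meet, so each side explores only about b^(d/2) nodes. For clarity this sketch alternates single-node expansions; a careful implementation would expand whole levels at a time.

```python
from collections import deque

def bidirectional_search(start, goal, neighbours):
    """Breadth-first from both ends of an undirected graph until the frontiers meet."""
    if start == goal:
        return [start]
    fwd = {start: [start]}                       # state -> path from the start
    bwd = {goal: [goal]}                         # state -> path from the goal
    q_fwd, q_bwd = deque([start]), deque([goal])

    def expand(queue, this_side, other_side):
        state = queue.popleft()
        for nxt in neighbours(state):
            if nxt in this_side:
                continue
            this_side[nxt] = this_side[state] + [nxt]
            queue.append(nxt)
            if nxt in other_side:                # the two searches meet at nxt
                return nxt
        return None

    while q_fwd and q_bwd:
        meet = expand(q_fwd, fwd, bwd)           # one step forward
        if meet is None:
            meet = expand(q_bwd, bwd, fwd)       # one step backward
        if meet is not None:
            return fwd[meet] + bwd[meet][-2::-1] # join the two half-paths
    return None
```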
3.5 Search and Optimization (Gradient descent)
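The content of this slide is not reproduced in this handout, so the following is only a minimal illustration of gradient descent as a local search/optimization method: repeatedly step against the gradient of an objective until the updates become small. The objective f(x, y) = (x - 3)^2 + (y + 1)^2 and the learning rate are arbitrary choices for the example.

```python
def gradient_descent(grad, x0, learning_rate=0.1, tolerance=1e-6, max_iters=10_000):
    """Follow the negative gradient until the step size falls below `tolerance`."""
    x = list(x0)
    for _ in range(max_iters):
        g = grad(x)
        step = [learning_rate * gi for gi in g]
        x = [xi - si for xi, si in zip(x, step)]
        if max(abs(si) for si in step) < tolerance:
            break
    return x

# Example objective f(x, y) = (x - 3)^2 + (y + 1)^2, minimised at (3, -1).
grad_f = lambda v: [2 * (v[0] - 3), 2 * (v[1] + 1)]
print(gradient_descent(grad_f, [0.0, 0.0]))      # approximately [3.0, -1.0]
```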
3.6 Dynamic Game Theory
• It is a mathematical framework that enables us to model and analyze strategic interactions in dynamic, ever-changing environments.
• In this framework, multiple players make decisions and take actions while considering not just their own interests, but also how the choices of others influence the overall outcome.
• It has a significant impact in various fields, including economics, AI, and multi-agent systems.
• It helps in understanding and optimizing decision-making in complex, interactive situations.
• It allows intelligent systems to adapt and make strategic decisions in real time, in response to changing circumstances and the actions of other agents.
• It considers the dynamic nature of interactions.
• It plays a crucial role in enhancing AI search and planning.
Key elements of dynamic game theory
1. Sequential Decision-Making:
   Players make decisions one after another, and each player's choice is influenced by the choices made by previous players. This sequential aspect introduces a temporal dimension to the analysis.
2. Information Asymmetry:
   Players may have varying levels of information about the game and other players' strategies. This can lead to complex strategies and strategic uncertainty.
3. Dynamic Strategies:
   In dynamic games, strategies are not static but evolve over time based on observed actions and outcomes, allowing for adaptation and learning.
4. Equilibrium Concepts:
   Dynamic game theory extends equilibrium concepts from static games to incorporate the dynamic nature of interactions, such as subgame perfect equilibria, Markov perfect equilibria, and sequential equilibria.