Solving Problems by Searching
A reflex agent is simple
it bases its actions on
a direct mapping from states to actions
but it cannot work well in environments
where this mapping would be too large to store
or would take too long to learn
Hence, a goal-based agent is used
Problem-solving agent
A kind of goal-based agent
It solves problems by
finding sequences of actions that lead to
desirable states (goals)
To solve a problem,
the first step is the goal formulation, based on
the current situation
Goal formulation
The goal is formulated
as a set of states, in which the goal is satisfied
Reaching the goal state from the initial state
Actions are required
Goal formulation, based on the current situation and the
agent’s performance measure, is the first step in problem
solving.
Actions are the operators
causing transitions between states
Actions should be at a suitable level of abstraction,
rather than very detailed
E.g., "turn left" vs. "turn left 30 degrees", etc.
Problem formulation
The process of deciding
what actions and states to consider, given
a goal
E.g., driving from Amman to Zarqa
the in-between states and actions must be defined
States: Some places in Amman & Zarqa
Actions: Turn left, Turn right, go straight,
accelerate & brake, etc.
Search
Because there are many ways to achieve
the same goal
Those ways are together expressed as a tree
When there are multiple options of unknown value at a point,
the agent can examine different possible
sequences of actions, and choose the best
This process of looking for such a sequence
is called search
A search algorithm takes a problem as input
and returns a solution in the form of an
action sequence.
Search algorithm
Defined as
taking a problem
and returns a solution
Once a solution is found
the agent follows the solution
and carries out the list of actions –
execution phase
Design of an agent
“Formulate, search, execute”
Well-defined problems and solutions
A problem is defined by 4 components:
The initial state
that the agent starts in
The set of possible actions
Transition model: description of what each action
does.
(successor function): refers to any state reachable
from a given state by a single action
Initial state, actions and Transition model define the
state space
the set of all states reachable from the initial state by any
sequence of actions.
A path in the state space:
any sequence of states connected by a sequence of actions.
Well-defined problems and solutions
The goal test
Applied to the current state to test
whether the agent has reached a goal state
-Sometimes there is an explicit set of possible goal
states. (example: in Amman).
-Sometimes the goal is described by the properties
instead of stating explicitly the set of states
Example: Chess
the agent wins if it can capture the KING of the opponent on
next move ( checkmate).
no matter what the opponent does
Well-defined problems and solutions
A path cost function,
assigns a numeric cost to each path
= performance measure
denoted by g
to distinguish the best path from others
Usually the path cost is
the sum of the step costs of the individual
actions (in the action list)
Well-defined problems and solutions
Together a problem is defined by
Initial state
Actions
Successor function
Goal test
Path cost function
The solution of a problem is then
a path from the initial state to a state satisfying the goal
test
Optimal solution
the solution with lowest path cost among all solutions
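The components above can be sketched as a small Python class. This is a minimal sketch, not the slides' own code: the class name, the toy road map, and the step costs are illustrative assumptions.

```python
# A minimal problem definition, assuming a simple route-finding map.
class RouteProblem:
    def __init__(self, initial, goal, roads):
        self.initial = initial   # initial state
        self.goal = goal
        self.roads = roads       # {city: {neighbor: step_cost}} (made-up map)

    def actions(self, state):
        """Actions available in a state: drive to a neighboring city."""
        return list(self.roads[state])

    def result(self, state, action):
        """Transition model: driving toward a city puts us in that city."""
        return action

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action):
        return self.roads[state][action]

# Hypothetical distances; only the structure matters here.
roads = {'Ajlun': {'Jarash': 30}, 'Jarash': {'Amman': 50}, 'Amman': {}}
p = RouteProblem('Ajlun', 'Amman', roads)
```

A solution is then any action sequence whose successive results reach a state passing `goal_test`.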
Formulating problems
Besides the four components for problem
formulation
anything else?
Abstraction
the process of removing irrelevant information,
leaving the most essential parts in the description of the
states
( Remove detail from representation)
Conclusion: only the most important parts, those
contributing to the search, are used
Example
From our Example
1. Formulate Goal
- Be In Amman
2. Formulate Problem
- States : Cities
- actions : Drive Between Cities
3. Find Solution
- Sequence of Cities : Ajlun – Jarash – Amman
Our Example
1. Problem : To Go from Ajlun to Amman
2. Initial State : Ajlun
3. Operator : Go from one city to another.
4. State Space : {Jarash , Salt , Irbid ,……..}
5. Goal Test : Is the agent in Amman?
6. Path Cost Function : Get The Cost From The Map.
7. Solution : { {Aj Ja Ir Ma Za Am} , {Aj Ir Ma Za Am} …. {Aj Ja Am} }
8. State Set Space : {Ajlun Jarash Amman}
Example: Romania
Example problems
Toy problems
those intended to illustrate or exercise
various problem-solving methods
E.g., puzzle, chess, etc.
Real-world problems
tend to be more difficult and whose
solutions people actually care about
E.g., Design, planning, etc.
Toy problems
Example: vacuum world
Number of states: 8
Initial state: Any
Number of actions: 4
left, right, suck, noOp
Goal: clean up all dirt
Goal states: {7, 8}
Path Cost:
Each step costs 1
The 8-puzzle
States:
a state description specifies the location of each of
the eight tiles and blank in one of the nine squares
Initial State:
Any state in state space
Successor function:
the blank moves Left, Right, Up, or Down
Goal test:
current state matches the goal configuration
Path cost:
each step costs 1, so the path cost is just the length
of the path
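The successor function described above can be sketched in Python. This assumes a state representation not fixed by the slides: a 9-tuple read row by row, with 0 marking the blank.

```python
def successors(state):
    """Generate (action, new_state) pairs by sliding the blank.

    `state` is a 9-tuple read row by row; 0 marks the blank square.
    (This encoding is an assumption for illustration.)
    """
    moves = {'Left': -1, 'Right': 1, 'Up': -3, 'Down': 3}
    blank = state.index(0)
    result = []
    for action, delta in moves.items():
        target = blank + delta
        if target < 0 or target > 8:
            continue  # off the top or bottom of the board
        # Prevent the blank wrapping around a row edge.
        if action in ('Left', 'Right') and target // 3 != blank // 3:
            continue
        new = list(state)
        new[blank], new[target] = new[target], new[blank]
        result.append((action, tuple(new)))
    return result
```

With the blank in the center all four moves apply; in a corner only two do.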
The 8-queens
There are two ways to formulate the
problem
Both share the following:
Goal test: 8 queens on the board, none attacking
another
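The shared goal test can be sketched as follows; the (row, column) coordinate convention and the helper names are assumptions for illustration.

```python
def attacks(q1, q2):
    """Two queens at (row, col) attack iff they share a row, column, or diagonal."""
    r1, c1 = q1
    r2, c2 = q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def goal_test(queens):
    """8 queens on the board, none attacking another."""
    return len(queens) == 8 and not any(
        attacks(queens[i], queens[j])
        for i in range(len(queens)) for j in range(i + 1, len(queens)))

# One known 8-queens solution, written as (row, column) pairs.
solution = [(r, c) for c, r in enumerate([0, 4, 7, 5, 2, 6, 1, 3])]
```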
The 8-queens
(1) Incremental formulation
involves operators that augment the state
description starting from an empty state
Each action adds a queen to the state
States:
any arrangement of 0 to 8 queens on board
Successor function:
add a queen to any empty square
The 8-queens
(2) Complete-state formulation
starts with all 8 queens on the board
move the queens individually around
States:
any arrangement of 8 queens, one per column in
the leftmost columns
Operators: move an attacked queen to a row,
not attacked by any other
The 8-queens
Conclusion:
the right formulation makes a big difference
to the size of the search space
Examples
Route finding problem
Touring problem
VLSI layout
Robot navigation
Automatic assembly sequence
Protein design
Internet searching
3.3 Searching for solutions
Finding a solution is done by
searching through the state space
Each problem is transformed
into a search tree
generated by the initial state and
successor function
Search tree
Initial state
The root of the search tree is a search node
Expanding
applying successor function to the current state
thereby generating a new set of states
leaf nodes
the states having no successors
Fringe : Set of search nodes that have not been
expanded yet.
Refer to next figure
Tree search example
Search tree
The essence of searching
choosing one option and keeping the others for later
inspection,
in case the first choice is not correct
Hence we have the search strategy
which determines the choice of which state to
expand
a good choice → less work → faster search
Important:
state space ≠ search tree
Search tree
A node is having five components:
STATE: which state it is in the state space
PARENT-NODE: from which node it is generated
ACTION: which action applied to its parent-node
to generate it
PATH-COST: the cost, g(n), from initial state to
the node n itself
DEPTH: number of steps along the path from the
initial state
Search tree
Informal description of the general search algorithm
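One minimal sketch of the general scheme: keep a fringe of unexpanded nodes and repeatedly remove one, test it, and expand it. The fringe discipline (FIFO vs. LIFO) is exactly what distinguishes the strategies covered later; the names here are assumptions.

```python
def tree_search(initial, goal_test, successors, lifo=False):
    """General search: maintain a fringe of unexpanded nodes.

    Which node is removed first is the search strategy:
    FIFO gives breadth-first, LIFO gives depth-first.
    `successors(state)` yields (action, next_state) pairs.
    """
    fringe = [(initial, [])]                 # (state, actions so far)
    while fringe:
        state, path = fringe.pop(-1 if lifo else 0)
        if goal_test(state):
            return path                      # solution: the action list
        for action, nxt in successors(state):
            fringe.append((nxt, path + [action]))
    return None                              # no solution

# Illustrative tree: node 1 has children 2 and 3, and so on.
tree = {1: [2, 3], 2: [4, 5], 3: [6, 7]}
succ = lambda s: [(c, c) for c in tree.get(s, [])]
bfs_plan = tree_search(1, lambda s: s == 4, succ)
dfs_plan = tree_search(1, lambda s: s == 4, succ, lifo=True)
```

Note this sketch does not guard against repeated states, so on a graph with loops it can run forever; the strategies below address this.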
Measuring problem-solving performance
The evaluation of a search strategy
Completeness:
is the strategy guaranteed to find a solution when
there is one?
Optimality:
does the strategy find the highest-quality solution
when there are several different solutions?
Time complexity:
how long does it take to find a solution?
Space complexity:
how much memory is needed to perform the search?
Measuring problem-solving performance
In AI, complexity is expressed in
b, branching factor, maximum number of
successors of any node
d, the depth of the shallowest goal node.
(depth of the least-cost solution)
m, the maximum length of any path in the state
space
Time and Space is measured in
number of nodes generated during the search
maximum number of nodes stored in memory
Measuring problem-solving performance
For effectiveness of a search algorithm
we can just consider the total cost
The total cost = path cost (g) of the solution
found + search cost
search cost = time necessary to find the solution
Tradeoff:
(long time, optimal solution with least g)
vs. (shorter time, solution with slightly larger
path cost g)
3.4 Uninformed search strategies
Uninformed search
no information about the number of steps
or the path cost from the current state to
the goal
search the state space blindly
Informed search, or heuristic search
a cleverer strategy that searches toward
the goal,
based on the information from the current
state so far
Uninformed search strategies
Breadth-first search
Uniform cost search
Depth-first search
Depth-limited search
Iterative deepening search
Bidirectional search
Breadth-first search
The root node is expanded first (FIFO)
All the nodes generated by the root
node are then expanded
And then their successors and so on
Breadth-First Strategy
New nodes are inserted at the end of FRINGE
For the tree with root 1, children 2 and 3, and leaves 4, 5, 6, 7,
the fringe evolves as:
FRINGE = (1) → (2, 3) → (3, 4, 5) → (4, 5, 6, 7)
Breadth-first search (Analysis)
Breadth-first search
Complete – find the solution eventually
Optimal, if step cost is 1
The disadvantage
if the branching factor of a node is large,
for even small instances (e.g., chess)
the space complexity and the time complexity
are enormous
Properties of breadth-first search
Complete? Yes (if b is finite)
Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
Space? O(b^(d+1)) (keeps every node in memory)
Optimal? Yes (if cost = 1 per step)
Space is the bigger problem (more than time)
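The strategy above can be sketched with a FIFO fringe; the road map below is a made-up toy example, and the repeated-state check (`explored`) is an assumption beyond the plain tree-search version.

```python
from collections import deque

def breadth_first_search(initial, goal_test, successors):
    """Expand the shallowest node first using a FIFO fringe.

    `successors(state)` yields (action, next_state) pairs. An `explored`
    set avoids re-generating repeated states.
    """
    fringe = deque([(initial, [])])      # (state, actions so far)
    explored = {initial}
    while fringe:
        state, path = fringe.popleft()   # FIFO: shallowest node first
        if goal_test(state):
            return path
        for action, nxt in successors(state):
            if nxt not in explored:
                explored.add(nxt)
                fringe.append((nxt, path + [action]))
    return None                          # no solution

# Illustrative map; each action is named after its destination city.
roads = {'Ajlun': ['Jarash', 'Irbid'], 'Jarash': ['Amman'],
         'Irbid': ['Amman'], 'Amman': []}
succ = lambda s: [(c, c) for c in roads[s]]
plan = breadth_first_search('Ajlun', lambda s: s == 'Amman', succ)
```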
Breadth-first search (Analysis)
assuming 10000 nodes can be processed per second, each
with 1000 bytes of storage
Uniform cost search
Breadth-first finds the shallowest goal state
but not necessarily the least-cost solution
works only if all step costs are equal
Uniform cost search
modifies breadth-first strategy
by always expanding the lowest-cost node
The lowest-cost node is measured by the path
cost g(n)
Uniform-cost search
Expand least-cost unexpanded node
Implementation:
fringe = queue ordered by path cost
Equivalent to breadth-first if step costs all equal
Complete? Yes, if every step cost ≥ ε
Time? number of nodes with g ≤ cost of the optimal solution,
O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
Space? number of nodes with g ≤ cost of the optimal
solution, O(b^⌈C*/ε⌉)
Optimal? Yes – nodes are expanded in increasing order of
g(n)
where
ε is a positive constant (the minimum action cost) and
C* is the cost of the optimal solution
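Uniform-cost search can be sketched with a priority queue ordered by g(n). The graph and its costs below are illustrative; note the cheapest route is not the one with the fewest steps.

```python
import heapq

def uniform_cost_search(initial, goal_test, successors):
    """Expand the node with the lowest path cost g(n) first.

    `successors(state)` yields (action, next_state, step_cost) triples.
    """
    fringe = [(0, initial, [])]            # priority queue ordered by g
    best_g = {initial: 0}
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if goal_test(state):
            return g, path                 # goal tested on expansion
        for action, nxt, cost in successors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float('inf')):
                best_g[nxt] = new_g
                heapq.heappush(fringe, (new_g, nxt, path + [action]))
    return None

# Illustrative costs: the direct edge A->C costs more than going via B.
roads = {'A': [('A->B', 'B', 1), ('A->C', 'C', 5)],
         'B': [('B->C', 'C', 1)], 'C': []}
g, path = uniform_cost_search('A', lambda s: s == 'C', lambda s: roads[s])
```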
Depth-first search
Always expands one of the nodes at the
deepest level of the tree
Only when the search hits a dead end
goes back and expands nodes at shallower levels
Dead end: a leaf node that is not the goal
Backtracking search
only one successor is generated on expansion
rather than all successors
uses less memory
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
Depth-first search
[Figure: search tree rooted at S with children A and D; depth-first
search expands the leftmost branch first. The numbers at the leaves
(11, 13, 14, 15, 17, 19, 25) are path costs.]
Depth-first search (Analysis)
Not complete
because a path may be infinite or contain a loop,
in which case the search will never fail and go back
to try another option
Not optimal
it doesn't guarantee the best solution
It overcomes
the time and space complexities
Properties of depth-first search
Complete? No: fails in infinite-depth spaces,
spaces with loops
Modify to avoid repeated states along path
Time? O(b^m): terrible if m is much larger than d
but if solutions are dense, may be much faster than
breadth-first
Space? O(bm), i.e., linear space!
Optimal? No
complete in finite spaces
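Depth-first search can be sketched with a LIFO fringe, modified (as the properties above suggest) to avoid repeated states; the graph is illustrative.

```python
def depth_first_search(initial, goal_test, successors):
    """Expand the deepest node first: the fringe is a LIFO stack.

    A visited set avoids the loops that make plain DFS incomplete.
    `successors(state)` yields (action, next_state) pairs.
    """
    fringe = [(initial, [])]
    visited = {initial}
    while fringe:
        state, path = fringe.pop()          # LIFO: deepest node first
        if goal_test(state):
            return path
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                fringe.append((nxt, path + [action]))
    return None

# Illustrative graph; actions are named after their destinations.
roads = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
succ = lambda s: [(c, c) for c in roads[s]]
path = depth_first_search('A', lambda s: s == 'D', succ)
```

The fringe never holds more than one branch's worth of nodes, which is where the O(bm) space bound comes from.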
Depth-Limited Strategy
Depth-first with depth cutoff k (maximal
depth below which nodes are not
expanded)
Three possible outcomes:
Solution
Failure (no solution)
Cutoff (no solution within cutoff)
Depth-limited search
It is depth-first search
with a predefined maximum depth
However, it is usually not easy to define
the suitable maximum depth
too small → no solution can be found
too large → the same problems as depth-first
search are suffered
Anyway the search is
complete (provided the depth limit is at least the solution depth)
but still not optimal
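Depth-limited search, with the three outcomes listed above (solution, failure, cutoff), can be sketched recursively; the graph and action names are illustrative.

```python
CUTOFF, FAILURE = 'cutoff', 'failure'

def depth_limited_search(state, goal_test, successors, limit):
    """Depth-first search with a depth cutoff.

    Returns an action list (solution), FAILURE (no solution at all),
    or CUTOFF (no solution within the cutoff).
    """
    if goal_test(state):
        return []
    if limit == 0:
        return CUTOFF                       # hit the depth cutoff
    cutoff_occurred = False
    for action, nxt in successors(state):
        result = depth_limited_search(nxt, goal_test, successors, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result != FAILURE:
            return [action] + result        # prepend this step to the plan
    return CUTOFF if cutoff_occurred else FAILURE

# Illustrative graph: the goal D lies at depth 2.
roads = {'A': [('go B', 'B'), ('go C', 'C')], 'B': [('go D', 'D')],
         'C': [], 'D': []}
plan = depth_limited_search('A', lambda s: s == 'D',
                            lambda s: roads[s], 2)
```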
Depth-limited search (depth = 3)
[Figure: the same search tree rooted at S, cut off at depth 3;
nodes below the limit are not expanded.]
Iterative deepening search
No choosing of the best depth limit
It tries all possible depth limits:
first
0, then 1, 2, and so on
combines the benefits of depth-first and
breadth-first search
Iterative deepening search
(Analysis)
optimal
complete
Time and space complexities
reasonable
suitable for the problem
having a large search space
and the depth of the solution is not known
Properties of iterative deepening
search
Complete? Yes
Time? (d+1)b^0 + d·b^1 + (d−1)b^2 + … + b^d
= O(b^d)
Space? O(bd)
Optimal? Yes, if step cost = 1
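Iterative deepening can be sketched as a loop over depth limits 0, 1, 2, …; the `max_depth` guard and the toy graph are assumptions for illustration (a full version would also distinguish cutoff from failure, as in depth-limited search).

```python
def iterative_deepening_search(initial, goal_test, successors, max_depth=50):
    """Try depth limits 0, 1, 2, ... until a solution appears.

    `max_depth` is a safety guard for this sketch; `successors(state)`
    yields (action, next_state) pairs.
    """
    def dls(state, limit):
        """Simplified depth-limited search: None means no solution found."""
        if goal_test(state):
            return []
        if limit == 0:
            return None
        for action, nxt in successors(state):
            result = dls(nxt, limit - 1)
            if result is not None:
                return [action] + result
        return None

    for limit in range(max_depth + 1):     # first 0, then 1, 2, and so on
        result = dls(initial, limit)
        if result is not None:
            return result
    return None

# Illustrative graph; the goal D is found at depth limit 2.
roads = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
plan = iterative_deepening_search('A', lambda s: s == 'D',
                                  lambda s: [(c, c) for c in roads[s]])
```

Re-expanding the shallow levels at every iteration costs surprisingly little, which is why the time bound stays O(b^d).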
Bidirectional search
Run two simultaneous searches
one forward from the initial state another
backward from the goal
stop when the two searches meet
However, computing backward is difficult
There may be a huge number of goal states
at the goal state, which actions were used to
reach it?
can the actions be reversed to compute its
predecessors?
Bidirectional Strategy
2 fringe queues: FRINGE1 and FRINGE2
Time and space complexity = O(b^(d/2)) << O(b^d)
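The two-fringe idea can be sketched as two breadth-first frontiers advanced in lockstep, stopping when they meet. This assumes actions are reversible, so one `neighbors` function serves both directions; the graph is illustrative, and the sketch returns only the meeting state (reconstructing the full path would need parent links on both sides).

```python
def bidirectional_search(start, goal, neighbors):
    """Two simultaneous breadth-first searches that stop when they meet.

    Returns the state where the searches meet, or None if they never do.
    """
    if start == goal:
        return start
    frontiers = [{start}, {goal}]          # FRINGE1 (forward), FRINGE2 (backward)
    visited = [{start}, {goal}]
    while frontiers[0] and frontiers[1]:
        for i in (0, 1):                   # advance each side by one level
            nxt = set()
            for state in frontiers[i]:
                for n in neighbors(state):
                    if n in visited[1 - i]:
                        return n           # the two searches have met
                    if n not in visited[i]:
                        visited[i].add(n)
                        nxt.add(n)
            frontiers[i] = nxt
    return None

# Illustrative undirected graph: S - A - G, plus a dead end B.
graph = {'S': ['A', 'B'], 'A': ['S', 'G'], 'B': ['S'], 'G': ['A']}
meet = bidirectional_search('S', 'G', lambda s: graph[s])
```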
Bidirectional search
[Figure: the same search tree rooted at S; the forward search proceeds
from S while the backward search proceeds from the goal, and the two
meet in the middle.]
Comparing search strategies