Lesson 3
Chapter 3:
Solving Problems by Searching and
Constraint Satisfaction Problems
Problem Solving by Searching
3.1. Problem Solving by Searching
3.2. Problem Solving Agents
3.3. Problem Formulation
3.4. Search Strategies
3.5. Avoiding Repeated States
3.6. Constraint Satisfaction Search
3.7. Games as Search Problems
Many interesting problems in science and engineering are solved using search.
A search problem is defined by:
A search space:
– The set of objects among which we search for the solution
– Examples: routes between cities, or n-queens configurations
A goal condition:
– Characteristics of the object we want to find in the search space
– Example: a path between the cities A.A and Hawassa
Problem-Solving Agents
Problem-solving agents:
Decide what to do by finding sequences of actions that lead to desirable states.
The first step in problem solving is goal formulation.
Problem formulation is the process of deciding what actions and states to consider.
Problem-Solving Agents (1)
What if there is no additional information available?
The agent will not know which of its possible actions is best, because it does not know enough about the state that results from taking each action.
The best it can do is choose one of the actions at random.
An agent with several immediate options of unknown value can decide what to do by examining different possible sequences of actions. This process is called search.
Problem-Solving Agents (2)
The search algorithm takes a problem as input and returns a solution in the form of an action sequence.
Once a solution is found, the agent carries out the actions it recommends. This is called the execution phase.
• A simple formulate, search, execute design is as follows:
1. Formulate the goal
2. Formulate and search the problem
3. Execute the solution
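The formulate, search, execute loop above can be sketched as follows. This is a minimal illustration, not from the slides: the toy road map, the city names, and the use of breadth-first search are all assumptions chosen only to make the three phases concrete.

```python
from collections import deque

def formulate_problem():
    # Goal formulation + problem formulation: states are cities,
    # actions are moves along road segments (map is a made-up example).
    roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    return {"initial": "A", "goal": "D", "roads": roads}

def search(problem):
    # Search phase: breadth-first search returning a sequence of states.
    frontier = deque([[problem["initial"]]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == problem["goal"]:
            return path
        for nxt in problem["roads"][path[-1]]:
            frontier.append(path + [nxt])
    return None

def execute(path):
    # Execution phase: carry out the recommended actions in order.
    return " -> ".join(path)

plan = search(formulate_problem())
print(execute(plan))   # A -> B -> D
```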
Formulating Problems
A problem can be defined formally by four components:
Initial state
Possible actions
Goal test
Path cost
[Figure: an example state-space graph with nodes A–K and edge costs]
Formulating Problems (1)
Possible actions: the most common formulation uses a successor function.
E.g., possible actions: we can go from
B --> A, B --> C, B --> F
The set of all states reachable from the initial state (B) is known as the state space (e.g., A, B, C).
The lines in the graph indicate actions.
Path: a path in the state space is a sequence of states connected by a sequence of actions.
Goal test: can be a set of possible goal states, or it can be specified by an abstract property.
A solution to a problem is a path from the initial state to a goal state.
An optimal solution has the lowest path cost among all solutions.
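The four components above can be sketched as a small Python class. The outgoing edges from B mirror the B --> A, B --> C, B --> F example; the step costs and the goal set {F} are illustrative assumptions, not values from the slides.

```python
class Problem:
    def __init__(self, initial, edges, goals):
        self.initial = initial          # initial state
        self.edges = edges              # successor function with step costs
        self.goals = goals              # set of goal states

    def actions(self, state):
        """Possible actions: states reachable in one step."""
        return list(self.edges.get(state, {}))

    def goal_test(self, state):
        return state in self.goals

    def path_cost(self, path):
        """Sum of step costs along a path (a sequence of states)."""
        return sum(self.edges[a][b] for a, b in zip(path, path[1:]))

# Edge costs (2, 8, 7) and the goal {F} are assumptions for illustration.
p = Problem("B", {"B": {"A": 2, "C": 8, "F": 7}}, {"F"})
print(p.actions("B"))           # ['A', 'C', 'F']
print(p.path_cost(["B", "F"]))  # 7
```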
Measuring problem-solving performance
The effectiveness of a search can be measured in at least three ways:
1. Does it find a solution at all?
2. Is it a good solution (low path cost)?
3. What is the search cost, i.e., the time and memory required to find a solution?
Measuring problem-solving performance (1)
The agent must somehow decide what resources to devote to search and what resources (time, storage) to devote to execution.
For large, complicated state spaces, there is a trade-off: the agent can search for a very long time and get an optimal solution, or for a short time and get a solution with a larger path cost.
Choosing states and actions: irrelevant details should be removed; this is called abstraction.
Measuring problem-solving performance (2)
We have to evaluate algorithm performance to find the right search algorithm.
We evaluate it using four criteria:
Completeness: is an answer guaranteed?
Time complexity: how long does the search take?
Space complexity: how much memory is required?
Optimality: if we find a solution, is it an optimal one?
Two Categories of Search Algorithms
Uninformed Search:
- We can distinguish the goal state from non-goal states, but the path and cost to reach the goal are unknown.
- Also known as blind search.
Informed Search:
- We know something about the nature of our path that might increase the effectiveness of our search.
- Generally better than uninformed search, but will not be covered until later.
Uninformed search algorithms
• There are six uninformed search algorithms:
• Breadth First Search
• Uniform Cost Search
• Depth First Search
• Depth Limited Search
• Iterative Deepening Search
• Bi-directional Search
1. Breadth First Search (BFS)
All nodes are expanded from the root, and then every successor is expanded
and so on until we find the goal state.
[Figure: BFS expands nodes level by level; nodes 1 and 2 on the first level, then 3, 4, 5, 6 on the second]
1. Breadth First Search (1)
BFS accesses the nodes level by level.
What does this mean? In terms of the four criteria:
BFS is complete: if there exists an answer, it will be found.
BFS is optimal (when all step costs are equal): the path found from the start state to the goal state will be the shallowest (smallest distance).
What about time complexity and space complexity?
If we look at how BFS expands from the root, we see that it first expands some number of nodes, say b.
On the second level this becomes b^2.
On the third level it becomes b^3.
And so on until it reaches b^d for some depth d.
1 + b + b^2 + b^3 + ... + b^d, which is O(b^d)
Since all leaf nodes need to be stored in memory, space complexity is the same as time complexity.
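The level-by-level expansion above can be sketched as follows. The small tree (branching factor b = 2) is an illustrative assumption.

```python
from collections import deque

def bfs(graph, start, goal):
    """Expand nodes level by level; return the first (shallowest) path found."""
    frontier = deque([[start]])      # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for child in graph.get(path[-1], []):
            if child not in visited:
                visited.add(child)
                frontier.append(path + [child])
    return None

graph = {1: [2, 3], 2: [4, 5], 3: [6, 7]}
print(bfs(graph, 1, 6))   # [1, 3, 6]
```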
2. Uniform Cost Search (UCS)
Uniform Cost Search is a modification of BFS.
BFS returns a solution, but it may not be optimal.
UCS takes into account the cost of moving from one node to the next.
2. Uniform Cost Search (1)
Example of UCS:
2. Uniform Cost Search (2)
• Evaluating UCS with the four criteria:
• Complete? Yes, same as BFS.
• Optimal? Yes, unlike plain BFS: the cheapest path is guaranteed.
• Time complexity? Still O(b^d)
• Space complexity? Still O(b^d)
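UCS can be sketched by replacing BFS's FIFO queue with a priority queue ordered by path cost g(n). The weighted graph below is an illustrative assumption; note the cheapest route S-A-B-G (cost 4) beats the shallower S-B-G (cost 7).

```python
import heapq

def ucs(graph, start, goal):
    """Expand the frontier node with the lowest path cost first."""
    frontier = [(0, start, [start])]          # (cost so far, node, path)
    best = {}                                 # cheapest known cost per node
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:
            continue                          # already reached more cheaply
        best[node] = cost
        for child, step in graph.get(node, {}).items():
            heapq.heappush(frontier, (cost + step, child, path + [child]))
    return None

graph = {"S": {"A": 1, "B": 5}, "A": {"B": 1}, "B": {"G": 2}}
print(ucs(graph, "S", "G"))   # (4, ['S', 'A', 'B', 'G'])
```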
3. Depth First Search (DFS)
DFS expands one node to the deepest level until a dead end is reached, and then traces back and expands shallower nodes.
It is neither complete nor optimal: it can make a wrong choice and follow an infinite path.
Good:
- Since we don't expand all nodes at a level, space complexity is modest.
- For branching factor b and depth m, we require only b·m nodes to be stored in memory. This is much better than b^d.
- In some cases, DFS can be faster than BFS. However, the worst case is still O(b^m).
Bad:
- If you have deep search trees (or infinite ones, which is quite possible), DFS may end up running off to infinity and may not be able to recover.
- Thus DFS is neither optimal nor complete.
3. Depth First Search (1)
[Figure: DFS expansion order on a small tree; one branch is explored to its deepest level before the next branch is tried]
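The deepest-first behavior can be sketched with an explicit LIFO stack. The graph is an illustrative assumption; a visited set is added so the sketch also terminates on graphs with cycles.

```python
def dfs(graph, start, goal):
    """Always deepen along the most recently generated path first."""
    stack = [[start]]                      # LIFO stack of paths
    visited = set()
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in reversed(graph.get(node, [])):
            stack.append(path + [child])   # leftmost child is expanded first
    return None

graph = {1: [2, 3], 2: [4, 5]}
print(dfs(graph, 1, 5))   # [1, 2, 5]
```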
4. Depth Limited Search (DLS)
DLS modifies DFS to avoid its pitfalls: the depth of the search is limited.
Sometimes the depth limit can be set based on knowledge of the problem.
E.g., suppose we had to find the shortest path visiting 10 cities. If we start at one of the cities, then there are nine other cities to visit, so nine is the limit.
Since we impose a limit, little changes from DFS except that we avoid searching an infinite path. DLS is complete if the limit we impose is greater than or equal to the depth of our solution.
4 Depth Limited Search (1)
So how is it’s time complexity and space complexity?
DLS is O(b l) where l is the limit we impose.
Space complexity is O(b l).
So, space and time are almost identical to DFS.
What about optimality? No.
5. Iterative Deepening Search
IDS is a general strategy, often used in combination with depth-first tree search, that finds the best depth limit.
It combines the benefits of DFS and BFS.
It does so by gradually increasing the limit (first 0, then 1, then 2, and so on) until a goal is found.
5. Iterative Deepening Search (IDS) (1)
[Figure: IDS expansion orders for successive depth limits; the root and shallow nodes are re-expanded on every iteration]
5. Iterative Deepening Search (IDS) (2)
Evaluating IDS:
We look at the bottom-most nodes once, the level above twice, the level above that three times, and so on up to the root.
Combining this with the BFS analysis, we get: (d+1)·1 + d·b + (d-1)·b^2 + ... + 3·b^(d-2) + 2·b^(d-1) + b^d
Like DLS, IDS is still O(b^d) in time, while space complexity is O(b·d).
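IDS can be sketched as repeated depth-limited DFS with limits 0, 1, 2, and so on. The small tree is an illustrative assumption, as is the max_depth safety cap.

```python
def depth_limited(graph, node, goal, limit, path):
    """DFS that refuses to go deeper than `limit` below the current node."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in graph.get(node, []):
        found = depth_limited(graph, child, goal, limit - 1, path + [child])
        if found:
            return found
    return None

def ids(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):   # gradually increase the limit
        found = depth_limited(graph, start, goal, limit, [start])
        if found:
            return found
    return None

graph = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
print(ids(graph, 0, 5))   # [0, 2, 5]
```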
6. Bi-directional search
Simultaneously search forward from the start node (initial state) and backward from the goal until the two searches meet at a common state.
The search ends somewhere in the middle when the two frontiers touch each other.
Time complexity: O(2·b^(d/2)) = O(b^(d/2))
Space complexity is the same, because for the two searches to meet at some point, the frontier nodes need to be stored in memory.
It is complete and optimal (when both directions use BFS and all step costs are equal).
6. Bi-directional search (1)
Problems:
Do we know where the goal is?
What if there is more than one possible goal state? We may be able to apply a multiple-state search, but this sounds a lot easier said than done.
Example: how many checkmate states are there in chess?
We may utilize many different methods of search, but which one is the right choice?
Comparing uninformed search strategies
Lesson 4
INFORMED (HEURISTIC) SEARCH STRATEGIES
Greedy best-first search
A* search
Memory-bounded heuristic search
Greedy search (Best first search)
Greedy search is GENERAL-SEARCH where the minimum-cost nodes (according to some measure) are expanded first.
If we minimize the estimated cost to reach the goal, h(n), we get greedy search.
The search time is usually decreased compared to an uninformed algorithm, but the algorithm is neither optimal nor complete.
It evaluates nodes using a heuristic function: f(n) = h(n) = the estimated cost from n to the goal.
It is implemented using a priority queue ordered by increasing f(n).
Disadvantage: it can get stuck in loops, and it is not optimal.
Greedy search (Best first search) (1)
Properties of greedy best-first search. E.g., hSLD(n) = straight-line distance from n to Bucharest.
Complete? No
Time? O(b^m), but a good heuristic can give dramatic improvement
Space? O(b^m): keeps all nodes in memory
Optimal? No
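The priority-queue implementation described above can be sketched as follows. The graph and the heuristic values h are illustrative assumptions; the frontier is ordered by h(n) alone.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Always expand the frontier node with the smallest h(n)."""
    frontier = [(h[start], start, [start])]   # ordered by h(n) only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in graph.get(node, []):
            heapq.heappush(frontier, (h[child], child, path + [child]))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
h = {"S": 5, "A": 1, "B": 3, "G": 0}
print(greedy_best_first(graph, h, "S", "G"))   # ['S', 'A', 'G']
```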
A* search
Minimizing f(n) = g(n) + h(n) combines the advantages of uniform-cost search and greedy search. If we handle repeated states and guarantee that h(n) never overestimates, we get A* search.
A* is an algorithm that:
Uses a heuristic to guide the search while ensuring that it will compute a path with minimum cost.
Avoids expanding paths that are already expensive, and expands the most promising paths first.
Evaluation function: f(n) = g(n) + h(n)
g(n) = actual cost (cost so far to reach n)
h(n) = estimated cost (from n to the goal)
f(n) = estimated total cost of the path through n to the goal
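The evaluation function above can be sketched as a small A* implementation. The weighted graph and the heuristic h are illustrative assumptions; h is chosen so it never overestimates the true remaining cost.

```python
import heapq

def a_star(graph, h, start, goal):
    """Expand the frontier node with the smallest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {}                                  # cheapest g found per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                             # repeated state, worse g
        best_g[node] = g
        for child, step in graph.get(node, {}).items():
            g2 = g + step
            heapq.heappush(frontier, (g2 + h[child], g2, child, path + [child]))
    return None

graph = {"S": {"A": 1, "B": 4}, "A": {"B": 2, "G": 12}, "B": {"G": 3}}
h = {"S": 6, "A": 5, "B": 2, "G": 0}             # admissible estimates
print(a_star(graph, h, "S", "G"))   # (6, ['S', 'A', 'B', 'G'])
```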
A* search Example [textbook]
Properties of A* search
Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))
Time? Exponential
Space? Keeps all nodes in memory
Optimal? Yes
Memory-bounded heuristic search
We can reduce the space requirement of A* with memory-bounded algorithms.
In practice, A* runs out of memory before it runs out of time.
How can we solve the memory problem for A* search?
Solution: reduce the memory requirement while maintaining completeness and optimality.
There are three methods to reduce the memory requirement:
1. Iterative deepening A* (IDA*)
Similar to iterative deepening search, but cuts off when (g(n) + h(n)) > max instead of when depth > max.
At each iteration, the cutoff is the smallest f-cost that exceeded the cutoff of the previous iteration.
Memory-bounded heuristic search (1)
2. Recursive Best- first search(RBFS)
Recursive algorithm that attempt to mimic (copy) standard best-first search with
lenear spaces.
3. Memory-bounded A*(MA*)
Drop the worst-leaf node when memory is full.
Set max to some memory bound
Cut the full memory, to add a node drop the worst (g+h) node that is already
stored.
Expands newest best leaf, delates oldest worst leaf.
Memory Bounded Heuristic Search: RBFS
[Figure: RBFS keeps the f-value of the best alternative over fringe nodes that are not children, to decide whether to back up]
Avoiding Repeated States
For many problems, repeated states are unavoidable.
This includes all problems where the operators are reversible, such as route-finding problems.
The search trees for these problems are infinite, but if we prune some of the repeated states, we can cut the search tree down to a finite size.
The space contains only m + 1 states, where m is the maximum depth. Because the tree includes each possible path through the space, it has 2^m branches.
Avoiding Repeated States (1)
Sometimes it is possible to expand states/nodes that have already been visited. This may become costly and inefficient.
There are three ways to deal with repeated states, in increasing order of effectiveness and computational overhead:
1. Do not return to the state you just came from.
2. Do not create paths with cycles in them.
3. Do not generate any state that was ever generated before. This can result in a huge space requirement, O(b^d).
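The third (strongest) strategy can be sketched as graph search with a set of every state ever generated. The chain-shaped map with reversible moves is an illustrative assumption; without the `generated` set, BFS on it would re-enqueue A and B forever.

```python
from collections import deque

def graph_search(graph, start, goal):
    """BFS that never regenerates a previously generated state."""
    frontier = deque([[start]])
    generated = {start}                 # every state ever generated
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for child in graph.get(path[-1], []):
            if child not in generated:  # prune repeated states
                generated.add(child)
                frontier.append(path + [child])
    return None

# Reversible operators: every edge exists in both directions.
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(graph_search(graph, "A", "D"))   # ['A', 'B', 'C', 'D']
```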
Constraint satisfaction problems (CSPs)
A CSP is a standard search problem with a state and a goal:
The state is any data structure that supports a successor function and a goal test.
The goal is a set of constraints specifying allowable combinations of values for subsets of variables.
CSPs also use a factored representation of states:
a set of variables, each with a value
this allows for more efficient algorithms
that can find any solution
CSP: Goal formulation
X, is set of variables: {x1,x2,...xn}
D, is set of domain for each X,{D1,D2,...Dn}
C, is set of constraints, Ci=<scope, relationship>
CSP: Example 1 - Map-Coloring
No adjacent regions may have the same color.
The color of each region is red, green, or blue.
Variables: WA, NT, Q, NSW, V, SA, T
Domains Di: {red, green, blue}
Constraints: adjacent regions must have different colors.
E.g., WA ≠ NT, or (WA,NT) in {(red,green), (red,blue), (green,red), (green,blue), (blue,red), (blue,green)}
Solutions are complete and consistent assignments.
E.g., WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = green
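The map-coloring CSP above can be solved with a simple backtracking sketch. The variables, domains, and "adjacent regions differ" constraints come from the example; the plain depth-first backtracking strategy (no heuristics, no constraint propagation) is a deliberate simplification.

```python
NEIGHBORS = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
             "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": []}
COLORS = ["red", "green", "blue"]

def consistent(var, color, assignment):
    # The binary constraint: no neighbor may already have this color.
    return all(assignment.get(n) != color for n in NEIGHBORS[var])

def backtrack(assignment):
    if len(assignment) == len(NEIGHBORS):
        return assignment                       # complete and consistent
    var = next(v for v in NEIGHBORS if v not in assignment)
    for color in COLORS:
        if consistent(var, color, assignment):
            result = backtrack({**assignment, var: color})
            if result:
                return result
    return None                                 # dead end: backtrack

solution = backtrack({})
print(solution)
```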
CSP: Example 2 - Scheduling classes
Formulate in terms of variables, domains, and constraints.
Variables X: {comp1, comp2, comp3, comp4, etc.}
Domain D: {Monday 2-5am, Wednesday 8-11pm, Tuesday 3-5am, Thursday 8-10pm}
Constraints C: for all i, j, if Xi and Xj (classes) share requisites, then Ti ≠ Tj (T stands for time).
CSP: Example 3 - Cryptarithmetic (TWO + TWO = FOUR)
Variables: F, T, U, W, R, O, plus the carries C10, C100, C1000
Domains: {0,1,2,3,4,5,6,7,8,9}
Constraints: Alldiff(F, T, U, W, R, O), plus the column equations:
O + O = R + 10·C10
C10 + W + W = U + 10·C100
C100 + T + T = O + 10·C1000
C1000 = F
CSP: Varieties of constraints
• Unary constraints involve a single variable.
E.g., SA ≠ green
• Binary constraints involve pairs of variables.
E.g., SA ≠ WA
• Higher-order constraints involve 3 or more variables.
E.g., cryptarithmetic column constraints
Real-world CSPs
Many real-world problems involve real-valued variables
Assignment problems. E.g. who teaches what class
Timetabling problems. E.g. which class is offered when and where?
Transportation scheduling
Factory scheduling
Games as search problems
Search problems are usually characterized into two types:
Toy Problems (may require little or no ingenuity):
- The 8-puzzle
- The 8-queens problem
- The Vacuum World
- River-Crossing Puzzles
Real World Problems (complex to solve):
- Games
- Route finding
- Travelling salesman problems
- VLSI layout
- Robot navigation
- Assembly sequencing
Toy Problems: The 8-puzzle
The 8-puzzle is a small single-player board game:
Tiles are numbered 1 through 8, with one blank space, on a 3 x 3 board.
A 15-puzzle, using a 4 x 4 board, is commonly sold as a child's puzzle.
Moves are made by sliding an adjacent tile into the position occupied by the blank space, exchanging the positions of the tile and the blank space.
Only tiles that are horizontally or vertically adjacent (not diagonally adjacent) may be moved into the blank space.
[Figure: START and GOAL configurations]
Goal test: the tiles are in ascending order.
Path cost: each move has a cost of one.
States: the locations of the tiles and the blank in the n x n matrix.
Operators: the blank moves left, right, up, or down.
Copyright, 2000 © Cheryl Arany, Roi Barak, Martin Dang & Arthur Katayen
Toy Problems: The N-queens problem
In this problem, we need to place eight queens on the chess board so that they do not attack each other. A queen attacks any piece in the same row, column, or diagonal.
States: any arrangement of 0 to 8 queens on the board is a state.
Initial state: no queens on the board.
Actions: add a queen to any empty square.
Transition model: returns the board with a queen added to the specified square.
Goal test: 8 queens are on the board, none attacked.
8-Queens problem
iNITIAL GOAL
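The goal test above can be sketched in code. The one-queen-per-column representation (a tuple of row indices) is a common simplification and an assumption here, since it rules out column attacks by construction; the sample solution board is a well-known 8-queens solution.

```python
from itertools import combinations

def no_attacks(rows):
    """Goal test: no two queens share a row or a diagonal.
    rows[c] is the row of the queen in column c."""
    return all(r1 != r2 and abs(r1 - r2) != abs(c1 - c2)
               for (c1, r1), (c2, r2) in combinations(enumerate(rows), 2))

print(no_attacks((0, 4, 7, 5, 2, 6, 1, 3)))   # True: a known solution
print(no_attacks((0, 1, 2, 3, 4, 5, 6, 7)))   # False: all on one diagonal
```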
Toy Problems: The Vacuum World
Examine the Vacuum World. Assume the agent knows its own location and the locations of the dirt.
Goal test: no dirt left in any square.
Path cost: each action costs one.
States: one of the eight states.
Operators: move left, move right, suck.
Toy Problems: River-Crossing Puzzles
The puzzle presents a farmer who has to transport a goat, a wolf, and some
cabbages across a river in a boat that will only hold the farmer and one of the cargo
items.
In this scenario, the cabbages will be eaten by the goat, and the goat will be eaten
by the wolf, if left together unattended.
Solutions to river-crossing puzzles usually involve multiple trips with certain items
brought back and forth between the riverbanks.
Goal test: reached state (0,0,0,0).
Path cost: number of crossings made.
States: a state is composed of four numbers representing the number of goats,
wolfs, cabbages, boat trips. At the start state there is (1,1,1,0)
Operators: from each state the possible operators are: move wolf, move
cabbages, move goat. In total, there are 3 operators.
Real World Problems: Route finding
Route finding is applicable or used in:
- Computer networks
- Automated travel advisory systems
- Airline travel planning systems
Path cost can include:
- money
- seat quality
- time of day
- type of airplane
Real World Problems: Travelling Salesman Problem (TSP)
A salesman must visit N cities.
Each city is visited exactly once, finishing at the city he started from. There is usually an integer cost c(a,b) to travel from city a to city b.
The total tour cost must be minimized, where the total cost is the sum of the individual costs of each leg of the tour.
It is an NP-complete problem:
no one has found a really efficient way of solving it for large N.
Thank you