Problem Solving by Searching
Outline
Introduction
Problem-solving agents
Problem formulation
Problem types
Example problems
Basic search algorithms
Problem Solving by Search
The solution of many problems can be described by finding a sequence of
actions that lead to a desirable goal.
Each action changes the state and the aim is to find the sequence of actions and
states that lead from the initial (start) state to a final (goal) state.
A problem can be described by:
Initial state
Operator or successor function
State space
Path cost
Goal test
Problem Solving by Search
Well-defined problem can be described by:
Initial state - the state that the agent starts in
Operator or successor function - for any state x
returns s(x), the set of states reachable from x with one action
State space - all states reachable from initial by any sequence
of actions (set of possible states for a given problem)
Path cost - a function that assigns a cost to a path; the cost of a path is
the sum of the costs of the individual actions along the path
Goal test - a test to determine whether a given state is a goal state
INTRODUCTION
Problem solving involves
Problem definition
Problem analysis
Knowledge representation - representing information
Problem solving (selection of best techniques)
Knowledge representation is not just storing data in a
database; it also enables an intelligent machine to learn
from that knowledge and experience so that it can behave
intelligently like a human.
Problem-solving agents
Example: Shortest path between VIT
Vellore Campus To VIT Chennai Campus
Reach the campus by the shortest route while also crossing
the minimum number of toll gates.
Problem formulation: It is one of the core steps of problem-solving which
decides what action should be taken to achieve the formulated goal.
Formulate goal:
Shortest path
Formulate problem:
states: various urban areas
actions: drive between cities
Find solution:
sequence of urban areas, e.g., Kancheepuram, Thambaram,
Kelambakkam.
Example: Cont…
Problem types
Deterministic, fully observable → single-state problem
The agent knows exactly which state it will be in after any
action sequence; the solution is a sequence of actions.
Deterministic, non-observable → sensorless problem
(multi-state problem)
Also called conformant problems.
The agent may have no idea where it is; the solution is still a
sequence (the current state is not exactly known but is limited
to a set of possible states).
Problem types
Nondeterministic and/or partially observable → contingency problem
The agent doesn't know what effect its actions will have.
percepts provide new information about current state
This could be due to the environment being partially observable, or
because of another agent.
Ways to handle this:
Solution is a list of conditional actions
Interleave search and execution
Example: partially observable vacuum world (the agent does not
know the status of the other square)
Unknown state space → exploration problem
States and actions of the environment are unknown. The agent must act
to discover them.
Difficult! Example: online search agent
Problem types
Single-state problem
observable (at least the initial state)
deterministic
static
discrete
Multiple-state problem
partially observable (initial state not observable)
deterministic
static
discrete
Contingency problem
partially observable (initial state not observable)
non-deterministic
Example: vacuum world
We need to clean the world using a vacuum cleaner. The world has just two locations.
Eight possible world states. There are three possible actions: left, right, and suck. The
goal is to clean up all the dirt, i.e., the goal is equivalent to the set of states {7,8}.
Single-state, start in #5.
Solution?
Example: vacuum world
Single-state, start in #5.
Solution? [Right, Suck]
Sensorless, start in
{1,2,3,4,5,6,7,8} e.g.,
Right goes to {2,4,6,8}
Solution?
Example: vacuum world
Sensorless, start in
{1,2,3,4,5,6,7,8} e.g.,
Right goes to {2,4,6,8}
Solution?
[Right, Suck, Left, Suck]
No single action is guaranteed to reach the goal, but this fixed
sequence works from every state in the belief set.
Contingency
Nondeterministic: Suck may
dirty a clean carpet [Murphy's law]
Partially observable: location, dirt at current location.
Percept: [L, Clean], i.e., start in #5 or #7
Solution?
Murphy's law: "Anything that can go wrong will go wrong." As a rule-based system,
automation does what you tell it to do. If you give it faulty or incorrect
instructions, it will either not work at all or complete the task incorrectly.
Example: vacuum world
Contingency (partially observable; percept [L, Clean], i.e., start in #5 or #7):
Solution? [Right, if dirt then Suck]
Single-state problem formulation
A problem is defined by four items:
1. Initial state e.g., "at Katpadi"
2. Actions or successor function S(x) = set of action–state pairs
e.g., S(Katpadi) = {<Wallaja Kancheepuram, Padappai>, … }
3. Goal test, can be
explicit, e.g., x = "at VIT Chennai"
implicit, e.g., crossing the minimum number of toll gates, or Shortest-path(x)
4. Path cost (additive)
e.g., sum of distances, number of actions executed, etc.
c(x,a,y) is the step cost, assumed to be ≥ 0
A solution is a sequence of actions leading from the initial state to a goal state
Selecting a state space
Real world is absurdly complex → state space must be abstracted for problem solving
(Abstract) state = set of real states
(Abstract) action = complex combination of real actions
e.g., "drive from Katpadi to Chennai" represents a complex set of possible routes, detours, rest
stops, etc.
For guaranteed realizability, any real state "in Katpadi" must get to some real
state "in Chennai"
(Abstract) solution = set of real paths that are solutions in the real world
Each abstract action should be "easier" than the original problem
Vacuum world state space graph
states?
actions?
goal test?
path cost?
Vacuum world state space graph
states? integer dirt and robot location
actions? Left, Right, Suck
goal test? no dirt at all locations
path cost? 1 per action
Example: The 8-puzzle
states?
actions?
goal test?
path cost?
Example: The 8-puzzle
states? locations of tiles
actions? move blank left, right, up, down
goal test? = goal state (given)
path cost? 1 per move
[Note: optimal solution of n-Puzzle family is NP-hard]
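The successor function above (move blank left, right, up, down) can be sketched in Python. This is a minimal illustration, not from the slides: it assumes a state is a 9-tuple in row-major order with 0 for the blank.

```python
def successors(state):
    """Generate (action, next_state) pairs for the 8-puzzle.

    A state is assumed to be a tuple of 9 entries in row-major
    order, where 0 marks the blank square.
    """
    i = state.index(0)                       # position of the blank
    row, col = divmod(i, 3)
    moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    result = []
    for action, (dr, dc) in moves.items():
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:        # the move must stay on the board
            j = r * 3 + c
            nxt = list(state)
            nxt[i], nxt[j] = nxt[j], nxt[i]  # slide the tile into the blank
            result.append((action, tuple(nxt)))
    return result

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)          # blank in the centre
for action, nxt in successors(start):
    print(action, nxt)                       # four legal moves from the centre
```

With the blank in the centre, all four moves are legal; in a corner only two are, which is why the effective branching factor of the 8-puzzle is below 4.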
Tree search algorithms
Basic idea:
offline, simulated exploration of state space by generating
successors of already-explored states (a.k.a. expanding states)
Tree search example
Implementation: general tree search
Fringe – the queue of nodes that have been generated but not yet expanded
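The general tree-search skeleton can be sketched in Python. This is a minimal illustration, not the slides' own code; the fringe policy decides the strategy, and with a FIFO queue (as here) it behaves like breadth-first search.

```python
from collections import deque

def tree_search(initial, successors, is_goal):
    """Generic tree search: repeatedly take a node off the fringe,
    test it, and expand it by generating its successors.

    `successors(state)` returns the states reachable in one action;
    `is_goal(state)` is the goal test.
    """
    fringe = deque([(initial, [initial])])   # fringe holds (state, path so far)
    while fringe:
        state, path = fringe.popleft()       # FIFO: pop the shallowest node
        if is_goal(state):
            return path
        for nxt in successors(state):        # expand: generate successors
            fringe.append((nxt, path + [nxt]))
    return None                              # fringe exhausted: no solution

# Toy state space (hypothetical, for illustration only)
graph = {"A": ["B", "C"], "B": ["D"], "C": ["G"], "D": [], "G": []}
print(tree_search("A", graph.get, lambda s: s == "G"))  # ['A', 'C', 'G']
```

Swapping the `popleft()` for `pop()` would turn the same skeleton into depth-first search, which is the point of defining the strategy by the fringe discipline.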
Implementation: states vs. nodes
A state is a (representation of) a physical configuration
A node is a data structure constituting part of a search tree
includes state, parent node, action, path cost g(x), depth
The Expand function creates new nodes, filling in the various
fields and using the SuccessorFn of the problem to create the
corresponding states.
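The node structure and the Expand function can be sketched as follows. This is an illustrative sketch: the field names mirror the slide (state, parent, action, path cost g(x), depth), and the successor function's (action, state, cost) shape is an assumption.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # the state this node represents
    parent: Optional["Node"] = None  # the node that generated this one
    action: Any = None               # action applied to the parent's state
    path_cost: float = 0.0           # g(x): cost of the path from the root
    depth: int = 0                   # number of actions from the root

def expand(node, successor_fn):
    """Create child nodes, filling in the fields from the parent.

    `successor_fn(state)` is assumed to yield (action, next_state, step_cost)
    triples for the given state.
    """
    return [Node(state=s, parent=node, action=a,
                 path_cost=node.path_cost + c, depth=node.depth + 1)
            for a, s, c in successor_fn(node.state)]

root = Node(state="Katpadi")
children = expand(root, lambda s: [("drive", "Wallaja", 25)])  # hypothetical step
print(children[0].state, children[0].path_cost, children[0].depth)
```

Keeping the parent pointer in each node is what lets a search return the whole solution path once a goal node is found.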
Search strategies
Searching is the process of finding a solution for a given
problem.
Rational (problem-solving) agents use these search strategies to
solve a specific problem and provide the best result.
A search strategy is defined by picking the order of node
expansion
Strategies are evaluated along the following dimensions:
completeness: does it always find a solution if one exists?
time complexity: number of nodes generated
space complexity: maximum number of nodes in memory
optimality: does it always find a least-cost solution?
If an algorithm is complete, then whenever at least one solution exists the
algorithm is guaranteed to find a solution in a finite amount of time.
If a search algorithm is optimal, then when it finds a solution it finds the
best solution.
Search strategies
Time and space complexity are measured in terms of
b: maximum branching factor of the search tree
d: depth of the least-cost solution
m: maximum depth of the state space (may be ∞)
Uninformed search strategies
‘Uninformed search’ means the machine blindly follows the algorithm
regardless of whether it is right or wrong, efficient or inefficient. These
algorithms are brute-force operations; thus uninformed search algorithms are
also called blind search algorithms.
They don’t have extra information about the search space; the only information
they have is on how to traverse or visit the nodes in the tree.
Uninformed search strategies use only the information available in the problem
definition (blind search - without using any domain-specific knowledge).
Different types of Uninformed search
strategies
Breadth-first search
Uniform-cost search
Depth-first search
Depth-limited search
Iterative deepening search
Bidirectional Search
Breadth-first search
This algorithm searches breadthwise in a tree or graph.
Searching starts from the root node of the tree and expands all
successor nodes at the current level before moving to the nodes of the
next level.
Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given problem, then BFS will
provide the minimal solution, i.e., the one requiring the least number of steps.
Disadvantages:
• It requires lots of memory since each level of the tree must be saved into
memory to expand the next level.
• BFS needs lots of time if the solution is far away from the root node.
Breadth-first search
Expand shallowest unexpanded node
Implementation:
fringe is a FIFO queue, i.e., new successors go at end
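The FIFO-fringe idea can be sketched in Python. This is a minimal illustration, not the slides' own code; the graph is hypothetical, and the visited set (to avoid re-expanding repeated states) is an addition beyond the bare slide.

```python
from collections import deque

def bfs(start, successors, is_goal):
    """Breadth-first search: FIFO fringe, new successors go at the end."""
    if is_goal(start):
        return [start]
    fringe = deque([(start, [start])])
    visited = {start}                        # avoid re-expanding repeated states
    while fringe:
        state, path = fringe.popleft()       # shallowest unexpanded node
        for nxt in successors(state):
            if nxt not in visited:
                if is_goal(nxt):
                    return path + [nxt]      # goal found at the shallowest depth
                visited.add(nxt)
                fringe.append((nxt, path + [nxt]))  # append = go to the end
    return None

# Hypothetical state space for illustration
graph = {"S": ["A", "B"], "A": ["C"], "B": ["C", "G"], "C": [], "G": []}
print(bfs("S", graph.get, lambda s: s == "G"))  # ['S', 'B', 'G']
```

Because nodes are expanded strictly level by level, the first goal found is the one with the fewest steps.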
Properties of breadth-first search
Complete? Yes (if b is finite)
Time? O(b^d)
Space? O(b^d) (keeps every node in memory)
Optimal? Yes (if cost = 1 per step)
Space is the bigger problem (more than time)
Uniform-Cost Search
Traversing a weighted tree or graph.
This algorithm comes into play when a different cost is available for
each edge.
The primary goal of the uniform-cost search is to find a path to the
goal node which has the lowest cumulative cost.
Uniform-cost search expands nodes according to their path costs from
the root node.
It can be used to solve any graph/tree where the optimal cost is in
demand.
Uniform-Cost Search
Implemented with a priority queue that gives maximum priority to the
lowest cumulative cost.
g(n) = cost of the path from start node to the current node
Uniform cost search is equivalent to BFS algorithm if the path cost of
all edges is the same.
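The priority-queue formulation can be sketched in Python with `heapq`. This is a minimal illustration, not the slides' own code; the weighted graph below is hypothetical (loosely modeled on the trace that follows, whose exact edges are not given).

```python
import heapq

def uniform_cost_search(start, goal, edges):
    """UCS: priority queue ordered by cumulative path cost g(n).

    `edges[state]` is assumed to map to a list of (next_state, step_cost).
    """
    fringe = [(0, start, [start])]           # entries are (g, state, path)
    best_g = {}                              # cheapest expansion cost per state
    while fringe:
        g, state, path = heapq.heappop(fringe)  # lowest cumulative cost first
        if state == goal:
            return g, path                   # goal test on expansion => optimal
        if state in best_g and best_g[state] <= g:
            continue                         # already expanded more cheaply
        best_g[state] = g
        for nxt, cost in edges.get(state, []):
            heapq.heappush(fringe, (g + cost, nxt, path + [nxt]))
    return None

# Hypothetical weighted graph for illustration
edges = {"S": [("A", 3), ("B", 1), ("C", 8)],
         "B": [("G", 20)], "A": [("G", 15)], "C": [("G", 5)]}
print(uniform_cost_search("S", "G", edges))  # (13, ['S', 'C', 'G'])
```

Note that the goal test happens when a node is popped, not when it is generated; testing on generation could return a more expensive path to the goal that happens to be generated first.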
Uniform-Cost Search
S0
S0{B1, A3, C8}
B1{A3,C8,G21}
A3{D6,C8,E10,G18,G21}
D6{C8,E10,G18,G21}
C8{E10,G13,G18,G21}
E10{G13,G18,G21}
G13{G18,G21}
Path = S → C → G ⇒ cost 13
Number of nodes expanded (including the goal node) = 7
Uniform-Cost Search - Exercise
Source node: A, Goal node: H
Path cost = ? Minimum cost 25, path A → C → D → H.
Which path expands the maximum number of nodes to reach the goal node?
Ans: A → C → B → E → F → G → D → H, cost 27.
Uniform-cost search
Advantages:
• Uniform cost search is optimal because at every state the path with the least cost is
chosen.
Disadvantages:
• It does not care about the number of steps involved in the search, only about
path cost, so the algorithm may get stuck in an infinite loop (e.g., when there
are zero-cost cycles).
Uniform-cost search
Expand the lowest-path-cost unexpanded node
Implementation:
fringe = priority queue ordered by path cost
Equivalent to breadth-first if step costs all equal
Complete? Yes (if b is finite and every step cost is ≥ ε > 0)
Time complexity: O(b^(C*/ε)), where ε is the lowest step cost and C* is the optimal cost
Space complexity: O(b^(C*/ε))
Optimal? Yes (even for non-uniform step costs)
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO stack, i.e., put successors at front
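The LIFO-fringe idea can be sketched in Python. This is a minimal illustration, not the slides' own code; the graph is hypothetical, and the path-based cycle check and depth guard are additions beyond the bare slide (plain DFS has neither and can loop forever).

```python
def dfs(start, successors, is_goal, limit=50):
    """Depth-first search: LIFO fringe, successors go at the front."""
    fringe = [(start, [start])]              # a plain list used as a stack
    while fringe:
        state, path = fringe.pop()           # pop = deepest unexpanded node
        if is_goal(state):
            return path
        if len(path) > limit:                # guard against unbounded descent
            continue
        for nxt in successors(state):
            if nxt not in path:              # avoid cycling along this path
                fringe.append((nxt, path + [nxt]))
    return None

# Hypothetical state space for illustration
graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": [], "G": []}
print(dfs("S", graph.get, lambda s: s == "G"))  # ['S', 'B', 'G']
```

The memory advantage is visible in the code: the fringe only ever holds the siblings of nodes on the current path, not a whole level of the tree.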
Depth-first search
Advantages:
DFS requires very little memory, as it only needs to store the stack
of nodes on the path from the root node to the current node.
It can take less time than BFS to reach the goal node
(if it happens to traverse the right path).
Disadvantages:
There is the possibility that many states keep reoccurring, and
there is no guarantee of finding a solution.
The DFS algorithm searches deep down and may descend into an
infinite loop.
Properties of depth-first search
Complete? No: fails in infinite-depth spaces and spaces with loops
Modify to avoid repeated states along the path → complete in finite spaces
Time? O(b^m): terrible if m is much larger than d
but if solutions are dense, may be much faster than breadth-first
(m – maximum depth of the state space)
Space? O(bm), i.e., linear space!
Optimal? No
Depth-limited search
The depth limit solves the infinite-path problem.
Depth-limited search can be terminated with two Conditions of failure:
Standard Failure Value(SFV): The SFV tells that there is no solution to the
problem.
Cutoff Failure Value(CFV): The Cutoff Failure Value tells that there is no
solution within the given depth-limit.
(Figure: search trees explored with limit 0, limit 1, and limit 2.)
Depth is calculated from root to leaf; height is calculated from leaf to root.
Depth-limited search
= depth-first search with depth limit l,
i.e., nodes at depth l have no successors
Recursive implementation:
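A recursive implementation can be sketched in Python, including the two failure values described above. This is an illustrative sketch, not the slides' own code; the sentinel strings and the graph are assumptions.

```python
CUTOFF, FAILURE = "cutoff", "failure"

def depth_limited_search(state, successors, is_goal, limit):
    """Recursive DLS: depth-first search that treats nodes at the
    depth limit as if they had no successors.

    Returns a path to the goal, FAILURE (no solution in the searched
    space), or CUTOFF (the limit was hit, so deeper solutions may exist).
    """
    if is_goal(state):
        return [state]
    if limit == 0:
        return CUTOFF                        # cutoff failure value (CFV)
    cutoff_occurred = False
    for nxt in successors(state):
        result = depth_limited_search(nxt, successors, is_goal, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result != FAILURE:
            return [state] + result          # propagate the solution path up
    return CUTOFF if cutoff_occurred else FAILURE  # SFV when no cutoff hit

# Hypothetical state space: the goal G sits at depth 2
graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": [], "G": []}
print(depth_limited_search("S", graph.get, lambda s: s == "G", 1))  # cutoff
print(depth_limited_search("S", graph.get, lambda s: s == "G", 2))  # ['S', 'B', 'G']
```

Distinguishing CUTOFF from FAILURE matters for iterative deepening: only a CUTOFF result justifies retrying with a larger limit.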
Properties of Depth-limited search
Complete: No
Time complexity: O(b^l), where l is the depth limit
Space complexity: O(bl)
Optimal: No
Advantages:
Depth-limited search is memory-efficient.
Disadvantages:
• Depth-limited search also has a disadvantage of incompleteness.
• It may not be optimal if the problem has more than one solution.
Iterative deepening search
• It is a combination of DFS and BFS algorithms.
• It finds out the best depth limit and does it by gradually increasing
the limit until a goal is found.
• It combines the benefits of Breadth-first search's fast search and
depth-first search's memory efficiency.
• Iterative deepening is a useful uninformed search when the
search space is large and the depth of the goal node is unknown.
Advantages:
• It combines the benefits of BFS and DFS search algorithm in
terms of fast search and memory efficiency.
Disadvantages:
• The main drawback of IDDFS is that it repeats all the work of
the previous phase.
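The gradually-increasing-limit idea can be sketched in Python, with a self-contained depth-limited helper inside. This is a minimal illustration, not the slides' own code; the graph and `max_depth` bound are assumptions.

```python
def iterative_deepening_search(start, successors, is_goal, max_depth=50):
    """IDS: run depth-limited DFS with limit 0, 1, 2, ... until a goal
    is found, combining DFS memory use with BFS-like shallowest-first
    discovery."""
    def dls(state, limit, path):
        if is_goal(state):
            return path
        if limit == 0:
            return None                      # cutoff: stop descending
        for nxt in successors(state):
            result = dls(nxt, limit - 1, path + [nxt])
            if result is not None:
                return result
        return None

    for limit in range(max_depth + 1):       # gradually increase the limit
        result = dls(start, limit, [start])
        if result is not None:
            return result                    # found at the shallowest depth
    return None

# Hypothetical state space for illustration
graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": [], "G": []}
print(iterative_deepening_search("S", graph.get, lambda s: s == "G"))  # ['S', 'B', 'G']
```

Each iteration redoes the shallower levels from scratch, which is exactly the repeated work the drawback above refers to; the next slide quantifies why that overhead is small.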
Iterative deepening search, l = 0
Iterative deepening search, l = 1
Iterative deepening search, l = 2
Iterative deepening search, l = 3
Iterative deepening search
Number of nodes generated in a depth-limited search to depth d
with branching factor b:
N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d
Number of nodes generated in an iterative deepening search to
depth d with branching factor b:
N_IDS = (d+1)b^0 + d·b^1 + (d-1)b^2 + … + 3b^(d-2) + 2b^(d-1) + 1·b^d
For b = 10, d = 5:
N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
Overhead = (123,456 - 111,111)/111,111 ≈ 11%
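The node counts above can be checked with a few lines of Python (a quick verification of the formulas, not part of the original slides):

```python
b, d = 10, 5

# Depth-limited search to depth d generates each level exactly once.
n_dls = sum(b**i for i in range(d + 1))

# Iterative deepening regenerates level i on every iteration with
# limit >= i, so level i is generated (d + 1 - i) times in total.
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))

print(n_dls, n_ids)                          # 111111 123456
print(round((n_ids - n_dls) / n_dls * 100))  # 11 (percent overhead)
```

The bottom level dominates both sums, which is why regenerating all the shallower levels costs only about 11% extra.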
Properties of iterative deepening
search
Complete? Yes
Time? O(b^d)
Space? O(bd)
Optimal? Yes, if step cost = 1
Bidirectional Search Algorithm
The bidirectional search algorithm runs two simultaneous searches:
one from the initial state, called the forward search, and the other
from the goal node, called the backward search.
Bidirectional search replaces one single search graph with two
small subgraphs: one starts the search from the initial vertex and
the other starts from the goal vertex. The search stops when the
two graphs intersect each other.
Bidirectional search can use search techniques such as BFS, DFS,
DLS, etc.
Bidirectional Search Algorithm
Step 1: 1 is the initial node and 16 is the goal node, and 9 is the intersection
node.
Step 2: We will start searching simultaneously from start to goal node and
backward from goal to start node.
Step 3: Whenever the forward search and the backward search intersect at a
node, the searching stops.
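The two-frontier idea can be sketched in Python using BFS in both directions. This is an illustrative sketch, not the slides' own code; it assumes an undirected graph (so the backward search can reuse the same adjacency map), and the small chain graph is hypothetical.

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """Two simultaneous BFS frontiers; stop when they intersect."""
    if start == goal:
        return [start]
    fwd_parent, bwd_parent = {start: None}, {goal: None}
    fwd_fringe, bwd_fringe = deque([start]), deque([goal])

    def step(fringe, parents, others):
        """Expand one node on this frontier; return the meeting node, if any."""
        state = fringe.popleft()
        for nxt in neighbors[state]:
            if nxt not in parents:
                parents[nxt] = state
                fringe.append(nxt)
                if nxt in others:            # the two frontiers intersect here
                    return nxt
        return None

    while fwd_fringe and bwd_fringe:
        meet = step(fwd_fringe, fwd_parent, bwd_parent)
        if meet is None:
            meet = step(bwd_fringe, bwd_parent, fwd_parent)
        if meet is not None:
            path = []                        # walk meet -> start, then reverse
            s = meet
            while s is not None:
                path.append(s)
                s = fwd_parent[s]
            path.reverse()
            s = bwd_parent[meet]             # then walk meet -> goal
            while s is not None:
                path.append(s)
                s = bwd_parent[s]
            return path
    return None

# Small undirected chain (hypothetical): 1-2-3-4-5
graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(bidirectional_search(1, 5, graph))  # [1, 2, 3, 4, 5]
```

The implementation difficulty mentioned below is visible here: stitching the two parent maps together at the meeting node is the fiddly part.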
Bidirectional Search Algorithm
Advantages:
• Bidirectional search is fast.
• Bidirectional search requires less memory
Disadvantages:
• Implementation of the bidirectional search tree is difficult.
• In bidirectional search, one should know the goal state in advance.
Properties of Bidirectional Search
Algorithm
Completeness: Bidirectional search is complete if we use BFS in both
searches.
Time complexity: Time complexity of bidirectional search using BFS is
O(b^(d/2)), since each search only needs to reach about half the solution depth.
Space complexity: Space complexity of bidirectional search is O(b^(d/2)).
Optimal: Yes (when both directions use BFS and step costs are uniform).
Summary of algorithms