Unit - II Problem Solving in AI

The document discusses various problem-solving techniques in AI, focusing on search strategies such as uninformed and informed searches. It covers specific methods like breadth-first search, depth-first search, uniform cost search, and heuristic approaches like A* search. Each method is analyzed for completeness, optimality, and time/space complexity, highlighting their advantages and disadvantages in finding solutions.

UNIT-II

PROBLEM SOLVING IN AI
SEARCH TECHNIQUES
Searching for solutions
A solution is found by searching through the state space
Every problem is transformed into a search tree
generated by the initial state and the successor function
Uninformed search strategies
Uninformed search
no information about the number of steps or the path cost from the current state to the goal
searches the state space blindly
Informed search, or heuristic search
a cleverer strategy that searches toward the goal, based on the information gathered from the current state so far
Uninformed search strategies
Breadth-first search
Uniform cost search
Depth-first search
Depth-limited search
Iterative deepening search
Bidirectional search
Breadth-first search
The root node is expanded first (FIFO)
All the nodes generated by the root node are then expanded
and then their successors, and so on
Breadth-first search
[Figure: example search tree rooted at S with successors A and D; breadth-first search expands the tree level by level, with path costs (11, 13, 14, 15, 17, 19, 25) shown at the deeper goal nodes G.]
Breadth-First Strategy
New nodes are inserted at the end of FRINGE
For a tree with root 1, children 2 and 3, and grandchildren 4, 5, 6, 7, the fringe evolves as:
FRINGE = (1)
FRINGE = (2, 3)
FRINGE = (3, 4, 5)
FRINGE = (4, 5, 6, 7)
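The FIFO fringe above can be sketched in Python; the function names and the tiny tree are illustrative, not from the slides.

```python
from collections import deque

def breadth_first_search(start, successors, is_goal):
    """Expand the shallowest unexpanded node: FIFO fringe, new nodes at the end."""
    fringe = deque([start])
    expanded = []                         # order in which nodes are expanded
    while fringe:
        node = fringe.popleft()           # take from the front (FIFO)
        expanded.append(node)
        if is_goal(node):
            return node, expanded
        fringe.extend(successors(node))   # put successors at the end
    return None, expanded

# The small tree from the slides: root 1, children 2 and 3, leaves 4..7.
tree = {1: [2, 3], 2: [4, 5], 3: [6, 7]}
goal, order = breadth_first_search(1, lambda n: tree.get(n, []), lambda n: n == 7)
```

On this tree the nodes are expanded level by level, 1, 2, 3, 4, 5, 6, 7, mirroring the fringe evolution shown above.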
Breadth-first search (Analysis)
Complete – finds the solution eventually
Optimal, if the step cost is 1
The disadvantage
if the branching factor of a node is large, the space and time complexity are enormous even for small instances (e.g., chess)
Properties of breadth-first search
Complete? Yes (if b is finite)
Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
Space? O(b^(d+1)) (keeps every node in memory)
Optimal? Yes (if cost = 1 per step)
Space is the bigger problem (more than time)
Breadth-first search (Analysis)
assuming 10,000 nodes can be processed per second, each with 1,000 bytes of storage, the memory and time requirements grow exponentially with depth
Uniform cost search
Breadth-first search finds the shallowest goal state
but that is not necessarily the least-cost solution
it works only if all step costs are equal
Uniform cost search
modifies the breadth-first strategy by always expanding the lowest-cost node
the lowest-cost node is measured by the path cost g(n)
the first solution found is guaranteed to be the cheapest
but this requires path costs to be non-decreasing along every path
unsuitable for operators with negative cost
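A minimal sketch of uniform cost search using a priority queue ordered by g(n); the example graph is hypothetical, and non-negative step costs are assumed, as noted above.

```python
import heapq

def uniform_cost_search(start, goal, edges):
    """Always expand the node with the lowest path cost g(n)."""
    fringe = [(0, start)]                 # priority queue of (g, node)
    best = {start: 0}                     # cheapest known cost to each node
    while fringe:
        g, node = heapq.heappop(fringe)
        if node == goal:
            return g                      # first goal popped is the cheapest
        if g > best.get(node, float('inf')):
            continue                      # stale queue entry, skip
        for succ, cost in edges.get(node, []):
            new_g = g + cost              # non-negative step costs assumed
            if new_g < best.get(succ, float('inf')):
                best[succ] = new_g
                heapq.heappush(fringe, (new_g, succ))
    return None

# Hypothetical weighted graph: the shallow edge S-G costs 10, but the
# deeper path S-A-G costs 1 + 5 = 6 and is the one found.
graph = {'S': [('G', 10), ('A', 1)], 'A': [('G', 5)]}
```

Note how the direct (shallowest) edge to G is skipped in favour of the cheaper two-step path, the exact point where uniform cost search differs from breadth-first search.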
Depth-first search
Always expands one of the nodes at the deepest level of the tree
Only when the search hits a dead end does it go back and expand nodes at shallower levels
Dead end → leaf nodes that are not the goal
Backtracking search
only one successor is generated on expansion rather than all successors
uses less memory
Depth-first search
Expand the deepest unexpanded node
Implementation:
fringe = LIFO queue (a stack), i.e., put successors at the front
Depth-first search
[Figure: the same example tree rooted at S; depth-first search follows one branch to the deepest level before backtracking, with path costs (11, 13, 14, 15, 17, 19, 25) at the goal nodes G.]
Depth-first search (Analysis)
Not complete
a path may be infinite or looping; such a path never fails, so the search never goes back to try another option
Not optimal
it does not guarantee the best solution
Properties of depth-first search
Complete? No: fails in infinite-depth spaces and in spaces with loops
modify to avoid repeated states along the path → complete in finite spaces
Time? O(b^m): terrible if m is much larger than d
but if solutions are dense, it may be much faster than breadth-first search
Optimal? No
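The LIFO fringe can be sketched as below; the names and the small example tree are illustrative.

```python
def depth_first_search(start, successors, is_goal):
    """Expand the deepest unexpanded node: LIFO fringe (a stack)."""
    fringe = [start]
    expanded = []                              # order of expansion
    while fringe:
        node = fringe.pop()                    # take from the top (LIFO)
        expanded.append(node)
        if is_goal(node):
            return node, expanded
        # Push successors in reverse so the leftmost child is expanded first.
        fringe.extend(reversed(successors(node)))
    return None, expanded

# Same small tree as in the breadth-first example.
tree = {1: [2, 3], 2: [4, 5], 3: [6, 7]}
goal, order = depth_first_search(1, lambda n: tree.get(n, []), lambda n: n == 7)
```

Here the left subtree (2, 4, 5) is exhausted before the search backtracks to node 3, unlike the level-by-level order of breadth-first search.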
Depth-Limited Strategy
Depth-first with depth cutoff k (maximal depth below which nodes are not expanded)
Three possible outcomes:
Solution
Failure (no solution)
Cutoff (no solution within cutoff)
Depth-limited search
It is depth-first search with a predefined maximum depth
However, it is usually not easy to define a suitable maximum depth
too small → no solution can be found
too large → suffers from the same problems as DFS
Anyway the search is complete but still not optimal
Depth-limited search
[Figure: the example tree rooted at S searched with depth limit 3; nodes below depth 3 are not expanded.]
Iterative deepening search
No need to choose the best depth limit in advance
It tries all possible depth limits: first 0, then 1, 2, and so on
combines the benefits of depth-first and breadth-first search
Iterative deepening search
Iterative deepening search (Analysis)
optimal
complete
time and space complexities are reasonable
suitable for problems with a large search space where the depth of the solution is not known
Properties of iterative deepening search
Complete? Yes
Time? (d+1)b^0 + d·b^1 + (d−1)b^2 + … + b^d = O(b^d)
Space? O(bd)
Optimal? Yes, if step cost = 1
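The try-all-limits loop can be sketched as below; a recursive depth-limited search is assumed, and the `max_depth` bound is an illustrative safety cap, not part of the algorithm.

```python
def depth_limited_search(node, successors, is_goal, limit):
    """Depth-first search with a depth cutoff; returns the goal node or None."""
    if is_goal(node):
        return node
    if limit == 0:
        return None                       # cutoff reached
    for succ in successors(node):
        result = depth_limited_search(succ, successors, is_goal, limit - 1)
        if result is not None:
            return result
    return None

def iterative_deepening_search(start, successors, is_goal, max_depth=50):
    """Try depth limits 0, 1, 2, ... until a solution is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, successors, is_goal, limit)
        if result is not None:
            return result, limit          # also report the depth of the solution
    return None, None

# Same small tree as before; node 7 sits at depth 2.
tree = {1: [2, 3], 2: [4, 5], 3: [6, 7]}
goal, depth = iterative_deepening_search(1, lambda n: tree.get(n, []), lambda n: n == 7)
```

The shallow levels are re-expanded on every iteration, but since most nodes live at the deepest level, the repeated work adds only a constant factor.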
Heuristic search strategies
Best-first search
Greedy best-first search
A* search
Heuristics
Local search algorithms
Hill-climbing search
Simulated annealing search
Local beam search
Genetic algorithms
Best-first search
Idea: use an evaluation function f(n) for each node
an estimate of "desirability"
Expand the most desirable unexpanded node
Implementation:
order the nodes in the fringe in decreasing order of desirability
Special cases:
greedy best-first search
A* search
Romania with step costs in km
Greedy best-first search
Evaluation function f(n) = h(n) (heuristic)
= estimate of cost from n to goal

e.g., hSLD(n) = straight-line distance from n to Bucharest

Greedy best-first search expands the node that appears to
be closest to goal

Greedy best-first search example
Properties of greedy best-first search
Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → …
Time? O(b^m), but a good heuristic can give dramatic improvement
Space? O(b^m) – keeps all nodes in memory
Optimal? No

A* search
Idea: avoid expanding paths that are already expensive

Evaluation function f(n) = g(n) + h(n)

g(n) = cost so far to reach n
h(n) = estimated cost from n to goal
f(n) = estimated total cost of path through n to goal
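A minimal sketch of A* on a fragment of the Romania map; the road costs and straight-line distances hSLD follow the standard textbook figure, and the function names are illustrative.

```python
import heapq

def a_star_search(start, goal, edges, h):
    """Expand the node with the lowest f(n) = g(n) + h(n)."""
    fringe = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return g, path                     # lowest-f goal popped first
        for succ, cost in edges.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(succ, float('inf')):
                best_g[succ] = new_g
                heapq.heappush(fringe, (new_g + h(succ), new_g, succ, path + [succ]))
    return None, None

# A fragment of the Romania map (step costs in km), with straight-line
# distance to Bucharest as the heuristic hSLD.
roads = {
    'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
    'Sibiu': [('Fagaras', 99), ('Rimnicu Vilcea', 80)],
    'Fagaras': [('Bucharest', 211)],
    'Rimnicu Vilcea': [('Pitesti', 97)],
    'Pitesti': [('Bucharest', 101)],
}
h_sld = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
         'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}
cost, path = a_star_search('Arad', 'Bucharest', roads, h_sld.get)
```

Greedy best-first would rush to Bucharest via Fagaras (cost 450); A* instead returns the 418 km route through Rimnicu Vilcea and Pitesti, because g(n) keeps the already-paid cost in view.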

A* search example
Admissible heuristics
A heuristic h(n) is admissible if for every node n,
h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic
Example: hSLD(n) (never overestimates the actual road distance)
Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal
Optimality of A* (proof)
Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
f(G2) = g(G2) since h(G2) = 0
g(G2) > g(G) since G2 is suboptimal
f(G) = g(G) since h(G) = 0
f(G2) > f(G) from above
Optimality of A* (proof)
Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
f(G2) > f(G) from above
h(n) ≤ h*(n) since h is admissible
g(n) + h(n) ≤ g(n) + h*(n)
f(n) ≤ f(G)
Hence f(G2) > f(n), and A* will never select G2 for expansion
Consistent heuristics
A heuristic is consistent if for every node n, every successor n' of n generated by any
action a,

h(n) ≤ c(n,a,n') + h(n')

If h is consistent, we have

f(n') = g(n') + h(n')
= g(n) + c(n,a,n') + h(n')
≥ g(n) + h(n)
= f(n)

 i.e., f(n) is non-decreasing along any path.


 Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal
Optimality of A*
A* expands nodes in order of increasing f value
Gradually adds "f-contours" of nodes
Contour i has all nodes with f = f_i, where f_i < f_(i+1)
Properties of A*
Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))
Time? Exponential
Space? Keeps all nodes in memory
Optimal? Yes
Admissible heuristics
E.g., for the 8-puzzle:
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance (i.e., no. of squares from the desired location of each tile)
h1(S) = ?
h2(S) = ?
Admissible heuristics
E.g., for the 8-puzzle:
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance (i.e., no. of squares from the desired location of each tile)
h1(S) = 8
h2(S) = 3+1+2+2+2+3+3+2 = 18
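The two heuristics can be computed directly. The start state below is assumed to be the one from the standard textbook figure, since the slide's own figure is not reproduced; 0 denotes the blank.

```python
start = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
goal = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)

def h1(state, goal):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)                       # goal position of this tile
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

For this start state h1 = 8 (every tile is misplaced) and h2 = 3+1+2+2+2+3+3+2 = 18, matching the values on the slide.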
Dominance
If h2(n) ≥ h1(n) for all n (both admissible)
then h2 dominates h1
h2 is better for search
Typical search costs (average number of nodes expanded):
d=12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
d=24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes
Relaxed problems
A problem with fewer restrictions on the actions is called a
relaxed problem
The cost of an optimal solution to a relaxed problem is an
admissible heuristic for the original problem
If the rules of the 8-puzzle are relaxed so that a tile can
move anywhere, then h1(n) gives the shortest solution
If the rules are relaxed so that a tile can move to any
adjacent square, then h2(n) gives the shortest solution
Local search algorithms
In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution
State space = set of "complete" configurations
Find a configuration satisfying the constraints, e.g., n-queens
In such cases, we can use local search algorithms
keep a single "current" state, try to improve it
Example: n-queens
Put n queens on an n × n board with no two queens on the
same row, column, or diagonal

Hill-climbing search
"Like climbing Everest in thick fog with amnesia"

Hill-climbing search
Problem: depending on initial state, can get stuck in local
maxima

Hill-climbing search: 8-queens problem
h = number of pairs of queens that are attacking each other, either directly or indirectly
h = 17 for the state shown
Hill-climbing search: 8-queens problem

• A local minimum with h = 1
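A steepest-descent sketch for 8-queens, minimizing the pair-conflict count h. The representation (one queen per column, `board[c]` giving its row) and the function names are assumptions for illustration.

```python
import random

def conflicts(board):
    """h = number of pairs of queens attacking each other (same row or
    same diagonal); board[c] is the row of the queen in column c."""
    n = len(board)
    return sum(1 for c1 in range(n) for c2 in range(c1 + 1, n)
               if board[c1] == board[c2] or abs(board[c1] - board[c2]) == c2 - c1)

def hill_climbing(board):
    """Move one queen within its column to the best neighbouring state;
    stop at a local minimum of h (possibly not the global one)."""
    n = len(board)
    while True:
        current_h = conflicts(board)
        best_h, best_board = current_h, board
        for col in range(n):
            for row in range(n):
                if row != board[col]:
                    neighbour = board[:col] + [row] + board[col + 1:]
                    h = conflicts(neighbour)
                    if h < best_h:
                        best_h, best_board = h, neighbour
        if best_h == current_h:
            return board, current_h        # no neighbour improves: local minimum
        board = best_board

random.seed(0)                             # seeded so the run is reproducible
start = [random.randrange(8) for _ in range(8)]
final, h = hill_climbing(start)
```

The loop can terminate with h > 0, which is exactly the "stuck in a local minimum" behaviour described above.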


Simulated annealing search
Idea: escape local maxima by allowing some "bad" moves
but gradually decrease their frequency

Properties of simulated annealing search
One can prove: if T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1
Widely used in VLSI layout, airline scheduling, etc.
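The idea above can be sketched as follows: accept a "bad" move with probability exp(ΔE/T) and let the temperature T decay over time. The toy objective and the exponential cooling schedule are assumptions for illustration, not from the slides.

```python
import math, random

def simulated_annealing(state, neighbour, value, schedule, steps=10000):
    """Maximize `value`: always accept uphill moves, accept downhill ('bad')
    moves with probability exp(delta / T); bad moves get rarer as T shrinks."""
    for t in range(1, steps + 1):
        T = schedule(t)
        if T <= 0:
            return state
        nxt = neighbour(state)
        delta = value(nxt) - value(state)
        if delta > 0 or random.random() < math.exp(delta / T):
            state = nxt
    return state

# Toy illustration: maximize f(x) = -(x - 3)^2, whose global optimum is x = 3.
random.seed(1)                                 # seeded for reproducibility
best = simulated_annealing(
    state=0.0,
    neighbour=lambda x: x + random.uniform(-0.5, 0.5),
    value=lambda x: -(x - 3) ** 2,
    schedule=lambda t: 2.0 * 0.995 ** t,       # exponential cooling (assumed)
)
```

Early on, T is large and worsening moves are often accepted, which is what lets the search escape local maxima; late in the run it behaves like pure hill climbing.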


Local beam search
Keep track of k states rather than just one
Start with k randomly generated states
At each iteration, all the successors of all k states are generated
If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
Genetic algorithms
A successor state is generated by combining two parent states
Start with k randomly generated states (population)
A state is represented as a string over a finite alphabet (often a string of 0s and 1s)
Evaluation function (fitness function): higher values for better states
Produce the next generation of states by selection, crossover, and mutation
Genetic algorithms
Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28)
e.g., selection probabilities 24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.