This document discusses memory-bounded heuristic search algorithms. It introduces Iterative Deepening A* and Recursive Best-First Search as approaches to limiting memory usage. It also discusses how to invent effective heuristic functions, including using pattern databases and machine learning techniques. Well-designed admissible heuristics that estimate solution cost can significantly reduce the search space.


Artificial Intelligence

Memory Bounded Search


B.E. (CSE), B Section, Semester 5
T. T. Mirnalinee
Department of Computer Science and Engineering
SSN College of Engineering
24th August 2020
Memory-Bounded Heuristic Search

• Iterative Deepening A* (IDA*)

• Iterative deepening uses depth (i.e., g(n)) as its cutoff; IDA* uses f(n) = g(n) + h(n)
• In each iteration, the cutoff is raised to the next higher f-cost
• Recursive Best-First Search (RBFS)
• A best-first search that uses only linear space
• Works like a recursive depth-first search

• Motivation: the space requirements of best-first search and A* are exponential in the depth d

Heuristic/evaluation function
• The evaluation function f(s) is designed using some heuristic h(s) — an
estimate of the cost of reaching a goal state from state s
• For example, can you think of a heuristic for the route-finding problem
in a map? — the straight-line distance (SLD) from the current city to the
destination city
• A heuristic should be an easy function to compute!
• h(s) should be 0 for any goal state s
Example
• Consider a sliding puzzle such as the 8-puzzle
• What would be a good heuristic for this state space?
• h1(s): number of misplaced tiles — for the state shown, h1(s) = 7
• h2(s): sum of the Manhattan distances of the tiles from their goal
positions — for the state shown, h2(s) = 2 + 0 + 2 + 2 + 1 + 1 + 4 + 1 = 13
• Which heuristic is better? — an estimate closer to the actual cost is
always better!
• h2 dominates h1
• An admissible heuristic is one that never overestimates — in our
example, both h1 and h2 are admissible
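The two heuristics above can be written down directly. This is a minimal sketch; the row-major tuple encoding of states (with 0 for the blank) is an assumption, not something specified in the slides:

```python
# Misplaced-tile and Manhattan-distance heuristics for the 8-puzzle.
# A state is a tuple of 9 entries in row-major order; 0 is the blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Sum of Manhattan distances of tiles from their goal squares."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        gidx = goal.index(tile)
        total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
    return total
```

Both return 0 on the goal state, as any heuristic should.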
Iterative Deepening A* (IDA*)
• Similar to iterative lengthening search
• The f-cost is used as a cutoff
• Remember the smallest f-cost of any node that exceeded the cutoff in the
previous iteration, and use it as the cutoff for the current iteration
• Drawback: too many iterations are required when the f-cost increases
very slowly
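The iteration scheme above can be sketched as follows. This is a hedged sketch, assuming a generic `successors(state) -> [(next_state, step_cost)]` interface that the slides do not specify:

```python
import math

def ida_star(start, h, successors, is_goal):
    """Iterative Deepening A*: a series of depth-first searches, each
    bounded by an f-cost cutoff raised to the next higher f-cost."""
    bound = h(start)
    while True:
        next_bound = math.inf

        def dfs(state, g, path):
            nonlocal next_bound
            f = g + h(state)
            if f > bound:
                next_bound = min(next_bound, f)  # smallest f that exceeded the cutoff
                return None
            if is_goal(state):
                return path
            for nxt, cost in successors(state):
                if nxt in path:                  # avoid cycles along the current path
                    continue
                found = dfs(nxt, g + cost, path + [nxt])
                if found is not None:
                    return found
            return None

        result = dfs(start, 0, [start])
        if result is not None:
            return result
        if next_bound == math.inf:               # nothing exceeded the cutoff: no solution
            return None
        bound = next_bound                       # use the next higher f-cost
```

Each iteration repeats the previous depth-first work, which is exactly the drawback noted above when f-costs rise very slowly.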
Recursive Best-First Search (RBFS)
• Dynamically adjusts the cost limit for a particular path
• Keeps track of the best alternative path available from any ancestor of
the current node
• When the best child exceeds this limit, the search unwinds
• While unwinding, the cost of the current node is replaced with that of
its best child
RBFS
• RBFS is generally better than IDA*, but may still suffer from excessive
node regeneration
• Every change of path can require many re-expansions of forgotten nodes
• Linear space complexity, but at the cost of increased time
• Ends up under-utilizing the available memory!
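The unwinding-with-backed-up-costs idea can be sketched as below (following the standard RBFS scheme; the `successors(state) -> [(next_state, step_cost)]` interface is an assumption, and a tree-shaped search space is assumed, since RBFS as written does not detect cycles):

```python
import math

def rbfs(start, h, successors, is_goal):
    """Recursive Best-First Search: expand the best child under a cost
    limit given by the best alternative path; on unwinding, replace a
    node's f-value with that of its best child."""
    def search(state, g, f, f_limit):
        if is_goal(state):
            return [state], f
        # Each child entry is [backed-up f, g, state].
        children = []
        for nxt, cost in successors(state):
            children.append([max(g + cost + h(nxt), f), g + cost, nxt])
        if not children:
            return None, math.inf
        while True:
            children.sort(key=lambda c: c[0])
            best = children[0]
            if best[0] > f_limit:       # best child exceeds the limit:
                return None, best[0]    # unwind, backing up its f-cost
            alternative = children[1][0] if len(children) > 1 else math.inf
            result, best[0] = search(best[2], best[1], best[0],
                                     min(f_limit, alternative))
            if result is not None:
                return [state] + result, best[0]

    return search(start, 0, h(start), math.inf)[0]
```

Only the current path and its siblings are kept, which is why the space stays linear while forgotten subtrees may be regenerated repeatedly.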
Simplified Memory-Bounded A* (SMA*)
• Motivation: make use of as much memory as possible!
• Proceed like normal A* for as long as memory permits
• Throw away the nodes with the largest f-values to make space for new ones
• Like RBFS, remember the best forgotten child in its parent
• Drawback: how do we handle the situation where several nodes have the
same f-cost?
• Drawback: on some harder problems, a situation similar to thrashing may
occur
• Memory may be an issue in finding an optimal solution
• Neither A* nor any of its variants is guaranteed to solve an arbitrary
problem within given memory and time
• Reinforcement learning techniques may be used to learn meta-heuristics
that guarantee completeness, though possibly at the cost of optimality
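The eviction step can be illustrated in isolation. This is a minimal sketch of the bookkeeping only, not a full SMA* implementation; representing frontier entries as (node, f) pairs is an assumption:

```python
import math

def evict_worst(frontier, forgotten, parent_of):
    """One SMA*-style eviction step: drop the frontier node with the
    largest f-value and back that f-value up into its parent's table of
    forgotten children, so the parent keeps the cost of the best path it
    has forgotten."""
    worst = max(frontier, key=lambda entry: entry[1])  # (node, f) pair
    frontier.remove(worst)
    node, f = worst
    parent = parent_of.get(node)
    if parent is not None:
        # The parent remembers the lowest f among its forgotten children.
        forgotten[parent] = min(forgotten.get(parent, math.inf), f)
    return worst
```

With this bookkeeping, a parent is only worth re-expanding when every path still in memory looks worse than its best forgotten descendant.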
Inventing Heuristics
• A problem with fewer restrictions on the actions is called a relaxed
problem
• The cost of an exact solution to a relaxed problem is an admissible
heuristic for the original problem
• If a tile can move to any adjacent square — we get h2
• If a tile can “fly” to any square — we get h1
• If h1, h2, . . . , hm are admissible, then h(n) = max {h1(n), . . . , hm(n)}
is also admissible — and it dominates each of them
• The state graph of a relaxed problem is a supergraph of the original
state graph
• Short-cut edges are added to the original graph
• A solution to the original problem is still a solution to the relaxed
problem — but the relaxed problem may have other, shorter optimal
solutions via the short-cuts
• We assume that an optimal solution to the relaxed problem can be found
easily
• Clearly, the optimal solution cost of a relaxed problem is an admissible
heuristic
• It can also be shown that heuristics invented in this way are consistent
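The max-combination above is essentially one line of code: if every h_i is admissible, their pointwise maximum is admissible too, and it dominates each of them (a generic sketch):

```python
def combine(*heuristics):
    """Pointwise maximum of admissible heuristics: still admissible,
    and dominates every individual heuristic in the list."""
    return lambda state: max(h(state) for h in heuristics)
```

For the 8-puzzle this would be `combine(h1, h2)`, though with h2 dominating h1 the maximum simply equals h2 everywhere.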
Pattern databases
• Each subproblem is a pattern, and a pattern database can be built to
arrive at effective heuristics
• Each entry gives an admissible heuristic, and entries can be combined!
— building databases of disjoint patterns may help, since the costs of
disjoint patterns can be added
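Building such a database can be sketched as a breadth-first search backwards from the goal pattern. Unit step costs and a `predecessors` function for the abstracted subproblem are assumptions about the interface, not something given in the slides:

```python
from collections import deque

def build_pattern_db(goal_pattern, predecessors):
    """Build a pattern database by breadth-first search backwards from
    the goal pattern (unit step costs assumed). The resulting table maps
    each pattern to the exact cost of solving the subproblem, which is
    an admissible heuristic for the full problem."""
    db = {goal_pattern: 0}
    frontier = deque([goal_pattern])
    while frontier:
        pattern = frontier.popleft()
        for prev in predecessors(pattern):
            if prev not in db:
                db[prev] = db[pattern] + 1
                frontier.append(prev)
    return db
```

At search time, looking up the current state's pattern in `db` replaces any on-the-fly heuristic computation.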
Learning heuristics
• By solving several instances of a problem, we obtain a good collection
of the actual costs of reaching the goal from different states
• A heuristic function may then be learnt from these examples — inductive
learning
• Typically, the heuristic function to be learnt is expressed as a
weighted combination of features:
h(n) = w1 f1(n) + w2 f2(n) + · · · + wk fk(n)
• Examples of features: “number of misplaced tiles”, “number of pairs of
adjacent tiles that are not adjacent in the goal state”
• Several learning algorithms are available to learn these weights from
examples
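The weighted-combination form above can be fit with a minimal inductive-learning sketch: plain stochastic gradient descent on squared error. The feature functions and training examples are assumed inputs; the learning rate and step count are illustrative choices:

```python
def learn_weights(examples, features, epochs=2000, lr=0.01):
    """Fit h(n) = w1*f1(n) + ... + wk*fk(n) to observed solution costs
    by stochastic gradient descent on squared error.
    `examples` is a list of (state, actual_cost) pairs."""
    w = [0.0] * len(features)
    for _ in range(epochs):
        for state, cost in examples:
            x = [f(state) for f in features]
            err = sum(wi * xi for wi, xi in zip(w, x)) - cost
            for i in range(len(w)):
                w[i] -= lr * err * x[i]   # move each weight against the error
    return w
```

Any other regression method (least squares, a neural network) could be substituted; the point is only that the weights come from solved instances.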
Heuristics
• Even for such a simple problem, the search space is huge!
• An average solution depth of 22 and an average branching factor of
about 3 imply searching about 3^22 ≈ 3.1 × 10^10 states!!!
• What about higher-order sliding puzzle problems?
• Good heuristics can drastically cut down the search space!
• h1: number of misplaced tiles
• h2: sum of the (vertical + horizontal) distances of the tiles from
their goal positions (total Manhattan distance)
• For the given start state, h1 = 8 and
h2 = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18
• It can be shown that both h1 and h2 are admissible
• As discussed already, h2 dominates h1
Dominant heuristics
• Does dominance matter?
• The performance of a heuristic may be measured in terms of its
effective branching factor b*
• If N nodes are generated to find a solution at depth d, then b* is the
branching factor of a uniform tree of depth d that contains N + 1 nodes:
N + 1 = 1 + b* + (b*)^2 + · · · + (b*)^d
• The effective branching factor is determined by the relative error in
the estimate, (h* − h)/h*, where h* is the true cost
• A* using h2 never expands more nodes than A* using h1
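The effective branching factor can be computed numerically from its definition, N + 1 = 1 + b* + (b*)^2 + · · · + (b*)^d; a small bisection sketch:

```python
def effective_branching_factor(n_generated, depth, tol=1e-6):
    """Solve N + 1 = 1 + b + b^2 + ... + b^d for b by bisection:
    the branching factor a uniform tree of the given depth would need
    in order to contain N + 1 nodes."""
    def total(b):
        return sum(b ** i for i in range(depth + 1))
    lo, hi = 1.0, float(n_generated)   # b* lies between 1 and N
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n_generated + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, a complete binary tree of depth 5 has 63 nodes, so 62 generated nodes at depth 5 give b* = 2 exactly.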
• Sometimes it is possible to design an evaluation function f(s) that
measures the “badness” (to be minimized) or “goodness” (to be maximized)
of a state s
• In such cases, the most desirable state may be chosen from the working
set
• The working set is maintained as a priority queue keyed on the
evaluation function f
• Obviously, the quality of the search depends on the evaluation
function f
