Artificial Intelligence

Chapter Three:
Searching and Planning

Objectives

After successfully completing this unit, you should be able to:
 understand the concept of problem-solving through searching and planning;
 explore the basics of Constraint Satisfaction Problems and their relevance in problem-solving;
 learn about Problem Solving Agents and their characteristics;
 understand the concept of problem spaces and how search algorithms can be used to navigate them;
 examine the relationship between knowledge and rationality in problem-solving agents;
 explore different heuristic search strategies and their applications;
 understand the concept of search and optimization, particularly gradient descent.
3.1 Solving Problems by Searching and Planning
What is a Problem?
 It is a gap between what actually is and what is desired.
 A problem exists when an individual becomes aware of the
existence of a significant difference between the expected
and the actual situation, which is an obstacle and makes it
difficult to achieve a desired goal or objective.
 Solving problems through searching and planning is a
fundamental concept in the field of artificial intelligence (AI).
 It involves developing algorithms and techniques that enable
AI systems to find optimal or near-optimal solutions to
complex problems.



3.2 Constraint Satisfaction Problem
 A Constraint Satisfaction Problem (CSP) is a fundamental
concept in the field of artificial intelligence, particularly in the
context of searching and planning.
 CSPs provide a framework for modeling and solving problems
where a set of variables must be assigned values while
satisfying a set of constraints.
 It has a wide range of applications in various domains, including scheduling, resource allocation, and configuration, and it involves:
• Problem Representation
• Variables and Domains
• Constraints
• Solving CSPs, etc. (see the sketch below)
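As a concrete illustration of these elements, here is a minimal CSP sketch in Python (not from the slides): three hypothetical regions must be coloured so that adjacent regions differ, solved by plain backtracking.

# Minimal CSP sketch: variables, domains, and "adjacent regions differ" constraints.
variables = ["A", "B", "C"]
domains = {v: ["red", "green", "blue"] for v in variables}
constraints = [("A", "B"), ("B", "C")]            # pairs of variables that must differ

def consistent(assignment):
    # every constraint whose two variables are both assigned must be satisfied
    return all(assignment[x] != assignment[y]
               for x, y in constraints
               if x in assignment and y in assignment)

def backtrack(assignment):
    if len(assignment) == len(variables):         # every variable has a value
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            result = backtrack(dict(assignment))  # copy so siblings do not interfere
            if result is not None:
                return result
        del assignment[var]
    return None

print(backtrack({}))   # e.g. {'A': 'red', 'B': 'green', 'C': 'red'}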
3.3 Problem Solving Agents

- Four general steps in problem solving:


 Goal formulation
• What are the successful world states?
 Problem formulation
• What actions and states to consider, given the goal
 Search
• Determine the possible sequences of actions that lead to states of known value, and then choose the best sequence.
 Execute
• Given the solution, perform the actions.
Problem-solving agent

function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
  static: seq, an action sequence, initially empty
          state, some description of the current world state
          goal, a goal
          problem, a problem formulation

  state ← UPDATE-STATE(state, percept)
  if seq is empty then
      goal ← FORMULATE-GOAL(state)
      problem ← FORMULATE-PROBLEM(state, goal)
      seq ← SEARCH(problem)
  action ← FIRST(seq)
  seq ← REST(seq)
  return action

– A simple problem-solving agent. It first formulates a goal and a problem, searches for a sequence of actions that would solve the problem, and then executes the actions one at a time.
– When this is complete, it formulates another goal and starts over.
– Note that when it is executing the sequence it ignores its percepts: it assumes that the solution it has found will always work.
Problem Solving Agent cont’d
- A problem can be defined formally by four components:
1. The initial state (that the agent starts in)
2. A description of the possible actions (via a successor function)
3. The goal test, which determines whether a given state is a goal state
4. A path cost function, which assigns a numeric cost to each path (distance, etc.)
- Together, the initial state and successor function implicitly define the state space of the problem: the set of all states reachable from the initial state.
- A path in the state space is a sequence of states connected by a sequence of actions. (A small code sketch of these components follows.)
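The four components map naturally onto a small data structure. A minimal Python sketch (class and parameter names are illustrative, not from the slides):

class Problem:
    """Formal problem: initial state, successor function, goal test, path cost."""
    def __init__(self, initial, successors, goal_test, step_cost=lambda s, a, s2: 1):
        self.initial = initial        # the state the agent starts in
        self.successors = successors  # state -> list of (action, next_state) pairs
        self.goal_test = goal_test    # state -> True if the state is a goal state
        self.step_cost = step_cost    # c(s, a, s') >= 0, additive along a path

    def path_cost(self, path):
        # path is a list of (state, action, next_state) triples
        return sum(self.step_cost(s, a, s2) for s, a, s2 in path)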
Problem Solving Agent cont’d
- A solution to a problem is a path from the initial
state to a goal state. Solution quality is measured
by the path cost function, and an optimal
solution has the lowest path cost among all
solutions.
Type of agent that solves problems by searching
– Such an agent is not a reflex or model-based reflex agent, because it needs to achieve some target (goal)
– It can be a goal-based, utility-based, or learning agent
– An intelligent agent knows that to achieve a certain goal, the state of the environment will change sequentially, and the change should be towards the goal
– Intelligent agents are supposed to maximize their performance measure
Problem Solving Agent…cont’d
- Assume the problem is to reach a specified place (location), as indicated on the following slide.
 A problem is defined by:
• An initial state, e.g. Arad
• A successor function S(X) = set of action-state pairs
– e.g. S(Arad) = {<Arad → Zerind, Zerind>, …}
initial state + successor function = state space
• A goal test, which can be
– Explicit, e.g. x = 'at Bucharest'
– Implicit, e.g. checkmate(x)
• A path cost (additive)
– e.g. sum of distances, number of actions executed, …
– c(x, a, y) is the step cost, assumed to be >= 0
A solution is a sequence of actions from the initial state to a goal state.
An optimal solution has the lowest path cost.
Problem…cont'd

[Figure: a problem is characterised by its States, Actions, Start state, Goal, and a Solution path through the state space.]
Problem Solving Agent cont’d

- In the preceding section we proposed a formulation of the problem of getting to Bucharest in terms of the initial state, successor function, goal test, and path cost.
- This formulation seems reasonable, yet it omits a great many aspects of the real world. The real world is absurdly complex.
- The state of the world (state description) includes many things, for example:
 The travelling companions,
 What is on the radio,
 The scenery out of the window,
 Whether there are any law enforcement officers nearby,
 How far it is to the next rest stop,
 The condition of the road, the weather, and so on.
Problem Solving Agent cont’d

– All these considerations are left out of our state descriptions because they are irrelevant to the problem of finding a route to Bucharest.
– The process of removing detail from a representation is called abstraction.
– In addition to abstracting the state description, we must abstract the actions themselves.
 A driving action has many effects.
• Besides changing the location of the vehicle and its occupants, it takes up time, consumes fuel, generates pollution, and changes the agent (as they say, travel is broadening).
• In our formulation, we take into account only the change in location.
Problem formulation

– For the vacuum world problem, the problem formulation involves (see the sketch below):
 States: The agent is in one of two locations, each of which might or might not contain dirt. Thus there are 2 x 2^2 = 8 possible world states.
 Initial state: Any state can be designated as the initial state.
 Successor function: This generates the legal states that result from trying the three actions (Left, Right, and Suck).
 Goal test: This checks whether all the squares are clean.
 Path cost: Each step costs 1, so the path cost is the number of steps in the path.
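A minimal Python sketch of this formulation, with the state encoded as (agent_location, dirt_at_A, dirt_at_B); the encoding is an assumption made for illustration:

ACTIONS = ["Left", "Right", "Suck"]

def successors(state):
    loc, dirt_a, dirt_b = state
    result = []
    for action in ACTIONS:
        if action == "Left":
            result.append((action, ("A", dirt_a, dirt_b)))
        elif action == "Right":
            result.append((action, ("B", dirt_a, dirt_b)))
        elif loc == "A":                      # Suck removes dirt at the current square
            result.append((action, (loc, False, dirt_b)))
        else:
            result.append((action, (loc, dirt_a, False)))
    return result

def goal_test(state):
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b          # all squares clean

# Each step costs 1, so the path cost is simply the number of actions taken.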
Goal formulation cont'd

- Goal formulation refers to understanding the objective of the agent, based on the state description of the final environment.
- For example, for the vacuum world problem, the goal can be formulated as
[Clean, Clean, agent at any block]
Problem Solving Agent cont’d
– In problem solving by searching, a solution can be described in two ways.
– A solution can be provided as a state sequence or as an action sequence.
– For example, consider the vacuum cleaner world with the initial state shown below.
– The solution as a state sequence is the sequence of states produced by applying Suck and then moving Right.
In general, an agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best sequence.
Problem Solving Agent cont’d

– This process of looking for such a sequence is called search.
– A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
– Once a solution is found, the actions it recommends can be carried out. This is called the execution phase.
– Thus, we have a simple "formulate, search, execute" design for the agent.
Problem solving agent cont’d
Example: Road map of Ethiopia

[Figure: a road map of Ethiopia in which cities (Aksum, Mekele, Gondar, Lalibela, Bahir Dar, Dessie, Debre Markos, Dire Dawa, Addis Ababa, Adama, Jima, Gambela, Nekemt, Awasa) are connected by roads labelled with their distances.]
Problem Solving agent cont’d
Example: Road map of Ethiopia
- Current position of the agent (initial state): Awasa.
- Needs to arrive at: Gondar.
- Formulate goal: be in Gondar.
- Formulate problem:
 states: the various cities
 actions: drive between cities
- Find a solution:
 a sequence of cities, e.g., Awasa, Adama, Addis Ababa, Dessie, Gondar (a code sketch of this formulation follows).
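A rough Python sketch of the route-finding formulation. The adjacency list and distances below are placeholders chosen for illustration, not the actual values on the map:

roads = {
    "Awasa":       {"Adama": 230},
    "Adama":       {"Awasa": 230, "Addis Ababa": 100},
    "Addis Ababa": {"Adama": 100, "Dessie": 400},
    "Dessie":      {"Addis Ababa": 400, "Gondar": 330},
    "Gondar":      {"Dessie": 330},
}

initial_state = "Awasa"

def actions(city):                  # cities reachable by a single drive
    return list(roads[city])

def goal_test(city):
    return city == "Gondar"

def step_cost(city, action, next_city):
    return roads[city][next_city]   # road distance between the two cities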
Types of Problems
Four types of problems exist in real situations:
1. Single-state problem
– The environment is deterministic and fully observable
– Out of the possible state space, the agent knows exactly which state it will be in; the solution is a sequence
2. Sensorless problem (conformant problem)
– The environment is non-observable
– It is also called a multi-state problem
– The agent may have no idea where it is; the solution is a sequence
Types of Problems cont’d
3. Contingency problem
– The environment is nondeterministic and/or partially observable
– It is not possible to know the effect of the agent's actions in advance
– Percepts provide new information about the current state
4. Exploration problem
– The environment is partially observable
– It is also called an unknown state space problem
Problem types: a summary cont'd

Environment Type                                Problem Type
Deterministic, fully observable                 Single-state problem
Non-observable, known state space               Sensorless/conformant problem
Nondeterministic and/or partially observable    Contingency problem
Partially observable, unknown state space       Exploration problem
Vacuum Cleaner World Example
cont’d
Example: vacuum world

Single-state
– The starting state is known, say #5.
– What is the solution?
Vacuum Cleaner World Example
cont’d
Example: vacuum world
- Single-state, start in
#5. Solution? [Right,
Suck]

Vacuum Cleaner World Example
cont’d
Example: vacuum world

Sensorless
– The agent doesn't know what the current state is.
– So the current state is any of: {1,2,3,4,5,6,7,8}
– What is the solution?
Vacuum Cleaner World Example
cont’d
Example: vacuum
world
Sensorless
• Right goes to {2,4,6,8}
• Solution? [Right, Suck, Left, Suck]
Vacuum Cleaner World Example
cont’d
Example: vacuum world
Contingency
• Nondeterministic:
- Suck may dirty a clean carpet
• Partially observable:
- Hence we have partial
information
• Let’s assume the current percept
is: [L, Clean] i.e. start in #5 or
#7
• What is the Solution?

Vacuum Cleaner World Example
cont’d
Example: vacuum world

Contingency
• Solution: [Right, if dirt then Suck]
Vacuum Cleaner World Example
cont’d
• Real-world Problems to be solved by searching
algorithms
– We have seen two such problems:
• The road map problem and the vacuum
cleaner world problem
• Route finding
• Touring problems
• VLSI layout
• Robot Navigation
• Automatic assembly sequencing
• Drug design
• Internet searching
Vacuum Cleaner World Example
cont’d
Example: vacuum world

• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??

Vacuum Cleaner World Example
cont’d
Example: vacuum world

• States?? Two locations, each with or without dirt: 2 x 2^2 = 8 states.
• Initial state?? Any state can be the initial state.
• Actions?? {Left, Right, Suck}
• Goal test?? Check whether all squares are clean.
• Path cost?? Number of actions to reach the goal.
Example: 8-puzzle

• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??

Example: 8-puzzle cont’d

• States?? Integer location of each tile


• Initial state?? Any state can be initial
• Actions?? {Left, Right, Up, Down}
• Goal test?? Check whether goal configuration is reached
• Path cost?? Number of actions to reach goal

Example: 8-puzzle cont’d

Initial state        Goal state
  8  2  _              1  2  3
  3  4  7              4  5  6
  5  1  6              7  8  _
Example: 8-puzzle cont'd

[Figure: part of the 8-puzzle search tree — the initial state (8 2 _ / 3 4 7 / 5 1 6) and the successor states obtained by sliding a tile into the blank square.]
Example: 8-puzzle cont'd

Size of the state space = 9!/2 = 181,440 states  →  0.18 sec
15-puzzle  →  0.65 x 10^12 states                →  6 days
24-puzzle  →  0.5 x 10^25 states                 →  12 billion years
(assuming about 10 million states examined per second)
Example: 8-queens cont’d

Place 8 queens on a chessboard so that no two queens are in the same row, column, or diagonal.

[Figure: two boards — one showing a solution, one showing a non-solution.]
Example: 8-queens problem
cont’d

Incremental formulation vs. complete-state formulation


• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??

Example: 8-queens cont’d

Formulation #1:
• States: any arrangement of
0 to 8 queens on the board
• Initial state: 0 queens on
the board
• Actions: add a queen in any
square
• Goal test: 8 queens on the
board, none attacked
• Path cost: none
 64^8 states with 8 queens
Example: 8-queens cont’d

Formulation #2 (see the sketch below):
• States: any arrangement of k = 0 to 8 queens in the k leftmost columns with none attacked
• Initial state: 0 queens on the board
• Successor function: add a queen to any square in the leftmost empty column such that it is not attacked by any other queen
• Goal test: 8 queens on the board
 2,067 states
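A minimal Python sketch of formulation #2, where a state is a tuple of queen rows, one entry per filled column (names are illustrative):

def attacks(r1, c1, r2, c2):
    # two queens attack each other if they share a row or a diagonal
    return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)

def successors(state):                 # state: tuple of rows, index = column
    col = len(state)                   # the leftmost empty column
    result = []
    for row in range(8):
        if not any(attacks(row, col, r, c) for c, r in enumerate(state)):
            result.append(("place queen in row %d" % row, state + (row,)))
    return result

def goal_test(state):
    return len(state) == 8             # eight non-attacking queens placed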
Example: robot assembly

• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??

Example: robot assembly cont’d

• States?? Real-valued coordinates of robot joint angles; parts of the object to be assembled.
• Initial state?? Any arm position and object configuration.
• Actions?? Continuous motion of robot joints.
• Goal test?? Complete assembly (without robot).
• Path cost?? Time to execute.
3.4 Search Space and Heuristic
Search Strategies

Searching for a Solution (Tree search algorithms)
– Given a state space: a network of states connected via actions.
– The network structure is usually a graph.
– A tree is a network in which there is exactly one path from the root to any node.
– Given a state S and the valid actions available at S, the set of next states generated by executing each action is called the successors of S.
– Searching for a solution is a simulated exploration of the state space by generating successors of already-explored states.
Searching For Solution (Tree search
algorithms)
– A state is a (representation of) a physical
configuration
– A node is a data structure constituting part
of a search tree
– It includes:
• state,
• parent node,
• action,
• depth and
• one or more costs [like path cost g(x),
heuristic cost h(x), evaluation function
cost f(x)]
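A minimal Node sketch in Python reflecting these fields (illustrative, not taken from the slides):

class Node:
    """A node in the search tree: a state plus bookkeeping fields."""
    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state          # the world state this node represents
        self.parent = parent        # the node that generated this one
        self.action = action        # the action applied to the parent to reach this state
        self.path_cost = path_cost  # g(n): cost of the path from the root to this node
        self.depth = 0 if parent is None else parent.depth + 1

    def path(self):
        # reconstruct the sequence of states from the root down to this node
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))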
Searching For Solution (Tree search
algorithms)

• The Successor-Fn generates all the successor states, together with the actions that move the current state into each successor state.
• The Expand function creates new nodes, filling in the various fields of each node using the information given by the Successor-Fn and the input parameters (a small sketch follows).
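Building on the Node sketch above, an Expand function might look like this (again an illustrative sketch, assuming unit step costs):

def expand(node, successor_fn):
    # create one child node per (action, next_state) pair returned by the successor function
    return [Node(state, parent=node, action=action, path_cost=node.path_cost + 1)
            for action, state in successor_fn(node.state)]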
Searching For Solution (Tree search
algorithms)
• A search process can be viewed as building a search tree over the state space.
• A search tree is a tree structure defined by the initial state and a successor function.
• The root node of the search tree represents the initial state and has no parent.
• A child node is a node adjacent to its parent node, obtained by applying an operator or rule.
Tree search example cont'd

[Figure: part of the search tree for the Ethiopian road map, rooted at Awasa and grown by repeatedly expanding the successor cities of already-explored nodes until Gondar is reached.]
General tree search cont’d
Implementation: general tree search

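The general tree-search pseudocode appears as a figure in the original slides; a rough Python sketch of the same idea, reusing the Node, Problem, and expand sketches above, is shown here. The fringe is kept as a list, and the search strategy is simply the order in which nodes are removed from it (FIFO below, purely for illustration):

def tree_search(problem):
    fringe = [Node(problem.initial)]          # nodes waiting to be expanded
    while fringe:
        node = fringe.pop(0)                  # FIFO removal; other strategies pop differently
        if problem.goal_test(node.state):
            return node                       # a solution node; follow parents for the path
        fringe.extend(expand(node, problem.successors))
    return None                               # no solution found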
Search strategies cont’d
– A search strategy is defined by picking the order of node expansion.
– Strategies are evaluated along the following dimensions:
 completeness: does it always find a solution if one exists?
 time complexity: number of nodes generated
 space complexity: maximum number of nodes in memory
 optimality: does it always find a least-cost solution?
– Time and space complexity are measured in terms of
 b: maximum branching factor of the search tree
 d: depth of the least-cost solution
 m: maximum depth of the state space (may be ∞)
– Generally, search strategies can be classified into two groups: uninformed and informed search strategies.
Uninformed search (blind search)
strategies
 Uninformed search strategies (Blind Search)
– use only the information available in the problem definition
– They have no information about the number of steps or the
path cost from the current state to the goal
– They can distinguish the goal state from other states
– They are still important because there are problems with
no additional information.
 Six kinds of such search strategies will be discussed and each
depends on the order of expansion of successor nodes.
1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening search
6. Bidirectional search
Uninformed search (blind search) strategies

[Figure: a search tree with branching factor b and maximum depth m, with the goal node G somewhere in the tree.]
Generating action sequences- search trees

– Leaf node: a node without successors (or children) in the tree, either because it has not been expanded yet or because its expansion produced no successors.
– Depth (d) of a node: the number of actions required to reach it from the initial state.
– Frontier or fringe nodes: the collection of nodes that are waiting to be expanded.
– Path cost of a node: the total cost of the path leading to this node.
– Branching factor (b): the maximum number of successors of any node.
Breadth-first search

– Uses no prior information or knowledge.
– It tracks all nodes, because it does not know whether a given node leads to a goal or not.
– It keeps trying until it finds a solution.
– All nodes are expanded starting from the root node.
– That is, it is a simple strategy in which
 the root node is expanded first,
 then all the successors of the root node are expanded next,
 then their successors, and so on.
Breadth first search cont’d
– In general, all the nodes are expanded
at a given depth in the search tree
before any nodes at the next level are
expanded.
– That is, BFS expands all nodes at level d
before expanding nodes at level d+1
– It checks all paths of a given length
before moving to any longer path
– Expands the shallowest node first

Breadth first search cont’d

The figure shows the progress of the search on a simple binary tree: the BFS trees after 0, 1, 2, 3, and 4 node expansions.
Breadth first search…cont’d

[Figure: a search tree expanded breadth-first — move downwards, level by level, until the goal G is reached.]
Breadth first search cont’d

Algorithm for breadth-first search (FIFO)
• A blind search in which the list of nodes is a queue.
• To solve a problem using breadth-first search:
1. Set L to be a list containing only the initial node of the problem.
2. If L is empty, return failure; otherwise pick the first node n from L.
3. If n is a goal state, quit and return the path from the initial node to n.
4. Otherwise, remove n from L and add all of n's children to the end of L, labelling each child with its path from the initial node.
5. Return to step 2.
Breadth first search cont’d
Algorithm for breadth-first search (FIFO)
BFS can be implemented using a queuing function that puts the newly generated states at the end of the queue, after all previously generated states. (A Python sketch follows.)

1. QUEUE <-- path only containing the root;

2. WHILE QUEUE is not empty AND goal is not reached
   DO remove the first path from the QUEUE;
      create new paths (to all children);
      reject the new paths with loops;
      add the new paths to the back of the QUEUE;

3. IF goal reached THEN success;
   ELSE failure;
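A runnable Python sketch of the same queue-based BFS; the graph is given as an adjacency dict, and paths with loops are rejected exactly as in the pseudocode (names are illustrative):

from collections import deque

def breadth_first_search(graph, start, goal):
    queue = deque([[start]])                  # queue of paths; shortest paths leave first
    while queue:
        path = queue.popleft()                # remove the first path (FIFO)
        node = path[-1]
        if node == goal:
            return path                       # goal reached
        for child in graph.get(node, []):
            if child not in path:             # reject new paths with loops
                queue.append(path + [child])  # add the new path to the back of the queue
    return None

# Example: breadth_first_search({"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": []}, "S", "G")
# returns ['S', 'B', 'G']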
Breadth first search cont’d
Properties of breadth-first search
• Complete? Yes (if b is finite, which is true in most cases).
• Time? 1 + b + b^2 + b^3 + … + b^d = O(b^(d+1))
– at depth i there are b^i nodes expanded, for i ≤ d
• Space? O(b^d) (keeps every node in memory)
– roughly this many nodes are held while reaching the goal node
– this is a major problem for real problems
• Optimal? Yes (if the cost is a constant k per step).
• Space is the bigger problem (more than time).
Breadth first search cont’d
Using the same hypothetical state space, find the time and memory required for BFS with branching factor b = 10 and various values of the solution depth d:

Depth   Nodes     Time            Memory
0       1         1 millisecond   100 bytes
2       111       0.1 second      11 kilobytes
4       11,111    11 seconds      1 megabyte
6       10^6      18 minutes      111 megabytes
8       10^8      31 hours        11 gigabytes
10      10^10     128 days        1 terabyte
12      10^12     35 years        111 terabytes
14      10^14     3,500 years     11,111 terabytes
Depth-first Search (DFS)

– Pick one of the children at every node visited, and work forward from that child.
– Always expands the deepest node reached so far (and therefore searches one path down to a leaf before backing up to try any other path).
– Thus, it finds the leftmost solution.
Depth-first Search cont’d
Depth-first search- Chronological backtracking

S
• Select a child
A • convention: left-to-right
or may be alphabetical
B D order
• Repeatedly go to next
child, as long as possible.
C E • Return to left-over
alternatives (higher-up)
D F only when needed.

01/24/2025 AI/COSC 62
Depth-first search cont’d
Depth-first search (LIFO) algorithm (a Python sketch follows)

1. STACK <-- path only containing the root;

2. WHILE STACK is not empty AND goal is not reached
   DO remove the first path from the STACK;
      create new paths (to all children);
      reject the new paths with loops;
      add the new paths to the front of the STACK;

3. IF goal reached THEN success;
   ELSE failure;
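A Python sketch mirroring the stack-based pseudocode (graph as an adjacency dict; illustrative only):

def depth_first_search(graph, start, goal):
    stack = [[start]]                         # stack of paths (LIFO)
    while stack:
        path = stack.pop()                    # remove the most recently added path
        node = path[-1]
        if node == goal:
            return path
        # push children in reverse so the leftmost child is explored first
        for child in reversed(graph.get(node, [])):
            if child not in path:             # reject new paths with loops
                stack.append(path + [child])
    return None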
Depth-first search cont’d
• Complete: Yes, if the state space is finite;
            No, if the state space contains infinite paths or loops
• Time: O(b^m)
• Space: O(b·m) (i.e. linear space)
• Optimal: No
Thus the worst-case time complexity is O(b^m).
However, for very deep (or infinite, due to cycles) trees this search may spend a lot of time (forever) searching down the wrong branch.
Backtracking search uses even less memory: one successor instead of all b.
Depth-first search cont’d
• Time Requirements of Depth-First Search
– It is also more likely to return a
solution path that is longer than the
optimal
– Because it may not find a solution if one
exists, this search strategy is not
complete.
– Remarks: Avoid DFS for large or infinite
maximum depths.

Depth-Limited Strategy(Depth first search
with cut off)
• Depth-first search with a depth cutoff k (the maximal depth below which nodes are not expanded).

• Three possible outcomes:
– Solution
– Failure (no solution)
– Cutoff (no solution within the cutoff)

• Solves the infinite-path problem.
• If k < d then incompleteness results.
• If k > d then it is not optimal.
• Time complexity: O(b^k)
• Space complexity: O(b·k)
Depth-Limited Strategy(Depth first search
with cut off)
DFS Evaluation:
• DFS is a method of choice when there is a known
(and reasonable) depth bound, and finding any
solution is sufficient
1. Depth-first search:
IF the search space contains very deep branches
without solution, THEN Depth-first may waste
much time in them.
2. Breadth-first search:
Is VERY demanding on memory !
Solutions ??
Iterative deepening
The order of expansion of states is similar to BFS,
except that some states are expanded multiple
times
Iterative Deepening Search l = 0

• Limit = 0

Iterative Deepening Search l = 1
• Limit = 1

Iterative Deepening Search l = 2
• Limit = 2

Iterative Deepening Search l = 3
• Limit = 3

As can be seen from these iterations, the order of expansion of states is similar to BFS, except that some states are expanded multiple times.
Iterative Deepening Search l = 1 to l=4

Stages in Iterative-Deepening Search

Iterative Deepening Search (IDS)

– It requires little memory (a constant times the depth of the current node).
– It is complete.
– It finds a minimum-depth solution, as BFS does.
– It is a strategy that sidesteps the issue of choosing the best depth limit by trying all possible depth limits.
– It finds the best depth limit by gradually increasing the limit (0, 1, 2, …) until the goal is found at depth limit d.
Iterative Deepening Search
Algorithm cont’d

1. DEPTH <-- 1

2. WHILE goal is not reached
   DO perform depth-limited search;
      increase DEPTH by 1;

(A Python sketch of depth-limited search and the iterative-deepening driver follows.)
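An illustrative Python sketch (graph as an adjacency dict; the depth limit counts actions along the current path):

def depth_limited_search(graph, start, goal, limit):
    stack = [[start]]                          # DFS, but never expand beyond `limit` actions
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        if len(path) - 1 < limit:              # cutoff: stop expanding at the depth limit
            for child in reversed(graph.get(node, [])):
                if child not in path:
                    stack.append(path + [child])
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    for limit in range(max_depth + 1):         # limit = 0, 1, 2, ... until the goal is found
        result = depth_limited_search(graph, start, goal, limit)
        if result is not None:
            return result
    return None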
Iterative Deepening Search
• Completeness
– It is complete
– It finds a solution if one exists
• Optimality
– It is optimal (with unit step costs)
– It finds the shortest path, like breadth-first search
• It guarantees a goal node of minimal depth
Uniform-cost search
• Expand least-cost unexpanded node
• Implementation:
– fringe = queue ordered by path cost
• Equivalent to breadth-first if step costs all equal
• Consider the problem of moving from node S to G, where S connects to A, B, and C with step costs 1, 5, and 15, and A, B, and C each connect to G with step costs 10, 5, and 5 respectively.
– Expanding by lowest path cost: S is expanded first, then A (g = 1), then B (g = 5). G is first generated via A with g = 11, but the cheaper path via B gives g = 10, so uniform-cost search returns the optimal solution S–B–G with cost 10. (A Python sketch follows.)
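A Python sketch of uniform-cost search using a priority queue ordered by path cost; graph[u][v] is assumed to hold the step cost of going from u to v (names illustrative):

import heapq

def uniform_cost_search(graph, start, goal):
    fringe = [(0, [start])]                      # priority queue of (path_cost, path)
    best = {start: 0}                            # cheapest cost found so far per node
    while fringe:
        cost, path = heapq.heappop(fringe)       # expand the least-cost unexpanded node
        node = path[-1]
        if node == goal:
            return cost, path
        for child, step in graph.get(node, {}).items():
            new_cost = cost + step
            if new_cost < best.get(child, float("inf")):
                best[child] = new_cost
                heapq.heappush(fringe, (new_cost, path + [child]))
    return None

# The slide's example:
# uniform_cost_search({"S": {"A": 1, "B": 5, "C": 15}, "A": {"G": 10},
#                      "B": {"G": 5}, "C": {"G": 5}}, "S", "G")
# returns (10, ['S', 'B', 'G'])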
Bi-directional Search cont’d
[Figure: a forward search grown from the start state S and a backward search grown from the goal state G; the two searches meet in the middle.]
Bi-directional Search cont’d

 Bi-directional Search

[Figure: two searches of depth d/2, one forward from the initial state and one backward from the final state.]

* Completeness: yes
* Optimality: yes
* Time complexity: O(b^(d/2))
* Space complexity: O(b^(d/2))

O(b^d) vs. O(b^(d/2))? With b = 10 and d = 6 this gives 1,111,111 vs. 2,222 nodes. (A quick check appears below.)
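These figures can be checked by summing the nodes at every level (a small Python verification, counting 1 + b + … + b^d for one full search versus two half-depth searches):

b, d = 10, 6
one_full   = sum(b**i for i in range(d + 1))            # 1 + 10 + ... + 10^6
two_halves = 2 * sum(b**i for i in range(d // 2 + 1))   # two searches of depth d/2
print(one_full, two_halves)   # 1111111 2222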
Comparing uninformed search strategies

[Table comparing the uninformed search strategies on completeness, time, space, and optimality — shown as a figure in the original slides.]
3.5 Search and Optimization
(Gradient descent)

– We are faced with an initial situation and we would


like to achieve a certain goal.
– At any point in time we have different simple
actions available to us (e.g., “turn left” vs. “turn
right”). Executing a particular sequence of such
actions may or may not achieve the goal.
– Search is the process of inspecting several such
sequences and choosing one that achieves the
goal.
– For some applications, each individual action has
a certain cost. A search problem where we aim not
only at reaching our goal but also at doing so at
minimal cost is an optimization problem.

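The section names gradient descent but gives no formula; the standard update is x ← x − η·∇f(x), repeated until the cost stops improving. A minimal sketch (the cost function and learning rate below are chosen purely for illustration):

def gradient_descent(grad, x0, learning_rate=0.1, steps=100):
    # repeatedly step against the gradient: x <- x - learning_rate * grad(x)
    x = x0
    for _ in range(steps):
        x = x - learning_rate * grad(x)
    return x

# Minimise f(x) = (x - 3)^2, whose gradient is 2*(x - 3); the minimum lies at x = 3.
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(minimum, 4))   # approximately 3.0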
3.6 Dynamic Game Theory
• It is a mathematical framework that enables us to model and analyze strategic interactions in dynamic, ever-changing environments.
• In this theory, multiple players make decisions and take actions while considering not just their own interests, but also how the choices of others influence the overall outcome.
• It has a significant impact in various fields, including economics, AI, and multi-agent systems.
• It helps in understanding and optimizing decision-making in complex, interactive situations.
• It allows intelligent systems to adapt and make strategic decisions in real time, in response to changing circumstances and the actions of other agents.
• It considers the dynamic nature of interactions.
• It plays a crucial role in enhancing AI search and planning.
Key elements of dynamic game theory

1. Sequential Decision-Making:
Players make decisions one after another, and each player's choice is
influenced by the choices made by previous players. This sequential
aspect introduces a temporal dimension to the analysis.
2. Information Asymmetry:
Players may have varying levels of information about the game and
other players' strategies. This can lead to complex strategies and
strategic uncertainty.
3. Dynamic Strategies:
In dynamic games, strategies are not static but evolve over time
based on the observed actions and outcomes, allowing for adaptation
and learning.
4. Equilibrium Concepts:
Dynamic game theory extends equilibrium concepts from static games
to incorporate the dynamic nature of interactions, such as subgame
perfect equilibria, Markov perfect equilibria, and sequential equilibria.



Questions ?
