Solving Problems by Searching
What is a Problem-Solving Agent?
A problem-solving agent is a type of intelligent agent in Artificial
Intelligence that aims to achieve a specific goal by exploring various
possible actions and selecting the most appropriate one using search
strategies.
• It does not rely on immediate actions or reflexes. Instead, it plans
ahead based on the current situation and a model of the environment.
How a Problem-Solving Agent Works:
1. Perceive the Environment
   The agent observes or receives input about the current state of the world.
2. Formulate a Goal
   It decides what it wants to achieve based on the percept.
3. Formulate a Problem
   It defines the problem in terms of states, actions, and goals.
4. Search for a Solution
   It explores possible sequences of actions using search algorithms.
5. Execute the Solution
   It applies the actions to reach the goal.
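The five-step loop above can be sketched in code. This is a minimal, illustrative sketch in Python; every function name here is hypothetical, not part of any standard library.

```python
# Minimal sketch of the perceive -> formulate -> search -> execute loop.
# All names are illustrative, not from any standard library.

def problem_solving_agent(percept, formulate_goal, formulate_problem, search):
    state = percept                           # 1. perceive the environment
    goal = formulate_goal(state)              # 2. formulate a goal
    problem = formulate_problem(state, goal)  # 3. formulate a problem
    plan = search(problem)                    # 4. search for a solution
    return plan                               # 5. the returned plan is then executed

# Toy demo: reach the number 3 from 0 by repeatedly incrementing.
plan = problem_solving_agent(
    0,
    lambda state: 3,
    lambda state, goal: (state, goal),
    lambda problem: ["inc"] * (problem[1] - problem[0]),
)
# plan is the sequence of actions the agent would execute.
```

The point of the sketch is the separation of concerns: perception, goal and problem formulation, and search are independent pieces the agent composes.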
Real-Life Example: GPS Navigation System
Let’s break it down:
• Goal: Reach your destination from your current location.
• Perception: Current location, road network, traffic conditions.
• Search: Evaluate all possible routes based on distance, time, tolls, etc.
• Execution: Guide the user turn-by-turn using the selected route.
• So, the GPS acts as a problem-solving agent, taking input from the
environment and performing a search to find the best way to the
destination.
Key Components of a Problem (Used by Problem-Solving
Agents):
• Initial State: The starting point of the agent. For GPS, this is your current location.
• Actions: The set of possible moves the agent can take (e.g., turn left, go straight).
• Transition Model: A function that describes what happens when an action is taken (e.g., if you turn right, you enter a new street).
• Goal Test: A way to check if the current state is the desired goal (e.g., have we reached the destination?).
• Path Cost: A numeric cost associated with a path (e.g., distance traveled, time taken, fuel used).
Problem Types in AI
When an agent is trying to solve a problem, the nature of the
environment and how much the agent knows about it determine the
type of problem. This helps us choose the right strategy or algorithm
to use.
• Let’s explore the four major types of problems with clear real-world
examples:
a. Single-State Problem
Characteristics:
• The agent knows exactly which state it's in at all times.
• The environment is fully observable (nothing is hidden).
• It's also deterministic – every action leads to a predictable result.
Example: Solving a Jigsaw Puzzle
Let’s say you're solving a 100-piece jigsaw puzzle:
• You can see the whole board and every piece.
• Each move (placing a piece) has a clear and predictable effect.
• There's no uncertainty — if a piece fits, it fits.
• The problem is single-state because you always know the exact arrangement of the puzzle at any point in
time.
Summary:
• In such problems, the agent doesn’t need to track multiple possibilities — it only deals with one known
world.
b. Multiple-State Problem
Characteristics:
• The agent does not know exactly which state it's in.
• It could be in any one of several possible states.
• The environment is either partially observable or non-deterministic.
Example: Searching in a Dark Room
Imagine you enter a dark room with multiple doors, but you can’t see:
• You don’t know where exactly you are.
• You have to guess your position based on touch, memory, or sounds.
• Actions like walking or turning may not always result in the expected movement.
So, the agent must maintain a belief about which state it might be in — maybe it's near a wall, or in a hallway,
etc.
Summary:
• This type of problem requires handling uncertainty about the current situation — not everything is visible or
predictable.
c. Contingency Problem
Characteristics:
• Actions can have unpredictable outcomes, depending on the environment.
• The agent must plan for multiple scenarios.
• Usually includes "if-then-else" planning.
Example: Cooking a Dish
Let’s say you’re cooking pasta:
• Even if you follow the same recipe, results might vary.
• The water may boil faster or slower depending on the stove.
• The vegetables may be fresher or spoiled — this affects taste and cooking time.
• So, you prepare contingency plans like:
• "If the sauce thickens too quickly, add water."
• "If the pasta is too soft, reduce cooking time next batch."
Summary:
• The agent has to be ready to respond dynamically to different situations — it can’t assume everything will
go perfectly.
d. Exploration Problem
Characteristics:
• The agent starts with no knowledge of the environment.
• It must explore to gather information before solving the actual problem.
• Common in unknown or new environments.
Example: Robot Vacuum Cleaner
A new robot vacuum is turned on in a home it’s never seen:
• It doesn’t know the layout of the house.
• It must explore — bumping into walls, detecting edges, mapping out the rooms.
• Only after gathering this knowledge, it can plan an efficient cleaning route.
This is an exploration problem — it’s like playing a game where you don’t know the rules until you explore a
bit.
Summary:
• Exploration is the first step — learning about the environment. Once that’s done, the agent can switch to a
problem-solving mode using the learned map or model.
Quick Recap Table
• Single-State: Full observability, no uncertainty. Example: jigsaw puzzle.
• Multiple-State: Partial observability, uncertainty. Example: navigating a dark room.
• Contingency: Partial/full observability, uncertainty. Example: cooking a meal.
• Exploration: Unknown environment, uncertainty. Example: robot learning a house layout.
What is Problem Formulation?
Problem formulation is the process of defining a problem clearly so that
an agent can search for a solution effectively. It sets up the foundation
for the agent to begin reasoning and acting.
• We describe the problem in terms the agent can understand and
process, like giving it a rulebook.
Key Elements of a Search Problem:
• Initial State: Where the agent begins. Sets the starting point.
• Actions: Possible things the agent can do. Defines the agent’s capabilities.
• Transition Model: Describes the result of each action. Tells how the world changes.
• Goal Test: Checks if the goal is reached. Helps terminate the search.
• Path Cost: Numerical cost of a path. Helps select the best/optimal solution.
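These five elements can be bundled into one small data structure. The class below is a hypothetical sketch (not a standard API) showing how a formulated problem hands an agent everything a search algorithm needs.

```python
class SearchProblem:
    """The five elements of a formulated search problem (illustrative sketch)."""

    def __init__(self, initial, actions, result, is_goal, step_cost):
        self.initial = initial        # initial state
        self.actions = actions        # state -> iterable of legal actions
        self.result = result          # (state, action) -> next state (transition model)
        self.is_goal = is_goal        # state -> bool (goal test)
        self.step_cost = step_cost    # (state, action, next) -> number (path cost piece)

# Toy example: walk along a number line from 0 to 3.
line = SearchProblem(
    initial=0,
    actions=lambda s: ["right"],
    result=lambda s, a: s + 1,
    is_goal=lambda s: s == 3,
    step_cost=lambda s, a, n: 1,
)
```

Any of the search algorithms discussed later could, in principle, be written against this one interface.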
Example 1: 8-Puzzle Game
Imagine a sliding puzzle with numbers from 1 to 8 and a blank tile:
• Initial State: Random arrangement of 8 tiles in a 3x3 grid.
• Actions: Move a tile into the blank space (up, down, left, right).
• Transition Model: Describes how each tile move affects the board.
• Goal Test: Are the tiles in order (1-2-3 on top, 4-5-6 middle, 7-8-blank
bottom)?
• Path Cost: Number of moves to reach the goal configuration.
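The 8-puzzle's actions and transition model can be made concrete. The sketch below uses a common but illustrative encoding: the board is a 9-tuple read row by row, with 0 standing for the blank.

```python
def legal_moves(blank):
    """Actions: directions the blank (index 0-8 in a 3x3 grid) can move."""
    row, col = divmod(blank, 3)
    moves = []
    if row > 0: moves.append("up")
    if row < 2: moves.append("down")
    if col > 0: moves.append("left")
    if col < 2: moves.append("right")
    return moves

def slide(board, move):
    """Transition model: move the blank (0), swapping it with the adjacent tile."""
    blank = board.index(0)
    delta = {"up": -3, "down": 3, "left": -1, "right": 1}[move]
    swap = blank + delta
    new = list(board)
    new[blank], new[swap] = new[swap], new[blank]
    return tuple(new)

# Goal test: is the board equal to this configuration?
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)
```

Path cost is then simply the number of `slide` calls along a solution.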
Example 2: Robot Delivery
A delivery robot must deliver a package inside a building.
• Initial State: Robot starts at the charging dock.
• Actions: Move forward, turn left/right, open doors, pick/drop package.
• Transition Model: If it moves forward and there's a wall, it stays. If
clear, it moves.
• Goal Test: Is the package delivered to Room 101?
• Path Cost: Could be calculated as battery usage, time, or distance.
Example Problems in AI Search
a. Route Finding Problem
Description: Find the shortest or best path between two locations.
• Initial State: Starting city
• Actions: Move to connected cities via roads
• Goal Test: Have we reached the destination?
• Path Cost: Distance, time, or toll cost
Real-Life Example: Google Maps navigation:
• You input source and destination.
• It calculates the best route using algorithms like A*.
• It considers traffic, road closures, etc.
b. Maze Solving
Description: Find a way from the start to the exit of a maze.
• Initial State: Start location
• Actions: Move forward, turn left/right
• Goal Test: Have we reached the exit?
• Path Cost: Number of steps or time
Maze solving is also used in robotics – e.g., micromouse competitions where robots autonomously solve mazes.
c. Chess Playing
Description: Choose a sequence of moves that leads to checkmating the opponent.
• Initial State: Current board layout
• Actions: All legal moves for the current player
• Goal Test: Is the opponent checkmated?
• Path Cost: Not always numeric; often involves depth of moves, piece value, etc.
Chess is a complex search problem with an enormous game tree (roughly 10^120 possible games), requiring smart pruning and evaluation.
Search Algorithms in AI
Artificial Intelligence is the study of building agents that act rationally. Most of the time, these agents perform
some kind of search algorithm in the background in order to achieve their tasks.
• A search problem consists of:
• A State Space: the set of all possible states you can be in.
• A Start State: the state from which the search begins.
• A Goal Test: a function that looks at the current state and returns whether or not it is the goal state.
• The Solution to a search problem is a sequence of actions, called the plan, that transforms the start state into the goal state.
• This plan is found by means of search algorithms.
Types of search algorithms:
Uninformed Search Algorithms:
The search algorithms in this section have no additional information about the goal node other than what is provided in the problem definition. The plans to reach the goal state from the start state differ only in the order and/or length of actions. Uninformed search is also called blind search. These algorithms can only generate successors and distinguish the goal state from non-goal states.
The following uninformed search algorithms are discussed in this section.
1. Depth First Search
2. Breadth First Search
3. Uniform Cost Search
Depth First Search (DFS):
Which solution would DFS find to move from node S to node G if run on the graph below?
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. The algorithm starts at the root node (selecting some arbitrary node as the root in the case of a graph) and explores as far as possible along each branch before backtracking. It uses a last-in, first-out strategy and hence is implemented using a stack.
Real-Life Example:
You’re exploring a cave system.
• You follow one tunnel deep into the cave.
• Only when it's a dead end do you backtrack and
try a different tunnel.
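The stack-based strategy can be sketched as follows. Since the slide's graph figure is not reproduced here, the example graph is hypothetical.

```python
def dfs(graph, start, goal):
    """Depth-first search with an explicit stack; returns one path (not necessarily shortest)."""
    stack = [(start, [start])]             # each entry: (node, path taken so far)
    visited = set()
    while stack:
        node, path = stack.pop()           # last in, first out
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # Push in reverse so the first-listed neighbor is explored first.
        for neighbor in reversed(graph.get(node, [])):
            stack.append((neighbor, path + [neighbor]))
    return None

# Hypothetical graph (adjacency lists), standing in for the missing figure.
g = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
```

Note how DFS commits to the branch through A and only backtracks on a dead end, mirroring the cave-exploration analogy above.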
Breadth First Search (BFS):
Which solution would BFS find to move from node S to node G if run on the graph below?
Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph data structures. It starts at the tree root (or some arbitrary node of a graph, sometimes referred to as a ‘search key’), and explores all of the neighbor nodes at the present depth prior to moving on to the nodes at the next depth level. It is implemented using a queue.
Real-Life Example:
Imagine you are searching for your friend in a
huge building.
• You check every room on the 1st floor first.
• Then move to the 2nd floor, and so on.
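The queue-based strategy looks like this in code; again, the graph below is a hypothetical stand-in for the slide's figure.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search with a FIFO queue; finds a fewest-edges path."""
    queue = deque([(start, [start])])      # each entry: (node, path taken so far)
    visited = {start}
    while queue:
        node, path = queue.popleft()       # first in, first out
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, path + [neighbor]))
    return None

# Hypothetical graph: G is two hops away via B, three via A -> C.
g = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": ["G"]}
```

Because BFS finishes each depth level before moving deeper (every room on one floor before the next floor), it returns the two-hop route through B.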
Uniform Cost Search (UCS):
Which solution would UCS find to move from node S to node G if run on the graph below?
UCS differs from BFS and DFS because here costs come into play: traversing different edges may not have the same cost. The goal is to find a path whose cumulative sum of edge costs is the least.
The cost of a node is defined as:
cost(node) = cumulative cost of the path from the root to that node
cost(root) = 0
Real-Life Example:
Finding the cheapest travel ticket:
• A flight from Hyderabad to Delhi might cost
₹5,000
• But Hyderabad → Mumbai → Delhi might be
cheaper at ₹4,000
• UCS will prefer the cheaper route
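The ticket example can be run through a small UCS implementation. The route costs below are illustrative (in thousands of rupees), not real fares.

```python
import heapq

def ucs(graph, start, goal):
    """Uniform cost search: always expand the frontier node with the lowest cumulative cost."""
    frontier = [(0, start, [start])]       # priority queue of (cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step in graph.get(node, []):
            heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))
    return None

# Illustrative costs in thousands of rupees (not real fares).
flights = {
    "HYD": [("DEL", 5), ("BOM", 1)],
    "BOM": [("DEL", 3)],
}
```

UCS pops the ₹1k leg to Mumbai before the ₹5k direct flight, so the ₹4k route via Mumbai wins.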
Informed Search Algorithms:
Here, the algorithms have information on the goal state, which helps in more efficient searching. This
information is obtained by something called a heuristic.
In this section, we will discuss the following search algorithms.
1. Greedy Search
2. A* Tree Search
3. A* Graph Search
• Search Heuristics: In an informed search, a heuristic is a function that estimates how close a state is to the goal state – for example, Manhattan distance or Euclidean distance. (The smaller the distance, the closer the goal.) Different heuristics are used in the different informed algorithms discussed below.
Greedy Search:
In greedy search, we expand the node that appears closest to the goal node. The “closeness” is estimated by a heuristic h(x).
Heuristic: A heuristic h is defined as:
h(x) = estimate of the distance of node x from the goal node.
The lower the value of h(x), the closer the node is to the goal.
Strategy: Expand the node closest to the goal state, i.e. the node with the lowest h value.
Exercise: Find the path from S to G using greedy search. The heuristic value h of each node appears below the name of the node.
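Greedy best-first search is a priority queue ordered by h alone. The graph and heuristic values below are hypothetical, since the slide's figure is not shown here.

```python
import heapq

def greedy_search(graph, h, start, goal):
    """Greedy best-first: always expand the frontier node with the smallest h value."""
    frontier = [(h[start], start, [start])]   # priority queue keyed on h only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            heapq.heappush(frontier, (h[neighbor], neighbor, path + [neighbor]))
    return None

# Hypothetical graph and heuristic values standing in for the missing figure.
g = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
h = {"S": 5, "A": 2, "B": 4, "G": 0}
```

With h(A) = 2 < h(B) = 4, the search heads through A; note that greedy search ignores edge costs entirely, which is why it is fast but not guaranteed optimal.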
A* Tree Search:
A* Search combines:
•Uniform Cost Search (UCS) → looks for the cheapest path so far
(g(x))
•Greedy Best-First Search → looks at the estimated distance to the
goal (h(x))
The total estimated cost function is:
f(x)=g(x)+h(x)
•g(x) → Actual cost from start to node x (also called backward cost)
•h(x) → Estimated cost from node x to goal (also called forward
cost)
A* search is optimal only when, for all nodes, the forward cost h(x) underestimates the actual cost h*(x) to reach the goal. This property of the A* heuristic is called admissibility:
0 ≤ h(x) ≤ h*(x)
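A* tree search is UCS with the frontier ordered by f(x) = g(x) + h(x) instead of g(x) alone. The weighted graph and heuristic below are hypothetical; the heuristic is admissible (it never overestimates the true remaining cost).

```python
import heapq

def astar(graph, h, start, goal):
    """A* tree search: expand the frontier node with the lowest f = g + h."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    while frontier:
        f, g_cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return g_cost, path
        for neighbor, step in graph.get(node, []):
            g_new = g_cost + step                # backward cost so far
            f_new = g_new + h[neighbor]          # plus estimated forward cost
            heapq.heappush(frontier, (f_new, g_new, neighbor, path + [neighbor]))
    return None

# Hypothetical weighted graph; h is admissible (h(x) <= true cost to G everywhere).
g = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 3, "B": 1, "G": 0}
```

Greedy search would chase A's low h and pay 1 + 5 = 6; A*'s f values steer it to the truly cheaper route through B at cost 5.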
A* Graph Search:
• A* tree search works well, except that it spends time re-exploring branches it has already explored. In other words, if the same node is expanded twice in different branches of the search tree, A* tree search may explore both of those branches, thus wasting time.
• A* Graph Search, or simply Graph Search, removes this limitation by adding a rule: do not expand the same node more than once.
• Heuristic: Graph search is optimal only when the drop in heuristic between two successive nodes A and B, given by h(A) − h(B), is less than or equal to the backward cost of the edge between them, g(A → B). This property of the graph-search heuristic is called consistency:
h(A) − h(B) ≤ g(A → B)
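The "never expand the same node twice" rule amounts to keeping a closed set. The sketch below extends the A* idea with that rule; the graph and heuristic are hypothetical, and the heuristic is consistent as the optimality condition requires.

```python
import heapq

def astar_graph(graph, h, start, goal):
    """A* graph search: A* ordered by f = g + h, but each node is expanded at most once."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    closed = set()
    while frontier:
        f, g_cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return g_cost, path
        if node in closed:
            continue                             # rule: do not expand a node twice
        closed.add(node)
        for neighbor, step in graph.get(node, []):
            if neighbor not in closed:
                g_new = g_cost + step
                heapq.heappush(frontier, (g_new + h[neighbor], g_new, neighbor, path + [neighbor]))
    return None

# Hypothetical graph; h satisfies h(A) - h(B) <= g(A -> B) on every edge (consistency).
g = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 3, "B": 1, "G": 0}
```

Consistency guarantees that the first time a node comes off the frontier its g value is already optimal, which is what makes the closed set safe.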