MST 1
Define problem formulation and also explain its working with real-life examples.
Problem formulation in Artificial Intelligence is the process of precisely defining a problem in
such a way that it can be solved by an intelligent agent. It involves identifying the initial state,
the possible actions, the transition model, the goal test, and the path cost. A problem is well-
formulated only when these components are clearly described.
The main purpose is to convert a vague, real-world situation into a well-defined structure that
an AI system can handle using search or decision-making techniques.
Steps/Working of Problem Formulation
1. Initial State:
The state where the agent begins.
Example: For a navigation app, the initial state is your current location.
2. Actions:
The set of all possible moves or operations the agent can take.
Example: In navigation, actions could be “turn left,” “go straight,” or “take exit 5.”
3. Transition Model:
It defines the outcome of each action — i.e., what new state results after applying an
action.
Example: If you drive 2 km east, the transition model updates your location on the map.
4. Goal Test:
A condition that checks whether the agent has reached the desired goal state.
Example: Reaching your destination address in Google Maps.
5. Path Cost:
A numeric value that assigns cost to a solution path, helping to evaluate the quality of
solutions.
Example: Travel time, distance, or fuel cost in the navigation problem.
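The five components above can be captured in a small Python sketch. The road network, place names, and distances below are invented purely for illustration; a real problem would supply its own states, actions, and costs.

```python
# Minimal sketch of the five components for the navigation example.
# The road network, place names, and distances are invented for illustration.

ROADS = {                                   # transition model + step costs (km)
    "Home":   {"Market": 2, "School": 4},
    "Market": {"Home": 2, "Office": 5},
    "School": {"Home": 4, "Office": 3},
    "Office": {},
}

class NavigationProblem:
    def __init__(self, initial, goal):
        self.initial = initial              # 1. initial state, e.g. "Home"
        self.goal = goal

    def actions(self, state):               # 2. actions available in a state
        return list(ROADS[state])

    def result(self, state, action):        # 3. transition model: state after an action
        return action                       # "drive to X" simply puts the agent at X

    def goal_test(self, state):             # 4. goal test
        return state == self.goal

    def step_cost(self, state, action):     # 5. path cost of one step
        return ROADS[state][action]

problem = NavigationProblem("Home", "Office")
print(problem.actions("Home"))              # ['Market', 'School']
print(problem.goal_test("Office"))          # True
```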
Real-Life Examples
1. Navigation Problem (Google Maps):
o Initial State: Current location (e.g., your home).
o Actions: Drive north, turn left, take a bus, etc.
o Transition Model: Each move changes your location on the map.
o Goal Test: Arriving at the desired destination (office).
o Path Cost: Minimize travel time or distance.
2. Puzzle Problem (8-Puzzle Game):
o Initial State: Scrambled puzzle configuration.
o Actions: Slide a tile up, down, left, right.
o Transition Model: Defines how the puzzle changes after a move.
o Goal Test: Puzzle pieces arranged in correct order.
o Path Cost: Number of moves required to solve.
3. Medical Diagnosis System:
o Initial State: Symptoms reported by a patient.
o Actions: Possible diagnostic tests or questions.
o Transition Model: Test results lead to new knowledge states.
o Goal Test: Correctly identifying the disease.
o Path Cost: Time, cost, and number of tests performed.
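As a second sketch, the 8-puzzle formulation above can be written out in the same style. The helper names below are invented for illustration, and the path cost is simply the number of moves made.

```python
# Sketch of the 8-puzzle formulation (0 stands for the blank tile).
# The helper names are invented for illustration.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)           # goal test: tiles in order, blank last

MOVES = {"up": -3, "down": 3, "left": -1, "right": 1}

def actions(state):
    """Actions: directions in which the blank tile can slide."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    legal = []
    if row > 0: legal.append("up")
    if row < 2: legal.append("down")
    if col > 0: legal.append("left")
    if col < 2: legal.append("right")
    return legal

def result(state, action):
    """Transition model: the board after sliding the blank tile."""
    blank = state.index(0)
    target = blank + MOVES[action]
    board = list(state)
    board[blank], board[target] = board[target], board[blank]
    return tuple(board)

def goal_test(state):
    return state == GOAL

start = (1, 2, 3, 4, 5, 6, 7, 0, 8)          # one move away from the goal
print(actions(start))                        # ['up', 'left', 'right']
print(goal_test(result(start, "right")))     # True -> path cost here is 1 move
```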
Final Summary
Problem formulation is the backbone of AI problem-solving. Without a properly defined initial
state, actions, transition model, goal test, and path cost, an AI agent cannot effectively solve
real-world problems. By structuring the problem this way, AI can use search algorithms and
reasoning techniques to find optimal solutions in applications like navigation, games, medical
diagnosis, robotics, and more.
Artificial Intelligence (AI)
Artificial Intelligence (AI) is the branch of computer science that deals with creating machines or
systems capable of performing tasks that normally require human intelligence. These tasks
include learning, reasoning, problem-solving, perception, decision-making, and understanding
natural language.
In simple words, AI is about making computers "think and act intelligently" like humans.
Characteristics of Artificial Intelligence
AI systems can be identified by the following main characteristics:
1. Learning Ability
o AI systems can learn from data, past experiences, and patterns.
o Example: Netflix recommending movies/shows based on your watch history.
2. Reasoning and Problem-Solving
o AI can analyze problems, apply logic, and come up with possible solutions.
o Example: Chess-playing AI like AlphaZero can evaluate multiple moves and
choose the best one.
3. Adaptability
o AI adapts its actions based on changes in the environment.
o Example: Self-driving cars adjusting speed and route when traffic conditions
change.
4. Automation
o AI systems can perform repetitive or complex tasks without constant human
intervention.
o Example: Industrial robots assembling cars in a factory.
5. Perception
o AI can use sensors, cameras, or other devices to perceive the world and interpret
it.
o Example: Face recognition systems in smartphones.
6. Natural Language Processing (NLP)
o AI can understand, process, and generate human language.
o Example: Virtual assistants like Siri, Alexa, and ChatGPT.
7. Decision-Making
o AI can make decisions based on analyzing data and evaluating different options.
o Example: AI fraud detection systems deciding whether a bank transaction is
legitimate or suspicious.
8. Goal-Oriented Behavior
o AI works toward achieving defined objectives efficiently.
o Example: Google Maps AI finds the fastest route to reach a destination.
Production System in Artificial Intelligence
A production system is a way of solving problems in Artificial Intelligence.
It consists of a set of rules (called productions) and some information about the problem
(called working memory or knowledge base).
The rules are in the form:
👉 IF (condition) THEN (action)
Condition: checks something about the problem.
Action: tells what to do if the condition is true.
The system keeps applying these rules step by step until the problem is solved.
Main Components of a Production System
1. Working Memory (Database/Knowledge Base):
Stores the current state of the problem.
2. Production Rules (Rule Set):
A collection of IF–THEN rules that describe possible actions.
3. Control System (Rule Interpreter):
Decides which rule to apply at each step and in what order.
Example
Suppose we want to make tea using a production system.
Rule 1: IF kettle is empty THEN fill it with water.
Rule 2: IF kettle has water THEN switch it on.
Rule 3: IF water is hot THEN add tea leaves.
Rule 4: IF tea is ready THEN pour into cup.
The system starts with an empty kettle and, by applying the rules step by step, reaches the goal state: the tea is ready.
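A rough code sketch of this tea-making example is given below. The fact names and rule functions are invented for illustration, but the structure (working memory, IF–THEN production rules, and a simple control loop) matches the components described above.

```python
# Sketch of the tea-making example as a production system.
# Facts and rule functions are invented for illustration only.

working_memory = {"kettle_empty"}           # current state of the problem

# Each production rule: (condition on working memory, action that updates it)
rules = [
    (lambda wm: "kettle_empty" in wm,
     lambda wm: wm - {"kettle_empty"} | {"kettle_has_water"}),
    (lambda wm: "kettle_has_water" in wm,
     lambda wm: wm - {"kettle_has_water"} | {"water_hot"}),
    (lambda wm: "water_hot" in wm,
     lambda wm: wm - {"water_hot"} | {"tea_ready"}),
    (lambda wm: "tea_ready" in wm,
     lambda wm: wm | {"tea_in_cup"}),
]

# Control system: repeatedly pick the first applicable rule and apply it.
# (In this simple example some rule always applies until the goal is reached.)
while "tea_in_cup" not in working_memory:
    for condition, action in rules:
        if condition(working_memory):
            working_memory = action(working_memory)
            break

print(working_memory)                       # contains 'tea_in_cup': goal reached
```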
Why is it Important?
Easy to represent knowledge in rule form.
Useful for expert systems (like medical diagnosis systems).
Can handle complex problems step by step.
Types of Production Systems
1. Monotonic Production System
o Once a rule is applied, it never needs to be undone.
o Example: Mathematical problem solving.
2. Non-Monotonic Production System
o Rules may need to be changed or undone as new information is obtained.
o Example: Legal reasoning where evidence can change the outcome.
3. Deterministic Production System
o Applying a rule always leads to a uniquely defined next state.
o Example: Arithmetic operations.
4. Non-Deterministic Production System
o Applying a rule may lead to multiple possible states.
o Example: Natural language processing, where one sentence can have multiple
interpretations.
Real-Life Examples
1. Expert Systems (Medical Diagnosis):
o Rule: IF fever AND cough THEN possible flu.
o Rule: IF chest pain AND shortness of breath THEN possible heart issue.
2. Chatbots:
o Rule: IF user says "Hi" THEN respond "Hello, how can I help you?"
3. Route Planning (GPS):
o Rule: IF traffic jam detected THEN suggest alternative route.
Control Strategy in Artificial Intelligence
Definition:
A Control Strategy is the method or set of rules used in a problem-solving system (like a
production system) to decide which action or rule to apply next when multiple options are
available.
In other words, it is the decision-making mechanism that guides the search process from the
initial state to the goal state efficiently.
Without a control strategy, the system may apply rules randomly, leading to inefficiency or even
failure in solving the problem.
Requirements of a Good Control Strategy
1. Efficiency – It should reduce the number of steps to reach the goal.
2. Completeness – It must guarantee finding a solution if one exists.
3. Optimality – It should find the best (least cost) solution when multiple solutions exist.
4. Avoidance of loops – It should prevent repeating the same states.
Types of Control Strategies
1. Forward Chaining (Data-Driven Strategy):
o Starts with the initial data and applies rules to move forward until the goal is
reached.
o Example: Medical diagnosis systems start with symptoms and infer possible
diseases.
2. Backward Chaining (Goal-Driven Strategy):
o Starts with the goal and works backward to check if rules can support it.
o Example: Prolog-based expert systems check whether a suspected disease can explain the observed symptoms.
3. Search Control Strategies:
AI search algorithms also use control strategies, such as:
o Depth-First Search (DFS): Explores as far as possible along one branch before
backtracking.
o Breadth-First Search (BFS): Explores all nodes at the current depth before
moving deeper.
o Heuristic Search (A*): Uses heuristics (rules of thumb) to choose the most promising path.
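The job of the control strategy can be seen in a minimal sketch: the only difference between breadth-first and depth-first exploration below is which state the system decides to expand next. The tiny state graph is made up for illustration.

```python
# Sketch: the control strategy is simply "which state do we expand next?".
# A queue (first in, first out) gives breadth-first behaviour; a stack
# (last in, first out) gives depth-first behaviour.
from collections import deque

GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}

def search(start, goal, strategy="bfs"):
    frontier = deque([start])
    visited = set()
    while frontier:
        # The control strategy decides which state comes out of the frontier.
        state = frontier.popleft() if strategy == "bfs" else frontier.pop()
        if state == goal:
            return True
        if state in visited:
            continue
        visited.add(state)
        frontier.extend(GRAPH[state])
    return False

print(search("A", "E", strategy="bfs"))     # True
print(search("A", "E", strategy="dfs"))     # True
```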
Real-Life Examples
1. Google Maps:
o Uses control strategies (like shortest distance or fastest route) to decide which
road to take.
2. Chess AI:
o Applies control strategies to select the best move from many possible moves.
3. Expert Systems (Medical):
o Forward chaining: from patient’s symptoms → possible diagnosis.
o Backward chaining: from suspected disease → check if symptoms match.
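A compact sketch of forward and backward chaining over an invented medical rule base is shown below; the facts and rules are made up for illustration and are not a real diagnostic system.

```python
# Sketch of forward vs. backward chaining over a tiny invented rule base.
# Each rule: (set of condition facts, concluded fact).

RULES = [
    ({"fever", "cough"}, "flu"),
    ({"flu", "body_ache"}, "needs_rest"),
]

def forward_chain(facts):
    """Data-driven: keep deriving new facts until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Goal-driven: can the goal be proved from the facts via the rules?"""
    if goal in facts:
        return True
    for conditions, conclusion in RULES:
        if conclusion == goal and all(backward_chain(c, facts) for c in conditions):
            return True
    return False

print(forward_chain({"fever", "cough", "body_ache"}))                 # adds 'flu', 'needs_rest'
print(backward_chain("needs_rest", {"fever", "cough", "body_ache"}))  # True
```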
Search Strategies in Artificial Intelligence
Definition:
A search strategy is the method used by an AI problem-solving agent to explore possible states
of a problem in order to find a solution path from the initial state to the goal state.
In simple words, search strategies define how the AI agent navigates the problem space to
reach the solution.
Criteria for Evaluating a Search Strategy
1. Completeness – Does it find a solution whenever one exists?
2. Optimality – Does it find the best (least-cost) solution?
3. Time Complexity – How much time does it take to find a solution?
4. Space Complexity – How much memory does it use during the search?
Types of Search Strategies
Search strategies are mainly divided into two categories:
1. Uninformed (Blind) Search Strategies
Do not have additional knowledge about the problem domain.
They only use the information available in the problem definition (initial state, actions,
goal test).
Examples:
Breadth-First Search (BFS):
Explores all nodes at the current depth before going deeper.
Example: Finding the shortest path in an unweighted graph.
Depth-First Search (DFS):
Explores as far as possible along one branch before backtracking.
Example: File system directory traversal.
Uniform Cost Search (UCS):
Expands the least-cost node first.
Example: Finding the cheapest route in maps.
2. Informed (Heuristic) Search Strategies
Use heuristics (rules of thumb) to estimate the “closeness” of a state to the goal.
More efficient than uninformed search.
Examples:
Greedy Best-First Search:
Selects the node that appears to be closest to the goal based on a heuristic.
Example: GPS navigation estimating distance to destination.
A* Search:
Combines path cost and heuristic to find the optimal solution.
Example: Widely used in Google Maps and pathfinding in games.
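A rough sketch of A* on a tiny invented road graph is given below. The distances and heuristic estimates are made up for illustration, but the idea of adding the cost travelled so far to the heuristic estimate of the remaining distance is the one described above.

```python
# Rough sketch of A* search on a tiny invented road graph.
# Road lengths and straight-line estimates (the heuristic) are made up.
import heapq

ROADS = {
    "A": {"B": 2, "C": 5},
    "B": {"D": 4},
    "C": {"D": 2},
    "D": {},
}
H = {"A": 5, "B": 4, "C": 1, "D": 0}    # heuristic: estimated distance to goal "D"

def a_star(start, goal):
    # Frontier entries: (f = cost so far + heuristic, cost so far, state, path)
    frontier = [(H[start], 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for neighbour, step in ROADS[state].items():
            new_g = g + step
            if new_g < best_cost.get(neighbour, float("inf")):
                best_cost[neighbour] = new_g
                heapq.heappush(frontier,
                               (new_g + H[neighbour], new_g, neighbour, path + [neighbour]))
    return None, float("inf")

print(a_star("A", "D"))                 # (['A', 'B', 'D'], 6)
```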
Real-Life Examples
1. Navigation Systems (Google Maps):
o Uses A* search to find the shortest and fastest routes.
2. Puzzle Solving (8-Puzzle, Sudoku):
o Heuristic search strategies are used to solve efficiently.
3. Games (Chess, Tic-Tac-Toe):
o Uses DFS and heuristic strategies to evaluate possible moves.
Problem Characteristics in Artificial Intelligence
Definition:
Problem characteristics describe the nature of a problem and help in selecting the right
approach or strategy for solving it. Not all problems are the same—some are easy to formalize
and solve, while others are complex and require special methods.
Main Problem Characteristics
1. Decomposable vs. Non-Decomposable Problems
o Decomposable: Can be broken into smaller sub-problems.
o Example: Solving a jigsaw puzzle (each piece contributes to the whole).
o Non-Decomposable: Cannot be divided easily.
o Example: Medical diagnosis (all symptoms are interrelated).
2. Ignorable vs. Non-Ignorable Problems
o Ignorable: Certain facts can be safely ignored.
o Example: Crossword puzzle (only relevant words matter).
o Non-Ignorable: Every detail matters.
o Example: Navigating in traffic (every obstacle is important).
3. Static vs. Dynamic Problems
o Static: Problem does not change while being solved.
o Example: Chess (the board does not change while the player is deciding on a move).
o Dynamic: Problem changes during solving.
o Example: Stock market prediction (prices change constantly).
4. Discrete vs. Continuous Problems
o Discrete: Limited states or actions.
o Example: Tic-Tac-Toe (fixed number of moves).
o Continuous: Infinite possibilities.
o Example: Controlling a robot arm (continuous movement).
5. Single-Agent vs. Multi-Agent Problems
o Single-Agent: Only one agent is working toward the goal.
o Example: Solving a maze.
o Multi-Agent: Multiple agents may cooperate or compete.
o Example: Playing football or chess against an opponent.
Production System Strategies
In a production system (rule-based system), when multiple rules are applicable, the system
needs a control strategy to decide which rule to apply first. These are called production system
strategies.
Main Production System Strategies
1. Forward Chaining (Data-Driven Strategy):
o Starts with the available facts and applies rules to infer new facts until the goal is
reached.
o Example: In medical diagnosis, starting with symptoms → identifying possible
disease.
2. Backward Chaining (Goal-Driven Strategy):
o Starts with the goal and works backward to see if available facts can satisfy the
conditions.
o Example: Expert system: To check if a patient has malaria → verify symptoms like
fever, chills, and blood test results.
3. Conflict Resolution Strategy:
o When more than one rule can be applied, the system decides based on priority.
o Common methods:
Specificity: Prefer rules with more specific conditions.
Recency: Prefer rules that match the most recent facts.
Rule Priority: Some rules are predefined as more important.
o Example: In traffic navigation, a rule about avoiding accidents may override a rule
about shortest path.
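A minimal sketch of conflict resolution by rule priority is given below; the facts, rules, and priority values are invented for illustration. Specificity could be handled the same way by comparing how many conditions each matching rule has.

```python
# Sketch of conflict resolution: several rules match the current facts,
# and the control strategy picks one by priority.

facts = {"traffic_jam", "short_route_available"}

# Each rule: (priority, set of condition facts, suggested action)
rules = [
    (1, {"short_route_available"}, "take the shortest route"),
    (5, {"traffic_jam"}, "take the alternative route"),   # safety rule, higher priority
]

# Conflict set: all rules whose conditions are satisfied by the current facts.
conflict_set = [r for r in rules if r[1] <= facts]

# Conflict resolution by rule priority (higher number wins here).
priority, condition, action = max(conflict_set, key=lambda r: r[0])
print(action)                                             # 'take the alternative route'
```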
Problem-Solving Method in Artificial Intelligence
Problem solving in AI refers to the process of finding a sequence of actions that leads from an
initial state to a goal state. The method provides a structured way for an AI agent to analyze the
problem, search for possible solutions, and choose the best one.
In short, it is a systematic approach to move from what is given → what is required.
Steps of the Problem-Solving Method
1. Define the Problem Clearly
o Identify the initial state, goal state, possible actions, and constraints.
o Example: In navigation, the initial state = current location, goal state =
destination.
2. Formulate the Problem
o Represent the problem in a formal structure (states, operators, transition model,
path cost).
o Example: In an 8-puzzle, each tile move is an operator, and the transition model
shows the new board configuration.
3. Search for Solutions
o Use search strategies (uninformed or informed) to explore the problem space.
o Example: BFS, DFS, or A* algorithm in route planning.
4. Apply a Control Strategy
o Select the best rule or action when multiple options are available (forward
chaining, backward chaining, heuristics).
5. Execute and Reach the Goal
o Apply the selected actions step by step until the goal state is reached.
o Example: Google Maps executes the path by giving turn-by-turn instructions.
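The five steps can be traced on a tiny invented route-finding problem. The sketch below uses breadth-first search as the search strategy and simply prints the resulting path as the execution step; the place names are made up for illustration.

```python
# Sketch of the five problem-solving steps on an invented route-finding problem.
from collections import deque

# Step 1 & 2: define and formulate the problem (states, actions, goal).
ROADS = {"Home": ["Market", "School"], "Market": ["Office"], "School": ["Office"], "Office": []}
initial, goal = "Home", "Office"

# Step 3 & 4: search the problem space; the control strategy here is
# breadth-first (expand states in the order they were discovered).
def solve(initial, goal):
    frontier = deque([[initial]])
    visited = {initial}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in ROADS[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Step 5: execute the solution step by step.
for place in solve(initial, goal):
    print("Go to", place)
```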
Problem-Solving Approaches in AI
1. Search-Based Problem Solving
o Problems are solved by systematically exploring possible states.
o Example: Solving mazes, playing chess.
2. Knowledge-Based Problem Solving
o Uses a knowledge base (facts + rules) and inference engine.
o Example: Expert systems in medical diagnosis.
3. Heuristic Problem Solving
o Uses rules of thumb to reduce search space and reach the goal faster.
o Example: A* algorithm in GPS navigation.
Real-Life Examples
1. Navigation Systems (Google Maps):
o Initial state: Current location.
o Goal: Destination.
o Method: Uses heuristic search (A*) to find the fastest path.
2. Puzzle Solving (8-Puzzle):
o Initial state: Scrambled tiles.
o Goal: Tiles arranged in order.
o Method: Uses search strategies like BFS or heuristic methods.
3. Medical Diagnosis (Expert Systems):
o Initial state: Symptoms.
o Goal: Correct disease identification.
o Method: Uses knowledge-based reasoning (production rules).
Hill Climbing Algorithm in Artificial Intelligence
Hill Climbing is a heuristic search algorithm used in Artificial Intelligence for mathematical
optimization problems. It is an iterative improvement algorithm that starts with an arbitrary
solution and continuously moves to neighboring states with higher value (better solution) until
no better neighbors exist.
In simple words, it is like “climbing uphill” step by step until reaching the top (local maximum).
Key Characteristics
1. Greedy Algorithm – Always chooses the best immediate neighbor.
2. Local Search Technique – Works using only local (neighboring) information.
3. Termination Condition – Stops when it reaches a state where no better neighbor exists.
⚙️ How It Works (Steps)
1. Start with a random solution (any point on the hill).
2. Look at neighboring solutions (nearby steps).
3. Move to the neighbor with the best improvement (highest point).
4. Repeat until you can’t find a better neighbor.
o That point is the "peak" (solution).
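A minimal sketch of these steps is shown below, assuming an invented one-dimensional value function whose single peak is at x = 3; a real problem would plug in its own state representation, neighbour generation, and evaluation function.

```python
# Minimal hill-climbing sketch on an invented one-dimensional "hill".
# The value function and step size are made up for illustration.

def value(x):
    return -(x - 3) ** 2 + 9            # a hill whose single peak is at x = 3

def hill_climb(start, step=0.5):
    current = start
    while True:
        # Look at the two neighbouring solutions.
        neighbours = [current + step, current - step]
        best = max(neighbours, key=value)
        if value(best) <= value(current):
            return current              # no better neighbour: we are at a peak
        current = best                  # greedy move uphill

print(hill_climb(start=0.0))            # climbs step by step towards x = 3
```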
Problems with Hill Climbing
1. Local Maximum: May stop at a peak that is not the highest (suboptimal).
2. Plateau: Flat area where neighbors have the same value → algorithm may get stuck.
3. Ridges: Areas higher than their surroundings that are hard to climb, because reaching the top requires a move that does not look like an immediate improvement.
Real-Life Examples
1. Route Optimization: Finding a good (though not always best) path in navigation.
2. Job Scheduling: Assigning jobs to machines to maximize efficiency.
3. Puzzle Solving (8-puzzle): Moving tiles toward the goal arrangement using heuristics.