Module 1
o Syllabus
Introduction – What is Artificial Intelligence (AI)? The Foundations of AI,
History of AI,
Applications of AI. Intelligent Agents – Agents and Environments, Good behavior: The concept
of rationality, nature of Environments, Structure of Agents.
o AI
Artificial Intelligence (AI) – Detailed Notes
1. What is Artificial Intelligence (AI)?
Definition (John McCarthy):
“AI is the science and engineering of making intelligent
machines, especially intelligent computer programs.”
General Meaning:
AI refers to the ability of a machine or computer program to
think, learn, and solve problems in ways similar to human
cognition.
AI is:
The branch of Computer Science concerned with
automating intelligent behavior.
The effort to make computers think—machines
with minds, in the full and literal sense.
The study of computations that make it possible to
perceive, reason, and act.
Other Definitions:

| Focus | Definition |
| --- | --- |
| Thinking Humanly | "The exciting new effort to make computers think ... machines with minds, in the full and literal sense." – Haugeland (1985) |
| Thinking Humanly | "[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning ..." – Bellman (1978) |
| Acting Humanly | "The art of creating machines that perform functions that require intelligence when performed by people." – Kurzweil (1990) |
| Acting Humanly | "The study of how to make computers do things at which, at the moment, people are better." – Rich and Knight (1991) |
| Thinking Rationally | "The study of mental faculties through the use of computational models." – Charniak & McDermott (1985) |
| Thinking Rationally | "The study of the computations that make it possible to perceive, reason, and act." – Winston (1992) |
| Acting Rationally | "Computational Intelligence is the study of the design of intelligent agents." – Poole et al. (1998) |
| Acting Rationally | "AI is concerned with intelligent behavior in artifacts." – Nilsson (1998) |

The human-centered definitions measure success in terms of fidelity to human performance, whereas the rationalist ones measure against an ideal performance measure, called rationality.
2. Approaches to Artificial Intelligence
There are four primary approaches, based on thought vs
behavior and human-like vs rational performance.
2.1. Thinking Humanly – The Cognitive Modeling Approach
Focuses on mimicking the internal mental processes
of humans.
Methods:
Introspection: Observing one’s own thought
process.
Psychological Experiments: Testing human
behavior.
Example:
GPS (General Problem Solver) by Newell &
Simon attempted to replicate human
reasoning.
🧠 Cognitive science combines AI + psychology to
understand how humans think.
2.2. Acting Humanly – The Turing Test Approach
Focuses on making machines behave like humans
externally.
Turing Test (Alan Turing, 1950):
A computer passes if a human interrogator
cannot tell whether the response is from a
human or a computer.
Capabilities Required to Pass Turing Test:
1. Natural Language Processing (NLP) – For
understanding and generating language.
2. Knowledge Representation – Storing and
organizing facts.
3. Automated Reasoning – Deriving
conclusions.
4. Machine Learning – Learning from new
data.
Total Turing Test Adds:
5. Computer Vision – Seeing and
interpreting visuals.
6. Robotics – Physical interaction with the
environment.
⚠️ AI researchers focus more on
understanding principles of intelligence
than just passing the Turing Test.
🤖 Turing Test & Total Turing Test –
Detailed Notes
🧠 1. Turing Test (1950)
✅ Proposed by:
o Alan Turing – British mathematician, considered the father of theoretical computer
science and artificial intelligence.
📌 Main Idea:
o If a machine can carry on a conversation indistinguishable from a human, it can be
considered intelligent.
🧪 The Original Turing Test Setup
o Imagine:
An interrogator (human) is in a room and types questions to two hidden
entities:
One is a human
The other is a machine (AI)
o The interrogator’s job is to figure out which is the machine, only by asking questions
and reading responses.
o 🔍 If the interrogator cannot reliably tell which is which, the machine is said to pass
the Turing Test.
🧾 Requirements to Pass the Turing Test
o To fool the human interrogator, the machine must demonstrate the following abilities:

| Capability | Purpose |
| --- | --- |
| Natural Language Processing | Understand and generate human language |
| 🧠 Knowledge Representation | Store and organize what it knows or hears |
| 🔍 Automated Reasoning | Use stored information to answer questions and draw new conclusions |
| 📈 Machine Learning | Adapt to new circumstances and detect patterns |

o ⚙️ Example: Chatbots like ELIZA (1966) or ChatGPT (today) are attempts to pass the Turing Test by mimicking human conversation.
🔁 2. Goals and Importance of the Turing Test
Purpose:
✅ Evaluate machine intelligence
✅ Encourage language understanding
✅ Benchmark for human-like behavior
🧩 3. Criticisms of the Turing Test

| Criticism | Explanation |
| --- | --- |
| 🎭 Deception over intelligence | Fooling the interrogator is rewarded more than genuine intelligence |
| 🧠 Ignores understanding | A machine may produce convincing answers without really comprehending them |
| 💬 Fails in non-linguistic tasks | Intelligence also involves perception and physical action, which the test never probes |
| ⌛ Time-limited | Hard to judge reliably from a short conversation |

🧠 Example:
A chatbot may pass by dodging tough questions or mimicking humor — but not truly "understand" them.
🧪 4. Total Turing Test (Extended Turing Test)
The Total Turing Test (TTT) was developed to address the limitations of the original test.
🤖 Adds More Human-Like Abilities:
o Besides conversation, the AI must also have:

| Extra Ability | Purpose |
| --- | --- |
| Computer Vision | Perceive objects in its surroundings |
| 🦿 Robotics | Manipulate objects and move about |

🎯 Total Turing Test = Turing Test + Perception + Action

| Component | Function |
| --- | --- |
| Language | Understand and generate natural language |
| Knowledge | Know facts and organize information |
| Reasoning | Solve problems and draw conclusions |
| Learning | Improve from experience |
| Vision | Recognize objects, faces, and scenes |
| Action | Manipulate objects and act physically |
🔍 Example Comparison

| Task | Turing Test | Total Turing Test |
| --- | --- | --- |
| Answering questions | ✅ | ✅ |
| Recognizing a face | ❌ | ✅ |
| Pointing to an object | ❌ | ✅ |
| Describing a picture | ❌ | ✅ |
| Typing jokes or stories | ✅ | ✅ |
🤖 Examples of AI in These Tests
✅ Passed or Almost Passed the Turing Test:
o Eugene Goostman (2014): Simulated a 13-year-old Ukrainian boy
o ChatGPT / GPT-based chatbots (recent): Can carry on human-like conversations
❌ Yet to Pass the Total Turing Test:
o No AI yet combines the vision, robotics, language, and reasoning needed to pass a Total Turing Test.
📚 Summary Table: Turing Test vs. Total Turing Test

| Feature | Turing Test | Total Turing Test |
| --- | --- | --- |
| Conversation | ✅ | ✅ |
| Natural Language | ✅ | ✅ |
| Reasoning & Knowledge | ✅ | ✅ |
| Machine Learning | ✅ | ✅ |
| Computer Vision | ❌ | ✅ |
| Robotics / Physical Action | ❌ | ✅ |
| Tests Human-Like Behavior | Verbal only | Verbal + physical |
| Status | Claimed passes exist (e.g., Eugene Goostman) | Not yet passed |
🌐 Real-Life Analogy
Turing Test: Like chatting with someone over WhatsApp. You can't see or hear them — you judge if they're a real person just by their messages.
Total Turing Test: Like being with someone in person — now you judge by talking, seeing, and watching their actions.
2.3. Thinking Rationally – The Laws of Thought Approach
Based on formal logic and reasoning processes.
Use correct logical reasoning (what's “right”
thinking).
Syllogism Example (Aristotle):
“Socrates is a man; all men are mortal;
therefore, Socrates is mortal.”
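The syllogism above can be mechanized. A minimal sketch in Python, assuming a toy string-based rule format (for illustration only, not a real logic library):

    facts = {"man(Socrates)"}
    rules = [("man(X)", "mortal(X)")]  # "all men are mortal"

    def forward_chain(facts, rules):
        # Apply each rule to every matching fact and add the conclusions.
        derived = set(facts)
        for premise, conclusion in rules:
            predicate = premise.split("(")[0]               # e.g. "man"
            for fact in list(derived):
                if fact.startswith(predicate + "("):
                    constant = fact[len(predicate) + 1:-1]  # e.g. "Socrates"
                    derived.add(conclusion.replace("X", constant))
        return derived

    print(forward_chain(facts, rules))  # {'man(Socrates)', 'mortal(Socrates)'}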
Challenges:
Difficult to formalize informal, real-world knowledge into logical symbols.
Computational resources may not be enough for large problem instances.
Logic-based programs are often slow for real-world use, so in practice they need heuristics (rules of thumb).
🧩 Example:
Logic-based expert systems in
medical diagnosis.
2.4. Acting Rationally – The Rational Agent Approach
Focus: Achieve the best possible outcome using
available knowledge.
An agent perceives its environment and acts upon
it.
Characteristics of Rational Agents:
Operate autonomously.
Perceive environment.
Adapt to changes.
Take initiative to achieve goals.
Advantages of this Approach:
More general than “laws of thought.”
Easier to evaluate scientifically.
Applicable to real-world problems even with
uncertainty.
3. Levels of AI

| Level | Description | Example |
| --- | --- | --- |
| Narrow AI | Performs one task better than humans | Google Assistant, chess engines |
| General AI | Can perform any intellectual task like a human | Theoretical; none exists yet |
| Strong AI | Surpasses human abilities in most fields | None; hypothetical |
4. Goals of AI
Build expert systems with decision-making abilities.
Mimic human intelligence: learning, understanding, and
adaptation.
Solve general-purpose tasks (future vision).
Build special-purpose AI (e.g., translation tools, chatbots,
autonomous vehicles).
In brief:
🧠 Create Expert Systems that learn, explain, and
help users.
🧍♂️Implement human intelligence in machines.
🎮 Solve complex real-world problems: language,
vision, planning, decision-making.
⚙️ Special-purpose AI: narrow systems (chess,
translation, navigation).
🤖 General-purpose AI: future goal – robots that
function like humans.
5. Applications of AI (Special-Purpose Examples)

| Area | Application |
| --- | --- |
| Games | Chess, poker-playing bots |
| Healthcare | Diagnosis systems, treatment planning |
| Automotive | Self-driving cars |
| Security | Surveillance, anomaly detection |
| Natural Language | Chatbots, voice assistants (e.g., Siri, Alexa) |
| Finance | Fraud detection, trading algorithms |
| Education | Personalized learning platforms |
6. Advantages of Artificial Intelligence
✅ High Accuracy – Less prone to errors.
✅ High-Speed – Tasks done faster than humans.
✅ High Reliability – 24x7 operation without fatigue.
✅ Works in Risky Environments – e.g., nuclear plants, space
missions.
✅ Digital Assistants – Simplify daily tasks (e.g., Siri, Google
Assistant).
✅ Useful in Public Utilities – Traffic systems, power grids,
etc.
7. Disadvantages of Artificial Intelligence
❌ High Cost – Expensive to develop and maintain.
❌ Lack of Creativity – Cannot innovate or imagine.
❌ No Emotions – Cannot relate or empathize.
❌ Dependence on Machines – Leads to laziness or over-
reliance.
❌ Job Displacement – Replaces repetitive human jobs.
8. Summary Diagram of AI Approaches

|  | THINK | ACT |
| --- | --- | --- |
| HUMAN | Think like humans (Cognitive Modeling) 🧠 | Act like humans (Turing Test) |
| RATIONAL | Think rationally (Laws of Thought) 📘 | Act rationally (Rational Agent) |
o Foundations of AI
Foundations of Artificial Intelligence (AI)
AI draws from multiple disciplines. Each field contributes tools,
concepts, and perspectives that help build intelligent agents.
1. Philosophy – Thinking, Reasoning, and Knowledge
Key Questions:
Can formal rules draw valid conclusions?
How does the mind arise from the brain?
Where does knowledge come from?
How does knowledge lead to action?
Historical Contributions:
Aristotle (384–322 B.C.) – Laws of rational thought,
syllogisms for mechanical reasoning.
Thomas Hobbes (1651) – Mind as a mechanical
system (“artificial animal”).
Important Terms:
Rationalism – Reasoning as the basis of
understanding.
Dualism – Mind/soul separate from physical laws.
Materialism – Mind as a physical process in the
brain.
Induction – Learning general rules from repeated
experiences.
Logical Positivism – Knowledge linked to observable
evidence.
Confirmation Theory – Knowledge acquisition from
experience.
2. Mathematics – Logic, Computation, Probability
Key Questions:
What are the formal rules for valid conclusions?
What can be computed?
How to reason with uncertainty?
Contributions:
Logic:
George Boole – Boolean logic (1847).
Gottlob Frege – First-order logic (1879).
Alfred Tarski – Linking logic to real-world
objects.
Computation: Theoretical foundations for
algorithms.
Probability:
Thomas Bayes – Rule for updating beliefs
with evidence.
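For reference, Bayes' rule in its standard form, where H is a hypothesis and E the observed evidence:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$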
3. Economics – Decision-Making and Utility
Key Questions:
How to maximize payoff?
How to decide under uncertainty or with others
involved?
Contributions:
Adam Smith (1776) – Agents maximizing economic
well-being.
Decision Theory – Combines probability + utility for
optimal choices under uncertainty.
4. Neuroscience – Biological Information Processing
Studies how the brain processes information.
Shows that brain damage can impair thinking → evidence
for mind–brain link.
5. Psychology – Human & Animal Behavior
Hermann von Helmholtz – Applied science to vision study.
Wilhelm Wundt – Experimental psychology pioneer.
Models humans/animals as information-processing
machines.
6. Computer Engineering – Building the Artifact
Provides the hardware/software for AI systems.
Advances in speed & memory make AI applications possible.
7. Control Theory & Cybernetics – Feedback-Based Action
Ktesibios (250 B.C.) – First self-regulating machine (water
clock).
Designs systems that act optimally using feedback.
8. Linguistics – Language and Thought
B.F. Skinner (1957) – Behaviorist view of language learning.
Leads to Computational Linguistics / Natural Language
Processing (NLP).
✅ Summary:
AI’s foundations come from Philosophy (reasoning),
Mathematics (logic, probability), Economics (decision-
making), Neuroscience (brain function), Psychology
(behavior), Computer Engineering (hardware/software),
Control Theory (feedback systems), and Linguistics
(language understanding).
o History of AI
History of Artificial Intelligence
Although Artificial Intelligence (AI) seems like a modern concept, its
roots go back centuries — even appearing in ancient Greek and
Egyptian myths about mechanical men.
Over time, AI evolved through key milestones, winters, and
breakthroughs.
1. Maturation of AI (1943–1952)
1943 – Warren McCulloch & Walter Pitts:
Proposed the first model of artificial neurons.
1949 – Donald Hebb:
Introduced Hebbian Learning Rule — method for
updating connection strength between neurons.
1950 – Alan Turing:
Published “Computing Machinery and Intelligence”.
Proposed the Turing Test — checks a machine’s
ability to exhibit intelligent behavior equivalent to a
human.
2. Birth of AI (1952–1956)
1955 – Allen Newell & Herbert A. Simon:
Created the first AI program Logic Theorist.
Proved 38 of the first 52 theorems in Principia Mathematica, often finding more elegant proofs.
1956 – John McCarthy:
Coined the term Artificial Intelligence at the
Dartmouth Conference.
AI became an academic field.
High-level languages like FORTRAN, LISP, COBOL
were invented.
High enthusiasm for AI research.
3. The Golden Years – Early Enthusiasm (1956–1974)
1966 – Joseph Weizenbaum:
Created ELIZA, the first chatbot.
1972 – Japan built WABOT-1, the first intelligent humanoid robot.
4. First AI Winter (1974–1980)
Funding from government dropped due to slow progress.
Public and scientific interest declined.
5. Boom of AI (1980–1987)
1980 – Introduction of Expert Systems:
Programs that mimic human expert decision-
making.
1980 – First national conference of American Association
for Artificial Intelligence held at Stanford University.
6. Second AI Winter (1987–1993)
Investors and governments reduced funding again due to
high costs and limited efficiency.
Example: XCON expert system was expensive.
7. Emergence of Intelligent Agents (1993–2011)
1997 – IBM’s Deep Blue defeated world chess champion
Garry Kasparov.
2002 – AI entered homes via Roomba (robot vacuum).
2006 – AI used by major companies like Facebook, Twitter,
Netflix.
8. Deep Learning, Big Data, and AGI Era (2011–Present)
2011 – IBM’s Watson won Jeopardy! quiz show, showcasing
natural language understanding.
2012 – Google launched Google Now, a predictive
information assistant.
2014 – Chatbot Eugene Goostman passed a version of the
Turing Test.
2018 – IBM’s Project Debater argued with human experts
successfully.
Google’s Duplex AI made realistic phone calls to
book appointments.
Present – AI integrated into daily life: deep learning, big
data, and data science drive innovations from companies
like Google, Facebook, IBM, Amazon.
✅ Key Trends:
Transition from symbolic logic to neural networks.
Shift from academic experiments to real-world
applications.
Rise of machine learning and deep learning for
complex problem-solving.
Increasing human-AI collaboration in business,
homes, and research.
o Applications of AI
Applications of Artificial Intelligence
1. Gaming
AI plays a key role in strategic games such as chess, poker,
tic-tac-toe, etc.
The machine can evaluate a large number of possible positions using heuristic knowledge.
Heuristic knowledge means practical, experience-based knowledge or rules of thumb that help in problem-solving, even if they don't guarantee a perfect solution.
Example: In chess, AI evaluates many possible moves ahead and anticipates the opponent's strategy.
2. Natural Language Processing (NLP)
Enables interaction with computers in natural human
language.
AI can understand, interpret, and respond to speech
or text in human languages.
Example: Voice assistants like Siri or Google
Assistant process spoken commands.
3. Expert Systems
These integrate software, machines, and specialized
knowledge to provide reasoning and advice.
They explain their reasoning process and give
recommendations.
Example: MYCIN (medical diagnosis system) advises
doctors on bacterial infections.
o Agents and Environments
🌟 2.1 AGENTS AND ENVIRONMENTS
🔹 Definition of an Agent
An agent is anything that:
Perceives its environment through sensors
Acts on the environment through actuators
“Agent = Architecture + Agent Program”
🔹 Key Components
| Term | Description | Examples |
| --- | --- | --- |
| Environment | The surroundings in which the agent operates | A room for a vacuum cleaner |
| Sensor | Device used by the agent to perceive the environment | Eyes (human), keyboard (software agent) |
| Actuator | Mechanism used to act upon the environment | Legs (human), display (software agent) |
| Percept | The agent's input at a given instant | [A, Dirty] |
| Percept Sequence | Complete history of everything the agent has ever perceived | [A, Dirty], [B, Clean] |
| Agent Function | Maps percept sequences to actions (abstract concept) | If square is dirty → Suck |
| Agent Program | Implementation of the agent function running on an architecture | Code in Python |
| Architecture | The hardware platform (e.g., robot, software, PC) | Robotic vacuum |
🔹 Example Agents

| Agent Type | Sensors | Actuators |
| --- | --- | --- |
| Human Agent | Eyes, ears, skin, etc. | Hands, legs, mouth |
| Robotic Agent | Cameras, infrared range finders | Motors, robotic arms |
| Software Agent | Keystrokes, file contents, network packets | Displaying output, sending files/packets |
🔹 How an Agent Works
An agent perceives the environment through sensors and
acts using actuators.
It may keep track of percepts through a percept sequence
and use this for decision-making.
Its choice of action can depend on the entire percept history, but not on anything it has not perceived.
🔹 Agent Function vs Agent Program

| Aspect | Agent Function | Agent Program |
| --- | --- | --- |
| Definition | Abstract mathematical mapping | Practical implementation |
| Maps | Percept sequence → action | Real code that runs on a machine |
| Example | Table: [A, Dirty] → Suck | if dirty: suck; else: move |
| Execution platform | Doesn't run anywhere (abstract) | Runs on an architecture |
🔹 Vacuum Cleaner World Example
📍Environment:
Two squares: A and B
Each can be clean or dirty
📍Percepts:
[Current Location, Dirt Status]
📍Actions:
Left, Right, Suck, NoOp (Do Nothing)
📍Simple Agent Function:

| Percept Sequence | Action |
| --- | --- |
| [A, Clean] | Right |
| [A, Dirty] | Suck |
| [B, Clean] | Left |
| [B, Dirty] | Suck |
| [A, Clean], [A, Dirty] | Suck |
| [A, Clean], [A, Clean], [A, Dirty] | Suck |
📍Simple Reflex Agent Program:

    def reflex_vacuum_agent(location, status):
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        elif location == "B":
            return "Left"
This agent has no memory or learning — just reacts
to current input.
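To watch it run, here is a minimal simulation sketch; the two-square world dictionary and the step loop are assumptions added for illustration, reusing reflex_vacuum_agent from above:

    world = {"dirt": {"A": True, "B": True}, "location": "A"}

    for step in range(4):
        loc = world["location"]
        status = "Dirty" if world["dirt"][loc] else "Clean"
        action = reflex_vacuum_agent(loc, status)
        print(f"step {step}: at {loc} ({status}) -> {action}")
        if action == "Suck":
            world["dirt"][loc] = False      # the square becomes clean
        elif action == "Right":
            world["location"] = "B"
        else:
            world["location"] = "A"

In four steps the agent sucks at A, moves right, sucks at B, and moves left, exactly as the condition-action rules dictate.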
🔹 Randomization in Agents
Some agents randomly select actions (e.g., in uncertain
environments)
We may need to repeat percept sequences to find
probabilities
Surprisingly, randomness can lead to intelligent behavior in
some cases
🔹 Purpose of the Agent Model
The concept of an "agent" is used to analyze intelligent
systems
Not every system needs to be considered as an agent (e.g.,
calculator = not useful to model as an agent)
💡 INTELLIGENT AGENTS
✅ Definition
An intelligent agent is an autonomous entity that:
Perceives its environment
Uses sensors and actuators
Acts rationally to achieve its goals
May learn from experience
✅ Features of Intelligence
An intelligent system should be able to:
Calculate
Reason
Learn from experience
Perceive relationships and analogies
Solve problems
Classify and generalize
Adapt to new situations
Understand complex ideas
✅ Rules for an AI Agent
1. Must perceive the environment.
2. Must use perception to make decisions.
3. Must take actions based on those decisions.
4. Must act rationally (maximize expected outcome).
✅ Examples of Intelligent Agents
| Agent Type | Sensors | Actuators |
| --- | --- | --- |
| Software Agent | Keystrokes, file contents, packets | Displays output, sends packets |
| Human Agent | Eyes, ears, etc. | Legs, mouth, hands |
| Robotic Agent | Cameras, infrared sensors | Motors, grippers |
| Thermostat | Temperature sensor | Heater/cooler switch |
💠 RATIONAL AGENTS
✅ What is a Rational Agent?
A rational agent is one that:
Has clear preferences
Models uncertainty
Chooses actions to maximize performance
Is said to do the "right thing"
AI = Creation of rational agents
Used in game theory, decision theory, and real-world AI
systems
✅ Performance Measure
Used to evaluate how good an agent is
Depends on the task
Example:
For a vacuum agent:
→ Total amount of dirt cleaned over time
🔁 Summary of Concepts

| Term | Meaning |
| --- | --- |
| Agent | Entity that perceives and acts |
| Environment | External context where the agent operates |
| Sensor | Tool for perception |
| Actuator | Tool for action |
| Percept | Current input |
| Percept Sequence | Complete input history |
| Agent Function | Abstract mapping (percepts → action) |
| Agent Program | Implementation of the agent function |
| Architecture | Physical system where the program runs |
| Rational Agent | Acts to maximize performance |
| Intelligent Agent | Learns, adapts, reasons, acts rationally |
📊 Diagram: Agent-Environment Interaction

              +-------------------+
              |    ENVIRONMENT    |
              +-------------------+
                 |             ^
       Percepts  |             |  Actions
      (Sensors)  v             |  (Actuators)
              +-------------------+
              |       AGENT       |
              +-------------------+
✅ Final Notes
Not all systems need to be modeled as agents (e.g., calculator)
The agent-based approach is a powerful way to design AI systems
The vacuum cleaner is a simple example but can be extended for
complex AI ideas
o Good behavior
🧠 2.2 GOOD BEHAVIOR: THE CONCEPT OF RATIONALITY
🔹 What is a Rational Agent?
A rational agent is one that does the right thing.
“Right thing” means:
→ The agent takes actions that maximize its
success, based on:
What it knows from perception
What it is able to do
How success is measured
✅ Formal Definition of a Rational Agent
For each possible percept sequence, a rational
agent selects the action that is expected to
maximize its performance measure, given:
The percept sequence observed so far
Any built-in prior knowledge
Available actions
📌 Performance Measure
❓What is it?
A performance measure is a criterion for
determining how well an agent has done.
It evaluates the sequence of environment states
caused by the agent's actions.
❗Important:
Objective, not subjective (should not rely on agent’s
self-opinion).
Designed by the agent designer, not by the agent
itself.
🧹 Example: Vacuum Cleaner Agent
⚡Wrong performance measure:
“Amount of dirt cleaned”
→ Agent may cheat: clean → dump
dirt again → re-clean
✅Better performance measure:
“One point for each clean square at each
time step”
Add penalties for:
Electricity consumption
Noise generated
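A tiny sketch of that measure in code; the penalty weights are made-up values for illustration:

    def score_step(is_dirty, power_used, noise_made):
        # One point for each clean square at this time step, minus penalties.
        points = sum(1 for dirty in is_dirty.values() if not dirty)
        return points - 0.1 * power_used - 0.05 * noise_made

    # Both squares clean, 1 unit of power, no noise:
    # score_step({"A": False, "B": False}, 1, 0) -> 1.9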
Example: Self-driving Car
Performance Measures:
Time to reach destination (minimize)
Passenger safety (maximize)
Obeying traffic laws
Predictability of behavior
Comfort of ride
🎮 Example: Game-playing Agent
Performance Measures:
Win/loss ratio
Robustness against different opponents
Unpredictability (to confuse opponent)
🎯 What Determines Rationality?
Rational behavior at any time depends on:

| # | Factor | Description |
| --- | --- | --- |
| 1 | Performance Measure | Defines what counts as success |
| 2 | Prior Knowledge | What the agent knows about the environment |
| 3 | Available Actions | What the agent is capable of doing |
| 4 | Percept Sequence | What the agent has perceived so far |
❌ Rationality ≠ Omniscience
Omniscient agent: Knows actual outcomes of actions ahead
of time (impossible in real life).
Rational agent: Chooses action that gives best expected
outcome, not guaranteed best.
🎓 Example:
You're crossing the street after checking there’s no traffic.
Suddenly, a plane part falls and hits you.
Was it irrational? No! You did the right thing based on your
knowledge.
You maximized expected performance, not actual outcome.
🔍 Information Gathering & Exploration
📥 Information Gathering:
Rational agents should take actions that improve
future decisions.
E.g., Look both ways before crossing a road.
🚶 Exploration:
In unknown environments, agents must explore to
learn.
E.g., a vacuum agent must explore to find dirty
squares in a new room.
📚 Learning and Adaptation
A rational agent should also learn from:
Its percepts
Its own past actions
🐞 Inflexible Agents (No Learning)

| Example | Description |
| --- | --- |
| Dung Beetle | If the dung ball is removed from its grasp mid-trip, it carries on and still "plugs" its nest with the missing ball |
| Sphex Wasp | Repeats its burrow-checking routine blindly even after repeated failures (move the caterpillar a few inches and it starts the whole procedure again) |

These insects have innate behavior with no learning or flexibility.
✅ Intelligent Agent Must Be:
Autonomous: Acts based on its own experience, not
only pre-programmed rules
Flexible: Can adapt to changes in the environment
Learning-capable: Improves over time with
experience
Phases of Agent Computation
1. Design Phase: Designer gives prior knowledge & behavior
2. Action Selection: Agent chooses action based on current
knowledge
3. Learning Phase: Agent updates knowledge & improves
over time
As in evolution: give the agent enough built-in knowledge to survive, then let it learn on its own.
🔁 Rational Agent vs Omniscient Agent

| Aspect | Rational Agent | Omniscient Agent |
| --- | --- | --- |
| Based on | Expected performance | Actual outcome |
| Knows future? | No | Yes (hypothetically) |
| Realistic? | Yes | No (not possible in reality) |
| Improves over time? | Yes, via learning | Doesn't need to improve |
📦 Summary Table

| Concept | Explanation | Example |
| --- | --- | --- |
| Rational Agent | Does the right thing based on percepts, knowledge, and available actions | Vacuum agent acting on location and dirt status |
| Performance Measure | Metric used to evaluate success | Points for clean squares |
| Rationality Factors | Performance measure, prior knowledge, percepts, actions | The 4 key factors above |
| Not Omniscience | Acts on what it knows, not on perfect future info | Crossing the road despite unforeseeable events |
| Information Gathering | Acts to learn more about the world | Looking before crossing |
| Exploration | Seeks unknown parts of the environment | Roaming an unknown room |
| Learning | Learns from experience | Better dirt prediction |
| Autonomy | Less dependence on prior design | Learns new behaviors |
| Bad Example | Sphex wasp, dung beetle (rigid behavior) | No reaction to change |
| Good Design | Built-in reflexes + ability to learn | Like a young animal: instincts plus learning |
o Task Environment
🔹 What is a Task Environment?
A Task Environment refers to the "problem" to which an intelligent
agent is the "solution."
Designing an intelligent agent starts with understanding the task
environment in detail.
The task environment includes:
Performance measure
Environment
Actuators
Sensors
This is called the PEAS description.
PEAS Representation and Task Environment Properties
The design of an Artificial Intelligence (AI) agent or rational agent
begins by specifying its task environment as fully as possible. This
specification is often done using the PEAS representation model.
Understanding the nature of the task environment is crucial because
it directly affects the appropriate design for the agent program.
| Component | Meaning | Example (Automated Taxi) |
| --- | --- | --- |
| P – Performance Measure | Criteria to judge the agent's success | Safe, fast, legal trip; maximize profit |
| E – Environment | Everything the agent interacts with | Roads, traffic, pedestrians |
| A – Actuators | Tools the agent uses to act | Steering, accelerator, brake |
| S – Sensors | Tools to sense the environment | Cameras, GPS, sonar, speedometer |
1. PEAS Representation
PEAS is a model used to describe the properties of an AI
agent and its environment. It stands for four key
components:
P: Performance Measure: This defines the objective
for the success of an agent's behavior. It specifies
what the agent should strive to achieve or
maximize.
E: Environment: This refers to the world in which
the agent operates. It includes all the elements with
which the agent interacts.
A: Actuators: These are the mechanisms through
which the agent acts upon the environment. They
are the means by which the agent executes its
decisions.
S: Sensors: These are the perceptual inputs that
allow the agent to observe its environment. They
provide the agent with information about the
current state of the world.
Example: PEAS description for an Automated Taxi Driver
Agent
Agent Type: Taxi driver
Performance Measure:
Safe trip
Fast trip
Legal trip (minimizing violations of traffic
laws)
Comfortable trip (maximizing passenger
comfort)
Maximize profits
Other desirable qualities include getting to
the correct destination, minimizing fuel
consumption and wear and tear, and
minimizing trip time and/or cost. These
goals often conflict, requiring tradeoffs.
Environment:
Roads (ranging from rural lanes and urban
alleys to 12-lane freeways)
Other traffic
Pedestrians
Customers (potential and actual
passengers)
Can also include stray animals, road works,
police cars, puddles, and potholes.
The environment might vary, for example,
operating in Southern California (seldom
snow) or Alaska (often snow), or driving on
the right/left side of the road. A more
restricted environment simplifies the design
problem.
Actuators:
Steering
Accelerator
Brake
Signal
Horn
Display (for talking to passengers or
communicating with other vehicles, e.g., via
a display screen or voice synthesizer)
Sensors:
Cameras (one or more controllable TV
cameras)
Sonar (to detect distances to other cars and
obstacles)
Speedometer
GPS (Global Positioning System, for accurate
position information with respect to an
electronic map)
Odometer
Accelerometer (to control the vehicle
properly, especially on curves)
Engine sensors (to know the mechanical
state of the vehicle)
Keyboard (or microphone, for the
passenger to request a destination)
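As a compact (non-textbook) way to write such a description down in code, here is a small dataclass sketch populated with the taxi example above:

    from dataclasses import dataclass

    @dataclass
    class PEAS:
        # One field per PEAS component.
        performance: list
        environment: list
        actuators: list
        sensors: list

    taxi = PEAS(
        performance=["safe trip", "fast trip", "legal trip",
                     "comfortable trip", "maximize profit"],
        environment=["roads", "other traffic", "pedestrians", "customers"],
        actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
        sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
                 "accelerometer", "engine sensors", "keyboard"],
    )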
Other PEAS Examples:
Medical diagnosis system:
Performance Measure: Healthy patient,
minimize costs, lawsuits.
Environment: Patient, hospital, staff.
Actuators: Display questions, tests,
diagnoses, treatments, referrals.
Sensors: Keyboard entry of symptoms,
findings, patient's answers.
Satellite image analysis system:
Performance Measure: Correct image
categorization.
Environment: Downlink from orbiting
satellite.
Actuators: Display categorization of scene.
Sensors: Color pixel arrays.
Part-picking robot:
Performance Measure: Percentage of parts
in correct bins.
Environment: Conveyor belt with parts;
bins.
Actuators: Jointed arm and hand.
Sensors: Camera, joint angle sensors.
Refinery controller:
Performance Measure: Maximize purity,
yield, safety.
Environment: Refinery, operators.
Actuators: Valves, pumps, heaters, displays.
Sensors: Temperature, pressure, chemical
sensors.
Interactive English tutor:
Performance Measure: Maximize student's
score on test.
Environment: Set of students, testing
agency.
Actuators: Display exercises, suggestions,
corrections.
Sensors: Keyboard entry.
2. Task Environment Types
Task environments can be categorized along several
dimensions, which largely determine the appropriate agent
design.
Fully Observable vs. Partially Observable:
Fully Observable: If an agent's sensors
provide access to the complete state of the
environment at each point in time, it is fully
observable. An environment is effectively
fully observable if the sensors detect all
aspects relevant to the choice of action,
considering the performance measure.
Convenience: Fully observable
environments are convenient
because the agent does not need to
maintain any internal state to track
the world.
Examples: A crossword puzzle,
Chess (mostly), Image analysis.
Partially Observable: An environment might
be partially observable due to noisy or
inaccurate sensors, or because parts of the
state are simply missing from the sensor
data. If an agent has no sensors, the
environment is unobservable.
Examples: A vacuum agent with
only a local dirt sensor (cannot see
other squares), an automated taxi
(cannot see what other drivers are
thinking), Poker, Backgammon,
Medical diagnosis, Part-picking
robot, Refinery controller,
Interactive English tutor.
Note: If an environment is partially
observable, it can appear to be
stochastic.
Single Agent vs. Multiagent:
Single Agent: If only one agent is involved
and operating by itself in an environment.
Examples: Crossword puzzle (when
solved by itself), Medical diagnosis,
Image analysis, Part-picking robot,
Refinery controller.
Multiagent: If multiple agents are
operating in an environment. The key
distinction for treating an object as another
agent is whether its behavior is best
described as maximizing a performance
measure that depends on the current
agent's behavior.
Competitive Multiagent: Agents are
trying to maximize their own
performance measure, which often
minimizes another agent's
performance measure.
Example: Chess (opponent
tries to minimize agent A's
performance measure),
Poker, Backgammon.
Partially Cooperative Multiagent:
Agents share some common goals.
Example: Taxi driving
(avoiding collisions
maximizes performance for
all agents).
Partially Competitive: Agents also
have conflicting goals.
Example: Taxi driving (only
one car can occupy a
parking space).
Note: Multiagent environments
often lead to complex design
problems, where communication or
even stochastic behavior (to avoid
predictability) can be rational.
Other Examples: Interactive English
tutor.
Deterministic vs. Stochastic:
Deterministic: If the next state of the
environment is completely determined by
the current state and the action executed
by the agent.
Convenience: In a fully observable,
deterministic environment, an
agent does not need to worry about
uncertainty.
Examples: Crossword puzzle, Chess
(mostly, except for rare rules about
history), Part-picking robot.
Strategic: An environment is
strategic if it is deterministic except
for the actions of other agents.
Stochastic: If the next state of the
environment is not completely determined
by the current state and the agent's action.
This generally implies that uncertainty
about outcomes is quantified in terms of
probabilities.
Appearance: If an environment is
partially observable, it can appear
to be stochastic.
Uncertain: An environment is
uncertain if it is not fully observable
or not deterministic.
Nondeterministic: Actions are
characterized by their possible
outcomes, but no probabilities are
attached to them.
Examples: Taxi driving (cannot
predict traffic exactly; tires blow
out), Vacuum world (if dirt appears
randomly or suction is unreliable),
Poker, Backgammon, Medical
diagnosis, Refinery controller,
Interactive English tutor. Most real
situations are complex and must be
treated as stochastic for practical
purposes.
Episodic vs. Sequential:
Episodic: The agent's experience is divided
into atomic episodes. In each episode, the
agent receives a percept and performs a
single action. Crucially, the next episode
does not depend on the actions taken in
previous episodes.
Simplicity: Episodic environments
are much simpler because the agent
does not need to think ahead.
Examples: Many classification tasks,
an agent spotting defective parts on
an assembly line (each decision is
independent), Image analysis, Part-
picking robot, Refinery controller.
Note: Some environments can be
episodic at higher levels (e.g., a
chess tournament is a sequence of
games, each game is an episode)
even if decisions within an episode
are sequential.
Sequential: The current decision could
affect all future decisions. Short-term
actions can have long-term consequences.
Complexity: Requires the agent to
think ahead.
Examples: Chess, Taxi driving,
Crossword puzzle, Poker,
Backgammon, Medical diagnosis,
Interactive English tutor.
Static vs. Dynamic:
Dynamic: If the environment can change
while an agent is deliberating. If the agent
hasn't decided yet, it counts as deciding to
do nothing.
Examples: Taxi driving (other cars
and taxi keep moving while the
algorithm deliberates), Medical
diagnosis, Image analysis, Part-
picking robot, Refinery controller,
Interactive English tutor.
Static: If the environment does not change
while the agent is thinking. The passage of
time as the agent deliberates is irrelevant.
Examples: Crossword puzzles.
Semi-dynamic: If the environment itself
does not change with the passage of time,
but the agent's performance score does.
Examples: Chess, when played with
a clock, Poker, Backgammon.
Discrete vs. Continuous:
This distinction can apply to the state of the
environment, how time is handled, and the
percepts and actions of the agent.
Discrete: If the number of distinct percepts
and actions is limited.
Examples: Chess (finite number of
distinct states, discrete percepts
and actions), Crossword puzzle,
Poker, Backgammon, Interactive
English tutor.
Continuous: If the percepts, actions, or
state variables sweep through a range of
continuous values and do so smoothly over
time.
Examples: Taxi driving (speed,
location, steering angles are
continuous), Medical diagnosis,
Image analysis, Part-picking robot,
Refinery controller.
Note: Input from digital cameras is
strictly discrete but often treated as
representing continuously varying
intensities and locations.
Known vs. Unknown:
Strictly, this distinction refers not to the environment itself but to the agent's (or designer's) knowledge of the environment's rules: the outcomes (or outcome probabilities) of all actions.
Known:
Agent knows the rules of the
environment and outcomes of
actions.
✅ Example: Board games with fixed
rules
Unknown:
Agent must learn the environment
behavior.
✅ Example: Real-world navigation in
new cities
3. Complexity of Environments
The hardest case for agent design is an environment that is
partially observable, stochastic, sequential, dynamic,
continuous, and multiagent.
Taxi driving is difficult because it exhibits almost all of these
characteristics.
For practical purposes, many real-world situations are so
complex that whether they are truly deterministic is
debatable; they must often be treated as stochastic.
4. Environment Class and Generator
Environment Class: To evaluate an agent effectively,
experiments are often carried out not just in a single
environment but across an "environment class". This
involves running many simulations with different conditions
(e.g., traffic, lighting, weather for a taxi driver) to ensure the
agent's design is robust and effective in general, not just for
a specific scenario.
Environment Generator: A tool that selects particular
environments (with certain likelihoods) from an
environment class in which to run the agent. For example, a
vacuum environment generator initializes dirt patterns and
agent locations randomly.
A rational agent for a given environment class is designed to
maximize the average performance over that class.
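A minimal sketch of these two ideas in Python; the vacuum-world encoding and the scoring loop are illustrative assumptions (reflex_vacuum_agent is the function defined in section 2.1):

    import random

    def make_vacuum_env(seed=None):
        # Environment generator: random dirt pattern and agent location.
        rng = random.Random(seed)
        return {"dirt": {"A": rng.random() < 0.5, "B": rng.random() < 0.5},
                "location": rng.choice(["A", "B"])}

    def run(agent, env, steps=10):
        # Performance measure: one point per clean square per time step.
        score = 0
        for _ in range(steps):
            loc = env["location"]
            action = agent(loc, "Dirty" if env["dirt"][loc] else "Clean")
            if action == "Suck":
                env["dirt"][loc] = False
            elif action == "Right":
                env["location"] = "B"
            else:
                env["location"] = "A"
            score += sum(1 for dirty in env["dirt"].values() if not dirty)
        return score

    # Average performance over the environment class (100 sampled environments):
    # avg = sum(run(reflex_vacuum_agent, make_vacuum_env(s)) for s in range(100)) / 100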
| Task | Observable | Agents | Determinism | Episodic/Sequential | Static/Dynamic | Discrete/Continuous |
| --- | --- | --- | --- | --- | --- | --- |
| Crossword puzzle | Fully | Single | Deterministic | Sequential | Static | Discrete |
| Chess (with clock) | Fully | Multi | Strategic | Sequential | Semi-dynamic | Discrete |
| Poker | Partially | Multi | Stochastic | Sequential | Static | Discrete |
| Backgammon | Fully | Multi | Stochastic | Sequential | Static | Discrete |
| Taxi driving | Partially | Multi | Stochastic | Sequential | Dynamic | Continuous |
| Medical diagnosis | Partially | Single | Stochastic | Sequential | Dynamic | Continuous |
| Image analysis | Fully | Single | Deterministic | Episodic | Static | Continuous |
| Part-picking robot | Partially | Single | Deterministic | Episodic | Dynamic | Continuous |
| Refinery controller | Partially | Single | Stochastic | Sequential | Dynamic | Continuous |
| Interactive English tutor | Partially | Multi | Stochastic | Sequential | Dynamic | Discrete |
o Structure of Agents
📘 Chapter 2.4: The Structure of Agents – Detailed Notes
🌟 Overview
We now move from agent behavior (what it does) to its
internal structure — how it decides what to do.
🧠 Agent = Architecture + Program
Architecture: Physical machinery (e.g., sensors,
actuators, computing platform).
Agent Program: Software that implements the
agent function — maps percepts to actions.
Example: If a program recommends action "WALK",
the architecture must have legs!
🧾 Agent Function vs Agent Program
| Aspect | Agent Function | Agent Program |
| --- | --- | --- |
| Input | Entire percept history | Current percept |
| Output | Action | Action |
| Representation | Abstract mathematical mapping | Executable code |
| Feasibility | Often infeasible due to infinite possibilities | Practical |
Agents must maintain internal memory if actions depend on
percept history.
🧮 1. Table-Driven Agent (Impractical Example)
    percepts = []  # static: the entire percept sequence, kept forever

    def table_driven_agent(percept, table):
        # Append the percept, then look up the action for the whole sequence.
        percepts.append(percept)
        return table.get(tuple(percepts))
Stores the entire percept sequence and looks up an
action.
This implements the agent function directly.
❌ Why Table-Driven Agents Fail
Let:
P = set of possible percepts
T = lifetime of the agent (total number of percepts it will receive)
Then the lookup table needs $\sum_{t=1}^{T} |P|^t$ entries 😵
Example: An autonomous taxi with video input would need more entries than the number of atoms in the observable universe.
Problems:
1. Too large to store
2. Impossible to design manually
3. Can't learn all entries from experience
4. No guidance for filling the table
✅ Still useful theoretically to understand agent
behavior.
🧠 2. Simple Reflex Agents
Action is based only on current percept.
Uses condition-action rules:
Example: if car-in-front-is-braking then initiate-
braking
💡 Example: Vacuum World
    def reflex_vacuum_agent(location, status):
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        else:
            return "Left"
🔁 General Form:
    def simple_reflex_agent(percept, rules):
        # interpret_input and rule_match are abstract helpers (left unspecified).
        state = interpret_input(percept)   # describe the current situation
        rule = rule_match(state, rules)    # find a rule whose condition matches
        return rule.action
🖼 Diagram: Simple Reflex Agent Structure
(Rectangles = internal state, Ovals = rules & inputs)

    Percept → +------------------+
              |  Interpret Input | → State
              +------------------+
                       ↓
              +------------------+
              |  Rule Matching   |
              +------------------+
                       ↓
              +------------------+
              |   Rule Action    | → Action
              +------------------+
⚠ Limitations:
Works only in fully observable environments.
Fails in partially observable environments.
May enter infinite loops if not enough information
is available.
🎲 Randomized Reflex Agents
Adds randomness to actions to escape infinite loops.
Helps in uncertain or partially observable
environments.
Example: Vacuum agent with only dirt sensor flips a
coin to choose direction when no dirt is perceived.
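A minimal sketch of that coin-flip idea (the function name is an illustrative assumption):

    import random

    def randomized_reflex_vacuum_agent(location, status):
        # Suck when dirty; otherwise move in a random direction to escape loops.
        if status == "Dirty":
            return "Suck"
        return random.choice(["Left", "Right"])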
🧠 3. Model-Based Reflex Agents
Maintains internal state = a model of the unobserved world.
Requires:
1. Knowledge of how the world evolves
2. Knowledge of how actions affect the world
📘 Example:
In driving, agent stores previous video frame to
detect if brake lights turned on.
🧩 Structure:
    def reflex_agent_with_state(percept, state, last_action, rules):
        # update_state and rule_match are abstract helpers (left unspecified).
        state = update_state(state, last_action, percept)  # model how the world evolves
        rule = rule_match(state, rules)
        action = rule.action
        return action, state  # return the new state so the caller can remember it
🖼 Diagram:

    [Percept] + [Previous State] → UPDATE-STATE → Current State
                                        ↓
                                   RULE-MATCH
                                        ↓
                              RULE-ACTION → [Action]
🎯 4. Goal-Based Agents
Uses goal information in addition to percepts and state.
Capable of planning to reach goal.
Decision based on:
“What will happen if I do X?”
“Will it help me reach the goal?”
Example: Taxi at intersection needs goal to decide which
direction to go.
🖼 Diagram:

    [Percept] → Update State → Current State + Goal
                                      ↓
                      Choose Actions to Achieve Goal
                                      ↓
                                   Action
⚖ Pros:
More flexible and adaptable
Can change goals easily
⚠ Cons:
May require complex search and planning (Chapters
3–6, 11–12)
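A minimal sketch of the goal-based idea with one-step lookahead; result (a transition model) and goal_test are assumed to be supplied by the designer:

    def goal_based_agent(state, actions, result, goal_test):
        # Simulate each action and pick one whose outcome satisfies the goal.
        for action in actions:
            if goal_test(result(state, action)):  # "What will happen if I do X?"
                return action
        return None  # no single action reaches the goal; a real agent would search/plan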
📈 5. Utility-Based Agents
Utility function measures preference (not just goal
satisfaction).
Chooses action that maximizes expected utility.
Handles:
1. Conflicting goals (e.g., speed vs. safety)
2. Uncertainty in outcomes
Example: Taxi chooses the safest and fastest route, not just
any route.
🧮 Utility Function:
Maps a world state (or history) to a real number.
Rational agents maximize expected utility.
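In the standard decision-theoretic notation (not spelled out in the notes above), the expected utility of action a in state s, and the rational choice, are:

$$EU(a) = \sum_{s'} P(s' \mid s, a)\, U(s'), \qquad a^{*} = \arg\max_{a} EU(a)$$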
🖼 Diagram:

    [Percept] → Update State → Current State + Utility Function
                                       ↓
                    Evaluate Outcomes & Choose Best Action
                                       ↓
                                    Action
🎓 6. Learning Agents
🧠 Motivation
Can improve performance over time.
Useful in unknown or dynamic environments.
💡 Key Components:
| Component | Description |
| --- | --- |
| Performance Element | Chooses external actions |
| Learning Element | Improves performance based on feedback |
| Critic | Provides feedback based on a fixed performance standard |
| Problem Generator | Suggests exploratory actions to improve learning |
🖼 Diagram:

    +---------------------------+
    |    Performance Element    |
    | (takes percepts, selects  |
    |        actions)           |
    +---------------------------+
           |  Actions + Percepts
           v
       Environment
           |
           v
         Critic
           |
           v
    Performance Feedback
           |
           v
    +---------------------------+
    |     Learning Element      |
    +---------------------------+
           |  suggests modifications to
           |  the Performance Element
           v
    +---------------------------+
    |     Problem Generator     |
    +---------------------------+
Example: Taxi agent makes dangerous turn → other
drivers honk → Critic sends feedback → Learning
element updates driving rules.
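Wiring the four components together might look like the following sketch; the component objects and their methods (evaluate, update, choose_action, maybe_explore) are illustrative assumptions, not an API from the text:

    def learning_agent_step(percept, performance, critic, learner, generator):
        # One cycle of a learning agent.
        feedback = critic.evaluate(percept)           # judge recent behavior
        learner.update(performance, feedback)         # adjust the performance element
        action = performance.choose_action(percept)   # normal action selection
        return generator.maybe_explore(action)        # occasionally try something new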
✅ Summary Table: Types of Agents

| Type of Agent | Memory/State | Goals |
| --- | --- | --- |
| Table-driven Agent | Full percept history | ❌ |
| Simple Reflex Agent | No | ❌ |
| Model-based Reflex Agent | Yes | ❌ |
| Goal-based Agent | Yes | ✅ |
| Utility-based Agent | Yes | ✅ |
| Learning Agent | Yes | ✅ |
🧠 Final Thoughts
Building intelligent agents is about more than behavior —
it’s about creating smart internal structure.
Reflex agents are simple but limited.
More advanced agents use models, goals, utilities, and
learning to handle complexity and uncertainty.
Learning makes agents adaptive and self-improving —
essential for real-world AI.