
Lecture 3

The document discusses the concept of agents in artificial intelligence, outlining various types such as simple reflex agents, model-based reflex agents, goal-based agents, and utility-based agents. It emphasizes the importance of rationality in agent behavior, defined by performance measures and the agent's knowledge and actions. Additionally, it introduces the PEAS framework and different types of environments affecting agent functionality.


Agent

Artificial Intelligence
By

Muhammad Umar Farooq


Today's topics
• Agent
• Rational agent
• PEAS
• Types of environment
• Types of Agent
– Simple reflex agents;
– Model-based reflex agents;
– Goal-based agents; and
– Utility-based agents
Agent

• An agent can be a human, a computer program, or a robot.

• The agent function maps any given percept sequence to an
action.
– For an artificial agent, the agent function is implemented by an agent program.
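The percept-to-action mapping can be sketched in code. Below is a minimal, illustrative agent program for the classic two-square vacuum world (locations A and B); the table contents and names such as `AGENT_TABLE` are assumptions for this sketch, and for brevity the table indexes only the latest percept rather than the full percept sequence.

```python
# Illustrative sketch: an agent function realized as a lookup table.
# A percept is (location, status); the table maps each percept to an action.
AGENT_TABLE = {
    ("A", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
}

def agent_program(percept):
    """Agent program: implements the agent function by table lookup."""
    return AGENT_TABLE[percept]

print(agent_program(("A", "Dirty")))  # Suck
```

The agent program is the concrete implementation; the agent function is the abstract mapping it realizes.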
GOOD BEHAVIOR: THE CONCEPT OF
RATIONALITY
• A rational agent is one that does the right thing.
• The right thing is judged by a performance measure that
evaluates any given sequence of environment states.
• What is rational at any given time depends on four
things:
- The performance measure that defines the criterion of
success.
- The agent's prior knowledge of the environment.
- The actions that the agent can perform.
- The agent's percept sequence to date.
Rational Agent
• Definition of a rational agent:
– For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
PEAS (Performance, Environment, Actuators, Sensors)
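As a concrete illustration, here is the textbook PEAS description of an automated taxi driver written out as a plain Python dict; the variable name and the exact list entries are an illustrative sketch, not a definitive specification.

```python
# Illustrative sketch: PEAS description of an automated taxi driver.
taxi_peas = {
    "Performance": ["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators":   ["steering", "accelerator", "brake", "signal", "horn"],
    "Sensors":     ["cameras", "speedometer", "GPS", "odometer"],
}

for part, items in taxi_peas.items():
    print(part, "->", ", ".join(items))
```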
Types of environment
• Fully observable vs. partially observable:
– If an agent's sensors give it access to the complete state of the environment at
each point in time, then we say that the task environment is fully observable.

• Single agent vs. multi-agent:
– An environment may contain a single agent (e.g., solving a crossword) or
several agents that cooperate or compete (e.g., playing chess).

• Deterministic vs. stochastic:
– If the next state of the environment is completely determined by the current
state and the agent's action, the environment is deterministic; otherwise it is
stochastic.
• Static vs. dynamic:
– If the environment can change while the agent is deliberating, it is dynamic;
otherwise it is static.
• Discrete vs. continuous:
– Refers to whether the states, time, percepts, and actions take values from a
finite (discrete) or an infinite (continuous) set.
Agent

• An agent program implements the agent function.

• Agent = architecture + program
– The architecture is the computing device (with sensors and actuators) on
which the program runs. If the program is going to recommend actions like
Walk, the architecture had better have legs.
Types of agent programs
• Simple reflex agents;
• Model-based reflex agents;
• Goal-based agents; and
• Utility-based agents
Simple reflex agents
• These agents select actions on the basis of the
current percept,
• ignoring the rest of the percept history.
• They work correctly only if the environment is fully observable.
• Behavior is specified by condition–action rules (if–then rules).
• Example
– If temp > 50 then turn on the AC
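The slide's condition–action rule can be sketched directly; the function name and the "do nothing" action below are illustrative assumptions, while the threshold comes from the example above.

```python
# Illustrative sketch of a simple reflex agent: the action depends only
# on the CURRENT percept (here, the current temperature reading).
def simple_reflex_agent(percept):
    temp = percept
    if temp > 50:            # condition–action rule from the slide
        return "turn on AC"
    return "do nothing"      # default when no rule fires

print(simple_reflex_agent(55))  # turn on AC
print(simple_reflex_agent(30))  # do nothing
```

Note that no history is stored: the same percept always produces the same action.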
Model-based reflex agents
• The most effective way to handle partial
observability is for the agent to
• keep track of the part of the world it can't see
now (i.e., store the percept history as internal state).
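A minimal sketch of the idea, extending the thermostat example from the previous slide: the agent keeps the percept history as internal state and its rule consults that state, not just the current reading. The class name, the three-reading rule, and the action names are illustrative assumptions.

```python
# Illustrative sketch of a model-based reflex agent: internal state
# (the percept history) lets it act under partial observability.
class ModelBasedReflexAgent:
    def __init__(self):
        self.history = []  # internal state: percepts seen so far

    def act(self, percept):
        self.history.append(percept)  # update the model of the world
        # The rule uses the model, not just the current percept:
        # turn on the AC only if the last 3 readings were all hot.
        if len(self.history) >= 3 and all(t > 50 for t in self.history[-3:]):
            return "turn on AC"
        return "wait"

agent = ModelBasedReflexAgent()
print([agent.act(t) for t in (55, 60, 58)])  # ['wait', 'wait', 'turn on AC']
```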
Goal-based agents
• An extension of the model-based reflex agent.
• The agent has goal information describing desirable situations.
• Searching and planning are used to find action sequences that
achieve the goal.
• Works in partially observable environments.
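The searching-and-planning idea can be sketched as follows: rather than a fixed rule, the agent searches for a sequence of actions that reaches a goal state. The tiny road map and the breadth-first search below are illustrative assumptions, not a specific algorithm from the slides.

```python
# Illustrative sketch of the goal-based idea: search for a plan
# (a path through a hypothetical road map) that reaches the goal.
from collections import deque

ROADS = {          # hypothetical map: city -> neighboring cities
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

def plan(start, goal):
    """Breadth-first search for a path (plan) from start to goal."""
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in ROADS[path[-1]]:
            frontier.append(path + [nxt])
    return None  # no plan reaches the goal

print(plan("A", "D"))  # ['A', 'B', 'D']
```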
Utility-based agents
• Focus on utility, not just goals.
• A utility function maps a state (or sequence of states) to a
real number describing how "happy" that state makes the agent.
• Works in partially observable environments.
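A minimal sketch of the utility idea, reusing the temperature scenario: each candidate action's predicted outcome is scored by a utility function, and the agent picks the highest-scoring action. The comfort temperature of 22 degrees, the action names, and the predicted effects are all illustrative assumptions.

```python
# Illustrative sketch of a utility-based agent: choose the action whose
# predicted outcome maximizes a numeric utility, not a yes/no goal test.
def utility(temperature):
    """How 'happy' the agent is at a temperature (peaks at 22 degrees)."""
    return -abs(temperature - 22)

def utility_based_agent(current_temp, actions):
    """Pick the action whose predicted next state has the highest utility."""
    outcomes = {name: effect(current_temp) for name, effect in actions.items()}
    return max(outcomes, key=lambda name: utility(outcomes[name]))

actions = {                       # hypothetical action -> predicted next temperature
    "turn on AC": lambda t: t - 10,
    "turn on heater": lambda t: t + 10,
    "do nothing": lambda t: t,
}
print(utility_based_agent(35, actions))  # turn on AC
```

Unlike a goal test, the utility function can trade off conflicting objectives by degree.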