
Module 1 Notes

The document discusses the cognitive modeling approach in AI, which aims to replicate human thought processes by understanding how humans think through introspection, experiments, and brain imaging. It also explains the concept of agents and environments, detailing how agents perceive their surroundings, make decisions based on their percepts, and act accordingly, with a focus on the PEAS framework for defining task environments. Additionally, it categorizes task environments based on observability and distinguishes between single-agent and multi-agent environments, highlighting the implications for agent design.


1.1.2 Thinking humanly: The cognitive modeling approach

The cognitive modeling approach aims to create AI that thinks like humans. To do this, we
need to understand how humans think, which can be done through introspection (examining our
own thoughts), psychological experiments (observing people in action), and brain imaging
(observing brain activity). Once we have a clear theory of how the mind works, we can create
computer programs based on that theory. If a program's behavior matches human behavior, it
suggests the program's processes might be similar to human thinking.

For example, Allen Newell and Herbert Simon developed a program called General Problem
Solver (GPS) that not only solved problems but also tried to mimic the way humans think while
solving those problems. This is part of the field called cognitive science, which blends AI
models and psychology to study the human mind. While AI and cognitive science were once
confused as the same thing, they are now seen as separate fields that learn from each other,
especially in areas like computer vision where insights from brain science are used to improve
AI.

2.1 AGENTS AND ENVIRONMENTS

An agent is anything that interacts with its environment by sensing it and then taking actions in
response. For example:

 A human agent uses senses like eyes and ears to perceive the world and uses hands, legs,
and voice to act.
 A robotic agent might use cameras and sensors to perceive its surroundings and motors
to move or interact with objects.
 A software agent gets inputs like keystrokes or data files and acts by displaying
information or sending network data.

The term percept refers to the information an agent perceives at a specific moment. The percept
sequence is the entire history of what the agent has sensed up until that point. An agent makes
decisions based on its entire percept sequence, not on information it hasn’t experienced.

The agent function is the mathematical description of how the agent behaves: it maps every
possible percept sequence (everything the agent has sensed so far) to a corresponding action. If
we were to tabulate every possible percept sequence and the action the agent would take in
response, the result would be a very large, or even infinite, table.

However, in real life, instead of this huge table, an agent program is what actually runs inside
an artificial agent (like a robot or software). The agent program is a practical implementation
that decides how the agent will act based on what it perceives, according to the rules described
by the agent function.
In short, the agent function is the theoretical plan for how the agent acts, and the agent
program is the code or system that makes it happen in the real world.
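The distinction can be sketched in Python. This is a minimal illustration, not from the text: the two-square world ("A" and "B") and the action names are assumptions. The same agent function can exist as an explicit table or be computed by a compact agent program.

```python
# Agent function: an explicit table from percept sequences to actions.
# (A real table would be enormous; this toy world keeps it tiny.)
AGENT_FUNCTION_TABLE = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
}

def table_driven_agent(percept_sequence):
    """Look the action up in the (potentially huge) table."""
    return AGENT_FUNCTION_TABLE[tuple(percept_sequence)]

def reflex_agent_program(percept):
    """A compact agent program realizing the same agent function."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

print(table_driven_agent([("A", "Dirty")]))   # Suck
print(reflex_agent_program(("B", "Clean")))   # Left
```

Both produce the same behavior; the program is simply a practical way to compute what the table describes.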

Here’s a simple example of how an agent works using a robotic vacuum cleaner in a room:

Scenario: Robotic Vacuum Cleaner Agent

1. Environment: A room with a few areas that may or may not have dirt (like dust or
crumbs).
2. Perception: The robotic vacuum has sensors that allow it to detect:
o Its current location (e.g., in a corner, near a wall).
o Whether the area it is in is dirty or clean.
3. Possible Actions: Based on what it perceives, the vacuum can:
o Suck up dirt (clean the area).
o Move forward.
o Turn left or right.
o Stop.

How the Agent Works:

1. Perceive: The vacuum senses that it is in a dirty area.
2. Decide Action: Based on its rules (agent function), it decides:
o If the area is dirty, it will suck up the dirt.
3. Act: The vacuum performs the action of sucking up the dirt.
4. Repeat: After cleaning, it checks its surroundings again:
o If the current area is clean, it might decide to move to another area and repeat
the process.

Summary:

In this example, the robotic vacuum cleaner acts as an agent:

 It perceives its environment using sensors.
 It decides what to do based on predefined rules (like cleaning dirty areas).
 It acts by moving or cleaning, then repeats the process based on new perceptions.

This loop of perceiving, deciding, and acting is how agents function in various environments!
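The perceive-decide-act loop above can be sketched as a short simulation. The two-square world ("A" and "B") is an illustrative assumption, not part of the original scenario.

```python
world = {"A": "Dirty", "B": "Dirty"}   # environment state
location = "A"

def decide(percept):
    """Agent function: clean if dirty, otherwise move to the other square."""
    loc, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if loc == "A" else "Left"

for _ in range(4):                          # run the loop a few steps
    percept = (location, world[location])   # 1. perceive
    action = decide(percept)                # 2. decide
    if action == "Suck":                    # 3. act
        world[location] = "Clean"
    elif action == "Right":
        location = "B"
    else:
        location = "A"

print(world)   # {'A': 'Clean', 'B': 'Clean'}
```

After four iterations both squares are clean: the agent sucked where it perceived dirt and moved where it perceived a clean square.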

2.3 THE NATURE OF ENVIRONMENTS

Before we can build rational agents, we need to understand their task environments, which are
the specific problems or situations the agents will be working in. Here's a simple breakdown of
this concept:
1. Task Environment Definition: A task environment is the context or scenario where an
agent operates and faces challenges. It includes all the conditions and factors that the
agent must consider while making decisions.
2. Importance of Specification: To design an effective agent, we first need to clearly
specify the task environment. This means identifying what the agent needs to do, the
rules it has to follow, and the resources it can use.
3. Examples of Task Environments: Different environments might include:
o A chess game (where the agent must make strategic moves).
o A robotic vacuum cleaner's home (where it needs to navigate and clean rooms).
o A self-driving car navigating traffic (where it must respond to various road
conditions and obstacles).
4. Flavors of Task Environments: Task environments vary widely; some might be fully
predictable (like a game of chess), while others could be uncertain and dynamic (like
driving in real-world traffic). The nature of the task environment influences how we
design the agent's program.

In summary, understanding the task environment is crucial because it shapes how we build
rational agents that can effectively solve problems or accomplish goals within that context.

2.3.1 Specifying the task environment

When designing an agent, it’s important to define its task environment using a framework
called PEAS, which stands for:

 Performance Measure: How we evaluate the agent's success (e.g., safe driving, timely
arrivals, passenger satisfaction).
 Environment: The context in which the agent operates (e.g., city streets, traffic
conditions, weather).
 Actuators: The tools the agent uses to take action (e.g., steering wheel, brakes,
accelerator in a taxi).
 Sensors: The inputs the agent uses to perceive its environment (e.g., cameras, radar,
GPS).

In our example of a vacuum-cleaner agent, the PEAS framework helps us clarify its role and
function.

Now, considering a more complex example, the automated taxi driver:

1. Performance Measure: We would assess its ability to drive safely, get passengers to
their destinations on time, and ensure a comfortable ride.
2. Environment: The taxi operates in a bustling urban area, facing challenges like
unpredictable traffic, pedestrians, and varying weather conditions.
3. Actuators: The taxi would use components like the steering system, brakes, and throttle
to navigate the roads.
4. Sensors: The taxi would rely on various sensors like cameras, radar, and GPS to
understand its surroundings and track its location.
Although fully automated taxis are still under development and face many challenges due to the
complexity of driving, using the PEAS framework helps us outline the important aspects needed
for their design and functionality.
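One lightweight way to record a PEAS description in code is as a plain data structure. This sketch uses illustrative field values for the taxi drawn from the lists above; the class and field names are assumptions, not a standard API.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A PEAS task-environment description: one list per component."""
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance_measure=["safe driving", "timely arrival", "passenger comfort"],
    environment=["city streets", "traffic", "pedestrians", "weather"],
    actuators=["steering", "brakes", "throttle", "display"],
    sensors=["cameras", "radar", "GPS", "speedometer"],
)

print(taxi.sensors)
```

Writing the description down this explicitly makes it easy to compare task environments (e.g., the vacuum cleaner vs. the taxi) component by component.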

When designing an automated taxi driver, we need to consider several important aspects:

1. Performance Measures:

These are the goals we want the automated driver to achieve, including:

 Getting to the correct destination: Ensuring passengers arrive where they want to go.
 Minimizing fuel consumption: Using less fuel to save money and reduce environmental
impact.
 Minimizing wear and tear: Taking care of the vehicle to extend its lifespan.
 Minimizing trip time or cost: Getting passengers to their destination quickly and
affordably.
 Minimizing traffic violations: Following traffic laws to avoid accidents and fines.
 Maximizing safety and comfort: Ensuring a safe ride and making sure passengers feel
comfortable.
 Maximizing profits: Generating revenue for the taxi service.

Since some of these goals can conflict (for example, minimizing trip time may conflict with
maximizing safety), the design will require careful trade-offs.

2. Driving Environment:

The taxi will operate in various conditions, such as:

 Different types of roads (rural lanes, urban alleys, busy highways).
 Interactions with other vehicles, pedestrians, animals, and obstacles (like road work,
puddles, and potholes).
 Engaging with potential and actual passengers.

There are also optional factors to consider, such as:

 The geographical area (e.g., driving in sunny Southern California versus snowy Alaska).
 Driving rules, like which side of the road to drive on (right in the U.S. vs. left in the U.K.
or Japan).

The more restricted or defined the driving environment is, the easier it is to design the automated
driver. In contrast, a complex and variable environment presents more challenges that the system
must be able to handle.
For an automated taxi, the actuators and sensors are crucial for it to operate effectively. Here’s
a breakdown:

Actuators:

These are the parts that allow the taxi to take action, similar to what a human driver would use:

 Accelerator: Controls the engine speed to make the taxi go faster or slower.
 Steering: Allows the taxi to turn and navigate the roads.
 Brakes: Slows down or stops the vehicle.
 Communication Outputs:
o Display Screen: Shows information to the passengers.
o Voice Synthesizer: Allows the taxi to talk to passengers (for example, giving
updates about the trip).
o Communication with Other Vehicles: Enables the taxi to interact with other
cars on the road.

Sensors:

These help the taxi perceive its environment:

 Video Cameras: Let the taxi "see" the road and surrounding area.
 Infrared or Sonar Sensors: Measure distances to other cars and obstacles to avoid
collisions.
 Speedometer: Monitors the taxi’s speed to prevent speeding.
 Accelerometer: Helps understand how the vehicle is moving, especially around curves.
 Mechanical State Sensors: Check the condition of the engine, fuel levels, and electrical
systems to ensure the taxi is functioning properly.
 GPS (Global Positioning System): Keeps the taxi on track and helps it navigate without
getting lost.
 Input Device: A keyboard or microphone for passengers to enter their destination
requests.

Together, these actuators and sensors allow the automated taxi to drive safely, communicate
effectively, and provide a good experience for passengers.

The text discusses various types of agents and emphasizes that the complexity of their
environment is more important than whether that environment is "real" or "artificial." Here’s a
simplified breakdown:

PEAS Elements for Different Agents:

 The PEAS framework (Performance, Environment, Actuators, Sensors) can be applied
to various types of agents, whether they operate in physical or digital environments.

Real vs. Artificial Environments:


 Some readers might think that agents working in a purely digital space (like a program
responding to keyboard inputs) aren’t in a "real" environment. However, the key point is
that what matters is the complexity of the interaction between the agent’s behavior, the
percepts it receives, and how its success is measured.

Examples:

1. Simple Physical Environment:
o A robot on a conveyor belt has a straightforward task: inspect parts as they come
by. It can rely on assumptions like consistent lighting and limited actions (only
accepting or rejecting parts), making the environment relatively simple.
2. Complex Digital Environment:
o A software agent (also called a softbot) that operates a website has to deal with
much more complexity. For example, a news aggregator softbot needs to:
 Understand and process natural language to filter relevant news articles.
 Learn about users' interests and advertisers’ needs.
 Adapt to changing conditions, like a news source going offline or a new
one appearing.

Conclusion:

The Internet and digital environments can be as complex as physical ones, filled with various
human and artificial agents. This complexity demands that agents develop sophisticated abilities
to navigate and operate effectively in these environments.

2.3.2 Properties of task environments


The text explains how task environments in AI can be categorized based on their observability,
which affects how agents are designed and how they operate. Here’s a simple breakdown:

Key Concepts:

1. Task Environments: The different situations or "problems" that AI agents work in.
2. Observability: This refers to how much information an agent can gather about its
environment using its sensors.

Types of Observability:

 Fully Observable:
o In this type of environment, the agent's sensors provide complete access to the
entire state of the environment at all times.
o The agent knows everything it needs to make decisions without needing to keep
track of past information.
o Example: A chess game where all pieces are visible to both players.
 Partially Observable:
o Here, the agent cannot access the complete state of the environment due to
limitations in its sensors.
o This could be due to factors like noisy sensors or missing information.
o Example: A vacuum cleaner that can only detect dirt in its immediate area and
not in other rooms, or an automated taxi that can't predict the actions of other
drivers.
 Unobservable:
o In this case, the agent has no sensors at all to gather information about the
environment.
o While this situation seems challenging, it’s sometimes still possible for the agent
to achieve its goals based on other factors or assumptions, which will be explored
further in later chapters.
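Partial observability directly shapes agent design: because the agent cannot sense the whole environment, it must keep internal state. A minimal sketch, assuming a toy world in which the vacuum only senses its current square and remembers which squares it has already cleaned (the function and variable names are illustrative):

```python
def stateful_vacuum(percept, cleaned):
    """percept = (position, 'Dirty'/'Clean'); cleaned = set of cleaned squares.

    The agent cannot observe other squares, so it relies on its memory
    of where it has already been instead of on direct perception.
    """
    position, status = percept
    if status == "Dirty":
        cleaned.add(position)   # remember this square is now clean
        return "Suck"
    return "Move"               # keep searching squares not yet in memory

cleaned = set()
print(stateful_vacuum((0, "Dirty"), cleaned))   # Suck
print(stateful_vacuum((0, "Clean"), cleaned))   # Move
print(cleaned)                                  # {0}
```

In a fully observable version of this world, no such memory would be needed: the current percept alone would determine the best action.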

Conclusion:

Understanding the observability of a task environment helps in designing AI agents because it
influences how much information they need to function effectively and make decisions.

The text describes the differences between single-agent and multi-agent environments in AI,
highlighting how these distinctions influence agent behavior and design. Here’s a simplified
explanation:

Key Concepts:

1. Single-Agent Environment:
o In this scenario, there is only one agent acting independently.
o Example: A person solving a crossword puzzle alone.
2. Multi-Agent Environment:
o This involves two or more agents that may interact with each other.
o Example: Two players competing in a game of chess.

Important Distinctions:

 Perception of Other Entities:
o An agent must decide whether to treat another entity as an agent (which has its
own goals) or simply as an object (which behaves according to predictable rules).
o Example: In chess, the opposing player is treated as an agent because their moves
directly impact your strategy. In contrast, a taxi driver might see another vehicle
merely as an obstacle to navigate around.
 Competitive vs. Cooperative:
o Competitive Environment: Agents are trying to outdo each other. In chess, each
player tries to win, which impacts the other's performance negatively.
o Cooperative Environment: Agents work together to achieve a common goal. In
a taxi scenario, all vehicles benefit from avoiding collisions.
o Partially Competitive: In the taxi example, while avoiding collisions is
beneficial for all drivers, there is also competition for limited resources like
parking spaces.

Implications for Agent Design:

 Communication: In multi-agent environments, agents often need to communicate with
each other to coordinate actions effectively.
 Randomized Behavior: In competitive settings, being unpredictable can be an
advantage, as it makes it harder for opponents to anticipate an agent's moves.

Conclusion:

Understanding whether an environment is single-agent or multi-agent helps in designing AI
systems, as the dynamics of interaction significantly influence how agents should behave and
what strategies they should employ.
