Module 1 – Worksheet 1
Multiple Choice Questions
1. What is an intelligent agent in AI?
a) A software program that performs simple tasks
b) A system that mimics human intelligence to achieve goals
c) An algorithm for sorting data
d) A basic input-output system
Answer: b
2. Which of the following is a key characteristic of intelligent agents?
a) Random behavior
b) Learning from experience
c) Fixed rule execution
d) Limited adaptability
Answer: b
3. What does the term "perception" refer to in intelligent agents?
a) The ability to hear sounds
b) The process of sensing and interpreting information from the environment
c) The ability to taste
d) The speed of decision-making
Answer: b
4. In the context of intelligent agents, what is "actuation"?
a) The process of making decisions
b) The ability to learn from data
c) The execution of actions in the environment
d) The ability to perceive information
Answer: c
5. In the agent-environment framework, what does the environment represent?
a) The hardware on which the agent runs
b) The external system with which the agent interacts
c) The internal decision-making logic of the agent
d) The agent's memory storage
Answer: b
6. What is the primary function of sensors in intelligent agents?
a) Decision making
b) Interaction with the environment
c) Learning algorithms
d) Memory storage
Answer: b
7. In an AI system, what is the purpose of the interaction between sensors and
actuators?
a) To optimize learning algorithms
b) To improve decision-making
c) To enable the agent to perceive and act in the environment
d) To enhance memory storage
Answer: c
8. For the following partial tabulation of a simple agent function for the vacuum-cleaner
world, what is the corresponding action for the percept "[B, Clean]"?

Percept Sequence            Action
[A, Clean]                  Right
[A, Dirty]                  Suck
[B, Clean]                  Left
[B, Dirty]                  Suck
[A, Clean], [A, Clean]      Right
[A, Clean], [A, Dirty]      Suck

a) Left
b) Right
c) Suck
d) Clean
Answer: a
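The tabulated agent function above can be sketched as a simple lookup table in Python. This is an illustrative sketch, not part of the worksheet; the table entries and percept names are copied from the question.

```python
# Table-driven agent function for the vacuum-cleaner world.
# Keys are percept sequences (tuples of percepts); values are actions.
# The entries reproduce the worksheet's partial tabulation.
AGENT_TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

def table_driven_agent(percept_sequence):
    """Return the tabulated action for a percept sequence, or None if absent."""
    return AGENT_TABLE.get(tuple(percept_sequence))

# The percept sequence [B, Clean] maps to "Left", matching answer (a).
print(table_driven_agent([("B", "Clean")]))
```

Looking up `[("B", "Clean")]` returns "Left", which is why the answer is (a).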
9. What is the challenge for AI in writing programs for intelligent behavior?
a) Efficient memory usage
b) Generating vast lookup tables
c) Producing rational behavior from small programs
d) Implementing agent functions
Answer: c
10. What is the primary characteristic of a "simple reflex agent"?
a) It considers the entire percept history.
b) It has complex decision-making rules.
c) It relies only on the current percept.
d) It uses advanced machine learning algorithms.
Answer: c
Module 1 – Worksheet 2
Fill in the blanks
1. An agent is anything that can be viewed as perceiving its environment through
__________ and acting upon that environment through __________.
Answer: Sensors, actuators
2. We use the term __________ to refer to the agent's perceptual inputs at any given
instant.
Answer: Percept
3. The vacuum agent perceives which square it is in and whether there is __________ in
the square; if dirty, then __________; otherwise, move to the other square.
Answer: Dirt, clean it
4. The behavior of a rational agent can become effectively independent of its prior
knowledge after sufficient __________.
Answer: Experience
5. If the next state of the environment is completely determined by the current state and
the action executed by the agent, the environment is considered __________.
Answer: Deterministic
Match the following
Associate the given agent type with the corresponding PEAS description for an automated taxi.

PART A                      PART B
(1) Performance Measure     (A) Roads, other traffic, pedestrians, customers
(2) Environment             (B) Safe, legal, comfortable trip; maximize profits
(3) Actuators               (C) Cameras, sonar, speedometer, GPS, odometer,
                                accelerometer, engine sensors, keyboard
(4) Sensors                 (D) Steering, accelerator, brake, signal, horn, display

Answer:
(1) – (B)
(2) – (A)
(3) – (D)
(4) – (C)
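The matched PEAS description can be captured in code. This is a sketch of one possible representation (the `PEAS` dataclass and field names are illustrative, not from the worksheet); the list contents are copied from the matching above.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """PEAS (Performance measure, Environment, Actuators, Sensors)
    description of a task environment."""
    performance: list
    environment: list
    actuators: list
    sensors: list

# PEAS description for the automated taxi, as matched above.
taxi = PEAS(
    performance=["safe", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "accelerometer", "engine sensors", "keyboard"],
)
print(taxi.actuators)
```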
Module 1 – Worksheet 3
Q. No 1: Is it rational for the agent to oscillate needlessly back and forth after all the dirt is cleaned
up? Why? How might a vacuum-cleaner agent adapt its behavior in an environment where clean
squares can become dirty again?
Hint: The agent's behavior should be evaluated based on its contribution to the performance
measure, which is having a clean floor.
No, it is not rational for a vacuum-cleaner agent to oscillate back and forth once all the dirt has been
cleaned. Doing so wastes energy and increases wear on its components without improving the
performance measure. The agent should conserve resources and adapt to its environment.
In environments where clean squares can become dirty again, the agent can adapt by:
Periodic recheck: visiting cleaned areas at intervals, so it does not miss new dirt yet avoids
continuous motion.
Predictive cleaning: using sensors or past data to predict which areas get dirty most often and
concentrating on those.
Idle or standby mode: entering a low-power state until new dirt is detected, to conserve energy.
Dynamic path planning: using energy-efficient coverage algorithms to patrol regions systematically
instead of wandering aimlessly.
These approaches keep the environment clean while maximizing energy efficiency and extending
the agent's operational lifetime.
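The "periodic recheck" and "standby" ideas above can be sketched as a small agent program. This is a hypothetical illustration: the `NoOp` action, the `step` counter, and the `recheck_interval` parameter are assumptions introduced here, not part of the worksheet's vacuum world.

```python
def adaptive_vacuum_agent(location, status, step, recheck_interval=10):
    """Vacuum agent for the two-square (A/B) world that cleans dirt,
    but idles between periodic rechecks instead of oscillating
    when everything is clean."""
    if status == "Dirty":
        return "Suck"
    if step % recheck_interval == 0:
        # Periodic recheck: move to the other square to look for new dirt.
        return "Right" if location == "A" else "Left"
    # Standby: conserve energy until the next scheduled recheck.
    return "NoOp"
```

Between rechecks the agent returns `NoOp`, so it no longer wastes energy patrolling a floor it believes is clean.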
Q. No 2: Explain the four basic kinds of agent programs.
Hint:
There are four basic kinds of agent programs that embody the principles underlying almost all
intelligent systems.
The four basic types of agent programs are:
Simple Reflex Agents:
These agents act according to condition-action (if-then) rules. They respond to the current percept
only, without regard to the percept history.
Limitation: cannot handle complex environments that require memory.
Model-Based Reflex Agents:
These agents maintain an internal model of the environment, tracking past states to help decide
which actions to take.
Advantage: can handle partially observable environments.
Goal-Based Agents:
These agents act to achieve explicit goals. They plan their actions by considering how each move
brings them closer to the goal.
Advantage: enables flexible, goal-directed behavior.
Utility-Based Agents:
These agents try to maximize a utility function. They choose the actions that appear to yield the
highest expected benefit, taking trade-offs into account.
Advantage: handles uncertainty and conflicting goals well.
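The first of these, the simple reflex agent, can be sketched in a few lines of condition-action rules. This is an illustrative sketch for the standard two-square vacuum world; the rule set shown is an assumption, not the worksheet's own program.

```python
def simple_reflex_vacuum_agent(percept):
    """Simple reflex agent: acts on the current percept only,
    via condition-action (if-then) rules. No percept history,
    no model of the world."""
    location, status = percept
    if status == "Dirty":   # rule: dirty square -> suck
        return "Suck"
    if location == "A":     # rule: clean in A -> move right
        return "Right"
    return "Left"           # rule: clean in B -> move left
```

Because the agent sees only the current percept, it cannot tell whether it has already visited a square, which is exactly the limitation noted above.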
Q. No 3: Explain the difference between a reflex agent and a goal-based agent in artificial
intelligence. How does the inclusion of goal information impact decision-making?
Hint:
Consider the role of goal information in decision-making for agents. How does a goal-based
agent use this information to make decisions, and how does it differ from the decision-making
process of a reflex agent? Think about flexibility, adaptability, and the explicit representation of
knowledge.
A reflex agent and a goal-based agent differ significantly in how they make decisions:
1. Reflex Agent:
Decision-making: a reflex agent decides purely on the current perception of the environment using
simple if-then rules. It reacts immediately, without recalling previous states or anticipating later
outcomes.
Flexibility: reflex agents are fast and efficient for simple tasks but inflexible.
Adaptability: they cannot adapt to changes in their environment or handle complicated situations
that require planning or memory.
2. Goal-Based Agent:
Decision-making: a goal-based agent considers the goal it is trying to achieve while making
decisions. It plans its actions according to which steps bring it closer to the goal.
Adaptability: goal-based agents cope better with complicated environments, adapt to changing
conditions, and pursue long-term objectives.
Effect of Goal Knowledge:
Goal knowledge allows a goal-based agent to derive decisions that bring it closer to the goal.
In short, a reflex agent reacts to current conditions without foresight, whereas a goal-based agent
plans by determining how actions advance it toward a goal, offering much greater flexibility and
adaptability.
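The planning aspect of a goal-based agent can be sketched with a small search. This is a minimal illustration, assuming a toy two-square world and breadth-first search as the planning method (both are assumptions introduced here): the agent selects actions by how they advance it toward the goal, something a reflex agent cannot do.

```python
from collections import deque

def goal_based_plan(start, goal, neighbors):
    """Goal-based agent sketch: search (BFS) for a sequence of actions
    that reaches `goal` from `start`, rather than reacting to the
    current state alone. Returns a list of actions, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

# Toy transition model: moving Right/Left between squares A and B.
def moves(loc):
    return [("Right", "B")] if loc == "A" else [("Left", "A")]

print(goal_based_plan("A", "B", moves))
```

Even in this tiny world the difference is visible: the plan `["Right"]` is chosen because it reaches the goal, not because a hard-wired rule fired.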
Q. No 4: Explain the significance of utility in the context of agent behavior and decision-
making. How does it relate to goals and performance measures?
Hint: Consider the limitations of goals alone in generating high-quality behavior for
agents. Explore the role of utility as a more general performance measure and its
relationship with an agent's utility function. Think about the advantages of utility-based
agents in handling conflicting goals, uncertainties, and the trade-offs between multiple
goals.
Utility is crucial for improving agent behavior and decision-making in environments with
uncertainty or many conflicting goals.
Meaning of Utility:
Utility is a measure of how desirable an agent finds an outcome. It allows the agent to
compare the relative desirability of different actions in achieving its goals.
Goals define the desired end state, success or failure, for an agent; however, they may not be
rich enough to guide the agent toward the best and most efficient way of achieving those goals.
Goals alone cannot account for trade-offs or uncertainties.
Utility vs. Goals:
Goals are binary, achieved or not achieved, and do not tell an agent how to optimize its
actions, especially in complex environments with multiple or conflicting objectives.
Utility provides a more general performance measure, since it enables the agent to weigh
different outcomes and make trade-offs. This matters when goals conflict (e.g., balancing
speed against safety) or when a goal cannot be achieved perfectly.
Utility-Based Agents:
Utility-based agents use a utility function to assign values to different possible states or
outcomes. They try to maximize overall utility by taking the actions expected to produce the
greatest benefit.
They manage uncertainty better, taking likelihoods and possible risks into account, and so
make better decisions in unpredictable environments.
Trade-offs are managed by balancing several goals. For example, an agent may trade off the
utility of reaching a goal quickly against reaching it safely.
Advantages:
Utility enables agents to optimize their behavior through choices that are not only aligned with
goals but also sensitive to efficiency, risk, and competing objectives. It lets agents adapt more
flexibly and make decisions that maximize overall performance rather than mere goal
attainment.
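The speed-versus-safety trade-off above can be made concrete with expected utility. This is a sketch under stated assumptions: the action names, probabilities, and utility values below are hypothetical numbers chosen for illustration, not data from the worksheet.

```python
def expected_utility(action, outcomes):
    """Expected utility of an action: sum of probability * utility
    over its possible outcomes."""
    return sum(p * u for p, u in outcomes[action])

def utility_based_choice(outcomes):
    """Utility-based agent: pick the action with the highest
    expected utility."""
    return max(outcomes, key=lambda a: expected_utility(a, outcomes))

# Hypothetical trade-off for an automated taxi:
# each action maps to (probability, utility) outcome pairs.
outcomes = {
    "drive_fast": [(0.7, 100), (0.3, -200)],  # quick trip vs. accident risk
    "drive_safe": [(0.95, 80), (0.05, -50)],  # slower but far less risky
}
print(utility_based_choice(outcomes))
```

Here "drive_fast" has expected utility 0.7*100 + 0.3*(-200) = 10, while "drive_safe" has 0.95*80 + 0.05*(-50) = 73.5, so the agent chooses the safer trip: the utility function resolves a conflict that a binary goal ("reach the destination") could not.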