Department of Computer Science and Engineering

Notes of Lesson
SUBJECT TITLE AND CODE: ARTIFICIAL INTELLIGENCE – BCS515B
SEMESTER AND SCHEME: V SEMESTER
INSTITUTIONAL MISSION AND VISION
Objectives

• To provide quality education and groom top-notch professionals, entrepreneurs and leaders for different
fields of engineering, technology and management.
• To open a Training-R & D-Design-Consultancy cell in each department, gradually introduce doctoral
and postdoctoral programs, encourage basic & applied research in areas of social relevance, and develop
the institute as a center of excellence.
• To develop academic, professional and financial alliances with the industry as well as the academia at
national and transnational levels.
• To cultivate strong community relationships and involve the students and the staff in local community
service.
• To constantly enhance the value of the educational inputs with the participation of students, faculty,
parents and industry.

Vision
• Development of academically excellent, culturally vibrant, socially responsible and globally competent
human resources.

Mission
• To keep pace with advancements in knowledge and make the students competitive and capable at the
global level.
• To create an environment for the students to acquire the right physical, intellectual, emotional and
moral foundations and shine as torch bearers of tomorrow's society.
• To strive to attain ever-higher benchmarks of educational excellence.
Department of Computer Science and Engineering

Vision of the Department


Nurture graduates in Computer Science and Engineering to face challenges in industry, education and society
at global level.
Promote research that responds swiftly to the needs of the 21st century, and build a Center of Excellence.

Mission of the Department

➢ To inculcate professional behavior, strong ethical values, innovative research capabilities and leadership
abilities in the young minds & to provide a teaching environment that emphasizes depth, originality and critical
thinking.

➢ Motivate students to shape their thoughts and ideas into forms adoptable by industry, or to
pursue higher studies leading to research.

Program Educational Objectives (PEO'S):

1. Empower students with a strong basis in the mathematical, scientific and engineering fundamentals to
solve computational problems and to prepare them for employment, higher learning and R&D.

2. Gain technical knowledge, skills and awareness of current technologies of computer science engineering
and to develop an ability to design and provide novel engineering solutions for software/hardware
problems through entrepreneurial skills.

3. Exposure to emerging technologies and work in teams on interdisciplinary projects with effective
communication skills and leadership qualities.

4. Ability to function ethically and responsibly in a rapidly changing environment by applying
innovative ideas in the latest technology, becoming effective professionals in Computer Science
with a life-long career in related areas.
Program Specific Outcomes (PSOs)

1. Ability to apply skills in the field of algorithms, database design, web design, cloud
computing and data analytics.

2. Apply knowledge in the field of computer networks for building network and internet-
based applications.
Artificial Intelligence - BCS515B

MODULE - 1

Introduction to Artificial Intelligence:


AI is one of the newest fields in science and engineering. Work started in earnest soon after
World War II, and the name itself was coined in 1956. AI currently encompasses a huge
variety of subfields, ranging from the general (learning and perception) to the specific, such
as playing chess, proving mathematical theorems, writing poetry, driving a car on a crowded
street, and diagnosing diseases. AI is relevant to any intellectual task; it is truly a universal
field.
1.1 WHAT IS AI?
In Figure 1.1 we see eight definitions of AI, laid out along two dimensions. The definitions on
top are concerned with thought processes & reasoning, whereas the ones on the bottom address
behavior. The definitions on the left measure success in terms of fidelity to human performance,
whereas the ones on the right measure against an ideal performance measure, called rationality.
A system is rational if it does the "right thing," given what it knows. Historically, all four
approaches to AI have been followed, each by different people with different methods. A
human-centered approach must be in part an empirical science, involving observations and
hypotheses about human behavior. A rationalist approach involves a combination of
mathematics and engineering.


1.1.1 Acting humanly: The Turing Test approach


The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory
operational definition of intelligence. A computer passes the test if a human interrogator, after
posing some written questions, cannot tell whether the written responses come from a person
or from a computer. The computer would need to possess the following capabilities:
• Natural language processing to enable it to communicate successfully in English;
• Knowledge representation to store what it knows or hears;
• Automated reasoning to use the stored information to answer questions and to draw new
conclusions;
• Machine learning to adapt to new circumstances and to detect and extrapolate patterns.

Turing’s test deliberately avoided direct physical interaction between the interrogator and the
computer, because physical simulation of a person is unnecessary for intelligence. However,
the so-called total Turing Test includes a video signal so that the interrogator can test the subject's
perceptual abilities, as well as the opportunity for the interrogator to pass physical objects
“through the hatch.” To pass the total Turing Test, the computer will need
• Computer vision to perceive objects, and
• Robotics to manipulate objects and move about.

1.1.2 Thinking humanly: The cognitive modeling approach


If we are going to say that a given program thinks like a human, we must have some way of
determining how humans think. We need to get inside the actual workings of human minds.
There are three ways to do this: through introspection—trying to catch our own thoughts as
they go by; through psychological experiments—observing a person in action; and through
brain imaging—observing the brain in action. Once we have a sufficiently precise theory of
the mind, it becomes possible to express the theory as a computer program. If the program’s
input–output behavior matches corresponding human behavior, that is evidence that some of
the program’s mechanisms could also be operating in humans.

1.1.3 Thinking rationally: The “laws of thought” approach


The Greek philosopher Aristotle was one of the first to attempt to codify “right thinking,”
that is, certain reasoning processes. His syllogisms provided patterns for argument structures
that always yielded correct conclusions when given correct premises. Logicians in the 19th
century developed a precise notation for statements about all kinds of objects in the world
and the relations among them. The so-called logicist tradition within artificial intelligence
hopes to build on such programs to create intelligent systems. There are two main obstacles
to this approach. First, it is not easy to take informal knowledge and state it in the formal
terms required by logical notation, particularly when the knowledge is less than 100% certain.
Second, there is a big difference between solving a problem “in principle” and solving it in
practice.

1.1.4 Acting rationally: The rational agent approach


An agent is just something that acts (agent comes from the Latin agere, to do). Of course,
all computer programs do something, but computer agents are expected to do more: operate
autonomously, perceive their environment, persist over a prolonged time period, adapt to
change, and create and pursue goals. A rational agent is one that acts so as to achieve the best
outcome or, when there is uncertainty, the best expected outcome. In the "laws of thought"
approach to AI, the emphasis was on correct inferences. Making correct inferences is
sometimes part of being a rational agent, because one way to act rationally is to reason
logically to the conclusion that a given action will achieve one’s goals and then to act on that
conclusion.

The rational agent approach has two advantages over the other approaches. First, it is more
general than the “laws of thought” approach because correct inference is just one of several
possible mechanisms for achieving rationality. Second, it is more amenable to scientific
development than are approaches based on human behavior or human thought. The standard
of rationality is mathematically well defined and completely general and can be “unpacked”
to generate agent designs that provably achieve it.

1.4 The State of the Art

What can AI do today? A concise answer is difficult because there are so many activities in
so many subfields. Here we sample a few applications:
Robotic vehicles:
A driverless robotic car named STANLEY sped through the rough terrain of the Mojave Desert
at 22 mph, finishing the 132-mile course first to win the 2005 DARPA Grand Challenge.
STANLEY is a Volkswagen Touareg outfitted with cameras, radar, and laser rangefinders to
sense the environment and onboard software to command the steering, braking, and
acceleration (Thrun, 2006). The following year CMU’s BOSS won the Urban Challenge, safely
driving in traffic through the streets of a closed Air Force base, obeying traffic rules and
avoiding pedestrians and other vehicles.
Speech recognition:
A traveler calling United Airlines to book a flight can have the entire conversation guided by
an automated speech recognition and dialog management system.
Autonomous planning and scheduling:
A hundred million miles from Earth, NASA's Remote Agent program became the first on-board
autonomous planning program to control the scheduling of operations for a spacecraft
(Jonsson et al., 2000). REMOTE AGENT generated plans from high-level goals specified from
the ground and monitored the execution of those plans—detecting, diagnosing, and recovering
from problems as they occurred. The successor program MAPGEN (Al-Chang et al., 2004) plans
the daily operations for NASA's Mars Exploration Rovers, and MEXAR2 (Cesta et al., 2007)
did mission planning—both logistics and science planning—for the European Space Agency's
Mars Express mission in 2008.
Game playing:
IBM’s DEEP BLUE became the first computer program to defeat the world champion in a
chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in an exhibition match
(Goodman and Keene, 1997). Kasparov said that he felt a “new kind of intelligence” across the
board from him. Newsweek magazine described the match as “The brain’s last stand.” The
value of IBM’s stock increased by $18 billion. Human champions studied Kasparov’s loss and
were able to draw a few matches in subsequent years, but the most recent human-computer
matches have been won convincingly by the computer.
Spam fighting: Each day, learning algorithms classify over a billion messages as spam, saving
the recipient from having to waste time deleting what, for many users, could comprise 80% or
90% of all messages, if not classified away by algorithms. Because the spammers are
continually updating their tactics, it is difficult for a static programmed approach to keep up,
and learning algorithms work best (Sahami et al., 1998; Goodman and Heckerman, 2004).
Logistics planning:
During the Persian Gulf crisis of 1991, U.S. forces deployed a Dynamic Analysis and
Replanning Tool, DART (Cross and Walker, 1994), to do automated logistics planning and
scheduling for transportation. This involved up to 50,000 vehicles, cargo, and people at a time,
and had to account for starting points, destinations, routes, and conflict resolution among all
parameters. The AI planning techniques generated in hours a plan that would have taken weeks
with older methods. The Defense Advanced Research Projects Agency (DARPA) stated that
this single application more than paid back DARPA’s 30-year investment in AI.
Robotics: The iRobot Corporation has sold over two million Roomba robotic vacuum cleaners
for home use. The company also deploys the more rugged PackBot to Iraq and Afghanistan,
where it is used to handle hazardous materials, clear explosives, and identify the location of
snipers.
Machine Translation: A computer program automatically translates from Arabic to English,
allowing an English speaker to see the headline “Ardogan Confirms That Turkey Would Not
Accept Any Pressure, Urging Them to Recognize Cyprus.” The program uses a statistical
model built from examples of Arabic-to-English translations and from examples of English
text totaling two trillion words (Brants et al., 2007). None of the computer scientists on the
team speak Arabic, but they do understand statistics and machine learning algorithms.

These are just a few examples of artificial intelligence systems that exist today. Not magic or
science fiction—but rather science, engineering, and mathematics.
Chapter 2: Intelligent Agents

Agents and Environments:

An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators. This simple idea is illustrated in Figure 2.1.
A human agent has eyes, ears, and other organs for sensors and hands, legs, vocal tract, and
so on for actuators.
We use the term percept to refer to the agent's perceptual inputs at any given instant. An
agent's percept sequence is the complete history of everything the agent has ever perceived.


Mathematically speaking, we say that an agent's behavior is described by the agent function
that maps any given percept sequence to an action.
We can imagine tabulating the agent function that describes any given agent. Given an agent
to experiment with, we can, in principle, construct this table by trying out all possible percept
sequences and recording which actions the agent does in response. The table is, of course, an
external characterization of the agent. Internally, the agent function for an artificial agent will
be implemented by an agent program.
It is important to keep these two ideas distinct. The agent function is an abstract mathematical
description; the agent program is a concrete implementation, running within some physical
system.
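To make the distinction concrete, here is a minimal Python sketch of a table-driven agent
program realizing (a fragment of) a vacuum-world agent function. The table entries and
percept names are illustrative assumptions, not taken from these notes:

# The agent FUNCTION is the abstract mapping from percept sequences to
# actions, written out here as an explicit (partial) lookup table.
table = {
    (('A', 'Clean'),): 'Right',
    (('A', 'Dirty'),): 'Suck',
    (('B', 'Clean'),): 'Left',
    (('B', 'Dirty'),): 'Suck',
    (('A', 'Dirty'), ('A', 'Clean')): 'Right',
    # ... a complete table needs one row per possible percept sequence
}

percept_sequence = []  # everything the agent has perceived so far

def table_driven_agent(percept):
    # The agent PROGRAM: a concrete procedure that appends the new
    # percept and looks the whole sequence up in the table.
    percept_sequence.append(percept)
    return table.get(tuple(percept_sequence))

print(table_driven_agent(('A', 'Dirty')))  # -> Suck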
The Vacuum-Cleaner World:
• This world is so simple that we can describe everything that happens; it's also a made-up
world, so we can invent many variations. This particular world has just two locations:
squares A and B. The vacuum agent perceives which square it is in and whether there is dirt
in the square. It can choose to move left, move right, suck up the dirt, or do nothing. One
very simple agent function is the following: if the current square is dirty, then suck;
otherwise, move to the other square.


The following program implements the agent function tabulated in Figure 2.3:

function REFLEX-VACUUM-AGENT([location, status]) returns an action

  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
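A direct Python transcription of this pseudocode might look as follows (a minimal sketch,
assuming the percept arrives as a (location, status) pair):

def reflex_vacuum_agent(percept):
    # The decision depends only on the current percept, never on history.
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:  # location == 'B'
        return 'Left'

print(reflex_vacuum_agent(('A', 'Dirty')))  # -> Suck
print(reflex_vacuum_agent(('B', 'Clean')))  # -> Left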

2.2 Good Behavior: The Concept of Rationality

Rationality in AI agents involves performing the "right" actions to achieve desirable outcomes,
which are evaluated through a performance measure. This measure assesses the desirability of
the resulting states in the environment, rather than the agent's own perception of success. A key
challenge is designing performance measures that truly reflect the goals of the environment, as
poor design can lead to unintended behaviors. For example, a vacuum-cleaner agent might
focus on cleaning up dirt repeatedly rather than maintaining a clean floor. Additionally,
complex philosophical questions arise, such as whether consistent mediocre performance is
preferable to alternating highs and lows. Ultimately, rationality in agents depends on aligning
their actions with well-defined performance measures that ensure the desired outcomes are
achieved in the environment.
2.2.1 Rationality
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent’s prior knowledge of the environment.
• The actions that the agent can perform.
• The agent's percept sequence to date.
This leads to a definition of a rational agent: For each possible percept sequence, a rational
agent should select an action that is expected to maximize its performance measure, given the
evidence provided by the percept sequence and whatever built-in knowledge the agent has.
2.2.2 Omniscience, learning, and autonomy

It’s important to distinguish between rationality and omniscience. An omniscient agent would
know the exact outcomes of its actions and could act accordingly, but in reality, omniscience
is impossible. For example, if I decide to cross a street after seeing a friend and, unexpectedly,
a cargo door falls from an airplane and hits me, this does not mean my decision to cross the
street was irrational. Rationality involves making decisions that maximize expected
performance based on available information, not achieving perfect outcomes. Expecting an
agent to always make the best possible decision after the fact is unrealistic, as it would require
knowledge of future events—something only a time machine or crystal ball could provide.
Rationality is about doing the best one can with the information at hand, not achieving flawless
outcomes.
Our definition of rationality does not require omniscience because rationality depends only on
the percept sequence received by the agent so far, not on future knowledge. However,
rationality also means avoiding actions that are underinformed or risky. For example, if an
agent doesn't look both ways before crossing a busy road, it lacks crucial information, and
crossing would be irrational. A rational agent should engage in information gathering, such as
looking before crossing, to maximize expected performance. Information gathering and
exploration are key aspects of rationality, as they help the agent improve decision-making by
acquiring necessary data.

Additionally, rational agents must not only gather information but also learn from their
experiences. While an agent may start with some prior knowledge, it should adapt and update
its understanding as it interacts with the environment. This learning process helps the agent
become more autonomous, reducing reliance on initial assumptions and making it more
effective in diverse environments. For instance, a vacuum-cleaning agent that learns where dirt
typically appears will perform better than one that does not.

Autonomy is essential for rationality. An agent that depends too much on its designer's prior
knowledge without learning from its own percepts lacks autonomy. Over time, a rational agent
should be able to act independently of its initial programming, adjusting its behavior based on
experience. This combination of initial knowledge and continuous learning enables the design
of a single rational agent capable of succeeding in a wide range of environments.
2.3 The nature of environments

Now that we understand what rationality means, the next step in creating rational agents is to
think about task environments. A task environment is basically the "problem" that the agent
is designed to solve. Before we can build a rational agent, we need to clearly define the task
environment, including what the agent needs to achieve, what information it receives, and what
actions it can take. Different environments have different needs, so the way we design the agent
depends on the specific environment it will operate in. In the next sections, we’ll look at
examples of task environments and see how they influence the design of the agent.


2.3.1 Specifying the task environment:


When discussing the rationality of the simple vacuum-cleaner agent, we had to define four key
elements: the Performance measure (how success is judged), the Environment (where the
agent operates), the Actuators (the agent's actions), and the Sensors (how the agent perceives
its surroundings). These elements are grouped under the heading of the task environment, and
we use the acronym PEAS (Performance, Environment, Actuators, Sensors) to remember
them. When designing any agent, the first step is to fully specify its task environment using
this PEAS framework.
For example, consider the more complex problem of designing an automated taxi driver. Unlike
the simple vacuum-cleaner scenario, the task of driving a taxi is extremely open-ended, with
countless possible situations that the agent might encounter. While fully automated taxis are
still beyond current technology, this example highlights the importance of carefully defining
the task environment to ensure the agent is designed effectively. By specifying the performance
measures, environment, actuators, and sensors, we can better understand the challenges the
agent will face and how it should be designed to handle them.
Figure 2.4 summarizes the PEAS description for the taxi’s task environment.
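Since the figure itself is not reproduced in these notes, the standard PEAS description of the
automated taxi is, in outline:

Agent type: Taxi driver
Performance measure: Safe, fast, legal, comfortable trip; maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering, accelerator, brake, signal, horn, display
Sensors: Cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard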

2.3.2 Properties of task environments


The range of task environments in AI is incredibly broad, but we can categorize them using a
small set of key dimensions. These dimensions help determine the best way to design the agent
and choose the appropriate techniques for implementation. By understanding these dimensions,
we can better match the agent's design to the specific challenges it will face in different
environments. First, we'll list these dimensions, and then we'll analyze several task
environments to see how these ideas apply in practice. While the definitions provided here are
informal, later chapters will offer more detailed explanations and examples of each type of
environment.

2.4 The structure of agents:


• Agents can be grouped into five classes based on their degree of perceived intelligence and
capability.
• All of these agents can improve their performance and generate better actions over time.
They are described below.
1. Simple reflex agent

Simple reflex agents are the simplest agents. They select actions on the basis of the current
percept and ignore the rest of the percept history. Such agents succeed only in fully
observable environments, precisely because they consider no part of the percept history in
their decision process.


2. Model-based reflex agent

• The model-based agent can work in a partially observable environment and track the
situation.
• A model-based agent has two important components:
• Model: knowledge about "how things happen in the world"; this is what makes it a
model-based agent.
• Internal state: a representation of the current state based on the percept history.
• These agents maintain the model and choose their actions based on it. Updating the
internal state requires information about:
• how the world evolves independently of the agent, and
• how the agent's actions affect the world (see the sketch below).
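A Python sketch of this structure, following the spirit of the model-based reflex agent design
above. The update_state function and the rule set are assumptions: placeholders for the
domain-specific world model and condition-action rules a real agent would supply:

class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = None           # internal representation of the world
        self.last_action = None     # most recent action, initially none
        self.update_state = update_state  # encodes how the world evolves
        self.rules = rules          # list of (condition, action) pairs

    def program(self, percept):
        # Fold the new percept and the effect of the last action into the
        # internal state, using the agent's model of the world.
        self.state = self.update_state(self.state, self.last_action, percept)
        # Fire the first rule whose condition matches the new state.
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action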


3. Goal-based agents
• Knowledge of the current state of the environment is not always sufficient for an agent
to decide what to do.
• The agent needs to know its goal, which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by adding "goal"
information.
• They choose actions so as to achieve the goal.
• These agents may have to consider a long sequence of possible actions before deciding
whether the goal is achieved. Such consideration of different scenarios is called searching
and planning, which makes an agent proactive.

4. Utility-based agents
• These agents are similar to goal-based agents but add a utility measure, which provides
a degree of success at a given state rather than a simple goal/no-goal distinction.
• Utility-based agents act based not only on goals but also on the best way to achieve them.
• They are useful when there are multiple possible alternatives and the agent has to choose
the best action.
• The utility function maps each state to a real number indicating how desirable that state
is (see the sketch below).
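The core of a utility-based agent fits in a few lines of Python. In this sketch, result stands for
the transition model and utility for the state-to-real-number utility function; both are
assumptions that the designer must supply:

def best_action(state, actions, result, utility):
    # Choose the action whose predicted outcome has the highest utility.
    return max(actions, key=lambda a: utility(result(state, a)))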


5. Learning agent
• A learning agent is an agent that can learn from its past experiences; that is, it has
learning capabilities.
• It starts with basic knowledge and is then able to act and adapt automatically through
learning.
• A learning agent has four main conceptual components:
• Learning element: responsible for making improvements by learning from the
environment.
• Critic: gives the learning element feedback on how well the agent is doing with respect
to a fixed performance standard.
• Performance element: responsible for selecting external actions.
• Problem generator: responsible for suggesting actions that will lead to new and
informative experiences.


How the components of agent programs work:


We've discussed agent programs at a high level, focusing on their components that answer
questions like "What is the world like now?" and "What action should I take?" Now, the next
step for AI students is to understand how these components actually work. This is a complex
topic that requires much deeper exploration, but we can start by highlighting some basic
distinctions in how these components represent the environment.
These representations can be categorized along a spectrum of complexity and expressive
power: atomic, factored, and structured. This classification helps us understand how an
agent models its environment and decides on actions. For example, consider the component
that handles "What my actions do"—this part of the agent program represents the changes in
the environment that result from the agent's actions. By understanding these different levels
of representation, we can begin to grasp the various ways in which an agent can interpret and
interact with its environment. Figure 2.16 provides schematic depictions of how those
transitions might be represented.
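As a quick Python illustration of the three levels (the attribute names here are invented for
the example):

# Atomic: the state is a black box with no internal structure.
state_atomic = 'In(Arad)'

# Factored: the state is a set of attribute-value pairs.
state_factored = {'location': 'Arad', 'fuel': 0.75, 'radio': 'off'}

# Structured: the state contains objects and the relations among them.
state_structured = {('City', 'Arad'), ('City', 'Sibiu'),
                    ('Road', 'Arad', 'Sibiu', 140)}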


MODULE - 2

3.1 Problem-solving agents


Intelligent agents are supposed to maximize their performance measure. Goal formulation,
based on the current situation and the agent’s performance measure, is the first step in
problem solving. The agent’s task is to find out how to act, now and in the future, so that it
reaches a goal state. Before it can do this, it needs to decide (or we need to decide on its
behalf) what sorts of actions and states it should consider.
Problem formulation is the process of deciding what actions and states to consider,
given a goal. The agent will not know which of its possible actions is best, because it does
not yet know enough about the state that results from taking each action. If the agent has no
additional information—i.e., if the environment is unknown in the sense defined—then it has
no choice but to try one of the actions at random.
The process of looking for a sequence of actions that reaches the goal is called search.
A search algorithm takes a problem as input and returns a solution in the form of an action
sequence. Once a solution is found, the actions it recommends can be carried out. This is
called the execution phase. Thus, we have a simple “formulate, search, execute” design for
the agent, as shown in Figure 3.1. After formulating a goal and a problem to solve, the agent
calls a search procedure to solve it. It then uses the solution to guide its actions, doing
whatever the solution recommends as the next thing to do—typically, the first action of the
sequence—and then removing that step from the sequence. Once the solution has been
executed, the agent will formulate a new goal. Notice that while the agent is executing the
solution sequence it ignores its percepts when choosing an action because it knows in advance
what they will be. An agent that carries out its plans with its eyes closed, so to speak, must be
quite certain of what is going on. Control theorists call this an open-loop system, because
ignoring the percepts breaks the loop between agent and environment. We first describe the
process of problem formulation, and then devote the bulk of the chapter to various algorithms
for the SEARCH function. We do not discuss the workings of the UPDATE-STATE and
FORMULATE-GOAL functions further in this chapter.
3.1.1 Well-defined problems and solutions
A problem can be defined formally by five components:
• The initial state that the agent starts in. For example, the initial state for our agent in
Romania might be described as In(Arad).


• A description of the possible actions available to the agent. Given a particular state s,
ACTIONS(s) returns the set of actions that can be executed in s. We say that each of these
actions is applicable in s. For example, from the state In(Arad), the applicable actions
are {Go(Sibiu), Go(Timisoara), Go(Zerind)}.
• A description of what each action does; the formal name for this is the transition
model, specified by a function RESULT(s, a) that returns the state that results from doing
action a in state s. We also use the term successor to refer to any state reachable from a given
state by a single action. For example, we have RESULT(In(Arad), Go(Zerind)) = In(Zerind).
Together, the initial state, actions, and transition model implicitly define the state space of
the problem—the set of all states reachable from the initial state by any sequence of actions.
The state space forms a directed network or graph in which the nodes are states and the links
between nodes are actions. (The map of Romania shown in Figure 3.2 can be interpreted as a
state-space graph if we view each road as standing for two driving actions, one in each
direction.) A path in the state space is a sequence of states connected by a sequence of actions.
• The goal test, which determines whether a given state is a goal state. Sometimes there
is an explicit set of possible goal states, and the test simply checks whether the given state is
one of them. The agent’s goal in Romania is the single tonset{In(Bucharest)}.


Sometimes the goal is specified by an abstract property rather than an explicitly enumerated
set of states. For example, in chess, the goal is to reach a state called "checkmate," where the
opponent’s king is under attack and can’t escape.
• A path cost function that assigns a numeric cost to each path. The problem-solving agent
chooses a cost function that reflects its own performance measure. For the agent trying to
get to Bucharest, time is of the essence, so the cost of a path might be its length in
kilometers. In this chapter, we assume that the cost of a path can be described as the sum of
the costs of the individual actions along the path. The step cost of taking action a in state s
to reach state s' is denoted by c(s, a, s'). The step costs for Romania are shown in Figure
3.2 as route distances. We assume that step costs are nonnegative. The preceding elements
define a problem and can be gathered into a single data structure that is given as input to a
problem-solving algorithm. A solution to a problem is an action sequence that leads from
the initial state to a goal state. Solution quality is measured by the path cost function, and
an optimal solution has the lowest path cost among all solutions.
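The text above suggests gathering these components into a single data structure. The
following Python packaging is one possibility (a sketch written for these notes, not the
textbook's code), shown with an abridged fragment of the Romania map:

class Problem:
    # A formal problem: initial state, actions, transition model,
    # goal test, and step costs.
    def __init__(self, initial, actions, result, goal_states, step_cost):
        self.initial = initial
        self._actions = actions          # dict: state -> list of actions
        self._result = result            # dict: (state, action) -> state
        self.goal_states = goal_states   # set of goal states
        self._step_cost = step_cost      # dict: (s, a, s') -> cost

    def actions(self, s):
        return self._actions[s]

    def result(self, s, a):
        return self._result[(s, a)]

    def goal_test(self, s):
        return s in self.goal_states

    def step_cost(self, s, a, s2):
        return self._step_cost[(s, a, s2)]

# Abridged Romania route-finding problem: only Arad's neighbors shown.
romania = Problem(
    initial='Arad',
    actions={'Arad': ['Go(Sibiu)', 'Go(Timisoara)', 'Go(Zerind)']},
    result={('Arad', 'Go(Sibiu)'): 'Sibiu',
            ('Arad', 'Go(Timisoara)'): 'Timisoara',
            ('Arad', 'Go(Zerind)'): 'Zerind'},
    goal_states={'Bucharest'},
    step_cost={('Arad', 'Go(Sibiu)', 'Sibiu'): 140,
               ('Arad', 'Go(Timisoara)', 'Timisoara'): 118,
               ('Arad', 'Go(Zerind)', 'Zerind'): 75},
)

print(romania.result('Arad', 'Go(Sibiu)'))  # -> Sibiu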

3.1.2 Formulating problems


In the preceding section we proposed a formulation of the problem of getting to Bucharest
in terms of the initial state, actions, transition model, goal test, and path cost. This formulation
seems reasonable, but it is still a model—an abstract mathematical description—and not the
real thing. Compare the simple state description we have chosen, In(Arad), to an actual
cross-country trip, where the state of the world includes so many things: the traveling companions,
the current radio program, the scenery out of the window, the proximity of law enforcement
officers, the distance to the next rest stop, the condition of the road, the weather, and so on.
All these considerations are left out of our state descriptions because they are irrelevant to the
problem of finding a route to Bucharest. The process of removing detail from a representation
is called abstraction. In addition to abstracting the state description, we must abstract the
actions themselves. A driving action has many effects. Besides changing the location of the
vehicle and its occupants, it takes up time, consumes fuel, generates pollution, and changes
the agent (as they say, travel is broadening). Our formulation takes into account only the
change in location. Also, there are many actions that we omit altogether: turning on the radio,
looking out of the window, slowing down for law enforcement officers, and so on. And of
course, we don’t specify actions at the level of “turn steering wheel to the left by one degree.”
Can we be more precise about defining the appropriate level of abstraction? Think of the
abstract states and actions we have chosen as corresponding to large sets of detailed world
states and detailed action sequences. Now consider a solution to the abstract problem: for
example, the path from Arad to Sibiu to Rimnicu Vilcea to Pitesti to Bucharest. This abstract
solution corresponds to a large number of more detailed paths. For example, we could drive
with the radio on between Sibiu and Rimnicu Vilcea, and then switch it off for the rest of the
trip. The abstraction is valid if we can expand any abstract solution into a solution in the more
detailed world; a sufficient condition is that for every detailed state that is “in Arad,” there is
a detailed path to some state that is “in Sibiu,” and so on. The abstraction is useful if carrying
out each of the actions in the solution is easier than the original problem; in this case they are
easy enough that they can be carried out without further search or planning by an average
driving agent. The choice of a good abstraction thus involves removing as much detail as
possible while retaining validity and ensuring that the abstract actions are easy to carry out.
Were it not for the ability to construct useful abstractions, intelligent agents would be
completely swamped by the real world.

3.2 EXAMPLE PROBLEMS


The problem-solving approach has been applied to a vast array of task environments. We list
some of the best known here, distinguishing between toy and real-world problems. A toy
problem is intended to illustrate or exercise various problem-solving methods. It can be given
a concise, exact description and hence is usable by different researchers to compare the
performance of algorithms. A real-world problem is one whose solutions people actually care
about. Such problems tend not to have a single agreed-upon description, but we can give the
general flavor of their formulations.

3.2.1 Toy problems


The first example we examine is the vacuum world first introduced in Chapter 2. (See Figure
2.2.) This can be formulated as a problem as follows:
• States: The state is determined by both the agent location and the dirt locations. The agent
is in one of two locations, each of which might or might not contain dirt. Thus, there are
2 × 2^2 = 8 possible world states. A larger environment with n locations has n · 2^n states.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions: Left, Right, and
Suck. Larger environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that moving Left in the
leftmost square, moving Right in the rightmost square, and Sucking in a clean square have
no effect. The complete state space is shown in Figure 3.3.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
Compared with the real world, this toy problem has discrete locations, discrete dirt, reliable
cleaning, and it never gets any dirtier. Chapter 4 relaxes some of these assumptions.
The 8-puzzle, an instance of which is shown in
Figure 3.4, consists of a 3×3 board with eight numbered tiles and a blank space. A tile adjacent
to the blank space can slide into the space. The object is to reach a specified goal state, such
as the one shown on the right of the figure. The standard formulation is as follows:
• States: A state description specifies the location of each of the eight tiles and the blank in
one of the nine squares.
• Initial state: Any state can be designated as the initial state. Note that any given goal can be
reached from exactly half of the possible initial states (Exercise 3.4).
• Actions: The simplest formulation defines the actions as movements of the blank space Left,
Right, Up, or Down. Different subsets of these are possible depending on where the blank is.
• Transition model: Given a state and action, this returns the resulting state; for example, if
we apply Left to the start state in Figure 3.4, the resulting state has the 5 and the blank switched.
• Goal test: This checks whether the state matches the goal configuration shown in Figure
3.4. (Other goal configurations are possible.)
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
What abstractions have we included here? The actions are abstracted to their beginning and
final states, ignoring the intermediate locations where the block is sliding. We have abstracted
away actions such as shaking the board when pieces get stuck and ruled out extracting the pieces
with a knife and putting them back again. We are left with a description of the rules of the
puzzle, avoiding all the details of physical manipulations. The 8-puzzle belongs to the family
of sliding-block puzzles, which are often used as test problems for new search algorithms in
AI. This family is known to be NP-complete, so one does not expect to find methods
significantly better in the worst case than the search algorithms described in this chapter and
the next. The 8-puzzle has 9!/2 = 181,440 reachable states and is easily solved. The 15-puzzle
(on a 4×4 board) has around 1.3 trillion states, and random instances can be solved optimally
in a few milliseconds by the best search algorithms. The 24-puzzle (on a 5×5 board) has around
10^25 states, and random instances take several hours to solve optimally.
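A compact Python encoding of this formulation, under the assumption that a state is a
9-tuple listing the board row by row with 0 standing for the blank:

def actions(state):
    # Movements of the blank: subsets of Left/Right/Up/Down,
    # depending on where the blank (0) sits on the 3x3 board.
    i = state.index(0)
    moves = []
    if i % 3 > 0: moves.append('Left')
    if i % 3 < 2: moves.append('Right')
    if i >= 3:    moves.append('Up')
    if i < 6:     moves.append('Down')
    return moves

def result(state, action):
    # Transition model: slide the adjacent tile into the blank.
    i = state.index(0)
    j = i + {'Left': -1, 'Right': 1, 'Up': -3, 'Down': 3}[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # blank first, then tiles in order

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)  # blank in the center
print(actions(start))                # -> ['Left', 'Right', 'Up', 'Down']
print(result(start, 'Left'))         # the 5 and the blank are switched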


The goal of the 8-queens problem is to place eight queens on a chessboard such that no queen
attacks any other. (A queen attacks any piece in the same row, column or diagonal.) Figure 3.5
shows an attempted solution that fails: the queen in the rightmost column is attacked by the
queen at the top left. Although efficient special-purpose algorithms exist for this problem and
for the whole n-queens family, it remains a useful test problem for search algorithms. There are
two main kinds of formulation. An incremental formulation involves operators that augment
the state description, starting with an empty state; for the 8-queens problem, this means that
each action adds a queen to the state. A complete state formulation starts with all 8 queens on
the board and moves them around. In either case, the path cost is of no interest because only
the final state counts. The first incremental formulation one might try is the following:

• States: Any arrangement of 0 to 8 queens on the board is a state.


• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to the specified square.
• Goal test: 8 queens are on the board, none attacked.
In this formulation, we have 64 · 63 ··· 57 ≈ 1.8 × 10^14 possible sequences to investigate. A
better formulation would prohibit placing a queen in any square that is already attacked:
• States: All possible arrangements of n queens (0 ≤ n ≤ 8), one per column in the
leftmost n columns, with no queen attacking another.
• Actions: Add a queen to any square in the leftmost empty column such that it is not
attacked by any other queen.
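This improved formulation is small enough to enumerate directly. The sketch below (an
illustration, not from the notes) represents a state as a tuple of row indices, one per filled
column, and counts every reachable state, reproducing the figure of 2,057 quoted next:

def unattacked(rows, row):
    # A queen in the next column at `row` is safe if no earlier queen
    # shares its row or a diagonal (rows[c] is the row of column c's queen).
    col = len(rows)
    return all(r != row and abs(r - row) != col - c
               for c, r in enumerate(rows))

def count_states(rows=()):
    # Count this state plus every state reachable by adding one queen
    # to the leftmost empty column on an unattacked square.
    if len(rows) == 8:
        return 1
    return 1 + sum(count_states(rows + (r,))
                   for r in range(8) if unattacked(rows, r))

print(count_states())  # -> 2057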
This formulation reduces the 8-queens state space from 1.8 × 10^14 to just 2,057, and solutions
are easy to find. On the other hand, for 100 queens the reduction is from roughly 10^400
states to about 10^52 states (Exercise 3.5)—a big improvement, but not enough to make the
problem tractable. Section 4.1 describes the complete-state formulation, and Chapter 6 gives
a simple algorithm that solves even the million-queens problem with ease.

Our final toy problem was devised by Donald Knuth (1964) and illustrates how infinite state
spaces can arise. Knuth conjectured that, starting with the number 4, a sequence of factorial,
square root, and floor operations will reach any desired positive integer. For example, we can
reach 5 from 4 as follows:

floor(sqrt(sqrt(sqrt(sqrt(sqrt((4!)!)))))) = 5

The problem definition is very simple:


• States: Positive numbers.
• Initial state: 4.
• Actions: Apply factorial, square root, or floor operation (factorial for integers only).
• Transition model: As given by the mathematical definitions of the operations.
• Goal test: State is the desired positive integer.
To our knowledge there is no bound on how large a number might be constructed in the
process of reaching a given target—for example, the number 620,448,401,733,239,439,360,000
is generated in the expression for 5—so the state space for this problem is infinite. Such state
spaces arise frequently in tasks involving the generation of mathematical expressions, circuits,
proofs, programs, and other recursively defined objects.
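A Python sketch of the transition model, plus a check of the worked example above (note
that any real search over this space would need a cutoff, since the state space is infinite):

import math

def successors(x):
    # Apply factorial (integers greater than 1 only), square root, and floor.
    out = []
    if x == int(x) and x >= 2:
        out.append(math.factorial(int(x)))
    out.append(math.sqrt(x))
    out.append(math.floor(x))
    return out

# Reaching 5 from 4: the floor of five nested square roots of (4!)!.
x = math.factorial(math.factorial(4))   # (4!)! = 24! = 620,448,...,360,000
for _ in range(5):
    x = math.sqrt(x)
print(math.floor(x))                    # -> 5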

3.2.2 Real-world problems


We have already seen how the route-finding problem is defined in terms of specified locations
and transitions along links between them. Route-finding algorithms are used in a variety of
applications. Some, such as Web sites and in-car systems that provide driving directions, are
relatively straightforward extensions of the Romania example. Others, such as routing video
streams in computer networks, military operations planning, and airline travel-planning
systems, involve much more complex specifications. Consider the airline travel problems that
must be solved by a travel-planning Web site:
• States: Each state obviously includes a location (e.g., an airport) and the current time.
Furthermore, because the cost of an action (a flight segment) may depend on previous
segments, their fare bases, and their status as domestic or international, the state must record
extra information about these "historical" aspects.


• Initial state: This is specified by the user’s query.


• Actions: Take any flight from the current location, in any seat class, leaving after the current
time, leaving enough time for within-airport transfer if needed.
• Transition model: The state resulting from taking a flight will have the flight’s destination
as the current location and the flight’s arrival time as the current time.
• Goal test: Are we at the final destination specified by the user?
• Path cost: This depends on monetary cost, waiting time, flight time, customs and
immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage
awards, and so on.
Commercial travel advice systems use a problem formulation of this kind, with many additional
complications to handle the byzantine fare structures that airlines impose. Any seasoned
traveler knows, however, that not all air travel goes according to plan. A really good system
should include contingency plans—such as backup reservations on alternate flights—to the
extent that these are justified by the cost and likelihood of failure of the original plan.
Touring problems are closely related to route-finding problems, but with an important
difference. Consider, for example, the problem “Visit every city in Figure 3.2 at least once,
starting and ending in Bucharest.” As with route finding, the actions correspond to trips between
adjacent cities. The state space, however, is quite different. Each state must include not just the
current location but also the set of cities the agent has visited. So the initial state would be
In(Bucharest), Visited({Bucharest}), a typical intermediate state would be In(Vaslui),
Visited({Bucharest, Urziceni, Vaslui}), and the goal test would check whether the agent is in
Bucharest and all 20 cities have been visited.

The traveling salesperson problem (TSP) is a touring problem in which each city must be
visited exactly once. The aim is to find the shortest tour. The problem is known to be NP-hard,
but an enormous amount of effort has been expended to improve the capabilities of TSP
algorithms. In addition to planning trips for traveling salespersons, these algorithms have been
used for tasks such as planning movements of automatic circuit-board drills and of stocking
machines on shop floors.

A VLSI layout problem requires positioning millions of components and connections on a chip
to minimize area, minimize circuit delays, minimize stray capacitances, and maximize
manufacturing yield. The layout problem comes after the logical design phase and is usually
split into two parts: cell layout and channel routing. In cell layout, the primitive components of
the circuit are grouped into cells, each of which performs some recognized function. Each cell
has a fixed footprint (size and shape) and requires a certain number of connections to each of
the other cells. The aim is to place the cells on the chip so that they do not overlap and so that
there is room for the connecting wires to be placed between the cells. Channel routing finds a
specific route for each wire through the gaps between the cells. These search problems are
extremely complex, but definitely worth solving. Later in this chapter, we present some
algorithms capable of solving them. Robot navigation is a generalization of the route-finding
problem described earlier. Rather than following a discrete set of routes, a robot can move in a
continuous space with (in principle) an infinite set of possible actions and states. For a circular
robot moving on a flat surface, the space is essentially two-dimensional. When the robot has
arms and legs or wheels that must also be controlled, the search space becomes many-
dimensional. Advanced techniques are required just to make the search space finite. In addition
to the complexity of the problem, real robots must also deal with errors in their sensor readings
and motor controls.

Automatic assembly sequencing of complex objects by a robot was first demonstrated by


FREDDY (Michie, 1972). Progress since then has been slow but sure, to the point where the
assembly of intricate objects such as electric motors is economically feasible. In assembly
problems, the aim is to find an order in which to assemble the parts of some object. If the wrong
order is chosen, there will be no way to add some part later in the sequence without undoing
some of the work already done. Checking a step in the sequence for feasibility is a difficult
geometrical search problem closely related to robot navigation. Thus, the generation of legal
actions is the expensive part of assembly sequencing. Any practical algorithm must avoid
exploring all but a tiny fraction of the state space. Another important assembly problem is
protein design, in which the goal is to find a sequence of amino acids that will fold into a
three-dimensional protein with the right properties to cure some disease.

3.3 SEARCHING FOR SOLUTIONS


Having formulated some problems, we now need to solve them. A solution is an action
sequence, so search algorithms work by considering various possible action sequences. The
possible action sequences starting at the initial state form a search tree with the initial state
at the root; the branches are actions and the nodes correspond to states in the state
space of the problem. Figure 3.6 shows the first few steps in growing the search tree for
finding a route from Arad to Bucharest. The root node of the tree corresponds to the initial
state, In(Arad). The first step is to test whether this is a goal state. (Clearly it is not, but it is
important to check so that we can solve trick problems like "starting in Arad, get to
Arad.") Then we need to consider taking various actions. We do this by expanding the current
state; that is, applying each legal action to the current state, thereby generating a new set of
states. In this case, we add three branches from the parent node In(Arad), leading to three
new child nodes: In(Sibiu), In(Timisoara), and In(Zerind). Now we must choose which of
these three possibilities to consider further.
This is the essence of search—following up one option now and putting the others aside for
later, in case the first choice does not lead to a solution. Suppose we choose Sibiu first. We
check to see whether it is a goal state (it is not) and then expand it to get In(Arad), In(Fagaras),
In(Oradea), and In(Rimnicu Vilcea). We can then choose any of these four or go back and
choose Timisoara or Zerind. Each of these six nodes is a leaf node, that is, a node with no
children in the tree. The set of all leaf nodes available for expansion at any given point is called
the frontier. (Many authors call it the open list, which is both geographically less evocative and
less accurate, because other data structures are better suited than a list.) In Figure 3.6, the
frontier of each tree consists of those nodes with bold outlines.

The process of expanding nodes on the frontier continues until either a solution is found or
there are no more states to expand. The general TREE-SEARCH algorithm is shown informally
in Figure 3.7. Search algorithms all share this basic structure; they vary primarily according to
how they choose which state to expand next—the so-called search strategy.
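Since Figure 3.7 is not reproduced in these notes, here is a Python sketch of the same general
design (an illustration, assuming the Problem class sketched in Section 3.1.1 above; the FIFO
frontier shown is just one possible search strategy):

from collections import deque

class Node:
    # Search-tree node: state, parent, generating action, path cost g(n).
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state, self.parent = state, parent
        self.action, self.path_cost = action, path_cost

def expand(problem, node):
    # Apply every legal action to the node's state, yielding child nodes.
    for action in problem.actions(node.state):
        s2 = problem.result(node.state, action)
        g = node.path_cost + problem.step_cost(node.state, action, s2)
        yield Node(s2, node, action, g)

def graph_search(problem):
    # TREE-SEARCH is this same loop without the explored set.
    frontier = deque([Node(problem.initial)])   # FIFO queue of leaf nodes
    explored = set()                            # the "closed list"
    while frontier:                             # empty frontier -> failure
        node = frontier.popleft()               # strategy: expand oldest first
        if problem.goal_test(node.state):
            return node                         # solution found
        explored.add(node.state)
        for child in expand(problem, node):
            if child.state not in explored and \
               all(child.state != n.state for n in frontier):
                frontier.append(child)
    return None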
The eagle-eyed reader will notice one peculiar thing about the search tree shown in Figure 3.6:
it includes the path from Arad to Sibiu and back to Arad again! We say that In(Arad) is a
repeated state in the search tree, generated in this case by a loopy path. Considering such
loopy paths means that the complete search tree for Romania is infinite because there is no
limit to how often one can traverse a loop. On the other hand, the state space—the map shown
in Figure 3.2—has only 20 states. As we discuss in Section 3.4, loops can cause certain
algorithms to fail, making otherwise solvable problems unsolvable. Fortunately, there is no
need to consider loopy paths. We can rely on more than intuition for this: because path costs
are additive and step costs are nonnegative, a loopy path to any given state is never better than
the same path with the loop removed. Loopy paths are a special case of the more general
concept of redundant paths, which
exist whenever there is more than one way to get from one state to another. Consider the paths
Arad–Sibiu (140 km long) and Arad–Zerind–Oradea–Sibiu (297 km long). Obviously, the
second path is redundant: it's just a worse way to get to the same state. If you are concerned
about reaching the goal, there's never any reason to keep more than one path to any given state,
because any goal state that is reachable by extending one path is also reachable by extending
the other. In some cases, it is possible to define the problem itself so as to eliminate redundant
paths. For example, if we formulate the 8-queens problem so that a queen can be placed in any
column, then each state with n queens can be reached by n! different paths; but if we
reformulate the problem so that each new queen is placed in the leftmost empty column, then
each state can be reached only through one path.


In other cases, redundant paths are unavoidable. This includes all problems where the actions
are reversible, such as route-finding problems and sliding-block puzzles. Route finding on a
rectangular grid (like the one used later for Figure 3.9) is a particularly important example in
computer games. In such a grid, each state has four successors, so a search tree of depth d that
includes repeated states has 4^d leaves; but there are only about 2d^2 distinct states within d
steps of any given state. For d = 20, this means about a trillion nodes but only about 800
distinct states. Thus, following redundant paths can cause a tractable problem to become
intractable. This is true even for algorithms that know how to avoid infinite loops. As the
saying goes, algorithms that forget their history are doomed to repeat it. The way to avoid
exploring redundant paths is to remember where one has been. To do this, we augment the
TREE-SEARCH algorithm with a data structure called the explored set (also known as the
closed list), which remembers every expanded node. Newly generated nodes that match
previously generated nodes—ones in the explored set or the frontier—can be discarded instead
of being added to the frontier. The new algorithm, called GRAPH-SEARCH, is shown
informally in Figure 3.7. The specific algorithms in this chapter draw on this general design.
Clearly, the search tree constructed by the GRAPH-SEARCH algorithm contains at most one
copy of each state, so we can think of it as growing a tree directly on the state-space graph, as
shown in Figure 3.8. The algorithm has another nice property: the frontier separates the
state-space graph into the explored region and the unexplored region, so that every path from
the initial state to an unexplored state has to pass through a state in the frontier. (If this seems
completely obvious, try Exercise 3.13 now.) This
property is illustrated in Figure 3.9. As every step moves a state from the frontier into the
explored region while moving some states from the unexplored region into the frontier, we see
that the algorithm is systematically examining the states in the state space, one by one, until it
finds a solution.

3.3.1 Infrastructure for search algorithms


Search algorithms require a data structure to keep track of the search tree that is being
constructed. For each node of the tree, we have a structure that contains four components:
• n.STATE: the state in the state space to which the node corresponds;
• n.PARENT: the node in the search tree that generated this node;
• n.ACTION: the action that was applied to the parent to generate the node;
• n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state
to the node, as indicated by the parent pointers.

Given the components for a parent node, it is easy to see how to compute the necessary
components for a child node. The function CHILD-NODE takes a parent node and an action
and returns the resulting child node:
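A minimal Python sketch of the node structure, CHILD-NODE, and SOLUTION follows; the names problem.result and problem.step_cost are illustrative assumptions about the problem interface, not from the text.

class Node:
    """Search-tree node: wraps a state plus bookkeeping fields."""
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # n.STATE
        self.parent = parent        # n.PARENT
        self.action = action        # n.ACTION
        self.path_cost = path_cost  # n.PATH-COST, i.e., g(n)

def child_node(problem, parent, action):
    """Build the child node reached from `parent` by `action`."""
    state = problem.result(parent.state, action)
    cost = parent.path_cost + problem.step_cost(parent.state, action)
    return Node(state, parent, action, cost)

def solution(node):
    """Follow PARENT pointers back to the root, collecting actions."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))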

The node data structure is depicted in Figure 3.10. Notice how the PARENT pointers string the
nodes together into a tree structure. These pointers also allow the solution path to be extracted
when a goal node is found; we use the SOLUTION function to return the sequence of actions
obtained by following parent pointers back to the root. Up to now, we have not been very careful
to distinguish between nodes and states, but in writing detailed algorithms it's important to
make that distinction. A node is a bookkeeping data structure used to represent the search tree.
A state corresponds to a configuration of the world. Thus, nodes are on particular paths, as
defined by PARENT pointers, whereas states are not. Furthermore, two different nodes can
contain the same world state if that state is generated via two different search paths. Now that
we have nodes, we need somewhere to put them. The frontier needs to be stored in such a way
that the search algorithm can easily choose the next node to expand according to its preferred
strategy. The appropriate data structure for this is a queue. The operations on a queue are as
follows:
• EMPTY?(queue) returns true only if there are no more elements in the queue.
• POP(queue) removes the first element of the queue and returns it.
• INSERT(element, queue) inserts an element and returns the resulting queue.
Queues are characterized by the order in which they store the inserted nodes. Three common
variants are the first-in, first-out or FIFO queue, which pops the oldest element of the queue;
the last-in, first-out or LIFO queue (also known as a stack), which pops the newest element of
the queue; and the priority queue, which pops the element of the queue with the highest priority
according to some ordering function. The explored set can be implemented with a hash table to
allow efficient checking for repeated states. With a good implementation, insertion and lookup
can be done in roughly constant time no matter how many states are stored. One must take care
to implement the hash table with the right notion of equality between states. For example, in
the traveling salesperson problem the hash table needs to know that the set of visited cities
{Bucharest, Urziceni, Vaslui} is the same as {Urziceni, Vaslui, Bucharest}. Sometimes this can
be achieved most easily by insisting that the data structures for states be in some canonical
form; that is, logically equivalent states should map to the same data structure. In the case of
states described by sets, for example, a bit-vector representation or a sorted list without
repetition would be canonical, whereas an unsorted list would not. A sketch of these data
structures follows.
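The following minimal Python sketch illustrates the three queue variants and a canonical representation for set-valued states; the city names follow the Romania example, everything else is illustrative.

from collections import deque
import heapq

frontier = deque()              # FIFO queue: popleft() removes the oldest element
frontier.append('A'); frontier.append('B')
assert frontier.popleft() == 'A'

stack = []                      # LIFO queue (stack): pop() removes the newest element
stack.append('A'); stack.append('B')
assert stack.pop() == 'B'

pq = []                         # priority queue ordered by an evaluation function
heapq.heappush(pq, (3, 'C')); heapq.heappush(pq, (1, 'A'))
assert heapq.heappop(pq) == (1, 'A')   # lowest cost comes out first

# Canonical form for set-valued states: a frozenset hashes the same
# regardless of the order in which cities were inserted.
explored = set()
explored.add(frozenset({'Bucharest', 'Urziceni', 'Vaslui'}))
assert frozenset({'Urziceni', 'Vaslui', 'Bucharest'}) in explored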

3.3.2 Measuring problem-solving performance

Before we get into the design of specific search algorithms, we need to consider the criteria that
might be used to choose among them. We can evaluate an algorithm’s performance in four
ways:
• Completeness: Is the algorithm guaranteed to find a solution when there is one?
• Optimality: Does the strategy find the optimal solution?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform the search?
Time and space complexity are always considered with respect to some measure of the
problem difficulty. In theoretical computer science, the typical measure is the size of the state
space graph, |V| + |E|, where V is the set of vertices (nodes) of the graph and E is the set of edges
(links). This is appropriate when the graph is an explicit data structure that is input to the search
program. (The map of Romania is an example of this.) In AI, the graph is often represented
implicitly by the initial state, actions, and transition model and is frequently infinite. For these
reasons, complexity is expressed in terms of three quantities: b, the branching factor or
maximum number of successors of any node; d, the depth of the shallowest goal node (i.e., the
number of steps along the path from the root); and m, the maximum length of any path in the
state space. Time is often measured in terms of the number of nodes generated during the search,
and space in terms of the maximum number of nodes stored in memory. For the most part, we
describe time and space complexity for search on a tree; for a graph, the answer depends on
how "redundant" the paths in the state space are. To assess the effectiveness of a search
algorithm, we can consider just the search cost (which typically depends on the time
complexity but can also include a term for memory usage) or we can use the total cost, which
combines the search cost and the path cost of the solution found. For the problem of finding a
route from Arad to Bucharest, the search cost is the amount of time taken by the search and the
solution cost is the total length of the path in kilometers. Thus, to compute the total cost, we
have to add milliseconds and kilometers. There is no "official exchange rate" between the two,
but it might be reasonable in this case to convert kilometers into milliseconds by using an
estimate of the car's average speed (because time is what the agent cares about). This enables
the agent to find an optimal tradeoff point at which further computation to find a shorter path
becomes counterproductive. The more general problem of tradeoffs between different goods is
taken up in later chapters.

3.4 UNINFORMED SEARCH STRATEGIES


This section covers several search strategies that come under the heading of uninformed search
(also called blind search). The term means that the strategies have no additional information
about states beyond that provided in the problem definition. All they can do is generate
successors and distinguish a goal state from a non-goal state. All search strategies are
distinguished by the order in which nodes are expanded. Strategies that know whether one non-
goal state is “more promising” than another are called informed search or heuristic search
strategies.
3.4.1 Breadth-first search
Breadth-first search is a simple strategy in which the root node is expanded first, then all the
successors of the root node are expanded next, then their successors, and so on. In general, all
the nodes are expanded at a given depth in the search tree before any nodes at the next level are
expanded. Breadth-first search is an instance of the general graph-search algorithm (Figure
3.11) in which the shallowest unexpanded node is chosen for expansion. This is achieved very
simply by using a FIFO queue for the frontier. Thus, new nodes (which are always deeper than
their parents) go to the back of the queue, and old nodes, which are shallower than the new
nodes, get expanded first. There is one slight tweak on the general graph-search algorithm,
which is that the goal test is applied to each node when it is generated rather than when it is
selected for expansion. This decision is explained below, where we discuss time complexity.

Note also that the algorithm, following the general template for graph search, discards any new
path to a state already in the frontier or explored set; it is easy to see that any such path must be
at least as deep as the one already found. Thus, breadth-first search always has the shallowest
path to every node on the frontier. Pseudocode is given in Figure 3.11, and Figure 3.12 shows
the progress of the search on a simple binary tree. How does breadth-first search rate according
to the four criteria from the previous section? We can easily see that it is complete: if the
shallowest goal node is at some finite depth d, breadth-first search will eventually find it after
generating all shallower nodes (provided the branching factor b is finite). Note that as soon as
a goal node is generated, we know it is the shallowest goal node because all shallower nodes
must have been generated already and failed the goal test. Now, the shallowest goal node is not
necessarily the optimal one; technically, breadth-first search is optimal if the path cost is a
nondecreasing function of the depth of the node.

The most common such scenario is that all actions have the same cost. So far, the news about
breadth-first search has been good. The news about time and space is not so good. Imagine
searching a uniform tree where every state has b successors. The root of the search tree
generates b nodes at the first level, each of which generates b more nodes, for a total of b^2 at
the second level. Each of these generates b more nodes, yielding b^3 nodes at the third level,
and so on. Now suppose that the solution is at depth d. In the worst case, it is the last node
generated at that level. Then the total number of nodes generated is
b + b^2 + b^3 + ··· + b^d = O(b^d). (If the algorithm were to apply the goal test to nodes when
selected for expansion, rather than when generated, the whole layer of nodes at depth d would
be expanded before the goal was detected, and the time complexity would be O(b^(d+1)).)

As for space complexity: for any kind of graph search, which stores every expanded node in
the explored set, the space complexity is always within a factor of b of the time complexity. For
breadth-first graph search in particular, every node generated remains in memory. There will be
O(b^(d-1)) nodes in the explored set and O(b^d) nodes in the frontier, so the space complexity
is O(b^d); i.e., it is dominated by the size of the frontier. Switching to a tree search would not
save much space, and in a state space with many redundant paths, switching could cost a great
deal of time. An exponential complexity bound such as O(b^d) is scary. Figure 3.13 shows
why. It lists, for various values of the solution depth d, the time and memory required for a
breadth-first search with branching factor b = 10. The table assumes that 1 million nodes can be
generated per second and that a node requires 1000 bytes of storage. Many search problems fit
roughly within these assumptions (give or take a factor of 100) when run on a modern personal
computer.
Two lessons can be learned from Figure 3.13. First, the memory requirements are a bigger
problem for breadth-first search than is the execution time. One might wait 13 days for the
solution to an important problem with search depth 12, but no personal computer has the
petabyte of memory it would take. Fortunately, other strategies require less memory. The
second lesson is that time is still a major factor. If your problem has a solution at depth 16, then
(given our assumptions) it will take about 350 years for breadth-first search (or indeed any
uninformed search) to find it. In general, exponential-complexity search problems cannot be
solved by uninformed methods for any but the smallest instances.
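A rough Python sketch of breadth-first graph search follows, with the goal test applied at node-generation time as described above. The problem interface (initial_state, is_goal, actions, result) is assumed, not prescribed by the text.

from collections import deque

def breadth_first_search(problem):
    """BFS sketch: FIFO frontier, goal test when a node is generated."""
    node = problem.initial_state
    if problem.is_goal(node):
        return [node]
    frontier = deque([node])
    parents = {node: None}                # records explored and frontier states
    while frontier:
        state = frontier.popleft()        # FIFO: shallowest node first
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in parents:
                parents[child] = state
                if problem.is_goal(child):    # test at generation saves a layer
                    path = [child]
                    while parents[path[-1]] is not None:
                        path.append(parents[path[-1]])
                    return list(reversed(path))
                frontier.append(child)
    return None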

3.4.3 Depth-first search


Depth-first search always expands the deepest node in the current frontier of the search tree.
The progress of the search is illustrated in Figure 3.16. The search proceeds immediately to
the deepest level of the search tree, where the nodes have no successors. As those nodes are
expanded, they are dropped from the frontier, and the search then "backs up" to the next
deepest node that still has unexplored successors. The depth-first search algorithm is an
instance of the graph-search algorithm in Figure 3.7; whereas breadth-first search uses a FIFO
queue, depth-first search uses a LIFO queue. A LIFO queue means that the most recently
generated node is chosen for expansion. This must be the deepest unexpanded node because
it is one deeper than its parent, which, in turn, was the deepest unexpanded node when it
was selected. As an alternative to the GRAPH-SEARCH-style implementation, it is common
to implement depth-first search with a recursive function that calls itself on each of its
children in turn. (A recursive depth-first algorithm incorporating a depth limit is shown in
Figure 3.17.)

The properties of depth-first search depend strongly on whether the graph-search or the tree-search
version is used. The graph-search version, which avoids repeated states and redundant paths, is
complete in finite state spaces because it will eventually expand every node. The tree-search
version, on the other hand, is not complete: it can follow a loop such as Arad–Sibiu–Arad–Sibiu
forever.

MODULE - 3

3.5 INFORMED (HEURISTIC) SEARCH STRATEGIES


A heuristic technique helps in solving problems, even though there is no guarantee that it
will never lead in the wrong direction. There are heuristics of general applicability as
well as domain-specific ones. The strategies below are general-purpose heuristics; to use them
in a specific domain, they are coupled with domain-specific heuristics. There are two
major ways in which domain-specific heuristic information can be incorporated into a rule-based
search procedure. A heuristic function maps from problem state descriptions to measures
of desirability, usually represented as numbers. The value of a heuristic function at a given
node in the search process gives a good estimate of whether that node is on the desired path
to a solution.
3.5.1 Greedy best-first search
Greedy best-first search tries to expand the node that is closest to the goal, on the grounds
that this is likely to lead to a solution quickly. Thus, it evaluates nodes by using just the heuristic
function:
f(n) = h(n)
Taking the example of route-finding problems in Romania, the goal is to reach Bucharest
starting from the city Arad. We need to know the straight-line distances to Bucharest from
various cities, as shown in Figure 3.22. For example, the initial state is In(Arad), and the
straight-line distance heuristic hSLD(In(Arad)) is 366. Using the straight-line distance
heuristic hSLD, the goal state can be reached faster.

Figure 3.23 shows the progress of greedy best-first search using hSLD to find a path from Arad
to Bucharest. The first node to be expanded from Arad will be Sibiu, because it is closer to
Bucharest than either Zerind or Timisoara. The next node to be expanded will be Fagaras,
because it is closest. Fagaras in turn generates Bucharest, which is the goal.

Evaluation Criterion of Greedy Search


• Complete: No [can get stuck in loops; complete in a finite space with repeated-state
checking]
• Time Complexity: O(b^m) [but a good heuristic can give dramatic improvement]
• Space Complexity: O(b^m) [keeps all nodes in memory]
• Optimal: No

Greedy best-first search is not optimal, and it is incomplete. The worst-case time and space
complexity is O(b^m), where m is the maximum depth of the search space.
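A minimal Python sketch of greedy best-first search follows, ordering the frontier by h(n) alone; the problem interface and the heuristic argument h (e.g., hSLD for Romania) are assumptions for illustration.

import heapq
from itertools import count

def greedy_best_first_search(problem, h):
    """Best-first search with f(n) = h(n); `h` maps a state to its
    estimated distance to the goal."""
    tie = count()                            # tie-breaker so states never compare
    start = problem.initial_state
    frontier = [(h(start), next(tie), start)]
    parents = {start: None}
    while frontier:
        _, _, state = heapq.heappop(frontier)    # node that looks closest to goal
        if problem.is_goal(state):
            path = [state]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return list(reversed(path))
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in parents:
                parents[child] = state
                heapq.heappush(frontier, (h(child), next(tie), child))
    return None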

3.5.2 A* search: Minimizing the total estimated solution cost:


The most widely known form of best-first search is called A∗ search (pronounced "A-star
search"). It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to
get from the node to the goal: f(n) = g(n) + h(n). Since g(n) gives the path cost from the start
node to node n, and h(n) is the estimated cost of the cheapest path from n to the goal, we have
f(n) = estimated cost of the cheapest solution through n
Conditions for optimality: Admissibility and consistency:
The first condition we require for optimality is that h(n) be an admissible heuristic. An
admissible heuristic is one that never overestimates the cost to reach the goal. Because g(n) is
the actual cost to reach n along the current path, and f(n) = g(n) + h(n), we have as an immediate
consequence that f(n) never overestimates the true cost of a solution along the current path
through n.
In Figure 3.24, we show the progress of an A∗ tree search for Bucharest. Notice in particular
that Bucharest first appears on the frontier at step (e), but it is not selected for expansion
because its f-cost (450) is higher than that of Pitesti (417). Another way to say this is that there
might be a solution through Pitesti whose cost is as low as 417, so the algorithm will not settle
for a solution that costs 450.
A second, slightly stronger condition called consistency (or sometimes monotonicity) is
required only for applications of A∗ to graph search. A heuristic h(n) is consistent if, for every
node n and every successor n′ of n generated by any action a, the estimated cost of reaching the
goal from n is no greater than the step cost of getting to n′ plus the estimated cost of reaching
the goal from n′: h(n) ≤ c(n, a, n′) + h(n′). This is a form of the general triangle inequality,
which stipulates that each side of a triangle cannot be longer than the sum of the other two
sides. Here, the triangle is formed by n, n′, and the goal Gn closest to n.
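The following is a rough Python sketch of A∗ graph search under the assumption that h is consistent (so a state can be treated as settled the first time it is popped); the problem interface and step_cost are illustrative, not from the text.

import heapq
from itertools import count

def a_star_search(problem, h):
    """A* graph search with f(n) = g(n) + h(n)."""
    tie = count()
    start = problem.initial_state
    frontier = [(h(start), next(tie), start)]
    g = {start: 0}                           # best known path cost g(n)
    parents = {start: None}
    explored = set()
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if state in explored:
            continue                         # stale entry left by a cheaper rival
        if problem.is_goal(state):           # goal test on expansion, not generation
            path = [state]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return list(reversed(path))      # optimal path; its cost is g[state]
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            new_g = g[state] + problem.step_cost(state, action)
            if new_g < g.get(child, float('inf')):
                g[child] = new_g
                parents[child] = state
                heapq.heappush(frontier, (new_g + h(child), next(tie), child))
    return None

Testing the goal only when a node is selected for expansion is what keeps A∗ optimal; this is exactly why Bucharest, first generated with f-cost 450, is not accepted while Pitesti (417) is still on the frontier.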
3.5.3 Memory-bounded heuristic search
The simplest way to reduce memory requirements for A∗ is to adapt the idea of iterative
deepening to the heuristic search context, resulting in the iterative-deepening A∗ (IDA∗)
algorithm. The main difference between IDA∗ and standard iterative deepening is that the
cutoff used is the f-cost (g + h) rather than the depth; at each iteration, the cutoff value is the
smallest f-cost of any node that exceeded the cutoff on the previous iteration.
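A rough Python sketch of IDA∗ under the same assumed problem interface; the cycle check along the current path is an implementation choice for the sketch, not mandated by the text.

def ida_star_search(problem, h):
    """IDA*: depth-first probes bounded by an f-cost cutoff; each iteration
    raises the cutoff to the smallest f-cost that exceeded it."""
    def probe(state, g, bound, path):
        f = g + h(state)
        if f > bound:
            return f, None                    # report the f-cost that overran
        path = path + [state]                 # current path, used to avoid cycles
        if problem.is_goal(state):
            return f, path
        smallest = float('inf')
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child in path:
                continue
            over, found = probe(child, g + problem.step_cost(state, action),
                                bound, path)
            if found is not None:
                return over, found
            smallest = min(smallest, over)
        return smallest, None

    bound = h(problem.initial_state)
    while True:
        bound, found = probe(problem.initial_state, 0, bound, [])
        if found is not None:
            return found                      # list of states along the solution
        if bound == float('inf'):
            return None                       # the goal is unreachable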
Recursive best-first search (RBFS) is a simple recursive algorithm that attempts to mimic the
operation of standard best-first search, but using only linear space. The algorithm is shown in
Figure 3.26.

Department of CSE, ATMECE, Mysuru Page 39


Artificial Intelligence - BCS515B

Its structure is similar to that of a recursive depth-first search, but rather than continuing
indefinitely down the current path, it uses the f_limit variable to keep track of the f-value of
the best alternative path available from any ancestor of the current node. If the current node
exceeds this limit, the recursion unwinds back to the alternative path.

As the recursion unwinds, RBFS replaces the f-value of each node along the path with a
backed-up value: the best f-value of its children. In this way, RBFS remembers the f-value of
the best leaf in the forgotten subtree and can therefore decide whether it's worth re-expanding
the subtree at some later time. Figure 3.27 shows how RBFS reaches Bucharest. RBFS is
somewhat more efficient than IDA∗ but still suffers from excessive node regeneration. In the
example in Figure 3.27, RBFS follows the path via Rimnicu Vilcea, then "changes its mind"
and tries Fagaras, and then changes its mind back again.
Like A∗ tree search, RBFS is an optimal algorithm if the heuristic function h(n) is admissible.
Its space complexity is linear in the depth of the deepest optimal solution, but its time
complexity is rather difficult to characterize: it depends both on the accuracy of the heuristic
function and on how often the best path changes as nodes are expanded. IDA∗ and RBFS suffer
from using too little memory. Between iterations, IDA∗ retains only a single number: the
current f-cost limit. RBFS retains more information in memory, but it uses only linear space:
even if more memory were available, RBFS has no way to make use of it.
It therefore seems sensible to use all available memory. SMA∗ (simplified memory-bounded
A∗) does just that: it proceeds like A∗, expanding the best leaf until memory is full, and then
drops the worst leaf node (the one with the highest f-value) to make room. On very hard
problems, however, it will often be the case that SMA∗ is forced to switch back and forth
continually among many candidate solution paths, only a small subset of which can fit in
memory. That is to say, memory limitations can make a problem intractable from the point of
view of computation time. Although no current theory explains the tradeoff between time and
memory, it seems that this is an inescapable problem. The only way out is to drop the
optimality requirement.

3.5.4 Learning to search better


Each state in a metalevel state space captures the internal (computational) state of a program
that is searching in an object-level state space such as Romania. For example, the internal state
of the A∗ algorithm consists of the current search tree. Each action in the metalevel state space
is a computation step that alters the internal state; for example, each computation step in A∗
expands a leaf node and adds its successors to the tree.

3.6 Heuristic functions


• Heuristic functions are used in order to reach the goal node more efficiently.
• There are typically several pathways in a search tree from the current node to the goal node;
a heuristic helps choose among them.
• The selection of a good heuristic function certainly matters.
• A good heuristic function is judged by its efficiency.
• The more informative the heuristic, the more it can reduce the search, although computing a
more informative heuristic may itself take more processing time.
Example:
• Consider the following 8-puzzle problem, where we have a start state and a goal state.
• Our task is to slide the tiles of the current/start state and place them in the order followed in
the goal state. Each move slides a tile left, right, up, or down.
• There can be several ways to convert the current/start state to the goal state, but we can use a
heuristic function h(n) to solve the problem more efficiently.

• It can be seen from the state-space tree for this example that the heuristic value decreases
from h(n) = 3 at the start state to h(n) = 0 at the goal state. We can create and use several
heuristic functions as per the requirement.
• It is also clear from this example that a heuristic function h(n) can be defined as the
information required to solve a given problem more efficiently.
• The information can be related to the nature of the state, the cost of transforming from one
state to another, goal node characteristics, etc., and is expressed as a heuristic function.
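Two standard heuristic functions for the 8-puzzle, sketched in Python with a state represented as a tuple of 9 entries (0 for the blank); the goal layout shown is one common convention, and both functions never overestimate, so both are admissible.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 denotes the blank tile

def h_misplaced(state, goal=GOAL):
    """h1: number of tiles out of place (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h_manhattan(state, goal=GOAL):
    """h2: total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)                 # where this tile belongs
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

# e.g. h_misplaced((1, 2, 3, 4, 5, 6, 0, 7, 8)) == 2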

Properties of a Heuristic search Algorithm


The use of a heuristic function in a search algorithm leads to the following properties:
• Admissible Condition: An algorithm is said to be admissible if it returns an optimal solution.
• Completeness: An algorithm is said to be complete if it terminates with a solution (when a
solution exists).
• Dominance Property: If there are two admissible heuristic algorithms A1 and A2 with
heuristic functions h1 and h2, then A1 is said to dominate A2 if h1 is better than h2 for all
values of node n.
• Optimality Property: If an algorithm is complete, admissible, and dominates the other
algorithms, it will be the best one and will definitely give an optimal solution.

Chapter 7:
Logical Agents

7.1 Knowledge-Based Agent in Artificial intelligence

An intelligent agent needs knowledge about the real world in order to make decisions,
reason, and act efficiently.

Knowledge-based agents are those agents that have the capability of maintaining an
internal state of knowledge, reasoning over that knowledge, updating their knowledge after
observations, and taking actions. These agents can represent the world with some formal
representation and act intelligently.
Knowledge-based agents are composed of two main parts:
• Knowledge base and
• Inference system.

A knowledge-based agent must be able to do the following:


o An agent should be able to represent states, actions, etc.
o An agent should be able to incorporate new percepts.
o An agent can update the internal representation of the world.
o An agent can deduce hidden properties of the world.
o An agent can deduce appropriate actions.

The architecture of a knowledge-based agent:

The diagram above represents a generalized architecture for a knowledge-based agent.

The knowledge-based agent (KBA) takes input from the environment by perceiving it. The
input goes to the inference engine of the agent, which communicates with the KB to decide
on an action as per the knowledge stored in the KB. The learning element of the KBA
regularly updates the KB by learning new knowledge.

Knowledge base: The knowledge base is the central component of a knowledge-based agent;
it is also known as the KB. It is a collection of sentences (here 'sentence' is a technical term
and is not identical to a sentence in English). These sentences are expressed in a language
called a knowledge representation language. The knowledge base of a KBA stores facts
about the world.

Why use a knowledge base?

A knowledge base is required so that an agent can update its knowledge, learn from
experience, and take action as per that knowledge.

Inference system:

Inference means deriving new sentences from old ones. The inference system allows us to add
a new sentence to the knowledge base. A sentence is a proposition about the world. The
inference system applies logical rules to the KB to deduce new information.

The inference system generates new facts so that the agent can update the KB. An inference
system works mainly with two rules, which are:
o Forward chaining
o Backward chaining

Operations Performed by KBA

Following are the three operations performed by a KBA in order to show intelligent
behavior:

1. TELL: This operation tells the knowledge base what it perceives from the
environment.
2. ASK: This operation asks the knowledge base what action it should perform.
3. PERFORM: It performs the selected action.

A generic knowledge-based agent:

Following is an outline of a generic knowledge-based agent program:

function KB-AGENT(percept) returns an action
    persistent: KB, a knowledge base
                t, a counter, initially 0, indicating time

    TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
    action ← ASK(KB, MAKE-ACTION-QUERY(t))
    TELL(KB, MAKE-ACTION-SENTENCE(action, t))
    t ← t + 1
    return action

The knowledge-based agent takes a percept as input and returns an action as output. The
agent maintains the knowledge base, KB, which initially contains some background
knowledge of the real world. It also has a counter to indicate the time for the whole process,
and this counter is initialized to zero.

Each time the function is called, it performs its three operations:

o First, it TELLs the KB what it perceives.
o Second, it ASKs the KB what action it should take.
o Third, the agent program TELLs the KB which action was chosen.
MAKE-PERCEPT-SENTENCE generates a sentence asserting that the agent perceived the
given percept at the given time.
MAKE-ACTION-QUERY generates a sentence that asks which action should be done at the
current time.
MAKE-ACTION-SENTENCE generates a sentence asserting that the chosen action was
executed.

Various levels of a knowledge-based agent:

A knowledge-based agent can be viewed at different levels, which are given below:

1. Knowledge level
The knowledge level is the first level of a knowledge-based agent. At this level, we specify
what the agent knows and what the agent's goals are. With these specifications, we can fix its
behavior. For example, suppose an automated taxi agent needs to go from station A to
station B, and it knows the way from A to B; this comes at the knowledge level.

2. Logical level:
At this level, we consider how the knowledge is represented and stored. At this level,
sentences are encoded into different logics: an encoding of knowledge into logical sentences
occurs. At the logical level, we can expect the automated taxi agent to reach the
destination B.
3. Implementation level:
This is the physical representation of logic and knowledge. At the implementation level, the
agent performs actions as per the logical and knowledge levels. At this level, the automated
taxi agent actually implements its knowledge and logic to reach the destination.

Approaches to designing a knowledge-based agent:

There are mainly two approaches to building a knowledge-based agent:

1. Declarative approach: We can create a knowledge-based agent by initializing it with
an empty knowledge base and telling the agent all the sentences with which we want to
start. This approach is called the declarative approach.

2. Procedural approach: In the procedural approach, we directly encode the desired
behavior as program code; we just need to write a program that already encodes the
desired behavior of the agent.

What is knowledge representation?

Humans are best at understanding, reasoning, and interpreting knowledge. Humans know
things (that is knowledge), and as per their knowledge they perform various actions in the
real world. How machines do all these things comes under knowledge representation and
reasoning. Hence we can describe knowledge representation as follows:

o Knowledge representation and reasoning (KR, KRR) is the part of Artificial
Intelligence concerned with how AI agents think and how thinking contributes to their
intelligent behavior.
o It is responsible for representing information about the real world so that a
computer can understand it and can utilize this knowledge to solve complex real-world
problems, such as diagnosing a medical condition or communicating with humans in natural
language.
o It also describes how we can represent knowledge in artificial intelligence.
Knowledge representation is not just storing data in some database; it also enables an
intelligent machine to learn from that knowledge and experience so that it can behave
intelligently like a human.

What to Represent:

Following are the kinds of knowledge which need to be represented in AI systems:

• Objects: All the facts about objects in our world domain. E.g., guitars have
strings; trumpets are brass instruments.
• Events: Events are the actions which occur in our world.
• Performance: It describes behavior that involves knowledge about how to do things.
• Meta-knowledge: It is knowledge about what we know.
• Facts: Facts are the truths about the real world and what we represent.
• Knowledge base: The central component of a knowledge-based agent is the
knowledge base, represented as KB. The knowledge base is a group of sentences
(here, 'sentence' is used as a technical term and is not identical to a sentence in the
English language).
Knowledge: Knowledge is awareness or familiarity gained by experience of facts, data, and
situations.
Types of knowledge
Following are the various types of knowledge in artificial intelligence:

1. Declarative Knowledge:
o Declarative knowledge is knowledge about something.
o It includes concepts, facts, and objects.
o It is also called descriptive knowledge and is expressed in declarative
sentences.
o It is simpler than procedural knowledge.


2. Procedural Knowledge
o It is also known as imperative knowledge.
o Procedural knowledge is a type of knowledge which is responsible for
knowing how to do something.
o It can be directly applied to any task.
o It includes rules, strategies, procedures, agendas, etc.
o Procedural knowledge depends on the task on which it can be applied.
3. Meta-knowledge:
o Knowledge about the other types of knowledge is called Meta- knowledge.
4. Heuristic knowledge:
o Heuristic knowledge represents the knowledge of some experts in a field or
subject.
o Heuristic knowledge consists of rules of thumb based on previous experience and
awareness of approaches that tend to work well but are not guaranteed.
5. Structural knowledge:
o Structural knowledge is basic to problem-solving.
o It describes relationships between various concepts, such as kind-of, part-of, and
grouping of something.
o It describes the relationships that exist between concepts or objects.

The relation between knowledge and intelligence:

Knowledge of the real world plays a vital role in intelligence, and the same holds for creating
artificial intelligence. Knowledge plays an important role in demonstrating intelligent
behavior in AI agents. An agent is only able to act accurately on some input when it has some
knowledge or experience about that input.

Suppose you meet a person who is speaking a language you do not know; you will not be
able to act on what they say. The same applies to the intelligent behavior of agents.

As shown in the diagram below, a decision maker acts by sensing the environment and using
knowledge; if the knowledge part is not present, it cannot display intelligent behavior.

AI knowledge cycle:
An Artificial intelligence system has the following components for displaying
intelligent behavior:
o Perception
o Learning
o Knowledge Representation and Reasoning
o Planning
o Execution

The diagram below shows how an AI system interacts with the real world and which
components help it to show intelligence. The AI system has a Perception component by which
it retrieves information from its environment; this can be visual, audio, or another form of
sensory input. The Learning component is responsible for learning from the data captured by
the Perception component. In the complete cycle, the main components are Knowledge
Representation and Reasoning; these two components are involved in showing intelligence in
machines, like that of humans. The two components are independent of each other yet coupled
together. Planning and execution depend on the analysis of knowledge representation and
reasoning.

Approaches to knowledge representation:

There are mainly four approaches to knowledge representation, which are given below:

1. Simple relational knowledge:


o It is the simplest way of storing facts: using the relational method, each fact
about a set of objects is set out systematically in columns.
o This approach of knowledge representation is common in database systems,
where the relationship between different entities is represented.
o This approach offers little opportunity for inference.

Example: The following is the simple relational knowledge representation.

Player     Weight     Age
Player1    65         23
Player2    58         18
Player3    75         24

2. Inheritable knowledge:
o In the inheritable knowledge approach, all data is stored in a hierarchy of
classes.
o All classes are arranged in a generalized form or a hierarchical manner.
o In this approach, we apply the inheritance property.
o Elements inherit values from other members of a class.
o This approach contains inheritable knowledge which shows a relation between
instance and class, called the instance relation.
o Every individual frame can represent a collection of attributes and their values.
o In this approach, objects and values are represented in boxed nodes.
o Arrows point from objects to their values.
Example:

3. Inferential knowledge:
• The inferential knowledge approach represents knowledge in the form of formal
logics.
• This approach can be used to derive more facts.
• It guarantees correctness.
• Example: Let's suppose there are two statements:

a. Marcus is a man
b. All men are mortal
Then they can be represented as:
man(Marcus)
∀x man(x) → mortal(x)

4. Procedural knowledge:
• The procedural knowledge approach uses small programs and code that describe
how to do specific things and how to proceed.
• In this approach, one important rule is used: the If-Then rule.
• This knowledge can be expressed in various coding languages, such as LISP
and Prolog.
• We can easily represent heuristic or domain-specific knowledge using this
approach.
• However, not all cases can be represented in this approach.

Requirements for a knowledge representation system:


A good knowledge representation system must possess the following properties.
1. Representational Adequacy:
The KR system should have the ability to represent all kinds of required knowledge.
2. Inferential Adequacy:
The KR system should have the ability to manipulate the representational structures to
produce new knowledge corresponding to the existing structures.
3. Inferential Efficiency:
The ability to direct the inferential mechanism in the most productive directions by storing
appropriate guides.
4. Acquisitional Efficiency:
The ability to acquire new knowledge easily using automatic methods.

The Wumpus World in Artificial intelligence

Wumpus world:

The Wumpus world is a simple world example used to illustrate the worth of a knowledge-based
agent and to demonstrate knowledge representation. It was inspired by the video game
Hunt the Wumpus by Gregory Yob (1973).

The Wumpus world is a cave with 4 × 4 rooms connected by passageways, so there are a
total of 16 rooms connected to each other. We have a knowledge-based agent who will go
forward in this world. The cave has a room with a beast called the Wumpus, who eats anyone
who enters the room. The Wumpus can be shot by the agent, but the agent has a single arrow.
In the Wumpus world there are some pit rooms, which are bottomless; if the agent falls into a
pit, it will be stuck there forever. The exciting thing about this cave is that in one room there
is a possibility of finding a heap of gold. So the agent's goal is to find the gold and climb out
of the cave without falling into a pit or being eaten by the Wumpus. The agent gets a reward
if it comes out with the gold, and a penalty if it is eaten by the Wumpus or falls into a pit.

Note: Here the Wumpus is static and cannot move.


Following is a sample diagram representing the Wumpus world, showing some rooms with
pits, one room with the Wumpus, and one agent at the [1,1] square location of the world.

There are also some components which can help the agent to navigate the cave. These
components are given as follows:

a. The rooms adjacent to the Wumpus room are smelly, so they have a stench.
b. The rooms adjacent to a pit have a breeze, so if the agent reaches a room next to a
pit, it will perceive the breeze.
c. There is glitter in a room if and only if the room has gold.
d. The Wumpus can be killed by the agent if the agent is facing it; when killed, the
Wumpus emits a horrible scream which can be heard anywhere in the cave.
PEAS description of Wumpus world:
To explain the Wumpus world, the PEAS description is given below.
Performance measure:
• +1000 reward points if the agent comes out of the cave with the gold.
• −1000 points penalty for being eaten by the Wumpus or falling into a pit.
• −1 for each action, and −10 for using the arrow.
• The game ends when the agent dies or comes out of the cave.

Environment:
• A 4 × 4 grid of rooms.
• The agent is initially in square [1,1], facing right.
• The locations of the Wumpus and the gold are chosen randomly, excluding the first
square [1,1].
• Each square other than the first can be a pit with probability 0.2.

Actuators:
• Turn left
• Turn right
• Move forward
• Grab
• Release
• Shoot

Sensors:
• The agent will perceive a stench if it is in a room adjacent to the Wumpus
(not diagonally).
• The agent will perceive a breeze if it is in a room directly adjacent to a pit.
• The agent will perceive glitter in the room where the gold is present.
• The agent will perceive a bump if it walks into a wall.
• When the Wumpus is shot, it emits a horrible scream which can be perceived anywhere in
the cave.
• These percepts can be represented as a five-element list, with a different indicator
for each sensor.
• For example, if the agent perceives a stench and a breeze, but no glitter, no bump, and
no scream, the percept is represented as:
[Stench, Breeze, None, None, None].

The Wumpus world Properties:


• Partially observable: The Wumpus world is partially observable because the
agent can perceive only its close environment, such as the adjacent rooms.
• Deterministic: It is deterministic, as the outcome of every action is fixed.
• Sequential: The order of actions matters, so it is sequential.
• Static: It is static, as the Wumpus and the pits do not move.
• Discrete: The environment is discrete.
• One agent: The environment is single-agent, as we have only one agent and the
Wumpus is not considered an agent.
Exploring the Wumpus world:
Now we will explore the Wumpus world and determine how the agent will find its goal by
applying logical reasoning.
Agent's first step:
Initially, the agent is in the first room, square [1,1], and we already know that this room is
safe for the agent; to represent this on diagram (a), we add the symbol OK. The symbol A is
used to represent the agent, B a breeze, G glitter or gold, V a visited room, P a pit, and W the
Wumpus.
In room [1,1] the agent does not feel any breeze or stench, which means the adjacent
squares are also OK.

Agent's second step:

Now the agent needs to move forward, so it will move to either [1,2] or [2,1]. Let's suppose
the agent moves to room [2,1]. In this room the agent perceives a breeze, which means a pit
is around: the pit can be in [3,1] or [2,2], so we add the symbol P? to mark a possible pit
room.

Now the agent will stop and think, and will not make any harmful move. The agent goes
back to room [1,1]. Rooms [1,1] and [2,1] have been visited by the agent, so we use the
symbol V to represent the visited squares.

Agent's third step:

In the third step, the agent moves to room [1,2], which is OK. In room [1,2] the agent
perceives a stench, which means there must be a Wumpus nearby. But the Wumpus cannot be
in room [1,1] (by the rules of the game), and it cannot be in [2,2] (the agent detected no
stench at [2,1]). Therefore the agent infers that the Wumpus is in room [1,3]. In the current
state there is no breeze, which means [2,2] has no pit and no Wumpus, so it is safe; we mark
it OK, and the agent moves to [2,2].

Agent's fourth step:

In room [2,2] there is no stench and no breeze, so let's suppose the agent decides to move to
[2,3]. In room [2,3] the agent perceives glitter, so it grabs the gold and climbs out of the
cave.
Propositional logic in Artificial intelligence
Propositional logic (PL) is the simplest form of logic, where all statements are made of
propositions. A proposition is a declarative statement which is either true or false. It is a
technique of knowledge representation in logical and mathematical form.
Example:
a) It is Sunday.
b) The Sun rises in the West. (false proposition)
c) 3 + 3 = 7 (false proposition)
d) 5 is a prime number.

Following are some basic facts about propositional logic:


• Propositional logic is also called Boolean logic, as it works on 0 and 1.
• In propositional logic, we use symbolic variables to represent the logic, and we can
use any symbol to represent a proposition, such as A, B, C, P, Q, R, etc.
• Propositions can be either true or false, but not both.
• Propositional logic consists of propositions and logical connectives.
• These connectives are also called logical operators.
• The propositions and connectives are the basic elements of propositional logic.
• A connective is a logical operator which connects two sentences.
• A proposition formula which is always true is called a tautology; it is also called
a valid sentence.
• A proposition formula which is always false is called a contradiction.
• A proposition formula which can take both true and false values is called a
contingency.
• Statements which are questions, commands, or opinions are not propositions:
"Where is Rohini?", "How are you?", and "What is your name?" are not
propositions.
Syntax of propositional logic:
The syntax of propositional logic defines the allowable sentences for the knowledge
representation. There are two types of Propositions:
a. Atomic Propositions
b. Compound Propositions
Atomic Proposition: Atomic propositions are simple propositions. An atomic
proposition consists of a single proposition symbol. These are sentences which must be
either true or false.
Example:
a) 2 + 2 = 4 is an atomic proposition, and it is a true fact.
b) "The Sun is cold" is also an atomic proposition, and it is a false fact.
Compound Proposition: Compound propositions are constructed by combining
simpler or atomic propositions, using parentheses and logical connectives.
Example:
a) "It is raining today, and the street is wet."
b) "Ankit is a doctor, and his clinic is in Mumbai."

Logical Connectives:
Logical connectives are used to connect two simpler propositions or to represent a
sentence logically. We can create compound propositions with the help of logical
connectives. There are mainly five connectives, given as follows:
1. Negation: A sentence such as ¬P is called the negation of P. A literal can be either
a positive literal or a negative literal.
2. Conjunction: A sentence which has the ∧ connective, such as P ∧ Q, is called a
conjunction.
Example: Rohan is intelligent and hardworking. It can be written as
P = Rohan is intelligent,
Q = Rohan is hardworking. → P ∧ Q.
3. Disjunction: A sentence which has the ∨ connective, such as P ∨ Q, is called a
disjunction, where P and Q are propositions.
Example: "Ritika is a doctor or an engineer."
Here P = Ritika is a doctor, Q = Ritika is an engineer, so we can write it as P ∨ Q.
4. Implication: A sentence such as P → Q is called an implication.
Implications are also known as if-then rules. Example:
If it is raining, then the street is wet.
Let P = It is raining and Q = The street is wet; it is represented as P → Q.
5. Biconditional: A sentence such as P ⇔ Q is a biconditional sentence. Example: If I am
breathing, then I am alive.
P = I am breathing, Q = I am alive; it can be represented as P ⇔ Q.

Following is the summarized table for Propositional Logic Connectives:

Truth Table:
In propositional logic, we need to know the truth values of propositions in all possible
scenarios. We can combine all possible combinations with logical connectives, and the
representation of these combinations in a tabular format is called a truth table. Truth tables
can be drawn for all the logical connectives.

Truth table with three propositions:


We can build a proposition composed of three propositions P, Q, and R. Its truth table is
made up of 8 (= 2^3) rows, as we have taken three proposition symbols.
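A short Python sketch that prints such a truth table for three propositions; the formula shown, (P ∧ Q) ∨ R, is just an example.

from itertools import product

def truth_table(symbols, formula):
    """Print one row per assignment: 2^len(symbols) rows in total."""
    print(*symbols, 'result')
    for values in product([True, False], repeat=len(symbols)):
        env = dict(zip(symbols, values))
        print(*(int(v) for v in values), int(formula(env)))

# eight rows (2^3) for the example formula (P AND Q) OR R
truth_table(['P', 'Q', 'R'], lambda e: (e['P'] and e['Q']) or e['R'])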

Precedence of connectives:

Just like arithmetic operators, there is a precedence order for propositional connectors or
logical operators. This order should be followed while evaluating a propositional problem.
Following is the list of the precedence order for operators:

Precedence            Operators

First Precedence      Parentheses
Second Precedence     Negation
Third Precedence      Conjunction (AND)
Fourth Precedence     Disjunction (OR)
Fifth Precedence      Implication
Sixth Precedence      Biconditional

Note: For better understanding, use parentheses to make sure of the correct
interpretation. For example, ¬R ∨ Q is interpreted as (¬R) ∨ Q.

Logical equivalence:
Logical equivalence is one of the features of propositional logic. Two propositions are
said to be logically equivalent if and only if their columns in the truth table are identical.
Let's take two propositions A and B; for logical equivalence, we write A ⇔ B. In the truth
table below we can see that the columns for ¬A ∨ B and A → B are identical; hence
¬A ∨ B is equivalent to A → B.
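This equivalence (and De Morgan's laws below) can be verified mechanically with a small Python sketch; on booleans, a <= b behaves exactly like the implication A → B.

from itertools import product

def equivalent(f, g, n):
    """True iff formulas f and g agree on every assignment of n variables."""
    return all(f(*vals) == g(*vals) for vals in product([True, False], repeat=n))

# A -> B (coded as a <= b on booleans) is equivalent to (not A) or B
assert equivalent(lambda a, b: a <= b, lambda a, b: (not a) or b, 2)
# De Morgan's law: not (P and Q) is equivalent to (not P) or (not Q)
assert equivalent(lambda p, q: not (p and q), lambda p, q: (not p) or (not q), 2)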

Properties of Operators:
Commutativity:
P ∧ Q = Q ∧ P, and P ∨ Q = Q ∨ P.
Associativity:
(P ∧ Q) ∧ R = P ∧ (Q ∧ R), and (P ∨ Q) ∨ R = P ∨ (Q ∨ R).
Identity element:
P ∧ True = P,
P ∨ True = True.
Distributivity:
P ∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R), and P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R).
De Morgan's Laws:
¬(P ∧ Q) = (¬P) ∨ (¬Q),
¬(P ∨ Q) = (¬P) ∧ (¬Q).
Double-negation elimination:
¬(¬P) = P.
Limitations of propositional logic:
a. We cannot represent relations like "all", "some", or "none" with propositional logic.
Examples:
• All the girls are intelligent.
• Some apples are sweet.
b. Propositional logic has limited expressive power.
c. In propositional logic, we cannot describe statements in terms of their properties or
logical relationships.
