
• Intelligence of humans is achieved not by purely reflex mechanisms but by processes of reasoning that operate on internal representations of knowledge.
• In AI, this approach to intelligence is embodied in knowledge-based agents.

Logical Agents

The idea is that an agent can represent knowledge of its world, its goals, and the current situation by sentences in logic, and decide what to do by inferring that a certain action or course of action is appropriate to achieve its goals.
Knowledge-based Agents

• The main component of a knowledge-based agent is its knowledge base, or KB.
• A knowledge base is a set of sentences. (Here “sentence” is used as a
technical term. It is related but not identical to the sentences of English
and other natural languages.)
• Each sentence is expressed in a language called a knowledge
representation language and represents some assertion about the
world.
• Sometimes we dignify a sentence with the name axiom, when the
sentence is taken as given without being derived from other sentences.
• There must be a way to add new sentences to the knowledge base and a way
to query what is known.
• The standard names for these operations are TELL and ASK, respectively.
• Both operations may involve inference—that is, deriving new sentences from
old.
• Inference must obey the requirement that when one ASKs a question of the
knowledge base, the answer should follow from what has been told to the
knowledge base previously.
• The figure below shows the outline of a knowledge-based agent program.
• Like all agents, it takes a percept as input and returns an action.
• The agent maintains a knowledge base, KB, which may initially contain some
background knowledge.
• Each time the agent program is called, it does three things.
➢First, it TELLs the knowledge base what it perceives.
➢Second, it ASKs the knowledge base what action it should perform. In the process of
answering this query, extensive reasoning may be done about the current state of the
world, about the outcomes of possible action sequences, and so on.
➢Third, the agent program TELLs the knowledge base which action was chosen, and
the agent executes the action.
• The details of the representation language are hidden inside three functions that
implement the interface between the sensors and actuators on one side and the core
representation and reasoning system on the other.
• MAKE-PERCEPT-SENTENCE constructs a sentence asserting that the agent perceived
the given percept at the given time.
• MAKE-ACTION-QUERY constructs a sentence that asks what action should be done
at the current time.
• MAKE-ACTION-SENTENCE constructs a sentence asserting that the chosen action
was executed.
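The agent loop built from these three functions can be sketched in Python. This is a minimal illustration, not the textbook's figure: the KnowledgeBase class, its canned ASK answer, and the sentence strings are placeholder assumptions standing in for a real representation and inference engine.

```python
class KnowledgeBase:
    """A toy knowledge base: stores sentences; ASK returns a canned action."""
    def __init__(self):
        self.sentences = []

    def tell(self, sentence):
        self.sentences.append(sentence)

    def ask(self, query):
        # A real KB would reason about the query; here we always answer Forward.
        return "Forward"

def make_percept_sentence(percept, t):
    return f"Percept({percept}, {t})"

def make_action_query(t):
    return f"BestAction?({t})"

def make_action_sentence(action, t):
    return f"Action({action}, {t})"

class KBAgent:
    """Takes a percept as input and returns an action, as described above."""
    def __init__(self):
        self.kb = KnowledgeBase()
        self.t = 0  # a counter indicating the current time step

    def __call__(self, percept):
        self.kb.tell(make_percept_sentence(percept, self.t))   # 1. TELL percept
        action = self.kb.ask(make_action_query(self.t))        # 2. ASK for action
        self.kb.tell(make_action_sentence(action, self.t))     # 3. TELL choice
        self.t += 1
        return action
```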
• The program described here operates at the knowledge level, where we need to specify only what the agent knows and what its goals are in order to fix its behavior.
• A knowledge-based agent can be built simply by TELLing it what it needs to know.
• Starting with an empty knowledge base, the agent designer can TELL sentences one by
one until the agent knows how to operate in its environment. This is called the
declarative approach to system building.
• In contrast, the procedural approach encodes desired behaviors directly as program code.
The Wumpus World

• An environment in which knowledge-based agents can show their worth.
• The Wumpus world is a cave consisting of rooms connected by passageways.
• Lurking somewhere in the cave is the terrible Wumpus, a beast that eats anyone
who enters its room.
• The Wumpus can be shot by an agent, but the agent has only one arrow.
• Some rooms contain bottomless pits that will trap anyone who wanders into
these rooms (except for the wumpus, which is too big to fall in).
• The only mitigating feature of this bleak environment is the possibility of finding
a heap of gold.
• A sample Wumpus world is shown in the figure.
The precise definition of the task
environment is given by the PEAS
description:
➢ Performance measure:
• +1000 for climbing out of the cave with
the gold
• –1000 for falling into a pit or being eaten
by the wumpus
• –1 for each action taken
• –10 for using up the arrow.
• The game ends either when the agent dies
or when the agent climbs out of the cave.
➢Environment:
• A 4 × 4 grid of rooms.
• The agent always starts in the square labeled [1,1], facing to the right.
• The locations of the gold and the wumpus are chosen randomly, with a uniform distribution,
from the squares other than the start square.
• In addition, each square other than the start can be a pit, with probability 0.2.
➢ Actuators:
• The agent can move Forward, TurnLeft by 90°, or TurnRight by 90°.
• The agent dies a miserable death if it enters a square containing a pit or a live wumpus.
• If an agent tries to move forward and bumps into a wall, then the agent does not move.
• The action Grab can be used to pick up the gold if it is in the same square as the agent.
• The action Shoot can be used to fire an arrow in a straight line in the direction the agent is
facing.
• The arrow continues until it either hits (and hence kills) the wumpus or hits a wall.
• The agent has only one arrow, so only the first Shoot action has any effect.
• Finally, the action Climb can be used to climb out of the cave, but only from square [1,1].
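The Forward action and its wall-bumping rule can be sketched as follows. The (column, row) coordinate convention and direction names are assumptions for illustration only.

```python
# Sketch of the Forward action on the 4x4 grid: the agent moves one square
# in the direction it is facing, unless a wall blocks it (producing a Bump).
DIRS = {"right": (1, 0), "up": (0, 1), "left": (-1, 0), "down": (0, -1)}

def forward(square, facing):
    """Return (new_square, bumped) after attempting to move forward."""
    dx, dy = DIRS[facing]
    nx, ny = square[0] + dx, square[1] + dy
    if 1 <= nx <= 4 and 1 <= ny <= 4:
        return (nx, ny), False   # moved into the next room
    return square, True          # hit a wall: the agent does not move
```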
➢Sensors:
• The agent has five sensors, each of which gives a single bit of
information:
• In the square containing the wumpus and in the directly (not
diagonally) adjacent squares, the agent will perceive a Stench.
• In the squares directly adjacent to a pit, the agent will perceive a
Breeze.
• In the square where the gold is, the agent will perceive a Glitter.
• When an agent walks into a wall, it will perceive a Bump.
• When the wumpus is killed, it emits a woeful Scream that can be
perceived anywhere in the cave.
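The Stench, Breeze, and Glitter rules above can be sketched directly. A percept is modeled here as a five-element list [Stench, Breeze, Glitter, Bump, Scream]; Bump and Scream depend on the agent's last action, so they are left as None in this sketch, and the layout used below (wumpus, pits, gold positions) is a sample assumption consistent with the walkthrough in this module.

```python
def adjacent(a, b):
    """Two squares are adjacent if they differ by 1 in exactly one coordinate
    (directly, not diagonally, adjacent)."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

def percept(square, wumpus, pits, gold):
    """Percept received in a square: [Stench, Breeze, Glitter, Bump, Scream]."""
    return [
        "Stench" if square == wumpus or adjacent(square, wumpus) else None,
        "Breeze" if any(adjacent(square, p) for p in pits) else None,
        "Glitter" if square == gold else None,
        None,  # Bump: only set when the agent walks into a wall
        None,  # Scream: only set when the wumpus is killed
    ]
```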
• The agent’s initial knowledge base contains the rules of the environment, as
described previously; in particular, it knows that it is in [1,1] and that [1,1] is
a safe square; we denote that with an “A” and “OK,” respectively, in square
[1,1].
• The first percept is [None, None, None, None, None], from which the agent
can conclude that its neighboring squares, [1,2] and [2,1], are free of
dangers—they are OK.
• A cautious agent will move only into a square that it knows to be OK.
• Let us suppose the agent decides to move forward to [2,1]. The agent
perceives a breeze (denoted by “B”) in [2,1], so there must be a pit in a
neighboring square.
• The pit cannot be in [1,1], by the rules of the game, so there must be a pit in
[2,2] or [3,1] or both.
• The notation “P?” in the figure indicates a possible pit in those squares. At this
point, there is only one known square that is OK and that has not yet been
visited. So the prudent agent will turn around, go back to [1,1], and then
proceed to [1,2].
• The agent perceives a stench in [1,2], which means that there must be a wumpus nearby.
• But the wumpus cannot be in [1,1], by the rules of the game, and it cannot be in [2,2]
(or the agent would have detected a stench when it was in [2,1]).
• Therefore, the agent can infer that the wumpus is in [1,3]. The notation W! indicates
this inference.
• Moreover, the lack of a breeze in [1,2] implies that there is no pit in [2,2].
• Yet the agent has already inferred that there must be a pit in either [2,2] or [3,1], so
the pit must be in [3,1].
• This is a fairly difficult inference, because it combines knowledge gained at different
times in different places and relies on the lack of a percept to make one crucial step.
• The agent has now proved to itself that there is neither a pit nor a wumpus in [2,2],
so it is OK to move there.
• Now the agent turns and moves to [2,3], where it detects a glitter, so it should grab
the gold and then return home.
• In each case for which the agent draws a conclusion from the available information,
that conclusion is guaranteed to be correct if the available information is correct.
• This is a fundamental property of logical reasoning.
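The pit-in-[3,1] inference above can be reproduced by brute-force model checking: enumerate the possible pit placements for the two suspect squares and discard those inconsistent with the percepts. The variable names below are illustrative.

```python
from itertools import product

# Each model assigns pit-or-not to squares [2,2] and [3,1].
# Breeze in [2,1]    => at least one of [2,2], [3,1] holds a pit.
# No breeze in [1,2] => [2,2] holds no pit.
consistent = [
    (p22, p31)
    for p22, p31 in product([False, True], repeat=2)
    if (p22 or p31)   # consistent with the breeze felt in [2,1]
    and not p22       # the lack of a breeze in [1,2] rules out a pit in [2,2]
]

# Every surviving model has a pit in [3,1], so the KB entails Pit[3,1].
pit_in_31 = all(p31 for _, p31 in consistent)
```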
Logic
• Logics are formal languages for representing information so that conclusions can be
drawn.
• The knowledge bases consist of sentences which are expressed according to the
syntax of the representation language, which specifies all the sentences that are well
formed.
e.g., “x + y = 4” is a well-formed sentence in arithmetic, while
“x4y+ =” is not.
• Semantics defines the meaning of sentences: the truth of each sentence with
respect to each possible world (or, mathematically, each model).
E.g., in the language of arithmetic:
x + 2 ≥ y is a sentence
x2+y > {} is not a sentence
x + 2 ≥ y is true iff the number x + 2 is not less than the number y
x + 2 ≥ y is true in a world (assignment/condition) where x = 7, y = 1, but
false in the world where x = 0 and y = 5
• The possible models are just all possible assignments of real numbers to the
variables x and y.
• If a sentence ‘α’ is true in model ‘m’, we say that ‘m’ satisfies ‘α’ or sometimes ‘m’
is a model of ‘α’. We use the notation M(α) to mean the set of all models of α.
• When one sentence follows logically from another, we have the relation of logical
entailment between sentences.
• Entailment is the relation between a sentence and another sentence that follows
from it.
• In mathematical notation, α |= β means that the sentence α entails the sentence β.
Formal definition of entailment:
• α |= β if and only if, in every model in which α is true, β is also true.
i.e., α |= β if and only if M(α) ⊆ M(β)
• Note the direction of the ⊆ here: if α |= β, then α is a stronger assertion than β; it
rules out more possible worlds.
e.g., the sentence x = 0 entails the sentence xy = 0. (Obviously, in any model
where x is zero, xy is zero, regardless of the value of y.)
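This example can be checked directly by enumerating models over a small finite domain. Restricting to small integers is an assumption made for illustration; it works here because both sentences are evaluated pointwise on each assignment.

```python
# Check α |= β by testing M(α) ⊆ M(β) over a finite set of models.
# A "model" here is an assignment of integers in [-3, 3] to x and y.
def models(sentence, domain=range(-3, 4)):
    """Set of (x, y) assignments in the domain that satisfy the sentence."""
    return {(x, y) for x in domain for y in domain if sentence(x, y)}

alpha = lambda x, y: x == 0       # α : x = 0
beta = lambda x, y: x * y == 0    # β : xy = 0

entails = models(alpha) <= models(beta)   # M(α) ⊆ M(β), hence α |= β
```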
Inference and Entailment

• Inference is a procedure that allows new sentences to be derived from a knowledge base.
• To understand the relation between inference and entailment, think of the set of all
consequences of a KB as a haystack and α as the needle:
➢Entailment is like the needle being in the haystack.
➢Inference is like finding it.
