
Unit-III -Artificial Intelligence

Knowledge Reasoning and Inference (Chapter 7,8 and 9 from 3rd Edition)

 The central component of a knowledge-based agent is its knowledge base, or KB. A
knowledge base is a set of sentences.
 Each sentence is expressed in a language called a knowledge representation language
and represents some assertion about the world.
 Sometimes we dignify a sentence with the name axiom, when the sentence is taken as
given without being derived from other sentences.
 Inference means deriving new sentences from old ones.
 The agent maintains a knowledge base, KB, which may initially contain some
background knowledge.
The pseudocode for a generic knowledge-based agent is shown in Figure 7.1.
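As a rough illustration of the generic agent of Figure 7.1, the following Python sketch shows the TELL/ASK loop. The KB class and the make_* helper functions are hypothetical placeholders invented for this sketch, not the textbook's implementation.

# A minimal sketch of the generic knowledge-based agent loop (cf. Figure 7.1).
# The KB interface and the sentence-construction helpers below are hypothetical
# placeholders used only to illustrate the control flow.

class KB:
    """A trivial knowledge base that stores sentences as strings."""
    def __init__(self, background=()):
        self.sentences = list(background)    # may contain background knowledge

    def tell(self, sentence):
        self.sentences.append(sentence)      # add a new sentence to the KB

    def ask(self, query):
        # A real agent would perform logical inference here; this stub only
        # checks whether the query was told verbatim.
        return query if query in self.sentences else None

def make_percept_sentence(percept, t):
    return f"Percept({percept}, {t})"

def make_action_query(t):
    return f"ActionAt({t})"

def make_action_sentence(action, t):
    return f"Action({action}, {t})"

def kb_agent_step(kb, percept, t):
    """One step of the generic knowledge-based agent."""
    kb.tell(make_percept_sentence(percept, t))   # TELL the KB what was perceived
    action = kb.ask(make_action_query(t))        # ASK the KB which action to take
    kb.tell(make_action_sentence(action, t))     # TELL the KB the action chosen
    return action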

THE WUMPUS WORLD:


 The wumpus world is a cave consisting of rooms connected by passageways. Lurking
somewhere in the cave is the terrible wumpus, a beast that eats anyone who enters its
room. The wumpus can be shot by an agent, but the agent has only one arrow.
 Some rooms contain bottomless pits that will trap anyone who wanders into these rooms
(except for the wumpus, which is too big to fall in).
 The only mitigating feature of this bleak environment is the possibility of finding a heap
of gold.
A typical wumpus world is shown in Figure 7.2

The PEAS description of the wumpus world is given by:


• Performance measure: +1000 for climbing out of the cave with the gold, –1000 for falling
into a pit or being eaten by the wumpus, –1 for each action taken and –10 for using up the
arrow. The game ends either when the agent dies or when the agent climbs out of the cave.

• Environment: A 4×4 grid of rooms. The agent always starts in the square labelled [1,1],
facing to the right. The locations of the gold and the wumpus are chosen randomly.

• Actuators:
The agent can move Forward, TurnLeft by 90◦, or TurnRight by 90◦. The agent dies a
miserable death if it enters a square containing a pit or a live wumpus. If an agent tries
to move forward and bumps into a wall, then the agent does not move.
The action Grab can be used to pick up the gold if it is in the same square as the agent.
The action Shoot can be used to fire an arrow in a straight line in the direction the agent
is facing.
Finally, the action Climb can be used to climb out of the cave, but only from square
[1,1].
• Sensors: The agent has five sensors, each of which gives a single bit of information:
– In the square containing the wumpus and in the directly (not diagonally) adjacent squares, the
agent will perceive a Stench.
– In the squares directly adjacent to a pit, the agent will perceive a Breeze.
– In the square where the gold is, the agent will perceive a Glitter.
– When an agent walks into a wall, it will perceive a Bump.
– When the wumpus is killed, it emits a woeful Scream that can be perceived anywhere in the
cave.
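To make the sensor model concrete, here is a small Python sketch (not from the textbook) that computes the percept vector for a square; the wumpus, pit, and gold locations in the example are assumptions chosen to resemble the layout of Figure 7.2.

# A sketch of the wumpus-world sensor model. Squares are (column, row) pairs on
# a 4x4 grid; the wumpus, pit and gold locations are passed in as parameters.

def adjacent(square):
    """Squares directly (not diagonally) adjacent to `square` inside the 4x4 grid."""
    x, y = square
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(cx, cy) for cx, cy in candidates if 1 <= cx <= 4 and 1 <= cy <= 4]

def percept(square, wumpus_at, pits, gold_at):
    """Return the five-bit percept [Stench, Breeze, Glitter, Bump, Scream]."""
    stench = square == wumpus_at or wumpus_at in adjacent(square)
    breeze = any(p in adjacent(square) for p in pits)
    glitter = square == gold_at
    # Bump and Scream depend on events (walking into a wall, the wumpus dying),
    # not on the square alone, so this static sketch leaves them False.
    return [stench, breeze, glitter, False, False]

# Example layout (assumed): wumpus at [1,3], pits at [3,1], [3,3], [4,4], gold at [2,3].
print(percept((1, 2), wumpus_at=(1, 3), pits=[(3, 1), (3, 3), (4, 4)], gold_at=(2, 3)))
# [True, False, False, False, False]  -- a stench is perceived in square [1,2]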

PROPOSITIONAL LOGIC: A VERY SIMPLE LOGIC


 The syntax of propositional logic defines the allowable sentences. The atomic
sentences consist of a single proposition symbol.
 Each such symbol stands for a proposition that can be true or false.
 We use symbols that start with an uppercase letter and may contain other letters or
subscripts, for example: P, Q, R, W1,3 and North.

Basics of Logic:

There are two proposition symbols with fixed meanings: True is the always-true proposition
and False is the always-false proposition. Complex sentences are constructed from simpler
sentences, using parentheses and logical connectives. There are five connectives in common
use:

1) ¬ (not). A sentence such as ¬W1,3 is called the negation of W1,3. A literal is either
an atomic sentence (a positive literal) or a negated atomic sentence (a negative literal).
2) ∧ (and). A sentence whose main connective is ∧, such as W1,3 ∧ P3,1, is called a
conjunction; its parts are the conjuncts. (The ∧ looks like an “A” for “And.”)
3) ∨ (or). A sentence using ∨, such as (W1,3 ∧ P3,1) ∨ W2,2, is a disjunction of the
disjuncts (W1,3 ∧ P3,1) and W2,2. (Historically, the ∨ comes from the Latin “vel,” which
means “or.” For most people, it is easier to remember ∨ as an upside-down ∧.)
4) ⇒ (implies). A sentence such as (W1,3∧P3,1) ⇒ ¬W2,2 is called an implication (or
conditional). Its premise or antecedent is (W1,3 ∧P3,1), and its conclusion or
consequent is ¬W2,2. Implications are also known as rules or if–then statements. The
implication symbol is sometimes written in other books as ⊃ or →.
5) ⇔ (if and only if). The sentence W1,3 ⇔ ¬W2,2 is a biconditional. Some other books
write this as ≡.
The grammar for propositional logic is shown in Figure 7.7

Semantics:
The semantics defines the rules for determining the truth of a sentence with respect to a
particular model. In propositional logic, a model simply fixes the truth value—true or false—
for every proposition symbol.
For example, if the sentences in the knowledge base make use of the proposition symbols
P1,2, P2,2, and P3,1, then one possible model is m1 = {P1,2 =false, P2,2 =false, P3,1 =true}.
The semantics for propositional logic must specify how to compute the truth value of any
sentence, given a model.
Atomic sentences are easy:
• True is true in every model and False is false in every model.
• The truth value of every other proposition symbol must be specified directly in the model.
For example, in the model m1 given earlier, P1,2 is false.

For complex sentences, we have five rules, which hold for any subsentences P and Q in any
model m (here “iff” means “if and only if”):
• ¬P is true iff P is false in m.
• P ∧ Q is true iff both P and Q are true in m.
• P ∨ Q is true iff either P or Q is true in m.
• P ⇒ Q is true unless P is true and Q is false in m.
• P ⇔ Q is true iff P and Q are both true or both false in m.
The rules can also be expressed with truth tables that specify the truth value of a complex
sentence for each possible assignment of truth values to its components. Truth tables for the
five connectives are given in Figure 7.8.
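As a small illustration (not the textbook's code), propositional sentences can be represented as nested Python tuples and evaluated against a model with exactly these five rules; the tuple encoding is an assumption made for this sketch.

# A sketch: sentences are nested tuples ('not', s), ('and', s1, s2), ('or', s1, s2),
# ('implies', s1, s2), ('iff', s1, s2); atoms are strings. A model is a dict mapping
# proposition symbols to True/False.

def pl_true(sentence, model):
    """Truth value of a propositional sentence in the given model."""
    if isinstance(sentence, str):                  # atomic sentence
        return model[sentence]
    op, *args = sentence
    if op == 'not':
        return not pl_true(args[0], model)
    if op == 'and':
        return pl_true(args[0], model) and pl_true(args[1], model)
    if op == 'or':
        return pl_true(args[0], model) or pl_true(args[1], model)
    if op == 'implies':                # false only when premise is true and conclusion false
        return (not pl_true(args[0], model)) or pl_true(args[1], model)
    if op == 'iff':
        return pl_true(args[0], model) == pl_true(args[1], model)
    raise ValueError(f"unknown connective {op!r}")

# Example with the model m1 from the text: P1,2 = false, P2,2 = false, P3,1 = true.
m1 = {'P12': False, 'P22': False, 'P31': True}
print(pl_true(('implies', 'P12', ('or', 'P22', 'P31')), m1))   # True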

Logical Equivalence: two sentences α and β are logically equivalent if they are true in the
same set of models. We write this as α ≡ β. For example, we can easily show (using truth
tables) that P ∧ Q and Q ∧ P are logically equivalent; other equivalences are shown in Figure
7.11.
An alternative definition of equivalence is as follows: any two sentences α and β are equivalent
if and only if each of them entails the other: α ≡ β if and only if α |= β and β |= α.

Validity: A sentence is valid if it is true in all models. For example, the sentence P ∨ ¬P is
valid. Valid sentences are also known as tautologies.
Satisfiability: A sentence is satisfiable if it is true in, or satisfied by, some model.
Validity and satisfiability are of course connected: α is valid iff ¬α is unsatisfiable; contrapositively, α
is satisfiable iff ¬α is not valid. We also have the following useful result: α |= β if and only if the
sentence (α ∧ ¬β) is unsatisfiable.
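The last result suggests a brute-force way to check satisfiability, validity, and entailment by enumerating all models; the sketch below is an illustration (exponential in the number of symbols), using the same assumed tuple representation as the earlier sketch.

# A brute-force sketch of satisfiability, validity and entailment checking by
# truth-table enumeration over the tuple representation used above.
from itertools import product

def pl_true(s, model):
    if isinstance(s, str):
        return model[s]
    op, *a = s
    return {'not':     lambda: not pl_true(a[0], model),
            'and':     lambda: pl_true(a[0], model) and pl_true(a[1], model),
            'or':      lambda: pl_true(a[0], model) or pl_true(a[1], model),
            'implies': lambda: not pl_true(a[0], model) or pl_true(a[1], model),
            'iff':     lambda: pl_true(a[0], model) == pl_true(a[1], model)}[op]()

def symbols(s):
    return {s} if isinstance(s, str) else {x for arg in s[1:] for x in symbols(arg)}

def satisfiable(s):
    syms = sorted(symbols(s))
    return any(pl_true(s, dict(zip(syms, vals)))
               for vals in product([True, False], repeat=len(syms)))

def valid(s):                        # alpha is valid iff NOT alpha is unsatisfiable
    return not satisfiable(('not', s))

def entails(alpha, beta):            # alpha |= beta iff (alpha AND NOT beta) is unsatisfiable
    return not satisfiable(('and', alpha, ('not', beta)))

print(valid(('or', 'P', ('not', 'P'))))      # True: P OR NOT P is a tautology
print(entails(('and', 'P', 'Q'), 'P'))       # True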
Sound: An inference algorithm that derives only entailed sentences is called sound or truth
preserving. Soundness is a highly desirable property.
Inference Rules:
Modus Ponens: The Modus Ponens rule is written

        α ⇒ β,   α
        ------------
              β

The notation means that, whenever any sentences of the form α ⇒ β and α are given, then the
sentence β can be inferred. For example, if (WumpusAhead ∧WumpusAlive) ⇒ Shoot
and (WumpusAhead ∧ WumpusAlive) are given, then Shoot can be inferred.
And-Elimination: Another useful inference rule is And-Elimination, which says that, from
a conjunction, any of the conjuncts can be inferred:

        α ∧ β
        ------
          α

For example, from (WumpusAhead ∧ WumpusAlive), WumpusAlive can be inferred.
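A tiny sketch (an illustration, not the book's inference procedure) of applying these two rules to ground sentences in the tuple representation used earlier:

# Modus Ponens and And-Elimination on ground sentences represented as tuples.

def modus_ponens(implication, premise):
    """From (alpha IMPLIES beta) and alpha, infer beta; return None if not applicable."""
    op, alpha, beta = implication
    if op == 'implies' and premise == alpha:
        return beta
    return None

def and_elimination(conjunction):
    """From (alpha AND beta), infer both conjuncts."""
    op, alpha, beta = conjunction
    return [alpha, beta] if op == 'and' else []

rule = ('implies', ('and', 'WumpusAhead', 'WumpusAlive'), 'Shoot')
fact = ('and', 'WumpusAhead', 'WumpusAlive')
print(modus_ponens(rule, fact))      # 'Shoot'
print(and_elimination(fact))         # ['WumpusAhead', 'WumpusAlive']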

Conjunctive normal form: Every sentence of propositional logic is logically equivalent to a
conjunction of clauses. A sentence expressed as a conjunction of clauses is said to be in
conjunctive normal form or CNF.
Example: We now describe a procedure for converting to CNF. We illustrate the procedure by
converting the sentence B1,1 ⇔ (P1,2 ∨ P2,1) into CNF. The steps are as follows:
1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ∧ (β ⇒ α).
(B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1) .
2. Eliminate ⇒, replacing α ⇒ β with ¬α ∨ β:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1) .
3. CNF requires ¬ to appear only in literals, so we “move ¬ inwards” by repeated application
of the following equivalences from Figure 7.11:
¬ (¬α) ≡ α (double-negation elimination)
¬ (α ∧ β) ≡ (¬α ∨ ¬β) (De Morgan)
¬ (α ∨ β) ≡ (¬α ∧ ¬β) (De Morgan)
In the example, we require just one application of the last rule:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1) .
4. Now we have a sentence containing nested ∧ and ∨ operators applied to literals. We apply
the distributivity law from Figure 7.11, distributing ∨ over ∧ wherever possible.
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1) .
The original sentence is now in CNF, as a conjunction of three clauses. It is much harder to
read, but it can be used as input to a resolution procedure.
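The four steps can be coded as a short recursive converter. The sketch below is an illustration over the assumed tuple representation (it leaves the resulting conjunction nested rather than flattening it into a clause list), not a transcription of the textbook's algorithm.

# A sketch of CNF conversion following the four steps in the text.
# Sentences are nested tuples: ('not', s), ('and', s1, s2), ('or', s1, s2),
# ('implies', s1, s2), ('iff', s1, s2); atoms are strings.

def eliminate_iff_implies(s):
    """Steps 1 and 2: rewrite <=> and => in terms of AND, OR, NOT."""
    if isinstance(s, str):
        return s
    op, *args = s
    args = [eliminate_iff_implies(a) for a in args]
    if op == 'iff':
        a, b = args
        return ('and', ('or', ('not', a), b), ('or', ('not', b), a))
    if op == 'implies':
        a, b = args
        return ('or', ('not', a), b)
    return (op, *args)

def move_not_inwards(s):
    """Step 3: push NOT down to literals using double negation and De Morgan."""
    if isinstance(s, str):
        return s
    op, *args = s
    if op == 'not':
        a = args[0]
        if isinstance(a, str):
            return s                                    # already a literal
        if a[0] == 'not':
            return move_not_inwards(a[1])               # double-negation elimination
        if a[0] == 'and':
            return ('or', move_not_inwards(('not', a[1])), move_not_inwards(('not', a[2])))
        if a[0] == 'or':
            return ('and', move_not_inwards(('not', a[1])), move_not_inwards(('not', a[2])))
    return (op, *[move_not_inwards(a) for a in args])

def distribute_or_over_and(s):
    """Step 4: distribute OR over AND so that AND ends up outermost."""
    if isinstance(s, str) or s[0] == 'not':
        return s
    op, a, b = s
    a, b = distribute_or_over_and(a), distribute_or_over_and(b)
    if op == 'or':
        if not isinstance(a, str) and a[0] == 'and':
            return ('and', distribute_or_over_and(('or', a[1], b)),
                           distribute_or_over_and(('or', a[2], b)))
        if not isinstance(b, str) and b[0] == 'and':
            return ('and', distribute_or_over_and(('or', a, b[1])),
                           distribute_or_over_and(('or', a, b[2])))
    return (op, a, b)

def to_cnf(s):
    return distribute_or_over_and(move_not_inwards(eliminate_iff_implies(s)))

# Running example from the text: B1,1 <=> (P1,2 OR P2,1)
print(to_cnf(('iff', 'B11', ('or', 'P12', 'P21'))))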
Resolution:
Unit Resolution: The unit resolution inference rule is written

        l1 ∨ … ∨ lk,        m
        ------------------------------------
        l1 ∨ … ∨ li−1 ∨ li+1 ∨ … ∨ lk

where each l is a literal and li and m are complementary literals (i.e., one is the negation of
the other). Thus, the unit resolution rule takes a clause (a disjunction of literals) and a
literal and produces a new clause. Note that a single literal can be viewed as a disjunction of
one literal, also known as a unit clause.
Full Resolution: The unit resolution rule can be generalized to the full resolution rule,

        l1 ∨ … ∨ lk,        m1 ∨ … ∨ mn
        ------------------------------------------------------------
        l1 ∨ … ∨ li−1 ∨ li+1 ∨ … ∨ lk ∨ m1 ∨ … ∨ mj−1 ∨ mj+1 ∨ … ∨ mn

where li and mj are complementary literals. This says that resolution takes two clauses and
produces a new clause containing all the literals of the two original clauses except the two
complementary literals.
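As an illustration (not the full PL-RESOLUTION algorithm), a single resolution step over clauses represented as sets of literal strings can be sketched as follows; the '~' prefix for negation is an assumed convention.

# A sketch of the binary resolution step on clauses represented as frozensets of
# literal strings; '~P' denotes the negation of 'P' (an assumed convention).

def negate(literal):
    return literal[1:] if literal.startswith('~') else '~' + literal

def resolve(clause1, clause2):
    """Return every clause obtainable by resolving clause1 with clause2."""
    resolvents = []
    for lit in clause1:
        if negate(lit) in clause2:
            # drop the complementary pair and merge the remaining literals
            resolvents.append(frozenset((clause1 - {lit}) | (clause2 - {negate(lit)})))
    return resolvents

c1 = frozenset({'~P21', 'B11'})
c2 = frozenset({'~B11', 'P12', 'P21'})
print(resolve(c1, c2))   # e.g. resolving on B11/~B11 gives {~P21, P12, P21}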

First Order Logic:


Syntax and Semantics of First Order Logic:
The basic syntactic elements of first-order logic are the symbols that stand for objects, relations,
and functions. The semantics must relate sentences to models in order to determine truth. For
this to happen, we need an interpretation that specifies exactly which objects, relations and
functions are referred to by the constant, predicate, and function symbols.
The syntax of first-order logic is shown in Figure 8.3.

Term: A term is a logical expression that refers to an object. Constant symbols are therefore terms,
but it is not always convenient to have a distinct symbol to name every object. For example, in English
we might use the expression “King John’s left leg” rather than giving a name to his leg.

Atomic Sentence: An atomic sentence (or atom for short) is formed from a predicate
symbol optionally followed by a parenthesized list of terms, such as Brother(Richard, John).
This states, under the intended interpretation given earlier, that Richard the Lionheart
is the brother of King John. Atomic sentences can have complex terms as arguments. Thus,
Married(Father(Richard), Mother(John)) states that Richard the Lionheart’s father is married
to King John’s mother (again, under a suitable interpretation).
An atomic sentence is true in a given model if the relation referred to by the predicate symbol
holds among the objects referred to by the arguments.
Complex Sentences: We can use logical connectives to construct more complex sentences, with the
same syntax and semantics as in propositional calculus.
Ex: ¬Brother(Richard, John), Brother(Richard, John) ∧ Brother(John, Richard)
Quantifiers: Quantifiers express properties of entire collections of objects, instead of enumerating the
objects by name. First-order logic contains two standard quantifiers, called universal and existential.
Recall the difficulty we had in Chapter 7 with the expression of general rules in propositional
logic. Rules such as “Squares neighboring the wumpus are smelly” and “All kings
are persons” are the bread and butter of first-order logic. “All kings are persons” is written in first-
order logic as
∀x King(x) ⇒ Person(x)

Equality: We can use the equality symbol to signify that two terms refer to the same object.
For example, Father (John) = Henry says that the object referred to by Father (John) and the
object referred to by Henry are the same.
Using first-order logic: Assertions can be added to a first-order knowledge base with TELL and
questions asked with ASK; a query such as ASK(KB, King(John)) returns true when the queried
sentence is entailed by the knowledge base.
The kinship domain:
The first example we consider is the domain of family relationships, or kinship. This domain
includes facts such as “Elizabeth is the mother of Charles” and “Charles is the father of
William” and rules such as “One’s grandmother is the mother of one’s parent.” Clearly, the
objects in our domain are people. We have two unary predicates, Male and Female. Kinship
relations—parenthood, brotherhood, marriage, and so on—are represented by binary
predicates: Parent, Sibling, Brother, Sister, Child, Daughter, Son, Spouse, Wife, Husband,
Grandparent, Grandchild, Cousin, Aunt, and Uncle. We use functions for Mother and Father,
because every person has exactly one of each of these (at least according to nature’s design).
Axioms: For example, “One’s mother is one’s female parent” and “One’s grandparent is a parent
of one’s parent” can be written in first-order logic as
∀m, c Mother(c) = m ⇔ Female(m) ∧ Parent(m, c)
∀g, c Grandparent(g, c) ⇔ ∃p Parent(g, p) ∧ Parent(p, c)

Propositional vs First order: Propositional logic assumes the world contains only facts and has
limited expressive power; first-order logic assumes the world contains objects, relations, and
functions, and adds quantifiers and equality, so general rules can be stated without enumerating
every case.

Inference rules for quantifiers:


Let us begin with universal quantifiers. Suppose our knowledge base contains the standard folkloric
axiom stating that all greedy kings are evil:

∀x King(x) ∧ Greedy(x) ⇒ Evil(x)

Then it seems quite permissible to infer any of the following sentences:

King(John) ∧ Greedy(John) ⇒ Evil(John)
King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard)
King(Father(John)) ∧ Greedy(Father(John)) ⇒ Evil(Father(John))

The rule of Universal Instantiation says that we can infer any sentence obtained by substituting a
ground term (a term without variables) for the universally quantified variable.


Unification and Lifting:

Generalized Modus Ponens: For atomic sentences pi, pi′, and q, where there is a substitution
θ such that SUBST(θ, pi′) = SUBST(θ, pi) for all i,

        p1′,  p2′,  …,  pn′,   (p1 ∧ p2 ∧ … ∧ pn ⇒ q)
        ------------------------------------------------
                          SUBST(θ, q)

Generalized Modus Ponens is a lifted version of Modus Ponens: it raises Modus Ponens from
ground (variable-free) propositional logic to first-order logic.

Unification: This process is called unification and is a key component of all first-order inference
algorithms. The UNIFY algorithm takes two sentences and returns a unifier for them if one exists:

        UNIFY(p, q) = θ,   where SUBST(θ, p) = SUBST(θ, q).

The unification algorithm is shown in Figure 9.1.


Ex:
Given a sentence to be stored, it is possible to construct indices for all possible queries that unify with
it. For the fact Employs(IBM, Richard), the queries are
Employs(IBM, Richard)   Does IBM employ Richard?
Employs(x, Richard)     Who employs Richard?
Employs(IBM, y)         Whom does IBM employ?
Employs(x, y)           Who employs whom?
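A compact sketch of unification over tuples (variables written as lowercase strings, constants and predicate names capitalized) is given below; the convention and the code are illustrative and omit the occurs check, so this is not Figure 9.1 itself.

# A sketch of unification. Atoms/terms are tuples like ('Employs', 'IBM', 'x');
# lowercase strings are variables, capitalized strings are constants.
# Returns a substitution dict, or None on failure. The occurs check is omitted.

def is_variable(t):
    return isinstance(t, str) and t[:1].islower()

def substitute(theta, t):
    if is_variable(t):
        return substitute(theta, theta[t]) if t in theta else t
    if isinstance(t, tuple):
        return tuple(substitute(theta, arg) for arg in t)
    return t

def unify(x, y, theta=None):
    if theta is None:
        theta = {}
    x, y = substitute(theta, x), substitute(theta, y)
    if x == y:
        return theta
    if is_variable(x):
        return {**theta, x: y}
    if is_variable(y):
        return {**theta, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            theta = unify(xi, yi, theta)
            if theta is None:
                return None
        return theta
    return None

fact = ('Employs', 'IBM', 'Richard')
print(unify(fact, ('Employs', 'x', 'Richard')))   # {'x': 'IBM'}
print(unify(fact, ('Employs', 'IBM', 'y')))       # {'y': 'Richard'}
print(unify(fact, ('Employs', 'x', 'y')))         # {'x': 'IBM', 'y': 'Richard'}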

Forward Chaining: The forward chaining idea is simple: start with the atomic sentences in
the knowledge base and apply Modus Ponens in the forward direction, adding new atomic
sentences, until no further inferences can be made. Here, we explain how the algorithm is
applied to first-order definite clauses.

Definite clauses such as Situation ⇒ Response are especially useful for systems that make
inferences in response to newly arrived information. Many systems can be defined this way,
and forward chaining can be implemented very efficiently.
The forward chaining algorithm and the proof tree it generates are shown in Figures 9.3 and 9.4.
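The loop is easiest to see in the propositional case. The sketch below is illustrative (propositional rather than first-order, so no unification is needed) and is not the algorithm of Figure 9.3; the example rules are hypothetical.

# A sketch of forward chaining over propositional definite clauses.
# Each rule is (premises, conclusion); facts are atomic strings.

def forward_chain(rules, facts):
    """Apply Modus Ponens repeatedly until no new atomic sentences can be inferred."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)          # a new fact has been inferred
                changed = True
    return known

# Hypothetical example rules (not from the textbook):
rules = [
    (['WumpusAhead', 'WumpusAlive'], 'Shoot'),
    (['Stench12', 'Stench21'], 'WumpusAhead'),
]
print(forward_chain(rules, {'Stench12', 'Stench21', 'WumpusAlive'}))
# contains WumpusAhead and Shoot in addition to the three initial facts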
Planning: (Chapter 11 from 2nd Edition)
 The task of coming up with a sequence of actions that will achieve a goal is called
planning.
 Planning in AI is about decision-making actions performed by robots or computer
programs to achieve a specific goal.
 Execution of the plan means carrying out the chosen sequence of actions, which should have a
high probability of accomplishing the goal.

There are two types of planning environments:

 The environments that are fully observable, deterministic, finite, static (change happens
only when the agent acts), and discrete (in time, action, objects, and effects). These are
called classical planning environments.
 In contrast, nonclassical planning is for partially observable or stochastic environments
and involves a different set of algorithms and agent designs.

The language of planning problems: The language of planning problems has three
components.

1. Representation of States. Planners decompose the world into logical conditions and
represent a state as a conjunction of positive literals. For example,
At(Plane1, Melbourne) ∧ At(Plane2, Sydney)
might represent a state in the package delivery problem.
2. Representation of Goals. A goal is a partially specified state, represented as a
conjunction of positive ground literals. For example, Rich ∧ Famous.
3. Representation of Actions. An action is specified in terms of the preconditions that
must hold before it can be executed and the effects that ensue when it is executed. For
example, an action for flying a plane from one location to another is:
Action(Fly(p, from, to),
  PRECOND: At(p, from) ∧ Plane(p) ∧ Airport(from) ∧ Airport(to),
  EFFECT: ¬At(p, from) ∧ At(p, to))

An action schema consists of three parts:

- The action name and parameter list-for example, Fly (p, from, to)-serves to identify
the action.
- The precondition is a conjunction of function-free positive literals stating what
must be true in a state before the action can be executed. Any variables in the
precondition must also appear in the action's parameter list.
- The effect is a conjunction of function-free literals describing how the state changes
when the action is executed. A positive literal P in the effect is asserted to be true
in the state resulting from the action, whereas a negative literal ¬P is asserted to be
false.
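As an illustration (an assumed encoding, not the textbook's notation), an action schema can be captured as a small Python structure, with the effect split into add and delete lists:

# A sketch of a STRIPS-style action schema as a plain data structure.
# Literals are strings with variables prefixed by '?' (an assumed convention);
# the effect is split into add and delete lists (positive and negated literals).

from dataclasses import dataclass, field

@dataclass
class ActionSchema:
    name: str                       # action name, e.g. 'Fly'
    parameters: list                # variable list, e.g. ['?p', '?from', '?to']
    precondition: list              # function-free positive literals
    add_effects: list               # literals made true by the action
    del_effects: list = field(default_factory=list)   # literals made false

fly = ActionSchema(
    name='Fly',
    parameters=['?p', '?from', '?to'],
    precondition=['At(?p, ?from)', 'Plane(?p)', 'Airport(?from)', 'Airport(?to)'],
    add_effects=['At(?p, ?to)'],
    del_effects=['At(?p, ?from)'],
)
print(fly.name, fly.parameters)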

Basic classical planners represent problems using the following two basic languages:
 STRIPS Language (STanford Research Institute Problem Solver):
1. One of the most important restrictions in STRIPS Language is that literals be
function free.
2. With this restriction, we can be sure that any action schema for a given problem
can be propositionalized, i.e., turned into a finite collection of purely
propositional action representations with no variables.

 ADL (Action Description Language): ADL overcomes the drawbacks of STRIPS. It
has more expressiveness and extensions compared to STRIPS.
Example: in ADL, the Fly action could be written as
Action(Fly(p, from, to),
  PRECOND: At(p, from) ∧ (from ≠ to),
  EFFECT: ¬At(p, from) ∧ At(p, to))

The Comparison of STRIPS and ADL languages for representing planning problems is shown
in Figure 11.1

Example-1: (Air Cargo transport)


 The air cargo transport problem involves loading and unloading cargo onto and off of
planes and flying it from place to place.
 The problem can be defined with three actions: Load, Unload, and Fly.
 The actions affect two predicates: In(c, p) means that cargo c is inside plane p, and
At(x, a) means that object x (either plane or cargo) is at airport a.
 Note that cargo is not At anywhere when it is In a plane, so At really means “available
for use at a given location.”
The problem is defined in the pure STRIPS language, as shown in Figure 11.2.
 The following is a solution to the air cargo problem (assuming the initial state of Figure 11.2,
with cargo C1 and plane P1 at SFO and cargo C2 and plane P2 at JFK):
[Load(C1, P1, SFO), Fly(P1, SFO, JFK), Unload(C1, P1, JFK),
Load(C2, P2, JFK), Fly(P2, JFK, SFO), Unload(C2, P2, SFO)]

Example-2: The spare tire problem

 Consider the problem of changing a flat tire.


 The goal is to have a good spare tire properly mounted onto the car's axle, where the
initial state has a flat tire on the axle and a good spare tire in the trunk.
 There are four actions: removing the spare from the trunk, removing the flat tire from
the axle, putting the spare on the axle, and leaving the car unattended overnight.
 We assume that the car is in a particularly bad neighborhood, so that the effect of
leaving it overnight is that the tires disappear.

The ADL description of the problem is shown in Figure 11.3.

 The following is a solution for spare tire problem:


[Remove(Flat, Axle), Remove(Spare, Trunk), PutOn(Spare, Axle)]
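Because the spare tire problem is so small, it can be propositionalized and solved by brute-force search. The sketch below is an illustration: the ground actions are assumptions consistent with the description above (not a transcription of Figure 11.3), and breadth-first search recovers a three-step plan equivalent to the one listed.

# A sketch of a brute-force breadth-first planner over ground actions for the
# spare tire problem. Negative preconditions are written as a separate set.
from collections import deque

# Each action: (name, positive preconditions, negative preconditions, add list, delete list)
ACTIONS = [
    ('Remove(Spare, Trunk)', {'At(Spare, Trunk)'}, set(),
     {'At(Spare, Ground)'}, {'At(Spare, Trunk)'}),
    ('Remove(Flat, Axle)', {'At(Flat, Axle)'}, set(),
     {'At(Flat, Ground)'}, {'At(Flat, Axle)'}),
    ('PutOn(Spare, Axle)', {'At(Spare, Ground)'}, {'At(Flat, Axle)'},
     {'At(Spare, Axle)'}, {'At(Spare, Ground)'}),
    ('LeaveOvernight', set(), set(), set(),
     {'At(Spare, Ground)', 'At(Spare, Axle)', 'At(Spare, Trunk)',
      'At(Flat, Ground)', 'At(Flat, Axle)'}),
]

def applicable(state, action):
    _, pos, neg, _, _ = action
    return pos <= state and not (neg & state)

def result(state, action):
    _, _, _, add, delete = action
    return frozenset((state - delete) | add)

def plan(initial, goal):
    """Breadth-first search for a sequence of actions that makes the goal true."""
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for action in ACTIONS:
            if applicable(state, action):
                nxt = result(state, action)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, steps + [action[0]]))
    return None

print(plan({'At(Flat, Axle)', 'At(Spare, Trunk)'}, {'At(Spare, Axle)'}))
# e.g. ['Remove(Spare, Trunk)', 'Remove(Flat, Axle)', 'PutOn(Spare, Axle)']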

Partially Ordered Plan: A partially ordered Plan is a collection of steps:


• Start step has the initial state description and its effect
• Finish step has the goal description as its precondition
• Causal links from outcome of one step to precondition of another step
• Temporal ordering between pairs of steps
• An open condition is a precondition of a step not yet causally linked
• A plan is complete iff every precondition is achieved
• A precondition is achieved iff it is the effect of an earlier step and no possibly
intervening step undoes it

Example (the classic partial-order plan for putting on shoes): Start precedes both Right Sock and
Left Sock; Right Sock precedes Right Shoe and Left Sock precedes Left Shoe; both Right Shoe and
Left Shoe precede Finish.
