AI Unit 3

This document covers First-Order Logic, including its representation, syntax, and semantics, as well as knowledge engineering and inference methods. It discusses the structure of logical expressions, the use of quantifiers, and the representation of various domains such as kinship and natural numbers. Additionally, it addresses the role of knowledge engineers in formalizing knowledge within specific domains.

UNIT-3

Logic and Knowledge Representation


First-Order Logic: Representation, Syntax and Semantics of First-Order Logic,
Using First-Order Logic, Knowledge Engineering in First-Order Logic.
Inference in First-Order Logic: Propositional vs. First-Order Inference,
Unification and Lifting, Forward Chaining, Backward Chaining, Resolution.
Knowledge Representation: Ontological Engineering, Categories and Objects,
Events. Mental Events and Mental Objects, Reasoning Systems for Categories,
Reasoning with Default Information

FIRST-ORDER LOGIC
FIRST ORDER PREDICATE LOGIC
Propositional logic is a declarative language: its semantics is based on a truth relation
between sentences and possible worlds, and it has sufficient expressive power to deal with
partial information, using disjunction and negation. It lacks, however, the expressive power
to describe concisely an environment with many objects, which is what first-order logic provides.

Combining the best of formal and natural languages


The syntax of natural language
 nouns and noun phrases that refer to objects (squares, pits, wumpuses)
 verbs and verb phrases that refer to relations among objects (is breezy, is adjacent to,
shoots).
 Some of these relations are functions—relations in which there is only one “value” for a
given “input.”

It is easy to start listing examples of objects, relations, and functions:


 Objects: people, houses, numbers, theories, Ronald McDonald, colors, baseball games,
wars, centuries ...
 Relations: these can be unary relations or properties such as red, round, bogus, prime,
multistoried ..., or more general n-ary relations such as brother of, bigger than, inside, part
of, has color, occurred after, owns, comes between, ...
 Functions: father of, best friend, third inning of, one more than, beginning of ..

Some examples follow:


1. “One plus two equals three.”
Objects: one, two, three, one plus two
Relation: equals
Function: plus
2. “Squares neighboring the wumpus are smelly.”
Objects: wumpus, squares
Property: smelly
Relation: neighboring.
3. “Evil King John ruled England in 1200.”
Objects: John, England, 1200
Relation: ruled
Properties: evil, king.

Syntax and Semantics of first-order logic


Models for first-order logic
Models for first-order logic have objects in them.
The domain of a model is the set of objects or domain elements it contains.

Figure 3: A model containing five objects, two binary relations, three unary relations (indicated
by labels on the objects), and one unary function, left-leg.

Above figure 3 shows a model with five objects:


 Richard the Lionheart, King of England from 1189 to 1199;
 his younger brother, the evil King John, who ruled from 1199 to 1215;
 the left legs of Richard and John;
 and a crown.
The objects in the model may be related in various ways; here, Richard and John are brothers.
 A relation is just the set of tuples of objects that are related.
 The brotherhood relation in this model is the set
 { <Richard the Lionheart, King John> , <King John, Richard the Lionheart> }
 The crown is on King John’s head,
 so the “on head” relation contains just one tuple,
 <the crown, King John>
 The “brother” and “on head” relations are binary relations; that is, they relate pairs of objects.
The model also contains unary relations, or properties:
 the “person” property is true of both Richard and John;
 the “king” property is true only of John;
 the “crown” property is true only of the crown.
Certain relationships are best considered as functions, in that
 a given object must be related to exactly one object.
 For example, each person has one left leg, so the model has a unary “left leg” function:
 <Richard the Lionheart> → Richard’s left leg
 <King John> → John’s left leg.
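The model just described can be sketched in code. This is an illustrative sketch only (the dictionary layout and the name `holds` are assumptions, not a standard API): relations are sets of tuples, functions are mappings, and an atomic sentence is true when its tuple of objects is in the corresponding relation.

```python
# A minimal sketch of the five-object model (illustrative names).
model = {
    "objects": {"Richard", "John", "RichardsLeg", "JohnsLeg", "TheCrown"},
    "relations": {
        "Brother": {("Richard", "John"), ("John", "Richard")},
        "OnHead": {("TheCrown", "John")},
        "Person": {("Richard",), ("John",)},
        "King": {("John",)},
        "Crown": {("TheCrown",)},
    },
    "functions": {
        # the unary "left leg" function
        "LeftLeg": {"Richard": "RichardsLeg", "John": "JohnsLeg"},
    },
}

def holds(model, predicate, *args):
    """An atomic sentence Predicate(args) is true iff the tuple of
    objects named by args is in the predicate's relation."""
    return tuple(args) in model["relations"][predicate]

print(holds(model, "Brother", "Richard", "John"))    # True
print(holds(model, "OnHead", "TheCrown", "Richard")) # False
```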

Symbols and interpretations

Figure 3.1: The syntax of first-order logic with equality, specified in Backus–
Naur form
 The symbols stand for objects, relations, and functions, and come in three
kinds:
 constant symbols - which stand for objects;
 predicate symbols - which stand for relations
 function symbols - which stand for functions
Note: these symbols begin with uppercase letters (e.g., Richard, John, Brother).
 Each predicate and function symbol comes with an arity that fixes the number of
arguments.
 As in propositional logic, every model must provide the information required to
determine if any given sentence is true or false.
 Each model includes an interpretation that specifies exactly which objects, relations and
functions are referred to by the constant, predicate, and function symbols.

Terms
A term is a logical expression that refers to an object.

Atomic sentences
 An atomic sentence is formed from a predicate symbol optionally followed by a
parenthesized list of terms such as
Brother (Richard, John).
 Atomic sentences can have complex terms as arguments
 Married (Father (Richard), Mother (John))

Complex sentences
use logical connectives to construct complex sentences
 ¬ Brother (LeftLeg (Richard), John)
 Brother (Richard, John) ∧ Brother (John, Richard)
 King (Richard) ∨ King (John)
 ¬ King (Richard) ⇒ King (John)

Quantifiers
First-order logic contains two standard quantifiers, called universal and existential
1. Universal quantification (∀) - “ For all ”
 “All kings are persons,” is written in first-order logic as
∀ x King (x) ⇒ Person (x) .
 For all x, if x is a king, then x is a person.
 The symbol x is called a variable. Variables are lowercase letters
 A variable is a term all by itself and can also serve as the argument of a function
 For example, LeftLeg (x)
 A term with no variables is called a ground term.

2. Existential quantification (∃) –


 Universal quantification makes statements about every object.
 Similarly, we can make a statement about some object in the universe without naming it,
by using an existential quantifier.
 King John has a crown on his head
∃ x Crown (x) ∧ OnHead (x, John)
 ∃x is pronounced “ There exists an x such that ...” or “For some x...”
 ⇒ is the natural connective to use with ∀,
 ∧ is the natural connective to use with ∃.

Nested quantifiers
We can express more complex sentences using multiple quantifiers. The simplest case is where the
quantifiers are of the same type.
 For example, “Brothers are siblings”
∀ x ∀ y Brother (x, y) ⇒ Sibling (x, y)
 Consecutive quantifiers of the same type can be written as one quantifier with several
variables.
 For example, siblinghood is a symmetric relationship
∀ x, y Sibling (x, y) ⇔ Sibling (y, x)
In other cases we will have a mixture of quantifiers:
 “Everybody loves somebody” means that for every person, there is someone that person
loves: ∀ x ∃ y Loves (x, y) .
 “There is someone who is loved by everyone”
∃ y ∀ x Loves (x, y)
 The order of quantification is therefore very important
 ∀ x (∃ y Loves (x, y)) says that everyone has a particular property, namely, the property
that they love someone.
 ∃ y (∀ x Loves (x, y)) says that someone in the world has a particular property, namely
the property of being loved by everybody.
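Over a finite domain, the semantics of ∀ and ∃ can be checked by exhaustive enumeration, which also makes the order-of-quantification point concrete. The domain and the Loves relation below are made-up illustrative data:

```python
# A small made-up domain and Loves relation.
people = {"Alice", "Bob", "Carol"}
loves = {("Alice", "Bob"), ("Bob", "Carol"), ("Carol", "Bob")}

# ∀x ∃y Loves(x, y): everybody loves somebody.
everybody_loves_somebody = all(
    any((x, y) in loves for y in people) for x in people
)

# ∃y ∀x Loves(x, y): there is someone loved by everyone.
someone_loved_by_all = any(
    all((x, y) in loves for x in people) for y in people
)

print(everybody_loves_somebody)  # True
print(someone_loved_by_all)      # False
```

Swapping the quantifiers swaps the nesting of `all` and `any`, and here it changes the answer, mirroring the discussion above.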

Connections between ∀ and ∃


The two quantifiers are actually intimately connected with each other, through negation.
∀ x ¬Likes (x, Parsnips ) is equivalent to ¬∃ x Likes (x, Parsnips)
“Everyone likes ice cream” means that there is no one who does not like ice cream:
∀ x Likes (x, IceCream) is equivalent to ¬ ∃ x ¬ Likes (x, IceCream)
The De Morgan rules for quantified and unquantified sentences are as follows:
¬∃ x P ≡ ∀ x ¬P        ¬(P ∨ Q) ≡ ¬P ∧ ¬Q
¬∀ x P ≡ ∃ x ¬P        ¬(P ∧ Q) ≡ ¬P ∨ ¬Q
∀ x P ≡ ¬∃ x ¬P        P ∧ Q ≡ ¬(¬P ∨ ¬Q)
∃ x P ≡ ¬∀ x ¬P        P ∨ Q ≡ ¬(¬P ∧ ¬Q)

Equality
The equality symbol is used to signify that two terms refer to the same object.
For example,
Father (John) = Henry
says that the object referred to by Father (John) and the object referred to by Henry are the same.
To say that Richard has at least two brothers
∃ x, y Brother (x, Richard ) ∧ Brother (y, Richard ) ∧ ¬ (x = y)

Using first-order logic
Assertions and queries in first-order logic
 Sentences are added to a knowledge base using TELL. Such sentences are called
assertions.
TELL (KB, sentence)
 For example, we can assert that John is a king, Richard is a person, and all kings are
persons:
TELL (KB, King (John))
TELL (KB, Person (Richard))
TELL (KB, ∀ x King (x) ⇒ Person (x))
 We can ask questions of the knowledge base using ASK.
ASK (KB, King (John)) returns true
ASK (KB, Person (John)) returns true
ASK (KB, ∃x Person (x)) returns true
 Questions asked with ASK are called queries or goals

The kinship domain


This domain includes facts such as
“Elizabeth is the mother of Charles” and “Charles is the father of William” and rules such as
“One’s grandmother is the mother of one’s parent.”
 The objects in our domain are people.
 Kinship relations (parenthood, brotherhood, marriage, and so on) are represented by
binary predicates: Parent, Sibling, Brother, Sister, Child, Daughter, Son, Spouse, Wife,
Husband, Grandparent, Grandchild, Cousin, Aunt, and Uncle.
 Mother and Father are functions, because every person has exactly one of each of these.
1. One’s mother is one’s female parent:
∀ m, c Mother (c) = m ⇔ Female (m) ∧ Parent (m, c)
2. One’s husband is one’s male spouse:
∀ w, h Husband (h, w) ⇔ Male (h) ∧ Spouse (h, w)
3. Male and female are disjoint categories:

∀ x Male (x) ⇔ ¬ Female (x)
4. Parent and child are inverse relations:
∀ p, c Parent (p, c) ⇔ Child (c, p)
5. A grandparent is a parent of one’s parent:
∀ g, c Grandparent (g, c) ⇔ ∃ p Parent (g, p) ∧ Parent (p, c)
6. A sibling is another child of one’s parents:
∀ x, y Sibling (x, y) ⇔ ¬(x = y) ∧ ∃ p Parent (p, x) ∧ Parent (p, y)
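Axioms 5 and 6 can be tested directly against a small set of Parent facts by quantifying over the finite domain. The family facts below are hypothetical illustration data:

```python
# Hypothetical Parent(p, c) facts.
parent = {("Elizabeth", "Charles"), ("Philip", "Charles"),
          ("Charles", "William"), ("Charles", "Harry")}
people = {x for pair in parent for x in pair}

def grandparent(g, c):
    # ∀g,c Grandparent(g,c) ⇔ ∃p Parent(g,p) ∧ Parent(p,c)
    return any((g, p) in parent and (p, c) in parent for p in people)

def sibling(x, y):
    # ∀x,y Sibling(x,y) ⇔ ¬(x=y) ∧ ∃p Parent(p,x) ∧ Parent(p,y)
    return x != y and any((p, x) in parent and (p, y) in parent
                          for p in people)

print(grandparent("Elizabeth", "William"))  # True
print(sibling("William", "Harry"))          # True
print(sibling("William", "William"))        # False: the ¬(x=y) conjunct
```

Note that without the ¬(x = y) conjunct, every child of a parent would count as its own sibling.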

Numbers, sets, and lists


 We need a predicate NatNum that will be true of natural numbers; we need one constant
symbol, 0, and we need one function symbol, S (successor).
 Natural numbers are defined recursively:
NatNum(0) .
∀ n NatNum (n) ⇒ NatNum (S(n))
 We also need axioms to constrain the successor function: 0 is not the successor of any
number, and distinct numbers have distinct successors:
∀ n 0 ≠ S(n)
∀ m, n m ≠ n ⇒ S(m) ≠ S(n)
 Addition is defined in terms of the successor function:
∀ m NatNum(m) ⇒ + (0, m) = m .
∀ m, n NatNum(m) ∧ NatNum(n) ⇒ + (S(m), n) = S(+ (m, n))
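The recursive definitions of the numerals and of addition translate directly into code. In this sketch a numeral is either 0 or a nested tuple standing for S(n); the names are illustrative:

```python
# Numerals as nested successors: 0, S(0), S(S(0)), ...
ZERO = 0
def S(n):
    return ("S", n)

def plus(m, n):
    # +(0, m) = m ;  +(S(m), n) = S(+(m, n))
    if m == ZERO:
        return n
    return S(plus(m[1], n))

def to_int(n):
    """Convert a successor numeral back to a Python int for display."""
    return 0 if n == ZERO else 1 + to_int(n[1])

two = S(S(ZERO))
three = S(S(S(ZERO)))
print(to_int(plus(two, three)))  # 5
```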
SETS
The use of infix notation is an example of syntactic sugar; we will use the normal vocabulary of
set theory as syntactic sugar.
 The empty set is a constant written as {}.
 There is one unary predicate, Set, which is true of sets.
 The binary predicates are x ∈ s (x is a member of set s) and s1 ⊆ s2 (s1 is a subset of s2)
 The binary functions are s1 ∩ s2 (the intersection of two sets), s1 ∪ s2 (the union of two
sets), and {x | s} (the set resulting from adjoining element x to set s).

One possible set of axioms is as follows:


1. The only sets are the empty set and those made by adjoining something to a set:
∀ s Set (s) ⇔ (s = {} ) ∨ (∃ x, s2 Set (s2) ∧ s = {x | s2})
2. The empty set has no elements adjoined into it. In other words, there is no way to
decompose {} into a smaller set and an element:
¬ ∃ x, s {x | s} = {}
3. Adjoining an element already in the set has no effect:
∀ x, s x ∈ s ⇔ s = {x | s}
4. The only members of a set are the elements that were adjoined into it. We express this
recursively, saying that x is a member of s if and only if s is equal to some set s2 adjoined
with some element y, where either y is the same as x or x is a member of s2:
∀ x, s x ∈ s ⇔ ∃ y, s2 (s = {y | s2} ∧ (x = y ∨ x ∈ s2))
5. A set is a subset of another set if and only if all of the first set’s members are members of the
second set:
∀ s1, s2 s1 ⊆ s2 ⇔ (∀ x x ∈ s1 ⇒ x ∈ s2)
6. Two sets are equal if and only if each is a subset of the other:
∀ s1, s2 (s1 = s2) ⇔ (s1 ⊆ s2 ∧ s2 ⊆ s1)
7. An object is in the intersection of two sets if and only if it is a member of both sets:
∀ x, s1, s2 x ∈ (s1 ∩ s2) ⇔ (x ∈ s1 ∧ x ∈ s2)
8. An object is in the union of two sets if and only if it is a member of either set:
∀ x, s1, s2 x ∈ (s1 ∪ s2) ⇔ (x ∈ s1 ∨ x ∈ s2).
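The adjoin-based construction of sets (axioms 1, 4, and 5) can be executed directly: {} is an atom, {x | s} is a pair, and membership and subset follow the recursive axioms. The representation is an illustrative sketch:

```python
# {} is the atom "empty"; {x | s} is the triple ("adjoin", x, s).
EMPTY = "empty"
def adjoin(x, s):
    return ("adjoin", x, s)

def member(x, s):
    # Axiom 4: x ∈ s ⇔ ∃y,s2 (s = {y | s2} ∧ (x = y ∨ x ∈ s2))
    if s == EMPTY:
        return False
    _, y, s2 = s
    return x == y or member(x, s2)

def subset(s1, s2):
    # Axiom 5: s1 ⊆ s2 ⇔ ∀x (x ∈ s1 ⇒ x ∈ s2)
    if s1 == EMPTY:
        return True
    _, y, rest = s1
    return member(y, s2) and subset(rest, s2)

s = adjoin(1, adjoin(2, EMPTY))        # {1 | {2 | {}}}
print(member(2, s))                    # True
print(subset(adjoin(2, EMPTY), s))     # True
```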

LISTS
Lists are similar to sets. The differences are that lists are ordered and the same element can
appear more than once in a list.

The wumpus world


The wumpus agent receives a percept vector with five elements:
Percept ( [ Stench, Breeze, Glitter , None, None ], 5)
The actions in the wumpus world can be represented by logical terms:
Turn (Right), Turn (Left), Forward, Shoot, Grab, Climb
To determine which action is best, the agent program executes the query
ASKVARS (KB, BestAction (a, 5)), which returns a binding list such as {a / Grab}
Simple “reflex” behavior can also be implemented by quantified implication sentences
∀ t Glitter (t) ⇒ BestAction (Grab, t)

Knowledge engineering in first-order logic


A knowledge engineer is someone who investigates a particular domain, learns what concepts
are important in that domain, and creates a formal representation of the objects and relations in
the domain.
It includes the following steps:
1. Identify the task: - The task will determine what knowledge must be represented in order
to connect problem instances to answers. This step is analogous to the PEAS process for
designing agents.
2. Assemble the relevant knowledge: - The knowledge engineer might already be an expert
in the domain, or might need to work with real experts to extract what they know—a
process called knowledge acquisition.
3. Decide on a vocabulary of predicates, functions, and constants: - That is, translate the
important domain-level concepts into logic-level names. Once the choices have been made,
the result is a vocabulary that is known as the ontology of the domain. The word ontology
means a particular theory of the nature of being or existence.
4. Encode general knowledge about the domain: - The knowledge engineer writes down
the axioms for all the vocabulary terms. This pins down (to the extent possible) the meaning
of the terms, enabling the expert to check the content. Often, this step reveals
misconceptions or gaps in the vocabulary that must be fixed by returning to step 3 and
iterating through the process.
5. Encode a description of the specific problem instance: - For a logical agent, problem
instances are supplied by the sensors, whereas a “disembodied” knowledge base is supplied
with additional sentences in the same way that traditional programs are supplied with
input data.
6. Pose queries to the inference procedure and get answers: - This is where the reward is:
we can let the inference procedure operate on the axioms and problem-specific facts to
derive the facts we are interested in knowing.
7. Debug the knowledge base: - The answers will be correct for the knowledge base as
written, assuming that the inference procedure is sound, but they may not be the ones that
the user is expecting; debugging means tracing such answers back to faulty axioms.
For example, the sentence ∀x NumOfLegs (x, 4) ⇒ Mammal (x) is false for reptiles and
amphibians.

INFERENCE IN FIRST-ORDER LOGIC

Propositional versus first-order inference


1) Inference rules for quantifiers
 Suppose our knowledge base contains the standard axiom stating that all greedy kings are
evil:
∀ x King(x) ∧ Greedy(x) ⇒ Evil(x)
 Then it seems quite permissible to infer any of the following sentences:
King (John) ∧ Greedy (John) ⇒ Evil (John)
King (Richard) ∧ Greedy (Richard) ⇒ Evil (Richard)
King (Father (John)) ∧ Greedy (Father (John)) ⇒ Evil (Father (John)) .

The rule of Universal Instantiation (UI for short) says that we can infer any sentence obtained
by substituting a ground term (a term without variables) for the variable.

In the rule for Existential Instantiation, the variable is replaced by a single new constant
symbol.

 For example, from the sentence
∃ x Crown(x) ∧ OnHead (x, John)
 we can infer the sentence
Crown(C1) ∧ OnHead(C1, John)
C1 does not appear elsewhere in the knowledge base.

2) Reduction to propositional inference


 suppose our knowledge base contains just the sentences
∀ x King(x) ∧ Greedy(x) ⇒ Evil(x)
King(John)
Greedy(John)
Brother (Richard, John)
 apply UI to the first sentence using all possible ground-term substitutions from {x/John}
and {x/Richard}. We obtain
King(John) ∧ Greedy(John) ⇒ Evil(John)
King(Richard ) ∧ Greedy(Richard) ⇒ Evil(Richard)
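Universal Instantiation is just substitution of ground terms for variables. The sketch below (an illustrative representation: sentences as nested tuples, variables as lowercase strings) propositionalizes the greedy-kings rule:

```python
# Sentences as nested tuples; variables are lowercase strings.
rule = ("=>", ("&", ("King", "x"), ("Greedy", "x")), ("Evil", "x"))

def subst(theta, s):
    """Apply a substitution such as {'x': 'John'} throughout a sentence."""
    if isinstance(s, str):
        return theta.get(s, s)
    return tuple(subst(theta, part) for part in s)

# Universal Instantiation with every ground term in the KB:
for term in ["John", "Richard"]:
    print(subst({"x": term}, rule))
```

Each printed sentence is a ground (propositional) instance of the universally quantified rule, as in the two instantiations shown above.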

Forward chaining
Start with the atomic sentences in the knowledge base and apply Modus Ponens in the forward
direction, adding new atomic sentences, until no further inferences can be made.

“The law says that it is a crime for an American to sell weapons to hostile nations. The
country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it
by Colonel West, who is American.”
Prove that West is a criminal.

Represent these facts as first-order definite clauses:-


 “it is a crime for an American to sell weapons to hostile nations”:
American(x) ∧ Weapon(y) ∧ Sells (x, y, z) ∧ Hostile(z) ⇒ Criminal (x)
 Nono . . . has some missiles.

 The sentence ∃x Owns (Nono,x) ∧ Missile(x) is transformed into two definite clauses by
Existential Instantiation, introducing a new constant M1:
Owns (Nono, M1)
Missile(M1)
 “All of its missiles were sold to it by Colonel West”
Missile(x) ∧ Owns (Nono, x) ⇒ Sells (West, x, Nono) .
 We will also need to know that missiles are weapons:
Missile(x) ⇒ Weapon(x)
 and we must know that an enemy of America counts as “hostile”:
Enemy (x, America) ⇒ Hostile(x)
 “West, who is American . . .”
American(West)
 “The country Nono, an enemy of America . . .”:
Enemy (Nono, America) .

Forward Chaining Algorithm: -


 American(x) ∧Weapon(y) ∧ Sells (x, y, z) ∧ Hostile(z) ⇒ Criminal (x)
 Owns (Nono, M1)
 Missile(M1)
 Missile(x) ∧ Owns (Nono, x) ⇒ Sells (West, x, Nono)
 Missile (x) ⇒ Weapon(x)
 Enemy (x, America) ⇒ Hostile(x)
 American(West)
 Enemy (Nono, America)

Step-1: -
Start with the known facts and choose the sentences that have no implications (the atomic
facts), such as:
 American(West)
 Enemy (Nono, America)
 Owns (Nono, M1)
 Missile(M1)
Step-2: -
On each iteration, add every fact that follows from a rule whose premises are satisfied by the facts found so far: first Weapon(M1), Sells(West, M1, Nono), and Hostile(Nono); then, using these, Criminal(West).

Figure 3.2: The proof tree generated by forward chaining on the crime example.
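The crime example can be run through a small forward chainer. This is an illustrative sketch, not the textbook algorithm verbatim: facts and rule premises are tuples, variables are lowercase strings, and a simple recursive matcher plays the role of unification (it assumes facts are ground).

```python
def is_var(t):
    return isinstance(t, str) and t[:1].islower()

def unify(pat, fact, theta):
    """Extend substitution theta so that pattern pat matches ground fact."""
    if theta is None:
        return None
    if is_var(pat):
        if pat in theta:
            return unify(theta[pat], fact, theta)
        return {**theta, pat: fact}
    if isinstance(pat, tuple) and isinstance(fact, tuple) and len(pat) == len(fact):
        for p, f in zip(pat, fact):
            theta = unify(p, f, theta)
            if theta is None:
                return None
        return theta
    return theta if pat == fact else None

def subst(theta, term):
    if is_var(term):
        return theta.get(term, term)
    if isinstance(term, tuple):
        return tuple(subst(theta, t) for t in term)
    return term

def match_all(premises, facts, theta):
    """Yield every substitution satisfying all premises against the facts."""
    if not premises:
        yield theta
        return
    for fact in facts:
        t = unify(premises[0], fact, dict(theta))
        if t is not None:
            yield from match_all(premises[1:], facts, t)

def forward_chain(facts, rules):
    facts = set(facts)
    while True:
        new = {subst(theta, conclusion)
               for premises, conclusion in rules
               for theta in match_all(premises, facts, {})} - facts
        if not new:                 # fixed point: no further inferences
            return facts
        facts |= new

facts = [("American", "West"), ("Enemy", "Nono", "America"),
         ("Owns", "Nono", "M1"), ("Missile", "M1")]
rules = [
    ([("American", "x"), ("Weapon", "y"), ("Sells", "x", "y", "z"),
      ("Hostile", "z")], ("Criminal", "x")),
    ([("Missile", "x"), ("Owns", "Nono", "x")],
     ("Sells", "West", "x", "Nono")),
    ([("Missile", "x")], ("Weapon", "x")),
    ([("Enemy", "x", "America")], ("Hostile", "x")),
]
derived = forward_chain(facts, rules)
print(("Criminal", "West") in derived)  # True
```

The derived set grows exactly as in the proof tree: Weapon(M1), Sells(West, M1, Nono), and Hostile(Nono) on the first pass, Criminal(West) on the second.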

Resolution
1) Conjunctive normal form for first-order logic
 first-order resolution requires that sentences be in conjunctive normal form (CNF)
 a conjunction of clauses, where each clause is a disjunction of literals

“Everyone who loves all animals is loved by someone,”


First Order Logic: -
∀ x [∀ y Animal(y) ⇒ Loves(x, y)] ⇒ [∃ y Loves(y, x)]

The steps are as follows: -


 Eliminate implications:
 Move ¬ inwards:
 Standardize variables:
 Skolemize:
 Drop universal quantifiers:
 Distribute ∨ over ∧:
 Eliminate implications:
∀ x [¬∀ y (¬Animal(y) ∨ Loves(x, y))] ∨ [∃ y Loves(y, x)]
 Move ¬ inwards:
¬∀ x p becomes ∃ x ¬p
¬∃ x p becomes ∀ x ¬p
∀ x [∃ y ¬(¬Animal(y) ∨ Loves(x, y))] ∨ [∃ y Loves(y, x)] .
∀ x [∃ y ¬¬Animal(y) ∧ ¬Loves(x, y)] ∨ [∃ y Loves(y, x)] .
∀ x [∃ y Animal(y) ∧ ¬Loves(x, y)] ∨ [∃ y Loves(y, x)]
 Standardize variables:
∀ x [∃ y Animal(y) ∧ ¬Loves(x, y)] ∨ [∃ z Loves(z, x)]
 Skolemize: Skolemization is the process of removing existential quantifiers. In the
simplest case we translate ∃ x P(x) into P(A), where A is a new constant. Doing that
naively here, as in
∀ x [Animal(A) ∧ ¬Loves(x, A)] ∨ Loves(B, x)
would be wrong, because the existential variables lie inside the scope of ∀x and may
depend on x. Instead, each existential variable is replaced by a Skolem function of the
universally quantified variables in whose scope it appears:
∀ x [Animal(F(x)) ∧ ¬Loves(x, F(x))] ∨ Loves(G(x), x)
• Drop universal quantifiers:
[Animal(F(x)) ∧ ¬Loves(x, F(x))] ∨ Loves(G(x), x)
• Distribute ∨ over ∧:
[Animal(F(x)) ∨ Loves(G(x), x)] ∧ [¬Loves(x, F(x)) ∨ Loves(G(x), x)]
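The connective-level steps (eliminate implications, move ¬ inwards, distribute ∨ over ∧) can be sketched for quantifier-free sentences; standardizing variables and Skolemization are omitted. The tuple representation and function names are illustrative assumptions:

```python
# Atoms are strings; compound sentences are tuples like ('=>', p, q).
def nnf(s):
    """Eliminate => and move ~ inwards (negation normal form)."""
    if isinstance(s, str):
        return s
    op = s[0]
    if op == '=>':                      # a => b  becomes  ~a | b
        return nnf(('|', ('~', s[1]), s[2]))
    if op == '~':
        t = s[1]
        if isinstance(t, str):
            return s                    # negated atom: a literal
        if t[0] == '~':                 # double negation
            return nnf(t[1])
        if t[0] == '&':                 # De Morgan
            return nnf(('|', ('~', t[1]), ('~', t[2])))
        if t[0] == '|':
            return nnf(('&', ('~', t[1]), ('~', t[2])))
        return nnf(('~', nnf(t)))       # e.g. ~(a => b)
    return (op, nnf(s[1]), nnf(s[2]))

def distribute(s):
    """Distribute | over & so the result is a conjunction of clauses."""
    if isinstance(s, str) or s[0] == '~':
        return s
    a, b = distribute(s[1]), distribute(s[2])
    if s[0] == '|':
        if isinstance(a, tuple) and a[0] == '&':
            return ('&', distribute(('|', a[1], b)), distribute(('|', a[2], b)))
        if isinstance(b, tuple) and b[0] == '&':
            return ('&', distribute(('|', a, b[1])), distribute(('|', a, b[2])))
    return (s[0], a, b)

# P => (Q & R) becomes (~P | Q) & (~P | R)
print(distribute(nnf(('=>', 'P', ('&', 'Q', 'R')))))
```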

2) The resolution inference rule
Two clauses can be resolved when one contains a literal li that unifies with the negation of a literal mj in the other. The resolvent is the disjunction of all the remaining literals from both clauses, with the unifier θ applied: if UNIFY(li, ¬mj) = θ, then from l1 ∨ ··· ∨ lk and m1 ∨ ··· ∨ mn we infer
SUBST(θ, l1 ∨ ··· ∨ li−1 ∨ li+1 ∨ ··· ∨ lk ∨ m1 ∨ ··· ∨ mj−1 ∨ mj+1 ∨ ··· ∨ mn)

KNOWLEDGE REPRESENTATION
ONTOLOGICAL ENGINEERING
Concepts such as Events, Time, Physical Objects, and Beliefs occur in many different domains.
Representing these abstract concepts is sometimes called ontological engineering.

The general framework of concepts is called an upper ontology because of the convention of drawing
graphs with the general concepts at the top and the more specific concepts below them, as in Figure
12.1.
Categories and Objects
The organization of objects into categories is a vital part of knowledge representation. Although
interaction with the world takes place at the level of individual objects, much reasoning takes place
at the level of categories. For example, a shopper would normally have the goal of buying a basketball,
rather than a particular basketball such as BB9.
There are two choices for representing categories in first-order logic: predicates and objects.
That is, we can use the predicate Basketball (b), or we can reify the category as an object,
Basketballs. We could then say Member(b, Basketballs), which we will abbreviate as b ∈
Basketballs, to say that b is a member of the category of basketballs. We say Subset(Basketballs,
Balls), abbreviated as Basketballs ⊂ Balls, to say that Basketballs is a subcategory of Balls.
Categories serve to organize and simplify the knowledge base through inheritance. If we
say that all instances of the category Food are edible, and if we assert that Fruit is a subclass of
Food and Apples is a subclass of Fruit , then we can infer that every apple is edible. We say that
the individual apples inherit the property of edibility, in this case from their membership in the
Food category.
First-order logic makes it easy to state facts about categories, either by relating objects to categories
or by quantifying over their members. Here are some types of facts, with examples of each:
• An object is a member of a category.
BB9 ∈ Basketballs
• A category is a subclass of another category.
Basketballs ⊂ Balls
• All members of a category have some properties.
(x∈ Basketballs) ⇒ Spherical (x)
• Members of a category can be recognized by some properties.
Orange(x) ∧ Round (x) ∧ Diameter(x)=Inches(9.5) ∧ x∈ Balls ⇒ x∈ Basketballs
• A category as a whole has some properties.
Dogs ∈DomesticatedSpecies
Notice that because Dogs is a category and is a member of DomesticatedSpecies , the latter
must be a category of categories.

Categories can also be defined by providing necessary and sufficient conditions for membership.
For example, a bachelor is an unmarried adult male:
x∈ Bachelors ⇔ Unmarried(x) ∧ x∈ Adults ∧ x∈Males
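Inheritance through the subclass hierarchy can be sketched as a walk up a chain of categories; members of a category inherit every property asserted of it or of any supercategory. The dictionaries below are illustrative data, not a library API:

```python
# Apples ⊂ Fruit ⊂ Food, and all Food is edible.
subclass_of = {"Apples": "Fruit", "Fruit": "Food"}
properties = {"Food": {"edible"}, "Basketballs": {"spherical"}}

def has_property(category, prop):
    """Walk up the subclass chain; a category has a property if it,
    or any of its supercategories, asserts it."""
    while category is not None:
        if prop in properties.get(category, set()):
            return True
        category = subclass_of.get(category)
    return False

print(has_property("Apples", "edible"))     # True: inherited from Food
print(has_property("Apples", "spherical"))  # False
```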

Physical Composition
We use the general PartOf relation to say that one thing is part of another. Objects can be grouped
into part of hierarchies, reminiscent of the Subset hierarchy:
PartOf (Bucharest , Romania)
PartOf (Romania, EasternEurope)
PartOf (EasternEurope, Europe)
PartOf (Europe, Earth)
The PartOf relation is transitive and reflexive; that is,
PartOf (x, y) ∧ PartOf (y, z) ⇒ PartOf (x, z)
PartOf (x, x)
Therefore, we can conclude PartOf (Bucharest , Earth).
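The conclusion PartOf(Bucharest, Earth) follows by repeatedly applying the transitivity axiom, which can be sketched as a fixed-point closure computation over the PartOf pairs:

```python
def transitive_closure(pairs):
    """Repeatedly apply PartOf(x,y) ∧ PartOf(y,z) ⇒ PartOf(x,z)
    until no new pairs appear."""
    closure = set(pairs)
    while True:
        extra = {(x, w) for (x, y) in closure
                        for (z, w) in closure if y == z} - closure
        if not extra:
            return closure
        closure |= extra

part_of = {("Bucharest", "Romania"), ("Romania", "EasternEurope"),
           ("EasternEurope", "Europe"), ("Europe", "Earth")}
print(("Bucharest", "Earth") in transitive_closure(part_of))  # True
```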

Composite objects with definite parts but no particular structure can be denoted with the
BunchOf function. For example, if the apples are Apple1, Apple2, and Apple3, then
BunchOf ({Apple1,Apple2,Apple3})
denotes the composite object with the three apples as parts (not elements).

We can define BunchOf in terms of the PartOf relation. Obviously, each element of s is part of
BunchOf (s):
∀x x∈ s ⇒ PartOf (x, BunchOf (s))
Furthermore, BunchOf (s) is the smallest object satisfying this condition. In other words, BunchOf
(s) must be part of any object that has all the elements of s as parts:
∀ y [∀x x∈ s ⇒ PartOf (x, y)] ⇒ PartOf (BunchOf (s), y)
Measurements
In both scientific and commonsense theories of the world, objects have height, mass, cost, and so
on. The values that we assign for these properties are called measures.
Length(L1)=Inches(1.5)=Centimeters(3.81)
Conversion between units is done by equating multiples of one unit to another:
Centimeters(2.54 ×d)=Inches(d)
Similar axioms can be written for pounds and kilograms, seconds and days, and dollars and cents.
Measures can be used to describe objects as follows:
Diameter (Basketball12)=Inches(9.5)
ListPrice(Basketball12)=$(19)
d∈ Days ⇒ Duration(d)=Hours(24)
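The conversion axiom Centimeters(2.54 × d) = Inches(d) can be checked numerically by normalizing every length to a common base unit (centimeters here; the function names are illustrative):

```python
# Lengths normalized to centimeters: Centimeters(2.54 × d) = Inches(d).
def inches(d):
    return 2.54 * d      # magnitude in centimeters

def centimeters(d):
    return d

# Length(L1) = Inches(1.5) = Centimeters(3.81)
print(abs(inches(1.5) - centimeters(3.81)) < 1e-9)   # True
# Diameter(Basketball12) = Inches(9.5) is 24.13 cm
print(abs(inches(9.5) - centimeters(24.13)) < 1e-9)  # True
```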

Time Intervals
Event calculus opens us up to the possibility of talking about time, and time intervals. We will
consider two kinds of time intervals: moments and extended intervals. The distinction is that only
moments have zero duration:
Partition({Moments,ExtendedIntervals}, Intervals )
i∈Moments ⇔ Duration(i)=Seconds(0)
The functions Begin and End pick out the earliest and latest moments in an interval, and the function
Time delivers the point on the time scale for a moment. The function Duration gives the difference
between the end time and the start time.
Interval (i) ⇒ Duration(i)=(Time(End(i)) − Time(Begin(i)))
Time(Begin(AD1900))=Seconds(0)
Time(Begin(AD2001))=Seconds(3187324800)
Time(End(AD2001))=Seconds(3218860800)
Duration(AD2001)=Seconds(31536000)

Two intervals Meet if the end time of the first equals the start time of the second. The complete set
of interval relations, as proposed by Allen (1983), is shown graphically in Figure 12.2 and logically
below:
Meet(i,j) ⇔ End(i)=Begin(j)
Before(i,j) ⇔ End(i) < Begin(j)
After (j,i) ⇔ Before(i, j)
During(i,j) ⇔ Begin(j) < Begin(i) < End(i) < End(j)
Overlap(i,j) ⇔ Begin(i) < Begin(j) < End(i) < End(j)
Begins(i,j) ⇔ Begin(i) = Begin(j)
Finishes(i,j) ⇔ End(i) = End(j)
Equals(i,j) ⇔ Begin(i) = Begin(j) ∧ End(i) = End(j)
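Allen's relations translate directly into comparisons on (begin, end) pairs. A sketch, with Richard's and John's reigns as the example intervals:

```python
# Intervals as (begin, end) pairs of numbers.
def meet(i, j):     return i[1] == j[0]
def before(i, j):   return i[1] < j[0]
def after(j, i):    return before(i, j)
def during(i, j):   return j[0] < i[0] < i[1] < j[1]
def overlap(i, j):  return i[0] < j[0] < i[1] < j[1]
def begins(i, j):   return i[0] == j[0]
def finishes(i, j): return i[1] == j[1]
def equals(i, j):   return i[0] == j[0] and i[1] == j[1]

richard = (1189, 1199)   # Richard's reign
john = (1199, 1215)      # John's reign
print(meet(richard, john))    # True: end of one equals start of the other
print(before(richard, john))  # False: they meet, not strictly before
```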

EVENTS
Event calculus reifies fluents and events. The fluent At(Shankar , Berkeley) is an object that refers
to the fact of Shankar being in Berkeley, but does not by itself say anything about whether it is true.
To assert that a fluent is actually true at some point in time we use the predicate T, as in
T(At(Shankar, Berkeley), t).
Events are described as instances of event categories. The event E1 of Shankar flying from San
Francisco to Washington, D.C. is described as
E1 ∈ Flyings ∧ Flyer (E1, Shankar ) ∧ Origin(E1, SF) ∧ Destination(E1,DC)
we can define an alternative three-argument version of the category of flying events and say
E1 ∈ Flyings(Shankar , SF,DC)
We then use Happens(E1, i) to say that the event E1 took place over the time interval i, and we
say the same thing in functional form with Extent(E1)=i. We represent time intervals by a
(start, end) pair of times; that is, i = (t1, t2) is the time interval that starts at t1 and ends at t2.
The complete set of predicates for one version of the event calculus is
T(f, t) Fluent f is true at time t
Happens(e, i) Event e happens over the time interval i
Initiates(e, f, t) Event e causes fluent f to start to hold at time t
Terminates(e, f, t) Event e causes fluent f to cease to hold at time t
Clipped(f, i) Fluent f ceases to be true at some point during time interval i
Restored (f, i) Fluent f becomes true sometime during time interval i
We assume a distinguished event, Start , that describes the initial state by saying which fluents are
initiated or terminated at the start time. We define T by saying that a fluent holds at a point in time
if the fluent was initiated by an event at some time in the past and was not made false (clipped) by
an intervening event. A fluent does not hold if it was terminated by an event and not made true
(restored) by another event. Formally, the axioms are:
Happens(e, (t1, t2)) ∧ Initiates(e, f, t1) ∧ ¬Clipped(f, (t1, t)) ∧ t1 < t ⇒ T(f, t)
Happens(e, (t1, t2)) ∧ Terminates(e, f, t1) ∧ ¬Restored (f, (t1, t)) ∧ t1 < t ⇒ ¬T(f, t)
where Clipped and Restored are defined by
Clipped(f, (t1, t2)) ⇔ ∃ e, t, t3 Happens(e, (t, t3)) ∧ t1 ≤ t < t2 ∧ Terminates(e, f, t)
Restored (f, (t1, t2)) ⇔ ∃ e, t, t3 Happens(e, (t, t3)) ∧ t1 ≤ t < t2 ∧ Initiates(e, f, t)
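The T axiom can be executed over concrete event records. This is a simplified sketch (times are plain numbers, and Initiates/Terminates are reduced to time-independent event-fluent pairs); the event names are made up:

```python
# Event occurrences: (event, (t1, t2)); effects: (event, fluent) pairs.
happens = [("TurnOn", (1, 1)), ("TurnOff", (5, 5))]
initiates = {("TurnOn", "LightOn")}
terminates = {("TurnOff", "LightOn")}

def clipped(f, t1, t2):
    # Clipped(f,(t1,t2)): some event in [t1, t2) terminates f
    return any((e, f) in terminates and t1 <= t < t2
               for e, (t, _t3) in happens)

def T(f, t):
    # T(f,t): some earlier event initiated f, and f was not clipped since
    return any((e, f) in initiates and t1 < t and not clipped(f, t1, t)
               for e, (t1, _t2) in happens)

print(T("LightOn", 3))  # True: turned on at 1, not yet turned off
print(T("LightOn", 7))  # False: turned off at 5
```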

MENTAL EVENTS AND MENTAL OBJECTS


What we need is a model of the mental objects that are in someone’s head (or something’s
knowledge base) and of the mental processes that manipulate those mental objects. The model does
not have to be detailed. We do not have to be able to predict how many milliseconds it will take for
a particular agent to make a deduction. We will be happy just to be able to conclude that mother
knows whether or not she is sitting.
We begin with the propositional attitudes that an agent can have toward mental objects:
attitudes such as Believes, Knows, Wants, Intends, and Informs. The difficulty is that these
attitudes do not behave like “normal” predicates. For example, suppose we try to assert that Lois
knows that Superman can fly:
Knows(Lois, CanFly(Superman))
One minor issue with this is that we normally think of CanFly(Superman) as a sentence, but here it
appears as a term. That issue can be patched up just by reifying CanFly(Superman), making it a
fluent. A more serious problem is that, if it is true that Superman is Clark Kent, then we must conclude
that Lois knows that Clark can fly:
(Superman = Clark) ∧ Knows(Lois, CanFly(Superman)) |= Knows(Lois, CanFly(Clark))
Modal logic is designed to address this problem. Regular logic is concerned with a single
modality, the modality of truth, allowing us to express “P is true.” Modal logic includes special
modal operators that take sentences (rather than terms) as arguments. For example, “A knows
P” is represented with the notation K_A P, where K is the modal operator for knowledge. It
takes two arguments, an agent (written as the subscript) and a sentence. The syntax of modal logic
is the same as first-order logic, except that sentences can also be formed with modal operators.
In first-order logic a model contains a set of objects and an interpretation that maps each name
to the appropriate object, relation, or function. In modal logic we want to be able to consider both
the possibility that Superman’s secret identity is Clark and that it isn’t. Therefore, we will need a
more complicated model, one that consists of a collection of possible worlds rather than just one true
world. The worlds are connected in a graph by accessibility relations, one relation for each modal
operator.
We say that world w1 is accessible from world w0 with respect to the modal operator K_A if everything
in w1 is consistent with what A knows in w0, and we write this as Acc(K_A, w0, w1). In diagrams such
as Figure 12.4 we show accessibility as an arrow between possible worlds.
In general, a knowledge atom K_A P is true in world w if and only if P is true in every world
accessible from w. The truth of more complex sentences is derived by recursive application of this
rule and the normal rules of first-order logic. That means that modal logic can be used to reason about
nested knowledge sentences: what one agent knows about another agent’s knowledge. For example,
we can say that, even though Lois doesn’t know whether Superman’s secret identity is Clark Kent,
she does know that Clark knows:
K_Lois [K_Clark Identity(Superman, Clark) ∨ K_Clark ¬Identity(Superman, Clark)]
Figure 12.4 shows some possible worlds for this domain, with accessibility relations for Lois
and Superman.

In the TOP-LEFT diagram, it is common knowledge that Superman knows his own identity,
and neither he nor Lois has seen the weather report. So in w0 the worlds w0 and w2 are accessible
to Superman; maybe rain is predicted, maybe not. For Lois all four worlds are accessible from
each
other; she doesn’t know anything about the report or if Clark is Superman. But she does know that
Superman knows whether he is Clark, because in every world that is accessible to Lois, either
Superman knows I (writing I for Identity(Superman, Clark)), or he knows ¬I. Lois does not know
which is the case, but either way she knows Superman knows.
In the TOP-RIGHT diagram it is common knowledge that Lois has seen the weather report.
So in w4 she knows rain is predicted and in w6 she knows rain is not predicted. Superman does
not know the report, but he knows that Lois knows, because in every world that is accessible to him,
either she knows R or she knows ¬R.
In the BOTTOM diagram we represent the scenario where it is common knowledge that
Superman knows his identity, and Lois might or might not have seen the weather report. We represent
this by combining the two top scenarios, and adding arrows to show that Superman does not know
which scenario actually holds. Lois does know, so we don’t need to add any arrows for her. In w0
Superman still knows I but not R, and now he does not know whether Lois knows R. From what
Superman knows, he might be in w0 or w2, in which case Lois does not know whether R is true; or
he could be in w4, in which case she knows R; or w6, in which case she knows ¬R.
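The evaluation rule above (a knowledge atom is true in w iff its body holds in every accessible world) can be sketched as a small possible-worlds model checker. All names below (the world labels and the atoms I and R) are hypothetical, chosen to match the top-left scenario described in the text: I stands for "Superman is Clark" and R for "rain is predicted".

```python
# A minimal possible-worlds model checker.  World labels and atoms are
# hypothetical, following the TOP-LEFT scenario in the text.
worlds = {
    "w0": {"I": True,  "R": True},
    "w2": {"I": True,  "R": False},
    "w1": {"I": False, "R": True},
    "w3": {"I": False, "R": False},
}

# Acc(K_agent, w, v): v is accessible from w iff v is consistent with what
# the agent knows in w.  Superman knows I everywhere; Lois knows nothing.
acc = {
    "Superman": {w: {v for v in worlds if worlds[v]["I"] == worlds[w]["I"]}
                 for w in worlds},
    "Lois": {w: set(worlds) for w in worlds},
}

def holds(sentence, w):
    """Evaluate a sentence at world w.  Sentences are an atom name,
    ("not", s), ("or", s1, s2), or ("K", agent, s)."""
    if isinstance(sentence, str):
        return worlds[w][sentence]
    op = sentence[0]
    if op == "not":
        return not holds(sentence[1], w)
    if op == "or":
        return holds(sentence[1], w) or holds(sentence[2], w)
    if op == "K":  # K_agent s is true iff s holds in every accessible world
        agent, s = sentence[1], sentence[2]
        return all(holds(s, v) for v in acc[agent][w])
    raise ValueError(op)

# Lois knows that Superman knows whether he is Clark, without knowing which:
nested = ("K", "Lois", ("or", ("K", "Superman", "I"),
                              ("K", "Superman", ("not", "I"))))
print(holds(nested, "w0"))              # True
print(holds(("K", "Lois", "I"), "w0"))  # False
```

Note how the nested sentence comes out true at w0 even though K_Lois I is false there: in every world Lois considers possible, Superman knows one of I or ¬I.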

REASONING SYSTEMS FOR CATEGORIES


This section describes systems specially designed for organizing and reasoning with categories.
There are two closely related families of systems: semantic networks provide graphical aids for
visualizing a knowledge base and efficient algorithms for inferring properties of an object on the
basis of its category membership; and description logics provide a formal language for constructing

and combining category definitions and efficient algorithms for deciding subset and superset
relationships between categories.
SEMANTIC NETWORKS
There are many variants of semantic networks, but all are capable of representing individual objects,
categories of objects, and relations among objects. A typical graphical notation displays object or
category names in ovals or boxes, and connects them with labeled links. For example, Figure 12.5
has a MemberOf link between Mary and FemalePersons, corresponding to the logical assertion
Mary ∈ FemalePersons; similarly, the SisterOf link between Mary and John corresponds to the
assertion SisterOf(Mary, John). We can connect categories using SubsetOf links, and so on.
We know that persons have female persons as mothers, so can we draw a HasMother link from
Persons to FemalePersons? The answer is no, because HasMother is a relation between a
person and his or her mother, and categories do not have mothers. For this reason, we have used a
special notation—the double-boxed link—in Figure 12.5. This link asserts that
∀x x ∈ Persons ⇒ [∀y HasMother(x, y) ⇒ y ∈ FemalePersons]
We might also want to assert that persons have two legs—that is,
∀x x ∈ Persons ⇒ Legs(x, 2)
The semantic network notation makes it convenient to perform inheritance reasoning. For
example, by virtue of being a person, Mary inherits the property of having two legs. Thus, to find
out how many legs Mary has, the inheritance algorithm follows the MemberOf link from Mary to
the category she belongs to, and then follows SubsetOf links up the hierarchy until it finds a
category for which there is a boxed Legs link—in this case, the Persons category.

Inheritance becomes complicated when an object can belong to more than one category or when a
category can be a subset of more than one other category; this is called multiple inheritance.
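The inheritance procedure just described can be sketched in a few lines of Python. Node names follow Figure 12.5; the Legs values for Persons and John reproduce the default and the override mentioned later in this section (John has one leg, persons have two by default).

```python
# A sketch of the inheritance algorithm described above.  Each node lists
# its MemberOf/SubsetOf parents, and boxed property values are stored
# locally on the node that asserts them.
parents = {
    "Mary": ["FemalePersons"],
    "John": ["MalePersons"],
    "FemalePersons": ["Persons"],
    "MalePersons": ["Persons"],
    "Persons": [],
}
local_props = {
    "Persons": {"Legs": 2},  # default: all persons have two legs
    "John": {"Legs": 1},     # more specific information overrides it
}

def inherit(node, prop):
    """Follow links up the hierarchy until a value for prop is found;
    breadth-first search makes the most specific value win."""
    frontier = [node]
    while frontier:
        n = frontier.pop(0)
        if prop in local_props.get(n, {}):
            return local_props[n][prop]
        frontier.extend(parents.get(n, []))
    return None

print(inherit("Mary", "Legs"))  # 2, inherited from Persons
print(inherit("John", "Legs"))  # 1, John's own value shadows the default
```

Because the search stops at the first value found on the way up, the local assertion on John acts exactly as the default-override behavior discussed below.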
A drawback of semantic network notation, compared to first-order logic, is that links between
bubbles can represent only binary relations.

For example, the sentence Fly(Shankar , NewYork, NewDelhi ,Yesterday) cannot be asserted
directly in a semantic network. Nonetheless, we can
obtain the effect of n-ary assertions by reifying the proposition itself as an event belonging to an
appropriate event category. Figure 12.6 shows the semantic network structure for this particular
event. Notice that the restriction to binary relations forces the creation of a rich ontology of reified
concepts.

One of the most important aspects of semantic networks is their ability to represent default values
for categories. Examining Figure 12.5 carefully, one notices that John has one leg, despite the fact
that he is a person and all persons have two legs. In a strictly logical KB, this would be a contradiction,
but in a semantic network, the assertion that all persons have two legs has only default status; that
is, a person is assumed to have two legs unless this is contradicted by more specific information.
EXAMPLE 1
Consider the crime example. The sentences in CNF are
¬American(x) ∨ ¬Weapon(y) ∨ ¬Sells(x, y, z) ∨ ¬Hostile(z) ∨ Criminal(x)
¬Missile(x) ∨ ¬Owns(Nono, x) ∨ Sells(West, x, Nono)
¬Enemy(x, America) ∨ Hostile(x)
¬Missile(x) ∨ Weapon(x)
Owns(Nono, M1)
Missile(M1)
American(West)
Enemy(Nono, America)
We also include the negated goal ¬Criminal (West). The resolution proof is shown
in Figure 9.11.

Notice the structure: a single “spine” beginning with the goal clause and resolving against
clauses from the knowledge base until the empty clause is generated. This is
characteristic of resolution on Horn clause knowledge bases. In fact, the clauses along
the main spine correspond exactly to the consecutive values of the goals variable in the
backward-chaining algorithm of Figure 9.6. This is because we always choose to
resolve with a clause whose positive literal unifies with the leftmost literal of the
“current” clause on the spine; this is exactly what happens in backward chaining. Thus,
backward chaining is just a special case of resolution with a particular control strategy
to decide which resolution to perform next.
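The refutation can be reproduced mechanically. As a simplification, the sketch below uses the clauses already instantiated with the substitution {x/West, y/M1, z/Nono} (full first-order resolution would discover this via unification); a literal is a string with "-" marking negation, and a clause is a frozenset of literals.

```python
# Ground-resolution sketch of the crime proof.  The first-order clauses are
# shown already instantiated with {x/West, y/M1, z/Nono}.
KB = [
    frozenset({"-American(West)", "-Weapon(M1)", "-Sells(West,M1,Nono)",
               "-Hostile(Nono)", "Criminal(West)"}),
    frozenset({"-Missile(M1)", "-Owns(Nono,M1)", "Sells(West,M1,Nono)"}),
    frozenset({"-Enemy(Nono,America)", "Hostile(Nono)"}),
    frozenset({"-Missile(M1)", "Weapon(M1)"}),
    frozenset({"Owns(Nono,M1)"}), frozenset({"Missile(M1)"}),
    frozenset({"American(West)"}), frozenset({"Enemy(Nono,America)"}),
    frozenset({"-Criminal(West)"}),  # the negated goal
]

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def resolve(c1, c2):
    """All resolvents of two clauses, one per complementary literal pair."""
    return [(c1 - {lit}) | (c2 - {negate(lit)})
            for lit in c1 if negate(lit) in c2]

def refutes(clauses):
    """Saturate under resolution; True iff the empty clause is derived."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolve(c1, c2):
                    if not r:          # empty clause: contradiction found
                        return True
                    new.add(r)
        if new <= clauses:             # nothing new: no refutation exists
            return False
        clauses |= new

print(refutes(KB))  # True: the KB plus ¬Criminal(West) is unsatisfiable
```

Dropping the negated goal from KB makes the saturation terminate without producing the empty clause, confirming that the contradiction comes from assuming ¬Criminal(West).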

EXAMPLE 2
Our second example makes use of Skolemization and involves clauses that are not
definite clauses. This results in a somewhat more complex proof structure. In English,
the problem is as follows:
Everyone who loves all animals is loved by someone.
Anyone who kills an animal is loved by no one.
Jack loves all animals.
Either Jack or Curiosity killed the cat, who is named Tuna.
Did Curiosity kill the cat?

The resolution proof that Curiosity killed the cat is given in Figure 9.12. In English, the
proof could be paraphrased as follows:
Suppose Curiosity did not kill Tuna. We know that either Jack or Curiosity did; thus Jack
must have. Now, Tuna is a cat and cats are animals, so Tuna is an animal. Because anyone
who kills an animal is loved by no one, we know that no one loves Jack. On the other hand,
Jack loves all animals, so someone loves him; so we have a contradiction. Therefore,
Curiosity killed the cat.

The proof answers the question “Did Curiosity kill the cat?” but often we want to pose
more general questions, such as “Who killed the cat?” Resolution can do this, but it
takes a little
more work to obtain the answer. The goal is ∃w Kills(w, Tuna), which, when
negated, becomes ¬Kills(w, Tuna) in CNF. Repeating the proof in Figure 9.12 with
the new negated goal, we obtain a similar proof tree, but with the substitution
{w/Curiosity} in one of the steps. So, in this case, finding out who killed the cat is
just a matter of keeping track of the bindings for the query variables in the proof.
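The binding {w/Curiosity} arises from unification, which can be sketched briefly. The conventions below are assumptions made for this illustration: lowercase strings are variables, capitalized strings are constants, and compound terms are tuples such as ("Kills", "w", "Tuna"); the occurs check is omitted for brevity.

```python
# A minimal unification sketch, enough to show how {w/Curiosity} is found.
def is_var(t):
    # Convention assumed here: lowercase strings are variables.
    return isinstance(t, str) and t[0].islower()

def substitute(t, s):
    """Apply substitution s to term t."""
    if is_var(t):
        return substitute(s[t], s) if t in s else t
    if isinstance(t, tuple):
        return tuple(substitute(a, s) for a in t)
    return t

def unify(x, y, s=None):
    """Return the most general unifier of x and y extending s, or None.
    Occurs check omitted for brevity."""
    if s is None:
        s = {}
    x, y = substitute(x, s), substitute(y, s)
    if x == y:
        return s
    if is_var(x):
        return {**s, x: y}
    if is_var(y):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

# The negated goal literal unifies with Kills(Curiosity, Tuna), binding w:
print(unify(("Kills", "w", "Tuna"), ("Kills", "Curiosity", "Tuna")))
# {'w': 'Curiosity'}
```

Carrying this binding through the proof is exactly the bookkeeping needed to answer “Who killed the cat?”.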

EXAMPLE 3
1. All people who are graduating are happy.
2. All happy people smile.
3. Someone is graduating.
4. Conclusion: Is someone smiling?

Solution:
Convert the sentences into predicate Logic
1. ∀x graduating(x) ⇒ happy(x)
2. ∀x happy(x) ⇒ smile(x)
3. ∃x graduating(x)
4. ∃x smile(x) (the conclusion to be proved)

Convert to clause form (conjunct of disjuncts):
1. ¬graduating(x) ∨ happy(x)
2. ¬happy(y) ∨ smile(y)
3. graduating(G1) (G1 is a Skolem constant replacing “someone”)
Negate the conclusion ∃x smile(x), giving the clause ¬smile(z).
Resolution proof:
¬smile(z) resolves with 2 under {y/z}, giving ¬happy(z);
¬happy(z) resolves with 1 under {x/z}, giving ¬graduating(z);
¬graduating(z) resolves with 3 under {z/G1}, giving the empty clause.
The negated conclusion leads to a contradiction, so someone is smiling.
EXAMPLE 4
Explain the unification algorithm used for reasoning under predicate logic with an example.
Consider the following facts
a. Team India
b. Team Australia
c. Final match between India and Australia
d. India scored 350 runs, Australia scored 350 runs, India lost 5 wickets, Australia lost 7 wickets.
e. The team which scored the maximum runs wins.
f. If the scores are same the team which lost minimum wickets wins the match.
Represent the facts in predicate, convert to clause form and prove by resolution “India wins the
match”.

EXAMPLE 5

EXAMPLE 6 & EXAMPLE 7

Difference between Predicate Logic and Propositional Logic

1. In propositional logic, a whole sentence is represented by a single symbol that is either true or
false; in predicate logic, a sentence is analyzed into predicates applied to terms (objects).
2. Propositional logic cannot represent individual objects, their properties, or relations among
them; predicate logic can, using constants, variables, and functions.
3. Propositional logic has no quantifiers; predicate logic provides the universal (∀) and existential
(∃) quantifiers, so “All men are mortal” becomes ∀x Man(x) ⇒ Mortal(x) instead of needing a
separate symbol for every man.
4. Entailment in propositional logic is decidable (for example, by truth tables); entailment in
first-order predicate logic is only semidecidable.