AI - Module 4 Notes
ARTIFICIAL INTELLIGENCE (BCS515B)
SESSION 8
2. A second drawback of data structures in programs (and of databases, for that matter) is the lack of any easy
way to say, for example, “There is a pit in [2,2] or [3,1]” or “If the wumpus is in [1,1] then he is not in [2,2].”
Programs can store a single value for each variable, and some systems allow the value to be “unknown,” but
they lack the expressiveness required to handle partial information.
Propositional logic is a declarative language because its semantics is based on a truth relation between
sentences and possible worlds. It also has sufficient expressive power to deal with partial information, using
disjunction and negation. Propositional logic has a third property that is desirable in
representation languages, namely, compositionality.
In a compositional language, the meaning of a sentence is a function of the meaning of its parts. For example,
the meaning of “S1,4 ∧ S1,2” is related to the meanings of “S1,4” and “S1,2.” It would be very strange if
“S1,4” meant that there is a stench in square [1,4] and “S1,2” meant that there is a stench in square [1,2], but
“S1,4 ∧ S1,2” meant that France and Poland drew 1–1 in last week’s ice hockey qualifying match. Clearly,
non-compositionality makes life much more difficult for the reasoning system.
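Compositionality can be made concrete with a tiny evaluator: the truth value of a compound sentence is computed only from the truth values of its parts. The Python sketch below (the tuple encoding and symbol names are illustrative, not from the notes) shows the idea:

```python
# A minimal sketch of compositional semantics for propositional sentences:
# the meaning (truth value) of a compound is a function of the meanings of
# its parts, and of nothing else.

def evaluate(sentence, world):
    """Recursively evaluate a sentence against a possible world
    (a dict mapping proposition symbols to True/False)."""
    op = sentence[0]
    if op == "sym":                      # atomic proposition, e.g. ("sym", "S14")
        return world[sentence[1]]
    if op == "not":
        return not evaluate(sentence[1], world)
    if op == "and":
        return evaluate(sentence[1], world) and evaluate(sentence[2], world)
    if op == "or":
        return evaluate(sentence[1], world) or evaluate(sentence[2], world)
    raise ValueError(f"unknown operator: {op}")

world = {"S14": True, "S12": False}      # stench in [1,4], none in [1,2]
s = ("and", ("sym", "S14"), ("sym", "S12"))
# The meaning of S14 ∧ S12 depends only on the meanings of S14 and S12.
print(evaluate(s, world))
```

Because `evaluate` only ever consults the values of subsentences, it is impossible for a conjunction to "mean" something unrelated to its conjuncts, which is exactly the compositionality property.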
Wanner (1974) did a similar experiment and found that subjects identified the exact wording they had
encountered only at chance level—about 50% of the time—but remembered the content of what they read with
better than 90% accuracy. This suggests that people process the words to form some kind of representation.
More interesting is the case in which a concept is completely absent in a language. Speakers of the
Australian aboriginal language Guugu Yimithirr have no words for relative directions, such as front, back,
right, or left. Instead they use absolute directions, saying, for example, the equivalent of “I have a pain in my
north arm.” This difference in language makes a difference in behavior: Guugu Yimithirr speakers are better
at navigating in open terrain, while English speakers are better at placing the fork to the right of the plate.
Language also seems to influence thought through seemingly arbitrary grammatical features such as the
gender of nouns. For example, “bridge” is masculine in Spanish and feminine in German. Boroditsky (2003)
asked subjects to choose English adjectives to describe a photograph of a particular bridge. Spanish speakers
chose big, dangerous, strong, and towering, whereas German speakers chose beautiful, elegant, fragile, and
slender. Words can serve as anchor points that affect how we perceive the world.
Loftus and Palmer (1974) showed experimental subjects a movie of an auto accident. Subjects who were
asked “How fast were the cars going when they contacted each other?” reported an average of 32 mph,
while subjects who were asked the question with the word “smashed” instead of “contacted” reported
41 mph for the same cars in the same movie.
In a first-order logic reasoning system that uses CNF, we can see that the linguistic forms "¬(A ∨ B)" and
"¬A ∧ ¬B" are the same because we can look inside the system and see that the two sentences are stored as
the same canonical CNF form. Can we do that with the human brain? Until recently the answer was “no,”
but now it is “maybe.” Mitchell et al. (2008) put subjects in an fMRI (functional magnetic resonance
imaging) machine, showed them words such as “celery,” and imaged their brains. The researchers were then
able to train a computer program to predict, from a brain image, what word the subject had been presented
with. Given two choices (e.g., “celery” or “airplane”), the system predicts correctly 77% of the time.
Objects: people, houses, numbers, theories, Ronald McDonald, colors, baseball games, wars, centuries ...
Relations: these can be unary relations or properties such as red, round, bogus, prime, multistoried ..., or
more general n-ary relations such as brother of, bigger than, inside, part of, has color, occurred after, owns,
comes between, ...
Functions: father of, best friend, third inning of, one more than, beginning of ...
Indeed, almost any assertion can be thought of as referring to objects and properties or relations. Some
examples follow:
“One plus two equals three.” Objects: one, two, three, one plus two; Relation: equals; Function: plus. (“One
plus two” is a name for the object that is obtained by applying the function “plus” to the objects “one” and
“two.” “Three” is another name for this object.)
“Squares neighboring the wumpus are smelly.” Objects: wumpus, squares; Property: smelly; Relation:
neighboring.
“Evil King John ruled England in 1200.” Objects: John, England, 1200; Relation: ruled; Properties: evil,
king.
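The object/relation/function reading of "One plus two equals three" can be sketched in code. In the Python snippet below (the data structures are illustrative assumptions, not the notes' notation), a relation is a set of tuples and a function is a mapping that assigns exactly one value per input:

```python
# A small sketch of how objects, relations, and functions can be represented:
# relations as sets of tuples, functions as dicts.

objects = {"one", "two", "three"}

# The unary relation (property) "prime" is the set of objects it holds of.
prime = {"two", "three"}

# The binary function "plus" maps a pair of objects to a single object.
plus = {("one", "two"): "three", ("two", "one"): "three"}

# The binary relation "equals" as a set of pairs.
equals = {(x, x) for x in objects}

# "One plus two equals three": apply the function, then test the relation.
result = plus[("one", "two")]
print((result, "three") in equals)   # the assertion holds in this model
```

Note the distinction this preserves: "one plus two" is a name for an object (the value of the function), while "equals" is a relation that either holds or does not hold of a pair of objects.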
Propositional logic is a declarative language because its semantics is based on a truth relation between
sentences and possible worlds. It also has sufficient expressive power to deal with partial information using
disjunction and negation. The primary difference between propositional logic and first-order logic lies in the
ontological commitment made by each language. Propositional logic assumes that there are facts that either
hold or do not hold in the world. Each fact can be in one of two states: true or false. First order logic
assumes that the world consists of objects with certain relations among them that do or do not hold.
The language of first-order logic, whose syntax and semantics we define in the next section, is built around
objects and relations. It has been so important to mathematics, philosophy, and artificial intelligence
precisely because those fields—and indeed, much of everyday human existence—can be usefully thought of
as dealing with objects and the relations among them.
First-order logic can also express facts about some or all of the objects in the universe. This enables one to
represent general laws or rules, such as the statement “Squares neighboring the wumpus are smelly.”
The ontological and epistemological commitments of five different logics are summarized in the table below:
Language            | What exists in the world             | What an agent believes about facts
Propositional logic | facts                                | true / false / unknown
First-order logic   | facts, objects, relations            | true / false / unknown
Temporal logic      | facts, objects, relations, times     | true / false / unknown
Probability theory  | facts                                | degree of belief in [0, 1]
Fuzzy logic         | facts with degree of truth in [0, 1] | known interval value
The five objects are:
1. Richard the Lionheart
2. His younger brother, the evil King John
3. Richard's left leg
4. John's left leg
5. A crown
The objects in the model may be related in various ways. In the figure, Richard and John are brothers.
Formally speaking, a relation is just the set of tuples of objects that are related.
A tuple is a collection of objects arranged in a fixed order and is written with angle brackets
surrounding the objects.
Thus, the brotherhood relation in this model is the set
{⟨Richard the Lionheart, King John⟩, ⟨King John, Richard the Lionheart⟩}
The crown is on King John's head, so the "on head" relation contains just one tuple,
⟨the crown, King John⟩.
A relation can be a binary relation relating pairs of objects (e.g., "Brother") or a unary relation
representing a common property (e.g., "Person", which holds of both Richard and John).
Certain kinds of relationships are best considered as functions, which relate an object to exactly one
object.
For example, each person has one left leg, so the model has a unary "left leg" function that includes
the following mappings:
⟨Richard the Lionheart⟩ → Richard’s left leg
⟨King John⟩ → John’s left leg
o The choice of names is entirely up to the user.
o Each predicate and function symbol comes with an arity, which fixes the number of arguments.
The semantics must relate sentences to models in order to determine truth.
To do this, an interpretation is needed specifying exactly which objects, relations and functions are
referred to by the constant, predicate and function symbols.
One possible interpretation, called the intended interpretation, is as follows:
o Richard refers to Richard the Lionheart and John refers to the evil King John.
o Brother refers to the brotherhood relation, that is, the set of tuples of objects
{⟨Richard the Lionheart, King John⟩, ⟨King John, Richard the Lionheart⟩}
o OnHead refers to the "on head" relation that holds between the crown and King John; Person,
King, and Crown refer to the sets of objects that are persons, kings, and crowns.
o LeftLeg refers to the "left leg" function, that is, the mapping
{⟨Richard the Lionheart⟩ → Richard's left leg, ⟨King John⟩ → John's left leg}
There are many other possible interpretations relating these symbols to this particular model.
The truth of any sentence is determined by a model and an interpretation for its symbols.
The syntax of FOL with equality, specified in BNF, is as follows:
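The grammar itself is not reproduced in these notes. As a rough sketch, following the standard textbook presentation (treat the details as approximate):

```text
Sentence        → AtomicSentence | ComplexSentence
AtomicSentence  → Predicate | Predicate(Term, ...) | Term = Term
ComplexSentence → (Sentence) | ¬ Sentence
                | Sentence ∧ Sentence | Sentence ∨ Sentence
                | Sentence ⇒ Sentence | Sentence ⇔ Sentence
                | Quantifier Variable, ... Sentence
Term            → Function(Term, ...) | Constant | Variable
Quantifier      → ∀ | ∃
Constant        → John | Richard | 0 | ...
Variable        → x | y | a | s | ...
Predicate       → King | Brother | Person | ...
Function        → Father | LeftLeg | ...
```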
Figure 8.4 shows that models vary in how many objects they contain—from one up to infinity—and in the
way the constant symbols map to objects. If there are two constant symbols and one object, then both
symbols must refer to the same object; but this can still happen even with more objects. When there are more
objects than constant symbols, some of the objects will have no names. Because the number of possible
models is unbounded, checking entailment by the enumeration of all possible models is not feasible for first-
order logic (unlike propositional logic). Even if the number of objects is restricted, the number of
combinations can be very large.
8.2.3 Terms
A term is a logical expression that refers to an object.
Constant symbols are therefore terms, but it is not always convenient to have a distinct symbol to name every
object.
For example, in English we might use the expression "King John's left leg" rather than giving a name to his
leg.
A complex term is formed by a function symbol followed by a parenthesized list of terms as arguments to
the function symbol.
It is just like a complicated kind of name. It is not a "subroutine call" that returns a value.
The formal semantics of terms is straightforward. Consider a term f(t1, ..., tn). The function symbol f refers
to some function in the model (call it F); the argument terms t1, ..., tn refer to objects in the domain (call
them d1, ..., dn); and the term as a whole refers to the object that is the value of the function F applied to
d1, ..., dn.
For example, suppose the LeftLeg function symbol refers to the function
{⟨Richard the Lionheart⟩ → Richard's left leg, ⟨King John⟩ → John's left leg}
and John refers to King John; then LeftLeg(John) refers to King John's left leg.
In this way the Interpretation fixes the referent of every term.
An atomic sentence is formed from a predicate symbol followed by a parenthesized list of terms:
Brother (Richard, John)
This states that Richard the Lionheart is the brother of King John.
Atomic Sentences can have complex terms as arguments.
Thus, Married (Father (Richard), Mother (John)) states that Richard the Lionheart’s father is married to
King John’s mother.
An atomic sentence is true in a given model, under a given interpretation, if the relation referred to by
the predicate symbol holds among the objects referred to by the arguments.
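This truth condition can be sketched as a lookup. In the Python snippet below (the data structures are illustrative assumptions), an atomic sentence holds iff the tuple of referents is in the relation named by the predicate symbol:

```python
# A sketch of checking an atomic sentence against a model under an
# interpretation: map each argument symbol to its referent, then test
# whether the tuple of referents is in the named relation.

# The model: relations over domain objects.
brother_rel = {("Richard the Lionheart", "King John"),
               ("King John", "Richard the Lionheart")}
on_head_rel = {("the crown", "King John")}

# The intended interpretation: symbols -> objects and relations.
constants = {"Richard": "Richard the Lionheart", "John": "King John"}
predicates = {"Brother": brother_rel, "OnHead": on_head_rel}

def holds(predicate, *args):
    """True iff the relation named by `predicate` contains the tuple of
    objects referred to by the argument symbols."""
    referents = tuple(constants[a] for a in args)
    return referents in predicates[predicate]

print(holds("Brother", "Richard", "John"))   # Brother(Richard, John) holds
print(holds("OnHead", "Richard", "John"))    # no such tuple in the relation
```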
Logical connectives can be used to construct more complex sentences, just as in propositional calculus.
The semantics of sentences formed with logical connectives is identical to that in the propositional case.
8.2.6 Quantifiers
Quantifiers let us express properties of entire collections of objects, instead of enumerating the objects
by name.
There are two standard quantifiers:
o Universal (∀)
o Existential (∃)
Universal quantification (∀)
Rules such as “Squares neighboring the wumpus are smelly” and “All kings are persons” are the
bread and butter of first-order logic.
The second rule, "All kings are persons," is written in first-order logic as
∀x King(x) ⇒ Person(x)
∀ is usually pronounced "For all . . .". (Remember that the upside-down A stands for "all.") Thus, the sentence
says,
“For all x, if x is a king, then x is a person.” The symbol x is called a variable.
A variable is a term all by itself, and as such can also serve as the argument of a function; for example, LeftLeg(x). A
term with no variables is called a ground term.
We can extend the interpretation in five ways:
x → Richard the Lionheart,
x → King John,
x → Richard’s left leg,
x → John’s left leg,
x → the crown.
The universally quantified sentence ∀ x King(x) ⇒ Person(x) is true in the original model if the sentence
King(x) ⇒ Person(x) is true under each of the five extended interpretations.
That is, the universally quantified sentence is equivalent to asserting the following five
sentences:
Richard the Lionheart is a king ⇒ Richard the Lionheart is a person.
King John is a king ⇒ King John is a person.
Richard's left leg is a king ⇒ Richard's left leg is a person.
John's left leg is a king ⇒ John's left leg is a person.
The crown is a king ⇒ the crown is a person.
Nested quantifiers:
1. The simplest case is where the quantifiers are of the same type. For example, "Brothers are siblings" can be
written as
∀x ∀y Brother(x, y) ⇒ Sibling(x, y)
2. Consecutive quantifiers of the same type can be written as one quantifier with several variables.
For example, to say that siblinghood is a symmetric relationship, we can write
∀x, y Sibling(x, y) ⇔ Sibling(y, x)
3. In other cases we will have mixtures. "Everybody loves somebody" means that for every person, there is
someone that person loves:
∀x ∃y Loves(x, y)
4. On the other hand, to say "There is someone who is loved by everyone," we write
∃y ∀x Loves(x, y)
5. The order of quantification is therefore very important. It becomes clearer if we insert parentheses.
∀x (∃y Loves(x, y)) says that everyone has a particular property, namely, the property
that they love someone.
6. On the other hand, ∃y (∀x Loves(x, y)) says that someone in the world has a particular property, namely, the
property of being loved by everybody.
Canara Engineering College, V Semester
7. Some confusion can arise when two quantifiers are used with the same variable name.
Consider the sentence
∀x [Crown(x) ∨ (∃x Brother(Richard, x))]
Here the x in Brother(Richard, x) is existentially quantified. The rule is that the variable belongs to the
innermost quantifier that mentions it; then it will not be subject to any other quantification. Another way to
think of it is this: ∃x Brother (Richard, x) is a sentence about Richard (that he has a brother), not about x; so
putting a ∃ x outside it has no effect. It could equally well have been written ∃ z Brother (Richard, z).
Because this can be a source of confusion, we will always use different variable names with nested
quantifiers.
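The effect of quantifier order (points 3 to 6 above) can be demonstrated on toy data. In the Python sketch below (the names and the Loves relation are invented for illustration), "everybody loves somebody" holds while "there is someone loved by everyone" fails:

```python
# A sketch showing why quantifier order matters: ∀x ∃y Loves(x, y) can be
# true while ∃y ∀x Loves(x, y) is false in the same model.

people = {"Alice", "Bob", "Carol"}
loves = {("Alice", "Bob"), ("Bob", "Carol"), ("Carol", "Alice")}

# ∀x ∃y Loves(x, y): every person loves at least one person.
everybody_loves_somebody = all(
    any((x, y) in loves for y in people) for x in people)

# ∃y ∀x Loves(x, y): some single person is loved by everyone.
somebody_loved_by_all = any(
    all((x, y) in loves for x in people) for y in people)

print(everybody_loves_somebody)   # True
print(somebody_loved_by_all)      # False
```

Nesting `all` inside `any` (or vice versa) mirrors the nesting of the quantifiers exactly, so swapping the two combinators changes the meaning just as swapping ∀ and ∃ does.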
1. Asserting that everyone dislikes parsnips is the same as asserting there does not exist someone who likes
them, and vice versa:
∀x ¬Likes(x, Parsnips) ≡ ¬∃x Likes(x, Parsnips)
2. "Everyone likes ice cream" means that there is no one who does not like ice cream:
∀x Likes(x, IceCream) ≡ ¬∃x ¬Likes(x, IceCream)
3. Because ∀ is really a conjunction over the universe of objects and ∃ is a disjunction, it should not be
surprising that they obey De Morgan's rules. The De Morgan rules for quantified and unquantified sentences
are as follows:
∀x ¬P ≡ ¬∃x P        ¬(P ∨ Q) ≡ ¬P ∧ ¬Q
¬∀x P ≡ ∃x ¬P        ¬(P ∧ Q) ≡ ¬P ∨ ¬Q
∀x P ≡ ¬∃x ¬P        P ∧ Q ≡ ¬(¬P ∨ ¬Q)
∃x P ≡ ¬∀x ¬P        P ∨ Q ≡ ¬(¬P ∧ ¬Q)
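The De Morgan rules for quantifiers can be verified by brute force on a small domain. The Python sketch below enumerates every possible unary predicate (every subset of the domain as its extension) and checks each rule:

```python
# A brute-force check of the quantified De Morgan rules over a 3-element
# domain: for every possible extension of P, each equivalence must hold.

from itertools import chain, combinations

domain = ["a", "b", "c"]
subsets = chain.from_iterable(
    combinations(domain, r) for r in range(len(domain) + 1))

for extension in subsets:
    P = lambda x: x in extension
    # ∀x ¬P(x)  ≡  ¬∃x P(x)
    assert all(not P(x) for x in domain) == (not any(P(x) for x in domain))
    # ¬∀x P(x)  ≡  ∃x ¬P(x)
    assert (not all(P(x) for x in domain)) == any(not P(x) for x in domain)
    # ∀x P(x)   ≡  ¬∃x ¬P(x)
    assert all(P(x) for x in domain) == (not any(not P(x) for x in domain))
    # ∃x P(x)   ≡  ¬∀x ¬P(x)
    assert any(P(x) for x in domain) == (not all(not P(x) for x in domain))

print("All equivalences hold for every predicate over the domain.")
```

This works because, over a finite domain, `all` literally is the conjunction and `any` the disjunction that ∀ and ∃ abbreviate.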
8.2.7 Equality
First-order logic includes one more way to make atomic sentences, other than using a predicate and terms as
described earlier. We can use the equality symbol to signify that two terms refer to the same object. For
example,
Father(John) = Henry
says that the object referred to by Father(John) and the object referred to by Henry are the same.
The equality symbol can be used to state facts about a given function, as we just did for the Father
symbol.
It can also be used with negation to insist that two terms are not the same object. To
say that Richard has at least two brothers, we would write
∃x, y Brother(x, Richard) ∧ Brother(y, Richard) ∧ ¬(x = y)
The sentence ∃x, y Brother(x, Richard) ∧ Brother(y, Richard) alone does not have the intended meaning:
consider the extended interpretation in which both x and y are assigned to King John. The addition of
¬(x = y) rules out such models. The notation x ≠ y is sometimes used as an abbreviation for ¬(x = y).
Similarly, the assertion Brother(John, Richard) ∧ Brother(Geoffrey, Richard) does not capture "Richard's
brothers are John and Geoffrey":
1. First, it is true even in a model where John and Geoffrey refer to the same object; we must add John ≠ Geoffrey.
2. Second, the sentence doesn't rule out models in which Richard has many more brothers besides John and
Geoffrey.
Thus, the correct translation of "Richard's brothers are John and Geoffrey" is as follows:
Brother(John, Richard) ∧ Brother(Geoffrey, Richard) ∧ John ≠ Geoffrey
∧ ∀x Brother(x, Richard) ⇒ (x = John ∨ x = Geoffrey)
Figure 8.5 shows some of the models, ranging from the model with no tuples satisfying the relation to the
model with all tuples satisfying the relation. With two objects, there are four possible two-element tuples, so
there are 2^4 = 16 different subsets of tuples that can satisfy the relation. Thus, there are 16 possible models
in all, far fewer than the infinitely many models for the standard first-order semantics. On the other hand,
the database semantics requires definite knowledge of what the world contains.
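The counting argument can be reproduced in a few lines of Python (the object names are illustrative):

```python
# A sketch of the model count under database semantics: with two objects
# there are 2*2 = 4 ordered pairs, so a binary relation has 2**4 = 16
# possible extensions, i.e. 16 possible models.

from itertools import chain, combinations, product

objects = ["Richard", "John"]
pairs = list(product(objects, repeat=2))          # the 4 ordered pairs
models = list(chain.from_iterable(
    combinations(pairs, r) for r in range(len(pairs) + 1)))

print(len(pairs))    # 4
print(len(models))   # 16
```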
TELL(KB, King(John)) .
TELL(KB, Person(Richard)) .
TELL(KB, ∀x King(x) ⇒ Person(x))
We will have facts such as "Elizabeth is the mother of Charles" and "Charles is the father of
William," and rules such as "One's grandmother is the mother of one's parent."
The objects in our domain are people.
There are two unary predicates, Male and Female.
Kinship relations—parenthood, brotherhood, marriage, and so on—are represented by binary predicates: Parent,
Sibling, Brother , Sister , Child , Daughter, Son, Spouse, Wife, Husband, Grandparent , Grandchild , Cousin,
Aunt, and Uncle.
The functions used are Mother and Father, because every person has exactly one of each of these (at least according
to nature’s design).
Examples:
0 is a constant symbol, and for every object n, if n is a natural number, then S(n) is a natural number:
NatNum(0)
∀n NatNum(n) ⇒ NatNum(S(n))
So the natural numbers are 0, S(0), S(S(0)), and so on.
Addition is defined by two axioms, the first of which says that adding 0 to any natural number m gives m itself:
∀m NatNum(m) ⇒ + (0, m) = m
∀m, n NatNum(m) ∧ NatNum(n) ⇒ + (S(m), n) = S(+ (m, n))
We can also write S(n) as n + 1.
Sets:
The empty set is a constant written as {}. There is one unary predicate, Set, which is true of sets.
The binary predicates are x ∈ s (x is a member of set s) and s1 ⊆ s2 (set s1 is a subset, not necessarily
proper, of set s2).
The binary functions are s1 ∩ s2 (the intersection of two sets), s1 ∪ s2 (the union of two sets), and {x|s}
(the set resulting from adjoining element x to set s).
One possible set of axioms is as follows:
Lists:
Recall that the wumpus agent receives a percept vector with five elements.
The corresponding first-order sentence stored in the knowledge base must include both the
percept and the time at which it occurred; otherwise, the agent will get confused about when it saw what.
For example, a typical percept sentence is
Percept([Stench, Breeze, Glitter, None, None], 5)
Here, Percept is a binary predicate, and Stench and so on are constants placed in a list.
The actions in the wumpus world can be represented by logical terms:
Turn(Right), Turn(Left), Forward, Shoot, Grab, Climb
To determine the best action, the agent program asks the knowledge base a query such as
ASKVARS(BestAction(a, 5))
which returns a binding list such as {a/Grab}. The agent program can then return Grab as the action to take.
The agent's location changes over time, so we write At(Agent, s, t) to mean that the
agent is at square s at time t. We can fix the wumpus's location with ∀t At(Wumpus, [2, 2], t).
We can then say that objects can only be at one location at a time:
∀x, s1, s2, t At(x, s1, t) ∧ At(x, s2, t) ⇒ s1 = s2
1. Identify the task. The knowledge engineer must delineate the range of questions that the knowledge base
will support and the kinds of facts that will be available for each specific problem instance. For example,
does the wumpus knowledge base need to be able to choose actions or is it required to answer questions only
about the contents of the environment? Will the sensor facts include the current location? The task will
determine what knowledge must be represented in order to connect problem instances to answers.
2. Assemble the relevant knowledge. The knowledge engineer might already be an expert in the domain, or
might need to work with real experts to extract what they know—a process called knowledge acquisition.
At this stage, the knowledge is not represented formally. The idea is to understand the scope of the
knowledge base, as determined by the task, and to understand how the domain actually works. For the
wumpus world, which is defined by an artificial set of rules, the relevant knowledge is easy to identify.
(Notice, however, that the definition of adjacency was not supplied explicitly in the wumpus-world rules.)
For real domains, the issue of relevance can be quite difficult—for example, a system for simulating VLSI
designs might or might not need to take into account stray capacitances and skin effects.
3. Decide on a vocabulary of predicates, functions, and constants. That is, translate the important domain-level
concepts into logic-level names. This involves many questions of knowledge-engineering style. Like
programming style, this can have a significant impact on the eventual success of the project. For example,
should pits be represented by objects or by a unary predicate on squares? Should the agent’s orientation be a
function or a predicate? Should the wumpus’s location depend on time? Once the choices have been made,
the result is a vocabulary that is known as the ontology of the domain. The word ontology means a
particular theory of the nature of being or existence. The ontology determines what kinds of things exist, but
does not determine their specific properties and interrelationships.
4. Encode general knowledge about the domain. The knowledge engineer writes down the axioms for all the vocabulary
terms. This pins down (to the extent possible) the meaning of the terms, enabling the expert to check the content.
Often, this step reveals misconceptions or gaps in the vocabulary that must be fixed by returning to step 3 and iterating
through the process.
5. Encode a description of the specific problem instance. If the ontology is well thought out, this step will be easy. It will
involve writing simple atomic sentences about instances of concepts that are already part of the ontology. For a logical
agent, problem instances are supplied by the sensors, whereas a “disembodied” knowledge base is supplied with
additional sentences in the same way that traditional programs are supplied with input data.
6. Pose queries to the inference procedure and get answers. This is where the reward is: we can let the inference
procedure operate on the axioms and problem-specific facts to derive the facts we are interested in knowing. Thus, we
avoid the need for writing an application-specific solution algorithm.
7. Debug the knowledge base. Alas, the answers to queries will seldom be correct on the first try. More precisely, the
answers will be correct for the knowledge base as written, assuming that the inference procedure is sound, but they
will not be the ones that the user is expecting. For example, if an axiom is missing, some queries will not be
answerable from the knowledge base. A considerable debugging process could ensue. Missing axioms or axioms that
are too weak can be easily identified by noticing places where the chain of reasoning stops unexpectedly.
Inference:
The rule of Universal Instantiation (UI for short) says that we can infer any sentence obtained by substituting a
ground term (a term without variables) for the variable.
Let SUBST(θ,α) denote the result of applying the substitution θ to the sentence α.
Then the rule is written: from ∀v α, infer SUBST({v/g}, α), for any variable v and ground term g.
For example, the three sentences given earlier are obtained with the substitutions {x/John}, {x/Richard}, and
{x/Father(John)}.
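SUBST can be sketched as a recursive substitution over sentences encoded as nested tuples (this encoding is an assumption for illustration, not the notes' notation):

```python
# A sketch of SUBST(θ, α) for Universal Instantiation: apply a substitution
# (a dict mapping variables to ground terms) to a sentence represented as
# nested tuples, e.g. ("King", "x") for King(x).

def subst(theta, alpha):
    """Apply substitution theta to alpha, replacing bound variables."""
    if isinstance(alpha, tuple):            # compound: predicate or function
        return tuple(subst(theta, part) for part in alpha)
    return theta.get(alpha, alpha)          # variable if in theta, else atom

# King(x) ∧ Greedy(x) ⇒ Evil(x), instantiated with {x/John}:
rule = ("Implies", ("And", ("King", "x"), ("Greedy", "x")), ("Evil", "x"))
print(subst({"x": "John"}, rule))
# yields King(John) ∧ Greedy(John) ⇒ Evil(John)
```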
In the rule for Existential Instantiation, the variable is replaced by a single new constant symbol. The
formal statement is as follows: for any sentence α, variable v, and constant symbol k that does not appear
elsewhere in the knowledge base, from ∃v α, infer SUBST({v/k}, α).
King(John),
Greedy(John),
Brother(Richard, John)
The new KB is propositionalized: the proposition symbols are
King(John), Greedy(John), Evil(John), King(Richard), and so on.
9.2.3 Storage and retrieval
Given a sentence to be stored, it is possible to construct indices for all possible queries that unify with it.
For the fact Employs(IBM, Richard), the queries are
Employs(IBM, Richard)   Does IBM employ Richard?
Employs(x, Richard)     Who employs Richard?
Employs(IBM, y)         Whom does IBM employ?
Employs(x, y)           Who employs whom?
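The query patterns above can be sketched with a minimal matcher: variables (written here with a leading "?", an illustrative convention) must bind consistently, and constants must agree. A full system would use proper unification (handling nested terms) and indexing:

```python
# A sketch of query-time matching for flat facts: return a binding dict if
# the query matches the fact, else None.

facts = [("Employs", "IBM", "Richard"),
         ("Employs", "IBM", "Anna"),
         ("Employs", "ACME", "Richard")]

def match(query, fact):
    """Match a flat query against a fact; variables start with '?'."""
    if len(query) != len(fact):
        return None
    bindings = {}
    for q, f in zip(query, fact):
        if q.startswith("?"):               # variable: bind, or check binding
            if bindings.setdefault(q, f) != f:
                return None
        elif q != f:                        # constant mismatch
            return None
    return bindings

# "Who employs Richard?"  ->  Employs(x, Richard)
answers = [m for fact in facts
           if (m := match(("Employs", "?x", "Richard"), fact)) is not None]
print(answers)   # bindings for IBM and ACME
```

Note that an all-constant query such as Employs(IBM, Richard) matches with an empty binding dict, which is why the code tests `is not None` rather than truthiness.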