Source Book: Artificial Intelligence by Stuart Russell and Peter Norvig. Notes Compiled by: Dr. Thyagaraju G S, Professor, HOD-CSE, SDMIT.
AI_Module5
Syllabus :
Inference in First Order Logic: Backward Chaining, Resolution
Classical Planning: Definition of Classical Planning, Algorithms for Planning as
State-Space Search, Planning Graphs
Chapter 9-9.4, 9.5
Chapter 10- 10.1,10.2,10.3
Topics:
1. Inference in First Order Logic
1. Backward Chaining
2. Resolution
2. Classical Planning
1. Definition of Classical Planning
2. Algorithms for Planning as State Space Search
3. Planning Graphs
4. Logic Programming
5.1.1 Backward Chaining
Backward chaining is a reasoning method that starts with the goal and works
backward through the inference rules to find out whether the goal can be
satisfied by the known facts.
It's essentially goal-driven reasoning, where the system seeks to prove the
hypothesis by breaking it down into subgoals and verifying if the premises
support them.
Example : Consider the following knowledge base representing a simple
diagnostic system:
1. If a patient has a fever, it might be a cold.
2. If a patient has a sore throat, it might be strep throat.
3. If a patient has a fever and a sore throat, they should see a doctor.
Given the facts:
• The patient has a fever.
• The patient has a sore throat.
Backward chaining would proceed as follows:
• Start with the goal: Should the patient see a doctor?
• Find a rule that concludes the goal. Rule 3 applies: the patient should
see a doctor if they have a fever and a sore throat.
• Check the subgoals against the known facts: the patient has a fever and
a sore throat. Both premises hold.
• The goal is satisfied: The patient should see a doctor.
Backward chaining is useful when there is a specific goal to be achieved, and the
system can efficiently backtrack through the inference rules to determine
whether the goal can be satisfied.
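The goal-driven check described above can be sketched in a few lines of Python. The rule and fact names below are an illustrative encoding of the diagnostic example, not something from the source:

```python
# Backward chaining over simple propositional rules: each rule maps a
# conclusion to alternative premise sets (any one set suffices).
RULES = {
    "see_doctor": [["fever", "sore_throat"]],
}
FACTS = {"fever", "sore_throat"}

def prove(goal, facts=FACTS, rules=RULES):
    """Return True if goal follows from facts via backward chaining."""
    if goal in facts:                      # base case: goal is a known fact
        return True
    for premises in rules.get(goal, []):   # try each rule concluding the goal
        if all(prove(p, facts, rules) for p in premises):
            return True
    return False

print(prove("see_doctor"))  # prints True
```

`prove("see_doctor")` succeeds because both premises of the rule reduce to known facts, mirroring the subgoal decomposition above.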
Backward Chaining: Algorithm
These algorithms work backward from the goal, chaining through rules to find
known facts that support the proof.
Source Book: Stuart J. Russell and Peter Norvig, Artificial Intelligence, 3rd Edition, Pearson, 2015
Overview of the Algorithm
1. Goal:
o The purpose of the algorithm is to determine whether a query
(goal) can be derived from a given knowledge base (KB).
2. Process:
o It uses backward chaining, meaning it starts with the goal and
works backward by looking for rules or facts in the knowledge
base that could satisfy the goal.
o The algorithm returns substitutions (values or variables) that make
the query true.
3. Key Components:
o FOL-BC-ASK: This is the main function that starts the backward-
chaining process by calling FOL-BC-OR.
o FOL-BC-OR: This function checks whether the goal can be
satisfied by any rule in the KB. It iterates over applicable rules and
tries to unify the goal with the rule’s conclusions.
o FOL-BC-AND: This function handles multiple sub-goals. It
ensures that all sub-goals are satisfied for the main goal to be true.
4. Key Terminology:
o FOL-BC-ASK: Entry point for the algorithm.
o FOL-BC-OR: Handles rules and checks if the goal is satisfied by
any rule.
o FOL-BC-AND: Ensures all sub-goals are satisfied.
o FETCH-RULES-FOR-GOAL: Retrieves applicable rules for a
goal.
o UNIFY: Matches terms by finding substitutions.
o Standardize Variables: Ensures variable names are unique to avoid
conflict.
o θ: The substitution carried into the current function call.
o θ′: A substitution produced by solving the first sub-goal in FOL-BC-
AND.
o θ′′: A substitution produced by solving the remaining sub-goals
using the updated θ′.
Detailed Algorithm Steps
Step 1: FOL-BC-ASK(KB, query):
Start with the query and call FOL-BC-OR.
Example: For Ancestor(John, Sam), call:
o FOL-BC-OR(KB, Ancestor(John, Sam), { }).
Step 2: FOL-BC-OR(KB, goal, θ):
Fetch all rules from the KB that could produce the goal.
For each rule:
1. Standardize Variables: Make rule variables unique to avoid
conflicts.
2. Unify rhs and goal: Match the conclusion of the rule (rhs)
with the current goal using Unify. This updates θ.
3. Call FOL-BC-AND: Recursively evaluate the conditions (lhs) of
the rule with the updated θ.
Yield θ′: Each substitution that satisfies the rule is yielded back to the
caller.
Step 3: FOL-BC-AND(KB, goals, θ):
Handles multiple sub-goals (goals) produced from the rule's conditions.
1. If goals is empty, yield θ because all sub-goals are satisfied.
2. Otherwise:
o Split goals into first and rest.
o Call FOL-BC-OR for the first goal.
o For each result (θ′) from FOL-BC-OR:
Recursively solve rest using FOL-BC-AND with the
updated θ′.
Yield θ′′, the result of solving all sub-goals.
Step-by-Step Explanation with Example
Let’s use the following Knowledge Base (KB) and query.
Knowledge Base:
1. Parent(x, y) ⇒ Ancestor(x, y) (Rule 1)
2. Parent(x, z) ∧ Ancestor(z, y) ⇒ Ancestor(x, y) (Rule 2)
3. Parent(John, Mary) (Fact)
4. Parent(Mary, Sam) (Fact)
Query: Ancestor(John, Sam)
Execution Steps
Step 1: FOL-BC-ASK
Query: Ancestor(John, Sam)
Calls: FOL-BC-OR(KB, Ancestor(John, Sam), { }).
Step 2: FOL-BC-OR
Goal: Ancestor(John, Sam)
Fetch rules for Ancestor:
1. Rule 1: Parent(x, y) ⇒ Ancestor(x, y)
2. Rule 2: Parent(x, z) ∧ Ancestor(z, y) ⇒ Ancestor(x, y)
Case 1: Use Rule 1
lhs = Parent(x, y), rhs = Ancestor(x, y).
Unify Ancestor(John, Sam) with Ancestor(x, y):
o Substitution: θ = {x=John, y=Sam}.
Sub-goal: Parent(John, Sam).
Step 3: FOL-BC-AND
Goals: [Parent(John, Sam)]
Calls: FOL-BC-OR(KB, Parent(John, Sam), {x=John, y=Sam}).
Step 4: FOL-BC-OR
Goal: Parent(John, Sam)
Check the KB:
o Facts: Parent(John, Mary) (no match for Sam).
o Rule 1 fails.
Case 2: Use Rule 2
lhs = Parent(x, z) ∧ Ancestor(z, y), rhs = Ancestor(x, y).
Unify Ancestor(John, Sam) with Ancestor(x, y):
o Substitution: θ = {x=John, y=Sam}.
Sub-goals:
o goals = [Parent(John, z), Ancestor(z, Sam)].
Step 5: FOL-BC-AND
Goals: [Parent(John, z), Ancestor(z, Sam)].
1. First sub-goal (Parent(John, z)):
o Calls: FOL-BC-OR(KB, Parent(John, z), θ).
o Matches: Parent(John, Mary).
o Substitution: {z=Mary}.
o Update θ′: {x=John, y=Sam, z=Mary}.
2. Second sub-goal (Ancestor(z, Sam)):
o Calls: FOL-BC-OR(KB, Ancestor(Mary, Sam), θ′).
o Unify with Rule 1: Parent(x, y) ⇒ Ancestor(x, y).
o Sub-goal: Parent(Mary, Sam).
Step 6: FOL-BC-AND
Goal: [Parent(Mary, Sam)].
Matches fact: Parent(Mary, Sam).
Substitution: {x=Mary, y=Sam}.
Satisfies all sub-goals.
Final Result
Combine all substitutions:
o {x=John, y=Sam, z=Mary}.
The query Ancestor(John, Sam) is true.
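The three functions can be sketched as Python generators. The term encoding below (tuples for atoms, "?x"-style strings for variables) is an illustrative choice, not the book's; the occurs check is omitted for brevity, and left-recursive rules could loop:

```python
def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def subst(theta, t):
    """Apply substitution theta to a term."""
    if is_var(t):
        return subst(theta, theta[t]) if t in theta else t
    if isinstance(t, tuple):
        return tuple(subst(theta, a) for a in t)
    return t

def unify(a, b, theta):
    """UNIFY: extend theta to make a and b identical, or return None."""
    if theta is None:
        return None
    a, b = subst(theta, a), subst(theta, b)
    if a == b:
        return theta
    if is_var(a):
        return {**theta, a: b}
    if is_var(b):
        return {**theta, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            theta = unify(x, y, theta)
        return theta
    return None

COUNTER = [0]

def standardize(rule):
    """STANDARDIZE-VARIABLES: give the rule fresh variable names."""
    COUNTER[0] += 1
    ren = {}
    def walk(t):
        if is_var(t):
            return ren.setdefault(t, t + "_" + str(COUNTER[0]))
        if isinstance(t, tuple):
            return tuple(walk(a) for a in t)
        return t
    lhs, rhs = rule
    return [walk(g) for g in lhs], walk(rhs)

def fol_bc_or(kb, goal, theta):
    for rule in kb:
        lhs, rhs = standardize(rule)
        t2 = unify(rhs, goal, dict(theta))
        if t2 is not None:
            yield from fol_bc_and(kb, lhs, t2)

def fol_bc_and(kb, goals, theta):
    if not goals:
        yield theta
    else:
        first, rest = goals[0], goals[1:]
        for t1 in fol_bc_or(kb, subst(theta, first), theta):
            yield from fol_bc_and(kb, rest, t1)

# The Ancestor knowledge base from the text; a rule is (lhs-list, rhs),
# and a fact is a rule with an empty lhs.
KB = [
    ([("Parent", "?x", "?y")], ("Ancestor", "?x", "?y")),
    ([("Parent", "?x", "?z"), ("Ancestor", "?z", "?y")],
     ("Ancestor", "?x", "?y")),
    ([], ("Parent", "John", "Mary")),
    ([], ("Parent", "Mary", "Sam")),
]
print(any(True for _ in fol_bc_or(KB, ("Ancestor", "John", "Sam"), {})))
# prints True
```

The query succeeds exactly as traced above: Rule 1 fails on Parent(John, Sam), and Rule 2 succeeds through z = Mary.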
5.1.2 Resolution
Resolution is a fundamental inference rule used in automated theorem proving
and logic programming. It is based on the principle of proof by contradiction.
Resolution combines logical sentences in the form of clauses to derive new
sentences.
The resolution rule states that if there are two clauses that contain
complementary literals (one positive, one negative) then these literals can be
resolved, leading to a new clause that is inferred from the original clauses.
Example 1:
Consider two logical statements:
1. P∨Q
2. ¬P∨R
Applying resolution: Resolve the statements by eliminating P:
• P∨Q
• ¬P∨R
Resolving P and ¬P: Q∨R
The resulting statement Q∨R is a new clause inferred from the original two.
Resolution is a key component of logical reasoning in FOL, especially in tasks like
automated theorem proving and knowledge representation.
Example 2:
Clause 1: (P∨Q∨R)
Clause 2: (¬P∨¬Q∨S)
To resolve these clauses, we look for complementary literals. Here there are
two complementary pairs, P/¬P and Q/¬Q, but the resolution rule may remove
only one pair per step:
Resolving on P gives: (Q∨R∨¬Q∨S)
Resolving on Q gives: (P∨R∨¬P∨S)
Each resolvent still contains a complementary pair, so both are tautologies and
can be discarded. Note that removing both pairs in a single step to obtain
(R∨S) would be unsound: (R∨S) does not follow from the two clauses (take P
true and Q, R, S false).
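A single binary-resolution step on propositional clauses can be sketched as follows. Clauses are frozensets of literal strings, with "~P" as the negation of "P" (an illustrative encoding):

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """All clauses obtained by resolving c1, c2 on ONE complementary pair."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def is_tautology(c):
    return any(negate(lit) in c for lit in c)

# Example 1: (P ∨ Q) and (¬P ∨ R) resolve to (Q ∨ R)
print(resolvents(frozenset({"P", "Q"}), frozenset({"~P", "R"})))

# Example 2: (P ∨ Q ∨ R) and (¬P ∨ ¬Q ∨ S) — each single-pair resolvent
# keeps the other complementary pair, so every resolvent is a tautology.
for r in resolvents(frozenset({"P", "Q", "R"}), frozenset({"~P", "~Q", "S"})):
    print(sorted(r), "tautology:", is_tautology(r))
```

Resolving on only one pair per step, and discarding tautologous resolvents, is exactly what a practical resolution prover does.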
Conjunctive Normal Form
A formula is in CNF if it is a conjunction (AND) of clauses, where each clause is
a disjunction (OR) of literals.
CNF Examples:
• (A ∨ ¬B) ∧ (¬A ∨ C)
• (P ∨ Q) ∧ (¬P ∨ R) ∧ ¬R
• A single clause, such as (A ∨ B), or a single literal A, is also in CNF.
Proof By Resolution Process includes the following steps in general
1. Initial Set of Clauses (Knowledge Base)
2. Convert the Statement into Clausal Form
3. Skolemization
4. Standardize Variables
5. Unification
6. Resolution Rule
7. Iterative Application
Resolution in First Order Logic (FOL) is a proof technique used in automated
reasoning to determine the validity of a statement. It is based on the principle of
refutation, where we attempt to derive a contradiction from a set of clauses.
Key Steps in Resolution in FOL:
1. Convert the Statement into Clausal Form:
o The statement and its negation are converted into conjunctive
normal form (CNF).
o CNF is a conjunction of disjunctions where each disjunction is
called a clause.
o Example: (A∨¬B)∧(¬A∨C).
2. Skolemization:
o Eliminate existential quantifiers by replacing them with Skolem
functions or constants.
o This step ensures the formula becomes purely universal.
3. Standardize Variables:
o Rename variables so that no two clauses share the same variable
names.
4. Unification:
o Unification is the process of making two literals identical by
finding a substitution for their variables.
o Example: P(x) and P(a) can be unified with the substitution x=a.
5. Resolution Rule:
o The resolution rule combines two clauses that contain
complementary literals (e.g., P(x) and ¬P(x)) to produce a new
clause without those literals.
o Example (with the clauses standardized apart):
Clause 1: P(x)∨Q(x),
Clause 2: ¬P(a)∨R(y),
Unifier: {x=a},
Resolvent: Q(a)∨R(y).
6. Iterative Application:
o Apply the resolution rule repeatedly to derive new clauses.
o If the empty clause □ is derived, it indicates a contradiction,
which proves the original statement.
Example: Proving Validity
Suppose we have the following knowledge base:
1. P(a)∨Q(b)
2. ¬Q(b)∨R(c)
3. ¬R(c)
We want to prove P(a).
Steps:
1. Negate the statement to be proven and add it to the clauses:
o P(a) becomes ¬P(a).
2. Apply the resolution rule:
o Combine ¬P(a) and P(a)∨Q(b) to get Q(b).
o Combine Q(b) and ¬Q(b)∨R(c) to get R(c).
o Combine R(c) and ¬R(c) to get the empty clause □.
Since the empty clause □ is derived, the original statement P(a) is valid: it is
entailed by the knowledge base.
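The refutation loop for these ground clauses can be sketched directly. Clauses are frozensets of literal strings with "~" for negation (an illustrative encoding):

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """Resolvents of c1 and c2, one complementary pair at a time."""
    return [frozenset((c1 - {l}) | (c2 - {negate(l)}))
            for l in c1 if negate(l) in c2]

def refute(clauses):
    """Return True iff the empty clause is derivable from the clause set."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a != b:
                    for r in resolvents(a, b):
                        if not r:          # empty clause: contradiction
                            return True
                        new.add(r)
        if new <= clauses:                 # saturated without empty clause
            return False
        clauses |= new

KB = [frozenset({"P(a)", "Q(b)"}),
      frozenset({"~Q(b)", "R(c)"}),
      frozenset({"~R(c)"})]
# Add the negated goal ~P(a) and search for the empty clause.
print(refute(KB + [frozenset({"~P(a)"})]))  # prints True
```

Running the loop derives Q(b), then R(c), then □, matching the hand derivation above; the KB alone (without the negated goal) saturates without producing the empty clause.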
Applications of Resolution in FOL:
Automated theorem proving.
Reasoning in expert systems.
Artificial intelligence, particularly for tasks involving logical inference.
Example 1:
Let's consider a simplified example of a knowledge base for the Wumpus World
scenario and demonstrate proof by resolution to establish the unsatisfiability of
a certain statement.
In Wumpus World, an agent explores a grid containing a Wumpus (a monster),
pits, and gold. Apply the resolution to prove P[1,2].
Knowledge Base (KB)
1. W[1,1] ∨ P[1,2]
2. ¬W[1,1]∨¬P[1,2]
3. B[1,2]⇒P[1,2]
4. ¬B[1,2]⇒¬P[1,2]
Convert the Knowledge Base (KB) into CNF:
1. W[1,1] ∨ P[1,2]
2. ¬W[1,1] ∨ ¬P[1,2]
3. ¬B[1,2] ∨ P[1,2] (from B[1,2] ⇒ P[1,2])
4. B[1,2] ∨ ¬P[1,2] (from ¬B[1,2] ⇒ ¬P[1,2])
Negated Conclusion:
To prove P[1,2], we add its negation ¬P[1,2] to the set of clauses.
Apply Resolution:
1. W[1,1] ∨ P[1,2] with ¬P[1,2] resolves into W[1,1]
2. ¬W[1,1] ∨ ¬P[1,2] with W[1,1] resolves into ¬P[1,2]
3. ¬B[1,2] ∨ P[1,2] with ¬P[1,2] resolves into ¬B[1,2]
4. B[1,2] ∨ ¬P[1,2] with ¬B[1,2] resolves into ¬P[1,2]
Resolution only reproduces ¬P[1,2] and never yields the empty clause, and no
further new clauses can be generated. Therefore KB ∪ {¬P[1,2]} is satisfiable:
the given knowledge base does not entail P[1,2].
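Saturating the Wumpus clauses with the same kind of refutation loop confirms the outcome. The encoding is illustrative: "W" stands for W[1,1], "P" for P[1,2], "B" for B[1,2], with rules 3 and 4 already in clausal form:

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """Resolvents of c1 and c2, one complementary pair at a time."""
    return [frozenset((c1 - {l}) | (c2 - {negate(l)}))
            for l in c1 if negate(l) in c2]

def refute(clauses):
    """Return True iff the empty clause is derivable from the clause set."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a != b:
                    for r in resolvents(a, b):
                        if not r:
                            return True
                        new.add(r)
        if new <= clauses:
            return False
        clauses |= new

KB = [frozenset({"W", "P"}), frozenset({"~W", "~P"}),
      frozenset({"~B", "P"}), frozenset({"B", "~P"})]
print(refute(KB + [frozenset({"~P"})]))  # prints False
```

The loop terminates (there are only finitely many ground clauses over these literals) without the empty clause, so the KB together with ¬P[1,2] is satisfiable and P[1,2] is not entailed.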
Example 2:
A grammar for conjunctive normal form
Conjunctive normal form for first-order logic:
As in the propositional case, first-order resolution requires that sentences be in
conjunctive normal form (CNF)—that is, a conjunction of clauses, where each
clause is a disjunction of literals.
Literals can contain variables, which are assumed to be universally quantified.
For example, the sentence
• ∀ x American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) ⇒ Criminal(x)
becomes, in CNF,
¬American(x) ∨ ¬Weapon(y) ∨ ¬Sells(x, y, z) ∨ ¬Hostile(z) ∨ Criminal(x)
Every sentence of first-order logic can be converted into an inferentially
equivalent CNF sentence. The procedure for conversion to CNF is similar to the
propositional case, The principal difference arises from the need to eliminate
existential quantifiers.
We illustrate the procedure by translating the sentence
“Everyone who loves all animals is loved by someone,”
or
∀ x [∀ y Animal(y) ⇒ Loves(x, y)] ⇒ [∃ y Loves(y, x)] .
Steps
• Eliminate implications: ∀ x [¬∀ y (¬Animal(y) ∨ Loves(x, y))] ∨ [∃ y
Loves(y, x)] .
• Move ¬ inwards: In addition to the usual rules for negated connectives,
we need rules for negated quantifiers. Thus, we have
• ¬∀ x p becomes ∃ x ¬p
• ¬∃ x p becomes ∀ x ¬p .
• Our sentence goes through the following transformations:
• ∀ x [∃ y ¬(¬Animal(y) ∨ Loves(x, y))] ∨ [∃ y Loves(y, x)] .
• ∀ x [∃ y (¬¬Animal(y) ∧ ¬Loves(x, y))] ∨ [∃ y Loves(y, x)] .
• ∀ x [∃ y (Animal(y) ∧ ¬Loves(x, y))] ∨ [∃ y Loves(y, x)] .
• Standardize variables: For sentences like (∃ x P(x))∨(∃ x Q(x)) which use
the same variable name twice, change the name of one of the variables.
This avoids confusion later when we drop the quantifiers. Thus, we have
• ∀ x [∃ y (Animal(y) ∧ ¬Loves(x, y))] ∨ [∃ z Loves(z, x)] .
• Skolemize: Skolemization is the process of removing existential
quantifiers. In the simplest case, translate ∃ x P(x) into P(A), where A is
a new constant.
• Applying that rule naively here gives
∀ x [Animal(A) ∧ ¬Loves(x, A)] ∨ Loves(B, x) ,
which is wrong: the animal that x fails to love may differ for each x.
Because the existentials occur inside the scope of ∀ x, the Skolem terms
must be functions of x:
∀ x [Animal(F(x)) ∧ ¬Loves(x, F(x))] ∨ Loves(G(x), x) . Here F
and G are Skolem functions.
• Drop universal quantifiers: At this point, all remaining variables must be
universally quantified. Moreover, the sentence is equivalent to one in
which all the universal quantifiers have been moved to the left. We can
therefore drop the universal quantifiers:
• [Animal(F(x)) ∧ ¬Loves(x, F(x))] ∨ Loves(G(x), x) .
• Distribute ∨ over ∧:
[Animal(F(x)) ∨ Loves(G(x), x)] ∧ [¬Loves(x, F(x)) ∨ Loves(G(x), x)] .
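The final "distribute ∨ over ∧" step can be sketched on a small propositional-style AST, with ("or"/"and", left, right) tuples and leaves as strings (an illustrative encoding):

```python
def distribute(f):
    """Push ∨ inside ∧ so the result is a conjunction of disjunctions."""
    if isinstance(f, str):
        return f
    op, a, b = f
    a, b = distribute(a), distribute(b)
    if op == "and":
        return ("and", a, b)
    # op == "or": if either side is a conjunction, distribute over it
    if isinstance(a, tuple) and a[0] == "and":
        return ("and", distribute(("or", a[1], b)), distribute(("or", a[2], b)))
    if isinstance(b, tuple) and b[0] == "and":
        return ("and", distribute(("or", a, b[1])), distribute(("or", a, b[2])))
    return ("or", a, b)

# [Animal(F(x)) ∧ ¬Loves(x,F(x))] ∨ Loves(G(x),x)  becomes
# [Animal(F(x)) ∨ Loves(G(x),x)] ∧ [¬Loves(x,F(x)) ∨ Loves(G(x),x)]
f = ("or", ("and", "Animal(F(x))", "~Loves(x,F(x))"), "Loves(G(x),x)")
print(distribute(f))
```

The result is the same conjunction of two clauses derived above, ready to be split into separate clauses for resolution.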
The resolution inference rule
• Two clauses, which are assumed to be standardized apart so that they
share no variables, can be resolved if they contain complementary
literals.
• Propositional literals are complementary if one is the negation of the
other;
• first-order literals are complementary if one unifies with the negation of
the other.
• Thus we have the binary resolution rule: from clauses
ℓ1 ∨ ··· ∨ ℓk and m1 ∨ ··· ∨ mn,
where UNIFY(ℓi, ¬mj) = θ, infer
SUBST(θ, ℓ1 ∨ ··· ∨ ℓi−1 ∨ ℓi+1 ∨ ··· ∨ ℓk ∨ m1 ∨ ··· ∨ mj−1 ∨ mj+1 ∨ ··· ∨ mn) .
Example: resolving [Animal(F(x)) ∨ Loves(G(x), x)] with
[¬Loves(u, v) ∨ ¬Kills(u, v)] by unifying Loves(G(x), x) with Loves(u, v)
(θ = {u/G(x), v/x}) yields the resolvent [Animal(F(x)) ∨ ¬Kills(G(x), x)].
Another Example :
Suppose Curiosity did not kill Tuna. We know that either Jack or
Curiosity did; thus Jack must have. Now, Tuna is a cat and cats are
animals, so Tuna is an animal. Because anyone who kills an animal is
loved by no one, we know that no one loves Jack. On the other hand,
Jack loves all animals, so someone loves him; so we have a
contradiction. Therefore, Curiosity killed the cat.
Summary
1. Forward chaining starts with known facts and moves forward to reach
conclusions,
2. Backward chaining starts with the goal and moves backward to verify if
the goal can be satisfied, and
3. Resolution is an inference rule used to derive new clauses by combining
existing ones.
These techniques are essential for reasoning and inference in First-Order Logic
systems.
Note: Deriving the empty clause shows that the assumed negation is
inconsistent with the knowledge base, so the original statement is entailed.
Completeness of resolution
Resolution is a method in logic that can prove whether a set of statements is
unsatisfiable. If the statements are unsatisfiable (i.e., there’s no way they can all
be true at once), resolution will eventually find a contradiction.
Unsatisfiable set of statements: Means the statements can't all be true
together.
Contradiction: A clear proof that the statements conflict with each other.
Key Idea
If a set of statements is unsatisfiable, resolution can always derive a contradiction,
proving unsatisfiability. This doesn’t mean resolution finds all logical
consequences—it’s focused on checking contradictions.
Steps in Proving Completeness
1. Transforming to Clausal Form:
Any logical statement can be converted into a standard form called
Conjunctive Normal Form (CNF). This is the foundation for using
resolution.
2. Using Herbrand's Theorem:
Herbrand's theorem says if the set of statements is unsatisfiable, there’s a
specific subset of ground instances (statements without variables) that’s
also unsatisfiable.
3. Applying Ground Resolution:
For these ground instances, propositional resolution (which works with
statements without variables) can find the contradiction.
4. Lifting to First-Order Logic:
A "lifting lemma" proves that if there’s a resolution proof for the ground
instances, there’s also one for the original statements with variables. This
ensures resolution works for first-order logic, not just simple ground
statements.
The structure of a completeness proof for resolution is as follows:
1. Any set of sentences S is representable in clausal form.
2. Assume S is unsatisfiable, and in clausal form.
3. Herbrand’s theorem: some finite set S′ of ground instances is unsatisfiable.
4. Ground resolution theorem: resolution can find a contradiction in S′.
5. Lifting lemma: resolution can find a contradiction in S itself.
What Are Ground Terms and Herbrand's Universe?
Ground terms: Statements with no variables, created by substituting
constants or functions.
Herbrand Universe: A collection of all possible ground terms that can
be built from the constants and functions in the given statements.
Saturation: Generating all possible combinations of ground terms in the
statements.
Herbrand's theorem ensures we only need to check a finite subset of these terms
to find a contradiction.
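Building the Herbrand universe level by level can be sketched as follows (the constant and function names are illustrative):

```python
from itertools import product

def herbrand_levels(constants, functions, depth):
    """Ground terms buildable from constants and functions up to depth.

    functions maps each function symbol to its arity.
    """
    terms = set(constants)
    for _ in range(depth):
        new = set(terms)
        for f, arity in functions.items():
            # build f(t1,...,tn) for every combination of existing terms
            for args in product(sorted(terms), repeat=arity):
                new.add(f + "(" + ",".join(args) + ")")
        terms = new
    return terms

print(sorted(herbrand_levels({"A"}, {"F": 1}, 2)))
# prints ['A', 'F(A)', 'F(F(A))']
```

With even one function symbol the full universe is infinite (A, F(A), F(F(A)), ...), which is why Herbrand's theorem matters: only a finite subset ever needs to be examined.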
Why is the Lifting Lemma Important?
The lifting lemma connects proofs for ground terms to proofs for first-order logic. It "lifts"
results from simpler cases (propositional logic) to more general cases (with variables). This
step is essential to show resolution's power in first-order logic.
The Conclusion
If a set of statements is unsatisfiable:
Resolution finds a contradiction using a finite number of steps.
This proof works for both simple ground statements and complex first-
order logic.
This makes resolution a powerful tool in automated theorem proving!
The Lifting Lemma Explained
The lifting lemma is a principle that allows us to "lift" a resolution proof from
specific ground instances (statements without variables) to general first-order
logic (statements with variables). Here's how it works:
C₁ and C₂: Two clauses that do not share variables.
C′₁ and C′₂: Ground instances of C₁ and C₂ (created by substituting
variables with constants or terms).
C′: A resolvent (a result of applying the resolution rule) of C′₁ and C′₂.
The lemma states:
There exists a clause C such that:
1. C is a resolvent of C₁ and C₂ (it works at the variable level).
2. C′ is a ground instance of C.
In simpler terms, if resolution works for specific ground instances, we can
always find a corresponding proof for the original first-order clauses.
Example
Let’s illustrate with an example:
1. Original clauses with variables:
o C1=¬P(x,F(x,A))∨¬Q(x,A)∨R(x,B)
o C2=¬N(G(y),z)∨P(H(y),z)
2. Ground instances (after substituting variables with specific terms):
o C′1 = ¬P(H(B), F(H(B), A)) ∨ ¬Q(H(B), A) ∨ R(H(B), B)
o C′2 = ¬N(G(B), F(H(B), A)) ∨ P(H(B), F(H(B), A))
3. Resolvent of the ground instances:
o C′=¬N(G(B),F(H(B),A))∨¬Q(H(B),A)∨R(H(B),B)
4. Lifted clause (with variables):
o C=¬N(G(y),F(H(y),A))∨¬Q(H(y),A)∨R(H(y),B)
Here, C′ is a ground instance of C, showing how the lifting lemma bridges
ground-level proofs to general first-order logic.
This lemma is crucial because it ensures that resolution proofs for specific cases
(ground terms) can be generalized to more complex first-order logic, making the
method powerful and versatile.
Handling Equality in Inference Systems
So far, inference methods don't naturally handle statements like x=y. To deal
with equality, we can take one of three approaches.
1. Axiomatizing Equality
We write rules (axioms) in the knowledge base that define how equality works.
These rules must express:
Reflexivity: x=x
Symmetry: x=y⇒y=x
Transitivity: x=y∧y=z⇒x=z
Additionally, we add rules to allow substitution of equal terms in predicates and
functions. For example:
x=y⇒(P(x)⇔P(y)) (for predicates P)
w=y∧x=z⇒F(w,x)=F(y,z)(for functions F)
Using these axioms, standard inference methods like resolution can handle
equality reasoning (e.g., solving equations). However, this approach can
generate many unnecessary conclusions, making it inefficient.
2. Demodulation: Adding Inference Rules
Instead of axioms, we can add specific inference rules like demodulation to
handle equality.
How it works:
If x=y (a unit clause) and a clause α contains x, we replace x with y in α.
Demodulation simplifies expressions in one direction (e.g., x+0=x allows
x+0 to simplify to x, but not vice versa).
Example:
Given:
Father(Father(x))=PaternalGrandfather(x)
Birthdate(Father(Father(Bella)), 1926)
We use demodulation to derive:
Birthdate(PaternalGrandfather(Bella), 1926)
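A single demodulation step for this example can be sketched with a left-to-right rewrite. String matching stands in for real term unification here, and the pattern only handles an argument without nested parentheses (enough for this case):

```python
import re

def demodulate(sentence):
    """Rewrite Father(Father(t)) -> PaternalGrandfather(t), left to right."""
    return re.sub(r"Father\(Father\(([^()]*)\)\)",
                  r"PaternalGrandfather(\1)", sentence)

s = "Birthdate(Father(Father(Bella)), 1926)"
print(demodulate(s))  # prints Birthdate(PaternalGrandfather(Bella), 1926)
```

The one-directional nature of demodulation shows up directly: the rule simplifies toward the grandfather form but never expands it back.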
3. Paramodulation
A more general rule, paramodulation, extends demodulation to handle cases
where equalities are part of more complex clauses.
How it works:
If x=y appears as part of a clause and a term z in another clause unifies
with x, substitute y for x in z.
Formal Rule:
For any terms x, y, and z:
o If z appears in a clause m and x unifies with z,
o replace x with y in m, while preserving other parts of the clause.
Summary
Axiomatization defines equality with explicit rules but can be inefficient.
Demodulation simplifies terms by replacing variables with their equal
counterparts in one direction.
Paramodulation generalizes equality handling for complex clauses.
These methods provide efficient ways to incorporate equality reasoning into
inference systems.
Resolution Strategies
Resolution inference is guaranteed to find a proof if one exists, but some
strategies can make the process more efficient. Below are key strategies and their
applications.
Unit Preference
Focuses on resolving clauses where one is a unit clause (a single literal).
Resolving a unit clause (e.g., P) with a longer clause (e.g., ¬P∨¬Q∨R)
results in a shorter clause (¬Q∨R).
This strategy, first applied in 1964, dramatically improved the efficiency
of propositional inference.
Unit Resolution: A restricted form of this strategy, requiring every
resolution step to involve a unit clause.
o Complete for Horn Clauses: Proofs resemble forward chaining.
o Incomplete in General: Not suitable for all forms of knowledge
bases.
Example: The OTTER theorem prover employs a best-first search with
a heuristic that assigns “weights” to clauses, favoring shorter ones (e.g.,
unit clauses).
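The preference itself can be sketched as an ordering over candidate resolution pairs. The clause encoding below is illustrative, not OTTER's:

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def can_resolve(c1, c2):
    return any(negate(l) in c2 for l in c1)

def ordered_pairs(clauses):
    """Candidate resolution pairs, those involving the shortest clause first.

    Unit clauses have length 1, so pairs containing them sort to the front.
    """
    pairs = [(a, b) for i, a in enumerate(clauses)
             for b in clauses[i + 1:] if can_resolve(a, b)]
    return sorted(pairs, key=lambda p: min(len(p[0]), len(p[1])))

clauses = [frozenset({"P"}), frozenset({"~P", "~Q", "R"}), frozenset({"Q", "R"})]
best = ordered_pairs(clauses)[0]
print(sorted(len(c) for c in best))  # the unit clause P is chosen first
```

Resolving the unit P against ¬P∨¬Q∨R first yields the shorter clause ¬Q∨R, which is exactly the behavior the strategy rewards.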
Set of Support
Restricts resolutions to involve at least one clause from a predefined set
of support.
The set of support typically includes the negated query or clauses likely
to lead to a proof.
Resolutions add their results to this set, significantly reducing the search
space if the set is small.
This strategy is complete if the remaining sentences in the knowledge
base are satisfiable.
Advantages:
o Generates goal-directed proof trees, which are easier for humans
to interpret.
Input Resolution
In this strategy, resolutions always involve one of the original input
clauses (from the knowledge base or the query).
Example: Modus Ponens in Horn knowledge bases is an input resolution
strategy, as it combines an implication from the KB with other sentences.
Linear Resolution: A generalization where P and Q can be
resolved if P is either an input clause or an ancestor of Q in the
proof tree.
o Linear resolution is complete, and particularly useful in
structured proofs.
Subsumption
Eliminates redundant sentences in the knowledge base that are
subsumed by more general sentences.
Example: If P(x) is in the KB, there’s no need to add P(A) or P(A)∨Q(B).
Benefits:
o Reduces the size of the knowledge base.
o Keeps the search space manageable.
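For ground clauses, subsumption reduces to a subset test on literals (the general first-order case additionally needs a matching substitution, as in the P(x) versus P(A) example). A minimal sketch:

```python
def subsumes(a, b):
    """Ground case: clause a subsumes b when a's literals are a subset."""
    return a <= b

def simplify(clauses):
    """Drop every clause subsumed by another, more general clause."""
    return [c for c in clauses
            if not any(d != c and subsumes(d, c) for d in clauses)]

kb = [frozenset({"P(A)"}), frozenset({"P(A)", "Q(B)"}), frozenset({"R(C)"})]
print(simplify(kb))  # P(A) ∨ Q(B) is subsumed by P(A) and removed
```

Pruning subsumed clauses as they are generated is what keeps the clause set, and hence the search space, from growing needlessly.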
Applications of Resolution Theorem Provers
Resolution theorem provers are widely used in the synthesis and verification of
both hardware and software systems.
1. Hardware Design and Verification
Axioms describe the interactions between signals and circuit components.
Example: Logical reasoners have verified entire CPUs, including timing
properties.
AURA Theorem Prover: Used to design highly compact circuits.
2. Software Verification and Synthesis
Similar to reasoning about actions, axioms define the preconditions and
effects of program statements.
Algorithm Synthesis:
o Deductive synthesis constructs programs to meet specific criteria.
o Although fully automated synthesis is not yet practical for general-
purpose programming, hand-guided synthesis has successfully
created sophisticated algorithms.
Verification Tools: Systems like the SPIN model checker are used to
verify programs such as:
o Remote spacecraft control systems.
o Algorithms like RSA encryption and Boyer–Moore string
matching.
Summary
Resolution strategies like unit preference, set of support, input resolution, and
subsumption improve proof efficiency by focusing on relevance, reducing
redundancy, and constraining the search space. Applications in hardware and
software demonstrate their importance in real-world problem-solving.