EBL 2
EXPLANATION BASED SPECIALIZATION:
In 1987, Minton and Carbonell introduced Explanation-Based Specialization (EBS) as a learning
method, implemented in the PRODIGY system, to address limitations of
Explanation-Based Generalization (EBG). PRODIGY, a general problem solver equipped with learning
modules, overcomes these shortcomings by handling scenarios that involve several goal concepts,
constructing a detailed explanation for each goal concept.
PRODIGY's architecture incorporates a unified control structure in which the problem solver
searches over both inference rules and operators. The system comprises an Explanation-Based
Learning (EBL) module, a derivational analogy module for recalling solutions to similar problems, and an
experimentation module for refining incomplete or incorrect domain theories. An abstraction
generator and an abstraction-level model divide the domain theory into abstraction levels,
giving the system a multi-level planning capability.
The learning process in PRODIGY involves four goal concepts: success, failure, sole
choice (only one alternative applies), and goal interference (restriction among goals). Upon receiving a goal and an example, PRODIGY decomposes
the goal by backward chaining, analyzes the problem-solving trace, and explains why the instance satisfies
the goal concept. Control rules are then learned according to the scenario at hand: success,
failure, sole choice, or goal interference.
Explanation in PRODIGY is a detailed proof obtained by searching the knowledge base
with the training example. The goal concepts are then specialized by working bottom-up through the
proof tree, yielding control rules that can guide subsequent
searches.
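PRODIGY's learned control rules are written in its own rule language; the following is only a minimal Python sketch of the idea, with a hypothetical rejection rule and state representation, showing how a rule derived from an explained failure could prune an operator during search.

```python
# Illustrative sketch (not PRODIGY's actual syntax): a search-control rule
# learned from a failure explanation, used to prune candidate operators.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ControlRule:
    kind: str                          # e.g. "reject", "select", "prefer"
    target: str                        # operator (or goal/binding) the rule constrains
    applies: Callable[[Dict], bool]    # learned left-hand-side condition

# Hypothetical rule learned from an explained failure: reject UNSTACK
# whenever the block to be moved is already clear (the subgoal would fail).
reject_unstack = ControlRule(
    kind="reject",
    target="UNSTACK",
    applies=lambda state: state.get("clear_target", False),
)

def filter_operators(candidates: List[str], state: Dict,
                     rules: List[ControlRule]) -> List[str]:
    """Drop every candidate operator that some rejection rule fires against."""
    rejected = {r.target for r in rules
                if r.kind == "reject" and r.applies(state)}
    return [op for op in candidates if op not in rejected]

if __name__ == "__main__":
    state = {"clear_target": True}
    print(filter_operators(["STACK", "UNSTACK"], state, [reject_unstack]))
    # -> ['STACK']  (UNSTACK pruned by the learned control rule)
```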
PRODIGY's knowledge base includes domain-level axioms, which describe the rules of the domain, and
architecture-level axioms, which describe the inference rules used in problem solving. While PRODIGY does not
explicitly emphasize generalization, its domain rules, especially those concerned with
problem solving, already embody a degree of generalization.
In conclusion, Explanation-Based Specialization (EBS) in PRODIGY offers a comprehensive
learning method that overcomes limitations of Explanation-Based Generalization. From
detailed explanations of the goal concepts, specialized control rules are derived, enhancing the
problem solver's ability to handle varied scenarios and to refine its domain theories.
LOGIC PROGRAM FOR EXPLANATION BASED GENERALIZATION:
In the realm of Explanation-Based Generalization (EBG), establishing an explanation structure
for a training example is crucial and closely resembles a theorem-proving problem. EBG extends
resolution over Horn clauses, treating generalization as an extension of standard theorem
proving. Operationally, EBG relies on logic programming languages such as Prolog, which are
based on the resolution principle: new clauses (resolvents) are deduced from pairs of existing
clauses. The unification algorithm is fundamental to realizing EBG, since it drives both the
creation of explanation structures and the subsequent generalization.
In Turbo Prolog, which is likewise based on the resolution principle, unification covers
matching free variables, handling constants, and unifying compound terms. The first step of
EBG is to generate an explanation structure for the training example by searching and matching
against the domain theory, represented as Prolog rules and facts. The unification algorithm is pivotal
in this process, serving as the foundation both for realizing EBG and for the subsequent
generalization.
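As an illustration only (the implementation discussed here is in Turbo Prolog), the following Python sketch shows the unification cases just listed: free variables bind to terms, identical constants match, distinct constants fail, and compound terms are unified argument by argument. The term representation and the uppercase-variable convention are assumptions of the sketch.

```python
# Minimal unification sketch: variables are uppercase strings, constants are
# lowercase strings, compound terms are tuples (functor, arg1, arg2, ...).

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()      # convention: variables start uppercase

def walk(t, subst):
    while is_var(t) and t in subst:                     # follow existing bindings
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution unifying a and b, or None if they do not unify.
    (Occurs check omitted for brevity.)"""
    subst = {} if subst is None else dict(subst)
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        subst[a] = b
        return subst
    if is_var(b):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):                          # compound terms: unify argument-wise
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None                                         # distinct constants fail

if __name__ == "__main__":
    # owner(X, book) unified with owner(john, Y)
    print(unify(("owner", "X", "book"), ("owner", "john", "Y")))
    # -> {'X': 'john', 'Y': 'book'}
```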
Implementing EBG in Prolog involves executing the unification algorithm explicitly, with
predicates defined to unify individual terms and lists of terms. The domain theory,
stored in Turbo Prolog's internal database, is easily retrievable when creating explanation
structures. Explanation and generalization in EBG proceed in two steps, with the explanation
established first and then transformed into a generalization. Because the two steps can be
interleaved, the system can prove the training example against the goal concept while constructing
the generalized explanation structure and the instance-specific explanation simultaneously.
Goal-concept regression involves substituting variables for constants and handling the
unification of the new terms this creates. Although regression is defined predicate by predicate,
challenges remain in making it more general and in handling unbound variables effectively. Even
when the two steps of EBG are executed independently, the proof tree and each of its paths are
preserved, underlining how tightly explanation and generalization are intertwined in this learning
paradigm.
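The substitution of constants by variables can be pictured with a small Python sketch (the data and function names are illustrative, not taken from the systems above); it variabilizes a term consistently, which is the first part of the regression step described here.

```python
# Sketch of the "constants -> variables" step of goal regression: every
# distinct constant in the instance-specific explanation is replaced by a
# fresh variable, consistently across the whole structure. (Full goal
# regression also regresses conditions back through each rule; only the
# variabilization step is shown here.)

from itertools import count

def variabilize(term, mapping=None, fresh=None):
    """Replace constants with variables, reusing the same variable for
    repeated occurrences of the same constant."""
    mapping = {} if mapping is None else mapping
    fresh = count(1) if fresh is None else fresh
    if isinstance(term, tuple):                     # compound term: keep the functor
        functor, *args = term
        return (functor, *[variabilize(a, mapping, fresh) for a in args])
    if term not in mapping:                         # constant seen for the first time
        mapping[term] = f"X{next(fresh)}"
    return mapping[term]

if __name__ == "__main__":
    explanation_leaf = ("gives", "john", "mary", ("book", "prolog_text"))
    print(variabilize(explanation_leaf))
    # -> ('gives', 'X1', 'X2', ('book', 'X3'))
```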
SOAR BASED ON MEMORY CHUNKS:
**SOAR System and Memory Chunks:**
In the late 1950s, the concept of memory chunks emerged from neural simulation, in which
signals were used to mark experiences. By the early 1980s, Newell and Rosenbloom proposed that
memory chunks could enhance system performance by serving as a basis for simulating
human behavior, specifically in problem-solving tasks.
**SOAR System Development:**
In 1986, J.E. Laird, Paul S. Rosenbloom, and A. Newell introduced the SOAR system, focusing
on learning general control knowledge under external expert guidance. The system's
architecture includes production memory and a decision process, utilizing production rules for
control decisions.
**Processing Configuration:**
SOAR's processing configuration involves production memory containing rules for
decision-making. The first step includes detailed refinement of rules in working memory to
determine priorities and context changes. The second step involves deciding the segment and
goal for revision in the context stack.
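The two-step cycle can be pictured with the following schematic Python sketch; it is not Soar's actual architecture or data model (working memory is a plain dictionary, preferences are integers), only an illustration of an elaboration phase that asserts preferences followed by a decision phase that either picks an operator or reports an impasse.

```python
# Schematic two-phase cycle: elaboration collects preferences, decision
# selects the change to make (or signals an impasse, which leads to a sub-goal).

from typing import Callable, Dict, List, Tuple

Production = Callable[[Dict], List[Tuple[str, int]]]   # state -> [(operator, preference)]

def elaboration_phase(working_memory: Dict, productions: List[Production]):
    """Fire every matching production; collect the preferences they assert."""
    preferences: Dict[str, int] = {}
    for rule in productions:
        for operator, strength in rule(working_memory):
            preferences[operator] = preferences.get(operator, 0) + strength
    return preferences

def decision_phase(preferences: Dict[str, int]):
    """Pick the best-supported operator, or signal an impasse (sub-goal)."""
    if not preferences:
        return None          # impasse: no candidate -> create a sub-goal
    return max(preferences, key=preferences.get)

if __name__ == "__main__":
    wm = {"goal": "stack-blocks", "holding": "A"}
    productions = [
        lambda wm: [("putdown", 2)] if wm.get("holding") else [],
        lambda wm: [("pickup", 1)] if not wm.get("holding") else [],
    ]
    prefs = elaboration_phase(wm, productions)
    print(decision_phase(prefs))   # -> 'putdown'
```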
**Expert Guidance and Learning Mechanism:**
When the system encounters problem-solving difficulties, it seeks guidance from experts. Expert
supervision can be direct, involving spreading operators and current status for expert
evaluation, or simple and intuitionistic. In the latter, the problem is decomposed into inner
presentations based on a tree's grammatical structure, seeking advice from experts similar to
initial problems.
**Memory Chunk Learning:**
Memory chunks, crucial for learning in SOAR, use working-memory-elements to collect
conditions. When a sub-goal is created, its current statuses are stored in working memory
elements, and the solution operators are deleted after the sub-goal is solved, forming a
generative production rule known as a memory chunk.
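The condition-collection idea can be sketched in Python as follows; the triple representation of working-memory elements and the stand-in sub-goal solver are assumptions of the sketch, not Soar's real mechanism.

```python
# Sketch of chunk formation: the working-memory elements recorded when the
# sub-goal was created become the chunk's conditions, and the sub-goal's
# result becomes its action, so the same situation can later be handled
# without re-solving the sub-goal.

from dataclasses import dataclass
from typing import FrozenSet, Tuple

WME = Tuple[str, str, str]       # (object, attribute, value) working-memory element

@dataclass(frozen=True)
class Chunk:
    conditions: FrozenSet[WME]   # pre-sub-goal elements that were examined
    action: WME                  # result the sub-goal returned

def solve_subgoal(initial_wmes: FrozenSet[WME]) -> WME:
    """Stand-in for an expensive sub-goal search; its internals do not matter here."""
    return ("decision", "operator", "putdown")

def build_chunk(initial_wmes: FrozenSet[WME]) -> Chunk:
    result = solve_subgoal(initial_wmes)       # internal search structures are discarded
    return Chunk(conditions=initial_wmes, action=result)

def apply_chunks(wmes: FrozenSet[WME], chunks):
    """If a chunk's conditions hold, reuse its result instead of sub-goaling."""
    for chunk in chunks:
        if chunk.conditions <= wmes:
            return chunk.action
    return None

if __name__ == "__main__":
    situation = frozenset({("robot", "holding", "A"), ("goal", "isa", "stack")})
    learned = [build_chunk(situation)]
    print(apply_chunks(situation, learned))    # -> ('decision', 'operator', 'putdown')
```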
**Application and Learning Methods:**
Memory chunks are applied to similar sub-goals of initial problems, showcasing the system's
ability to transfer learned knowledge. The formation of memory chunks relies on the explanation
of sub-goals. SOAR's learning strategy involves converting expert instructions or simple
problems into a machine-executable format, incorporating experiences gained from solving
various problems through analogy learning. In essence, SOAR's learning approach is a
comprehensive combination of several learning methods.
OPERATIONALIZATION:
**Operationalization Process:**
Operationalization translates non-operational expressions into operational ones,
turning advice or abstract concepts into terms the agent can use directly. It is a crucial aspect of
implementing Explanation-Based Generalization (EBG), where the operationality criterion is one of
the required inputs.
**Operationality Criterion in EBG:**
The operationality criterion in EBG, as highlighted by Mitchell, Keller, and Kedar-Cabelli,
emphasizes representing concept descriptions as predicates of training examples. Initially, the
criterion was static, leading to the enumeration of operable predicates. However, there's a shift
towards defining the operationality criterion dynamically, introducing theorem prover
mechanisms for flexibility.
**Dynamic Definition Challenges:**
Implementing dynamic operationality criteria faces challenges in systems like Turbo Prolog,
which lacks support for meta-level programming. The introduction of inference mechanisms and
meta-rules requires careful consideration, pushing towards temporarily defining operationality
criteria as static.
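The difference between a static and a dynamic criterion can be shown with a short Python sketch; the predicate names and the cost model are purely hypothetical.

```python
# Illustrative contrast between a static and a dynamic operationality criterion.

# Static criterion: operational predicates are simply enumerated.
OPERATIONAL_PREDICATES = {"color", "shape", "on", "clear"}

def operational_static(predicate: str) -> bool:
    return predicate in OPERATIONAL_PREDICATES

# Dynamic criterion: a predicate counts as operational only if the
# performance system can currently evaluate it cheaply enough.
EVALUATION_COST = {"color": 1, "shape": 1, "on": 2, "safe_to_stack": 50}

def operational_dynamic(predicate: str, cost_budget: int = 10) -> bool:
    return EVALUATION_COST.get(predicate, float("inf")) <= cost_budget

if __name__ == "__main__":
    print(operational_static("safe_to_stack"))        # False: not in the fixed list
    print(operational_dynamic("safe_to_stack"))       # False: too costly right now
    print(operational_dynamic("safe_to_stack", 100))  # True: a larger budget allows it
```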
**Keller's Definition:**
Keller defines operationality criterion based on a concept description, a performance system
utilizing it for improvement, and performance objectives. The concept description is deemed
operable if it is usable and effective, allowing the execution system to benefit and improve
performance according to specified objectives.
**Lack of Precise Definitions:**
The lack of a precise definition of operationality criterion across systems is evident. Many
systems assumed it to be independent and static, but advancements, as seen in EBG,
acknowledge the need for dynamism in operationality criteria.
**EBG Practical Testing:**
EBG, proposed by Mitchell et al., serves as an intuitionist approach to test the practicality of
explanations explicitly through operationality criteria. However, criticisms from DeJong and
Mooney suggest that the criterion may not cover a sufficient range of predicates, only allowing
direct assessment of new knowledge without evaluating its usefulness.
**Dynamic Operationality in META-LEX and PRODIGY:**
META-LEX and PRODIGY contribute to the evolution of operationality criteria. META-LEX
considers the learning environment, making the cost of testing a property dynamic based on the
system's knowledge. PRODIGY further improves the definition of operationality criteria,
emphasizing the dynamic nature of assessing knowledge usefulness.
EBL WITH IMPERFECT DOMAIN THEORY:
**Imperfect Domain Theory in EBL:**
In Explanation-Based Learning (EBL), a significant challenge is the requirement for a complete
and accurate domain theory. However, in real-world applications, achieving such perfection is
often difficult. Imperfections in domain theories can manifest as incompleteness, incorrectness,
or being too complex for available resources.
**Challenges of Imperfection:**
1. *Incomplete Domain Theory:* Lack of rules and knowledge in the domain theory results in an
inability to provide explanations for training examples.
2. *Incorrect Domain Theory:* Unreasonable rules in the domain theory lead to incorrect
explanations.
3. *Intractable Domain Theory:* Complexity may render the domain theory too intricate for the
available resources, hindering the creation of an explanation tree.
**Addressing Imperfections with Inverse Resolution and Deep Knowledge:**
To tackle the problem of imperfect domain theories, approaches based on inverse resolution and on
deep knowledge have been explored within EBL.
**Deep Knowledge-Based Approach:**
*Background:* The domain theory for fault diagnosis is often imperfect, prompting the need for
knowledge refinement. A learning model focusing on malfunction diagnosis in distillation
columns, based on deep knowledge, is proposed.
*Learning Model Construction* (a schematic sketch of this loop follows the list):
1. **Instance Presentation:** An instance is presented by the environment.
2. **Explanation of Instances:** EBL is employed to explain the instance. Success leads to
extension, while failure proceeds to hypothesis generation.
3. **Hypothesis Generation:** The system attempts to confirm that knowledge is missing. A
hypothesis is created to remedy the missing knowledge, connecting its goal end and data end accordingly.
4. **Hypothesis Confirmation:** The deep knowledge base is searched for relationships
between the goal end and the data end of the hypothesis. Success leads to extension; failure
returns to hypothesis generation.
5. **Extension:** The confirmed hypothesis is extended by turning constants into variables,
yielding more generalized concepts.
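A schematic Python sketch of this loop is given below; the rule representation, the predicate strings, and the way deep knowledge is consulted are all simplifying assumptions made for illustration, not the actual diagnosis system.

```python
# Explain the instance if possible; otherwise hypothesize a missing rule,
# try to confirm it against the deep knowledge base, then extend (generalize)
# the confirmed hypothesis.

def explain(instance, domain_rules):
    """Return an explanation if an existing rule covers the instance."""
    for premise, conclusion in domain_rules:
        if premise == instance["data_end"] and conclusion == instance["goal_end"]:
            return (premise, conclusion)
    return None

def generate_hypothesis(instance):
    """Propose a rule linking the data end directly to the goal end."""
    return (instance["data_end"], instance["goal_end"])

def confirm(hypothesis, deep_knowledge):
    """Accept the hypothesis only if deep knowledge relates its two ends."""
    return hypothesis in deep_knowledge

def extend(hypothesis):
    """Stand-in for variabilization: replace the known constant with a variable."""
    premise, conclusion = hypothesis
    return (premise.replace("column_3", "Column"),
            conclusion.replace("column_3", "Column"))

if __name__ == "__main__":
    instance = {"data_end": "high_pressure(column_3)",
                "goal_end": "flooding(column_3)"}
    domain_rules = []                                  # incomplete domain theory
    deep_knowledge = {("high_pressure(column_3)", "flooding(column_3)")}

    explanation = explain(instance, domain_rules)
    if explanation is None:                            # EBL failed: hypothesize
        hypothesis = generate_hypothesis(instance)
        if confirm(hypothesis, deep_knowledge):
            print(extend(hypothesis))
            # -> ('high_pressure(Column)', 'flooding(Column)')
```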
**Conclusion:**
The deep knowledge-based approach involves refining domain knowledge dynamically,
addressing imperfections through hypothesis generation, confirmation, and extension. This
method allows for adaptability in the face of incomplete or incorrect domain theories, enhancing
the learning process in EBL.