About the Book
One of the most essential properties of any intelligent entity is the ability to learn. "Explanation-based learning" is one recently developed approach to concept acquisition by computer. In this type of machine learning, a specific problem's solution is generalised into a form that can later be used to solve conceptually similar problems. But in the solution of any specific task, those aspects that in general can be manifested an arbitrary number of times will be represented by a "fixed" number of occurrences. Quite often this number must be generalised if the underlying concept is to be correctly acquired. A number of explanation-based generalisation algorithms have been developed. Unfortunately, most do not alter the structure of their explanation of the specific problem's solution; hence they do not incorporate any additional objects or inference rules into the concepts they learn. Instead, these algorithms generalise by converting constants in the observed example to variables with constraints. However, many important concepts, in order to be properly learned, require that the "structure" of explanations be generalised.
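The constants-to-variables step described above can be illustrated with a minimal sketch. This is not code from the book; the function and predicate names are invented for illustration, and real explanation-based generalisers also regress constraints through the proof rather than merely renaming constants:

```python
# Hypothetical sketch of the "constants to variables" step in standard
# explanation-based generalisation (identifiers are illustrative only).

def variablize(explanation):
    """Replace each distinct constant in a ground explanation with a
    fresh variable, preserving co-reference between occurrences."""
    mapping = {}          # constant -> variable name
    generalized = []
    for predicate, *args in explanation:
        new_args = []
        for arg in args:
            if arg not in mapping:
                mapping[arg] = f"?x{len(mapping)}"
            new_args.append(mapping[arg])
        generalized.append((predicate, *new_args))
    return generalized

# A ground explanation of stacking block A on block B:
proof = [("Clear", "A"), ("Clear", "B"), ("PickUp", "A"), ("PutOn", "A", "B")]
rule = variablize(proof)
# Both occurrences of "A" map to the same variable, so the learned rule
# applies to any pair of blocks -- but the *structure* (four proof steps)
# stays fixed, which is exactly the limitation the book addresses.
```

Note that the generalised rule still mentions exactly two objects and four steps; no renaming of constants can make it cover a three-block tower.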
Generalising structure can involve generalising such things as the number of objects involved in a concept or the number of times some action is performed. For example, concepts such as momentum and energy conservation apply to arbitrary numbers of physical objects, clearing the top of a desk can require an arbitrary number of object relocations, and setting a table can involve differing numbers of guests. Two theories of extending explanations during the generalisation process have been developed, and computer implementations have been created to test these approaches computationally. The PHYSICS 101 system utilises characteristics of mathematically-based problem solving to extend mathematical calculations in a psychologically plausible way, while the BAGGER system and its successor BAGGER2 implement domain-independent approaches to generalising explanation structures. This book describes all three of these systems, presents the details of their algorithms, and discusses several examples of learning by each. It also presents an empirical analysis of explanation-based learning.
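The desk-clearing example above can be sketched to show what number generalisation buys. This is a toy illustration, not the book's BAGGER representation (which uses situation calculus and sequential rules); the function names are invented:

```python
# Hypothetical sketch contrasting a fixed-structure rule with a
# number-generalised one (identifiers are illustrative only).

def clear_top_fixed(desk):
    """A rule learned from one example with exactly three objects:
    its structure hard-codes three removal steps, so it cannot
    correctly handle a desk holding any other number of objects."""
    return [("Remove", desk[0]), ("Remove", desk[1]), ("Remove", desk[2])]

def clear_top_general(desk):
    """A structure-generalised rule: the removal step is allowed to
    repeat an arbitrary number of times, so a desk with any number
    of objects can be cleared."""
    return [("Remove", obj) for obj in desk]

plan = clear_top_general(["lamp", "book", "mug", "cup"])
# The generalised rule yields one removal per object, however many
# objects the desk holds; the fixed rule always yields exactly three.
```

Converting constants to variables alone would only rename the three objects in the fixed rule; extending the explanation structure is what lets the repeated step apply an arbitrary number of times.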
These computer experiments demonstrate the value of generalising explanation structures in particular, and of explanation-based learning in general. They also demonstrate the advantages of learning by observing the intelligent behaviour of external agents. The book's conclusion discusses several open research issues in generalising the structure of explanations and related approaches to this problem. This research brings machine learning closer to its goal of being able to acquire all of the knowledge inherent in the solution to a specific problem.
Table of Contents:
Part 1: The need for generalising explanation structures; overview of this book - chapter summaries, relevance to research areas outside machine learning; explanation-based learning - a brief history, the standard method, additional research issues.
Part 2: Learning in mathematically-based domains: the PHYSICS 101 system - the learning model, terminology, other approaches to learning in mathematical domains; solving problems - initial knowledge of the system, schema-based problem solving, choosing the initial equation, transforming an expression into an acceptable form; building explanations - a sample problem, verifying a teacher's solution, explaining solutions, understanding obstacles, constructing the cancellation graph - algorithmic details; generalising solutions - the result of standard explanation-based learning, using the cancellation graph to guide generalisation, learning special-case schemata, performance analysis.
Part 3: A domain-independent approach: the BAGGER system - some sample learning episodes, situation calculus, sequential rules, representing sequential knowledge; generalising - the BAGGER generalisation algorithm, problem solving in BAGGER, simplifying the antecedents in sequential rules, two examples; extending BAGGER - algorithmic details and correctness proof, the circuit implementation domain revisited, learning from multiple examples, problem solving with rules acquired by BAGGER2, improving the efficiency of the rules BAGGER2 learns, learning about wagons, comparing BAGGER and BAGGER2.
Part 4: An empirical analysis of explanation-based learning: introduction; experimental methodology; experiments - comparison of the two training strategies, effect of increased problem complexity, operationality versus generality, time spent learning, clearing blocks, rule access strategies, estimating the performance of the non-learning system, empirical study of BAGGER2.
Part 5: Contributions; relation to other work - other explanation-based approaches, related work in similarity-based learning, related work in automatic programming; some open research issues - deciding when to learn, improving what is learned, extending what can be learned, additional issues.
Appendices: Additional PHYSICS 101 examples - overview, learning about energy conservation, learning about the sum of internal forces, using the new force law to learn about momentum, attempting to learn from a two-ball collision; additional BAGGER examples - overview, more tower-building rules, clearing an object, setting a table; BAGGER's initial inference rules - notation, rules; statistics from experiments - description, statistics.