Generalizing the Structure of Explanations in Explanation-based Learning
About the Book

One of the most essential properties of any intelligent entity is the ability to learn. "Explanation-based learning" is one recently developed approach to concept acquisition by computer. In this type of machine learning, a specific problem's solution is generalised into a form that can later be used to solve conceptually similar problems. But in the solution of any specific task, those aspects that in general can be manifested an arbitrary number of times will be represented by a "fixed" number of occurrences. Quite often this number must be generalised if the underlying concept is to be correctly acquired.

A number of explanation-based generalisation algorithms have been developed. Unfortunately, most do not alter the structure of their explanation of the specific problem's solution; hence they do not incorporate any additional objects or inference rules into the concepts they learn. Instead, these algorithms generalise by converting constants in the observed example to variables with constraints. However, many important concepts, in order to be properly learned, require that the "structure" of explanations be generalised. Generalising structure can involve generalising such things as the number of objects involved in a concept or the number of times some action is performed. For example, concepts such as momentum and energy conservation apply to arbitrary numbers of physical objects, clearing the top of a desk can require an arbitrary number of object relocations, and setting a table can involve differing numbers of guests.

Two theories of extending explanations during the generalisation process have been developed, and computer implementations have been created to test these approaches computationally. The PHYSICS 101 system utilises characteristics of mathematically-based problem solving to extend mathematical calculations in a psychologically plausible way, while the BAGGER system and its successor BAGGER2 implement domain-independent approaches to generalising explanation structures. This book describes all three of these systems, presents the details of their algorithms, and discusses several examples of learning by each. It also presents an empirical analysis of explanation-based learning. These computer experiments demonstrate the value of generalising explanation structures in particular, and of explanation-based learning in general. They also demonstrate the advantages of learning by observing the intelligent behaviour of external agents. The book's conclusion discusses several open research issues in generalising the structure of explanations and related approaches to this problem. This research brings machine learning closer to its goal of being able to acquire all of the knowledge inherent in the solution to a specific problem.
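To make the contrast concrete, the following minimal Python sketch (a hypothetical illustration, not code from the book or from the PHYSICS 101/BAGGER systems) shows how standard explanation-based generalisation variabilises constants while keeping the example's fixed two-step structure, whereas a structure-generalising rule is parameterised by the number of objects to be moved.

    # Minimal sketch, assuming a toy tuple representation of plan steps.
    # (Hypothetical illustration; not the book's actual algorithms.)

    specific_explanation = [
        ("move", "book-1", "desk", "shelf"),   # the observed example happens
        ("move", "cup-7", "desk", "shelf"),    # to clear exactly two objects
    ]

    def standard_ebl_generalise(explanation):
        """Standard EBL: turn constants into variables but keep the explanation's
        structure, so the learned rule still moves exactly two objects."""
        return [
            (action, f"?obj{i}", "?source", "?destination")
            for i, (action, _obj, _src, _dst) in enumerate(explanation, start=1)
        ]

    def structure_generalised_rule(n_objects):
        """Structure-generalising EBL (in the spirit of BAGGER): the learned rule
        is parameterised by the number of objects, so it applies to a desk
        holding any number of items."""
        return [
            ("move", f"?obj{i}", "?source", "?destination")
            for i in range(1, n_objects + 1)
        ]

    print(standard_ebl_generalise(specific_explanation))  # always two steps
    print(structure_generalised_rule(5))                  # five steps on demand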

Table of Contents:
Part 1: The need for generalising explanation structures; overview of this book - chapter summaries, relevance to research areas outside machine learning; explanation-based learning - a brief history, the standard method, additional research issues.
Part 2: Learning in mathematically-based domains: the PHYSICS 101 system - the learning model, terminology, other approaches to learning in mathematical domains; solving problems - initial knowledge of the system, schema-based problem solving, choosing the initial equation, transforming an expression into an acceptable form; building explanations - a sample problem, verifying a teacher's solution, explaining solutions, understanding obstacles, constructing the cancellation graph - algorithmic details; generalising solutions - the result of standard explanation-based learning, using the cancellation graph to guide generalisation, learning special-case schemata, performance analysis.
Part 3: A domain-independent approach: the BAGGER system - some sample learning episodes, situation calculus, sequential rules, representing sequential knowledge; generalising - the BAGGER generalisation algorithm, problem solving in BAGGER, simplifying the antecedents in sequential rules, two examples; extending BAGGER - algorithmic details and correctness proof, the circuit implementation domain revisited, learning from multiple examples, problem solving with rules acquired by BAGGER2, improving the efficiency of the rules BAGGER2 learns, learning about wagons, comparing BAGGER and BAGGER2.
Part 4: An empirical analysis of explanation-based learning: introduction; experimental methodology; experiments - comparison of the two training strategies, effect of increased problem complexity, operationality versus generality, time spent learning, clearing blocks, rule access strategies, estimating the performance of the non-learning system, empirical study of BAGGER2.
Part 5: Contributions; relation to other work - other explanation-based approaches, related work in similarity-based learning, related work in automatic programming; some open research issues - deciding when to learn, improving what is learned, extending what can be learned, additional issues.
Appendices: Additional PHYSICS 101 examples - overview, learning about energy conservation, learning about the sum of internal forces, using the new force law to learn about momentum, attempting to learn from a two-ball collision; additional BAGGER examples - overview, more tower-building rules, clearing an object, setting a table; BAGGER's initial inference rules - notation, rules; statistics from experiments - description, statistics.




Product Details
  • ISBN-13: 9780273088172
  • Publisher: Financial Times Prentice Hall
  • Publisher Imprint: Financial Times Prentice Hall
  • Height: 244 mm
  • Series Title: Research Notes in Artificial Intelligence
  • Width: 169 mm
  • ISBN-10: 0273088173
  • Publisher Date: 02/1990
  • Binding: Paperback
  • Language: English
  • Weight: 372 gr

