Michael J. Pazzani

In this paper, we address an issue that arises when the background knowledge used by explanation-based learning is incorrect. In particular, we consider the problems caused by an overly specific domain theory. Under this condition, the generalizations formed by explanation-based learning will make errors of omission when they are relied upon to make predictions or construct explanations. We describe a technique for detecting an error of omission, assigning blame for the error to an inference rule in the domain theory, and revising the domain theory to accommodate new examples.
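To make the detect, blame, and revise steps concrete, the following is a minimal propositional sketch. The Rule representation, the fewest-unmet-conditions blame heuristic, the condition-dropping revision operator, and the cup example are all illustrative assumptions, not the method developed in the paper.

```python
# Sketch of the detect / blame / revise cycle for an overly specific
# domain theory. Representation and heuristics are assumptions made
# for illustration, not the paper's actual algorithm.

from dataclasses import dataclass

@dataclass
class Rule:
    head: str
    body: list  # conjunctive antecedents (propositions)

def proves(theory, facts, goal, depth=8):
    """Backward chaining: a goal holds if it is a fact or if some rule
    concluding it has a fully provable body."""
    if goal in facts:
        return True
    if depth == 0:
        return False
    return any(
        rule.head == goal
        and all(proves(theory, facts, c, depth - 1) for c in rule.body)
        for rule in theory
    )

def detect_omission(theory, facts, label):
    """Error of omission: a known positive example the theory fails to predict."""
    return not proves(theory, facts, label)

def assign_blame(theory, facts, goal):
    """Heuristic blame assignment (an assumption): blame the rule for the
    goal with the fewest unprovable antecedents, on the guess that it is
    the overly specific one."""
    scored = []
    for rule in theory:
        if rule.head != goal:
            continue
        unmet = [c for c in rule.body if not proves(theory, facts, c)]
        scored.append((len(unmet), rule, unmet))
    _, rule, unmet = min(scored, key=lambda t: t[0])
    return rule, unmet

def revise(theory, rule, unmet):
    """One simple revision operator: drop the unsatisfied conditions so
    the weakened rule also covers the new example."""
    weakened = Rule(rule.head, [c for c in rule.body if c not in unmet])
    return [weakened if r is rule else r for r in theory]

# Usage: a hypothetical, overly specific theory of "cup" that demands a
# handle, confronted with a handleless positive example (a paper cup).
theory = [Rule("cup", ["liftable", "stable", "open_vessel", "has_handle"])]
example = {"liftable", "stable", "open_vessel"}

if detect_omission(theory, example, "cup"):
    rule, unmet = assign_blame(theory, example, "cup")
    theory = revise(theory, rule, unmet)
    assert proves(theory, example, "cup")
```

Dropping a condition is only one way to weaken an overly specific rule; the point of the sketch is the cycle itself: detect a failed prediction, localize the failure to a single inference rule, and edit that rule so the theory accommodates the new example.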