Knowledge acquisition (KA) plays an important role in building knowledge-based systems (KBS). However, evaluating different KA techniques has been difficult because of the cost of using human expertise in experimental studies. In this paper, we first address the problem of evaluating knowledge acquisition methods. We then develop an analysis of the types of errors a human expert makes in building a KBS. Our analysis suggests that a simulation of the key factors in building a KBS is possible. We demonstrate the approach by evaluating three variants of a practically successful KA methodology, Ripple Down Rules (RDR). The experimental results provide some fundamental insights into this family of KA techniques and suggest several avenues for improvement.
Tri M. Cao, Paul Compton
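
For readers unfamiliar with RDR, the following is a minimal sketch in Python of a single-classification Ripple Down Rules (SCRDR) tree, the simplest member of the RDR family; the class, attribute, and case names are illustrative assumptions, not taken from this paper. Each rule carries a condition, a conclusion, an "except" branch that refines the rule when it fires, and an "else" branch tried when it does not; the conclusion of the last rule to fire along the path is returned.

    # Illustrative sketch of single-classification RDR (SCRDR).
    # Names are assumptions for exposition, not from the paper.

    class RDRNode:
        """One rule: a condition, a conclusion, and two branches."""

        def __init__(self, condition, conclusion):
            self.condition = condition    # predicate: case -> bool
            self.conclusion = conclusion  # label given when the rule fires
            self.except_child = None      # refines this rule when it fires
            self.else_child = None        # tried when this rule does not fire

        def classify(self, case):
            """Return the conclusion of the last rule that fires on the path."""
            if self.condition(case):
                result = self.conclusion
                if self.except_child:
                    refined = self.except_child.classify(case)
                    if refined is not None:
                        result = refined  # a deeper exception overrides us
                return result
            if self.else_child:
                return self.else_child.classify(case)
            return None

    # Usage: the root is typically a default rule that always fires.
    # Expert corrections attach new rules where evaluation ended:
    # cases with fever become "flu", except those with a rash ("measles").
    root = RDRNode(lambda c: True, "default")
    root.except_child = RDRNode(lambda c: c["fever"], "flu")
    root.except_child.except_child = RDRNode(lambda c: c["rash"], "measles")

    print(root.classify({"fever": True, "rash": False}))   # -> flu
    print(root.classify({"fever": True, "rash": True}))    # -> measles
    print(root.classify({"fever": False, "rash": False}))  # -> default

The design point this sketch illustrates is why RDR suits incremental KA: when the expert disagrees with a classification, a new rule is attached only at the node where evaluation ended (as an "except" child if the last fired rule was wrong, an "else" child otherwise), so earlier knowledge is never edited, only locally patched.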