Understanding answers to open-ended explanation questions is important in intelligent tutoring systems. Existing systems use natural language techniques for essay analysis, but revert to scripted interaction with short-answer questions during remediation, which makes it difficult to adapt the dialogue to individual students. We describe a corpus study showing that there is a relationship between the types of faulty answers and the remediation strategies that tutors use; that human tutors respond differently to different kinds of correct answers; and that restating correct answers is associated with improved learning. Based on this study, we describe a design for a diagnoser that supports remediation of open-ended questions and provides an analysis of natural language answers that enables adaptive generation of tutorial feedback for both correct and faulty answers.
Myroslava Dzikovska, Gwendolyn E. Campbell, Charle
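To make the diagnosis-to-feedback idea in the abstract concrete, here is a minimal Python sketch of how an answer analysis might be mapped to different tutorial strategies. All names (Diagnosis, Analysis, diagnose, select_feedback) and the toy keyword-matching heuristic are illustrative assumptions, not the authors' actual diagnoser or remediation policy.

```python
# Hypothetical sketch: classify a student answer relative to the expected
# concepts, then choose a feedback strategy that depends on the diagnosis.
from dataclasses import dataclass
from enum import Enum, auto


class Diagnosis(Enum):
    CORRECT = auto()            # all expected concepts expressed
    PARTIALLY_CORRECT = auto()  # some expected concepts missing
    CONTRADICTORY = auto()      # answer expresses a known misconception
    IRRELEVANT = auto()         # no expected concepts found


@dataclass
class Analysis:
    diagnosis: Diagnosis
    matched: set
    missing: set


def diagnose(answer: str, expected: set, misconceptions: set) -> Analysis:
    """Toy diagnoser: compare concept keywords found in the answer against
    expected concepts and known misconception keywords."""
    words = set(answer.lower().split())
    matched = {c for c in expected if c in words}
    missing = expected - matched
    if any(m in words for m in misconceptions):
        d = Diagnosis.CONTRADICTORY
    elif not matched:
        d = Diagnosis.IRRELEVANT
    elif missing:
        d = Diagnosis.PARTIALLY_CORRECT
    else:
        d = Diagnosis.CORRECT
    return Analysis(d, matched, missing)


def select_feedback(a: Analysis) -> str:
    """Map each diagnosis to a different tutorial move, so feedback adapts
    to the kind of (in)correctness rather than following a single script."""
    if a.diagnosis is Diagnosis.CORRECT:
        return "Restate the correct answer to reinforce it."
    if a.diagnosis is Diagnosis.PARTIALLY_CORRECT:
        return f"Acknowledge {sorted(a.matched)}; prompt for {sorted(a.missing)}."
    if a.diagnosis is Diagnosis.CONTRADICTORY:
        return "Address the misconception directly before re-asking."
    return "Rephrase the question and offer a hint."


if __name__ == "__main__":
    analysis = diagnose(
        "the bulb lights because the circuit is closed",
        expected={"closed", "circuit", "path"},
        misconceptions={"one-wire"},
    )
    print(analysis.diagnosis.name, "->", select_feedback(analysis))
```

In this sketch the branching in select_feedback stands in for the corpus-derived association between answer types and remediation strategies; a real system would replace the keyword heuristic with genuine natural language analysis.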