We explore the frequency and impact of misunderstandings in an existing corpus of tutorial dialogues, where a student appears to arrive at an interpretation that is not in line with what the system developers intended. We find that this type of error is frequent, regardless of whether student input is typed or spoken, and that it does not respond well to general misconception repair strategies. Further, we find that it is feasible to detect misunderstandings, and we suggest alternative strategies for repairing them that we intend to test in the future.

Keywords: misunderstandings, tutorial dialogue, dialogue systems
Pamela W. Jordan, Diane J. Litman, Michael Lipschu