This paper analyzes the impact of several lexical and grammatical features on the automated assessment of students' fine-grained understanding of tutored concepts. Truly effective dialog and pedagogy in Intelligent Tutoring Systems are only achievable when systems can understand the detailed relationships between a student's answer and the desired conceptual understanding. We describe a new method for recognizing whether a student's response entails that they understand the concepts being taught. We discuss the need for a finer-grained analysis of answers and describe a new representation for reference answers that addresses this need, breaking the answers into detailed facets and annotating their relationships to the student's answer more precisely. Human annotation at this detailed level still results in substantial inter-annotator agreement, 86.0%, with a Kappa statistic of 0.724. We present our approach to automatically assess student answers, which involves train...
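The inter-annotator agreement figures above pair raw percent agreement with a chance-corrected Kappa statistic. As a minimal sketch of how such a Kappa value is computed (the label set below is hypothetical, not the paper's actual facet annotation scheme), Cohen's kappa compares observed agreement p_o against the agreement p_e expected by chance from each annotator's label distribution:

```python
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Cohen's kappa for two annotators labeling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from the marginal label frequencies.
    """
    assert len(ann_a) == len(ann_b) and ann_a
    n = len(ann_a)
    # Observed agreement: fraction of items where the annotators match.
    p_o = sum(x == y for x, y in zip(ann_a, ann_b)) / n
    # Chance agreement: product of each annotator's marginal label rates.
    count_a, count_b = Counter(ann_a), Counter(ann_b)
    p_e = sum(count_a[lbl] * count_b[lbl]
              for lbl in set(count_a) | set(count_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical facet labels (e.g. E=expressed, C=contradicted, U=unaddressed):
a = ["E", "E", "C", "U"]
b = ["E", "E", "C", "C"]
print(round(cohens_kappa(a, b), 3))  # agreement 0.75, kappa 0.6
```

A kappa of 0.724 on raw agreement of 86.0%, as reported above, indicates agreement well beyond what the label distribution alone would produce.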
Rodney D. Nielsen, Wayne Ward, James H. Martin