Many software systems could significantly improve their performance if they could interpret the nonverbal cues in their users' interactions as humans normally do. Currently, Intelligent Tutoring Systems (ITSs), like other software systems, are unable to use nonverbal cues to interpret students' responses to instructional material as human tutors can. We believe that this capability is essential for adapting teaching strategy to the needs of the learner. We performed an experiment aimed at identifying the kinds of gestures students use in a human-to-human learning context. We identified a range of gestures used in one-to-one tutoring environments, as well as a dependency of gesture use on students' skill level. As a result, we suggest how the student model in an ITS should reflect this dependency. These results are applicable to HCI in general.