We investigated the potential for automatically detecting a learner’s affective states from posture patterns and dialogue features obtained during an interaction with AutoTutor, an intelligent tutoring system with conversational dialogue. Training and validation data were collected from the sensors in a learning session with AutoTutor, after which the affective states of the learner were rated by the learner, a peer, and two trained judges. Machine learning experiments with several standard classifiers indicated that the dialogue and posture features could individually discriminate between the affective states of boredom, confusion, flow (engagement), and frustration. Our results also indicated that combining the dialogue and posture features improved classification accuracy. However, the incremental gains from combining the two sensors were not sufficient to exhibit superadditivity (i.e., performance superior to an additive combination of the individual channels).
Sidney K. D'Mello, Arthur C. Graesser
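
To make the fusion and superadditivity test concrete, the following is a minimal sketch, not the authors' code: it assumes scikit-learn, uses synthetic matrices as stand-ins for the dialogue and posture features, and uses logistic regression as one representative of the "several standard classifiers"; the sample size, feature dimensions, and chance level (0.25 for four balanced classes) are all hypothetical.

```python
# Hypothetical sketch of channel-level fusion for affect classification.
# Feature matrices, dimensions, and the classifier are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200                                    # hypothetical number of rated instances
labels = rng.integers(0, 4, size=n)        # 0=boredom, 1=confusion, 2=flow, 3=frustration

dialogue = rng.normal(size=(n, 10))        # placeholder dialogue features
posture = rng.normal(size=(n, 6))          # placeholder posture features
combined = np.hstack([dialogue, posture])  # feature-level fusion of both channels

def mean_accuracy(features):
    """10-fold cross-validated accuracy for one feature set."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, features, labels, cv=10).mean()

acc_d = mean_accuracy(dialogue)
acc_p = mean_accuracy(posture)
acc_c = mean_accuracy(combined)

# Superadditivity check: does the combined model beat an additive
# combination of each channel's gain over chance?
chance = 0.25
additive = chance + (acc_d - chance) + (acc_p - chance)
print(f"dialogue={acc_d:.3f} posture={acc_p:.3f} combined={acc_c:.3f}")
print("superadditive" if acc_c > additive else "not superadditive")
```

Under this framing, the paper's finding corresponds to the second branch: the combined channel improves on either channel alone but falls short of the additive benchmark.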