This paper describes our work in usage pattern analysis and development of a latent semantic analysis framework for interpreting multimodal user input consisting of speech and pen ge...
This paper describes an e-learning interface with multiple tutoring character agents. The character agents use eye movement information to facilitate empathy-relevant reasoning and...
Hua Wang, Jie Yang, Mark H. Chignell, Mitsuru Ishi...
In this paper we present a novel system for driver-vehicle interaction which combines speech recognition with facial expression recognition to increase intention recognition accura...
Conversations abound with uncertainties of various kinds. Treating conversation as inference and decision making under uncertainty, we propose a task-independent, multimodal archi...
While current eye-based interfaces offer enormous potential for efficient human-computer interaction, they also manifest the difficulty of inferring intent from user eye movements...