We present a generic approach to multimodal fusion which we call context-based multimodal integration. Key to this approach is that every multimodal input event is interpreted and...
Many user interfaces, from graphic design programs to navigation aids in cars, share a virtual space with the user. Such applications are often ideal candidates for speech interfa...
We motivate an approach to evaluating the utility of lifelike interface agents that is based on human eye movements rather than questionnaires. An eye tracker is employed to obtai...
Helmut Prendinger, Chunling Ma, Jin Yingzi, Arturo...
This paper provides a new fully automatic framework to analyze facial action units, the fundamental building blocks of facial expression enumerated in Paul Ekman’s Facial Action...
We present ongoing work on a project for automatic recognition of spontaneous facial actions. Spontaneous facial expressions differ substantially from posed expressions, similar t...
Bjorn Braathen, Marian Stewart Bartlett, Gwen Litt...