In this paper we introduce a multi-modal database for the analysis of human interaction, in particular mimicry, and elaborate on the theoretical hypotheses of the relationship betw...
Xiaofan Sun, Jeroen Lichtenauer, Michel Franç...
We present a gestural interface for entering text on a mobile device via continuous movements, with control based on feedback from a probabilistic language model. Text is represent...
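Language-model-driven continuous text entry of this kind is often realized by giving each candidate character a selection region whose size is proportional to its probability, so likely characters are easier to reach with a gesture. The sketch below illustrates that idea only; the interval scheme, the toy character model, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
def allocate_intervals(probs):
    """Map each character to a sub-interval of [0, 1) whose width is
    proportional to its probability under the language model."""
    intervals = {}
    lo = 0.0
    for ch, p in sorted(probs.items()):
        intervals[ch] = (lo, lo + p)
        lo += p
    return intervals

def select(intervals, y):
    """Return the character whose interval contains the normalized
    gesture coordinate y (e.g. vertical pointer position)."""
    for ch, (lo, hi) in intervals.items():
        if lo <= y < hi:
            return ch
    raise ValueError("coordinate outside [0, 1)")

# Toy character model (hypothetical probabilities): frequent characters
# receive wider, easier-to-hit regions.
model = {"e": 0.5, "t": 0.3, "q": 0.2}
regions = allocate_intervals(model)
```

Under this scheme, a gesture landing anywhere in the wide region for "e" selects it, while the rare "q" requires a more precise movement; updating the model after each selection yields the feedback loop the abstract describes.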
We present a novel multimodal interface which permits users to draw or paint using coordinated gestures of hand and mouth. A headworn camera captures an image of the mouth and the...
Understanding human-human interaction is fundamental to the long-term pursuit of powerful and natural multimodal interfaces. Nonverbal communication, including body posture, gestu...
Matthew Turk, Jeremy N. Bailenson, Andrew C. Beall...
One of the implicit assumptions of multi-modal interfaces is that human-computer interaction is significantly facilitated by providing multiple input and output modalities. Surpri...