Sciweavers

Search: "When do we interact multimodally"
ACM DIS 2008 (ACM)
Exploring true multi-user multimodal interaction over a digital table
True multi-user, multimodal interaction over a digital table lets co-located people simultaneously gesture and speak commands to control an application. We explore this design spa...
Edward Tse, Saul Greenberg, Chia Shen, Clifton For...
CHI 1995 (ACM)
A Generic Platform for Addressing the Multimodal Challenge
Multimodal interactive systems support multiple interaction techniques, such as the synergistic use of speech and direct manipulation. The flexibility they offer results in an incr...
Laurence Nigay, Joëlle Coutaz
CHI 2004 (ACM)
ICARE: a component-based approach for the design and development of multimodal interfaces
Multimodal interactive systems support multiple interaction techniques, such as the synergistic use of speech, gesture, and eye-gaze tracking. The flexibility they offer results in ...
Jullien Bouchet, Laurence Nigay
ICCV 2003 (IEEE)
The Catchment Feature Model for Multimodal Language Analysis
The Catchment Feature Model (CFM) addresses two questions in multimodal interaction: how do we bridge video and audio processing with the realities of human multimodal communicati...
Francis K. H. Quek
CHI 2005 (ACM)
Children's and adults' multimodal interaction with 2D conversational agents
Few systems combine both Embodied Conversational Agents (ECAs) and multimodal input. This research aims at modeling the behavior of adults and children during their multimodal int...
Jean-Claude Martin, Stéphanie Buisine