LREC 2008

Talking and Looking: the SmartWeb Multimodal Interaction Corpus

Nowadays portable devices such as smartphones can capture the user's face simultaneously with the voice input. Server-based or even embedded dialogue systems might use this additional information to detect whether the speaking user is addressing the system or another party, or whether the listening user is focused on the display. Based on these findings, the dialogue system might change its interaction strategy, improving the overall communication between human and system. To develop and test methods for On/Off-Focus detection, a multimodal corpus of user
Florian Schiel, Hannes Mögele