Sciweavers

54 search results - page 7 / 11
» Distributed speech processing in miPad's multimodal user int...
ICMI 2007, Springer
Automated generation of non-verbal behavior for virtual embodied characters
In this paper we introduce a system that automatically adds different types of non-verbal behavior to a given dialogue script between two virtual embodied agents. It allows us to ...
Werner Breitfuss, Helmut Prendinger, Mitsuru Ishiz...
MM 2005, ACM
Recognition of hands-free speech and hand pointing action for conversational TV
In this paper, we propose the structure and components of a conversational television set (TV) that we can ask anything about the broadcast contents and receive the interesting in...
Yasuo Ariki, Tetsuya Takiguchi, Atsushi Sako
SAMT 2007, Springer
A Constraint-Based Graph Visualisation Architecture for Mobile Semantic Web Interfaces
Multimodal and dialogue-based mobile interfaces to the Semantic Web offer access to complex knowledge and information structures. We explore more fine-grained co-ordina...
Daniel Sonntag, Philipp Heim
IJMHCI 2011
3D Talking-Head Interface to Voice-Interactive Services on Mobile Phones
We present a novel framework for easy creation of interactive, platform-independent voice services with an animated 3D talking-head interface on mobile phones. The framework supp...
Jirí Danihelka, Roman Hak, Lukas Kencl, Jir...
ICMI 2004, Springer
Towards integrated microplanning of language and iconic gesture for multimodal output
When talking about spatial domains, humans frequently accompany their explanations with iconic gestures to depict what they are referring to. For example, when giving directions, ...
Stefan Kopp, Paul Tepper, Justine Cassell