Sciweavers

21 search results - page 2 / 5
Towards Programming Multimodal Dialogues
IWANN
2009
Springer
Integrating Graph-Based Vision Perception to Spoken Conversation in Human-Robot Interaction
In this paper we present the integration of graph-based visual perception to spoken conversation in human-robot interaction. The proposed architecture has a dialogue manager as the...
Wendy Aguilar, Luis A. Pineda
ICMI
2004
Springer
Evaluation of spoken multimodal conversation
Spoken multimodal dialogue systems in which users address face-only or embodied interface agents have been gaining ground in research for some time. Although most systems are still...
Niels Ole Bernsen, Laila Dybkjær
TASLP
2008
A Study in Efficiency and Modality Usage in Multimodal Form Filling Systems
The usage patterns of speech and visual input modes are investigated as a function of relative input mode efficiency for both desktop and personal digital assistant (PDA) working ...
Manolis Perakakis, Alexandros Potamianos
IJCNN
2008
IEEE
Cognitive learning and the multimodal memory game: Toward human-level machine learning
Machine learning has made great progress during the last decades and is being deployed in a wide range of applications. However, current machine learning techniques are far fro...
Byoung-Tak Zhang
KI
2007
Springer
Semantic Graph Visualisation for Mobile Semantic Web Interfaces
Information visualisation benefits from the Semantic Web: multimodal mobile interfaces to the Semantic Web offer access to complex knowledge and information structures. Natural l...
Daniel Sonntag, Philipp Heim