Sciweavers
395 search results for "When do we interact multimodally" (page 17 of 79)
NLP 2000
Dialogues for Embodied Agents in Virtual Environments
This paper is a progress report on our research, design, and implementation of a virtual reality environment where users (visitors, customers) can interact with agents that help th...
Rieks op den Akker, Anton Nijholt
ICMI 2004, Springer
A multimodal learning interface for sketch, speak and point creation of a schedule chart
We present a video demonstration of an agent-based test bed application for ongoing research into multi-user, multimodal, computer-assisted meetings. The system tracks a two perso...
Edward C. Kaiser, David Demirdjian, Alexander Grue...
LREC 2008
An Evaluation of Spoken and Textual Interaction in the RITEL Interactive Question Answering System
The RITEL project aims to integrate a spoken language dialogue system and an open-domain information retrieval system in order to enable human users to ask a general question and ...
Dave Toney, Sophie Rosset, Aurélien Max, Ol...
AIS 2006, Springer
Meetings and meeting modeling in smart environments
In this paper we survey our research on smart meeting rooms and its relevance for augmented reality meeting support and virtual reality generation of meetings in real time or off-l...
Anton Nijholt, Rieks op den Akker, Dirk Heylen
ICRA 2008, IEEE
Tracking interacting targets with laser scanner via on-line supervised learning
Successful multi-target tracking requires locating the targets and labeling their identities. For a laser-based tracking system, the latter becomes significantly more challen...
Xuan Song, Jinshi Cui, Xulei Wang, Huijing Zhao, H...
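The ICRA 2008 entry frames multi-target tracking as two coupled problems: locating targets and labeling their identities across frames. For reference, here is a minimal sketch of the identity-labeling step using greedy nearest-neighbor data association; the track positions, distance threshold, and function name are illustrative assumptions, and this baseline does not implement the paper's actual on-line supervised learning approach.

```python
# Greedy nearest-neighbor data association: a simple baseline for the
# identity-labeling half of multi-target tracking. Names and the
# max_dist threshold are illustrative assumptions, not from the paper.
import numpy as np

def associate(tracks, detections, max_dist=0.5):
    """Greedily assign each detection (x, y) to the nearest free track.

    tracks: dict mapping track_id -> last known (x, y) position
    detections: array of shape (N, 2), e.g. laser-scan cluster centers
    Returns dict detection_index -> track_id (None means a new target).
    """
    assignments = {}
    free = dict(tracks)  # tracks not yet claimed in this frame
    for i, det in enumerate(detections):
        if not free:
            assignments[i] = None  # no track left; spawn a new one later
            continue
        ids = list(free)
        dists = [np.linalg.norm(det - np.asarray(free[t])) for t in ids]
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            assignments[i] = ids[j]
            del free[ids[j]]  # each track claims at most one detection
        else:
            assignments[i] = None
    return assignments

# Example: two known tracks, three new detections.
tracks = {"A": (0.0, 0.0), "B": (2.0, 1.0)}
detections = np.array([[0.1, 0.05], [2.1, 0.9], [5.0, 5.0]])
print(associate(tracks, detections))  # {0: 'A', 1: 'B', 2: None}
```

Nearest-neighbor association breaks down exactly where the abstract says identity labeling gets hard: when targets interact or cross paths, which is what motivates learning-based labeling instead of a fixed distance rule.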