Sciweavers

372 search results - page 18 / 75
» Situated Multimodal Documents
CHI
2009
ACM
A biologically inspired approach to learning multimodal commands and feedback for human-robot interaction
In this paper we describe a method that enables a robot to learn how a user gives it commands and feedback through speech, prosody, and touch. We propose a biologically inspired approac...
Anja Austermann, Seiji Yamada
ICALT
2006
IEEE
Vicarious Learning and Multimodal Dialogue
Vicarious learning is learning from watching others learn. We believe this is a powerful model for computer-based learning: learning episodes can be captured and replayed to ...
John Lee
FGR
2004
IEEE
Multimodal Coordination of Facial Action, Head Rotation, and Eye Motion during Spontaneous Smiles
Both the configuration of facial features and the timing of facial actions are important to emotion and communication. Previous literature has focused on the former. We developed ...
Jeffrey F. Cohn, Lawrence Ian Reed, Tsuyoshi Moriy...
IJCAI
1989
Bidirectional Use of Knowledge in the Multi-modal NL Access System XTRA
The acceptability and effectiveness of an expert system depend critically on its user interface. Natural language could be a well-suited communicative medium; however, curre...
Jürgen Allgayer, Roman M. Jansen-Winkeln, Car...
ICAC
2005
IEEE
PICCIL: Interactive Learning to Support Log File Categorization
Motivated by the real-world task of categorizing system log messages into predefined situation categories, this paper describes an interactive text-categorization method, PICC...
David Loewenstern, Sheng Ma, Abdi Salahshour