Sciweavers

247 search results - page 28 / 50
» Model-Based Design of Speech Interfaces
AVI
2006
Enabling interaction with single user applications through speech and gestures on a multi-user tabletop
Co-located collaborators often work over physical tabletops with rich geospatial information. Previous research shows that people use gestures and speech as they interact with art...
Edward Tse, Chia Shen, Saul Greenberg, Clifton For...
ATAL
2009
Springer
Increasing the expressiveness of virtual agents: autonomous generation of speech and gesture for spatial description tasks
Embodied conversational agents are required to be able to express themselves convincingly and autonomously. Based on an empirical study on spatial descriptions of landmarks in dire...
Kirsten Bergmann, Stefan Kopp
CHI
2006
ACM
Speech pen: predictive handwriting based on ambient multimodal recognition
It is tedious to handwrite long passages of text. To make this process more efficient, we propose predictive handwriting that provides input predictions when the user writ...
Kazutaka Kurihara, Masataka Goto, Jun Ogata, Takeo...
ICMI
2004
Springer
Analysis of emotion recognition using facial expressions, speech and multimodal information
The interaction between human beings and computers will be more natural if computers are able to perceive and respond to human non-verbal communication such as emotions. Although ...
Carlos Busso, Zhigang Deng, Serdar Yildirim, Murta...
CHI
2000
ACM
Does computer-generated speech manifest personality? An experimental test of similarity-attraction
This study examines whether people would interpret and respond to paralinguistic personality cues in computer-generated speech in the same way as they do human speech. Participants...
Clifford Nass, Kwan Min Lee