Sciweavers

95 search results - page 5 / 19
» Speech and sketching for multimodal design
ACMDIS
2008
ACM
Exploring true multi-user multimodal interaction over a digital table
True multi-user, multimodal interaction over a digital table lets co-located people simultaneously gesture and speak commands to control an application. We explore this design spa...
Edward Tse, Saul Greenberg, Chia Shen, Clifton For...
SIGIR
1999
ACM
SCAN: Designing and Evaluating User Interfaces to Support Retrieval From Speech Archives
Previous examinations of search in textual archives have assumed that users first retrieve a ranked set of documents relevant to their query, and then visually scan through these ...
Steve Whittaker, Julia Hirschberg, John Choi, Dona...
TOCHI
1998
The Integrality of Speech in Multimodal Interfaces
A framework of complementary behavior has been proposed which maintains that direct manipulation and speech interfaces have reciprocal strengths and weaknesses. This suggests that...
Michael A. Grasso, David S. Ebert, Timothy W. Fini...
AVI
2006
Enabling interaction with single user applications through speech and gestures on a multi-user tabletop
Co-located collaborators often work over physical tabletops with rich geospatial information. Previous research shows that people use gestures and speech as they interact with art...
Edward Tse, Chia Shen, Saul Greenberg, Clifton For...
SE
2007
Development issues for speech-enabled mobile applications
Developing a speech-based application for mobile devices requires substantial upfront work, since mobile devices and speech recognition systems vary dramatically in their capabilities. While...
Werner Kurschl, Stefan Mitsch, Rene Prokop, Johann...