Sciweavers

54 search results - page 5 / 11
» Distributed speech processing in miPad's multimodal user int...
ICMI 2004 (Springer)
When do we interact multimodally?: cognitive load and multimodal communication patterns
Mobile usage patterns often entail high and fluctuating levels of difficulty as well as dual tasking. One major theme explored in this research is whether a flexible multimodal in...
Sharon L. Oviatt, Rachel Coulston, Rebecca Lunsfor...
ASSETS 2010 (ACM)
Introducing multimodal paper-digital interfaces for speech-language therapy
After a stroke or brain injury, it may be more difficult to understand language and communicate with others. Speech-language therapy may help an individual regain language and cope...
Anne Marie Piper, Nadir Weibel, James D. Hollan
ICMI 2005 (Springer)
Distributed pointing for multimodal collaboration over sketched diagrams
A problem faced by groups that are not co-located but need to collaborate on a common task is the reduced access to the rich multimodal communicative context that they would have ...
Paulo Barthelmess, Edward C. Kaiser, Xiao Huang, D...
UIST 1992 (ACM)
Tools for Building Asynchronous Servers to Support Speech and Audio Applications
Distributed client/server models are becoming increasingly prevalent in multimedia systems and advanced user interface design. A multimedia application, for example, may play and r...
Barry Arons
HICSS 2007 (IEEE)
Gulliver - A Framework for Building Smart Speech-Based Applications
Speech recognition has matured over recent years to the point that companies can seriously consider its use. However, from a developer’s perspective we observe that speech inp...
Werner Kurschl, Stefan Mitsch, Rene Prokop, Johann...