Sciweavers

247 search results - page 15 / 50
Query: Model-Based Design of Speech Interfaces
IUI
2005
ACM
Multimodal new vocabulary recognition through speech and handwriting in a whiteboard scheduling application
Our goal is to automatically recognize and enroll new vocabulary in a multimodal interface. To accomplish this, our technique aims to leverage the mutually disambiguating aspects o...
Edward C. Kaiser
NORDICHI
2004
ACM
Adaptivity in speech-based multilingual e-mail client
In speech interfaces, users must be aware of what can be done with the system – in other words, the system must provide information to help users know what to say. We have ad...
Esa-Pekka Salonen, Mikko Hartikainen, Markku Turun...
MHCI
2009
Springer
Contextual push-to-talk: a new technique for reducing voice dialog duration
We present a technique in which physical controls have both normal and voice-enabled activation styles. In the case of the latter, knowledge of which physical control was activate...
Garrett Weinberg
JCP
2008
Speech Displaces the Graphical Crowd
Developers of visual Interface Design Environments (IDEs), like Microsoft Visual Studio and Java NetBeans, compete to produce increasingly crowded graphical interfaces in order t...
Mohammad M. Alsuraihi, Dimitris I. Rigas
ICMCS
2000
IEEE
Talking Heads and Synthetic Speech: An Architecture for Supporting Electronic Commerce
Facial animation has been combined with text-to-speech synthesis to create innovative multimodal interfaces. In this paper, we present an architecture for this multimodal interfac...
Jörn Ostermann, David R. Millen