Sciweavers

95 search results - page 14 / 19
» Speech and sketching for multimodal design
UIST
1992
ACM
Tools for Building Asynchronous Servers to Support Speech and Audio Applications
Distributed client/server models are becoming increasingly prevalent in multimedia systems and advanced user interface design. A multimedia application, for example, may play and r...
Barry Arons
IUI
2003
ACM
Affective multi-modal interfaces: the case of McGurk effect
This study is motivated by the increased need to understand human response to video links, 3G telephony, and avatars. We focus on the response of participants to audiovisual presentati...
Azra N. Ali, Philip H. Marsden
MM
2005
ACM
Recognition of hands-free speech and hand pointing action for conversational TV
In this paper, we propose the structure and components of a conversational television set (TV) that we can ask anything about the broadcast contents and receive the interesting in...
Yasuo Ariki, Tetsuya Takiguchi, Atsushi Sako
JCP
2008
Speech Displaces the Graphical Crowd
Developers of visual Interface Design Environments (IDEs), like Microsoft Visual Studio and Java NetBeans, are competing to produce ever more crowded graphical interfaces in order t...
Mohammad M. Alsuraihi, Dimitris I. Rigas
CSCW
2008
ACM
Supporting medical conversations between deaf and hearing individuals with tabletop displays
This paper describes the design and evaluation of Shared Speech Interface (SSI), an application for an interactive multitouch tabletop display designed to facilitate medical conve...
Anne Marie Piper, James D. Hollan