Sciweavers

67 search results - page 10 / 14
» Authoring pervasive multimodal user interfaces
WWW 2006, ACM
DiTaBBu: automating the production of time-based hypermedia content
We present DiTaBBu, the Digital Talking Books Builder, a framework for the automatic production of time-based hypermedia for the Web, focusing on the Digital Talking Books domain. Deliver...
Carlos Duarte, Luís Carriço, Rui Lop...
WWW 2004, ACM
An XPath-based discourse analysis module for spoken dialogue systems
This paper describes an XPath-based discourse analysis module for spoken dialogue systems that allows the dialogue author to easily manipulate and query both the user input's...
Giuseppe Di Fabbrizio, Charles Lewis
SIGCOMM 2010, ACM
NeuroPhone: brain-mobile phone interface using a wireless EEG headset
Neural signals are everywhere, just like mobile phones. We propose to use neural signals to control mobile phones for hands-free, silent, and effortless human-mobile interaction. Un...
Andrew T. Campbell, Tanzeem Choudhury, Shaohan Hu,...
CHI 2006, ACM
Feeling what you hear: tactile feedback for navigation of audio graphs
Access to digitally stored numerical data is currently very limited for sight-impaired people. Graphs and visualizations are often used to analyze relationships between numerical ...
Steven A. Wall, Stephen A. Brewster
IUI 2009, ACM
Modality effects on cognitive load and performance in high-load information presentation
In this study, we argue that modality planning in multimodal presentation systems needs to consider modality characteristics not only at the presentational level but also at the ...
Yujia Cao, Mariët Theune, Anton Nijholt