Sciweavers

395 search results for "When do we interact multimodally" (page 35 of 79)
HRI 2010 (ACM)
Judging a bot by its cover: an experiment on expectation setting for personal robots
Managing user expectations of personal robots becomes particularly challenging when the end-user just wants to know what the robot can do, and neither understands nor cares abou...
Steffi Paepcke, Leila Takayama
ICAT 2006 (IEEE)
Manipulation of Field of View for Hand-Held Virtual Reality
Today, hand-held computing and media devices are commonly used in our everyday lives. This paper assesses the viability of hand-held devices as effective platforms for “virtual r...
Jane Hwang, Jaehoon Jung, Gerard Jounghyun Kim
JCP 2008
Speech Displaces the Graphical Crowd
Developers of visual Interface Design Environments (IDEs), such as Microsoft Visual Studio and Java NetBeans, compete to produce increasingly crowded graphical interfaces in order t...
Mohammad M. Alsuraihi, Dimitris I. Rigas
PAMI 2002
Extraction of Visual Features for Lipreading
The multimodal nature of speech is often ignored in human-computer interaction, but lip deformations and other body motions, such as those of the head, convey additional information...
Iain Matthews, Timothy F. Cootes, J. Andrew Bangha...
CHI 2010 (ACM)
Using reinforcement to strengthen users' secure behaviors
Users have a strong tendency to dismiss security dialogs unthinkingly. Prior research has shown that users' responses to security dialogs become significantly more tho...
Ricardo Villamarín-Salomón, José...