Sciweavers

395 search results - page 22 / 79
» When do we interact multimodally
CW
2003
IEEE
Disappearing Computers, Social Actors and Embodied Agents
Presently, there are user interfaces that allow multimodal interactions. Many existing research and prototype systems introduced embodied agents, assuming that they allow a more n...
Anton Nijholt
ICMI
2007
Springer
Voicepen: augmenting pen input with simultaneous non-linguistic vocalization
This paper explores using non-linguistic vocalization as an additional modality to augment digital pen input on a tablet computer. We investigated this through a set of novel inte...
Susumu Harada, T. Scott Saponas, James A. Landay
ICCV
2005
IEEE
Real-Time Interactively Distributed Multi-Object Tracking Using a Magnetic-Inertia Potential Model
This paper breaks with the common practice of using a joint state space representation and performing the joint data association in multi-object tracking. Instead, we present an i...
Dan Schonfeld, Magdi A. Mohamed, Wei Qu
CSCW
1996
ACM
Piazza: A Desktop Environment Supporting Impromptu and Planned Interactions
Much of the support for communication across distributed communities has focused on meetings and intentional contact. However, most interactions within co-located groups occur whe...
Ellen Isaacs, John C. Tang, Trevor Morris
IWC
1998
Interaction in the large
Most work in HCI focuses on interaction in the small: where tasks take a few minutes or hours and individual actions receive feedback within seconds. In contrast, many collaborati...
Alan J. Dix, Devina Ramduny, Julie Wilkinson