Sciweavers

395 search results for "When do we interact multimodally" (page 40 of 79)
CORR 2008 (Springer)
Playing With Population Protocols
Population protocols have been introduced as a model of sensor networks consisting of very limited mobile agents with no control over their own movement: a collection of anonymous ...
Olivier Bournez, Jérémie Chalopin, J...
CHI 2009 (ACM)
Remote impact: shadowboxing over a distance
Florian 'Floyd' Mueller, Distance Lab, Horizon Scotland, The Enterprise Park, Forres, Moray IV36 2AB, UK, floyd@exertioninterfaces.com; Stefan Agamanolis, Distance Lab, Horizon Scotlan...
Florian Mueller, Stefan Agamanolis, Martin R. Gibb...
IUI 2004 (ACM)
Identifying adaptation dimensions in digital talking books
We have developed an automatic digital talking book (DTB) production platform [3], which can flexibly generate different user interfaces for talking books. DTBs are built from digital copies ...
Carlos Duarte, Luís Carriço
ICMI 2005 (Springer)
Contextual recognition of head gestures
Head pose and gesture offer several key conversational grounding cues and are used extensively in face-to-face interaction among people. We investigate how dialog context from an ...
Louis-Philippe Morency, Candace L. Sidner, Christo...
HCI 2009
A Similarity Measure for Vision-Based Sign Recognition
When we encounter an English word that we do not understand, we can look it up in a dictionary. However, when an American Sign Language (ASL) user encounters an unknown sign, looki...
Haijing Wang, Alexandra Stefan, Vassilis Athitsos