Sciweavers

304 search results - page 57 / 61
Search: A Multi-Context System Computing Modalities
HUC 2010, Springer
Augmenting on-screen instructions with micro-projected guides: when it works, and when it fails
We present a study that evaluates the effectiveness of augmenting on-screen instructions with micro-projection for manual task guidance. Unlike prior work, which replaced screen in...
Stephanie Rosenthal, Shaun K. Kane, Jacob O. Wobbr...
EICS 2009, ACM
The tradeoff between spatial jitter and latency in pointing tasks
Interactive computing systems frequently use pointing as an input modality, while also supporting other forms of input such as alphanumeric, voice, gesture, and force. We focus on...
Andriy Pavlovych, Wolfgang Stürzlinger
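As background for the pointing-task entry above: the usual baseline model for pointing performance is Fitts's law. How the paper folds latency and spatial jitter into this model is not visible in the excerpt, so the formula below is only the common Shannon formulation.

    MT = a + b \log_2(D/W + 1)

Here MT is the movement time, D the distance to the target, W the target width, and a, b are empirically fitted constants; latency and spatial jitter are typically studied through their effect on these constants or on an effective target width.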
KDD 2010, ACM
Beyond heuristics: learning to classify vulnerabilities and predict exploits
The security demands on modern system administration are enormous and getting worse. Chief among these demands, administrators must monitor the continual ongoing disclosure of sof...
Mehran Bozorgi, Lawrence K. Saul, Stefan Savage, G...
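As a hedged illustration of the general approach named in the entry above (learning a classifier over vulnerability data to predict exploits), the sketch below assumes scikit-learn and toy data; the authors' actual features and pipeline are not shown in this excerpt.

    # Minimal sketch: classify vulnerability descriptions as exploited vs. not exploited.
    # Hypothetical toy data; the real study used far richer features than raw text.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    # Toy vulnerability descriptions with labels (1 = exploit observed, 0 = none).
    descriptions = [
        "remote buffer overflow in web server allows code execution",
        "cross-site scripting in admin panel",
        "local privilege escalation via kernel race condition",
        "information disclosure through verbose error messages",
    ]
    labels = [1, 1, 1, 0]

    # TF-IDF text features feeding a linear SVM, a common baseline for text classification.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    model.fit(descriptions, labels)

    # Score a new disclosure to help prioritize administrator attention.
    print(model.predict(["heap overflow in image parser enables remote code execution"]))

A real deployment would train on historical disclosures with known exploit outcomes and report a ranking or probability rather than a hard label.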
GIS 2008, ACM
Combining 3-D geovisualization with force feedback driven user interaction
We describe a prototype software system for investigating novel human-computer interaction techniques for 3-D geospatial data. This system, M4-Geo (Multi-Modal Mesh Manipulation o...
Adam Faeth, Michael Oren, Chris Harding
CHI 2005, ACM
Children's and adults' multimodal interaction with 2D conversational agents
Few systems combine both Embodied Conversational Agents (ECAs) and multimodal input. This research aims at modeling the behavior of adults and children during their multimodal int...
Jean-Claude Martin, Stéphanie Buisine