Sciweavers

73 search results (page 12 of 15) for "Learning to predict where humans look"
SIGCSE 2006 (ACM)
Pedagogical techniques supported by the use of student devices in teaching software engineering
This paper describes our experiences in promoting a learning environment where active student involvement and interaction, as well as openness to a diversity of ideas, are supported ...
Valentin Razmov, Richard J. Anderson
IROS 2007 (IEEE)
Capturing robot workspace structure: representing robot capabilities
Humans have at some point learned an abstraction of the capabilities of their arms. By just looking at the scene, they can decide which places or objects they can easily reach a...
Franziska Zacharias, Christoph Borst, Gerd Hirzing...
CVPR 2010 (IEEE)
Segmenting Video Into Classes of Algorithm-Suitability
Given a set of algorithms, which one(s) should you apply to i) compute optical flow or ii) perform feature matching? Would looking at the sequence in question help you decide? I...
Oisin Mac Aodha, Gabriel Brostow, Marc Pollefeys
ISWC 2003 (IEEE)
Unsupervised, Dynamic Identification of Physiological and Activity Context in Wearable Computing
Context-aware computing describes the situation where a wearable or mobile computer is aware of its user's state and surroundings and modifies its behavior based on this informat...
Andreas Krause, Daniel P. Siewiorek, Asim Smailagi...
ICRA 2008 (IEEE)
Visual saliency model for robot cameras
Recent years have seen an explosion of research on the computational modeling of human visual attention in task-free conditions, i.e., given an image, predict where humans are l...
Nicholas J. Butko, Lingyun Zhang, Garrison W. Cott...