Sciweavers

2593 search results for "Learning Visual Attributes" (page 161 of 519)
AAAI 2004
VModel: A Visual Qualitative Modeling Environment for Middle-School Students
Learning how to create, test, and revise models is a central skill in scientific reasoning. We argue that qualitative modeling provides an appropriate level of representation for ...
Kenneth D. Forbus, Karen Carney, Bruce L. Sherin, ...
ICRA 2003, IEEE, 141 views (Robotics)
Visual transformations in gesture imitation: what you see is what you do
We propose an approach for a robot to imitate the gestures of a human demonstrator. Our framework consists solely of two components: a Sensory-Motor Map (SMM) and a View-Point Tra...
Manuel Cabido-Lopes, José Santos-Victor
AROBOTS 1998, 111 views
Emergence and Categorization of Coordinated Visual Behavior Through Embodied Interaction
This paper discusses the emergence of sensorimotor coordination for ESCHeR, a 4DOF redundant foveated robot-head, by interaction with its environment. A feedback-error-learning (FEL...
Luc Berthouze, Yasuo Kuniyoshi
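As a rough orientation for this entry, the sketch below shows generic feedback-error-learning (FEL) in the Kawato style: a conventional feedback controller drives the plant while its output doubles as the training signal for a learned feedforward (inverse) model. This is a minimal illustration of the named technique only; the linear model, PD gains, and the assumption that the error is already expressed in motor coordinates are hypothetical and not taken from the paper.

```python
import numpy as np

class LinearInverseModel:
    """Hypothetical learned feedforward controller: u_ff = W @ x_desired."""
    def __init__(self, n_features, n_motors, lr=0.01):
        self.W = np.zeros((n_motors, n_features))
        self.lr = lr

    def predict(self, x_desired):
        return self.W @ x_desired

    def update(self, x_desired, feedback_command):
        # Core FEL idea: the feedback controller's output is used as the
        # error signal for training the feedforward (inverse) model.
        self.W += self.lr * np.outer(feedback_command, x_desired)

def fel_step(inverse_model, x_desired, error, d_error, kp=2.0, kd=0.5):
    """One control step: feedback PD command plus learned feedforward term."""
    u_fb = kp * error + kd * d_error          # conventional PD feedback
    u_ff = inverse_model.predict(x_desired)   # learned feedforward command
    inverse_model.update(x_desired, u_fb)     # adapt the inverse model online
    return u_ff + u_fb                        # total motor command
```

As the inverse model improves, the feedback term shrinks toward zero, which is the usual signature of FEL converging.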
ISVC 2010, Springer
Egocentric Visual Event Classification with Location-Based Priors
We present a method for visual classification of actions and events captured from an egocentric point of view. The method tackles the challenge of a moving camera by creating defor...
Sudeep Sundaram, Walterio W. Mayol-Cuevas
ICML 2010, IEEE
Deep networks for robust visual recognition
Deep Belief Networks (DBNs) are hierarchical generative models which have been used successfully to model high dimensional visual data. However, they are not robust to common vari...
Yichuan Tang, Chris Eliasmith
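For context on the model class named in this entry, the following is a minimal sketch of a single binary restricted Boltzmann machine trained with one step of contrastive divergence (CD-1), the standard building block that is stacked layer by layer to form a Deep Belief Network. Layer sizes, learning rate, and initialization are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Binary RBM trained with CD-1; stacking several gives a DBN."""
    def __init__(self, n_visible, n_hidden, lr=0.05, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def cd1_update(self, v0):
        """v0: batch of binary visible vectors, shape (batch, n_visible)."""
        # Positive phase: hidden activations driven by the data.
        p_h0 = sigmoid(v0 @ self.W + self.b_h)
        h0 = (self.rng.random(p_h0.shape) < p_h0).astype(float)
        # Negative phase: one Gibbs step down to visibles and back up.
        p_v1 = sigmoid(h0 @ self.W.T + self.b_v)
        p_h1 = sigmoid(p_v1 @ self.W + self.b_h)
        # CD-1 approximation to the log-likelihood gradient.
        self.W += self.lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
        self.b_v += self.lr * (v0 - p_v1).mean(axis=0)
        self.b_h += self.lr * (p_h0 - p_h1).mean(axis=0)
```

In a DBN, each trained RBM's hidden activations become the visible data for the next RBM in the stack; this greedy layer-wise pretraining is what the abstract refers to as a hierarchical generative model.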