

From pixels to objects: Enabling a spatial model for humanoid social robots

This work adds the concept of object to an existing low-level attention system of the humanoid robot iCub. Objects are defined as clusters of SIFT visual features. When the robot first encounters an unknown object within a certain (small) distance from its eyes, it uses depth perception to store the cluster of features found within an interval around that distance. Whenever a previously stored object crosses the robot's field of view again, it is recognized, mapped into an egocentric frame of reference, and gazed at. This mapping is persistent: the object's identity and position are retained even when it is not visible to the robot. Features are stored and recognized in a bottom-up way. Experimental results on the humanoid robot iCub validate the approach. This work lays the foundation for linking the bottom-up attention system with top-down, object-oriented information provided by humans.
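The pipeline described in the abstract (extract SIFT features, keep only those whose depth falls inside a small interval around the object, store the resulting cluster, and later match new features against the stored clusters) can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes OpenCV's SIFT and brute-force matcher, and the function names (store_object, recognize) and thresholds are purely illustrative.

```python
# Minimal sketch (not the paper's implementation): objects are stored as
# clusters of SIFT descriptors taken from keypoints whose depth lies inside
# a small interval, and recognized later by descriptor matching.
# Assumes OpenCV (cv2) with SIFT; names and thresholds are illustrative.
import cv2
import numpy as np

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

object_db = {}  # object name -> stored descriptor cluster (float32 array)

def store_object(name, gray_image, depth_map, target_depth, depth_tol=0.05):
    """Store SIFT descriptors whose keypoints fall within the depth interval.

    depth_map must be registered to gray_image (same resolution), depths in meters.
    """
    keypoints, descriptors = sift.detectAndCompute(gray_image, None)
    if descriptors is None:
        return
    selected = [
        desc
        for kp, desc in zip(keypoints, descriptors)
        if abs(depth_map[int(kp.pt[1]), int(kp.pt[0])] - target_depth) < depth_tol
    ]
    if selected:
        object_db[name] = np.asarray(selected, dtype=np.float32)

def recognize(gray_image, ratio=0.75, min_matches=10):
    """Return the name of the best-matching stored object, or None."""
    _, descriptors = sift.detectAndCompute(gray_image, None)
    if descriptors is None:
        return None
    best_name, best_count = None, 0
    for name, stored in object_db.items():
        if len(stored) < 2:
            continue  # need at least two neighbours for the ratio test
        pairs = matcher.knnMatch(descriptors, stored, k=2)
        good = [p for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        if len(good) >= min_matches and len(good) > best_count:
            best_name, best_count = name, len(good)
    return best_name
```

A full system along the lines of the abstract would additionally map a recognized object into an egocentric reference frame using the robot's gaze direction and depth estimate, and keep that identity and position even when the object leaves the field of view.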
Type: Conference
Year: 2009
Where: ICRA
Authors: Dario Figueira, Manuel Lopes, Rodrigo M. M. Ventura, Jonas Ruesch