
Multimodal saliency-based bottom-up attention: a framework for the humanoid robot iCub

Abstract: This work presents a multimodal bottom-up attention system for the humanoid robot iCub, in which the robot's decisions to move its eyes and neck are based on visual and acoustic saliency maps. We introduce a modular and distributed software architecture that fuses visual and acoustic saliency maps into a single egocentric frame of reference. This system endows the iCub with an emergent exploratory behavior that reacts to combined visual and auditory saliency. The developed software modules provide a flexible foundation for the open iCub platform and for further experiments and developments, including higher levels of attention and representation of the peripersonal space.
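
To make the fusion idea from the abstract concrete, here is a minimal Python sketch: two per-modality saliency maps defined over a common egocentric (azimuth x elevation) grid are normalized, combined by a weighted sum, and the peak of the fused map is taken as the next gaze target. The grid resolution, the normalization, the modality weights, and the names fuse_saliency and gaze_target are illustrative assumptions; the paper's actual system is a modular, distributed software architecture running on the iCub.

import numpy as np

# Assumed egocentric grid: 2-degree bins, azimuth +/-180 deg, elevation +/-90 deg.
AZ_BINS, EL_BINS = 180, 90

def normalize(m):
    """Scale a saliency map to [0, 1] so the two modalities are comparable."""
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fuse_saliency(visual, acoustic, w_visual=0.6, w_acoustic=0.4):
    """Weighted sum of normalized per-modality maps (weights are assumed)."""
    return w_visual * normalize(visual) + w_acoustic * normalize(acoustic)

def gaze_target(egomap):
    """Return (azimuth, elevation) in degrees of the most salient bin."""
    el_idx, az_idx = np.unravel_index(np.argmax(egomap), egomap.shape)
    return az_idx * 360.0 / AZ_BINS - 180.0, el_idx * 180.0 / EL_BINS - 90.0

# Example: a visual stimulus to the right, a sound source behind and to the left.
visual = np.zeros((EL_BINS, AZ_BINS))
acoustic = np.zeros((EL_BINS, AZ_BINS))
visual[45, 120] = 1.0   # stimulus at azimuth +60 deg, elevation 0 deg
acoustic[45, 30] = 1.0  # sound at azimuth -120 deg, elevation 0 deg

print(gaze_target(fuse_saliency(visual, acoustic)))  # (60.0, 0.0): vision wins

With the assumed weights, the visually salient location wins and the robot would orient eyes and neck toward it; raising w_acoustic would instead draw gaze to the sound source, which is the kind of trade-off a fused egocentric map makes explicit.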
Type: Conference
Year: 2008
Where: ICRA (IEEE)
Authors: Jonas Ruesch, Manuel Lopes, Alexandre Bernardino, Jonas Hörnstein, José Santos-Victor, Rolf Pfeifer