This paper presents a computational self-organizing model of multi-modal information, inspired by cortical maps. It shows how the organization in a map can be influenced by the ...
Audition is one of our most important modalities, widely used to communicate with and sense the environment around us. We present an auditory robotic...
For this special session of EU projects in the area of NeuroIT, we will review the progress of the MirrorBot project with special emphasis on its relation to reinforcement learning...
Cornelius Weber, David Muse, Mark Elshaw, Stefan W...
This paper explores the hypothesis that pointing-gesture recognition can be learned using a reward-based system. An experiment with two four-legged robots is presented. One of the...