In this paper, we study how a humanoid robot can learn affordance relations in its environment through its own interactions, in an unsupervised way. Specifically, we developed a simple tapping behavior on the iCub humanoid robot simulator and allowed the robot to interact with a set of objects of different types and sizes positioned within its reach. The interaction schema is as follows: an object is placed in the robot's visual field; the robot then focuses on the object and applies its tapping behavior. The robot records its initial and final percepts of the scene, obtained from a range camera, in the form of feature vectors. The difference between the initial and final features is taken as the effect features. The effect features are clustered using Kohonen's self-organizing maps to generate a set of effect categories in an unsupervised way. Then, we used the ReliefF feature selection method to determine the most relevant features, and a multi-class support vector machine...
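The abstract outlines a concrete pipeline: effect features computed as percept differences, unsupervised clustering with a self-organizing map, ReliefF-based relevance ranking, and a multi-class SVM. The following is a minimal sketch of that pipeline, assuming Python with numpy, minisom, skrebate, and scikit-learn; the synthetic data, SOM grid size, hyperparameters, and the SVM's prediction target (effect category from initial-percept features) are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from minisom import MiniSom          # Kohonen self-organizing map
from skrebate import ReliefF         # ReliefF feature relevance
from sklearn.svm import SVC          # multi-class SVM (one-vs-one)

rng = np.random.default_rng(0)

# Synthetic stand-ins for the range-camera percepts: one feature vector
# per interaction, recorded before and after the tap. Shapes are arbitrary.
n_interactions, n_features = 200, 64
initial = rng.normal(size=(n_interactions, n_features))
final = rng.normal(size=(n_interactions, n_features))

# Effect features: the difference between final and initial percepts.
effects = final - initial

# Cluster effect features with a SOM to obtain discrete effect categories
# in an unsupervised way. The 3x3 grid is an arbitrary choice.
grid = (3, 3)
som = MiniSom(grid[0], grid[1], n_features,
              sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(effects, num_iteration=1000)
# Label each interaction by the index of its winning SOM node.
labels = np.array([np.ravel_multi_index(som.winner(e), grid)
                   for e in effects])

# ReliefF ranks the initial-percept features by their relevance to the
# discovered effect categories; keep the top 16 (arbitrary cutoff).
relief = ReliefF(n_features_to_select=16, n_neighbors=10)
relief.fit(initial, labels)
selected = relief.top_features_[:16]

# Train a multi-class SVM to predict the effect category from the
# selected initial-percept features (assumed role of the SVM here,
# since the abstract is truncated at this point).
svm = SVC(kernel="rbf")
svm.fit(initial[:, selected], labels)
print("training accuracy:", svm.score(initial[:, selected], labels))
```

With real interaction data in place of the random arrays, such a predictor would let the robot anticipate the effect category of tapping a novel object from its initial percept alone, which is the usual reading of an affordance model of this kind.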