Recent advances in gesture recognition have made controlling a humanoid robot in a natural way an appealing challenge. The Learning from Demonstration field benefits greatly from this kind of interaction, since users with no robotics knowledge can teach new tasks to robots more easily than ever before. In this work we present an inexpensive, easy-to-build humanoid robot along with a visual interaction interface that allows users to control it. The visual system is based on the Microsoft Kinect's RGB-D camera. Users interact with the robot simply by standing in front of the depth camera and mimicking the task they want the robot to perform. Our framework is cheap, easy to reproduce, and does not strictly depend on the particular underlying sensor or gesture recognition system.