We report results of an interdisciplinary project that aims at endowing a real robot system with the capacity for learning by goal-directed imitation. The control architecture is biologically inspired, as it reflects recent experimental findings from action observation/execution studies. We test its functionality in variations of an imitation paradigm in which the artefact has to reproduce the observed or inferred end state of a grasping-and-placing sequence displayed by a human model.