It is possible to model avatars that learn to simulate object manipulations and other complex actions. A number of applications may benefit from this technique, including safety, ergonomics, and film animation, among others. Current techniques control avatars manually, scripting what they can do by imposing constraints on their physical and cognitive models. In this paper we show how avatars in a controlled environment can learn behaviors as compositions of simple actions. The avatar learning process is described in detail for a generic behavior and tested in simple experiments. Local and global metrics are introduced to optimize the selection of a set of actions from the learnt pool. Performance on the learnt tasks is qualitatively compared with human performance.
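To make the composition idea concrete, the following Python sketch shows one plausible reading of metric-guided action selection: a behavior is built greedily from a pool of learnt primitive actions, a local metric scores each step's progress and a global metric scores the composition as a whole. The toy state representation and all names (`Action`, `local_score`, `global_score`, `select_behavior`) are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: behaviors as greedy compositions of simple
# actions from a learnt pool, guided by a local and a global metric.
from dataclasses import dataclass
from typing import Callable, Dict, List

State = Dict[str, float]


@dataclass
class Action:
    name: str
    apply: Callable[[State], State]


def distance(state: State, goal: State) -> float:
    """Euclidean distance between a state and the goal, over goal features."""
    return sum((state[k] - goal[k]) ** 2 for k in goal) ** 0.5


def local_score(action: Action, state: State, goal: State) -> float:
    """Local metric: how much a single action moves the state toward the goal."""
    return distance(state, goal) - distance(action.apply(state), goal)


def global_score(behavior: List[Action], start: State, goal: State) -> float:
    """Global metric: negated distance to the goal after the whole composition."""
    state = dict(start)
    for action in behavior:
        state = action.apply(state)
    return -distance(state, goal)


def select_behavior(pool: List[Action], start: State, goal: State,
                    max_len: int = 10) -> List[Action]:
    """Greedily compose a behavior from the learnt pool, accepting an action
    only when it improves both the local and the global metric."""
    behavior: List[Action] = []
    state = dict(start)
    for _ in range(max_len):
        best = max(pool, key=lambda a: local_score(a, state, goal))
        if local_score(best, state, goal) <= 0:
            break  # no remaining action makes local progress
        if global_score(behavior + [best], start, goal) > global_score(behavior, start, goal):
            behavior.append(best)
            state = best.apply(state)
        else:
            break
    return behavior


# Toy usage: two primitive actions the avatar might have learnt.
pool = [
    Action("reach", lambda s: {**s, "hand_x": s["hand_x"] + 0.1}),
    Action("retract", lambda s: {**s, "hand_x": s["hand_x"] - 0.1}),
]
plan = select_behavior(pool, {"hand_x": 0.0}, {"hand_x": 0.3})
print([a.name for a in plan])  # ['reach', 'reach', 'reach']
```

In this reading, the local metric prunes per-step choices cheaply while the global metric guards the composition against myopic steps; other selection schemes are equally compatible with the abstract's description.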