In sensorimotor behaviour, large variability in movement execution is often combined with a relatively low error in reaching the intended goal. This phenomenon is especially apparent when the limb chain under consideration has redundant degrees of freedom. Such redundancy, however, is a prerequisite of movement optimization, because without variability no changes in movement execution are possible. It is therefore suggested that, given a fitness criterion, a corresponding optimal movement trajectory can be learned by a genetic algorithm. Precise reaching, however, must also be learned. This requires establishing at least an internal inverse model of the (forward) "tool transformation" that governs the physical behaviour of the limb chain. Learning of an inverse model is best performed with the so-called autoimitation algorithm, a non-supervised learning mechanism equivalent to (modified) Hebbian learning. The paper shows theoretically how these two learning algorithms can be combined ...
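
The following is a minimal sketch, not the paper's own algorithm, of the two ingredients named above on a hypothetical redundant three-joint planar arm: a genetic algorithm that exploits the arm's redundancy to optimize a reaching posture under an assumed fitness criterion, and a modified Hebbian (delta-rule style) correlation update that learns a linear approximation of the inverse "tool transformation" from spontaneous motor babbling. All link lengths, learning rates, and population settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
LINKS = np.array([1.0, 0.8, 0.6])  # assumed link lengths of a 3-joint planar arm

def forward(theta):
    """Forward 'tool transformation': joint angles -> 2-D endpoint position."""
    angles = np.cumsum(theta)
    return np.array([np.sum(LINKS * np.cos(angles)),
                     np.sum(LINKS * np.sin(angles))])

# (1) Genetic algorithm: optimize a redundant posture for a reach target.
def fitness(theta, target):
    # assumed fitness criterion: endpoint error plus a small effort penalty
    return -np.linalg.norm(forward(theta) - target) - 0.01 * np.sum(theta ** 2)

def genetic_algorithm(target, pop_size=60, generations=200, sigma=0.1):
    pop = rng.uniform(-np.pi, np.pi, size=(pop_size, 3))
    for _ in range(generations):
        scores = np.array([fitness(ind, target) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]            # selection
        children = parents + sigma * rng.standard_normal(parents.shape)  # mutation
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind, target) for ind in pop])]

# (2) Hebbian-style learning of a linear inverse model from motor babbling
#     around a working posture (a stand-in for the autoimitation idea).
def learn_inverse_model(theta0, trials=5000, lr=0.01):
    W = np.zeros((3, 2))                 # maps endpoint change -> joint change
    x0 = forward(theta0)
    for _ in range(trials):
        dtheta = 0.05 * rng.standard_normal(3)      # spontaneous motor command
        dx = forward(theta0 + dtheta) - x0          # observed endpoint change
        # modified Hebbian update: correlate command and sensed effect,
        # with a decay term that stabilises the weights (delta-rule flavour)
        W += lr * np.outer(dtheta - W @ dx, dx)
    return W

target = np.array([1.2, 1.0])
theta_opt = genetic_algorithm(target)
W = learn_inverse_model(theta_opt)
residual = target - forward(theta_opt)
print("GA endpoint error:", np.linalg.norm(residual))
print("error after one inverse-model correction:",
      np.linalg.norm(forward(theta_opt + W @ residual) - target))
```

In this toy setting the genetic algorithm supplies a coarse, redundancy-exploiting posture, while the locally learned inverse model refines the final approach to the target, mirroring the division of labour suggested in the abstract.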