Abstract— We propose to improve the locomotive performance of humanoid robots by using approximated biped stepping and walking dynamics with reinforcement learning (RL). Although RL is a useful non-linear optimizer, it is usually difficult to apply RL to real robotic systems due to the large number of iterations required to acquire suitable policies. In this study, we first approximated the dynamics by using data from a real robot, and then applied the estimated dynamics in RL in order to improve stepping and walking policies. Gaussian processes were used to approximate the dynamics. By using Gaussian processes, we can estimate a probability distribution of a target function with a given covariance function. Thus, RL can take the uncertainty of the approximated dynamics into account throughout the learning process. We show that we can improve stepping and walking policies by using an RL method with the approximated models in both simulated and real environments. Experimental val...
Jun Morimoto, Christopher G. Atkeson, Gen Endo, Go
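As a minimal illustration of the Gaussian-process dynamics approximation described in the abstract, the sketch below fits a GP with a squared-exponential covariance to toy state-transition data and returns both a predictive mean and a predictive variance, the uncertainty that an RL update could weight. This is a generic NumPy sketch, not the authors' implementation: the kernel hyperparameters, noise level, and toy data are all assumptions for illustration.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.5, variance=1.0):
    # Squared-exponential covariance between row vectors of A and B
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    # Standard GP regression: posterior mean and per-point variance
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, var

# Toy "dynamics": next-state target as a nonlinear function of the state
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(30, 1))
y = np.sin(2.0 * X[:, 0]) + 0.05 * rng.standard_normal(30)

X_query = np.array([[0.0], [1.5]])
mu, var = gp_predict(X, y, X_query)  # predictive mean and uncertainty
```

A model-based RL loop in this spirit would query `gp_predict` in place of the real robot when evaluating candidate policies, and could down-weight rollouts that pass through high-variance regions of the learned dynamics.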