IBERAMIA
2010
Springer

Dynamic Reward Shaping: Training a Robot by Voice

Reinforcement learning is commonly used for learning tasks in robotics; however, traditional algorithms can require very long training times. Reward shaping has recently been used to inject domain knowledge through extra rewards and thereby converge faster. These shaping functions are normally defined in advance by the user and remain static. This paper introduces a dynamic reward shaping approach, in which the extra rewards are not given consistently, can vary over time, and may occasionally run contrary to what is needed to achieve the goal. In the experiments, a user provides verbal feedback while a robot performs a task, and this feedback is translated into additional rewards. It is shown that convergence can still be guaranteed as long as most of the shaping rewards given per state are consistent with the goal, and that even with fairly noisy interaction the system converges faster than traditional reinforcement learning techniques.
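The idea in the abstract — adding a noisy, time-varying shaping signal to the environment reward — can be sketched with a toy tabular Q-learning loop. The chain environment, the `noisy_feedback` model, and all parameter values below are illustrative assumptions, not the paper's actual robot setup or algorithm:

```python
import random

def noisy_feedback(state, next_state, goal, consistency=0.8):
    """Simulated verbal feedback: rewards progress toward the goal,
    but with probability 1 - consistency flips sign, modeling the
    inconsistent, sometimes contradictory rewards the abstract describes."""
    progress = abs(goal - state) - abs(goal - next_state)
    shaping = 0.5 if progress > 0 else -0.5
    if random.random() > consistency:
        shaping = -shaping  # contradictory feedback from the "user"
    return shaping

def train(n_states=10, episodes=300, alpha=0.2, gamma=0.95,
          epsilon=0.1, consistency=0.8, seed=0):
    """Tabular Q-learning on a 1-D chain (start at 0, goal at the right
    end); the noisy shaping signal is simply added to the sparse
    environment reward at every step."""
    random.seed(seed)
    goal = n_states - 1
    Q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != goal:
            if random.random() < epsilon:
                a = random.randrange(2)            # explore
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1  # exploit (ties go right)
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == goal else 0.0         # sparse environment reward
            r += noisy_feedback(s, s2, goal, consistency)  # dynamic shaping
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

With `consistency=0.8`, most shaping rewards per state agree with the goal, so the greedy policy still learns to move right everywhere, matching the abstract's convergence condition; lowering `consistency` toward 0.5 makes the feedback pure noise.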
Added 25 Jan 2011
Updated 25 Jan 2011
Type Journal
Year 2010
Where IBERAMIA
Authors Ana C. Tenorio-Gonzalez, Eduardo F. Morales, Luis Villaseñor Pineda