
ICANN 2010, Springer

Exploring Continuous Action Spaces with Diffusion Trees for Reinforcement Learning

We propose a new approach for reinforcement learning in problems with continuous actions. Actions are sampled by means of a diffusion tree, which generates samples in the continuous action space and organizes them in a hierarchical tree structure. In this tree, each subtree holds a subset of the action samples and thus carries knowledge about a subregion of the action space. Additionally, we store the expected long-term return of the samples of a subtree in the subtree's root. The diffusion tree therefore integrates both a sampling technique and a means of representing acquired knowledge in a hierarchical fashion. New action samples are generated by recursively walking down the tree, so the information about subregions stored at the roots of the subtrees of a branching point can be used to direct the search and to generate new samples in promising regions. This facilitates control of the sample distribution, which allows for informed sampling based on the acquired knowledge, e.g....
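The abstract describes a tree whose nodes cover subregions of the continuous action space, hold the action samples falling in those subregions together with an estimate of their expected long-term return, and are traversed recursively to draw new actions. The Python sketch below illustrates that general idea for a one-dimensional action interval; the class name, the halving split rule, and the exploration parameter are illustrative assumptions, not the paper's exact diffusion-tree construction.

```python
import random


class DiffusionTreeNode:
    """Hypothetical sketch of one node of a diffusion tree over a 1-D
    continuous action space. Each node covers an interval, keeps the action
    samples drawn in that interval, and maintains a running estimate of the
    expected long-term return of the samples in its subtree."""

    def __init__(self, low, high):
        self.low, self.high = low, high   # subregion of the action space
        self.samples = []                 # action samples held in this subtree
        self.value = 0.0                  # running mean return of this subtree
        self.count = 0
        self.children = []                # child subtrees covering finer subregions

    def sample(self, explore=0.1):
        """Recursively walk down the tree. At each branching point, prefer the
        child with the higher stored return (informed sampling) but explore
        with a small probability; a leaf draws a fresh action uniformly from
        its subregion."""
        if not self.children:
            action = random.uniform(self.low, self.high)
            self.samples.append(action)
            return action
        if random.random() < explore:
            child = random.choice(self.children)
        else:
            child = max(self.children, key=lambda c: c.value)
        return child.sample(explore)

    def update(self, action, ret):
        """Propagate an observed return along the path that produced the
        action, updating the running mean stored at every subtree root."""
        self.count += 1
        self.value += (ret - self.value) / self.count
        for child in self.children:
            if child.low <= action <= child.high:
                child.update(action, ret)
                break

    def split(self):
        """Refine this leaf by halving its subregion, growing the hierarchy
        around regions that have been sampled."""
        mid = 0.5 * (self.low + self.high)
        self.children = [DiffusionTreeNode(self.low, mid),
                         DiffusionTreeNode(mid, self.high)]


if __name__ == "__main__":
    # Toy usage: returns peak near action 0.5, so sampling should drift
    # toward the right subregion.
    root = DiffusionTreeNode(-1.0, 1.0)
    root.split()
    for _ in range(100):
        a = root.sample()
        r = -(a - 0.5) ** 2
        root.update(a, r)
    best = max(root.children, key=lambda c: c.value)
    print("preferred subregion:", (best.low, best.high))
```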
Added 09 Nov 2010
Updated 09 Nov 2010
Type Conference
Year 2010
Where ICANN
Authors Christian Vollmer, Erik Schaffernicht, Horst-Michael Gross