AIPS 2011

Sample-Based Planning for Continuous Action Markov Decision Processes

In this paper, we present a new algorithm that integrates recent advances in solving continuous bandit problems with sample-based rollout methods for planning in Markov Decision Processes (MDPs). Our algorithm, Hierarchical Optimistic Optimization applied to Trees (HOOT), addresses planning in continuous-action MDPs. Empirical results show that HOOT meets or exceeds the performance of a comparable discrete-action planner by eliminating the need for manual discretization of the action space.
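
For intuition, the sketch below illustrates the flavor of this approach: a Hierarchical Optimistic Optimization (HOO) tree adaptively partitions a one-dimensional action interval and is fed Monte-Carlo returns from simulated rollouts. This is a simplified, root-only variant written for illustration; the names and parameters (plan_action, model, nu1, rho, the horizon and rollout counts) are assumptions rather than details from the paper, which applies a HOO instance at every node of the sample-based rollout tree.

import math
import random


class HOONode:
    """One node of the HOO cover tree; it covers the action interval [lo, hi)."""
    def __init__(self, lo, hi, depth):
        self.lo, self.hi, self.depth = lo, hi, depth
        self.left = None
        self.right = None
        self.count = 0     # number of rollouts whose action fell in this cell
        self.mean = 0.0    # running mean of the returns observed for this cell


class HOO:
    """Hierarchical Optimistic Optimization over a 1-D action interval.

    nu1 and rho are the usual HOO smoothness parameters; the defaults here
    are illustrative assumptions, not values taken from the paper.
    """
    def __init__(self, lo=0.0, hi=1.0, nu1=1.0, rho=0.5):
        self.root = HOONode(lo, hi, 0)
        self.nu1, self.rho = nu1, rho
        self.total = 0     # total number of plays seen by this HOO instance

    def _b_value(self, node):
        # Unexpanded or unvisited cells are maximally optimistic.
        if node is None or node.count == 0:
            return float("inf")
        u = (node.mean
             + math.sqrt(2.0 * math.log(max(self.total, 1)) / node.count)
             + self.nu1 * self.rho ** node.depth)
        return min(u, max(self._b_value(node.left), self._b_value(node.right)))

    def select(self):
        """Descend by B-values to an unvisited cell; return (action, path)."""
        node, path = self.root, []
        while node.count > 0:
            path.append(node)
            if node.left is None:  # split the cell lazily at its midpoint
                mid = 0.5 * (node.lo + node.hi)
                node.left = HOONode(node.lo, mid, node.depth + 1)
                node.right = HOONode(mid, node.hi, node.depth + 1)
            node = (node.left
                    if self._b_value(node.left) >= self._b_value(node.right)
                    else node.right)
        path.append(node)
        return random.uniform(node.lo, node.hi), path

    def update(self, path, ret):
        """Back up a return (ideally normalized to [0, 1]) along the path."""
        self.total += 1
        for node in path:
            node.count += 1
            node.mean += (ret - node.mean) / node.count


def plan_action(model, state, num_rollouts=200, horizon=20, gamma=0.95):
    """Root-only HOOT-style planning sketch.

    model(state, action) -> (next_state, reward) is a hypothetical
    generative-model interface with rewards assumed to lie in [0, 1].
    The full algorithm keeps one HOO instance per node of the rollout
    tree; this sketch keeps a single HOO at the root and uses a
    uniformly random rollout policy below it.
    """
    hoo = HOO(0.0, 1.0)
    norm = sum(gamma ** t for t in range(horizon))  # scales returns into [0, 1]
    for _ in range(num_rollouts):
        action, path = hoo.select()       # HOO proposes the root action
        s, a, ret, disc = state, action, 0.0, 1.0
        for _ in range(horizon):
            s, r = model(s, a)
            ret += disc * r
            disc *= gamma
            a = random.uniform(0.0, 1.0)  # default (random) rollout policy
        hoo.update(path, ret / norm)      # back the return up the HOO tree
    return hoo.select()[0]                # one more optimistic descent picks the action

Because HOO refines its partition around high-return regions of the action interval, no discretization grid has to be chosen in advance, which is exactly the manual step a discrete-action planner requires.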
Added 24 Aug 2011
Updated 24 Aug 2011
Type Conference
Year 2011
Where AIPS
Authors Christopher R. Mansley, Ari Weinstein, Michael L. Littman