Monte Carlo Value Iteration for Continuous-State POMDPs

Partially observable Markov decision processes (POMDPs) have been successfully applied to various robot motion planning tasks under uncertainty. However, most existing POMDP algorithms assume a discrete state space, while the natural state space of a robot is often continuous. This paper presents Monte Carlo Value Iteration (MCVI) for continuous-state POMDPs. MCVI samples both a robot's state space and the corresponding belief space, and avoids inefficient a priori discretization of the state space as a grid. Both theoretical results and preliminary experimental results indicate that MCVI is a promising new approach for robot motion planning under uncertainty.
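To illustrate the core idea described in the abstract, here is a minimal, hypothetical sketch of a Monte Carlo backup at a sampled particle belief. This is not the authors' MCVI algorithm (which represents the value function as a policy graph and backs it up over a sampled belief tree); it only shows how action values at a continuous-state belief can be estimated by sampling states and simulating, with no grid discretization. The toy 1-D motion, observation, and reward models are invented for the example.

```python
import random

# Hypothetical 1-D corridor POMDP, invented purely for illustration:
# state = robot position (float); actions move left (-1.0) or right (+1.0).

def transition(s, a):
    """Noisy motion model: intended move plus Gaussian noise."""
    return s + a + random.gauss(0.0, 0.1)

def reward(s, a):
    """Reward for being near the goal region around the origin."""
    return 1.0 if abs(s) < 0.5 else 0.0

def simulate(s, a, depth, gamma):
    """Roll out a trajectory under a fixed (here: random) continuation
    policy and return the discounted reward sum."""
    value, discount = 0.0, 1.0
    for _ in range(depth):
        value += discount * reward(s, a)
        s = transition(s, a)
        discount *= gamma
        a = random.choice([-1.0, 1.0])
    return value

def mc_backup(belief, actions, depth, n_sims=100, gamma=0.95):
    """One Monte Carlo backup: estimate each action's value at a particle
    belief by sampling states from the belief and simulating, instead of
    summing over an a priori state-space grid."""
    best_a, best_v = None, float("-inf")
    for a in actions:
        total = 0.0
        for _ in range(n_sims):
            s = random.choice(belief)   # sample a state from the belief
            total += simulate(s, a, depth, gamma)
        v = total / n_sims
        if v > best_v:
            best_a, best_v = a, v
    return best_a, best_v

# A particle belief: the robot's position is uncertain around the origin.
belief = [random.gauss(0.0, 1.0) for _ in range(200)]
action, value = mc_backup(belief, [-1.0, 1.0], depth=20)
```

Because both the belief (a particle set) and the value estimate (a simulation average) are sample-based, the same sketch works unchanged for any continuous state space where a generative model is available.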
Haoyu Bai, David Hsu, Wee Sun Lee, and Vien A. Ngo
Added: 06 May 2011
Updated: 06 May 2011
Type: Conference
Year: 2010
Where: WAFR