
A formal framework for robot learning and control under model uncertainty

While the Partially Observable Markov Decision Process (POMDP) provides a formal framework for the problem of robot control under uncertainty, it typically assumes a known and stationary model of the environment. In this paper, we study the problem of finding an optimal policy for controlling a robot in a partially observable domain where the model is not perfectly known and may change over time. We present an algorithm called MEDUSA that incrementally learns a POMDP model through queries while still optimizing a reward function. We demonstrate the effectiveness of the approach in a simple scenario in which a robot seeking a person has minimal a priori knowledge of its own sensor model and of the person's location.
Robin Jaulmes, Joelle Pineau, Doina Precup
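The abstract only names the approach, so the sketch below is a rough, assumption-laden illustration of what query-based POMDP model learning can look like in general: Dirichlet pseudo-counts are kept over an unknown transition model, and an oracle is queried when models sampled from the posterior disagree. The two-state toy domain, the disagreement test, and the query threshold are all invented for illustration; this is not the paper's MEDUSA algorithm.

```python
# Toy sketch of query-based model learning for a POMDP-style domain.
# Assumptions: a 2-state / 2-action domain, a sampling-based disagreement
# measure, and a fixed query threshold. Not the authors' MEDUSA method.

import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 2, 2

# Hidden "true" transition model the learner does not know (demo only).
true_T = np.array([[[0.9, 0.1], [0.2, 0.8]],     # action 0
                   [[0.5, 0.5], [0.7, 0.3]]])    # action 1

# Dirichlet pseudo-counts over T(s' | s, a), initialised to a uniform prior.
alpha = np.ones((N_ACTIONS, N_STATES, N_STATES))

def sample_model():
    """Draw one transition model from the current Dirichlet posterior."""
    T = np.empty_like(alpha)
    for a in range(N_ACTIONS):
        for s in range(N_STATES):
            T[a, s] = rng.dirichlet(alpha[a, s])
    return T

def disagreement(models):
    """Spread across sampled models; a large spread means high model uncertainty."""
    return np.mean(np.std(models, axis=0))

state = 0
for step in range(200):
    action = rng.integers(N_ACTIONS)                  # placeholder exploratory policy
    next_state = rng.choice(N_STATES, p=true_T[action, state])

    models = np.stack([sample_model() for _ in range(20)])
    if disagreement(models) > 0.05:                   # query threshold (assumed)
        # "Query": an oracle reveals the true transition just experienced,
        # so the corresponding Dirichlet count can be incremented.
        alpha[action, state, next_state] += 1.0

    state = next_state

print("Learned mean transition model T(. | s, a):")
print(alpha / alpha.sum(axis=2, keepdims=True))
```

Running the sketch shows the learned mean transition probabilities drifting toward the hidden model as queries accumulate; the same loop structure (sample models, measure disagreement, query only when uncertain) is the general idea behind learning a model online while acting.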
Type: Conference
Year: 2007
Where: ICRA