Sciweavers

98 search results - page 10 / 20
» Using Rewards for Belief State Updates in Partially Observab...
ICRA 2007 (IEEE)
A formal framework for robot learning and control under model uncertainty
While the Partially Observable Markov Decision Process (POMDP) provides a formal framework for the problem of robot control under uncertainty, it typically assumes a known and ...
Robin Jaulmes, Joelle Pineau, Doina Precup
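The POMDP framework this entry refers to rests on Bayesian belief updating: after taking action a and receiving observation o, the agent's belief over states is re-weighted by the transition and observation models. A minimal sketch of that discrete update, with a toy two-state model whose numbers are purely illustrative (not taken from the paper):

```python
# Discrete POMDP belief update: b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s)
def belief_update(b, T, O, a, o):
    """b: dict state -> prob; T[a][s][s2]: transition probs; O[a][s2][o]: obs probs."""
    unnorm = {}
    for s2 in b:
        predicted = sum(T[a][s][s2] * b[s] for s in b)  # prediction step
        unnorm[s2] = O[a][s2][o] * predicted            # correction step
    z = sum(unnorm.values())
    if z == 0:
        raise ValueError("observation has zero probability under this belief")
    return {s2: p / z for s2, p in unnorm.items()}      # normalize

# Toy two-state robot model (illustrative numbers only)
T = {"move": {"s0": {"s0": 0.9, "s1": 0.1},
              "s1": {"s0": 0.2, "s1": 0.8}}}
O = {"move": {"s0": {"obs0": 0.8, "obs1": 0.2},
              "s1": {"obs0": 0.3, "obs1": 0.7}}}
b = {"s0": 0.5, "s1": 0.5}
b2 = belief_update(b, T, O, "move", "obs0")  # belief shifts toward s0
```

The paper's point is that this update assumes T and O are known exactly; its contribution concerns what to do when they are not.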
AIPS 2003
Synthesis of Hierarchical Finite-State Controllers for POMDPs
We develop a hierarchical approach to planning for partially observable Markov decision processes (POMDPs) in which a policy is represented as a hierarchical finite-state control...
Eric A. Hansen, Rong Zhou
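A finite-state controller represents a POMDP policy without tracking beliefs at all: each controller node selects an action, and the observation received determines the next node. A minimal flat (non-hierarchical) sketch, with a hypothetical tiger-style controller whose nodes and labels are illustrative only:

```python
# A finite-state controller (FSC) policy for a POMDP: nodes pick actions,
# observations drive node transitions.  Toy controller, not from the paper.
class FSC:
    def __init__(self, action_of, next_node, start=0):
        self.action_of = action_of    # node -> action
        self.next_node = next_node    # (node, observation) -> node
        self.node = start

    def step(self, observation=None):
        """Advance on the latest observation (None at t=0) and return an action."""
        if observation is not None:
            self.node = self.next_node[(self.node, observation)]
        return self.action_of[self.node]

fsc = FSC(action_of={0: "listen", 1: "open-left"},
          next_node={(0, "hear-right"): 1, (0, "hear-left"): 0,
                     (1, "hear-right"): 0, (1, "hear-left"): 0})
a0 = fsc.step()                # node 0 -> "listen"
a1 = fsc.step("hear-right")    # transition to node 1 -> "open-left"
```

The hierarchical approach in the paper composes controllers like this one into nested abstract controllers; the flat execution loop above is just the base mechanism.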
ICRA 2008 (IEEE)
Bayesian reinforcement learning in continuous POMDPs with application to robot navigation
We consider the problem of optimal control in continuous and partially observable environments when the parameters of the model are not known exactly. Partially Observable Mark...
Stéphane Ross, Brahim Chaib-draa, Joelle Pi...
NIPS 2007
What makes some POMDP problems easy to approximate?
Point-based algorithms have been surprisingly successful in computing approximately optimal solutions for partially observable Markov decision processes (POMDPs) in high dimension...
David Hsu, Wee Sun Lee, Nan Rong
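The point-based algorithms this entry analyzes maintain a set of alpha-vectors and repeatedly apply a Bellman backup at sampled belief points. A minimal sketch of that single-belief backup, using a toy model whose sizes and numbers are illustrative assumptions, not taken from the paper:

```python
# One point-based value backup at belief b (the core step of point-based
# POMDP solvers).  States/actions/observations are indexed 0..n-1.
import numpy as np

def backup(b, Gamma, R, T, Z, gamma):
    """Return the best new alpha-vector at belief b.
    b: (S,) belief.  Gamma: list of (S,) alpha-vectors (current value fn).
    R: (A, S) rewards.  T: (A, S, S) transitions T[a, s, s'].
    Z: (A, S, O) observation probs Z[a, s', o].  gamma: discount."""
    best_val, best_alpha = -np.inf, None
    for a in range(R.shape[0]):
        g_a = R[a].astype(float)
        for o in range(Z.shape[2]):
            # g_{a,o}^alpha(s) = sum_{s'} T[a,s,s'] Z[a,s',o] alpha(s')
            cands = [T[a] @ (Z[a, :, o] * alpha) for alpha in Gamma]
            # keep the candidate with the highest value at this belief point
            g_a = g_a + gamma * max(cands, key=lambda g: b @ g)
        if b @ g_a > best_val:
            best_val, best_alpha = b @ g_a, g_a
    return best_alpha

# Toy model: 2 states, 2 actions, 2 observations (illustrative numbers only)
T = np.stack([np.eye(2), np.eye(2)])      # deterministic self-transitions
Z = np.full((2, 2, 2), 0.5)               # uninformative observations
R = np.array([[1.0, 0.0], [0.0, 2.0]])
Gamma = [np.zeros(2)]                     # initial value function V = 0
alpha = backup(np.array([0.5, 0.5]), Gamma, R, T, Z, 0.95)
```

Running backups only at a finite set of beliefs is what makes these methods tractable; the paper asks which problem properties keep that belief set small.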
AAAI 2006
Incremental Least Squares Policy Iteration for POMDPs
We present a new algorithm, called incremental least squares policy iteration (ILSPI), for finding the infinite-horizon stationary policy for partially observable Markov decision ...
Hui Li, Xuejun Liao, Lawrence Carin