Sciweavers

231 search results - page 21 / 47
» Active Learning in Partially Observable Markov Decision Proc...
PKDD
2010
Springer
Efficient Planning in Large POMDPs through Policy Graph Based Factorized Approximations
Partially observable Markov decision processes (POMDPs) are widely used for planning under uncertainty. In many applications, the huge size of the POMDP state space makes straightf...
Joni Pajarinen, Jaakko Peltonen, Ari Hottinen, Mik...
FLAIRS
2001
Probabilistic Planning for Behavior-Based Robots
Partially Observable Markov Decision Process models (POMDPs) have been applied to low-level robot control. We show how to use POMDPs differently, namely for sensor planning in the ...
Amin Atrash, Sven Koenig
ICRA
2010
IEEE
Multirobot coordination by auctioning POMDPs
We consider the problem of task assignment and execution in multirobot systems, by proposing a procedure for bid estimation in auction protocols. Auctions are of interest to mu...
Matthijs T. J. Spaan, Nelson Gonçalves, Jo...
CORR
2011
Springer
Active Markov Information-Theoretic Path Planning for Robotic Environmental Sensing
Recent research in multi-robot exploration and mapping has focused on sampling environmental fields, which are typically modeled using the Gaussian process (GP). Existing informa...
Kian Hsiang Low, John M. Dolan, Pradeep K. Khosla
GECCO
2009
Springer
Uncertainty handling CMA-ES for reinforcement learning
The covariance matrix adaptation evolution strategy (CMA-ES) has proven to be a powerful method for reinforcement learning (RL). Recently, the CMA-ES has been augmented with an ada...
Verena Heidrich-Meisner, Christian Igel