Sciweavers

74 search results - page 4 / 15
» Empirical Distributions of Beliefs Under Imperfect Observati...
AAAI 2008
A Variance Analysis for POMDP Policy Evaluation
Partially Observable Markov Decision Processes have been studied widely as a model for decision making under uncertainty, and a number of methods have been developed to find the s...
Mahdi Milani Fard, Joelle Pineau, Peng Sun
IJAR 2008
A definition of subjective possibility
Based on the setting of exchangeable bets, this paper proposes a subjectivist view of numerical possibility theory. It relies on the assumption that when an agent constructs a pr...
Didier Dubois, Henri Prade, Philippe Smets
ATAL 2008 (Springer)
Value-based observation compression for DEC-POMDPs
Representing agent policies compactly is essential for improving the scalability of multi-agent planning algorithms. In this paper, we focus on developing a pruning technique that...
Alan Carlin, Shlomo Zilberstein
RSS 2007
Active Policy Learning for Robot Planning and Exploration under Uncertainty
This paper proposes a simulation-based active policy learning algorithm for finite-horizon, partially observed sequential decision processes. The algorithm is tested i...
Ruben Martinez-Cantin, Nando de Freitas, Arnaud Do...
AAAI 2007
Scaling Up: Solving POMDPs through Value Based Clustering
Partially Observable Markov Decision Processes (POMDPs) provide an appropriately rich model for agents operating under partial knowledge of the environment. Since finding an opti...
Yan Virin, Guy Shani, Solomon Eyal Shimony, Ronen ...