Sciweavers

AAAI
2010
Compressing POMDPs Using Locality Preserving Non-Negative Matrix Factorization
Partially Observable Markov Decision Processes (POMDPs) are a well-established and rigorous framework for sequential decision-making under uncertainty. POMDPs are well-known to be...
Georgios Theocharous, Sridhar Mahadevan
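As a rough illustration of the compression idea behind the entry above, the sketch below applies plain non-negative matrix factorization to a set of sampled belief vectors; it is not the paper's locality-preserving variant (which adds a neighborhood-preserving constraint), and all sizes and data are made up.

```python
# Minimal sketch: compressing sampled POMDP belief vectors with plain NMF.
# The locality-preserving variant in the paper adds a graph-based constraint
# not shown here; the beliefs and dimensions below are illustrative only.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

n_beliefs, n_states, n_components = 200, 50, 8    # assumed sizes
beliefs = rng.random((n_beliefs, n_states))
beliefs /= beliefs.sum(axis=1, keepdims=True)     # each row is a distribution

nmf = NMF(n_components=n_components, init="nndsvda", max_iter=500)
W = nmf.fit_transform(beliefs)   # low-dimensional, non-negative belief codes
H = nmf.components_              # basis "prototype" beliefs

reconstruction = W @ H
print("mean reconstruction error:", np.abs(beliefs - reconstruction).mean())
```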
ATAL
2008
Springer
Not all agents are equal: scaling up distributed POMDPs for agent networks
Many applications of networks of agents, including mobile sensor networks, unmanned air vehicles, and autonomous underwater vehicles, involve hundreds of agents acting collaboratively und...
Janusz Marecki, Tapana Gupta, Pradeep Varakantham,...
ATAL
2008
Springer
Exploiting locality of interaction in factored Dec-POMDPs
Decentralized partially observable Markov decision processes (Dec-POMDPs) constitute an expressive framework for multiagent planning under uncertainty, but solving them is provabl...
Frans A. Oliehoek, Matthijs T. J. Spaan, Shimon Wh...
ATAL
2008
Springer
The permutable POMDP: fast solutions to POMDPs for preference elicitation
The ability for an agent to reason under uncertainty is crucial for many planning applications, since an agent rarely has access to complete, error-free information about its envi...
Finale Doshi, Nicholas Roy
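For context on the belief tracking that such preference-elicitation agents rely on, here is a minimal sketch of the standard discrete POMDP belief update; the transition and observation tables are toy values, not taken from the paper.

```python
# Minimal sketch of the standard POMDP belief update:
# b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s).
# All model numbers are toy values for illustration only.
import numpy as np

T = np.array([[[0.9, 0.1],    # T[a, s, s'] : transition probabilities
               [0.2, 0.8]]])
O = np.array([[[0.7, 0.3],    # O[a, s', o] : observation probabilities
               [0.4, 0.6]]])

def belief_update(b, a, o):
    predicted = b @ T[a]                   # sum_s T(s'|s,a) b(s)
    unnormalized = predicted * O[a][:, o]  # weight by observation likelihood
    return unnormalized / unnormalized.sum()

b = np.array([0.5, 0.5])
print(belief_update(b, a=0, o=1))
```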
FLAIRS
2008
State Space Compression with Predictive Representations
Current studies have demonstrated that the representational power of predictive state representations (PSRs) is at least equal to that of partially observable Markov decision p...
Abdeslam Boularias, Masoumeh T. Izadi, Brahim Chai...
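A minimal sketch of the linear PSR prediction-vector update that underlies such representations follows; the parameter matrices are illustrative placeholders, not a learned or derived model.

```python
# Minimal sketch of a linear PSR update: the state is a vector of predictions
# for a set of core tests, updated as p' = (p @ M_ao) / (p @ m_ao) after
# taking action a and observing o. Parameters below are placeholders only.
import numpy as np

M_ao = np.array([[0.6, 0.2],   # maps core-test predictions through (a, o)
                 [0.1, 0.5]])  # (illustrative values, not a real model)
m_ao = np.array([0.7, 0.4])    # prediction weights for the one-step test (a, o)

def psr_update(p, M, m):
    return (p @ M) / (p @ m)

p = np.array([0.5, 0.3])       # current predictions of the core tests
print(psr_update(p, M_ao, m_ao))
```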
AAAI
2007
Purely Epistemic Markov Decision Processes
Planning under uncertainty involves two distinct sources of uncertainty: uncertainty about the effects of actions and uncertainty about the current state of the world. The most wi...
Régis Sabbadin, Jérôme Lang, N...
AAAI
2008
A Variance Analysis for POMDP Policy Evaluation
Partially Observable Markov Decision Processes have been studied widely as a model for decision making under uncertainty, and a number of methods have been developed to find the s...
Mahdi Milani Fard, Joelle Pineau, Peng Sun
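The paper studies the variance of policy value estimates analytically; as a loose illustration of the quantity in question, the sketch below runs naive Monte Carlo evaluation of a fixed reactive policy on a made-up two-state environment and reports the return variance and standard error.

```python
# Minimal sketch: Monte Carlo evaluation of a fixed POMDP policy, reporting
# the value estimate together with its sampling variance and standard error.
# The toy environment and reactive policy are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
gamma, n_episodes, horizon = 0.95, 500, 40

def run_episode():
    """Toy 2-state, 2-observation environment with a fixed reactive policy."""
    s, ret, discount = 0, 0.0, 1.0
    for _ in range(horizon):
        o = s if rng.random() < 0.8 else 1 - s      # noisy observation
        a = o                                       # policy: act on last observation
        ret += discount * (1.0 if a == s else 0.0)  # reward for matching the state
        discount *= gamma
        s = s if rng.random() < 0.9 else 1 - s      # state may flip
    return ret

returns = np.array([run_episode() for _ in range(n_episodes)])
print("value estimate :", returns.mean())
print("return variance:", returns.var(ddof=1))
print("standard error :", returns.std(ddof=1) / np.sqrt(n_episodes))
```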
AAAI
2007
Scaling Up: Solving POMDPs through Value Based Clustering
Partially Observable Markov Decision Processes (POMDPs) provide an appropriately rich model for agents operating under partial knowledge of the environment. Since finding an opti...
Yan Virin, Guy Shani, Solomon Eyal Shimony, Ronen ...
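As a loose illustration of clustering belief points, the sketch below groups synthetic beliefs with plain Euclidean k-means; this is only a stand-in for the paper's value-based clustering criterion.

```python
# Minimal sketch: grouping sampled belief points with k-means. The paper's
# method clusters by value-function similarity; Euclidean k-means here is
# only a stand-in, and the belief samples are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
beliefs = rng.dirichlet(alpha=np.ones(10), size=300)   # 300 beliefs over 10 states

kmeans = KMeans(n_clusters=12, n_init=10, random_state=0).fit(beliefs)
print("cluster sizes:", np.bincount(kmeans.labels_))
# Planning can then back up one representative belief per cluster
# instead of every sampled point.
```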
ATAL
2006
Springer
Winning back the CUP for distributed POMDPs: planning over continuous belief spaces
Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are emerging as a popular approach for modeling multiagent systems, and many different algorithms ha...
Pradeep Varakantham, Ranjit Nair, Milind Tambe, Ma...
ATAL
2006
Springer
Decentralized planning under uncertainty for teams of communicating agents
Decentralized partially observable Markov decision processes (DEC-POMDPs) form a general framework for planning for groups of cooperating agents that inhabit a stochastic and part...
Matthijs T. J. Spaan, Geoffrey J. Gordon, Nikos A....