Sciweavers

ICTAI 2005 (IEEE)
Planning with POMDPs Using a Compact, Logic-Based Representation
Partially Observable Markov Decision Processes (POMDPs) provide a general framework for AI planning, but they lack the structure for representing real-world planning problems in a...
Chenggang Wang, James G. Schmolze
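
To make the framing concrete, here is a minimal sketch of the flat POMDP tuple such snippets refer to: explicit state, action, and observation sets with dense transition, observation, and reward arrays. Its size grows with the full state enumeration, which is the lack of structure that compact, logic-based representations aim to avoid. The field names are illustrative, not part of the cited paper.

# Flat (unstructured) POMDP: every quantity is enumerated explicitly.
from dataclasses import dataclass
import numpy as np

@dataclass
class FlatPOMDP:
    states: list          # enumerated hidden states
    actions: list
    observations: list
    T: np.ndarray         # T[a, s, s'] = P(s' | s, a)
    O: np.ndarray         # O[a, s', o] = P(o | s', a)
    R: np.ndarray         # R[a, s]     = expected immediate reward
    gamma: float = 0.95   # discount factor
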
IROS 2006 (IEEE)
Planning and Acting in Uncertain Environments using Probabilistic Inference
An important problem in robotics is planning and selecting actions for goal-directed behavior in noisy, uncertain environments. The problem is typically addressed within the fra...
Deepak Verma, Rajesh P. N. Rao
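
The belief update below is a minimal sketch of the inference step that planning under partial observability rests on: given a transition model, an observation model, a prior belief, an action, and an observation, Bayes' rule yields the posterior belief. The array layout is an assumption for illustration, not the inference framework of the cited paper.

import numpy as np

def belief_update(b, a, o, T, O):
    """
    b: (n_states,) current belief over hidden states
    T: (n_actions, n_states, n_states)  T[a, s, s'] = P(s' | s, a)
    O: (n_actions, n_states, n_obs)     O[a, s', o] = P(o | s', a)
    Returns the normalized posterior belief over next states.
    """
    predicted = b @ T[a]                # predictive distribution P(s' | b, a)
    posterior = predicted * O[a][:, o]  # weight by likelihood of the observation
    return posterior / posterior.sum()
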
ICANN 2007 (Springer)
Solving Deep Memory POMDPs with Recurrent Policy Gradients
Abstract. This paper presents Recurrent Policy Gradients, a model-free reinforcement learning (RL) method creating limited-memory stochastic policies for partially observable Markov...
Daan Wierstra, Alexander Förster, Jan Peters,...
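
As a rough illustration of the general idea of a recurrent policy trained by policy gradient, the sketch below trains a small GRU policy with REINFORCE on a toy memory task (repeat the bit seen two steps earlier). It assumes PyTorch; the network sizes, task, and hyperparameters are made up, and this is not the cited paper's algorithm.

import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    def __init__(self, obs_dim=2, hidden=16, n_actions=2):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim, hidden)
        self.head = nn.Linear(hidden, n_actions)
        self.hidden = hidden

    def forward(self, obs, h):
        h = self.rnn(obs, h)                       # update internal memory
        return torch.distributions.Categorical(logits=self.head(h)), h

def run_episode(policy, T=10):
    """Reward 1 when the action matches the bit observed two steps earlier."""
    h = torch.zeros(1, policy.hidden)
    bits = torch.randint(0, 2, (T,))
    log_probs, rewards = [], []
    for t in range(T):
        obs = torch.nn.functional.one_hot(bits[t], 2).float().unsqueeze(0)
        dist, h = policy(obs, h)
        a = dist.sample()
        log_probs.append(dist.log_prob(a))
        rewards.append(1.0 if t >= 2 and a.item() == bits[t - 2].item() else 0.0)
    return torch.cat(log_probs), torch.tensor(rewards)

policy = RecurrentPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
for episode in range(500):
    log_probs, rewards = run_episode(policy)
    # reward-to-go returns, then the REINFORCE objective
    returns = torch.flip(torch.cumsum(torch.flip(rewards, [0]), 0), [0])
    loss = -(log_probs * returns).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
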
ECML 2007 (Springer)
Safe Q-Learning on Complete History Spaces
In this article, we present an idea for solving deterministic partially observable Markov decision processes (POMDPs) based on a history space containing sequences of past observat...
Stephan Timmer, Martin Riedmiller
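
A minimal sketch of Q-learning over complete observation-action histories, the general idea the snippet describes: the Q-table is keyed by the entire history rather than by a hidden state. The env.reset()/env.step() interface and the discrete action list are assumptions; the safety guarantees of the cited paper are not reproduced here.

import random
from collections import defaultdict

def q_learning_on_histories(env, actions, episodes=1000,
                            alpha=0.1, gamma=0.95, epsilon=0.1):
    Q = defaultdict(float)                      # Q[(history, action)]
    for _ in range(episodes):
        obs = env.reset()
        history = (obs,)                        # the history itself is the key
        done = False
        while not done:
            if random.random() < epsilon:
                a = random.choice(actions)      # explore
            else:
                a = max(actions, key=lambda x: Q[(history, x)])
            obs, reward, done = env.step(a)     # assumed interface
            next_history = history + (a, obs)
            best_next = 0.0 if done else max(Q[(next_history, x)] for x in actions)
            Q[(history, a)] += alpha * (reward + gamma * best_next - Q[(history, a)])
            history = next_history
    return Q
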
ATAL 2007 (Springer)
Subjective approximate solutions for decentralized POMDPs
Planning for cooperative teams under uncertainty is a crucial problem in multiagent systems. Decentralized partially observable Markov decision processes (Dec-POMDPs) prov...
Anton Chechetka, Katia P. Sycara
ATAL 2007 (Springer)
Graphical models for online solutions to interactive POMDPs
We develop a new graphical representation for interactive partially observable Markov decision processes (I-POMDPs) that is significantly more transparent and semantically clear t...
Prashant Doshi, Yifeng Zeng, Qiongyu Chen
ATAL 2007 (Springer)
Letting loose a SPIDER on a network of POMDPs: generating quality guaranteed policies
Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are a popular approach for modeling multi-agent systems acting in uncertain domains. Given the signi...
Pradeep Varakantham, Janusz Marecki, Yuichi Yabu, ...
ICRA 2008 (IEEE)
An approximate algorithm for solving oracular POMDPs
Abstract. We propose a new approximate algorithm, LAJIV (Lookahead J-MDP Information Value), to solve Oracular Partially Observable Markov Decision Problems (OPOMDPs), a special ...
Nicholas Armstrong-Crews, Manuela M. Veloso
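
The name suggests a lookahead that weighs the value of information against action values. As a hedged illustration of that kind of quantity (not the LAJIV algorithm itself), the sketch below computes the classic expected value of perfect information for a belief over hidden states.

import numpy as np

def expected_value_of_perfect_information(belief, Q):
    """
    belief: (n_states,) probability over hidden states
    Q:      (n_states, n_actions) action values if the state were known
    EVPI = E_s[max_a Q(s, a)] - max_a E_s[Q(s, a)]
    """
    value_knowing_state = np.dot(belief, Q.max(axis=1))  # act after the state is revealed
    value_acting_on_belief = (belief @ Q).max()          # commit to one action now
    return value_knowing_state - value_acting_on_belief

belief = np.array([0.6, 0.4])
Q = np.array([[10.0, 0.0],
              [0.0, 10.0]])
print(expected_value_of_perfect_information(belief, Q))  # 4.0: knowing the state is worth up to 4
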
ATAL 2009 (Springer)
Lossless clustering of histories in decentralized POMDPs
Decentralized partially observable Markov decision processes (Dec-POMDPs) constitute a generic and expressive framework for multiagent planning under uncertainty. However, plannin...
Frans A. Oliehoek, Shimon Whiteson, Matthijs T. J....
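
As a loose illustration of clustering histories, the sketch below merges observation histories that induce the same belief over the hidden state (compared up to rounding). The belief_of callable is hypothetical, and this is not the lossless criterion developed in the cited paper.

from collections import defaultdict

def cluster_histories(histories, belief_of):
    """histories: iterable of hashable histories; belief_of: history -> iterable of probs."""
    clusters = defaultdict(list)
    for h in histories:
        key = tuple(round(p, 10) for p in belief_of(h))  # histories with equal beliefs merge
        clusters[key].append(h)
    return list(clusters.values())
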
IROS 2009 (IEEE)
Bayesian reinforcement learning in continuous POMDPs with gaussian processes
Partially Observable Markov Decision Processes (POMDPs) provide a rich mathematical model to handle real-world sequential decision processes but require a known model to be solv...
Patrick Dallaire, Camille Besse, Stéphane R...
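
A minimal sketch of the model-learning step such Bayesian approaches build on: fitting a Gaussian process to observed transitions so that predictions come with uncertainty estimates. It assumes scikit-learn and uses toy data; it is an illustrative dynamics model, not the cited paper's full Bayesian RL algorithm.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy transition data: inputs are (state, action) pairs, targets are next states.
rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(50, 1))
actions = rng.uniform(-1, 1, size=(50, 1))
next_states = np.sin(states) + 0.5 * actions + 0.01 * rng.normal(size=(50, 1))

X = np.hstack([states, actions])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4)
gp.fit(X, next_states.ravel())

# Predictive mean and standard deviation for a new (state, action) pair;
# the standard deviation quantifies model uncertainty, which Bayesian RL exploits.
mean, std = gp.predict(np.array([[0.2, -0.3]]), return_std=True)
print(mean, std)
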