Sciweavers

332 search results - page 22 / 67
» Ranking policies in discrete Markov decision processes
AAAI 2008
Towards Faster Planning with Continuous Resources in Stochastic Domains
Agents often have to construct plans that obey resource limits for continuous resources whose consumption can only be characterized by probability distributions. While Markov Deci...
Janusz Marecki, Milind Tambe
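
The modeling issue this abstract raises can be made concrete with a small sketch: when an action's resource consumption is a probability distribution rather than a fixed cost, a plan's feasibility under a resource limit is itself a probability. The sketch below is only an illustration with invented numbers, not the planning algorithm of the paper.

# Tiny sketch of the modeling issue: each action's resource consumption is
# drawn from a distribution, so "does the plan respect the resource limit?"
# becomes a probability to estimate. The consumption model is hypothetical.
import random

def consumption():
    # hypothetical continuous consumption of a single action
    return random.gauss(mu=2.0, sigma=0.5)

def prob_plan_within_limit(num_actions, limit, samples=10_000):
    """Monte Carlo estimate of P(total consumption of the plan <= limit)."""
    ok = 0
    for _ in range(samples):
        total = sum(consumption() for _ in range(num_actions))
        if total <= limit:
            ok += 1
    return ok / samples

print(prob_plan_within_limit(num_actions=4, limit=9.0))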
AIPS 2003
Synthesis of Hierarchical Finite-State Controllers for POMDPs
We develop a hierarchical approach to planning for partially observable Markov decision processes (POMDPs) in which a policy is represented as a hierarchical finite-state control...
Eric A. Hansen, Rong Zhou
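
A finite-state controller of the kind mentioned in the abstract can be sketched in a few lines: a set of controller nodes, an action attached to each node, and observation-driven transitions between nodes. The toy below is a flat (non-hierarchical) controller with hypothetical node, action, and observation names; it illustrates the policy representation, not the hierarchical synthesis procedure of the paper.

# Minimal sketch of a finite-state controller (FSC) policy for a POMDP.
# Illustrative toy only; node/action/observation names are made up.
class FiniteStateController:
    def __init__(self, action_map, transition_map, start_node):
        self.action_map = action_map          # node -> action to execute
        self.transition_map = transition_map  # (node, observation) -> next node
        self.node = start_node

    def act(self, observation=None):
        # Follow the observation-driven edge (skipped on the very first step),
        # then emit the action attached to the current controller node.
        if observation is not None:
            self.node = self.transition_map[(self.node, observation)]
        return self.action_map[self.node]

# Usage: a two-node controller for a toy "listen until confident" POMDP.
fsc = FiniteStateController(
    action_map={"listen": "listen", "open": "open-left"},
    transition_map={
        ("listen", "hear-left"): "open",
        ("listen", "hear-right"): "listen",
        ("open", "hear-left"): "listen",
        ("open", "hear-right"): "listen",
    },
    start_node="listen",
)
print(fsc.act())              # first action, no observation yet
print(fsc.act("hear-left"))   # transition on the observation, then act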
PRIMA 2007 (Springer)
Multiagent Planning with Trembling-Hand Perfect Equilibrium in Multiagent POMDPs
Multiagent Partially Observable Markov Decision Processes are a popular model of multiagent systems with uncertainty. Since the computational cost for finding an optimal joint pol...
Yuichi Yabu, Makoto Yokoo, Atsushi Iwasaki
IJCAI 2003
Taming Decentralized POMDPs: Towards Efficient Policy Computation for Multiagent Settings
The problem of deriving joint policies for a group of agents that maximize some joint reward function can be modeled as a decentralized partially observable Markov decision proces...
Ranjit Nair, Milind Tambe, Makoto Yokoo, David V. ...
JAIR 2006
Solving Factored MDPs with Hybrid State and Action Variables
Efficient representations and solutions for large decision problems with continuous and discrete variables are among the most important challenges faced by the designers of automa...
Branislav Kveton, Milos Hauskrecht, Carlos Guestri...
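
To make the hybrid state setting of the abstract concrete, the sketch below discretizes the continuous state component onto a grid and runs plain value iteration over the resulting finite MDP. This is a naive baseline for illustration, with made-up dynamics and reward, and is not the factored solution method the paper proposes.

# Naive handling of a hybrid (discrete + continuous) state in an MDP:
# discretize the continuous component and run value iteration on the grid.
# Dynamics and reward below are hypothetical.
import numpy as np

levels = np.linspace(0.0, 1.0, 11)   # discretized continuous variable (e.g., a resource level)
modes = [0, 1]                       # discrete state variable
actions = [0, 1]                     # 0 = idle, 1 = work
gamma = 0.95

def step(mode, level, action):
    """Hypothetical dynamics: working yields reward but consumes the resource."""
    if action == 1 and level > 0.05:
        return 1 - mode, max(level - 0.1, 0.0), 1.0   # next mode, next level, reward
    return mode, min(level + 0.05, 1.0), 0.0

V = np.zeros((len(modes), len(levels)))
for _ in range(200):                  # value iteration on the discretized grid
    V_new = np.zeros_like(V)
    for m in modes:
        for i, lvl in enumerate(levels):
            q = []
            for a in actions:
                m2, lvl2, r = step(m, lvl, a)
                j = int(np.argmin(np.abs(levels - lvl2)))   # snap next level back to the grid
                q.append(r + gamma * V[m2, j])
            V_new[m, i] = max(q)
    V = V_new

print(V.round(2))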