Sciweavers

499 search results: Model Minimization in Markov Decision Processes
AIPS 2004
Optimal Resource Allocation and Policy Formulation in Loosely-Coupled Markov Decision Processes
The problem of optimal policy formulation for teams of resource-limited agents in stochastic environments is composed of two strongly coupled subproblems: a resource allocation pr...
Dmitri A. Dolgov, Edmund H. Durfee
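For orientation, here is a minimal single-agent sketch of the occupancy-measure linear program that resource-constrained MDP formulations build on, with the resource limit modeled as an expected-cost budget. It is an illustrative simplification, not the team-level allocation model the paper develops; the function name, array shapes, and budget semantics are all assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def constrained_mdp_policy(P, R, C, budget, gamma=0.95, alpha=None):
    """Occupancy-measure LP for a single agent under an expected resource budget.

    P: (|A|, |S|, |S|) transitions, R: (|S|, |A|) rewards, C: (|S|, |A|) resource
    costs, alpha: initial state distribution. Returns a randomized policy pi(a|s).
    """
    n_actions, n_states, _ = P.shape
    if alpha is None:
        alpha = np.full(n_states, 1.0 / n_states)

    n_vars = n_states * n_actions        # one variable x(s,a) per state-action pair
    c = -R.reshape(-1)                   # linprog minimizes, so negate rewards

    # Flow constraints: sum_a x(s',a) - gamma * sum_{s,a} P(s'|s,a) x(s,a) = alpha(s')
    A_eq = np.zeros((n_states, n_vars))
    for s in range(n_states):
        for a in range(n_actions):
            col = s * n_actions + a
            A_eq[s, col] += 1.0
            A_eq[:, col] -= gamma * P[a, s, :]

    A_ub = C.reshape(1, -1)              # expected resource usage <= budget
    b_ub = np.array([budget])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=alpha,
                  bounds=[(0, None)] * n_vars)
    x = res.x.reshape(n_states, n_actions)
    return x / x.sum(axis=1, keepdims=True)
```

The occupancy variables x(s,a) make both the reward objective and the resource constraint linear, which is why LP/MILP machinery is a natural fit for this class of problems.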
NIPS 2003
Linear Program Approximations for Factored Continuous-State Markov Decision Processes
Approximate linear programming (ALP) has emerged recently as one of the most promising methods for solving complex factored MDPs with finite state spaces. In this work we show th...
Milos Hauskrecht, Branislav Kveton
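As a concrete reference point, here is a sketch of the approximate linear program for a small flat-state MDP with user-supplied basis functions h_i. It enumerates every Bellman constraint explicitly, so it sidesteps the factored-representation machinery the paper is about; the function and parameter names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def alp_weights(P, R, H, gamma=0.95, alpha=None):
    """Approximate linear programming: fit V(s) ~ sum_i w_i * h_i(s).

    P: (|A|, |S|, |S|) transition probabilities, R: (|S|, |A|) rewards,
    H: (|S|, k) basis-function matrix, alpha: state-relevance weights.
    """
    n_actions, n_states, _ = P.shape
    k = H.shape[1]
    if alpha is None:
        alpha = np.full(n_states, 1.0 / n_states)

    c = alpha @ H                        # minimize sum_s alpha(s) * V_w(s)
    A_ub, b_ub = [], []
    for a in range(n_actions):
        # Bellman constraints: V_w(s) >= R(s,a) + gamma * E[V_w(s') | s, a]
        A_ub.append(-(H - gamma * P[a] @ H))
        b_ub.append(-R[:, a])
    res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
                  bounds=[(None, None)] * k)
    return res.x                         # weights w of the approximate value function
```

With H set to the identity matrix this reduces to the exact LP formulation of the MDP; the interest of ALP is that a small k keeps the number of variables manageable even when the state space is huge.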
AAAI 2007
Purely Epistemic Markov Decision Processes
Planning under uncertainty involves two distinct sources of uncertainty: uncertainty about the effects of actions and uncertainty about the current state of the world. The most wi...
Régis Sabbadin, Jérôme Lang, N...
ECAI 2010 (Springer)
On Finding Compromise Solutions in Multiobjective Markov Decision Processes
A Markov Decision Process (MDP) is a general model for solving planning problems under uncertainty. It has been extended to multiobjective MDP to address multicriteria or multiagen...
Patrice Perny, Paul Weng
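One standard way to single out a compromise among Pareto-optimal value vectors is an augmented weighted Chebyshev distance to the ideal point. The sketch below is illustrative only and is not claimed to be the exact criterion studied in the paper.

```python
import numpy as np

def chebyshev_compromise(value_vectors, weights=None, eps=1e-3):
    """Pick a compromise among candidate multiobjective value vectors.

    value_vectors: (n_policies, n_objectives) array, objectives to maximize.
    Returns the index of the vector minimizing the augmented weighted
    Chebyshev distance to the ideal point.
    """
    V = np.asarray(value_vectors, dtype=float)
    if weights is None:
        weights = np.ones(V.shape[1])
    ideal = V.max(axis=0)                 # best achievable value per objective
    gaps = weights * (ideal - V)          # regret w.r.t. the ideal point
    scores = gaps.max(axis=1) + eps * gaps.sum(axis=1)
    return int(np.argmin(scores))
```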
ECML 2005 (Springer)
Using Rewards for Belief State Updates in Partially Observable Markov Decision Processes
Partially Observable Markov Decision Processes (POMDPs) provide a standard framework for sequential decision making in stochastic environments. In this setting, an agent takes actio...
Masoumeh T. Izadi, Doina Precup
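For reference, this is the standard observation-driven belief update that this line of work starts from; the reward-based refinement the title refers to is not shown, and the array layouts and names are assumptions.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Standard POMDP belief update after taking action a and observing o.

    b: (|S|,) current belief, T: (|A|, |S|, |S|) with T[a, s, s'] = P(s'|s, a),
    O: (|A|, |S|, |Z|) with O[a, s', o] = P(o | s', a).
    """
    predicted = b @ T[a]               # sum_s b(s) * P(s'|s, a)
    unnorm = O[a, :, o] * predicted    # weight by observation likelihood
    return unnorm / unnorm.sum()       # renormalize to a probability distribution
```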