Sciweavers

683 search results for "Coarticulation in Markov Decision Processes" - page 80 / 137
PKDD 2010, Springer
Efficient Planning in Large POMDPs through Policy Graph Based Factorized Approximations
Partially observable Markov decision processes (POMDPs) are widely used for planning under uncertainty. In many applications, the huge size of the POMDP state space makes straightf...
Joni Pajarinen, Jaakko Peltonen, Ari Hottinen, Mik...
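
As background for the planning setting above, here is a minimal sketch of the standard POMDP belief update (the generic Bayes filter step, not the policy-graph factorized approximation the paper proposes); the transition model, observation model, and belief values below are illustrative assumptions.

```python
import numpy as np

# Generic POMDP belief update: b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s).
# The arrays below are illustrative placeholders, not taken from the paper.
def belief_update(belief, T, O, action, obs):
    """belief: (S,); T: (A, S, S) with T[a, s, s']; O: (A, S, num_obs) with O[a, s', o]."""
    predicted = belief @ T[action]            # predict: sum_s b(s) * T(s' | s, a)
    updated = predicted * O[action][:, obs]   # correct: weight by observation likelihood
    return updated / updated.sum()            # normalize to a probability distribution

# Tiny two-state, one-action, two-observation example.
T = np.array([[[0.9, 0.1], [0.2, 0.8]]])
O = np.array([[[0.7, 0.3], [0.1, 0.9]]])
b0 = np.array([0.5, 0.5])
print(belief_update(b0, T, O, action=0, obs=1))   # approx. [0.289, 0.711]
```
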
QEST 2010, IEEE
Reasoning about MDPs as Transformers of Probability Distributions
We consider Markov Decision Processes (MDPs) as transformers on probability distributions, where, with respect to a scheduler that resolves nondeterminism, the MDP can be seen as ex...
Vijay Anand Korthikanti, Mahesh Viswanathan, Gul A...
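
To make the "transformer" view concrete: once a memoryless scheduler fixes the nondeterministic choices, each MDP step maps a distribution over states to a new one via the induced stochastic matrix. The transition matrices and scheduler below are made-up illustrations, not from the paper.

```python
import numpy as np

# Viewing an MDP as a transformer of probability distributions:
# a scheduler (policy) resolves nondeterminism, inducing a Markov chain,
# and each step maps a distribution mu to mu @ P_pi.
P = {  # P[a][s, s']: transition probabilities for each nondeterministic choice a (illustrative)
    0: np.array([[0.5, 0.5], [0.0, 1.0]]),
    1: np.array([[1.0, 0.0], [0.3, 0.7]]),
}
scheduler = {0: 0, 1: 1}  # memoryless scheduler: state -> chosen action

# Induced stochastic matrix: P_pi[s, s'] = P[scheduler[s]][s, s'].
P_pi = np.array([P[scheduler[s]][s] for s in range(2)])

mu = np.array([1.0, 0.0])      # initial distribution over states
for _ in range(3):             # the MDP transforms mu step by step
    mu = mu @ P_pi
print(mu)
```
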
GLOBECOM 2008, IEEE
Foresighted Resource Reciprocation Strategies in P2P Networks
We consider peer-to-peer (P2P) networks, where multiple peers are interested in sharing content. While sharing resources, autonomous and self-interested peers need to make decis...
Hyunggon Park, Mihaela van der Schaar
CPAIOR 2008, Springer
Amsaa: A Multistep Anticipatory Algorithm for Online Stochastic Combinatorial Optimization
The one-step anticipatory algorithm (1s-AA) is an online algorithm making decisions under uncertainty by ignoring future non-anticipativity constraints. It makes near-optimal decis...
Luc Mercier, Pascal Van Hentenryck
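
A rough sketch of the anticipatory idea behind 1s-AA as described above: sample scenarios of the uncertain future, score each candidate decision by the value of the offline problem that ignores non-anticipativity constraints, and pick the best average. The `sample_scenario` and `offline_value` callables are hypothetical placeholders; per its title, the paper's Amsaa is the multistep counterpart of this one-step scheme.

```python
# One-step anticipatory decision sketch: evaluate each candidate decision against
# sampled future scenarios using an offline solver that ignores non-anticipativity,
# then choose the decision with the best average offline value.
# sample_scenario(state) and offline_value(state, decision, scenario) are hypothetical.
def anticipatory_decision(state, candidates, sample_scenario, offline_value, n_samples=50):
    scenarios = [sample_scenario(state) for _ in range(n_samples)]
    def avg_value(decision):
        return sum(offline_value(state, decision, sc) for sc in scenarios) / n_samples
    return max(candidates, key=avg_value)
```
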
NIPS 2008
MDPs with Non-Deterministic Policies
Markov Decision Processes (MDPs) have been extensively studied and used in the context of planning and decision-making, and many methods exist to find the optimal policy for probl...
Mahdi Milani Fard, Joelle Pineau
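
For context on what "finding the optimal policy" involves, here is a compact value-iteration sketch for a small finite MDP (a standard textbook method, not the non-deterministic-policy approach of this paper; the transition and reward arrays are illustrative).

```python
import numpy as np

# Standard value iteration for a finite MDP (illustrative data, not from the paper).
def value_iteration(T, R, gamma=0.95, tol=1e-8):
    """T: (A, S, S) transition probabilities, R: (A, S) expected immediate rewards."""
    A, S, _ = T.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * (T @ V)        # Q[a, s] = R[a, s] + gamma * sum_s' T[a, s, s'] * V[s']
        V_new = Q.max(axis=0)          # Bellman optimality backup
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)   # optimal values and a greedy (deterministic) policy
        V = V_new

T = np.array([[[0.8, 0.2], [0.0, 1.0]],
              [[0.1, 0.9], [0.5, 0.5]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
V, policy = value_iteration(T, R)
print(V, policy)
```
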