Sciweavers

135 search results - page 16 / 27
» Bounded Parameter Markov Decision Processes
JAIR 2008
Planning with Durative Actions in Stochastic Domains
Probabilistic planning problems are typically modeled as a Markov Decision Process (MDP). MDPs, while an otherwise expressive model, allow only for sequential, non-durative action...
Mausam, Daniel S. Weld
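
For context, a minimal value-iteration sketch of the standard sequential, non-durative MDP model that this abstract contrasts with durative actions. The two-state example below is purely illustrative and not taken from the paper.

```python
# Minimal sketch (not from the paper): a standard sequential, non-durative MDP
# solved by value iteration. States, actions, transitions, and rewards are
# illustrative placeholders.
states = ["s0", "s1"]
actions = ["a", "b"]
# P[s][a] -> list of (next_state, probability); R[s][a] -> immediate reward
P = {
    "s0": {"a": [("s0", 0.3), ("s1", 0.7)], "b": [("s0", 1.0)]},
    "s1": {"a": [("s1", 1.0)], "b": [("s0", 0.5), ("s1", 0.5)]},
}
R = {"s0": {"a": 0.0, "b": 1.0}, "s1": {"a": 2.0, "b": 0.0}}
gamma = 0.9

# Value iteration: V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
V = {s: 0.0 for s in states}
for _ in range(200):
    V = {
        s: max(
            R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
            for a in actions
        )
        for s in states
    }
print(V)
```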
CORR 2012 (Springer)
A Faster Algorithm for Solving One-Clock Priced Timed Games
One-clock priced timed games are a class of two-player, zero-sum, continuous-time games that was defined and thoroughly studied in previous works. We show that One-clock priced ti...
Thomas Dueholm Hansen, Rasmus Ibsen-Jensen, Peter ...
JMLR 2010
Adaptive Step-size Policy Gradients with Average Reward Metric
In this paper, we propose a novel adaptive step-size approach for policy gradient reinforcement learning. A new metric is defined for policy gradients that measures the effect of ...
Takamitsu Matsubara, Tetsuro Morimura, Jun Morimot...
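
As a rough illustration of adapting the step size in policy-gradient reinforcement learning, the sketch below applies a simple norm-based rule on a hypothetical bandit problem; it does not reproduce the metric proposed in the paper, and `base_lr` and the reward model are made up.

```python
# Illustrative sketch only: a REINFORCE-style policy-gradient update with a
# simple norm-based adaptive step size. This is not the paper's metric.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(3)                       # logits of a 3-armed softmax policy
true_rewards = np.array([0.2, 0.5, 0.9])  # hypothetical expected rewards
base_lr = 0.5

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(500):
    probs = softmax(theta)
    a = rng.choice(3, p=probs)
    r = true_rewards[a] + rng.normal(0, 0.1)  # noisy reward sample
    # REINFORCE gradient estimate: r * d/dtheta log pi(a | theta)
    grad_log = -probs
    grad_log[a] += 1.0
    grad = r * grad_log
    # Adaptive step size: shrink the update when the gradient estimate is large
    lr = base_lr / (1.0 + np.linalg.norm(grad))
    theta += lr * grad

print(softmax(theta))  # probability mass should shift toward the best arm
```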
UAI 2004
Solving Factored MDPs with Continuous and Discrete Variables
Although many real-world stochastic planning problems are more naturally formulated by hybrid models with both discrete and continuous variables, current state-of-the-art methods ...
Carlos Guestrin, Milos Hauskrecht, Branislav Kveto...
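
A minimal illustration of what a hybrid state with both discrete and continuous variables can look like in such models; the variable names, dynamics, and factored reward below are hypothetical and not taken from the paper.

```python
# Minimal illustration (not the paper's method): a hybrid state with discrete
# and continuous variables, and a reward that factors over them.
from dataclasses import dataclass
import random

@dataclass
class HybridState:
    machine_on: bool      # discrete state variable
    temperature: float    # continuous state variable

def reward(s: HybridState) -> float:
    # Factored reward: one term per state variable
    r_discrete = 1.0 if s.machine_on else 0.0
    r_continuous = -abs(s.temperature - 70.0) / 100.0
    return r_discrete + r_continuous

def step(s: HybridState, action: str) -> HybridState:
    # Stochastic hybrid dynamics: discrete flip plus continuous drift
    on = s.machine_on if action == "wait" else not s.machine_on
    temp = s.temperature + random.gauss(2.0 if on else -1.0, 0.5)
    return HybridState(on, temp)

s = HybridState(machine_on=False, temperature=65.0)
for _ in range(5):
    s = step(s, "toggle")
    print(s, reward(s))
```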
PKDD 2010 (Springer)
Efficient Planning in Large POMDPs through Policy Graph Based Factorized Approximations
Partially observable Markov decision processes (POMDPs) are widely used for planning under uncertainty. In many applications, the huge size of the POMDP state space makes straightf...
Joni Pajarinen, Jaakko Peltonen, Ari Hottinen, Mik...
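
For reference, the exact POMDP belief update whose cost grows with the size of the state space, which is the bottleneck the abstract alludes to. The two-state model below is illustrative only and unrelated to the paper's policy-graph-based approximation.

```python
# Illustrative sketch, not the paper's algorithm: the exact POMDP belief
# update (Bayes filter). T, O, and the two-state model are made up.
import numpy as np

# Two hidden states, transition model T[s, s'] and observation model O[s', o]
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
O = np.array([[0.7, 0.3],
              [0.4, 0.6]])

def belief_update(b, obs):
    """Exact Bayes filter: b'(s') is proportional to O(o|s') * sum_s T(s'|s) b(s)."""
    predicted = T.T @ b               # prediction step over all state pairs
    unnormalized = O[:, obs] * predicted
    return unnormalized / unnormalized.sum()

b = np.array([0.5, 0.5])
for obs in [0, 0, 1]:
    b = belief_update(b, obs)
    print(b)
```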