Solving Concurrent Markov Decision Processes
AAAI 2007
Scaling Up: Solving POMDPs through Value Based Clustering
Partially Observable Markov Decision Processes (POMDPs) provide an appropriately rich model for agents operating under partial knowledge of the environment. Since finding an opti...
Yan Virin, Guy Shani, Solomon Eyal Shimony, Ronen ...
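
For context on the entry above: value-based POMDP solvers, including clustering approaches, operate on belief states, and the sketch below shows the standard Bayesian belief update they build on. This is a generic illustration in NumPy, not the method of the paper; the array layout, variable names, and toy numbers are assumptions made for the example.

    import numpy as np

    def belief_update(b, a, o, T, Z):
        # b: (S,) current belief over hidden states
        # T: (A, S, S) transition model, T[a, s, s2] = P(s2 | s, a)
        # Z: (A, S, O) observation model, Z[a, s2, o] = P(o | s2, a)
        predicted = b @ T[a]               # predict: P(s2 | b, a)
        unnorm = Z[a, :, o] * predicted    # weight by observation likelihood
        return unnorm / unnorm.sum()       # normalize (assumes P(o | b, a) > 0)

    # Toy two-state, one-action, two-observation POMDP (made-up numbers).
    T = np.array([[[0.9, 0.1], [0.2, 0.8]]])
    Z = np.array([[[0.7, 0.3], [0.4, 0.6]]])
    b0 = np.array([0.5, 0.5])
    print(belief_update(b0, a=0, o=1, T=T, Z=Z))
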
ICML 2007
Automatic shaping and decomposition of reward functions
This paper investigates the problem of automatically learning how to restructure the reward function of a Markov decision process so as to speed up reinforcement learning. We begi...
Bhaskara Marthi
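
Since this entry concerns restructuring reward functions, the classical potential-based shaping form is useful background: adding gamma*phi(s') - phi(s) to the reward is known to preserve optimal policies (Ng, Harada and Russell, 1999). The sketch below is that generic construction, not the automatic-shaping algorithm of the paper; the grid-world potential and reward are hypothetical examples.

    def shaped_reward(reward, potential, gamma):
        # Potential-based shaping: adding gamma*phi(s') - phi(s) to the
        # reward leaves the optimal policy of the underlying MDP unchanged.
        def r_shaped(s, a, s_next):
            return reward(s, a, s_next) + gamma * potential(s_next) - potential(s)
        return r_shaped

    # Hypothetical grid-world potential: negative Manhattan distance to a goal.
    GOAL = (4, 4)
    potential = lambda s: -(abs(s[0] - GOAL[0]) + abs(s[1] - GOAL[1]))
    base_reward = lambda s, a, s_next: 1.0 if s_next == GOAL else 0.0
    reward = shaped_reward(base_reward, potential, gamma=0.95)
    print(reward((0, 0), "right", (1, 0)))
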
CoRR 2012
A Faster Algorithm for Solving One-Clock Priced Timed Games
One-clock priced timed games are a class of two-player, zero-sum, continuous-time games that was defined and thoroughly studied in previous work. We show that one-clock priced ti...
Thomas Dueholm Hansen, Rasmus Ibsen-Jensen, Peter ...
AIPS 1998
Solving Stochastic Planning Problems with Large State and Action Spaces
Planning methods for deterministic planning problems traditionally exploit factored representations to encode the dynamics of problems in terms of a set of parameters, e.g., the l...
Thomas Dean, Robert Givan, Kee-Eung Kim
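
The entry above refers to factored representations of dynamics. As a rough illustration of what "factored" means here, the sketch below computes a transition probability as a product of per-variable conditional probability tables, each depending only on a few parent variables. The data layout and the numbers are assumptions for this example, not the encoding used in the paper.

    def factored_transition_prob(state, next_state, action, factors):
        # factors: list of (var, parents, cpt) where
        # cpt[(action, parent_values)][next_value] is the probability of
        # next_state[var] given the action and the parents' current values.
        p = 1.0
        for var, parents, cpt in factors:
            parent_values = tuple(state[v] for v in parents)
            p *= cpt[(action, parent_values)][next_state[var]]
        return p

    # Two boolean variables; each next value depends on one parent (made up).
    cpt_x = {("a", (0,)): {0: 0.9, 1: 0.1}, ("a", (1,)): {0: 0.3, 1: 0.7}}
    cpt_y = {("a", (0,)): {0: 0.5, 1: 0.5}, ("a", (1,)): {0: 0.2, 1: 0.8}}
    factors = [("x", ["x"], cpt_x), ("y", ["x"], cpt_y)]
    s, s2 = {"x": 1, "y": 0}, {"x": 1, "y": 1}
    print(factored_transition_prob(s, s2, "a", factors))  # 0.7 * 0.8 = 0.56
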
UAI 2004
Solving Factored MDPs with Continuous and Discrete Variables
Although many real-world stochastic planning problems are more naturally formulated by hybrid models with both discrete and continuous variables, current state-of-the-art methods ...
Carlos Guestrin, Milos Hauskrecht, Branislav Kveto...
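
For the hybrid setting mentioned in the last entry, a transition model typically combines a conditional probability table for the discrete variables with a conditional Gaussian for the continuous ones. The sampler below is a minimal sketch of that idea with made-up dynamics and variable names; it is not the algorithm of the paper above.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_next_state(state, action):
        # Hybrid state: a discrete mode plus a continuous level.
        mode, level = state["mode"], state["level"]
        # Discrete part: the mode flips with an action-dependent probability.
        p_flip = 0.3 if action == "toggle" else 0.05
        new_mode = 1 - mode if rng.random() < p_flip else mode
        # Continuous part: linear-Gaussian drift gated by the new mode.
        drift = 0.5 if new_mode == 1 else -0.2
        new_level = level + drift + rng.normal(scale=0.1)
        return {"mode": new_mode, "level": new_level}

    print(sample_next_state({"mode": 0, "level": 1.0}, "toggle"))
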