Sciweavers

369 search results - page 3 / 74
Query: Global Optimization for Value Function Approximation
PKDD 2009 · Springer · Data Mining
Feature Selection for Value Function Approximation Using Bayesian Model Selection
Abstract. Feature selection in reinforcement learning (RL), i.e., choosing basis functions such that useful approximations of the unknown value function can be obtained, is one of th...
Tobias Jung, Peter Stone
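The abstract above concerns choosing basis functions for value-function approximation. As a point of reference (a minimal sketch on a hypothetical toy chain MDP, not taken from the paper; the polynomial basis is an illustrative choice), linear value-function approximation fits weights over fixed features:

```python
import numpy as np

# Toy 5-state chain MDP under the "always move right" policy, discount gamma.
# The exact value of state s is gamma^(steps remaining to the terminal state).
gamma = 0.9
n_states = 5
v_exact = np.array([gamma ** (n_states - 1 - s) for s in range(n_states)])

# Basis functions phi(s): polynomial features [1, x, x^2] (a hypothetical choice).
def phi(s):
    x = s / (n_states - 1)          # normalize state index into [0, 1]
    return np.array([1.0, x, x * x])

# Least-squares fit of weights w so that phi(s) @ w approximates V(s).
Phi = np.vstack([phi(s) for s in range(n_states)])
w, *_ = np.linalg.lstsq(Phi, v_exact, rcond=None)
v_approx = Phi @ w

# Small but nonzero error: three features cannot represent gamma^k exactly,
# which is why the choice of basis (feature selection) matters.
max_err = float(np.max(np.abs(v_approx - v_exact)))
```

The quality of `v_approx` depends entirely on the chosen basis; Bayesian model selection, as in the paper above, is one way to pick among candidate bases.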
AAAI 2008
Adaptive Importance Sampling with Automatic Model Selection in Value Function Approximation
Off-policy reinforcement learning is aimed at efficiently reusing data samples gathered in the past, which is an essential problem for physically grounded AI as experiments are us...
Hirotaka Hachiya, Takayuki Akiyama, Masashi Sugiya...
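The abstract above describes off-policy reuse of past samples via importance sampling. A minimal sketch of the underlying idea, on a hypothetical one-step two-action problem (the policies and rewards here are illustrative, not from the paper): rewards collected under a behavior policy b are reweighted by pi(a)/b(a) to estimate the value of a target policy pi.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-action bandit; behavior policy b collects the data, pi is evaluated.
b      = np.array([0.5, 0.5])    # data-collecting policy (hypothetical)
pi     = np.array([0.2, 0.8])    # target policy to evaluate
r_mean = np.array([1.0, 2.0])    # true mean reward per action

v_true = float(pi @ r_mean)      # true value under pi: 0.2*1 + 0.8*2 = 1.8

# Sample actions from b, then correct with importance weights pi(a)/b(a).
a = rng.choice(2, size=200_000, p=b)
r = r_mean[a] + rng.normal(0.0, 0.1, size=a.shape)
v_hat = float(np.mean((pi[a] / b[a]) * r))
```

The correction is unbiased, but its variance grows as pi and b diverge, which is the practical problem adaptive importance sampling addresses.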
AAAI 2006
Functional Value Iteration for Decision-Theoretic Planning with General Utility Functions
We study how to find plans that maximize the expected total utility for a given MDP, a planning objective that is important for decision making in high-stakes domains. The optimal...
Yaxin Liu, Sven Koenig
SIAMCO 2002
Consistent Approximations and Approximate Functions and Gradients in Optimal Control
As shown in [7], optimal control problems with either ODE or PDE dynamics can be solved efficiently using a setting of consistent approximations obtained by numerical discretizati...
Olivier Pironneau, Elijah Polak
NIPS 1993
Using Local Trajectory Optimizers to Speed Up Global Optimization in Dynamic Programming
Dynamic programming provides a methodology to develop planners and controllers for nonlinear systems. However, general dynamic programming is computationally intractable. We have ...
Christopher G. Atkeson
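The abstract above takes dynamic programming as its baseline. For reference, a minimal tabular value-iteration sketch on a hypothetical 1-D grid (an illustrative toy, not the paper's method) shows the Bellman backups that become intractable as the state space grows:

```python
import numpy as np

# Toy 1-D grid: states 0..4, action a in {-1, +1} moves left/right (clipped).
# Entering (or staying at) the rightmost state yields reward 1, else 0.
gamma = 0.9
n = 5

def step(s, a):
    """Deterministic transition: clipped move, reward on reaching state n-1."""
    s2 = min(max(s + a, 0), n - 1)
    return s2, (1.0 if s2 == n - 1 else 0.0)

# Bellman optimality backups; value iteration contracts with factor gamma,
# so 200 sweeps converge far beyond the tolerance used below.
V = np.zeros(n)
for _ in range(200):
    V = np.array([
        max(step(s, a)[1] + gamma * V[step(s, a)[0]] for a in (-1, +1))
        for s in range(n)
    ])

# Converged values: [7.29, 8.1, 9.0, 10.0, 10.0]; V(rightmost) = 1/(1-gamma).
```

Each sweep touches every state-action pair, which is exactly the cost that explodes for high-dimensional nonlinear systems and motivates local trajectory optimization.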