Sciweavers

509 search results - page 28 / 102
» Using Learning for Approximation in Stochastic Processes
NIPS 1998
Learning Nonlinear Dynamical Systems Using an EM Algorithm
The Expectation-Maximization (EM) algorithm is an iterative procedure for maximum likelihood parameter estimation from data sets with missing or hidden variables [2]. It has been app...
Zoubin Ghahramani, Sam T. Roweis
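The E/M alternation the abstract refers to can be illustrated on a much smaller problem than a nonlinear state-space model. The sketch below runs EM on a toy two-component Gaussian mixture, where the hidden variable is the component label; the mixture setting, data, and parameter choices are illustrative assumptions, not the paper's state-space algorithm.

```python
# Minimal EM sketch on a toy 1-D two-component Gaussian mixture.
# Hidden variable = component label; NOT the paper's nonlinear
# state-space algorithm, just the E-step / M-step alternation.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data drawn from two Gaussians (illustrative only).
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 300)])

# Initial guesses for means, standard deviations, mixing weights.
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
weights = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    resp = weights * dens
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step: re-estimate parameters from expected sufficient statistics.
    n_k = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / n_k
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    weights = n_k / len(x)

print("means:", mu, "stds:", sigma, "weights:", weights)
```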
ILP 2003 (Springer)
Graph Kernels and Gaussian Processes for Relational Reinforcement Learning
RRL is a relational reinforcement learning system based on Q-learning in relational state-action spaces. It aims to enable agents to learn how to act in an environment that has no ...
Thomas Gärtner, Kurt Driessens, Jan Ramon
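For context, the Q-learning rule that RRL builds on looks like the sketch below; the relational state representation, graph kernels, and Gaussian-process regressor that the paper substitutes for the Q-table are not shown, and the `env` interface (reset/step/actions) is a hypothetical one assumed for illustration.

```python
# Minimal tabular Q-learning sketch; RRL replaces the table with a
# kernel-based regressor over relational state-action pairs (not shown).
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    """env is a hypothetical interface: reset() -> state,
    actions(state) -> list of actions, step(action) -> (next_state, reward, done)."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            acts = env.actions(s)
            # Epsilon-greedy action selection.
            if random.random() < eps:
                a = random.choice(acts)
            else:
                a = max(acts, key=lambda act: Q[(s, act)])
            s_next, r, done = env.step(a)
            # Q-learning temporal-difference update.
            future = 0.0 if done else max(Q[(s_next, act)] for act in env.actions(s_next))
            Q[(s, a)] += alpha * (r + gamma * future - Q[(s, a)])
            s = s_next
    return Q
```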
NIPS 2008
Local Gaussian Process Regression for Real Time Online Model Learning
Learning in real-time applications, e.g., online approximation of the inverse dynamics model for model-based robot control, requires fast online regression techniques. Inspired by...
Duy Nguyen-Tuong, Matthias Seeger, Jan Peters
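As a rough illustration of the regression primitive involved, here is a minimal exact Gaussian process prediction with an RBF kernel in NumPy; the paper's actual contribution, partitioning the data into local models so predictions and updates stay fast enough for real-time control, is not reproduced, and the kernel hyperparameters are arbitrary.

```python
# Minimal exact GP regression sketch (RBF kernel); the paper's local
# partitioning for real-time online updates is not shown here.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between the row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    # Exact GP posterior mean and covariance at the test inputs.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s @ alpha
    cov = rbf_kernel(X_test, X_test) - K_s @ np.linalg.solve(K, K_s.T)
    return mean, cov

# Toy 1-D usage.
X = np.linspace(0.0, 5.0, 20)[:, None]
y = np.sin(X).ravel() + 0.05 * np.random.default_rng(0).normal(size=20)
mean, cov = gp_predict(X, y, np.linspace(0.0, 5.0, 100)[:, None])
```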
JAIR 2008
Planning with Durative Actions in Stochastic Domains
Probabilistic planning problems are typically modeled as Markov Decision Processes (MDPs). MDPs, while otherwise expressive, allow only for sequential, non-durative action...
Mausam, Daniel S. Weld
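Below is a minimal sketch of the baseline the abstract contrasts against: classic value iteration on a sequential, non-durative MDP. The transition encoding (state -> action -> list of (probability, next state, reward) triples) is an assumed toy format, and the paper's handling of durative, concurrent actions is not shown.

```python
# Minimal value-iteration sketch for a sequential, non-durative MDP.
def value_iteration(states, actions, transitions, gamma=0.95, tol=1e-6):
    """states: iterable of states; actions(s): available actions in s;
    transitions[s][a]: list of (probability, next_state, reward) triples.
    Terminal states are assumed to self-loop with zero reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman optimality backup over all actions available in s.
            best = max(
                sum(p * (r + gamma * V[s_next]) for p, s_next, r in transitions[s][a])
                for a in actions(s)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V
```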
ICML 2006 (IEEE)
Automatic basis function construction for approximate dynamic programming and reinforcement learning
We address the problem of automatically constructing basis functions for linear approximation of the value function of a Markov Decision Process (MDP). Our work builds on results ...
Philipp W. Keller, Shie Mannor, Doina Precup
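To make "linear approximation of the value function" concrete, here is a small TD(0) sketch with a linear function approximator. The feature map `phi` is assumed to be given, whereas constructing such basis functions automatically is precisely the problem the paper addresses, so this shows only the surrounding machinery.

```python
# Linear value-function approximation with TD(0); the feature map phi is
# assumed given (the paper is about constructing it automatically).
import numpy as np

def linear_td0(trajectories, phi, n_features, alpha=0.01, gamma=0.95):
    """trajectories: iterable of [(state, reward, next_state, done), ...] transitions;
    phi(state): length-n_features feature vector (hypothetical interface)."""
    w = np.zeros(n_features)
    for trajectory in trajectories:
        for s, r, s_next, done in trajectory:
            v = phi(s) @ w
            v_next = 0.0 if done else phi(s_next) @ w
            # TD(0) update of the linear weights toward the bootstrapped target.
            w += alpha * (r + gamma * v_next - v) * phi(s)
    return w
```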