Sciweavers

109 search results - page 8 / 22
» Policy teaching through reward function learning
AUSAI
1999
Springer
Q-Learning in Continuous State and Action Spaces
Abstract. Q-learning can be used to learn a control policy that maximises a scalar reward through interaction with the environment. Q-learning is commonly applied to problems with d...
Chris Gaskett, David Wettergreen, Alexander Zelins...
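As context for this entry, below is a minimal sketch of the standard tabular Q-learning update that the abstract refers to; the paper itself extends Q-learning to continuous state and action spaces, which this sketch does not attempt. The env interface (reset/step/actions) and the hyperparameters are illustrative assumptions, not taken from the paper.

# Minimal tabular Q-learning sketch; env.reset(), env.step(a) -> (s', r, done)
# and env.actions are assumed conventions for illustration only.
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = defaultdict(float)                      # Q[(state, action)] -> value
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.choice(env.actions)
            else:
                a = max(env.actions, key=lambda a_: Q[(s, a_)])
            s_next, r, done = env.step(a)
            # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            target = r + gamma * max(Q[(s_next, a_)] for a_ in env.actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s_next
    return Q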
ATAL
2010
Springer
Cultivating desired behaviour: policy teaching via environment-dynamics tweaks
In this paper we study, for the first time explicitly, the implications of endowing an interested party (i.e. a teacher) with the ability to modify the underlying dynamics of the ...
Zinovi Rabinovich, Lachlan Dufton, Kate Larson, Ni...
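The abstract describes a teacher that can tweak the environment's dynamics to cultivate desired behaviour in the learner. The snippet below only illustrates that idea under assumed conventions (a tabular MDP transition tensor and a simple mixing rule); it is not the authors' mechanism.

# Illustrative sketch only (not the paper's algorithm): a "teacher" nudges an
# MDP's transition dynamics toward a state it wants the learner to visit.
# The MDP representation and the mixing scheme below are assumptions.
import numpy as np

def tweak_dynamics(T, desired_state, strength=0.2):
    """T: array of shape (S, A, S) with T[s, a, s'] = P(s' | s, a).
    Returns a modified transition model that shifts probability mass
    toward `desired_state` while keeping each row a valid distribution."""
    S, A, _ = T.shape
    bias = np.zeros(S)
    bias[desired_state] = 1.0
    T_new = (1 - strength) * T + strength * bias        # broadcast over (S, A, S)
    return T_new / T_new.sum(axis=2, keepdims=True)     # renormalise rows

The mixing weight `strength` controls how strongly the teacher biases transitions toward the desired state; renormalising keeps every T_new[s, a, :] a proper probability distribution.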
ACL
2009
Reinforcement Learning for Mapping Instructions to Actions
In this paper, we present a reinforcement learning approach for mapping natural language instructions to sequences of executable actions. We assume access to a reward function tha...
S. R. K. Branavan, Harr Chen, Luke S. Zettlemoyer,...
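To make the setup concrete, here is a minimal REINFORCE-style sketch of learning to map an instruction to a sequence of actions from a scalar reward alone. The feature function, environment interface, and reward call are placeholder assumptions and do not reproduce the paper's system.

# Minimal policy-gradient sketch: a log-linear policy over actions conditioned
# on (instruction, state) features, updated from a whole-sequence reward.
# features(), env.reset/execute/reward and env.actions are assumed placeholders.
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def reinforce_step(theta, features, env, instruction, lr=0.01):
    """theta: numpy weight vector; features(instruction, state, action) -> vector."""
    state = env.reset(instruction)
    trajectory, done = [], False
    while not done:
        scores = np.array([theta @ features(instruction, state, a)
                           for a in env.actions])
        probs = softmax(scores)
        a_idx = np.random.choice(len(env.actions), p=probs)
        trajectory.append((state, a_idx, probs))
        state, done = env.execute(env.actions[a_idx])
    reward = env.reward()                      # scalar feedback for the whole sequence
    for state, a_idx, probs in trajectory:     # policy-gradient update
        expected = sum(p * features(instruction, state, a)
                       for p, a in zip(probs, env.actions))
        grad = features(instruction, state, env.actions[a_idx]) - expected
        theta += lr * reward * grad
    return theta, reward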
PKDD
2010
Springer
Efficient Planning in Large POMDPs through Policy Graph Based Factorized Approximations
Partially observable Markov decision processes (POMDPs) are widely used for planning under uncertainty. In many applications, the huge size of the POMDP state space makes straightf...
Joni Pajarinen, Jaakko Peltonen, Ari Hottinen, Mik...
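For orientation, the snippet below shows the exact POMDP belief update whose cost grows with the size of the state space; the paper's policy-graph based factorized approximation is a way around that cost and is not what this sketch implements. The tabular transition and observation arrays are assumed for illustration.

# Standard POMDP belief update (Bayes filter over hidden states).
import numpy as np

def belief_update(b, a, o, T, Z):
    """b: belief over states, shape (S,)
    T[a]: transition matrix, shape (S, S), T[a][s, s'] = P(s' | s, a)
    Z[a]: observation matrix, shape (S, O), Z[a][s', o] = P(o | s', a)."""
    predicted = b @ T[a]              # predict next-state distribution
    updated = predicted * Z[a][:, o]  # weight by observation likelihood
    return updated / updated.sum()    # normalise to a proper belief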
FLAIRS
2004
State Space Reduction For Hierarchical Reinforcement Learning
This paper provides new techniques for abstracting the state space of a Markov Decision Process (MDP). These techniques extend one of the recent minimization models, known as ε-reduction, ...
Mehran Asadi, Manfred Huber
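As a rough illustration of state-space abstraction for an MDP, the sketch below greedily groups states whose one-step rewards and transition rows differ by at most eps. It is only in the spirit of ε-style reduction and is not the algorithm from this paper.

# Illustrative only: crude state aggregation by reward/transition similarity.
import numpy as np

def aggregate_states(T, R, eps=0.05):
    """T: transitions, shape (S, A, S); R: rewards, shape (S, A).
    Returns a list mapping each state to a block index."""
    S = T.shape[0]
    blocks, labels = [], [-1] * S
    for s in range(S):
        for i, rep in enumerate(blocks):
            close_r = np.max(np.abs(R[s] - R[rep])) <= eps
            close_t = np.max(np.abs(T[s] - T[rep])) <= eps
            if close_r and close_t:
                labels[s] = i                # s joins an existing block
                break
        else:
            blocks.append(s)                 # s becomes the representative
            labels[s] = len(blocks) - 1      # of a new block
    return labels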