Sciweavers

495 search results - page 83 / 99
» Constructing States for Reinforcement Learning
ICML
2010
IEEE
Inverse Optimal Control with Linearly-Solvable MDPs
We present new algorithms for inverse optimal control (or inverse reinforcement learning, IRL) within the framework of linearly-solvable MDPs (LMDPs). Unlike most prior IRL algorit...
Dvijotham Krishnamurthy, Emanuel Todorov
IJCNN
2006
IEEE
Knowledge Representation and Possible Worlds for Neural Networks
The semantics of neural networks can be analyzed mathematically as a distributed system of knowledge and as systems of possible worlds expressed in the knowledge. Learning in a...
Michael J. Healy, Thomas P. Caudell
IJAR
2008
Adapting Bayes network structures to non-stationary domains
When an incremental structural learning method gradually modifies a Bayesian network (BN) structure to fit observations, as they are read from a database, we call the process stru...
Søren Holbech Nielsen, Thomas D. Nielsen
ATAL
2010
Springer
Linear options
Learning, planning, and representing knowledge in large state spaces at multiple levels of temporal abstraction are key, long-standing challenges for building flexible autonomous agents. ...
Jonathan Sorg, Satinder P. Singh
SARA
2005
Springer
Feature-Discovering Approximate Value Iteration Methods
Sets of features in Markov decision processes can play a critical role in approximately representing value and in abstracting the state space. Selection of features is crucial to the succe...
Jia-Hong Wu, Robert Givan
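The entry above concerns approximating value functions with state features. As a hypothetical illustration of that general idea (not Wu and Givan's actual method), the sketch below runs approximate value iteration on a tiny 4-state chain MDP, representing the value function as a linear combination of hand-chosen features; the MDP, the features, and all names are assumptions for the example.

```python
import numpy as np

# Hypothetical sketch: approximate value iteration with linear features
# on a toy 4-state chain MDP (not the algorithm from the paper above).

n_states = 4
gamma = 0.9

# Deterministic dynamics: state i moves to i+1, last state is absorbing.
next_state = np.array([1, 2, 3, 3])
reward = np.array([0.0, 0.0, 0.0, 1.0])  # reward only in the last state

# Feature matrix Phi: one row per state (normalized position + bias term).
phi = np.array([[s / 3.0, 1.0] for s in range(n_states)])

w = np.zeros(2)  # feature weights defining V(s) = phi(s) . w
for _ in range(200):
    v = phi @ w                              # current value estimates
    target = reward + gamma * v[next_state]  # one-step Bellman backup
    # Project the backed-up values onto the feature space (least squares).
    w, *_ = np.linalg.lstsq(phi, target, rcond=None)

v = phi @ w  # final approximate values, increasing along the chain
```

Because the true values (7.29, 8.1, 9.0, 10.0) are not exactly linear in the position feature, the fixed point is only an approximation; richer feature sets reduce that projection error, which is exactly the feature-selection problem the abstract refers to.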