Sciweavers

113 search results - page 4 / 23
» Learning Representation and Control in Continuous Markov Dec...
NIPS
2001
Predictive Representations of State
We show that states of a dynamical system can be usefully represented by multi-step, action-conditional predictions of future observations. State representations that are grounded...
Michael L. Littman, Richard S. Sutton, Satinder P....
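The abstract above describes representing the state of a dynamical system by action-conditional predictions of future observations. A minimal sketch of the linear-PSR state update, using invented toy parameters (`M_ao`, `m_ao`, and the core-test dimension are hypothetical, chosen only to make the mechanics concrete):

```python
import numpy as np

# Sketch of a linear predictive state representation (PSR) update.
# The state is a vector of predicted success probabilities for k "core tests"
# (action-conditional sequences of future observations), not a distribution
# over latent states.

def psr_update(state, M_ao, m_ao):
    """Update the prediction vector after taking action a and observing o.

    state : (k,) predictions for the k core tests given the history so far
    M_ao  : (k, k) update matrix for this (action, observation) pair
    m_ao  : (k,)   weights with state @ m_ao = Pr(o | history, a)
    """
    p_o = state @ m_ao            # probability of the observation just seen
    return (state @ M_ao) / p_o   # predictions conditioned on the new history

# Toy, internally consistent parameters (illustrative only)
state = np.array([0.5, 0.5])
M_ao = 0.5 * np.eye(2)
m_ao = np.array([0.5, 0.5])
new_state = psr_update(state, M_ao, m_ao)
```

Each update is a linear map followed by renormalisation by the observation probability, mirroring a belief-state update but grounded entirely in observable quantities.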
ICRA
2008
IEEE
Bayesian reinforcement learning in continuous POMDPs with application to robot navigation
We consider the problem of optimal control in continuous and partially observable environments when the parameters of the model are not known exactly. Partially Observable Mark...
Stéphane Ross, Brahim Chaib-draa, Joelle Pi...
PAMI
2007
Value-Directed Human Behavior Analysis from Video Using Partially Observable Markov Decision Processes
This paper presents a method for learning decision theoretic models of human behaviors from video data. Our system learns relationships between the movements of a person, the co...
Jesse Hoey, James J. Little
ATAL
2008
Springer
Controlling deliberation in a Markov decision process-based agent
Meta-level control manages the allocation of limited resources to deliberative actions. This paper discusses efforts in adding meta-level control capabilities to a Markov Decision...
George Alexander, Anita Raja, David J. Musliner
AAAI
2006
Hard Constrained Semi-Markov Decision Processes
In multiple-criteria Markov Decision Processes (MDPs), where multiple costs are incurred at every decision point, current methods minimise the expected primary cost ...
Wai-Leong Yeow, Chen-Khong Tham, Wai-Choong Wong