Sciweavers

231 search results - page 34 / 47
» Active Learning in Partially Observable Markov Decision Proc...
HICSS
2003
IEEE
Issues in Rational Planning in Multi-Agent Settings
We adopt the decision-theoretic principle of expected utility maximization as a paradigm for designing autonomous rational agents operating in multi-agent environments. We use the...
Piotr J. Gmytrasiewicz
CDC
2008
IEEE
A density projection approach to dimension reduction for continuous-state POMDPs
Research on numerical solution methods for partially observable Markov decision processes (POMDPs) has primarily focused on discrete-state models, and these algorithms ...
Enlu Zhou, Michael C. Fu, Steven I. Marcus
ACL
2008
Mixture Model POMDPs for Efficient Handling of Uncertainty in Dialogue Management
In spoken dialogue systems, Partially Observable Markov Decision Processes (POMDPs) provide a formal framework for making dialogue management decisions under uncertainty, but effi...
James Henderson, Oliver Lemon
CSL
2010
Springer
Bayesian update of dialogue state: A POMDP framework for spoken dialogue systems
This paper describes a statistically motivated framework for performing real-time dialogue state updates and policy learning in a spoken dialogue system. The framework is based on...
Blaise Thomson, Steve Young
ICRA
2010
IEEE
Variable resolution decomposition for robotic navigation under a POMDP framework
Partially Observable Markov Decision Processes (POMDPs) offer a powerful mathematical framework for making optimal action choices in noisy and/or uncertain environments, in par...
Robert Kaplow, Amin Atrash, Joelle Pineau