Sciweavers

CSL
2007
Springer
Partially observable Markov decision processes for spoken dialog systems
In a spoken dialog system, determining which action a machine should take in a given situation is a difficult problem because automatic speech recognition is unreliable, and hence ...
Jason D. Williams, Steve Young
CSL
2010
Springer
Bayesian update of dialogue state: A POMDP framework for spoken dialogue systems
This paper describes a statistically motivated framework for performing real-time dialogue state updates and policy learning in a spoken dialogue system. The framework is based on...
Blaise Thomson, Steve Young
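Several of the entries on this page (Williams & Young; Thomson & Young) center on Bayesian belief-state updates in POMDPs. For reference, the standard update is b'(s') ∝ P(o | s', a) Σ_s P(s' | s, a) b(s). A minimal NumPy sketch of this filter follows; the array layout and names are illustrative only, not taken from any of the cited papers:

```python
import numpy as np

def belief_update(b, T, O, a, o):
    """Standard POMDP Bayes filter (illustrative sketch).

    b: belief over states, shape (S,)
    T: transition probabilities, shape (A, S, S), T[a, s, s2] = P(s2 | s, a)
    O: observation probabilities, shape (A, S, N), O[a, s2, o] = P(o | s2, a)
    a: index of the action just taken
    o: index of the observation just received
    """
    # Predict: marginalize over previous states, sum_s T(s, a, s') b(s)
    predicted = b @ T[a]
    # Correct: weight each state by the likelihood of the observation
    unnormalized = O[a][:, o] * predicted
    # Renormalize so the belief sums to one
    return unnormalized / unnormalized.sum()
```

The same two-step predict/correct structure underlies the real-time dialogue-state updates described in the abstracts above; scaling it to realistic state spaces is exactly the challenge those papers address.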
AAAI
2010
Relational Partially Observable MDPs
Relational Markov Decision Processes (MDPs) are a useful abstraction for stochastic planning problems, since one can develop abstract solutions for them that are independent of domain size ...
Chenggang Wang, Roni Khardon
IROS
2009
IEEE
Bayesian reinforcement learning in continuous POMDPs with Gaussian processes
Partially Observable Markov Decision Processes (POMDPs) provide a rich mathematical model to handle real-world sequential decision processes but require a known model to be solv...
Patrick Dallaire, Camille Besse, Stéphane R...
ICML
2006
IEEE
Probabilistic inference for solving discrete and continuous state Markov Decision Processes
Inference in Markov Decision Processes has recently received interest as a means to infer the goals of an observed action (policy recognition) and as a tool to compute policies. ...
Marc Toussaint, Amos J. Storkey