Sciweavers

423 search results - page 44 / 85
» Multi-objective Model Checking of Markov Decision Processes
ATAL 2006 · Springer · 14 years 13 days ago
Winning back the CUP for distributed POMDPs: planning over continuous belief spaces
Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are evolving as a popular approach for modeling multiagent systems, and many different algorithms ha...
Pradeep Varakantham, Ranjit Nair, Milind Tambe, Ma...
UAI 2000 · 13 years 10 months ago
Approximately Optimal Monitoring of Plan Preconditions
Monitoring plan preconditions can allow for replanning when a precondition fails, generally far in advance of the point in the plan where the precondition is relevant. However, mo...
Craig Boutilier
ICASSP 2008 · IEEE · 14 years 3 months ago
Bayesian update of dialogue state for robust dialogue systems
This paper presents a new framework for accumulating beliefs in spoken dialogue systems. The technique is based on updating a Bayesian Network that represents the underlying state...
Blaise Thomson, Jost Schatzmann, Steve Young
JMLR 2010 · 125 views · 13 years 3 months ago
Variational methods for Reinforcement Learning
We consider reinforcement learning as solving a Markov decision process with unknown transition distribution. Based on interaction with the environment, an estimate of the transit...
Thomas Furmston, David Barber
IROS 2006 · IEEE · 121 views · 14 years 2 months ago
Planning and Acting in Uncertain Environments using Probabilistic Inference
An important problem in robotics is planning and selecting actions for goal-directed behavior in noisy, uncertain environments. The problem is typically addressed within the fra...
Deepak Verma, Rajesh P. N. Rao