Sciweavers

1138 search results - page 42 / 228
» Feature Markov Decision Processes
ACL
2000
14 years 9 days ago
Spoken Dialogue Management Using Probabilistic Reasoning
Spoken dialogue managers have benefited from using stochastic planners such as Markov Decision Processes (MDPs). However, so far, MDPs do not handle noisy and ambiguous speech well...
Nicholas Roy, Joelle Pineau, Sebastian Thrun
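The MDP framing referenced in this abstract can be illustrated with standard value iteration; the sketch below uses an invented three-state, two-action dialogue-manager MDP (all state names, transition probabilities, and rewards are placeholders, not taken from the paper).

```python
import numpy as np

# Hypothetical dialogue-manager MDP: 3 states, 2 actions (toy numbers).
# P[a, s, s'] = transition probability, R[s, a] = immediate reward.
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],  # action 0: "ask"
    [[0.9, 0.1, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],  # action 1: "confirm"
])
R = np.array([[-1.0, -2.0], [-1.0, -2.0], [0.0, 5.0]])
gamma = 0.95

V = np.zeros(3)
for _ in range(1000):
    # Q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] * V[s']
    Q = R + gamma * np.einsum('ast,t->sa', P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)  # greedy policy w.r.t. the converged values
print(V, policy)
```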
JAIR
2008
13 years 11 months ago
On the Qualitative Comparison of Decisions Having Positive and Negative Features
Making a decision is often a matter of listing and comparing positive and negative arguments. In such cases, the evaluation scale for decisions should be considered bipolar, that ...
Didier Dubois, Hélène Fargier, Jean-...
ICML
2004
IEEE
14 years 11 months ago
Utile distinction hidden Markov models
This paper addresses the problem of constructing good action selection policies for agents acting in partially observable environments, a class of problems generally known as Part...
Daan Wierstra, Marco Wiering
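In the partially observable setting this entry addresses, the agent cannot observe the state directly and typically maintains a belief distribution over states; a minimal Bayesian belief-update sketch is shown below (the matrices and function name are illustrative assumptions, not the paper's method).

```python
import numpy as np

def belief_update(b, a, o, P, O):
    """Bayes filter for a discrete POMDP.

    b : current belief over states, shape (S,)
    a : index of the action taken
    o : index of the observation received
    P : transition probabilities, P[a, s, s']
    O : observation probabilities, O[a, s', o]
    """
    # Predict: push the belief through the transition model.
    predicted = b @ P[a]                      # shape (S,)
    # Correct: weight by the observation likelihood, then renormalize.
    unnormalized = predicted * O[a][:, o]
    return unnormalized / unnormalized.sum()
```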
SIGMETRICS
2000
ACM
14 years 3 months ago
Using the exact state space of a Markov model to compute approximate stationary measures
We present a new approximation algorithm based on an exact representation of the state space S, using decision diagrams, and of the transition rate matrix R, using Kronecker algeb...
Andrew S. Miner, Gianfranco Ciardo, Susanna Donate...
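The Kronecker-algebraic idea mentioned in the abstract is that the rate matrix of a structured model can be kept as small per-component factors rather than one large explicit matrix; a toy numpy illustration follows (the component matrices are invented for the example).

```python
import numpy as np

# Two independent components with small local rate matrices (toy numbers).
R1 = np.array([[-1.0,  1.0],
               [ 2.0, -2.0]])
R2 = np.array([[-3.0,  3.0],
               [ 4.0, -4.0]])

# For independent components, the rate matrix on the product state space is
# the Kronecker sum  R = R1 (+) R2 = R1 x I + I x R2, built from the factors
# instead of being stored explicitly.
I1 = np.eye(R1.shape[0])
I2 = np.eye(R2.shape[0])
R = np.kron(R1, I2) + np.kron(I1, R2)

print(R.shape)  # (4, 4): product state space assembled from 2x2 factors
```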
UAI
2000
14 years 8 days ago
PEGASUS: A policy search method for large MDPs and POMDPs
We propose a new approach to the problem of searching a space of policies for a Markov decision process (MDP) or a partially observable Markov decision process (POMDP), given a mo...
Andrew Y. Ng, Michael I. Jordan
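A central trick in PEGASUS is to fix the random numbers used by the simulator ("scenarios") so that a policy's estimated value becomes a deterministic function of its parameters, which can then be handed to an ordinary optimizer. A rough sketch of that evaluation scheme is below; the toy dynamics, reward, and linear policy form are my own assumptions, not the paper's model.

```python
import numpy as np

def evaluate_policy(theta, scenarios, horizon=50, gamma=0.99):
    """Deterministic value estimate: each scenario is a pre-drawn noise sequence."""
    total = 0.0
    for noise in scenarios:                                   # noise shape: (horizon,)
        x, ret = 0.0, 0.0
        for t in range(horizon):
            a = np.clip(theta[0] * x + theta[1], -1.0, 1.0)   # toy linear policy
            x = 0.9 * x + a + 0.1 * noise[t]                  # toy stochastic dynamics
            ret += (gamma ** t) * (-x * x)                    # reward: stay near zero
        total += ret
    return total / len(scenarios)

rng = np.random.default_rng(0)
scenarios = rng.standard_normal((20, 50))   # drawn once, reused for every policy
# Because the scenarios are frozen, the objective is deterministic in theta and
# can be optimized with grid search, hill climbing, or any gradient-free method.
print(evaluate_policy(np.array([-0.5, 0.0]), scenarios))
```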