Sciweavers

1138 search results for "Feature Markov Decision Processes"

JAIR 2008
Planning with Durative Actions in Stochastic Domains
Probabilistic planning problems are typically modeled as a Markov Decision Process (MDP). MDPs, while an otherwise expressive model, allow only for sequential, non-durative action...
Mausam, Daniel S. Weld
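
The sequential, non-durative MDP model the abstract contrasts with can be made concrete by a small value-iteration sketch; the states, transitions, rewards, and discount below are illustrative assumptions, not taken from the paper.

```python
# Minimal value iteration for a finite MDP with sequential, non-durative
# actions -- the baseline model the abstract refers to.
# The toy transition and reward tables are assumed for illustration.

GAMMA = 0.95          # discount factor (assumed)
STATES = ["s0", "s1"]
ACTIONS = ["a", "b"]

# P[(s, a)] -> list of (next_state, probability); R[(s, a)] -> immediate reward
P = {
    ("s0", "a"): [("s0", 0.7), ("s1", 0.3)],
    ("s0", "b"): [("s1", 1.0)],
    ("s1", "a"): [("s1", 1.0)],
    ("s1", "b"): [("s0", 0.4), ("s1", 0.6)],
}
R = {("s0", "a"): 0.0, ("s0", "b"): 1.0, ("s1", "a"): 2.0, ("s1", "b"): 0.5}

def value_iteration(tol=1e-6):
    """Iterate the Bellman optimality backup until the values stop changing."""
    V = {s: 0.0 for s in STATES}
    while True:
        V_new = {
            s: max(
                R[(s, a)] + GAMMA * sum(p * V[s2] for s2, p in P[(s, a)])
                for a in ACTIONS
            )
            for s in STATES
        }
        if max(abs(V_new[s] - V[s]) for s in STATES) < tol:
            return V_new
        V = V_new

if __name__ == "__main__":
    print(value_iteration())
```
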
ATAL 2004, Springer
Interactive POMDPs: Properties and Preliminary Results
This paper presents properties and results of a new framework for sequential decision-making in multiagent settings called interactive partially observable Markov decision process...
Piotr J. Gmytrasiewicz, Prashant Doshi
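
Interactive POMDPs build on the ordinary POMDP belief update; the sketch below shows only that single-agent building block (the nested beliefs over other agents' models that define I-POMDPs are omitted), and the transition and observation tables are made-up examples.

```python
# Single-agent POMDP belief update, the building block that I-POMDPs extend
# with models of other agents. T, O, and the belief below are assumed toy values.

def belief_update(belief, action, observation, T, O):
    """b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s)."""
    new_belief = {}
    for s2 in belief:
        new_belief[s2] = O[(s2, action)].get(observation, 0.0) * sum(
            T[(s, action)].get(s2, 0.0) * belief[s] for s in belief
        )
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()} if norm > 0 else belief

if __name__ == "__main__":
    # Hypothetical weather domain: hidden state rain/dry, one action, noisy sensor.
    T = {("rain", "wait"): {"rain": 0.8, "dry": 0.2},
         ("dry", "wait"): {"rain": 0.1, "dry": 0.9}}
    O = {("rain", "wait"): {"wet": 0.9, "none": 0.1},
         ("dry", "wait"): {"wet": 0.2, "none": 0.8}}
    print(belief_update({"rain": 0.5, "dry": 0.5}, "wait", "wet", T, O))
```
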
CORR 2010
Efficient Approximation of Optimal Control for Markov Games
The success of probabilistic model checking for discrete-time Markov decision processes and continuous-time Markov chains has led to rich academic and industrial applications. The ...
Markus Rabe, Sven Schewe, Lijun Zhang
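
To give a rough sense of what optimal control for Markov games involves, the sketch below runs value iteration on a turn-based, two-player zero-sum stochastic game. It is a generic illustration under an assumed toy model, not the approximation algorithm the paper proposes.

```python
# Value iteration on a turn-based, two-player zero-sum Markov game:
# the MAX player owns some states, the MIN player the others.
# States, owners, transitions, and rewards are assumed toy values.

GAMMA = 0.9
OWNER = {"s0": "max", "s1": "min"}        # which player chooses the action in each state
ACTIONS = {"s0": ["a", "b"], "s1": ["a", "b"]}
P = {  # P[(s, a)] -> list of (next_state, probability)
    ("s0", "a"): [("s0", 0.5), ("s1", 0.5)],
    ("s0", "b"): [("s1", 1.0)],
    ("s1", "a"): [("s0", 1.0)],
    ("s1", "b"): [("s0", 0.3), ("s1", 0.7)],
}
R = {("s0", "a"): 1.0, ("s0", "b"): 0.0, ("s1", "a"): -1.0, ("s1", "b"): 0.5}

def game_value_iteration(tol=1e-6):
    """MAX states take the best action value, MIN states the worst."""
    V = {s: 0.0 for s in OWNER}
    while True:
        V_new = {}
        for s in OWNER:
            qs = [R[(s, a)] + GAMMA * sum(p * V[s2] for s2, p in P[(s, a)])
                  for a in ACTIONS[s]]
            V_new[s] = max(qs) if OWNER[s] == "max" else min(qs)
        if max(abs(V_new[s] - V[s]) for s in OWNER) < tol:
            return V_new
        V = V_new

if __name__ == "__main__":
    print(game_value_iteration())
```
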
ICASSP 2009, IEEE
Experimenting with a global decision tree for state clustering in automatic speech recognition systems
In modern automatic speech recognition systems, it is standard practice to cluster several logical hidden Markov model states into one physical, clustered state. Typically, the cl...
Jasha Droppo, Alex Acero
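
The clustering the abstract mentions is conventionally done by greedily splitting a pool of logical HMM states with phonetic questions so as to maximize a single-Gaussian log-likelihood gain. The sketch below illustrates that generic procedure on scalar sufficient statistics; the states, statistics, and question set are assumptions, and the paper's global-tree variant differs in its details.

```python
import math

# Greedy decision-tree split for HMM state clustering: each logical state carries
# single-Gaussian sufficient statistics (count, sum, sum of squares) for a scalar
# feature; a split is scored by the gain in pooled single-Gaussian log-likelihood.

def pooled_loglik(stats):
    """Max log-likelihood of modelling all frames in `stats` with one Gaussian."""
    n = sum(s["n"] for s in stats)
    total = sum(s["sum"] for s in stats)
    total_sq = sum(s["sumsq"] for s in stats)
    mean = total / n
    var = max(total_sq / n - mean * mean, 1e-6)   # variance floor keeps log defined
    return -0.5 * n * (math.log(2 * math.pi * var) + 1.0)

def best_split(states, questions):
    """Pick the question whose yes/no partition gives the largest likelihood gain."""
    base = pooled_loglik(list(states.values()))
    best = (None, 0.0)
    for q, members in questions.items():
        yes = [v for k, v in states.items() if k in members]
        no = [v for k, v in states.items() if k not in members]
        if not yes or not no:
            continue
        gain = pooled_loglik(yes) + pooled_loglik(no) - base
        if gain > best[1]:
            best = (q, gain)
    return best

if __name__ == "__main__":
    # Hypothetical triphone states with toy sufficient statistics.
    states = {
        "b-ih+t": {"n": 100, "sum": 120.0, "sumsq": 200.0},
        "p-ih+t": {"n": 80,  "sum": 100.0, "sumsq": 160.0},
        "b-ih+d": {"n": 90,  "sum": -50.0, "sumsq": 120.0},
    }
    questions = {"right-is-t?": {"b-ih+t", "p-ih+t"},
                 "left-is-b?": {"b-ih+t", "b-ih+d"}}
    print(best_split(states, questions))
```
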
ICTAI 1996, IEEE
Incremental Markov-Model Planning
This paper presents an approach to building plans using partially observable Markov decision processes. The approach begins with a base solution that assumes full observability. T...
Richard Washington
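
One common way to turn a fully observable "base solution" into behaviour under partial observability is the QMDP-style heuristic sketched below: solve the underlying MDP, then weight its Q-values by the current belief. This is a generic illustration of that starting point, not Washington's incremental refinement procedure, and the toy model is an assumption.

```python
# QMDP-style action selection: use Q-values from the fully observable MDP
# (the "base solution") and weight them by the current belief over states.
# The toy transition and reward model below is assumed for illustration.

GAMMA = 0.95
STATES = ["s0", "s1"]
ACTIONS = ["a", "b"]
P = {("s0", "a"): [("s0", 0.9), ("s1", 0.1)], ("s0", "b"): [("s1", 1.0)],
     ("s1", "a"): [("s1", 1.0)], ("s1", "b"): [("s0", 0.5), ("s1", 0.5)]}
R = {("s0", "a"): 0.0, ("s0", "b"): 1.0, ("s1", "a"): 2.0, ("s1", "b"): 0.0}

def solve_mdp_q(tol=1e-6):
    """Q-values of the underlying (fully observable) MDP via value iteration."""
    V = {s: 0.0 for s in STATES}
    while True:
        Q = {(s, a): R[(s, a)] + GAMMA * sum(p * V[s2] for s2, p in P[(s, a)])
             for s in STATES for a in ACTIONS}
        V_new = {s: max(Q[(s, a)] for a in ACTIONS) for s in STATES}
        if max(abs(V_new[s] - V[s]) for s in STATES) < tol:
            return Q
        V = V_new

def qmdp_action(belief, Q):
    """Pick the action maximizing the belief-weighted Q-value."""
    return max(ACTIONS, key=lambda a: sum(belief[s] * Q[(s, a)] for s in STATES))

if __name__ == "__main__":
    Q = solve_mdp_q()
    print(qmdp_action({"s0": 0.6, "s1": 0.4}, Q))
```
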