Probabilistic planning problems are typically modeled as a Markov Decision Process (MDP). MDPs, while an otherwise expressive model, allow only for sequential, non-durative action...
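To make the standard model concrete, the following is a minimal sketch of the kind of sequential MDP the abstract refers to, solved by value iteration; the states, actions, transition probabilities, and rewards are invented purely for illustration and are not taken from the paper.

```python
# Minimal value-iteration sketch for a standard (sequential, non-durative) MDP.
# transitions[s][a] = list of (next_state, probability); rewards[s][a] = immediate reward.
transitions = {
    "s0": {"a": [("s0", 0.5), ("s1", 0.5)], "b": [("s1", 1.0)]},
    "s1": {"a": [("s1", 1.0)],              "b": [("s0", 0.2), ("s1", 0.8)]},
}
rewards = {"s0": {"a": 0.0, "b": 1.0}, "s1": {"a": 2.0, "b": 0.0}}
gamma = 0.95  # discount factor

V = {s: 0.0 for s in transitions}
for _ in range(1000):  # iterate the Bellman optimality update to (approximate) convergence
    V = {
        s: max(
            rewards[s][a] + gamma * sum(p * V[s2] for s2, p in transitions[s][a])
            for a in transitions[s]
        )
        for s in transitions
    }

# Greedy policy with respect to the converged value function.
policy = {
    s: max(
        transitions[s],
        key=lambda a, s=s: rewards[s][a] + gamma * sum(p * V[s2] for s2, p in transitions[s][a]),
    )
    for s in transitions
}
print(V, policy)
```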
This paper presents properties and results of a new framework for sequential decision-making in multiagent settings called interactive partially observable Markov decision process...
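As a rough illustration of the core I-POMDP idea, the sketch below models an agent's belief as a distribution over "interactive states" that pair a physical state with a model of the other agent. All class and field names are assumptions chosen for illustration, not the paper's notation.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass(frozen=True)
class OtherAgentModel:
    """A (finitely nested) model of agent j: here, j's belief plus a frame identifier."""
    belief_over_states: Tuple[Tuple[str, float], ...]  # j's belief as (state, prob) pairs
    frame: str  # identifier for j's assumed reward/observation structure

@dataclass(frozen=True)
class InteractiveState:
    physical_state: str
    model_of_other: OtherAgentModel

# Agent i's belief: a probability distribution over interactive states.
Belief = Dict[InteractiveState, float]

def example_belief() -> Belief:
    m = OtherAgentModel(belief_over_states=(("s0", 0.5), ("s1", 0.5)), frame="frame_j1")
    return {
        InteractiveState("s0", m): 0.7,
        InteractiveState("s1", m): 0.3,
    }
```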
The success of probabilistic model checking for discrete-time Markov decision processes and continuous-time Markov chains has led to rich academic and industrial applications. The ...
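One core computation behind such model checkers is the probability of eventually reaching a target set of states. The sketch below approximates this for a small discrete-time Markov chain by fixed-point iteration; the chain itself is invented for illustration.

```python
# Reachability probability in a DTMC: iterate x(s) = sum_t P(s, t) * x(t),
# with x fixed to 1 on target states.
P = {  # P[s] = list of (successor, probability)
    "init": [("try", 0.9), ("lost", 0.1)],
    "try":  [("goal", 0.8), ("fail", 0.2)],
    "fail": [("try", 0.5), ("lost", 0.5)],
    "goal": [("goal", 1.0)],
    "lost": [("lost", 1.0)],
}
target = {"goal"}

x = {s: (1.0 if s in target else 0.0) for s in P}
for _ in range(10_000):  # a real model checker would also test for convergence
    x = {
        s: 1.0 if s in target else sum(p * x[t] for t, p in P[s])
        for s in P
    }
print({s: round(v, 4) for s, v in x.items()})
```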
In modern automatic speech recognition systems, it is standard practice to cluster several logical hidden Markov model states into one physical, clustered state. Typically, the cl...
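The sketch below illustrates state tying in this sense: several logical (context-dependent) HMM states map to one physical clustered state and therefore share one emission distribution. The triphone names, clustering, and Gaussian parameters are invented for illustration.

```python
import math

# logical state -> physical (clustered) state
state_map = {
    "a-b+a.s2": "phys_17",
    "o-b+a.s2": "phys_17",   # tied with the state above
    "a-b+o.s2": "phys_23",
}

# one shared emission model (here a single diagonal Gaussian) per physical state
emissions = {
    "phys_17": {"mean": [0.1, -0.3], "var": [1.0, 0.5]},
    "phys_23": {"mean": [0.7,  0.2], "var": [0.8, 0.9]},
}

def log_likelihood(logical_state, observation):
    """Score an observation against the emission model of the tied physical state."""
    g = emissions[state_map[logical_state]]
    return sum(
        -0.5 * (math.log(2 * math.pi * v) + (o - m) ** 2 / v)
        for o, m, v in zip(observation, g["mean"], g["var"])
    )

# Two tied logical states produce identical emission scores:
print(log_likelihood("a-b+a.s2", [0.0, 0.0]) == log_likelihood("o-b+a.s2", [0.0, 0.0]))
```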
This paper presents an approach to building plans using partially observable Markov decision processes. The approach begins with a base solution that assumes full observability. T...
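A common construction of this flavor is the QMDP rule: solve the underlying MDP as if the state were fully observable, then choose actions by weighting those Q-values with the current belief. The sketch below shows this on a toy problem in the spirit of the classic tiger POMDP; it is not necessarily the paper's specific method, and the model is invented for illustration.

```python
S = ["tiger_left", "tiger_right"]
A = ["open_left", "open_right", "listen"]
gamma = 0.95

def transition(s, a):
    """(next_state, prob) pairs; opening a door resets the tiger uniformly."""
    if a == "listen":
        return [(s, 1.0)]
    return [("tiger_left", 0.5), ("tiger_right", 0.5)]

def reward(s, a):
    if a == "listen":
        return -1.0
    correct = (s == "tiger_left" and a == "open_right") or (s == "tiger_right" and a == "open_left")
    return 10.0 if correct else -100.0

# 1) Base solution assuming full observability: MDP Q-values via value iteration.
V = {s: 0.0 for s in S}
for _ in range(500):
    Q = {s: {a: reward(s, a) + gamma * sum(p * V[s2] for s2, p in transition(s, a)) for a in A}
         for s in S}
    V = {s: max(Q[s].values()) for s in S}

# 2) Under partial observability, pick the action maximizing belief-weighted Q.
def qmdp_action(belief):
    return max(A, key=lambda a: sum(belief[s] * Q[s][a] for s in S))

print(qmdp_action({"tiger_left": 0.95, "tiger_right": 0.05}))  # confident -> open the safe door
print(qmdp_action({"tiger_left": 0.5, "tiger_right": 0.5}))    # uncertain -> listen
```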