Many stochastic planning problems can be represented using Markov Decision Processes (MDPs). A difficulty with using these MDP representations is that the common algorithms for so...
Markov decision processes (MDPs) with discrete and continuous state and action components can be solved efficiently by hybrid approximate linear programming (HALP). The main idea ...
Partially observable Markov decision processes (POMDPs) have been successfully applied to various robot motion planning tasks under uncertainty. However, most existing POMDP algo...
Haoyu Bai, David Hsu, Wee Sun Lee, and Vien A. Ngo
Verification of reachability properties for probabilistic systems is usually based on variants of Markov processes. Current methods assume an exact model of the dynamic behavior a...
We present a novel mixed-state dynamic Bayesian network (DBN) framework for modeling and classifying time-series data such as object trajectories. A hidden Markov model (HMM) of di...
Vladimir Pavlovic, Brendan J. Frey, Thomas S. Huan...