Sciweavers

62 search results - page 3 / 13
Search results for: Probabilistic inference for solving discrete and continuous ...
AAAI
1997
Model Minimization in Markov Decision Processes
Many stochastic planning problems can be represented using Markov Decision Processes (MDPs). A difficulty with using these MDP representations is that the common algorithms for so...
Thomas Dean, Robert Givan
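The "common algorithms" the abstract alludes to include value iteration. A minimal sketch on an invented two-state MDP (all transition probabilities and rewards below are made up for illustration, not taken from the paper):

```python
# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
P = {
    0: {0: [(0, 0.9), (1, 0.1)], 1: [(1, 1.0)]},
    1: {0: [(0, 0.5), (1, 0.5)], 1: [(1, 1.0)]},
}
R = {0: {0: 0.0, 1: 1.0}, 1: {0: 2.0, 1: 0.0}}
gamma = 0.9  # discount factor

# Repeatedly apply the Bellman optimality operator until (numerical) convergence.
V = {0: 0.0, 1: 0.0}
for _ in range(1000):
    V = {
        s: max(
            R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
            for a in P[s]
        )
        for s in P
    }
```

Model minimization, as studied in the paper, attacks the cost of exactly this kind of sweep by aggregating states that behave identically before iterating.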
AIPS
2006
Solving Factored MDPs with Exponential-Family Transition Models
Markov decision processes (MDPs) with discrete and continuous state and action components can be solved efficiently by hybrid approximate linear programming (HALP). The main idea ...
Branislav Kveton, Milos Hauskrecht
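HALP builds on the linear-programming formulation of MDP solving. The exact (non-factored) LP can be sketched with `scipy.optimize.linprog` on a made-up two-state, two-action MDP; HALP itself adds basis functions and exponential-family transition models, which this sketch omits:

```python
import numpy as np
from scipy.optimize import linprog

# Solve min sum_s V(s)  s.t.  V(s) >= R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
# for all (s, a). All numbers below are invented for illustration.
gamma = 0.9
P = np.array([[[0.9, 0.1], [0.0, 1.0]],   # P[s, a, s']
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[0.0, 1.0], [2.0, 0.0]])    # R[s, a]

A_ub, b_ub = [], []
for s in range(2):
    for a in range(2):
        # Rearranged constraint: gamma * P(.|s,a) . V - V(s) <= -R(s,a)
        A_ub.append(gamma * P[s, a] - np.eye(2)[s])
        b_ub.append(-R[s, a])

res = linprog(c=[1.0, 1.0], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
V = res.x  # the optimal value function of the toy MDP
```

With only two states the LP is solved exactly; the paper's contribution is keeping this tractable when the state space is factored and partly continuous.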

Monte Carlo Value Iteration for Continuous-State POMDPs
Partially observable Markov decision processes (POMDPs) have been successfully applied to various robot motion planning tasks under uncertainty. However, most existing POMDP algo...
Haoyu Bai, David Hsu, Wee Sun Lee, and Vien A. Ngo
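The core idea of Monte Carlo value iteration — evaluating actions at a belief by simulation rather than by exact integration over the continuous state — can be sketched on a toy problem. The dynamics, noise model, and action set below are all invented:

```python
import random

random.seed(0)

# Toy continuous-state problem: the state is a real number, an action
# shifts it, and the reward is -|state| after the (noisy) move.
def step(x, a):
    x2 = x + a + random.gauss(0.0, 0.1)
    return x2, -abs(x2)

# Belief over the continuous state, represented by a set of particles.
belief = [random.gauss(1.0, 0.5) for _ in range(1000)]

def q_estimate(belief, a, n=500):
    """Monte Carlo estimate of the one-step value of action a at this belief."""
    total = 0.0
    for _ in range(n):
        x = random.choice(belief)   # sample a state from the belief
        _, r = step(x, a)
        total += r
    return total / n

# Pick the action with the best sampled value estimate.
best = max([-1.0, 0.0, 1.0], key=lambda a: q_estimate(belief, a))
```

The full algorithm replaces this one-step estimate with sampled backups of a policy-graph value function, but the sampling principle is the same.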
AI
2008
Springer
Reachability analysis of uncertain systems using bounded-parameter Markov decision processes
Verification of reachability properties for probabilistic systems is usually based on variants of Markov processes. Current methods assume an exact model of the dynamic behavior a...
Di Wu, Xenofon D. Koutsoukos
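The bounded-parameter idea — transition probabilities known only up to intervals — can be sketched with a pessimistic value iteration on a two-state chain. Every interval and reward below is invented; the paper's interval iteration handles general state spaces and both bounds:

```python
# Two states: "G" (good, reward 1) and "B" (bad, reward 0). The
# probability of moving to G is only known to lie in an interval,
# so we compute a lower bound on the value by letting nature pick
# the worst feasible probability at every step.
gamma = 0.9
reward = {"G": 1.0, "B": 0.0}
p_good = {"G": (0.7, 0.9), "B": (0.2, 0.4)}  # P(next = G | s) in [lo, hi]

V_lo = {"G": 0.0, "B": 0.0}
for _ in range(1000):
    new = {}
    for s, (lo, hi) in p_good.items():
        # Worst case: the smallest feasible probability of the better successor.
        p = lo if V_lo["G"] >= V_lo["B"] else hi
        new[s] = reward[s] + gamma * (p * V_lo["G"] + (1 - p) * V_lo["B"])
    V_lo = new
```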
CVPR
1999
IEEE
Time-Series Classification Using Mixed-State Dynamic Bayesian Networks
We present a novel mixed-state dynamic Bayesian network (DBN) framework for modeling and classifying time-series data such as object trajectories. A hidden Markov model (HMM) of di...
Vladimir Pavlovic, Brendan J. Frey, Thomas S. Huan...
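The discrete HMM component of such a model can be illustrated with a plain forward-algorithm likelihood computation — a much simpler stand-in for the paper's mixed-state DBN, with all probabilities invented:

```python
import numpy as np

# Toy 2-state, 2-symbol HMM (all probabilities invented for illustration).
A = np.array([[0.7, 0.3],    # A[i, j] = P(state_t = j | state_{t-1} = i)
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],    # B[i, k] = P(obs = k | state = i)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])    # initial state distribution

def log_likelihood(obs):
    """Forward algorithm: log P(obs_1..T), with per-step normalization."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by the emission
        ll += np.log(alpha.sum())
        alpha = alpha / alpha.sum()     # renormalize to avoid underflow
    return ll
```

A classifier in the spirit of the paper would fit one such model per class and assign a new trajectory to the class with the highest log-likelihood.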