Sciweavers

260 search results - page 16 / 52
» Quasi-Deterministic Partially Observable Markov Decision Pro...
AIPS 2003
Synthesis of Hierarchical Finite-State Controllers for POMDPs
We develop a hierarchical approach to planning for partially observable Markov decision processes (POMDPs) in which a policy is represented as a hierarchical finite-state control...
Eric A. Hansen, Rong Zhou
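As a rough illustration of the representation this abstract refers to, the sketch below evaluates a small flat (non-hierarchical) finite-state controller on a toy tiger-style POMDP; the environment, actions, and observation names are assumptions made for the example, not details from the paper.

```python
import random

# Minimal sketch of evaluating a (flat) finite-state controller on a toy POMDP.
# The two-door "tiger" environment and the three-node controller below are
# illustrative assumptions; the paper nests such machines hierarchically.

# Each controller node selects an action and moves to a successor node per observation.
FSC = {
    0: {"action": "listen", "next": {"hear-left": 1, "hear-right": 2}},
    1: {"action": "open-right", "next": {"hear-left": 0, "hear-right": 0}},
    2: {"action": "open-left", "next": {"hear-left": 0, "hear-right": 0}},
}

def env_step(tiger, action):
    """Toy tiger-style dynamics: return (reward, observation, next hidden state)."""
    if action == "listen":
        correct = random.random() < 0.85
        obs = f"hear-{tiger}" if correct else f"hear-{'right' if tiger == 'left' else 'left'}"
        return -1.0, obs, tiger
    opened = action.split("-")[1]
    reward = -100.0 if opened == tiger else 10.0
    # The problem resets after a door is opened.
    return reward, random.choice(["hear-left", "hear-right"]), random.choice(["left", "right"])

def evaluate(fsc, episodes=1000, horizon=20):
    total = 0.0
    for _ in range(episodes):
        node, tiger = 0, random.choice(["left", "right"])
        for _ in range(horizon):
            r, obs, tiger = env_step(tiger, fsc[node]["action"])
            total += r
            node = fsc[node]["next"][obs]  # controller transition on the observation
    return total / episodes

print("average return:", evaluate(FSC))
```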
ICML 2006 (IEEE)
Probabilistic inference for solving discrete and continuous state Markov Decision Processes
Inference in Markov Decision Processes has recently received interest as a means to infer the goals behind observed actions, to recognize policies, and as a tool to compute policies. ...
Marc Toussaint, Amos J. Storkey
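For context only, here is a plain value-iteration sketch on a tiny randomly generated MDP. It computes an optimal policy directly rather than by the probabilistic-inference route the abstract describes, and all the numbers below are illustrative assumptions, not from the paper.

```python
import numpy as np

# Plain value iteration on a small discrete MDP, included only to fix notation
# (states, actions, transition tensor P, reward R, discount gamma). The paper
# instead recasts this computation as inference in a graphical model.

n_states, n_actions, gamma = 3, 2, 0.95

rng = np.random.default_rng(0)
# P[a, s, s'] = transition probability, R[s, a] = expected immediate reward.
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.uniform(0, 1, size=(n_states, n_actions))

V = np.zeros(n_states)
for _ in range(500):
    # Q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] * V[s']
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

print("optimal values:", V)
print("greedy policy:", Q.argmax(axis=1))
```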
ICASSP 2011 (IEEE)
Learning and inference algorithms for partially observed structured switching vector autoregressive models
We present learning and inference algorithms for a versatile class of partially observed vector autoregressive (VAR) models for multivariate time-series data. VAR models can captu...
Balakrishnan Varadarajan, Sanjeev Khudanpur
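As a minimal point of reference, the sketch below fits an ordinary fully observed VAR(1) model by least squares on synthetic data; the paper's models add partially observed switching structure on top of this basic building block, and nothing below is taken from the paper itself.

```python
import numpy as np

# Fit a plain VAR(1) model x_t ≈ A x_{t-1} + noise by least squares.
# The dimensions, noise level, and data here are illustrative assumptions.

rng = np.random.default_rng(1)
dim, T = 4, 500

# Simulate data from a stable random VAR(1).
A_true = 0.5 * rng.standard_normal((dim, dim)) / np.sqrt(dim)
X = np.zeros((T, dim))
for t in range(1, T):
    X[t] = A_true @ X[t - 1] + 0.1 * rng.standard_normal(dim)

# Least-squares estimate: solve min_A || X[1:] - X[:-1] A^T ||_F.
A_hat, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = A_hat.T

print("estimation error:", np.linalg.norm(A_hat - A_true))
```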
UAI 2000
PEGASUS: A policy search method for large MDPs and POMDPs
We propose a new approach to the problem of searching a space of policies for a Markov decision process (MDP) or a partially observable Markov decision process (POMDP), given a mo...
Andrew Y. Ng, Michael I. Jordan
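A minimal sketch of the idea behind PEGASUS, under illustrative assumptions (a 1-D toy system and a one-parameter linear policy, neither from the paper): by fixing a set of scenarios, i.e. initial states together with all the random numbers the simulator will draw, the estimated value of a policy becomes a deterministic function of its parameters, which an ordinary search procedure can then optimize.

```python
import numpy as np

# Fix "scenarios" (initial state plus pre-drawn simulator noise) so that the
# Monte-Carlo value estimate is a deterministic function of the policy
# parameter theta, then search over theta. Toy dynamics and policy are assumed.

rng = np.random.default_rng(2)
horizon, n_scenarios = 30, 50

# A scenario = initial state + the noise used at every step of the rollout.
scenarios = [(rng.normal(), rng.normal(size=horizon)) for _ in range(n_scenarios)]

def deterministic_value(theta):
    """Estimated value of the policy a = -theta * s; deterministic in theta."""
    total = 0.0
    for s0, noise in scenarios:
        s = s0
        for t in range(horizon):
            a = -theta * s                     # simple linear policy
            s = 0.9 * s + a + 0.1 * noise[t]   # dynamics reuse the fixed noise
            total += -s * s                    # reward: keep the state near zero
    return total / n_scenarios

# Because the estimate has no sampling noise, a plain grid search suffices here.
thetas = np.linspace(-1.0, 2.0, 61)
best = max(thetas, key=deterministic_value)
print("best theta:", best, "value:", deterministic_value(best))
```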
ICTAI 2005 (IEEE)
Planning with POMDPs Using a Compact, Logic-Based Representation
Partially Observable Markov Decision Processes (POMDPs) provide a general framework for AI planning, but they lack the structure for representing real world planning problems in a...
Chenggang Wang, James G. Schmolze