Sciweavers

169 search results - page 11 / 34
» Planning with Continuous Actions in Partially Observable Env...
JAIR 2008
Online Planning Algorithms for POMDPs
Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, solving a POMDP i...
Stéphane Ross, Joelle Pineau, Sébast...
ATAL 2006, Springer
Winning back the CUP for distributed POMDPs: planning over continuous belief spaces
Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are emerging as a popular approach for modeling multiagent systems, and many different algorithms ha...
Pradeep Varakantham, Ranjit Nair, Milind Tambe, Ma...
ECCV 2008, Springer
Latent Pose Estimator for Continuous Action Recognition
Recently, models based on conditional random fields (CRF) have produced promising results on labeling sequential data in several scientific fields. However, in the vision task of c...
Huazhong Ning, Wei Xu, Yihong Gong, Thomas S. Huan...
ATAL 2008, Springer
The permutable POMDP: fast solutions to POMDPs for preference elicitation
The ability of an agent to reason under uncertainty is crucial for many planning applications, since an agent rarely has access to complete, error-free information about its envi...
Finale Doshi, Nicholas Roy
IJRR 2010
Planning under Uncertainty for Robotic Tasks with Mixed Observability
Partially observable Markov decision processes (POMDPs) provide a principled, general framework for robot motion planning in uncertain and dynamic environments. They have been app...
Sylvie C. W. Ong, Shao Wei Png, David Hsu, Wee Sun...