MDPs with Non-Deterministic Policies
AAAI
2006
Decision Making in Uncertain Real-World Domains Using DT-Golog
DTGolog, a decision-theoretic agent programming language based on the situation calculus, was proposed to ease some of the computational difficulties associated with Markov Decisi...
Mikhail Soutchanski, Huy Pham, John Mylopoulos
NIPS
1998
Risk Sensitive Reinforcement Learning
In this paper, we consider Markov Decision Processes (MDPs) with error states. Error states are states that are undesirable or dangerous to enter. We define the risk with re...
Ralph Neuneier, Oliver Mihatsch
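The risk definition in this abstract is truncated; as a rough, hypothetical illustration of MDPs with error states, the sketch below takes the risk of a state to be the probability of ever entering an error state while following a fixed policy, computed as a value-iteration-style fixed point. The function and parameter names are assumptions, not the paper's formulation.

```python
import numpy as np

def error_state_risk(P, policy, error_states, gamma=1.0, iters=1000, tol=1e-8):
    """Hypothetical sketch: risk of each state, taken here as the (discounted)
    probability of eventually entering an error state under a fixed policy.

    P[s, a, s']  -- transition probabilities of the MDP
    policy[s]    -- action selected in state s
    error_states -- indices of undesirable or dangerous states
    """
    n_states = P.shape[0]
    risk = np.zeros(n_states)
    risk[list(error_states)] = 1.0          # entering an error state realises the risk
    for _ in range(iters):
        new_risk = risk.copy()
        for s in range(n_states):
            if s in error_states:
                continue                     # treat error states as absorbing here
            a = policy[s]
            new_risk[s] = gamma * P[s, a] @ risk
        if np.max(np.abs(new_risk - risk)) < tol:
            break
        risk = new_risk
    return risk
```

A risk-sensitive agent could then, for example, restrict itself to policies whose risk stays below a chosen threshold while maximising expected return.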
JAIR
2008
Optimal and Approximate Q-value Functions for Decentralized POMDPs
Decision-theoretic planning is a popular approach to sequential decision making problems, because it treats uncertainty in sensing and acting in a principled way. In single-agent ...
Frans A. Oliehoek, Matthijs T. J. Spaan, Nikos A. ...
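The abstract contrasts decentralized POMDPs with single-agent settings, where an optimal Q-value function can be computed exactly; the sketch below is a standard tabular Q-value iteration for a single-agent MDP, included only as that baseline, and is not the Dec-POMDP machinery the paper develops.

```python
import numpy as np

def q_value_iteration(P, R, gamma=0.95, iters=1000, tol=1e-8):
    """Optimal Q-values for a finite single-agent MDP (illustrative baseline).

    P[s, a, s'] -- transition probabilities
    R[s, a]     -- expected immediate reward
    """
    n_states, n_actions = R.shape
    Q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        V = Q.max(axis=1)              # greedy state values
        Q_new = R + gamma * (P @ V)    # Bellman optimality backup
        if np.max(np.abs(Q_new - Q)) < tol:
            break
        Q = Q_new
    return Q
```

In the decentralized case no single agent observes the joint state, so this simple backup no longer applies directly, which is what motivates studying optimal and approximate Q-value functions for Dec-POMDPs.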
ATAL
2009
Springer
Constraint-based dynamic programming for decentralized POMDPs with structured interactions
Decentralized partially observable MDPs (DEC-POMDPs) provide a rich framework for modeling decision making by a team of agents. Despite rapid progress in this area, the limited sc...
Akshat Kumar, Shlomo Zilberstein
IROS
2006
IEEE
Planning and Acting in Uncertain Environments using Probabilistic Inference
An important problem in robotics is planning and selecting actions for goal-directed behavior in noisy, uncertain environments. The problem is typically addressed within the fra...
Deepak Verma, Rajesh P. N. Rao
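The abstract frames action selection as probabilistic inference; the sketch below is only a loosely related illustration under assumed simplifications: a known finite MDP and a single goal state, where the agent picks the action that maximises the probability of reaching the goal within a fixed horizon, computed by a backward pass. It is not the authors' algorithm.

```python
import numpy as np

def plan_toward_goal(P, goal, horizon):
    """Hypothetical sketch: greedy goal-reaching policy via backward recursion.

    P[s, a, s'] -- transition probabilities
    goal        -- index of the goal state
    horizon     -- number of remaining steps considered
    """
    n_states, n_actions, _ = P.shape
    beta = np.zeros(n_states)
    beta[goal] = 1.0                    # "goal reached" evidence
    for _ in range(horizon):
        q = P @ beta                    # q[s, a]: prob. of reaching the goal after taking a in s
        beta = q.max(axis=1)            # best achievable goal probability from s
        beta[goal] = 1.0                # the goal is absorbing for this computation
    q = P @ beta
    return q.argmax(axis=1), q          # greedy action per state and its goal probability
```

Planning-as-inference methods typically go further, e.g. by treating goal achievement as an observed variable in a dynamic Bayesian network and inferring a posterior over actions, but the backward-recursion structure is similar.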