Sciweavers

767 search results - page 65 / 154
» Semantic-Based Planning of Process Models
JAIR 2011
Non-Deterministic Policies in Markovian Decision Processes
Markovian processes have long been used to model stochastic environments. Reinforcement learning has emerged as a framework to solve sequential planning and decision-making proble...
Mahdi Milani Fard, Joelle Pineau
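As background for this entry, a minimal value-iteration sketch for a finite MDP is shown below. This illustrates standard MDP planning only; it is a toy example with made-up transition and reward arrays, not the non-deterministic-policy method of the paper.

```python
import numpy as np

# Toy MDP: 3 states, 2 actions, discount factor gamma (all values hypothetical).
n_states, n_actions, gamma = 3, 2, 0.9

rng = np.random.default_rng(0)
# P[a][s, s'] = probability of moving to s' when taking action a in state s.
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
# R[s, a] = expected immediate reward.
R = rng.random((n_states, n_actions))

V = np.zeros(n_states)
for _ in range(500):
    # Bellman optimality backup:
    # Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
    Q = R + gamma * np.stack([P[a] @ V for a in range(n_actions)], axis=1)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)  # greedy deterministic policy derived from Q
```

With gamma < 1 the backup is a contraction, so the loop converges to the optimal value function regardless of the (random) toy dynamics used here.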
IROS 2008, IEEE
Learning predictive terrain models for legged robot locomotion
Legged robots require accurate models of their environment in order to plan and execute paths. We present a probabilistic technique based on Gaussian processes that allows terr...
Christian Plagemann, Sebastian Mischke, Sam Prenti...
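For context, a minimal Gaussian-process regression sketch (1-D, squared-exponential kernel) is given below. It shows the generic GP predictive equations on synthetic data standing in for a terrain profile; the paper's terrain-specific modeling choices are not reproduced here.

```python
import numpy as np

def rbf(a, b, length=0.5, var=1.0):
    """Squared-exponential (RBF) kernel between 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# Toy training data: noisy samples of a smooth "terrain profile" (synthetic).
X = np.linspace(0.0, 1.0, 10)
y = np.sin(2 * np.pi * X) + 0.05 * np.random.default_rng(1).standard_normal(10)

noise = 1e-2
K = rbf(X, X) + noise * np.eye(len(X))   # kernel matrix with observation noise

Xs = np.linspace(0.0, 1.0, 50)           # query points
Ks = rbf(Xs, X)

# Standard GP predictive mean and covariance.
alpha = np.linalg.solve(K, y)
mean = Ks @ alpha
cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))  # per-point uncertainty
```

The per-point predictive standard deviation is what makes GPs attractive for terrain mapping: the robot gets an uncertainty estimate alongside each elevation prediction.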
PRIMA 2007, Springer
Multiagent Planning with Trembling-Hand Perfect Equilibrium in Multiagent POMDPs
Multiagent Partially Observable Markov Decision Processes are a popular model of multiagent systems with uncertainty. Since the computational cost for finding an optimal joint pol...
Yuichi Yabu, Makoto Yokoo, Atsushi Iwasaki
ATAL 2003, Springer
Transition-independent decentralized Markov decision processes
There has been substantial progress with formal models for sequential decision making by individual agents using the Markov decision process (MDP). However, similar treatment of m...
Raphen Becker, Shlomo Zilberstein, Victor R. Lesse...
AAAI 1996
Computing Optimal Policies for Partially Observable Decision Processes Using Compact Representations
Partially-observable Markov decision processes provide a very general model for decision-theoretic planning problems, allowing the trade-offs between various courses of actions t...
Craig Boutilier, David Poole