Sciweavers
Search results for "Game-based Abstraction for Markov Decision Processes" (92 results, page 17 of 19)
QEST 2006 (IEEE)
Compositional Performability Evaluation for STATEMATE
Abstract— This paper reports on our efforts to link an industrial state-of-the-art modelling tool to academic state-of-the-art analysis algorithms. In a nutshell, we enable timed...
Eckard Böde, Marc Herbstritt, Holger Hermanns...
AI 2006 (Springer)
Belief Selection in Point-Based Planning Algorithms for POMDPs
Abstract. Current point-based planning algorithms for solving partially observable Markov decision processes (POMDPs) have demonstrated that a good approximation of the value funct...
Masoumeh T. Izadi, Doina Precup, Danielle Azar
FLAIRS 2004
State Space Reduction For Hierarchical Reinforcement Learning
This paper provides new techniques for abstracting the state space of a Markov Decision Process (MDP). These techniques extend one of the recent minimization models, known as ε-reduction, ...
Mehran Asadi, Manfred Huber
ICRA 2010 (IEEE)
Apprenticeship learning via soft local homomorphisms
Abstract— We consider the problem of apprenticeship learning when the expert’s demonstration covers only a small part of a large state space. Inverse Reinforcement Learning (IR...
Abdeslam Boularias, Brahim Chaib-draa
PKDD 2010 (Springer)
Smarter Sampling in Model-Based Bayesian Reinforcement Learning
Abstract. Bayesian reinforcement learning (RL) is aimed at making more efficient use of data samples, but typically uses significantly more computation. For discrete Markov Decis...
Pablo Samuel Castro, Doina Precup