Sciweavers

91 search results (page 15 of 19) for "Magnifying-Lens Abstraction for Markov Decision Processes"
JSAC 2011
Optimal Cognitive Access of Markovian Channels under Tight Collision Constraints
Abstract—The problem of cognitive access of channels of primary users by a secondary user is considered. The transmissions of primary users are modeled as independent continuous-...
Xin Li, Qianchuan Zhao, Xiaohong Guan, Lang Tong
AI 2006, Springer
Belief Selection in Point-Based Planning Algorithms for POMDPs
Abstract. Current point-based planning algorithms for solving partially observable Markov decision processes (POMDPs) have demonstrated that a good approximation of the value funct...
Masoumeh T. Izadi, Doina Precup, Danielle Azar
FLAIRS 2004
State Space Reduction For Hierarchical Reinforcement Learning
Abstract. This paper provides new techniques for abstracting the state space of a Markov Decision Process (MDP). These techniques extend one of the recent minimization models, known as ε-reduction, ...
Mehran Asadi, Manfred Huber
ICRA 2010, IEEE
Apprenticeship learning via soft local homomorphisms
Abstract—We consider the problem of apprenticeship learning when the expert's demonstration covers only a small part of a large state space. Inverse Reinforcement Learning (IR...
Abdeslam Boularias, Brahim Chaib-draa
PKDD 2010, Springer
Smarter Sampling in Model-Based Bayesian Reinforcement Learning
Abstract. Bayesian reinforcement learning (RL) is aimed at making more efficient use of data samples, but typically uses significantly more computation. For discrete Markov Decis...
Pablo Samuel Castro, Doina Precup