
FUZZIEEE 2007 (IEEE)

Fuzzy Approximation for Convergent Model-Based Reinforcement Learning

Reinforcement learning (RL) is a learning control paradigm that provides well-understood algorithms with good convergence and consistency properties. Unfortunately, these algorithms require that process states and control actions take only discrete values. Approximate solutions using fuzzy representations have been proposed in the literature for the case when the states and possibly the actions are continuous. However, the link between these mainly heuristic solutions and the larger body of work on approximate RL, including convergence results, has not been made explicit. In this paper, we propose a fuzzy approximation structure for the Q-value iteration algorithm, and show that the resulting algorithm is convergent. The proof is based on an extension of previous results in approximate RL. We then propose a modified, serial version of the algorithm that is guaranteed to converge at least as fast as the original algorithm. An illustrative simulation example is also provided.
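To illustrate the idea behind fuzzy Q-value iteration, here is a minimal sketch on a hypothetical 1-D control problem (the toy dynamics, reward, membership-function grid, and all parameter choices below are this sketch's assumptions, not the paper's simulation example). Q-values are represented as Q(x, u_j) = sum_i phi_i(x) * theta[i, j], with triangular membership functions phi_i, and the parameters theta are updated by applying the Bellman backup at each membership-function center:

```python
import numpy as np

# Hypothetical toy problem (illustrative only): state x in [-1, 1],
# a small discrete action set shifts the state, reward peaks at x = 0.
ACTIONS = np.array([-0.2, 0.0, 0.2])
GAMMA = 0.9

def step(x, u):
    # Deterministic dynamics, clipped to the state space.
    return np.clip(x + u, -1.0, 1.0)

def reward(x, u):
    return -abs(x)

# Triangular fuzzy membership functions with centers on a uniform grid;
# at any x the memberships are non-negative and sum to one.
CENTERS = np.linspace(-1.0, 1.0, 9)

def memberships(x):
    width = CENTERS[1] - CENTERS[0]
    w = np.maximum(0.0, 1.0 - np.abs(x - CENTERS) / width)
    return w / w.sum()

def fuzzy_q_iteration(n_iter=300, tol=1e-6):
    # theta[i, j]: parameter for membership function i and discrete action j.
    theta = np.zeros((len(CENTERS), len(ACTIONS)))
    for _ in range(n_iter):
        new_theta = np.empty_like(theta)
        for i, xc in enumerate(CENTERS):
            for j, u in enumerate(ACTIONS):
                xn = step(xc, u)
                # Q-values at the successor state via fuzzy interpolation.
                q_next = memberships(xn) @ theta
                new_theta[i, j] = reward(xc, u) + GAMMA * q_next.max()
        converged = np.max(np.abs(new_theta - theta)) < tol
        theta = new_theta
        if converged:
            break
    return theta

def greedy_action(theta, x):
    # Greedy policy induced by the fuzzy Q-function approximation.
    q = memberships(x) @ theta
    return ACTIONS[int(np.argmax(q))]

theta = fuzzy_q_iteration()
```

This is the synchronous (parallel) variant: all parameters are backed up from the previous iterate. The serial variant the abstract mentions would instead overwrite each theta[i, j] in place as soon as it is computed, so later backups in the same sweep already use fresh values, which is why it converges at least as fast.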
Type Conference
Year 2007
Where FUZZIEEE
Authors Lucian Busoniu, Damien Ernst, Bart De Schutter, Robert Babuska