This paper presents a direct reinforcement learning algorithm, called Finite-Element Reinforcement Learning, for the continuous case, i.e. continuous state space and time. The evaluation of the value function enables the generation of an optimal policy for reinforcement control problems, such as target or obstacle problems, viability problems, or optimization problems. We propose a continuous formalism for the study of reinforcement learning using the continuous optimal control framework, then state the associated Hamilton-Jacobi-Bellman equation. First, we propose to approximate the value function by a numerical scheme based on a finite-element method. This generates a discrete Markov Decision Process, with finite state and control spaces, which can be solved by Dynamic Programming. In reinforcement learning terminology, the computation of this approximation scheme belongs to the class of indirect learning methods. Then we present our direct learning algorithm, which approximates the ...
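
The indirect route described in the abstract, solving the finite MDP obtained from the discretization by Dynamic Programming, can be sketched with standard value iteration. The states, controls, transition probabilities, and rewards below are illustrative placeholders, not the paper's actual finite-element scheme:

```python
import numpy as np

# Value iteration on a small finite MDP, such as the one produced by a
# finite-element discretization of a continuous control problem.
# All quantities here are random placeholders for illustration only.

n_states, n_controls = 5, 2
gamma = 0.9  # discount factor

rng = np.random.default_rng(0)
# P[a, s, s']: transition probabilities under control a
P = rng.random((n_controls, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
# R[a, s]: expected immediate reward for control a in state s
R = rng.random((n_controls, n_states))

V = np.zeros(n_states)
for _ in range(1000):
    # Bellman optimality backup:
    # V(s) = max_a [ R(a, s) + gamma * sum_s' P(a, s, s') V(s') ]
    Q = R + gamma * P @ V          # shape (n_controls, n_states)
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=0)  # greedy policy from the converged values
```

The converged `V` is a fixed point of the Bellman optimality operator, and the greedy policy extracted from it is optimal for the discrete MDP; the direct algorithm the abstract announces would instead update such values online from observed transitions, without building `P` and `R` explicitly.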