Abstract. In this paper we present a Reinforcement Learning (RL) approach capable of training neural adaptive controllers for complex control problems without expensive online exploration. The neural controller is based on Neural Fitted Q-Iteration (NFQ). Its underlying network is trained on an example set enriched with artificial data. With this training scheme, unlike most existing approaches, the controller can learn offline from observed data of an already closed-loop controlled process, even when the training samples are sparse and uninformative. The suggested neural controller is evaluated on a modified and advanced cart-pole simulator and on the combustion control of a real waste-incineration plant, where it successfully demonstrates its superiority.
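To make the core idea of fitted Q-iteration concrete, the following is a minimal sketch: a Q-function is repeatedly refit against Bellman targets computed from a fixed batch of transitions, with no further environment interaction. The toy chain MDP, the one-hot features, and the linear least-squares fit are illustrative assumptions of this sketch only; NFQ as described in the paper uses a multilayer perceptron, and the artificial-data enrichment is not shown here.

```python
import numpy as np

# Toy deterministic chain MDP (an assumption of this sketch,
# not the paper's cart-pole or combustion-control benchmark).
N_STATES, ACTIONS, GAMMA = 5, (-1, +1), 0.9

def step(s, a):
    """Move left/right along the chain; reward 1.0 at the right end."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r

def features(s, a):
    """One-hot (state, action) features for a linear Q-function
    (standing in for NFQ's neural network)."""
    x = np.zeros(N_STATES * len(ACTIONS))
    x[s * len(ACTIONS) + ACTIONS.index(a)] = 1.0
    return x

# Fixed offline batch of transitions (s, a, s', r) -- the analogue of
# observed data from an already closed-loop controlled process.
batch = [(s, a, *step(s, a)) for s in range(N_STATES) for a in ACTIONS]

w = np.zeros(N_STATES * len(ACTIONS))  # linear Q-function weights
for _ in range(50):  # fitted Q-iterations
    X = np.array([features(s, a) for s, a, _, _ in batch])
    # Bellman targets from the current fit: r + gamma * max_a' Q(s', a')
    y = np.array([r + GAMMA * max(w @ features(s2, a2) for a2 in ACTIONS)
                  for _, _, s2, r in batch])
    # Refit Q to the targets in one batch regression step.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Greedy policy of the learned Q-function: move right toward the reward.
greedy = [max(ACTIONS, key=lambda a: w @ features(s, a)) for s in range(N_STATES)]
print(greedy)  # -> [1, 1, 1, 1, 1]
```

On this toy problem the learned greedy policy steers every state toward the rewarding right end, illustrating that the entire value function is recovered from a static batch, which is the property the offline training scheme relies on.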