In this paper we investigate the relationship between transfer learning in reinforcement learning with function approximation and supervised learning under concept drift. We present a new incremental relational regression tree algorithm that deals with concept drift through tree restructuring, and we show that it enables a reinforcement learner, specifically a Q-learner, to transfer knowledge from one task to another by recycling those parts of the generalized Q-function that still hold useful information for the new task. We illustrate the performance of the algorithm in experiments on both supervised learning tasks with concept drift and reinforcement learning tasks that allow knowledge to be transferred from easier, related tasks.