Reinforcement learning problems are commonly tackled with temporal difference methods, which use dynamic programming and statistical sampling to estimate the long-term value of taking each action in each state. In most problems of real-world interest, learning this value function requires a function approximator, which represents the mapping from state-action pairs to values via a concise, parameterized function and uses supervised learning methods to set its parameters. Function approximators make it possible to apply temporal difference methods to large problems but, in practice, the feasibility of doing so depends on the ability of the human designer to select an appropriate representation for the value function. My thesis presents a new approach to function approximation that automates some of these difficult design choices by coupling temporal difference methods with policy search methods such as evolutionary computation. It also presents a particular implementation which combines N...
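
To make the setting concrete, the following is a minimal, hypothetical sketch of the kind of temporal difference update described above: Q-learning with a linear function approximator over state-action features. The feature map, step size, and discount factor are illustrative assumptions, not details taken from the thesis; the fixed feature map is exactly the sort of representational choice the thesis proposes to automate.

    import numpy as np

    # Hypothetical sketch: Q-learning with a linear function approximator.
    # The hand-designed feature map `phi` is the kind of representation a
    # human designer would normally have to choose.

    def phi(state, action, n_features=8, n_actions=4):
        """Illustrative state-action features: the state vector placed in the
        block corresponding to the chosen action."""
        feats = np.zeros(n_features * n_actions)
        feats[action * n_features:(action + 1) * n_features] = state
        return feats

    def td_update(w, state, action, reward, next_state,
                  alpha=0.1, gamma=0.99, n_actions=4):
        """One Q-learning step: move the weights toward the bootstrapped TD target."""
        q_sa = w @ phi(state, action)
        q_next = max(w @ phi(next_state, a) for a in range(n_actions))
        td_error = reward + gamma * q_next - q_sa        # sampled temporal difference error
        return w + alpha * td_error * phi(state, action) # supervised-style parameter update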