Helicopter hovering is an important challenge problem in the field of reinforcement learning. This paper considers several neuroevolutionary approaches to discovering robust controllers for a generalized version of the problem used in the 2008 Reinforcement Learning Competition, in which wind in the helicopter’s environment varies from run to run. We present the simple model-free strategy that won first place in the competition and also describe several more complex model-based approaches. Our empirical results demonstrate that neuroevolution is effective at optimizing the weights of multi-layer perceptrons, that linear regression is faster and more effective than evolution for learning models, and that model-based approaches can outperform the simple model-free strategy, especially if prior knowledge is used to aid model learning.

Categories and Subject Descriptors: I.2.6 [Artificial Intelligence]: Learning; I.2.9 [Artificial Intelligence]: Robotics

General Terms: Algorithms, Ex...
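
As a rough illustration of the direct, model-free flavor of neuroevolution referred to above, the sketch below evolves the weights of a small fixed-topology multi-layer perceptron with a simple (mu + lambda) evolution strategy. It is not the competition controller: the fitness function is a hypothetical stand-in for an episodic rollout in the helicopter simulator, and the network shape, population size, and mutation scale are illustrative assumptions rather than values from the paper.

```python
# Illustrative sketch only: direct neuroevolution of fixed-topology MLP weights
# with a (mu + lambda)-style evolution strategy. The fitness function is a
# placeholder for an episodic return from the helicopter simulator.
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in, n_hidden, n_out):
    """Random MLP weights flattened into one parameter vector, plus layer shapes."""
    sizes = [(n_in, n_hidden), (n_hidden,), (n_hidden, n_out), (n_out,)]
    params = np.concatenate([rng.normal(0, 0.1, size=int(np.prod(s))) for s in sizes])
    return params, sizes

def mlp_forward(params, sizes, x):
    """Unpack the flat parameter vector and run a tanh-hidden-layer forward pass."""
    mats, i = [], 0
    for s in sizes:
        n = int(np.prod(s))
        mats.append(params[i:i + n].reshape(s))
        i += n
    W1, b1, W2, b2 = mats
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def fitness(params, sizes):
    """Placeholder episodic return: reward approximation of an arbitrary target map."""
    xs = rng.normal(size=(32, 4))
    target = xs[:, :1] - 0.5 * xs[:, 1:2]      # stand-in objective, not the hover task
    preds = mlp_forward(params, sizes, xs)
    return -np.mean((preds - target) ** 2)     # higher is better

def neuroevolve(generations=50, pop_size=20, elites=5, sigma=0.05):
    """Keep the best weight vectors each generation; refill by Gaussian mutation."""
    base, sizes = init_mlp(4, 8, 1)
    population = [base + rng.normal(0, sigma, size=base.shape) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=lambda p: fitness(p, sizes), reverse=True)
        parents = ranked[:elites]
        population = parents + [
            parents[rng.integers(elites)] + rng.normal(0, sigma, size=base.shape)
            for _ in range(pop_size - elites)
        ]
    best = max(population, key=lambda p: fitness(p, sizes))
    return best, sizes

best, sizes = neuroevolve()
print("best (placeholder) fitness:", fitness(best, sizes))
```

In this sketch the policy representation and the evolutionary loop are decoupled, so the same loop could in principle evaluate candidates against a learned model of the environment instead of the placeholder objective, which is the distinction between the model-free and model-based variants discussed above.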