Compared to their ancestors of the early 1970s, present-day computer games are of enormous complexity and display impressive graphics. In programming intelligent opponents, however, the game industry still applies techniques developed some 30 years ago. In this paper, we investigate whether opponent programming can be treated as a problem of behavior learning. To this end, we assume the behavior of game characters to be a function that maps the current game state onto a reaction. We show that neural network architectures are well suited to learning such functions, and, by means of a popular commercial game, we demonstrate that agent behaviors can be learned from observation.

1 Context, Motivation, and Overview

Modern computer games create complex and dynamic virtual worlds that offer numerous possibilities for interaction and are rendered with impressive graphics. Professional game development has therefore become expensive and time consuming and involves whol...
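To make the behavior-as-a-function view concrete, the following minimal sketch (not the system described in this paper) trains a small feed-forward network on state/action pairs recorded from a human player; the state features, the discrete set of reactions, and the network size are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    STATE_DIM = 8    # assumed game-state features, e.g. distances, angles, health
    N_ACTIONS = 4    # assumed reactions, e.g. forward, turn left, turn right, fire
    HIDDEN = 16

    # Randomly initialised weights of a one-hidden-layer network.
    W1 = rng.normal(0.0, 0.3, (STATE_DIM, HIDDEN))
    b1 = np.zeros(HIDDEN)
    W2 = rng.normal(0.0, 0.3, (HIDDEN, N_ACTIONS))
    b2 = np.zeros(N_ACTIONS)

    def forward(states):
        """Map game states (batch x STATE_DIM) onto action probabilities."""
        h = np.tanh(states @ W1 + b1)
        logits = h @ W2 + b2
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return h, e / e.sum(axis=1, keepdims=True)

    def train(states, actions, lr=0.1, epochs=500):
        """Fit the network to observed (state, action) pairs by gradient descent."""
        global W1, b1, W2, b2
        onehot = np.eye(N_ACTIONS)[actions]
        for _ in range(epochs):
            h, probs = forward(states)
            # Cross-entropy gradient, backpropagated through both layers.
            d_logits = (probs - onehot) / len(states)
            dW2 = h.T @ d_logits
            db2 = d_logits.sum(axis=0)
            d_h = d_logits @ W2.T * (1.0 - h ** 2)
            dW1 = states.T @ d_h
            db1 = d_h.sum(axis=0)
            W2 -= lr * dW2; b2 -= lr * db2
            W1 -= lr * dW1; b1 -= lr * db1

    # Stand-in for recorded human play: synthetic states labelled by a toy rule.
    demo_states = rng.normal(size=(200, STATE_DIM))
    demo_actions = (demo_states[:, 0] > 0).astype(int)

    train(demo_states, demo_actions)
    _, probs = forward(demo_states[:5])
    print(probs.argmax(axis=1))   # reactions the learned agent would choose

At run time, the learned function is simply evaluated on the current game state to obtain the agent's reaction, which is the setting examined in the remainder of the paper.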