Real-time obstacle avoidance and low-level navigation are fundamental problems for autonomous animated creatures. Here we present an ethologically inspired approach in which the creature renders the scene from its own viewpoint (i.e. synthetic vision) and uses the resulting image to recover a gross measure of motion energy, together with other key features of its immediate environment, which then guide its movement. By combining this form of synthetic vision with an ethologically inspired model of action-selection, we demonstrate robust obstacle avoidance and low-level navigation in Silas T. Dog, a virtual dog built by the author, when he is placed in complex environments such as the "Doom" scenery.
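To make the core idea concrete, the following is a minimal sketch (not the paper's actual implementation) of how a gross motion-energy measure from successive viewpoint renderings might drive a simple avoidance rule: turn away from the half of the visual field with more image motion, on the assumption that nearby obstacles produce larger frame-to-frame change. The function names and the left/right balancing rule are illustrative assumptions.

```python
import numpy as np

def motion_energy(prev_frame: np.ndarray, curr_frame: np.ndarray) -> np.ndarray:
    """Gross per-pixel motion energy: absolute temporal difference between
    two successive grayscale renderings of the creature's view (values in [0, 1])."""
    return np.abs(curr_frame.astype(float) - prev_frame.astype(float))

def steering_from_energy(energy: np.ndarray, gain: float = 1.0) -> float:
    """Illustrative avoidance rule (an assumption, not the paper's algorithm):
    turn away from the half of the visual field with greater motion energy.
    Returns a signed turn command: negative = turn left, positive = turn right."""
    h, w = energy.shape
    left = energy[:, : w // 2].sum()
    right = energy[:, w // 2 :].sum()
    total = left + right + 1e-9            # avoid division by zero
    # More energy on the left suggests a nearer obstacle there -> turn right (positive).
    return gain * (left - right) / total

# Usage example: a large change confined to the left half of the view
# yields a positive command, i.e. turn right, away from the "obstacle".
rng = np.random.default_rng(0)
prev = rng.random((64, 64))
curr = prev.copy()
curr[:, :32] = rng.random((64, 32))        # synthetic motion on the left half
cmd = steering_from_energy(motion_energy(prev, curr))
print(f"turn command: {cmd:+.2f}")
```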