This paper presents a simple and efficient method of modeling synthetic vision, memory, and learning for autonomous animated characters in real-time virtual environments. The model is efficient in terms of both storage requirements and update times, and can be flexibly combined with a variety of higher-level reasoning modules or complex memory rules. The design is inspired by research in motion planning, control, and sensing for autonomous mobile robots. We apply this framework to the problem of quickly synthesizing collision-free motions for animated human figures from navigation goals in changing virtual environments. We combine a low-level path planner, a path-following controller, and cyclic motion capture data to generate the underlying animation. Graphics rendering hardware is used to simulate the visual perception of a character, providing a feedback loop to the overall navigation strategy. The synthetic vision and memory update rules can handle dynamic environments where ob...
James J. Kuffner Jr., Jean-Claude Latombe
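
To make the vision-and-memory feedback loop described in the abstract concrete, here is a minimal sketch of a per-frame perception and memory update. It assumes the character's view is rendered with each object flat-shaded in a unique ID color so visible objects can be read back from the frame buffer; that false-color technique, and all names in the sketch (`SpatialMemory`, `ObjectObservation`, `id_buffer`), are illustrative assumptions, not the paper's actual interface.

```python
# Hedged sketch of synthetic vision + memory update for an autonomous
# character. Assumption: the scene has already been rendered from the
# character's eye into an ID buffer, where each pixel holds the unique
# integer ID of the object visible there (0 = background).

import time
from dataclasses import dataclass, field

@dataclass
class ObjectObservation:
    position: tuple   # last observed (x, y, z) of the object
    timestamp: float  # when the object was last seen

@dataclass
class SpatialMemory:
    # object_id -> most recent observation of that object
    observations: dict = field(default_factory=dict)

    def update(self, id_buffer, scene_positions, now=None):
        """Scan the false-color ID buffer and record the current position
        of every visible object. Objects outside the view frustum keep
        their remembered state, which is what lets the memory model cope
        with dynamic environments: stale entries persist until the object
        is re-observed."""
        now = time.time() if now is None else now
        visible = {pid for row in id_buffer for pid in row if pid != 0}
        for obj_id in visible:
            self.observations[obj_id] = ObjectObservation(
                scene_positions[obj_id], now)
        return visible

# Usage: a tiny 4x4 "rendered" ID buffer in which objects 1 and 2 occupy
# pixels, standing in for a hardware-rendered frame.
id_buffer = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 0, 0],
    [2, 2, 0, 0],
]
scene_positions = {1: (3.0, 0.0, 5.0), 2: (-1.5, 0.0, 2.0)}
memory = SpatialMemory()
print(memory.update(id_buffer, scene_positions))  # -> {1, 2}
```

In a full system of the kind the abstract outlines, the set of remembered observations would feed the low-level path planner, closing the loop: the planner routes the character around obstacles it has actually seen, and replans when newly perceived objects contradict its memory.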