Current Virtual Reality technologies provide many ways to interact with Virtual Humans. Most of these techniques, however, are limited to synthetic elements and require cumbersome sensors. We have combined a real-time simulation and rendering platform with a real-time, noninvasive, vision-based recognition system to investigate interactions in a mixed environment containing both real and synthetic elements. In this paper we present the resulting system and use the example of a checkers game between a real person and an autonomous virtual human to demonstrate its performance.