Many mixed-reality systems require real-time composition of virtual objects with real video. Such composition requires a description of the virtual and real scene geometries and calibration information for the real camera. Once these descriptions are available, they can be used to perform many types of visual simulation, including virtual object placement, occlusion culling, texture extraction, collision detection, and reverse- and re-illumination methods. In this paper we present a demonstration in which we rapidly register prefabricated virtual models to a videoed scene. Using this registration information, we augment the scene with animated virtual avatars to create a novel mixed-reality system. Rather than building a single monolithic system, we briefly introduce our lightweight modelling tool, the Mixed-Reality Toolkit (MRT), which enables rapid reconfiguration of scene objects without performing a full reconstruction. We also generalise from this demonstration to outline some initial requirements for a Mixed Real...
Russell M. Freeman, Anthony Steed, Bin Zhou
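To illustrate the kind of data such composition depends on, the following is a minimal sketch (not the authors' implementation) of projecting a vertex of a registered virtual model into the real camera's image. The intrinsic matrix K and the rigid registration (R, t) are assumed, hypothetical values standing in for the camera calibration and model registration described above.

```python
import numpy as np

# Hypothetical intrinsic calibration of the real camera (assumed values):
# focal lengths and principal point in pixels.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical registration of the virtual model to the real scene:
# rotation R and translation t mapping model coordinates into camera coordinates.
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])

def project(point_model: np.ndarray) -> np.ndarray:
    """Project a 3-D point of the registered virtual model into image pixels."""
    p_cam = R @ point_model + t   # model -> camera coordinates
    p_img = K @ p_cam             # camera -> homogeneous image coordinates
    return p_img[:2] / p_img[2]   # perspective divide -> pixel (u, v)

# Example: a vertex of the virtual model one unit above its origin.
print(project(np.array([0.0, 1.0, 0.0])))
```

With the registration in hand, each rendered vertex of a virtual object (or avatar) can be mapped to pixel coordinates in the real video frame in this way, which is the basis for compositing operations such as placement and occlusion tests.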