Yunjun Zhang, Charles E. Hughes

Abstract. Mixed Reality (MR) applications tend to focus on the accuracy of registration between the virtual and real objects of a scene, while paying relatively little attention to the representation of the luminance range in the merged video output. In this paper, we propose a means of partially addressing this deficiency by introducing Enhanced Dynamic Range Video, a technique based on assigning a different brightness setting to each eye of a video see-through head-mounted display (HMD). First, we construct a Video-Driven Time-Stamped Ball Cloud (VDTSBC), which serves as a guide for stereo image registration and as a means of storing temporal color information. Second, with the assistance of the VDTSBC, we register each pair of stereo images, accounting for the confounding case of regions occluded in one eye's view but not in the other's. Finally, we apply luminance enhancement to the registered image pairs to generate an Enhanced Dynamic Range Video.
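The final step, producing an enhanced dynamic range frame from a registered stereo pair captured at two brightness settings, can be illustrated with a simple per-pixel exposure blend. The sketch below is only an illustration under assumed details (float RGB frames normalized to [0, 1] and a Gaussian well-exposedness weight, as in standard exposure fusion); it is not the paper's actual enhancement algorithm, and the function and variable names are hypothetical.

```python
# Illustrative sketch (not the authors' method): blend a registered stereo
# pair captured at two brightness settings into one enhanced-dynamic-range
# frame, favoring whichever eye is better exposed at each pixel.
import numpy as np

def fuse_registered_pair(dark_eye: np.ndarray, bright_eye: np.ndarray,
                         sigma: float = 0.2) -> np.ndarray:
    """Blend two registered float32 RGB frames in [0, 1] per pixel."""
    def well_exposedness(img):
        # Weight pixels by how close their luminance is to mid-gray.
        lum = img.mean(axis=2)
        return np.exp(-((lum - 0.5) ** 2) / (2.0 * sigma ** 2))

    w_dark = well_exposedness(dark_eye)
    w_bright = well_exposedness(bright_eye)
    total = w_dark + w_bright + 1e-6          # guard against division by zero
    w_dark, w_bright = w_dark / total, w_bright / total

    # Expand the weights to three channels and blend.
    fused = dark_eye * w_dark[..., None] + bright_eye * w_bright[..., None]
    return np.clip(fused, 0.0, 1.0)

if __name__ == "__main__":
    # Synthetic stand-ins for a registered pair at two exposures.
    rng = np.random.default_rng(0)
    scene = rng.random((480, 640, 3)).astype(np.float32)
    dark = np.clip(scene * 0.4, 0.0, 1.0)      # under-exposed eye
    bright = np.clip(scene * 1.6, 0.0, 1.0)    # over-exposed eye
    edr_frame = fuse_registered_pair(dark, bright)
    print(edr_frame.shape, edr_frame.dtype)
```

In practice such a blend only makes sense after the per-eye frames have been registered (here, the role played by the VDTSBC), since the two eyes otherwise observe the scene from different viewpoints.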