In image-guided neurosurgery, "mixed reality" techniques have been used to merge video images with an underlying computer model. We have developed methods to map intraoperative endoscopic video onto 3D surfaces derived from preoperative scans for enhanced visualization during surgery. We acquired CT images of a brain phantom and digitized endoscopic video images from a tracked neuroendoscope. Registration of the phantom and the CT images was accomplished using markers that could be identified in both spaces. The endoscopic images were corrected for radial lens distortion and mapped onto surfaces extracted from the CT images via a ray-traced texture-mapping algorithm. The localization accuracy of the
Damini Dey, Piotr J. Slomka, David G. Gobbi, Terry
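The marker-based registration step described above amounts to finding the rigid transform that best aligns corresponding marker points identified in phantom (tracker) space and CT image space. A minimal sketch of one standard way to compute such a transform is shown below, using the SVD-based least-squares method of Horn/Kabsch; the function name `rigid_register` and the use of this particular algorithm are illustrative assumptions, not the authors' reported implementation.

```python
import numpy as np

def rigid_register(fixed, moving):
    """Least-squares rigid transform (R, t) mapping `moving` marker
    points onto `fixed` marker points, via SVD (Horn/Kabsch method).

    Both inputs are (N, 3) arrays of corresponding 3D points.
    Returns R (3x3 rotation) and t (3-vector) such that
    fixed_i ~= R @ moving_i + t.
    """
    fixed = np.asarray(fixed, dtype=float)
    moving = np.asarray(moving, dtype=float)
    cf, cm = fixed.mean(axis=0), moving.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (moving - cm).T @ (fixed - cf)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cf - R @ cm
    return R, t
```

In a workflow like the one described, the residual distances between the transformed markers and their CT-space counterparts (the fiducial registration error) give a first check on registration quality.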