Hybrid geometry- and image-based modeling and rendering systems use photographs taken of a real-world environment and mapped onto the surfaces of a 3D model to achieve photorealism and visual complexity in synthetic images rendered from arbitrary viewpoints. A primary challenge in these systems is to develop algorithms that map the pixels of each photograph efficiently onto the appropriate surfaces of a 3D model, a classical visible surface determination problem. This paper describes an object-space algorithm for computing a visibility map for a set of polygons for a given camera viewpoint. The algorithm traces pyramidal beams from each camera viewpoint through a spatial data structure representing a polyhedral convex decomposition of space containing cell, face, edge, and vertex adjacencies. Beam intersections are computed only for the polygonal faces on the boundary of each traversed cell, and thus the algorithm is output-sensitive. The algorithm also supports efficient determination ...
Thomas A. Funkhouser
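To illustrate the kind of traversal the abstract describes, here is a minimal sketch of an output-sensitive beam walk over a convex cell decomposition. It is not the paper's implementation: the names (Cell, Face, Beam, beamIntersectsFace, clipBeamToFace) are hypothetical, the geometric clipping routines are stubbed out, and the real data structure also maintains edge and vertex adjacencies and true pyramidal beam clipping.

```cpp
#include <cstdio>
#include <queue>
#include <utility>
#include <vector>

struct Polygon { int id; };          // an input scene polygon

struct Beam { /* apex at the viewpoint plus bounding half-spaces (omitted) */ };

struct Cell;                         // convex cell of the spatial decomposition

struct Face {
    const Polygon* surface = nullptr;  // non-null if this face lies on an input polygon
    Cell* neighbor = nullptr;          // adjacent cell across this face (null on the hull)
};

struct Cell {
    std::vector<Face> faces;           // boundary faces of this convex cell
};

// Placeholder geometric routines: a real system would clip the pyramidal beam
// against the face's supporting plane and its edge planes.
bool beamIntersectsFace(const Beam&, const Face&) { return true; }
Beam clipBeamToFace(const Beam& b, const Face&)   { return b; }

// Walk the beam outward from the cell containing the viewpoint, doing work
// only in cells the beam actually reaches (hence output-sensitive), and
// record every input polygon on which part of the beam terminates.
void traceBeam(Cell* startCell, const Beam& startBeam,
               std::vector<const Polygon*>* visible) {
    std::queue<std::pair<Cell*, Beam>> frontier;
    frontier.push({startCell, startBeam});
    while (!frontier.empty()) {
        auto [cell, beam] = frontier.front();
        frontier.pop();
        for (const Face& face : cell->faces) {
            if (!beamIntersectsFace(beam, face)) continue;
            if (face.surface) {
                visible->push_back(face.surface);   // beam hits an opaque polygon
            } else if (face.neighbor) {
                frontier.push({face.neighbor,       // continue into the adjacent cell
                               clipBeamToFace(beam, face)});
            }
        }
    }
}

int main() {
    Polygon wall{1};
    Cell room;                               // single cell with one polygon on its boundary
    room.faces.push_back({&wall, nullptr});
    std::vector<const Polygon*> visible;
    traceBeam(&room, Beam{}, &visible);
    std::printf("polygons reached: %zu\n", visible.size());
    return 0;
}
```

The sketch only shows why the traversal is output-sensitive: cost is proportional to the cells and boundary faces the beam actually crosses, not to the total number of polygons in the model.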