We present an image synthesis methodology and a system built around it. Given a sparse set of photographs taken from unknown viewpoints, the system generates images of the scene from new viewpoints with correct perspective and proper handling of occlusion. It does so without any knowledge of the 3-D structure of the scene or of the intrinsic camera parameters. The photo-realistic rendering process is polygon based and can potentially be implemented as real-time texture mapping. The system is robust to noise because it exploits the redundant information available in multiple views. We present results on several example scenes.
Qian Chen, Gérard G. Medioni
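The abstract states that the rendering process is polygon based and could be implemented as texture mapping. As a rough illustration of that idea only (the paper's actual transfer equations and correspondence method are not given in this abstract), the following Python/OpenCV sketch texture-maps a single quadrilateral patch from a source photograph into a hypothetical target view via a planar homography; the function name, image file, and corner coordinates are placeholders, not values from the paper.

```python
# Minimal sketch (not the authors' algorithm): render one textured polygon
# into a novel view as a planar texture map. Assumes the four corner
# correspondences of the polygon in the source photo and in the target view
# are already known; how they are obtained is not specified in this abstract.
import numpy as np
import cv2

def warp_polygon(src_img, src_quad, dst_quad, dst_size):
    """Texture-map a quadrilateral patch of src_img onto a new view.

    src_quad, dst_quad: (4, 2) float32 arrays of corner coordinates.
    dst_size: (width, height) of the output image.
    """
    H = cv2.getPerspectiveTransform(src_quad, dst_quad)   # planar homography
    warped = cv2.warpPerspective(src_img, H, dst_size)    # resample the texture

    # Keep only the pixels inside the destination polygon; a full renderer
    # would combine such masks per polygon (e.g. painter's ordering) to
    # handle occlusion.
    mask = np.zeros(warped.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, dst_quad.astype(np.int32), 255)
    return warped, mask

# Hypothetical usage: corners chosen by hand purely for illustration.
if __name__ == "__main__":
    src = cv2.imread("view0.jpg")                          # placeholder input
    src_quad = np.float32([[10, 10], [200, 20], [210, 180], [15, 170]])
    dst_quad = np.float32([[40, 30], [230, 10], [250, 200], [30, 190]])
    patch, mask = warp_polygon(src, src_quad, dst_quad, (320, 240))
```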