A goal of image-based rendering is to synthesize images of man-made and natural objects as realistically as possible. This paper presents a method for image-based modeling and rendering of objects with arbitrary (possibly anisotropic and spatially varying) BRDFs. An object is modeled by sampling the surface's incident light field to reconstruct a non-parametric apparent BRDF at each visible point on the surface. These apparent BRDFs can then be used to render the object from the same viewpoint but under arbitrarily specified illumination. We demonstrate how these object models can be embedded in synthetic scenes and rendered under global illumination that captures the interreflections between real and synthetic objects. We also show how these image-based models can be automatically composited onto video footage with dynamic illumination so that the effects of the lighting (shadows and shading) on the composited object match those of the scene.
Melissa L. Koudelka, Peter N. Belhumeur, Sebastian
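The abstract's relighting idea can be illustrated with a minimal sketch. This is not the paper's pipeline; it assumes only the standard image-based-relighting setup implied by the abstract: the object is photographed from a fixed viewpoint under many known (distant) light directions, which samples each pixel's apparent BRDF non-parametrically, and a novel illumination expressed as weights over those sampled directions yields a relit image by the same linear combination of the captured basis images. The function name `relight` and the toy data are illustrative only.

```python
import numpy as np

def relight(basis_images, weights):
    """Relight a fixed-viewpoint image stack by linear combination.

    basis_images: (L, H, W) stack, one image per sampled light direction;
                  each stack slice samples the apparent BRDF at every pixel.
    weights:      (L,) coefficients expressing the novel (distant)
                  illumination in the sampled light-direction basis.
    Returns the (H, W) relit image.
    """
    basis = np.asarray(basis_images, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Contract the light-direction axis: sum_l w[l] * basis[l]
    return np.tensordot(w, basis, axes=1)

# Toy usage: two 2x2 "images" captured under two light directions,
# relit with an equal mix of both lights.
imgs = np.array([[[1.0, 0.0], [0.0, 1.0]],
                 [[0.0, 2.0], [2.0, 0.0]]])
out = relight(imgs, [0.5, 0.5])  # 0.5*imgs[0] + 0.5*imgs[1]
```

Because the combination is linear, any illumination representable in the sampled basis (including area sources approximated by many weighted point lights) relights the object at the cost of one weighted sum per pixel.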