Light fields are known for their potential to generate 3D reconstructions of a scene from novel viewpoints without the need for a model of the scene. Reconstruction of novel views, however, often produces ghosting artefacts, which can be mitigated by correcting for the depth of objects within the scene using disparity compensation. Unfortunately, reconstructions based on this disparity information suffer from a lack of information about the orientation and smoothness of the underlying surfaces. In this paper, we present a novel representation of the surfaces in the scene using a planar patch approach. We then introduce a reconstruction algorithm designed to exploit this patch information to produce visually superior reconstructions at higher resolutions. Experimental results demonstrate the effectiveness of this reconstruction technique with high-quality patch data when compared to traditional reconstruction methods.
Adam Bowen, Andrew Mullins, Roland G. Wilson, Nasi