Scott McCloskey, Michael S. Langer, Kaleem Siddiqi

In this paper, we address the problem of recovering depth from defocus in a fundamentally new way. Most previous methods have used an approximate model in which blurring is shift-invariant and pixel area is negligible. Our model avoids these assumptions. We consider the area in the scene whose radiance is recorded by a pixel on the sensor, and relate the size and shape of that area to the scene’s position with respect to the plane of focus. This is the notion of reverse projection, which shows that, when the scene is out of focus, neighboring pixels record light from overlapping regions in the scene. This overlap results in a measurable change in the correlation between the pixels’ intensity values. We demonstrate that this relationship can be characterized and exploited to recover depth from defocused images. Experimental results confirm that depth can be accurately predicted from correlation measurements.
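To make the central claim concrete, the following is a minimal numerical sketch, not the authors’ method: a box blur merely stands in for the true defocus point-spread function, and the names `neighbor_correlation` and `box_blur` are hypothetical. It illustrates that as blur grows, neighboring pixels integrate light from increasingly overlapping scene regions, so the correlation between their intensities rises.

```python
import numpy as np

def neighbor_correlation(patch: np.ndarray) -> float:
    """Pearson correlation between each pixel and its right-hand neighbor."""
    a = patch[:, :-1].ravel()
    b = patch[:, 1:].ravel()
    return float(np.corrcoef(a, b)[0, 1])

def box_blur(img: np.ndarray, radius: int) -> np.ndarray:
    """Separable box blur standing in for the defocus point-spread function."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    # Convolve rows, then columns; mode="same" preserves the image size.
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

rng = np.random.default_rng(0)
texture = rng.random((128, 128))      # in-focus stand-in: spatially uncorrelated texture
for radius in (0, 1, 3, 5):           # larger radius ~ farther from the plane of focus
    blurred = box_blur(texture, radius) if radius else texture
    print(radius, round(neighbor_correlation(blurred), 3))
```

Running this prints a correlation near zero for the unblurred texture and values that grow monotonically with the blur radius, which is the monotone correlation-to-depth relationship the abstract describes.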