Recovering Occlusion Boundaries from an Image

Occlusion reasoning is a fundamental problem in computer vision. In this paper, we propose an algorithm to recover the occlusion boundaries and depth ordering of free-standing structures in the scene. Rather than viewing the problem as one of pure image processing, our approach employs cues from an estimated surface layout and applies Gestalt grouping principles using a conditional random field (CRF) model. We propose a hierarchical segmentation process, based on agglomerative merging, that re-estimates boundary strength as the segmentation progresses. Our experiments on the Geometric Context dataset validate our choices for features, our iterative refinement of classifiers, and our CRF model. In experiments on the Berkeley Segmentation Dataset, PASCAL VOC 2008, and LabelMe, we also show that the trained algorithm generalizes to other datasets and can be used as an object boundary predictor with figure/ground labels.
Derek Hoiem, Alexei A. Efros, Martial Hebert
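
The hierarchical segmentation the abstract describes can be pictured as a greedy loop that repeatedly merges the adjacent regions with the weakest boundary and re-scores the remaining boundaries as regions grow. The Python sketch below illustrates only that merging loop under toy assumptions; the scalar region features, adjacency list, and boundary_strength() score are hypothetical stand-ins, not the paper's learned classifier over surface-layout and Gestalt cues.

# Minimal sketch (not the paper's implementation) of agglomerative region
# merging in which boundary strength is re-estimated as regions grow.
# The scalar features and the boundary_strength() score are toy stand-ins
# for the learned, surface-layout-based boundary classifier.


def boundary_strength(feat_a, feat_b):
    # Toy score: regions with similar mean features have weak boundaries.
    return abs(feat_a - feat_b)


def agglomerative_merge(region_feats, adjacency, stop_threshold=0.5):
    """Greedily merge the adjacent region pair with the weakest boundary,
    recomputing boundary strengths after every merge, until all remaining
    boundaries exceed stop_threshold."""
    regions = {r: {r} for r in region_feats}      # region id -> member ids
    feats = dict(region_feats)                    # region id -> mean feature
    edges = {frozenset(e) for e in adjacency}     # boundaries between regions

    while edges:
        # Re-estimate every surviving boundary and pick the weakest one.
        weakest = min(edges, key=lambda e: boundary_strength(*(feats[r] for r in e)))
        a, b = tuple(weakest)
        if boundary_strength(feats[a], feats[b]) > stop_threshold:
            break                                 # remaining boundaries are kept as occlusions
        # Merge region b into region a (size-weighted feature average).
        na, nb = len(regions[a]), len(regions[b])
        feats[a] = (feats[a] * na + feats[b] * nb) / (na + nb)
        regions[a] |= regions.pop(b)
        del feats[b]
        # Rewire b's boundaries to a; drop the merged boundary and self-loops.
        edges = {frozenset(a if r == b else r for r in e)
                 for e in edges if e != weakest}
        edges = {e for e in edges if len(e) == 2}
    return regions, edges


if __name__ == "__main__":
    # Four initial regions with scalar "features"; 0/1 and 2/3 should merge,
    # leaving one strong (occlusion) boundary between the two groups.
    feats = {0: 0.10, 1: 0.15, 2: 0.90, 3: 0.85}
    adjacency = [(0, 1), (1, 2), (2, 3), (0, 3)]
    final_regions, occlusion_boundaries = agglomerative_merge(feats, adjacency)
    print(final_regions)
    print(occlusion_boundaries)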
Type: Journal
Year: 2011
Where: IJCV