Posted on 2007-01-01. Authored by Derek Hoiem, Andrew N. Stein, Alexei A. Efros, Martial Hebert.
Occlusion reasoning, necessary for tasks such as navigation
and object search, is an important aspect of everyday
life and a fundamental problem in computer vision. We
believe that the amazing ability of humans to reason about
occlusions from a single image is based on an intrinsically 3D
interpretation. In this paper, our goal is to recover the
occlusion boundaries and depth ordering of free-standing
structures in the scene. Our approach is to learn to identify
and label occlusion boundaries using the traditional edge
and region cues together with 3D surface and depth cues.
Since some of these cues require good spatial support (i.e.,
a segmentation), we gradually create larger regions and use
them to improve inference over the boundaries. Our experiments
demonstrate the power of a scene-based approach to
occlusion reasoning.
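
As a rough illustration of the iterative idea sketched in the abstract (score candidate boundaries from edge, region, and 3D surface/depth cues, merge regions across boundaries that are unlikely to be occlusions, and let the larger regions support better inference), here is a minimal Python sketch. The `boundary_strength` weighting, the cue names, and the merge threshold are hypothetical placeholders standing in for the paper's learned classifier and cues; this is not the authors' implementation.

```python
# Hypothetical stand-in for a learned boundary classifier: combines
# edge-strength, surface, and depth cues into a single occlusion score.
def boundary_strength(cues):
    # Simple weighted sum for illustration only; the paper learns this mapping.
    return 0.5 * cues["edge"] + 0.3 * cues["surface_diff"] + 0.2 * cues["depth_gap"]


def merge_regions(regions, boundaries, threshold=0.4):
    """One pass of the coarse-to-fine loop: drop weak boundaries,
    merge the regions they separate, and return the larger regions."""
    parent = {r: r for r in regions}

    def find(r):
        # Union-find root lookup with path compression.
        while parent[r] != r:
            parent[r] = parent[parent[r]]
            r = parent[r]
        return r

    surviving = []
    for (a, b), cues in boundaries.items():
        if boundary_strength(cues) < threshold:
            parent[find(a)] = find(b)      # weak boundary: merge the two regions
        else:
            surviving.append((a, b))       # likely occlusion boundary: keep it

    merged = {}
    for r in regions:
        merged.setdefault(find(r), []).append(r)
    return merged, surviving


# Toy example: four initial regions with made-up cue values on shared boundaries.
regions = ["sky", "building", "tree", "ground"]
boundaries = {
    ("sky", "building"): {"edge": 0.9, "surface_diff": 0.8, "depth_gap": 0.7},
    ("building", "ground"): {"edge": 0.2, "surface_diff": 0.1, "depth_gap": 0.1},
    ("tree", "ground"): {"edge": 0.7, "surface_diff": 0.6, "depth_gap": 0.9},
}

merged, kept = merge_regions(regions, boundaries)
print("merged regions:", list(merged.values()))
print("boundaries kept as occlusions:", kept)
```

In the approach described above, a step like this would be repeated: the larger merged regions give better spatial support for the surface and depth cues, so boundary scores can be re-estimated and inference over the remaining boundaries improved.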