Posted on 2011-12-01. Authored by Santosh K. Divvala, Alexei Efros, Martial Hebert, Svetlana Lazebnik.
The amount of labeled training data required for image interpretation tasks is a major drawback of current methods. How can we use the gigantic collection of unlabeled images available on the web to aid these tasks? In this paper, we present a simple approach based on the notion of patch-based context to extract useful priors for regions within a query image from a large collection of 6 million unlabeled images. This contextual prior over image classes acts as a non-redundant, complementary source of knowledge that helps disambiguate the predictions of local region-level features. We demonstrate our approach on the challenging tasks of region classification and surface layout estimation.
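
The abstract only sketches the idea at a high level. Below is a minimal, hypothetical Python illustration (not the authors' implementation) of how a contextual class prior, estimated from patches retrieved out of a large unlabeled collection, might be fused with ambiguous local region-level predictions. The nearest-neighbor retrieval, the use of soft pseudo-label scores on the unlabeled patches, and the multiplicative fusion rule are all assumptions made for illustration only.

import numpy as np

# Hypothetical sketch: combine a contextual class prior (estimated from patches
# retrieved from a large unlabeled image collection) with the per-class scores
# of a local region classifier. The retrieval step, the soft-voting prior, and
# the product-rule fusion are assumptions, not the paper's exact formulation.

def contextual_prior(query_descriptor, unlabeled_descriptors,
                     pseudo_label_scores, k=50):
    """Estimate a contextual prior P(class | context) for one query region.

    unlabeled_descriptors: (N, D) descriptors of patches from the unlabeled set.
    pseudo_label_scores:   (N, C) soft class scores transferred to those patches
                           (e.g., from a weak classifier); a hypothetical input.
    """
    # Brute-force k-nearest-neighbor retrieval in descriptor space (for clarity).
    dists = np.linalg.norm(unlabeled_descriptors - query_descriptor, axis=1)
    nn_idx = np.argsort(dists)[:k]
    # Average the neighbors' soft class scores to form the contextual prior.
    prior = pseudo_label_scores[nn_idx].mean(axis=0)
    return prior / prior.sum()

def fuse(local_scores, prior):
    """Fuse local region-level scores with the contextual prior (product rule)."""
    posterior = local_scores * prior
    return posterior / posterior.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    C, D, N = 5, 128, 10000              # classes, descriptor dim, unlabeled patches
    unlabeled = rng.normal(size=(N, D))   # stand-in for descriptors of unlabeled patches
    pseudo = rng.dirichlet(np.ones(C), size=N)
    query = rng.normal(size=D)            # descriptor of one region in the query image
    local = rng.dirichlet(np.ones(C))     # an ambiguous local prediction
    prior = contextual_prior(query, unlabeled, pseudo)
    print("fused prediction:", fuse(local, prior))

In this sketch the prior sharpens or flattens the local scores depending on what the retrieved context suggests; in practice, any calibrated combination of the two distributions could play the same disambiguating role.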