Thomas Stepleton, Tai Sing Lee
A number of recent systems for unsupervised feature-based learning of object models take advantage of co-occurrence: broadly, they search for clusters of discriminative features that tend to coincide across multiple still images or video frames. An intuition behind these efforts is that regularly co-occurring image features are likely to refer to physical traits of the same object, while features that rarely co-occur are more likely to belong to different objects. In this paper we discuss a refinement to these techniques in which multiple segmentations establish meaningful contexts for co-occurrence; that is, they limit the spatial regions within which two features are deemed to co-occur. This approach can reduce the variety of image data necessary for model learning and simplify the incorporation of less discriminative features into the model.
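To make the idea of segmentation-limited co-occurrence concrete, the following Python sketch contrasts image-wide co-occurrence counting with counting restricted to pairs of features that fall in the same segmentation region. It is only an illustration of the general technique, not the authors' implementation: `detect_features` and `segment_of` are hypothetical stand-ins, and the sketch uses a single segmentation per image rather than the multiple segmentations discussed in the paper.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(images, detect_features, segment_of):
    """Count how often pairs of feature ids co-occur, both anywhere in the
    same image and only within the same segmentation region.

    detect_features(img) -> [(feature_id, x, y), ...]   (hypothetical)
    segment_of(img, x, y) -> region label               (hypothetical)
    """
    per_image = Counter()    # baseline: features seen anywhere in the same image
    per_segment = Counter()  # refinement: features seen in the same segment
    for img in images:
        detections = detect_features(img)
        for (f1, x1, y1), (f2, x2, y2) in combinations(detections, 2):
            pair = tuple(sorted((f1, f2)))
            per_image[pair] += 1
            if segment_of(img, x1, y1) == segment_of(img, x2, y2):
                per_segment[pair] += 1
    return per_image, per_segment
```

Under this reading, features of two different objects that merely appear in the same frame inflate only the image-wide counts, while features belonging to the same object also accumulate within-segment counts, which is the signal the refinement exploits.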