Carnegie Mellon University

Visual Learning of Statistical Relations Among Non-adjacent Features: Evidence for Structural Encoding.

Journal contribution posted on 2011-04-01, authored by Elan Barenholtz and Michael J. Tarr.

Recent results suggest that observers can learn, without supervision, the co-occurrence of independent shape features in viewed patterns (e.g., Fiser & Aslin, 2001). A critical question with regard to these findings is whether learning is driven by a structural, rule-based encoding of spatial relations between distinct features or by a pictorial, template-like encoding, in which spatial configurations of features are embedded in a 'holistic' fashion. In two experiments, we test whether observers can learn combinations of features when the paired features are separated by an intervening spatial 'gap', in which other, unrelated features can appear. This manipulation both increases task difficulty and makes it less likely that the feature combinations are encoded simply as larger unitary features. Observers exhibited learning consistent with earlier studies, suggesting that unsupervised learning of compositional structure is based on the explicit encoding of spatial relations between separable visual features. More generally, these results provide support for compositional structure in visual representation.

History

Date: 2011-04-01
