Dissociations in performance on novel versus irregular items: single-route demonstrations with input gain in localist and distributed models.
Four pairs of connectionist simulations are presented in which quasi-regular mappings are computed using localist and distributed representations. In each simulation, a control parameter termed input gain was modulated over the single level of representation that mapped inputs to outputs. Input gain caused both localist and distributed models to shift between regularity-based and item-based modes of processing. Performance on irregular items was selectively impaired in the regularity-based modes, whereas performance on novel items was selectively impaired in the item-based modes. Thus, the models exhibited double dissociations without separable processing components. These results are discussed in the context of analogous dissociations found in language domains such as word reading and inflectional morphology.
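The simulations themselves are not reproduced here; the following is a minimal sketch, assuming input gain enters as a multiplicative scaling of each unit's net input before a logistic squashing function, which is one common way such a control parameter is formalized. All names, sizes, and values below are illustrative assumptions, not the paper's actual architectures.

```python
import numpy as np

def unit_activation(net_input, gain=1.0):
    """Logistic activation with a multiplicative input-gain parameter.

    Gain rescales the net input before squashing: low gain yields graded,
    similarity-driven activations, whereas high gain pushes units toward
    near-binary, item-specific responses.
    """
    return 1.0 / (1.0 + np.exp(-gain * net_input))

# Hypothetical single-layer mapping from input to output representations.
rng = np.random.default_rng(0)
weights = rng.normal(scale=0.5, size=(10, 10))        # input -> output weights
pattern = rng.integers(0, 2, size=10).astype(float)   # one input pattern

for gain in (0.25, 1.0, 4.0):
    outputs = unit_activation(pattern @ weights, gain=gain)
    print(f"gain={gain:4.2f}  min={outputs.min():.3f}  max={outputs.max():.3f}")
```

Under this assumed formulation, varying the single gain parameter changes how sharply output activations discriminate inputs, without adding or removing any processing pathway, which is the sense in which a single-route network can move between processing modes.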