
Encoding, learning, and spatial updating of multiple object locations specified by 3-D sound, spatial language, and vision.

journal contribution
posted on 2003-03-01, authored by Roberta Klatzky, Yvonne Lippa, Jack M. Loomis, Reginald G. Golledge

Participants standing at an origin learned the distance and azimuth of target objects that were specified by 3-D sound, spatial language, or vision. We tested whether the ensuing target representations functioned equivalently across modalities for purposes of spatial updating. In experiment 1, participants localized targets by pointing to each and verbalizing its distance, both directly from the origin and at an indirect waypoint. In experiment 2, participants localized targets by walking to each directly from the origin and via an indirect waypoint. Spatial updating bias was estimated by the spatial-coordinate difference between indirect and direct localization; noise from updating was estimated by the difference in variability of localization. Learning rate and noise favored vision over the two auditory modalities. For all modalities, bias during updating tended to move targets forward, comparably so for three and five targets and for forward and rightward indirect-walking directions. Spatial language produced additional updating bias and noise from updating. Although spatial representations formed from language afford updating, they do not function entirely equivalently to those from intrinsically spatial modalities.
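The abstract does not give explicit formulas for these estimates; a minimal formalization consistent with the description, assuming each target has a set of direct-localization responses and a set of indirect (via-waypoint) responses in room coordinates, might read:

\[
\widehat{\text{bias}} \;=\; \frac{1}{n}\sum_{i=1}^{n}\mathbf{u}_i \;-\; \frac{1}{m}\sum_{j=1}^{m}\mathbf{d}_j,
\qquad
\widehat{\text{noise}} \;=\; \sigma_{\mathrm{indirect}} \;-\; \sigma_{\mathrm{direct}},
\]

where \(\mathbf{u}_i\) are indirect and \(\mathbf{d}_j\) are direct localization responses for a given target, and \(\sigma\) denotes the standard deviation of responses about their mean. The symbols \(\mathbf{u}\), \(\mathbf{d}\), \(n\), and \(m\) are illustrative, not notation from the paper.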

