DOI: 10.1184/R1/6617501.v1

Nicholas A. Giudice, Roberta Klatzky, Christopher R. Bennett, Jack M. Loomis. Perception of 3-D location based on vision, touch, and extended touch. Carnegie Mellon University, 2013. Journal contribution, 2013-01-01.

Keywords: Adult; Depth Perception; Female; Humans; Male; Physical Stimulation; Spatial Behavior; Touch; Vision, Ocular; Young Adult

https://kilthub.cmu.edu/articles/journal_contribution/Perception_of_3-D_location_based_on_vision_touch_and_extended_touch_/6617501

<p>Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors both in representing the target locations and in updating them. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor.
Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate.</p>