Learning to use an artificial visual cue in speech identification
Joseph D.W. Stephens; Lori Holt
DOI: 10.1184/R1/6616985.v1
https://kilthub.cmu.edu/articles/journal_contribution/Learning_to_use_an_artificial_visual_cue_in_speech_identification_/6616985

<p>Visual information from a speaker's face profoundly influences auditory perception of speech. However, relatively little is known about the extent to which visual influences may depend on experience, and the extent to which new sources of visual speech information can be incorporated into speech perception. In the current study, participants were trained on completely novel visual cues for phonetic categories. Participants learned to accurately identify phonetic categories based on these novel visual cues. The newly learned visual cues influenced identification responses to auditory speech stimuli, but not to the same extent as visual cues from a speaker's face. The novel methods and results of the current study raise theoretical questions about the nature of information integration in speech perception, and open up possibilities for further research on learning in multimodal perception, which may have applications in improving speech comprehension among the hearing-impaired.</p>

Date: 2010-10-01
Keywords: Acoustic Stimulation; Adult; Audiometry; Auditory Pathways; Cues; Female; Humans; Learning; Male; Models, Theoretical; Phonetics; Photic Stimulation; Psychoacoustics; Recognition (Psychology); Signal Detection, Psychological; Speech Acoustics; Speech Perception; Visual Perception; Young Adult