%0 Journal Article
%A Leech, Robert
%A Holt, Lori
%A Devlin, Joseph T
%A Dick, Frederic
%D 2009
%T Expertise with Artificial Nonspeech Sounds Recruits Speech-Sensitive Cortical Regions
%U https://kilthub.cmu.edu/articles/journal_contribution/Expertise_with_Artificial_Nonspeech_Sounds_Recruits_Speech-Sensitive_Cortical_Regions/6614747
%R 10.1184/R1/6614747.v1
%2 https://kilthub.cmu.edu/ndownloader/files/12107429
%K psychology
%X Regions of the human temporal lobe show greater activation for speech than for other sounds. These differences may reflect intrinsically specialized, domain-specific adaptations for processing speech, or they may be driven by the extensive expertise we have in listening to the speech signal. To test the expertise hypothesis, we used a video-game-based paradigm that tacitly trained listeners to categorize acoustically complex, artificial nonlinguistic sounds. Before and after training, we used functional MRI to measure how expertise with these sounds modulated temporal lobe activation. Participants' ability to explicitly categorize the nonspeech sounds predicted the change from pretraining to posttraining activation in speech-sensitive regions of the left posterior superior temporal sulcus, suggesting that emergent auditory expertise may help drive this functional regionalization. Thus, seemingly domain-specific patterns of neural activation in higher cortical regions may be driven in part by experience-based restructuring of high-dimensional perceptual space.
%I Carnegie Mellon University