CORTICAL REPRESENTATIONS OF INTELLIGIBLE AND UNINTELLIGIBLE SPEECH: EFFECTS OF AGING AND LINGUISTIC CONTENT
Speech communication requires real-time processing of rapidly varying acoustic signals, across various speech landmarks, while recruiting complex cognitive processes to derive the intended meaning. Behavioral studies have shown that speech comprehension is affected by factors such as aging, linguistic content, and intelligibility, yet the neural mechanisms underlying these effects are not well understood. This thesis explores how the neural bases of speech comprehension are modulated by each of these factors in three experiments that compare speech representations in cortical responses measured by magnetoencephalography (MEG). We use neural encoding models (temporal response functions, TRFs) and decoding models (stimulus reconstruction accuracy), which describe the mapping between stimulus features and cortical responses and are instrumental for understanding cortical temporal processing mechanisms in the brain.

First, we investigate age-related changes in the timing and fidelity of the cortical representation of speech in noise. Understanding speech in a noisy environment becomes more challenging with age, even in healthy aging. Our findings demonstrate that some of the difficulties older adults experience in understanding speech in noise are accompanied by age-related temporal processing differences in the auditory cortex. This is an important step toward incorporating neural measures into both diagnostic evaluation and treatments aimed at speech comprehension problems in older adults.

Next, we investigate how the cortical representation of speech is influenced by linguistic content, by comparing neural responses to four types of continuous speech-like passages: non-speech, non-words, scrambled words, and narrative. We find neural evidence for emergent features of speech processing, from acoustics to sentence-level linguistic processes, as incremental steps in the processing of the speech input occur.
We also show the gradual computation of hierarchical speech features over time, encompassing both bottom-up and top-down mechanisms. Top-down mechanisms at the linguistic level produce an N400-like response, suggesting the involvement of predictive coding.

Finally, we identify potential neural markers of speech intelligibility using a priming paradigm in which intelligibility is varied while the acoustic structure is held constant. Our findings suggest that the segmentation of sounds into words emerges with better speech intelligibility, most strongly at ~400 ms in the prefrontal cortex (PFC), consistent with the engagement of top-down mechanisms associated with priming. Taken together, this thesis furthers our understanding of the neural mechanisms underlying speech comprehension and of potential objective neural markers for evaluating the level of speech comprehension.
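The TRF encoding approach mentioned above can be illustrated with a minimal sketch: a TRF is a linear kernel, estimated here by ridge regression on a time-lagged design matrix, that maps a stimulus feature (e.g. the acoustic envelope) to the neural response. This is only an illustrative toy on simulated data, not the thesis's actual MEG analysis pipeline; the function name, the use of a single stimulus feature and single response channel, and the regularization value are all assumptions.

```python
import numpy as np

def estimate_trf(stimulus, response, n_lags, alpha=1.0):
    """Estimate a temporal response function (TRF) by ridge regression.

    stimulus : (n_samples,) stimulus feature, e.g. acoustic envelope
    response : (n_samples,) neural response at one channel
    n_lags   : number of time lags spanned by the TRF window
    alpha    : ridge regularization strength
    """
    n = len(stimulus)
    # Lagged design matrix: column k holds the stimulus delayed by k samples,
    # so response[t] is modeled as sum_k trf[k] * stimulus[t - k].
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = stimulus[:n - k]
    # Regularized normal equations: w = (X'X + alpha*I)^(-1) X'y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ response)

# Sanity check on simulated data: recover a known kernel from a noisy response.
rng = np.random.default_rng(0)
stim = rng.standard_normal(5000)
true_trf = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
resp = np.convolve(stim, true_trf)[:5000] + 0.1 * rng.standard_normal(5000)
est = estimate_trf(stim, resp, n_lags=5)
```

The decoding direction used for reconstruction accuracy is the same regression run backward, predicting the stimulus from (lagged) neural responses and correlating the reconstruction with the true stimulus.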