Theses and Dissertations from UMD
Permanent URI for this community: http://hdl.handle.net/1903/2
New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a four-month delay in the appearance of a given thesis/dissertation in DRUM.
More information is available at Theses and Dissertations at University of Maryland Libraries.
13 results
Search Results
Item CORTICAL REPRESENTATIONS OF INTELLIGIBLE AND UNINTELLIGIBLE SPEECH: EFFECTS OF AGING AND LINGUISTIC CONTENT (2023) Karunathilake, I.M. Dushyanthi; Simon, Jonathan Z.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Speech communication requires real-time processing of rapidly varying acoustic sounds across various speech landmarks while recruiting complex cognitive processes to derive the intended meaning. Behavioral studies have highlighted that speech comprehension is altered by factors like aging, linguistic content, and intelligibility, yet the neural mechanisms underlying these changes are not well understood. This thesis explores how the neural bases of comprehension are modulated by each of these factors in three different experiments, comparing speech representations in cortical responses measured by magnetoencephalography (MEG). We use neural encoding (temporal response function, TRF) and decoding (reconstruction accuracy) models, which describe the mapping between stimulus features and cortical responses and are instrumental in understanding cortical temporal processing mechanisms in the brain.

Firstly, we investigate age-related changes in the timing and fidelity of the cortical representation of speech in noise. Understanding speech in a noisy environment becomes more challenging with age, even in healthy aging. Our findings demonstrate that some of the difficulties older adults experience in understanding speech in noise are accompanied by age-related temporal processing differences in the auditory cortex. This is an important step towards incorporating neural measures into both diagnostic evaluation and treatments aimed at speech comprehension problems in older adults.
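The encoding/decoding framework described above can be illustrated with a toy example. Below is a minimal numpy sketch of TRF estimation by ridge regression, and of the decoding direction scored by reconstruction accuracy; it illustrates the general technique only, not the dissertation's actual pipeline, and all variable names are hypothetical.

```python
import numpy as np

def estimate_trf(stim, resp, n_lags, alpha=1.0):
    """Estimate a temporal response function (TRF) by ridge regression.

    stim   : 1-D stimulus feature (e.g., acoustic envelope), shape (n,)
    resp   : 1-D neural response at the same sampling rate, shape (n,)
    n_lags : number of time lags (in samples) spanned by the TRF
    alpha  : ridge regularization strength
    """
    n = len(stim)
    # Lagged design matrix: column k holds the stimulus delayed by k samples.
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stim[:n - lag]
    # Closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ resp)

def reconstruction_accuracy(stim, resp, n_lags, alpha=1.0):
    """Decoding direction: reconstruct the stimulus from the response and
    score the reconstruction by Pearson correlation. The decoder looks
    forward in time, since the neural response lags the stimulus."""
    n = len(resp)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[:n - lag, lag] = resp[lag:]
    w = np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ stim)
    return np.corrcoef(X @ w, stim)[0, 1]
```

On simulated data in which the response is a delayed, noisy copy of the envelope, the estimated TRF peaks at the true latency and reconstruction accuracy approaches 1; on real MEG data, cross-validated variants of this scheme are standard.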
Next, we investigate how the cortical representation of speech is influenced by linguistic content, by comparing neural responses to four types of continuous speech-like passages: non-speech, non-words, scrambled words, and narrative. We find neural evidence for emergent features of speech processing, from acoustics to linguistic processes at the sentential level, as incremental steps in the processing of speech input occur. We also show the gradual computation of hierarchical speech features over time, encompassing both bottom-up and top-down mechanisms. Top-down mechanisms at the linguistic level demonstrate an N400-like response, suggesting the involvement of predictive coding. Finally, we find potential neural markers of speech intelligibility using a priming paradigm, where intelligibility is varied while keeping the acoustic structure constant. Our findings suggest that segmentation of sounds into words emerges with better speech intelligibility, most strongly at ~400 ms in prefrontal cortex (PFC), in line with the engagement of top-down mechanisms associated with priming. Taken together, this thesis furthers our understanding of the neural mechanisms underlying speech comprehension and of potential objective neural markers for evaluating the level of speech comprehension.

Item AN INVESTIGATION OF NEURAL MECHANISMS UNDERLYING VERB MORPHOLOGY DEFICITS IN APHASIA (2019) Pifer, Madeline R.; Faroqi-Shah, Yasmeen; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Agrammatic aphasia is an acquired language disorder characterized by slow, non-fluent speech that includes primarily content words. It is well documented that people with agrammatism (PWA) have difficulty with the production of verbs and verb morphology, but it is unknown whether these deficits occur at the single-word level or are the result of a sentence-level impairment.
The first aim of this paper is to determine the linguistic level at which verb morphology impairments exist, using magnetoencephalography (MEG) to analyze neural responses to two language tasks (one word-level and one sentence-level). It has also been demonstrated that PWA benefit from a morphosemantic intervention for verb morphology deficits, but it is unknown whether this therapy induces neuroplastic changes in the brain. The second aim of this paper is to determine whether neuroplastic changes occur after treatment, and to explore the neural mechanisms by which this improvement occurs.

Item EFFECTS OF AGING ON MIDBRAIN AND CORTICAL SPEECH-IN-NOISE PROCESSING (2016) Presacco, Alessandro; Anderson, Samira; Simon, Jonathan Z.; Neuroscience and Cognitive Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Older adults frequently report that they can hear what they have been told but cannot understand the meaning. This is particularly true in noisy conditions, where the additional challenge of suppressing irrelevant sound (e.g., a competing talker) adds another layer of difficulty to their speech understanding. Hearing aids improve speech perception in quiet, but their success in noisy environments has been modest, suggesting that peripheral hearing loss may not be the only factor in older adults' perceptual difficulties. Recent animal studies have shown that auditory synapses and cells undergo significant age-related changes that could impact the integrity of temporal processing in the central auditory system. Psychoacoustic studies in humans have also shown that hearing loss can explain the decline, relative to younger adults, in older adults' performance in quiet, but these psychoacoustic measurements do not adequately describe auditory deficits in noisy conditions.
These results suggest that temporal auditory processing deficits could play an important role in explaining the reduced ability of older adults to process speech in noisy environments. The goals of this dissertation were to understand how age affects neural auditory mechanisms and at which level in the auditory system these changes are particularly relevant for explaining speech-in-noise problems. Specifically, we used non-invasive neuroimaging techniques to tap into the midbrain and the cortex in order to analyze how auditory stimuli are processed in younger (our standard) and older adults. We also investigate a possible interaction between processing carried out in the midbrain and in the cortex.

Item MEG, PSYCHOPHYSICAL AND COMPUTATIONAL STUDIES OF LOUDNESS, TIMBRE, AND AUDIOVISUAL INTEGRATION (2011) Jenkins III, Julian; Poeppel, David; Biology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Natural scenes and ecological signals are inherently complex, and our understanding of their perception and processing is incomplete. For example, a speech signal not only contains information at various frequencies, but is also not static: the signal is concurrently modulated temporally. In addition, an auditory signal may be paired with additional sensory information, as in the case of audiovisual speech. In order to make sense of the signal, a human observer must process the information provided by low-level sensory systems and integrate it across sensory modalities and with cognitive information (e.g., object identification information, phonetic information). The observer must then create functional relationships between the signals encountered to form a coherent percept. The neuronal and cognitive mechanisms underlying this integration can be quantified in several ways: by taking physiological measurements, by assessing behavioral output for a given task, and by modeling signal relationships.
While ecological tokens are complex in ways that exceed our current understanding, progress can be made by utilizing synthetic signals that encompass specific essential features of ecological signals. The experiments presented here cover five aspects of complex signal processing using approximations of ecological signals: (i) auditory integration of complex tones comprised of different frequencies and component power levels; (ii) audiovisual integration approximating that of human speech; (iii) behavioral measurement of signal discrimination; (iv) signal classification via simple computational analyses; and (v) neuronal processing of synthesized auditory signals approximating speech tokens. To investigate neuronal processing, magnetoencephalography (MEG) is employed to assess cortical processing non-invasively. Behavioral measures are employed to evaluate observer acuity in signal discrimination and to test the limits of perceptual resolution. Computational methods are used to examine the relationships, in perceptual space and physiological processing, between synthetic auditory signals, using features of the signals themselves as well as biologically motivated models of auditory representation. Together, the various methodologies and experimental paradigms advance our understanding of the complex interactions in ecological signal structure.

Item The Effects of Aging on Lexical Access (2011) Tower, Kathryn Rachel; Faroqi-Shah, Yasmeen; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
As the U.S. population ages, the need to understand how language changes with age becomes more important. Difficulty with word retrieval is one of the most notable changes as individuals age (Burke & Shafto, 2004); however, theoretical models of aging disagree on the cause. Two prominent theories are the impaired lexical access hypothesis and the general slowing theory.
The present study aimed to explore these two ideas using magnetoencephalography (MEG). A young adult group (N=17, mean age 20.6 years) and an older adult group (N=9, mean age 64.6 years) participated in a lexical decision task using verbs. MEG latency data corresponding to lexical access showed no between-group difference, while behavioral response times were significantly slower in the older group. Results point either to the idea that the linguistic difficulties experienced by older individuals result from reduced abilities in phonological or motor processing, or to the idea that while lexical representations remain intact, the connections between them become less efficient with age.

Item The Neural Dynamics of Amplitude Modulation Processing in the Human Auditory System (2010) Li, Kai Sum; Simon, Jonathan Z.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The neural auditory amplitude modulation transfer function (MTF) is estimated from 3 to 50 Hz using magnetoencephalography (MEG). All acoustic stimuli are amplitude modulated (AM). Two different dynamic stimulus types are used: exponential sweeps with the AM rate changing from 2 Hz up to 60 Hz, and from 89 Hz down to 3 Hz. Several carriers are also employed, including three pure-tone carriers (250 Hz, 707 Hz, and 2 kHz) and three bandlimited pink-noise carriers (1/3, 2, and 5 octaves, centered at 707 Hz). Neural response magnitudes, phases, group delays, and impulse responses are all estimated. Our results show that the shape of the modulation transfer function is flat, but with a slightly low-pass shape below 10 Hz. The phase of the response is approximately linear at many frequencies.
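An approximately linear phase corresponds to a constant group delay: minus the slope of the unwrapped phase with respect to angular frequency. The following small numpy sketch (using a hypothetical pure 50 ms delay, not the measured data) shows how group delay is read off a response phase curve:

```python
import numpy as np

def group_delay(freqs_hz, phase_rad):
    """Group delay in seconds: minus the derivative of the unwrapped
    response phase with respect to angular frequency."""
    omega = 2 * np.pi * np.asarray(freqs_hz)
    return -np.gradient(np.unwrap(phase_rad), omega)

# A system that purely delays its input by tau seconds has phase
# -2*pi*f*tau, hence a constant group delay of tau at every frequency.
freqs = np.linspace(3, 50, 95)      # modulation rates (Hz), as in the MTF estimate
tau = 0.050                         # hypothetical 50 ms latency
phase = -2 * np.pi * freqs * tau
delays = group_delay(freqs, phase)  # constant, approximately 0.050 s
```

In practice the phase would come from the cross-spectrum between the stimulus modulation and the MEG response, with the derivative taken over a smoothed phase curve.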
The group delay is around 50 ms at 40 Hz for increasing-frequency sweeps and closer to 100 ms for decreasing-frequency sweeps.

Item On The Way To Linguistic Representation: Neuromagnetic Evidence of Early Auditory Abstraction in the Perception of Speech and Pitch (2009) Monahan, Philip Joseph; Idsardi, William J.; Poeppel, David E.; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The goal of this dissertation is to show that even at the earliest (non-invasively) recordable stages of auditory cortical processing, we find evidence that cortex is calculating abstract representations from the acoustic signal. Looking across two distinct domains (inferential pitch perception and vowel normalization), I present evidence demonstrating that the M100, an automatic evoked neuromagnetic component that localizes to primary auditory cortex, is sensitive to abstract computations. The M100 typically responds to physical properties of the stimulus in auditory and speech perception and integrates only over the first 25 to 40 ms of stimulus onset, providing a reliable dependent measure that allows us to tap into early stages of auditory cortical processing. In Chapter 2, I briefly present the episodicist position on speech perception and discuss research indicating that the strongest episodicist position is untenable. I then review findings from the mismatch negativity literature, where proposals have been made that the MMN allows access into linguistic representations supported by auditory cortex. Finally, I conclude the chapter with a discussion of previous findings on the M100/N1. In Chapter 3, I present neuromagnetic data showing that the response properties of the M100 are sensitive to the missing fundamental component, using well-controlled stimuli. These findings suggest that listeners are reconstructing the inferred pitch by 100 ms after stimulus onset.
In Chapter 4, I propose a novel formant ratio algorithm in which the third formant (F3) is the normalizing factor. The goal of formant ratio proposals is to provide an explicit algorithm that successfully "eliminates" speaker-dependent acoustic variation in auditory vowel tokens. Results from two MEG experiments suggest that auditory cortex is sensitive to formant ratios and that the perceptual system shows heightened sensitivity to tokens located in more densely populated regions of the vowel space. In Chapter 5, I report MEG results suggesting that early auditory cortical processing is sensitive to violations of a phonological constraint on sound sequencing. This suggests that listeners make highly specific, knowledge-based predictions about rather abstract anticipated properties of the upcoming speech signal, and that violations of these predictions are evident in early cortical processing.

Item Form, meaning and context in lexical access: MEG and behavioral evidence (2009) Almeida, Diogo; Poeppel, David; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
One of the main challenges in the study of cognition is how to connect brain activity to cognitive processes. In the domain of language, this requires coordination between two different lines of research: theoretical models of linguistic knowledge and language processing on the one side, and the brain sciences on the other. The work reported in this dissertation attempts to link these two lines of research by focusing on one particular aspect of linguistic processing, namely lexical access. The rationale for this focus is that access to the lexicon is a mandatory step in any theory of linguistic computation, and therefore findings about lexical access procedures have consequences for language processing models in general.
Moreover, in the domain of brain electrophysiology, past research on event-related brain potentials (ERPs), electrophysiological responses taken to reflect the processing of specific kinds of stimuli or specific cognitive processes, has uncovered different ERPs connected to linguistic stimuli and processes. One particular ERP, peaking at around 400 ms post-stimulus onset (the N400), has been linked to lexico-semantic processing, but its precise functional interpretation remains controversial: the N400 has been proposed to reflect lexical access procedures as well as higher-order semantic/pragmatic processing. In a series of three MEG experiments, we show that access to the lexicon from print occurs much earlier than previously thought, at around 200 ms, but more research is needed before the same conclusion can be reached about lexical access based on auditory or sign language input. The cognitive activity indexed by the N400 and its MEG analogue is argued to constitute predictive processing that integrates information from linguistic and non-linguistic sources at a later, post-lexical stage.

Item Hearing VS. Listening: Attention Changes the Neural Representations of Auditory Percepts (2008-05-01) Xiang, Juanjuan; Simon, Jonathan Z.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Making sense of acoustic environments is a challenging task. At any moment, the signals from distinct auditory sources arrive at the ear simultaneously, forming an acoustic mixture. The brain must represent distinct auditory objects in this complex scene and prioritize processing of relevant stimuli while maintaining the capability to react quickly to unexpected events. The present studies explore neural representations of temporal modulations and the effects of attention on these representations. Temporal modulation plays a significant role in speech perception and auditory scene analysis.
Uncovering how temporal modulations are processed and represented is potentially of great importance for our general understanding of the auditory system. Neural representations of compound modulations were investigated with magnetoencephalography (MEG). Interaction components are generated by near rather than distant modulation rhythms, suggesting band-limited modulation filter banks operating in the central stage of the auditory system. Furthermore, the slowest detectable neural oscillation in the auditory cortex corresponds to the perceived oscillation of the auditory percept. Interactions between stimulus-evoked and goal-related neural responses were investigated in simultaneous behavioral-neurophysiological studies, in which we manipulate subjects' attention to different components of an auditory scene. Our experimental results reveal that attention to the target correlates with a sustained increase in the neural target representation, beyond well-known transient effects. The enhancement of power and phase coherence presumably reflects increased local and global synchronization in the brain. Furthermore, the target's perceptual detectability improves over time (several seconds), correlating strongly with the neural buildup of the target representation. The change in cortical representations can be reversed on a short time scale (several minutes) by various behavioral goals. These results demonstrate that the neural representation of the percept is encoded using the feature-driven mechanisms of sensory cortex, but shaped in a sustained manner via attention-driven projections from higher-level areas. This adaptation of neural representations occurs on multiple time scales (seconds vs. minutes) and multiple spatial scales (local vs. global synchronization).
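Phase coherence of this kind is conventionally quantified across trials as inter-trial phase coherence: the length of the mean unit phasor at a given frequency. A minimal sketch on simulated phases (the data here are synthetic, not from the study):

```python
import numpy as np

def inter_trial_coherence(coeffs):
    """Inter-trial phase coherence at one frequency.

    coeffs : complex array (n_trials,) of per-trial spectral coefficients
             at the frequency of interest.
    Returns a value in [0, 1]; 1 means perfectly aligned phases.
    """
    unit_phasors = np.exp(1j * np.angle(coeffs))
    return float(np.abs(unit_phasors.mean()))

rng = np.random.default_rng(1)
locked = np.exp(1j * (0.3 + 0.05 * rng.standard_normal(200)))  # tightly phase-locked trials
random = np.exp(1j * rng.uniform(0.0, 2 * np.pi, 200))         # no phase locking
print(inter_trial_coherence(locked))   # close to 1
print(inter_trial_coherence(random))   # close to 0
```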
Such multiple resolutions of adaptation may underlie general mechanisms of scene organization in any sensory modality and may contribute to our highly adaptive behaviors.

Item Memory-related cognitive modulation of human auditory cortex: Magnetoencephalography-based validation of a computational model (2008-04-09) Rong, Feng; Contreras-Vidal, José L.; Neuroscience and Cognitive Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
It is well known that cognitive functions exert task-specific modulation of the response properties of human auditory cortex. However, the underlying neuronal mechanisms are not yet well understood. In this dissertation I present a novel approach for integrating 'bottom-up' (neural network modeling) and 'top-down' (experimental) methods to study the dynamics of cortical circuits correlated with short-term memory (STM) processing that underlie the task-specific modulation of human auditory perception during performance of the delayed-match-to-sample (DMS) task. The experimental approach measures high-density magnetoencephalography (MEG) signals from human participants to investigate the modulation of human auditory evoked responses (AER) induced by the overt processing of auditory STM during task performance. To accomplish this goal, a new signal processing method based on independent component analysis (ICA) was developed for removing artifact contamination from the MEG recordings and investigating the functional neural circuits underlying the task-specific modulation of the human AER. The computational approach uses a large-scale neural network model, based on electrophysiological knowledge of the involved brain regions, to simulate system-level neural dynamics related to auditory object processing and performance of the corresponding tasks.
Moreover, synthetic MEG and functional magnetic resonance imaging (fMRI) signals were simulated with forward models and compared to current and previous experimental findings. Consistently, both simulation and experimental results demonstrate a DMS-specific suppressive modulation of the AER and a corresponding increase in connectivity between temporal auditory and frontal cognitive regions. Overall, the integrated approach illustrates how biologically plausible neural network models of the brain can increase our understanding of brain mechanisms and their computations at multiple levels, from sensory input to behavioral output, with the intermediate steps defined.
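ICA-based artifact removal of the kind described in this abstract can be illustrated with a toy blind source separation. Below is a bare-bones FastICA sketch (symmetric orthogonalization, tanh nonlinearity) on two simulated channels; it is a stand-in for the dissertation's actual method, and the "rhythm" and "blink" signals are invented for illustration.

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal FastICA: unmix X (n_channels, n_samples) into sources."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten the data so channels are uncorrelated with unit variance.
    d, E = np.linalg.eigh(np.cov(X))
    Xw = E @ np.diag(d ** -0.5) @ E.T @ X
    n = X.shape[0]
    W = np.random.default_rng(seed).standard_normal((n, n))
    for _ in range(n_iter):
        g = np.tanh(W @ Xw)
        # Fixed-point update with the tanh (log-cosh) contrast function.
        W = (g @ Xw.T) / Xw.shape[1] - np.diag((1 - g ** 2).mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W')^(-1/2) W
        s, u = np.linalg.eigh(W @ W.T)
        W = u @ np.diag(s ** -0.5) @ u.T @ W
    return W @ Xw

# Two invented sources, an ongoing rhythm and a sparse blink-like artifact,
# mixed into two sensor channels. ICA recovers them up to order and sign.
t = np.linspace(0.0, 1.0, 4000)
rhythm = np.sin(2 * np.pi * 11 * t)
blink = (np.sin(2 * np.pi * 0.7 * t) > 0.98).astype(float)
sensors = np.array([[1.0, 0.5], [0.8, 1.0]]) @ np.vstack([rhythm, blink])
sources = fastica(sensors)
```

Artifact removal then amounts to identifying the artifact component (e.g., by its correlation with an ocular channel) and projecting it out before re-mixing the remaining components.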