Browsing by Author "Presacco, Alessandro"
Now showing 1 - 5 of 5
Item: Dynamic Estimation of Auditory Temporal Response Functions via State-Space Models with Gaussian Mixture Process Noise (PLOS Computational Biology, 2020-08-02)
Presacco, Alessandro; Miran, Sina; Fu, Michael; Marcus, Steven; Simon, Jonathan; Babadi, Behtash
MEG data used for the "Switching attention" experiment.

Item: Dynamic Estimation of Auditory Temporal Response Functions via State-Space Models with Gaussian Mixture Process Noise (PLOS Computational Biology, 2020-08-02)
Presacco, Alessandro; Miran, Sina; Fu, Michael; Marcus, Steven; Simon, Jonathan; Babadi, Behtash
MEG data used for the "Switching attention" experiment. This dataset covers the part of the experiment with "forced" switching of attention.

Item: EEG-MEG (2018)
Presacco, Alessandro; Simon, Jonathan; Anderson, Samira
Data collected from normal-hearing younger adults (18-30) and from normal-hearing and hearing-impaired older adults (>= 60) to study age-related deficits in the representation of speech in noise. EEG data were collected with a Biosemi system using the ABR module, from one electrode placed at Cz and referenced to the left and right earlobes. The uploaded data are the raw recordings in BDF format. BDF files can be opened with MATLAB scripts found in toolboxes such as EEGLab, or obtained directly from Biosemi's website. Each mat file contains information about sampling frequency, channels, triggers, etc. Each participant was tested in 9 conditions: quiet, and +3 dB, 0 dB, -3 dB, and -6 dB SNR with English (H) and Dutch (L) speakers used as background noise. Files were named based on the SNR and background speaker used. For instance, S01_M3_H means Subject 01, SNR = -3 dB, and an English speaker in the background, while S01_P3_L means Subject 01, SNR = +3 dB, and a Dutch speaker in the background. The "Q" denomination is used for the quiet condition.
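The filename convention just described can be decoded programmatically. Below is a minimal sketch (not part of the dataset) of such a decoder; only the tokens given in the description (P3, M3, Q, H, L) come from the source, while the 0 dB and -6 dB tokens ("0", "M6") and the bare quiet form (e.g. "S05_Q") are assumptions by analogy.

```python
# Hypothetical helper for decoding dataset filenames such as "S01_M3_H".
# Tokens P3, M3, Q, H, L are from the dataset description; "0" and "M6"
# are assumed by analogy and may differ in the actual files.
SNR_TOKENS = {"Q": "quiet", "P3": "+3 dB", "0": "0 dB", "M3": "-3 dB", "M6": "-6 dB"}
LANG_TOKENS = {"H": "English", "L": "Dutch"}

def parse_condition_filename(name):
    """Return (subject, SNR condition, background language) for a filename stem."""
    parts = name.split("_")
    subject = parts[0]                 # e.g. "S01"
    snr = SNR_TOKENS[parts[1]]         # e.g. "M3" -> "-3 dB"
    # Quiet recordings are assumed to omit the background-speaker token.
    speaker = LANG_TOKENS[parts[2]] if len(parts) > 2 else None
    return subject, snr, speaker

print(parse_condition_filename("S01_M3_H"))  # ('S01', '-3 dB', 'English')
print(parse_condition_filename("S01_P3_L"))  # ('S01', '+3 dB', 'Dutch')
```

The raw BDF recordings themselves can also be read outside MATLAB; for example, MNE-Python provides `mne.io.read_raw_bdf` for Biosemi files.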
Subjects with id# 1 to 17 are younger adults, subjects with id# 21 to 35 are normal-hearing older adults, and subjects named S01_HL ... S17_HL are older adults with hearing loss. MEG data were collected from 157 sensors. Each mat file contains information about the data, such as the sampling frequency. A 3D matrix stores the 3 repetitions recorded for each condition. Each participant was tested in 10 conditions: quiet, and +3 dB, 0 dB, -3 dB, and -6 dB SNR with English (H) and Dutch (L) speakers used as background noise; two quiet conditions were played. Files were named based on the SNR and background speaker used, in the same way as for the EEG data. The auditory stimuli were also uploaded. The stimuli have been pre-processed as described in our publications "Evidence of degraded representation of speech in noise, in the aging midbrain and cortex" and "Effect of informational content of noise on speech representation in the aging midbrain and cortex". The envelope needs to be extracted.

Item: EFFECTS OF AGING ON MIDBRAIN AND CORTICAL SPEECH-IN-NOISE PROCESSING (2016)
Presacco, Alessandro; Anderson, Samira; Simon, Jonathan Z.; Neuroscience and Cognitive Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Older adults frequently report that they can hear what they have been told but cannot understand the meaning. This is particularly true in noisy conditions, where the additional challenge of suppressing irrelevant noise (i.e., a competing talker) adds another layer of difficulty to their speech understanding. Hearing aids improve speech perception in quiet, but their success in noisy environments has been modest, suggesting that peripheral hearing loss may not be the only factor in older adults' perceptual difficulties. Recent animal studies have shown that auditory synapses and cells undergo significant age-related changes that could impact the integrity of temporal processing in the central auditory system.
Psychoacoustic studies carried out in humans have also shown that hearing loss can explain the decline in older adults' performance in quiet compared to younger adults, but these psychoacoustic measurements do not accurately describe auditory deficits in noisy conditions. These results suggest that temporal auditory processing deficits could play an important role in explaining the reduced ability of older adults to process speech in noisy environments. The goals of this dissertation were to understand how age affects neural auditory mechanisms and at which level in the auditory system these changes are particularly relevant for explaining speech-in-noise problems. Specifically, we used non-invasive neuroimaging techniques to tap into the midbrain and the cortex in order to analyze how auditory stimuli are processed in younger (our standard) and older adults. We also investigated a possible interaction between processing carried out in the midbrain and in the cortex.

Item: High Frequency Cortical Processing of Continuous Speech in Younger and Older Listeners - Dataset (2019)
Kulasingham, Joshua; Brodbeck, Christian; Presacco, Alessandro; Kuchinsky, Stefanie E.; Anderson, Samira; Simon, Jonathan Z.
Neural processing along the ascending auditory pathway is often associated with a progressive reduction in characteristic processing rates. For instance, the well-known frequency-following response (FFR) of the auditory midbrain, as measured with electroencephalography (EEG), is dominated by frequencies from ~100 Hz to several hundred Hz, phase-locking to the stimulus waveform at those frequencies. In contrast, cortical responses, whether measured by EEG or magnetoencephalography (MEG), are typically characterized by frequencies of a few Hz to a few tens of Hz, time-locking to acoustic envelope features. In this study we investigated a crossover: cortically generated responses time-locked to continuous speech features at FFR-like rates.
Using MEG, we analyzed high-frequency responses (70-300 Hz) to continuous speech using neural source-localized reverse correlation and its corresponding temporal response functions (TRFs). Continuous speech stimuli were presented to 40 subjects (17 younger, 23 older adults) with clinically normal hearing and their MEG responses were analyzed in the 70-300 Hz band. Consistent with the insensitivity of MEG to many subcortical structures, the spatiotemporal profile of these response components indicated a purely cortical origin with ~40 ms peak latency and a right hemisphere bias. TRF analysis was performed using two separate aspects of the speech stimuli: a) the 70-300 Hz band of the speech waveform itself, and b) the 70-300 Hz temporal modulations in the high frequency envelope (300-4000 Hz) of the speech stimulus. The response was dominantly driven by the high frequency envelope, with a much weaker contribution from the waveform (carrier) itself. Age-related differences were also analyzed to investigate a reversal previously seen along the ascending auditory pathway, whereby older listeners show weaker midbrain FFR responses than younger listeners, but, paradoxically, have stronger cortical low frequency responses. In contrast to both these earlier results, this study does not find clear age-related differences in high frequency cortical responses. Finally, these results suggest that EEG high (FFR-like) frequency responses have distinct and separable contributions from both subcortical and cortical sources. Cortical responses at FFR-like frequencies share some properties with midbrain responses at the same frequencies and with cortical responses at much lower frequencies.
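The TRF analysis described above is, in its standard formulation, a regularized reverse correlation: a linear filter is fit that maps lagged copies of a stimulus feature onto the neural response. The following is a minimal single-channel sketch of that general technique (ridge regression on a lagged stimulus matrix), not the source-localized MEG pipeline used in the study; the regularization value and lag window are illustrative assumptions.

```python
import numpy as np

def estimate_trf(stimulus, response, n_lags, lam=1.0):
    """Estimate a temporal response function by regularized reverse
    correlation (ridge regression on a lagged stimulus matrix).

    stimulus, response: 1-D arrays of equal length at the same sampling rate.
    n_lags: number of lag samples in the TRF window.
    lam: ridge regularization strength (a free parameter, chosen here
         for illustration, not taken from the study).
    """
    n = len(stimulus)
    # Design matrix: column k holds the stimulus delayed by k samples.
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = stimulus[:n - k]
    # Ridge solution: (X'X + lam*I)^-1 X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ response)

# Toy check: a response that is a delayed copy of the stimulus should
# yield a TRF peaking at that delay.
rng = np.random.default_rng(0)
stim = rng.standard_normal(2000)
resp = np.roll(stim, 5)
resp[:5] = 0.0
trf = estimate_trf(stim, resp, n_lags=20)
print(int(np.argmax(np.abs(trf))))  # 5
```

In practice, published TRF toolchains add cross-validated regularization and multichannel (or source-space) responses on top of this same core estimator.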