Hearing & Speech Sciences
Browsing Hearing & Speech Sciences by Title
Now showing 1–20 of 132
Acoustic-Lexical Characteristics of Child-Directed Speech Between 7 and 24 Months and Their Impact on Toddlers' Phonological Processing (Frontiers, 2021-09-24). Cychosz, Margaret; Edwards, Jan R.; Ratner, Nan Bernstein; Eaton, Catherine Torrington; Newman, Rochelle S.
Speech-language input from adult caregivers is a strong predictor of children's developmental outcomes. But the properties of this child-directed speech are not static over the first months or years of a child's life. This study assesses a large cohort of children and caregivers (n = 84) at 7, 10, 18, and 24 months to document (1) how a battery of phonetic, phonological, and lexical characteristics of child-directed speech changes in the first 2 years of life and (2) how input at these different stages predicts toddlers' phonological processing and vocabulary size at 2 years. Results show that most measures of child-directed speech do change as children age, and certain characteristics, like hyperarticulation, actually peak at 24 months. For language outcomes, children's phonological processing benefited from exposure to longer (in phonemes) words, more diverse word types, and enhanced coarticulation in their input. It is proposed that longer words in the input may stimulate children's phonological working memory development, while heightened coarticulation simultaneously introduces important sublexical cues and exposes children to challenging, naturalistic speech, leading to overall stronger phonological processing outcomes.

Age Effects on Perceptual Organization of Speech in Realistic Environments (2017). Bologna, William Joseph; Dubno, Judy R.; Gordon-Salant, Sandra; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Communication often occurs in environments where background sounds fluctuate and mask portions of the intended message.
Listeners use envelope and periodicity cues to group together audible glimpses of speech and fill in missing information. When the background contains other talkers, listeners also use focused attention to select the appropriate target talker and ignore competing talkers. Although older adults are known to experience significantly more difficulty with these challenging tasks than younger adults, the sources of these difficulties remain unclear. In this project, three related experiments explored the effects of aging on several aspects of speech understanding in realistic listening environments. Experiments 1 and 2 determined the extent to which aging affects the benefit of envelope and periodicity cues for recognition of short glimpses of speech, phonemic restoration of missing speech segments, and/or segregation of glimpses with a competing talker. Experiment 3 investigated effects of age on the ability to focus attention on an expected voice in a two-talker environment. Twenty younger adults and 20 older adults with normal hearing participated in all three experiments and also completed a battery of cognitive measures to examine contributions of specific cognitive abilities to speech recognition. Keyword recognition and cognitive data were analyzed with an item-level logistic regression based on a generalized linear mixed model. Results indicated that older adults were poorer than younger adults at glimpsing short segments of speech but were able to use envelope and periodicity cues to facilitate phonemic restoration and speech segregation. Although older adults performed more poorly than younger adults overall, the groups did not differ in their ability to focus attention on an expected voice. Across all three experiments, older adults were poorer than younger adults at recognizing speech from a female talker, both in quiet and with a competing talker.
Results of cognitive tasks indicated that faster processing speed and better visual-linguistic closure were predictive of better speech understanding. Taken together, these results suggest that age-related declines in speech recognition may be partially explained by difficulty grouping short glimpses of speech into a coherent message, which may be particularly difficult for older adults when the talker is female.

Age-related Effects on the Threshold Equalizing Noise (TEN) Test (2008-05-30). Gmitter, Christine; Gordon-Salant, Sandra.
Some individuals with sensorineural hearing loss have certain places along the basilar membrane where inner hair cells and/or neurons are damaged or destroyed and consequently have ceased to function. These regions have been referred to as "dead regions" in the literature. The TEN (HL) test is a relatively quick behavioral test designed to identify cochlear dead regions. The test relies on the detection of pure-tone signals in the presence of a specially designed broadband noise (threshold equalizing noise) masker. The TEN (HL) test was validated on young to middle-aged adult listeners, an age group which does not represent that of all adults with hearing loss. The goal of this study was to evaluate the effects of age on the TEN (HL) test. The TEN (HL) test was administered to 18 younger and 18 older adults with normal to near-normal hearing sensitivity at seven different frequencies in three different levels of TEN noise. These measures were conducted twice to assess test-retest reliability. The older group demonstrated significantly poorer (higher) SNRs compared to the younger group at all three TEN noise levels and for all seven test frequencies. The greatest difference between groups was observed for the highest level of TEN noise.
For both groups, SNRs at 4000 Hz differed most from those at the other test frequencies, with both groups performing best (lowest SNRs) at 4000 Hz. Finally, a main effect of trial was found, revealing that both groups performed statistically better (lower SNRs) on the second trial; however, the small magnitude of this improvement (0.37 dB) suggests that the TEN (HL) test has good repeatability for clinical use, at least within the time period assessed. Although there were significant differences between the two groups, overall the TEN (HL) test yielded accurate results in classifying all normal to near-normal hearing participants as not having a dead region. The significantly higher (poorer) SNRs associated with age, combined with the expected difference in SNRs associated with hearing loss, may allow for older hearing-impaired individuals to demonstrate abnormally high SNRs on the TEN (HL) test in the absence of a cochlear dead region. Future studies that include younger and older participants with normal hearing and hearing loss are needed to assess these differences and examine whether different norms are needed for this older population.

Age-Related Temporal Processing Deficits in Word Segments in Adult Cochlear-Implant Users (Sage, 2019-12-06). Xie, Zilong; Gaskins, Casey R.; Shader, Maureen J.; Gordon-Salant, Sandra; Anderson, Samira; Goupell, Matthew J.
Aging may limit speech understanding outcomes in cochlear-implant (CI) users. Here, we examined age-related declines in auditory temporal processing as a potential mechanism that underlies speech understanding deficits associated with aging in CI users. Auditory temporal processing was assessed with a categorization task for the words dish and ditch (i.e., identify each token as the word dish or ditch) on a continuum of speech tokens with varying silence duration (0 to 60 ms) prior to the final fricative.
In Experiments 1 and 2, younger CI (YCI), middle-aged CI (MCI), and older CI (OCI) users participated in the categorization task across a range of presentation levels (25 to 85 dB). Relative to YCI, OCI required longer silence durations to identify ditch and exhibited reduced ability to distinguish the words dish and ditch (shallower slopes in the categorization function). Critically, we observed age-related performance differences only at higher presentation levels. This contrasted with findings from normal-hearing listeners in Experiment 3 that demonstrated age-related performance differences independent of presentation level. In summary, aging in CI users appears to degrade the ability to utilize brief temporal cues in word identification, particularly at high levels. Age-specific CI programming may potentially improve clinical outcomes for speech understanding performance by older CI listeners.

An Analysis of Code Switching Events in Typically Developing Spanish-English Bilingual Children (2020). Guevara, Sandra Stephanie; Ratner, Nan.
Code-switching (CS) patterns were investigated in language samples of 14 typically-developing Spanish-English bilingual preschool-aged children. CS occurred primarily when the children spoke in Spanish. We investigated code-switched events, vocabulary measures, and disfluencies to better understand whether children utilize code-switching to fill in lexical gaps in Spanish, as measured by disfluencies surrounding the code-switch. Results indicate that children's spoken vocabulary diversity is not related to code-switching frequency, although their receptive vocabulary skills are negatively correlated with proportions of code-switched events. We also found no significant relationship between code-switched events and disfluencies across participants.
Findings suggest clinical implications related to best practice for speech-language pathologists working with bilingual children as they observe language attrition and code-switching related to language proficiency and dominance.

Auditory Temporal Processing Ability in Cochlear-Implant Users: The Effects of Age and Peripheral Neural Survival (2019). Shader, Maureen Joyce; Goupell, Matthew J.
Cochlear implants (CIs) are a valuable tool in the treatment of hearing loss and are considered a safe and effective option for adults of all ages. Nevertheless, older adults with CIs do not always achieve speech recognition performance comparable to that of younger adults following implantation. The mechanism(s) underlying this age limitation are unknown. It was hypothesized that older CI users would demonstrate age-related deficits in auditory temporal processing ability, which could contribute to an age limitation in CI performance, because the ability to accurately encode temporal information is critical to speech recognition through a CI. The current studies were aimed at identifying age-related limitations for processing temporal information using a variety of electrical stimulation parameters, with the goal of identifying parameters that could mitigate the negative effects of age on CI performance. Studies 1 and 2 measured auditory temporal processing ability for non-speech signals at the single-electrode level for various electrical stimulation rates. Specifically, Study 1 measured gap detection thresholds, which constitute a simple, static measurement of temporal processing. Study 2 measured amplitude-modulation detection thresholds, which utilized relatively more complex and dynamic signals. Peripheral neural survival was estimated for each electrode location tested in Studies 1 and 2.
Study 3 measured phoneme recognition ability for consonant contrasts that varied in discrete temporal cues at multiple stimulation rates and envelope modulation frequencies. Results demonstrated significant effects of age and/or peripheral neural survival on temporal processing ability in each study. However, age and the degree of neural survival were often strongly correlated, with older participants exhibiting poorer neural survival compared to younger participants. This result suggested that a substantial reduction in peripheral neural survival accompanies aging in older CI users, and that these factors should be considered together, rather than separately. Parametric variation in the stimulation settings impacted performance for some participants, but this effect was not consistent across participants, nor was it predicted by age or peripheral neural survival.

Automatic Syntactic Processing in Agrammatic Aphasia: The Effect of Grammatical Violations (2020). Kim, Minsun; Faroqi-Shah, Yasmeen.
This study aimed to examine syntactic processing in agrammatic aphasia. We hypothesized that agrammatic individuals' automatic syntactic processing would be preserved, as measured by a word-monitoring task; that their knowledge of syntactic constraints would be impaired, as measured by a sentence-judgment task; and that their performance would vary by type of syntactic violation. The study found that sentence processing in agrammatism differed based on the type of violation in both tasks: preserved for semantic and tense violations and impaired for word category violations. However, there was no correlation between the two tasks. Furthermore, single-subject analyses showed that automatic syntactic processing for word category violations does not seem to be impaired in aphasia.
Based on these findings, this study suggests that knowledge of syntactic constraints and automatic syntactic processing may be relatively independent abilities. Findings also suggest that individuals with agrammatic aphasia may have preserved automatic syntactic processing.

The benefits of acoustic input to combined electric and contralateral acoustic hearing (2008-08-01). Zhang, Ting; Gordon-Salant, Sandra; Dorman, Michael F.
With the extension of cochlear implant candidacy, more and more cochlear-implant listeners fitted with a traditional long electrode array or a partial-insertion electrode array have residual acoustic hearing either in the nonimplanted ear or in both ears, and these listeners have been shown to receive significant speech-perception benefits from the low-frequency acoustic information provided by residual acoustic hearing. The aim of Experiment 1 was to assess the minimum amount of low-frequency acoustic information required to achieve speech-perception benefits, both in quiet and in noise, from combined electric and contralateral acoustic stimulation (EAS). Speech-recognition performance for consonant-nucleus vowel-consonant (CNC) words in quiet and AzBio sentences in a competing babble noise at +10 dB SNR was evaluated in nine cochlear-implant subjects with residual acoustic hearing in the nonimplanted ear in three listening conditions: acoustic stimulation alone, electric stimulation alone, and combined contralateral EAS. The results showed that adding low-frequency acoustic information to electrically stimulated information led to an overall improvement in speech-recognition performance for both words in quiet and sentences in noise.
This improvement was observed even when the acoustic information was limited to 125 Hz, suggesting that the benefits were primarily due to the voice-pitch information provided by residual acoustic hearing. A further improvement in speech-recognition performance was also observed for sentences in noise, suggesting that part of the improvement in performance was likely due to the improved spectral representation of the first formant. The aims of Experiments 2 and 3 were to investigate the underlying psychophysical mechanisms of the contribution of the acoustic input to electric hearing. Temporal Modulation Transfer Functions (TMTFs) and Spectral Modulation Transfer Functions (SMTFs) were measured in three stimulation conditions: acoustic stimulation alone, electric stimulation alone, and combined contralateral EAS. The results showed that the temporal resolution of acoustic hearing was as good as that of electric hearing and the spectral resolution of acoustic hearing was better than that of electric hearing, suggesting that the speech-perception benefits were attributable to the normal temporal resolution and the better spectral resolution of residual acoustic hearing. The present dissertation research provided important information about the benefits of low-frequency acoustic input added to electric hearing in cochlear-implant listeners with some residual hearing. The overall results reinforced the importance of preserving residual acoustic hearing in cochlear-implant listeners.

The benefits of closed captioning for elderly hearing aid users (2007-08-02). Callahan, Julia Susan; Gordon-Salant, Sandra.
The purpose of this study was to determine the effects of closed captioning and hearing aid use on word recognition of televised materials in a sample of 15 older adults with hearing loss who use hearing aids.
Participants viewed television segments in four viewing conditions: 1) without hearing aids or closed captioning (BSLN), 2) with hearing aids (HA), 3) with closed captioning (CC), and 4) with hearing aids and closed captioning (HA+CC). Three types of programming (game show, drama, and news) comprised the stimulus sentences. Anecdotal reports suggest that older hearing-impaired people do not use closed captioning, despite its potential benefit for understanding television. The extent to which listeners use closed captioning and hearing aids on a daily basis was also examined. It was expected that listeners would have considerable difficulty in the BSLN condition, because the primary cue is speechreading alone. The HA condition was expected to produce significantly higher scores, because listeners would be able to combine information from two modalities: vision (speechreading) and hearing. It was predicted that CC would yield higher scores than these two conditions, because the visual text signal provides unambiguous information, and that the combined HA+CC condition would produce the highest scores. In addition, differences in speech recognition scores were expected for different program types. One prediction was that drama programming would result in consistently lower speech recognition scores due to reduced availability of visual cues compared to game show or news programming. Results indicated that 77% of participants reported never using closed captioning when watching television, although most wore hearing aids during television viewing. There was a significant effect of listening/viewing condition for all three program types. For all program types, participants achieved higher word recognition scores in the CC and HA+CC conditions than in the HA or BSLN conditions. There was no significant difference in performance between the BSLN and HA conditions.
These findings indicate that older people with hearing loss do not receive significant benefit from hearing aid use while watching television. However, closed captioning appears to provide significant benefit to older hearing aid users, even though they seldom use this technology.

The Broad Autism Phenotype Within Mother-Child Interactions (2012). Royster, Christina; Ratner, Nan.
This study sought to identify features of the Broad Autism Phenotype (BAP) expressed by mothers during interactions with their infants to further understand how these features relate to early indicators of autism. Twelve mothers who had an older child with autism were selected, and the control group included twelve mothers who did not. Results demonstrated that the groups of mothers did not have significantly different responses on the BAP assessment, and they did not differ in any features of interactions, except that the experimental group used less inhibitory language. Children in the experimental group had lower language scores than the controls. When subjects were divided into groups based upon both child responsiveness and maternal BAP traits, subsequent patterns indicated four mother-child profiles, suggesting that a combination of maternal BAP characteristics and child behavior might influence interaction outcomes. Further research regarding BAP features as an early indicator for autism is discussed.

Caregiver–Child Interactions and Their Impact on Children's Fluency: Implications for Treatment (American Speech-Language-Hearing Association, 2004-01). Ratner, Nan Bernstein.
There is a relatively strong focus in the stuttering literature on the desirability of selected alterations in parental speech and language style in the management of early stuttering.
In this article, the existing research support for such recommendations is evaluated, together with relevant research from the normal language acquisition literature that bears on the potential consequences of changing parental interaction style. Recommendations with relatively stronger and weaker support are discussed. Ways in which children's communication styles and fluency may be altered through newer fluency treatment protocols are contrasted with older, more general parent advisements. Finally, directions for future research into the efficacy of recommendations made to the parents of children who stutter (CWS) are offered.

Characterizing the Auditory Phenotype of Niemann-Pick, Type C Disease: A Comparative Examination of Humans and Mice (2011). King, Kelly Anne; Gordon-Salant, Sandra; Brewer, Carmen.
Niemann-Pick, type C disease (NPC) is a rare (1:120,000-150,000) autosomal recessive lysosomal lipidosis resulting in a progressive and fatal neurological deterioration. There is much about the pathogenesis and natural history of this complex, heterogeneous disorder that remains unknown. Limited literature suggests auditory dysfunction is part of the phenotype, but this aspect of the disease process is poorly understood and has likely been underreported. Experiment one includes auditory data from 55 patients with NPC seen at the National Institutes of Health between 8/14/2006 and 12/27/2010. These data confirm a prevalent high-frequency hearing loss that progressively worsens in at least some individuals. Retrocochlear involvement is common, with abnormalities that suggest a profile of auditory neuropathy spectrum disorder in some patients. Analysis of late-onset cases suggests hearing loss is a premonitory symptom in this disease subcategory.
The investigation was expanded to include the mouse model for NPC (BALB/cNctr-Npc1m1N/J), in which symptomatology is clinically, biochemically, and morphologically comparable with that of affected humans. There have been no previous reports of auditory function in NPC mice, although brainstem histopathology has been localized to the auditory pathway. Experiment two includes auditory brainstem response (ABR) and otoacoustic emission (OAE) data revealing a high-frequency hearing loss in mutant NPC mice as early as postnatal day (p) 20, which becomes progressively poorer across the experimental lifespan. With support for both a cochlear and a retrocochlear site of lesion, OAE level and ABR latency data provide surprising evidence for a disruption in maturational development of the auditory system in diseased animals, which may add a unique perspective on the role of NPC pathogenesis. This comparative, translational study has, for the first time, comprehensively addressed the existence and implications of auditory dysfunction in NPC. Similar auditory phenotypes between affected humans and mutant mice should aid future efforts to refine site of lesion. In combination, these data support the auditory system as a useful marker for disease status and provide valuable prognostic and quality-of-life information for patients and their families.

A communication partner training program: Assessing conversational behaviors and attitudes towards communication in Persons with Aphasia and their Communication Partners (2016). Yutesler, Allison E. Carlson; Faroqi-Shah, Yasmeen.
This study examined the conversational behaviors of eleven dyads, each consisting of a person with aphasia (PWA) and their familiar communication partner (CP), and investigated changes in behaviors as a result of attending a communication partner training program (CPT).
Attitudes about communication were examined and related to conversational behaviors observed pre- and post-training. Results indicated that CPs and PWA used significantly more facilitating behaviors than barrier behaviors, although most dyads experienced some barriers. A comparison of pre- and post-CPT conversations revealed a significant interaction between time and type of behavior, with the increase in the number of facilitators approaching significance. Overall, persons with aphasia and their conversational partners expressed positive attitudes about communication. There were no significant correlations between scores on attitude surveys and behaviors pre- or post-training. This study demonstrated that these dyads employed facilitative conversational behaviors even before CPT, and that facilitative behaviors can increase after a one-day training workshop.

A comparison of lexical access in adults who do and do not stutter (2015). Howell, Timothy Andrew; Bernstein Ratner, Nan.
Previous work has postulated that a deficit in lexicalization may be an underlying cause of a stuttering disorder (Prins, Main, & Wampler, 1997; Wingate, 1988). This study investigates the time course of lexicalization of nouns and verbs in adults who stutter. A generalized phoneme monitoring (GPM) paradigm was used. Both populations showed a significant effect of word class (verbs yielded slower and less accurate monitoring than nouns), as well as phoneme position (word-medial/final phonemes yielded slower and less accurate monitoring than word-initial phonemes). Few significant differences were found between groups, although the experimental group showed poorer performance in all conditions, with the exception of null trials, where the experimental group actually out-performed the control group.
The trends provide some support for the notion that people who stutter have a deficit in lexicalization, although this interpretation is tempered by the lack of statistical significance.

Concussion in Women's Flat-Track Roller Derby (Frontiers, 2022-02-14). Stockbridge, Melissa D.; Keser, Zafer; Newman, Rochelle S.
Concussions are common in flat-track roller derby, a unique and under-studied sport, but little has been done to assess how common they are or what players can do to manage injury risk. The purpose of this study is to provide an epidemiological investigation of concussion incidence and experience in a large international sampling of roller derby players. Six hundred sixty-five roller derby players from 25 countries responded to a comprehensive online survey about injury and sport participation. Participants also responded to a battery of psychometric assessment tools targeting risk factors for poor injury recovery (negative bias, social support, mental toughness) and players' thoughts and feelings in response to injury. Per 1,000 athletes, 790.98 concussions were reported. Current players reported an average of 2.2 concussions, while former players reported 3.1 concussions. However, groups were matched when these figures were corrected for differences in years of play (approximately one concussion every 2 years). Other frequent injuries included fractures in extremities and upper limbs, torn knee ligaments, and sprained ankles. We found no evidence that players' position, full-contact scrimmages, or flooring impacted number of concussions. However, neurological history and uncorrected vision were more influential predictors of an individual's number of concussions during roller derby than years of participation or age, though all four contributed significantly.
These findings should assist athletes in making informed decisions about participation in roller derby, though more work is needed to understand the nature of risk.

Connected Language in Primary Progressive Aphasia: Testing the Utility of Linguistic Measures in Differentially Diagnosing PPA and its Variants (2017). Vander Woude, Ashlyn; Faroqi-Shah, Yasmeen.
Difficulty in using language is the primary impairment in Primary Progressive Aphasia (PPA). Individuals with different variants of PPA have been shown to have unequal deficits in various domains of language; however, little research has focused on finding common deficits in PPA that could aid in its differential diagnosis relative to healthy aging and age-related neurodegenerative conditions. The commonality of deficits across variants of PPA was explored in this study by examining the connected speech of 26 individuals with PPA (10 with PPA-G, 9 with PPA-L, 7 with PPA-S), compared to 25 neurologically healthy controls, 20 individuals with Mild Cognitive Impairment (MCI), and 20 individuals with Alzheimer's Dementia (AD). Measures of fluency, word retrieval, and syntax were used to assess linguistic ability in a between-groups comparison, in addition to a within-groups comparison of the same linguistic measures among specific PPA variants. It was found that participants with PPA showed significant deficits on certain measures of fluency, word retrieval, and syntax.
These findings support the idea that a brief language sample has clinical utility in contributing to the differential diagnosis of PPA.

Detection and Recognition of Asynchronous Auditory/Visual Speech: Effects of Age, Hearing Loss, and Talker Accent (2022-01). Gordon-Salant, Sandra; Schwartz, Maya; Oppler, Kelsey; Yeni-Komshian, Grace.
This investigation examined age-related differences in auditory-visual (AV) integration as reflected in perceptual judgments of temporally misaligned AV English sentences spoken by native English and native Spanish talkers. In the detection task, it was expected that the slowed auditory temporal processing of older participants, relative to younger participants, would be manifest as a shift in the range over which participants would judge asynchronous stimuli as synchronous (referred to as the "AV simultaneity window"). The older participants were also expected to exhibit greater declines in speech recognition for asynchronous AV stimuli than younger participants. Talker accent was hypothesized to influence listener performance, with older listeners exhibiting a greater narrowing of the AV simultaneity window and much poorer recognition of asynchronous AV foreign-accented speech compared to younger listeners. Participant groups included younger and older participants with normal hearing and older participants with hearing loss. Stimuli were video recordings of sentences produced by native English and native Spanish talkers. The video recordings were altered in 50-ms steps by delaying either the audio or video onset. Participants performed a detection task in which they judged whether the sentences were synchronous or asynchronous, and performed a recognition task for multiple synchronous and asynchronous conditions. Both the detection and recognition tasks were conducted at the individualized signal-to-noise ratio (SNR) corresponding to approximately 70% correct speech recognition performance for synchronous AV sentences.
Older listeners with and without hearing loss generally showed wider AV simultaneity windows than younger listeners, possibly reflecting slowed auditory temporal processing in auditory lead conditions and reduced sensitivity to asynchrony in auditory lag conditions. However, older and younger listeners were affected similarly by misalignment of auditory and visual signal onsets on the speech recognition task. This suggests that older listeners are negatively impacted by temporal misalignments for speech recognition, even when they do not notice that the stimuli are asynchronous. Overall, the findings show that when listener performance is equated for simultaneous AV speech signals, age effects are apparent in detection judgments but not in recognition of asynchronous speech.Item Determining the Mechanisms of Spoken Language Processing Delay for Children with Cochlear Implants(2023) Blomquist, Christina Marie; Edwards, Jan R; Newman, Rochelle S; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)The long-term objective of this project was to better understand how shorter auditory experience and spectral degradation of the cochlear implant (CI) signal impact spoken language processing in deaf children with CIs. The specific objective of this research was to utilize psycholinguistic methods to investigate the mechanisms underlying observed delays in spoken word recognition and the access of networks of semantically related words in the lexicon, which are both vital components for efficient spoken language comprehension. The first experiment used eye-tracking to investigate the contributions of early auditory deprivation and the degraded CI signal to spoken word recognition delays in children with CIs. Performance of children with CIs was compared to various typical hearing (TH) control groups matched for either chronological age or hearing age, and who heard either clear or vocoded speech. 
The second experiment investigated semantic processing in the face of a spectrally degraded signal (TH adult listeners presented with vocoded speech) by recording event-related potentials, specifically the N400. Results showed that children with CIs have slower lexical access and less immediate lexical competition, and while early hearing experience supports more efficient recognition, much of the observed delay can be attributed to listening to a degraded signal in the moment, as children with TH demonstrate similar patterns of processing when presented with vocoded speech. However, some group differences remain: specifically, children with CIs show slower lexical access and longer-lasting competition, suggesting potential effects of learning from a degraded speech signal. With regard to higher-level semantic processing, TH adult listeners demonstrate more limited access to semantic networks when presented with a degraded speech signal. This finding suggests that uncertainty due to the degraded speech signal may lead to less immediate cascading processing at both the word level and the level of higher-order semantic processing. Clinically, these results highlight the importance of early cochlear implantation and of maximizing access to spectral detail in the speech signal for children with CIs.
Additionally, it is possible that some of the delays in spoken language processing reflect an alternative listening strategy that may be engaged to reduce the chance of incorrect predictions, thus preventing costly revision processes.Item Development of an Evidence Based Referral Protocol for Early Diagnosis of Vestibular Schwannomas(2008-09-03) Barrett, Jessica Ann; Gordon-Salant, Sandra; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)The purpose of this investigation was to identify the presenting symptoms and testing outcomes most suggestive of a potential vestibular schwannoma and to propose an audiological referral protocol for MRIs. To that end, a retrospective chart review was conducted to examine radiologic, audiometric, and case history information from patients at Walter Reed Army Medical Center who were referred to the Department of Radiology to rule out retrocochlear pathology. Charts of 628 patients were reviewed from their electronic medical records; the final sample comprised 328 patients with complete audiologic data. Analyses were conducted to compare the unaffected and affected ears of the positive MRI group to the better and poorer ears of the negative MRI group. Differences between the affected ear of the positive group and the poorer ear of the negative group were significant for pure tone thresholds, speech discrimination scores, and acoustic reflex thresholds. Significant differences between the groups were not generally seen in the comparison of the unaffected ear to the better ear, with the exception of acoustic reflex thresholds. The interaural difference between ears was significant between the two groups for pure tone thresholds and speech discrimination scores; however, the difference was not significant for acoustic reflex thresholds.
For all significant differences between the groups, the positive MRI group evidenced poorer audiological results. Additionally, three symptoms/outcomes that led to the patients' referral differed significantly between the two groups: unilateral tinnitus, asymmetrical word recognition, and positive rollover in speech recognition scores. Logistic regression was applied to the audiological tests and symptoms to determine the set of variables that best differentiated between patients with a positive and a negative MRI. The most predictive model yielded a sensitivity of 81.25% and a specificity of 82.59% when applied to the current patient sample. The audiological profile identified may be useful for clinicians in deciding whether a patient should be referred for an MRI to rule out the presence of a vestibular schwannoma.Item The Development of Syntactic Complexity and the Irregular Past Tense in Children Who Do and Do Not Stutter(2009) Bauman, Jessica; Ratner, Nan B; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)This study examined spontaneous language samples and standardized test data obtained from 31 pairs of children who stutter (CWS), ages 25-59 months, and age-matched children who do not stutter (CWNS). Developmental Sentence Scores (DSS; Lee, 1974) as well as the relationships among age, DSS, and other standardized test scores were compared for both groups. No substantial differences were found between groups in the syntactic complexity of spontaneous language; however, the two groups showed different relationships between age and DSS and between test scores and DSS. Additionally, observed differences between CWS and CWNS in patterns of past-tense errors and usage are discussed in light of a recent theoretical model of language performance in populations with suspected basal ganglia involvement (Ullman, 2004).