Browsing Hearing & Speech Sciences Research Works by Issue Date
- Syllable structure development of toddlers with expressive specific language impairment (Cambridge University Press, 2000). Pharr, Aimee Baird; Ratner, Nan Bernstein; Rescorla, Leslie.
A total of 35 children – 20 with expressive specific language impairment (SLI-E) and 15 typically developing (TD) peers – were compared longitudinally from 24 to 36 months with respect to their production of syllable shapes in 10-minute spontaneous speech samples. SLI-E 24-month-olds predominantly produced earlier-developing syllable shapes containing vowels, liquids, and glides. TD 24-month-olds and SLI-E 36-month-olds produced approximately the same proportion of syllable types, with the exception of consonant clusters, where TD 24-month-olds produced more than SLI-E 36-month-olds. TD children at 36 months showed the greatest use of syllable shapes containing two different consonants and consonant clusters. Detailed analyses revealed that SLI-E children produced fewer syllable shapes containing final consonants, more than one consonant type, and consonant clusters. Furthermore, the children with SLI-E were found to vocalize less often than their TD peers. The possible relationships between these findings, SLI-E children’s concomitant deficits in morphology and syntax, and the implications for diagnosis and remediation are discussed.
- Parental Perceptions of Children’s Communicative Development at Stuttering Onset (American Speech-Language-Hearing Association, 2000-10). Ratner, Nan Bernstein; Silverman, Stacy.
There has been clinical speculation that parents of young stuttering children have expectations of their children’s communication abilities that are not well-matched to the children’s actual skills. We appraised the language abilities of 15 children close to the onset of stuttering symptoms and 15 age-, sex-, and SES-matched fluent children using an array of standardized tests and spontaneous language sample measures. Parents concurrently completed two parent-report measures of the children’s communicative development. Results indicated generally depressed performance on all child speech and language measures by the children who stutter. Parent report was closely attuned to child performance for the stuttering children; parents of nonstuttering children were less accurate in their predictions of children’s communicative performance. Implications for clinical advisement to parents of stuttering children are discussed.
- Parental Language Input to Children at Stuttering Onset (American Speech-Language-Hearing Association, 2001-10). Miles, Stephanie; Ratner, Nan Bernstein.
Many programs for the indirect management of stuttering in early childhood counsel adjustment of parental language models, which are presumed to exert an exacerbating influence on vulnerable children’s fluency. We examined the relative levels of linguistic demand in maternal language to stuttering and nonstuttering children, adjusted for each child’s current level of linguistic development. No significant or observable differences were detected in the relative level of linguistic demand posed by parents of stuttering children very close to onset of symptoms. Empirical support for current advisement and potential ramifications are discussed.
- Fluency of School-Aged Children With a History of Specific Expressive Language Impairment: An Exploratory Study (American Speech-Language-Hearing Association, 2002-02). Boscolo, Brian; Ratner, Nan Bernstein; Rescorla, Leslie.
A large volume of literature now links language demand and fluency behaviors in children. Although it might be reasonable to assume that children with relatively weak language skills might demonstrate higher levels of disfluency, the sparse literature on this topic is characterized by conflicting findings on the relationship between language impairment and disfluency. However, in studies finding elevated disfluency in children with specific language impairment, a higher frequency of disfluencies more characteristic of stuttering has been noted. This study asks whether children with long-standing histories of language delay and impairment are more disfluent and display different types of disfluencies than their typically developing, age-matched peers. Elicited narratives from 22 pairs of 9-year-old children were analyzed for fluency characteristics. Half of the children had histories of specific expressive language impairment (HSLI-E), whereas the others had typical developmental histories. The children with HSLI-E were significantly more disfluent than their peers and produced more stutter-like disfluencies, although these behaviors were relatively infrequent in both groups. Implications for clinical intervention and future research are discussed.
- Caregiver–Child Interactions and Their Impact on Children’s Fluency: Implications for Treatment (American Speech-Language-Hearing Association, 2004-01). Ratner, Nan Bernstein.
There is a relatively strong focus in the stuttering literature on the desirability of selected alterations in parental speech and language style in the management of early stuttering. In this article, the existing research support for such recommendations is evaluated, together with relevant research from the normal language acquisition literature that bears on the potential consequences of changing parental interaction style. Recommendations with relatively stronger and weaker support are discussed. Ways in which children’s communication styles and fluency may be altered through newer fluency treatment protocols are contrasted with older, more general parent advisements. Finally, directions for future research into the efficacy of recommendations made to the parents of children who stutter (CWS) are offered.
- Effects of Age, Cognition, and Neural Encoding on the Perception of Temporal Speech Cues (Frontiers Media, 2019-07-19). Roque, Lindsey; Karawani, Hanin; Gordon-Salant, Sandra; Anderson, Samira.
Older adults commonly report difficulty understanding speech, particularly in adverse listening environments. These communication difficulties may exist in the absence of peripheral hearing loss. Older adults, both with normal hearing and with hearing loss, demonstrate temporal processing deficits that affect speech perception. The purpose of the present study is to investigate aging, cognition, and neural processing factors that may lead to deficits on perceptual tasks that rely on phoneme identification based on a temporal cue – vowel duration. A better understanding of the neural and cognitive impairments underlying temporal processing deficits could lead to more focused aural rehabilitation for improved speech understanding for older adults. This investigation was conducted in younger (YNH) and older normal-hearing (ONH) participants who completed three measures of cognitive functioning known to decline with age: working memory, processing speed, and inhibitory control. To evaluate perceptual and neural processing of auditory temporal contrasts, identification functions for the contrasting word-pair WHEAT and WEED were obtained on a nine-step continuum of vowel duration, and frequency-following responses (FFRs) and cortical auditory-evoked potentials (CAEPs) were recorded to the two endpoints of the continuum. Multiple linear regression analyses were conducted to determine the cognitive, peripheral, and/or central mechanisms that may contribute to perceptual performance. YNH participants demonstrated higher cognitive functioning on all three measures compared to ONH participants. The slope of the identification function was steeper in YNH than in ONH participants, suggesting a clearer distinction between the contrasting words in the YNH participants.
FFRs revealed better response waveform morphology and more robust phase-locking in YNH compared to ONH participants. ONH participants also exhibited earlier latencies for CAEP components compared to the YNH participants. Linear regression analyses revealed that cortical processing significantly contributed to the variance in perceptual performance in the WHEAT/WEED identification functions. These results suggest that reduced neural precision contributes to age-related speech perception difficulties that arise from temporal processing deficits.
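As an aside for readers unfamiliar with identification-function slopes, the sketch below uses entirely hypothetical response proportions (not data from this study) to show one common way to index category-boundary sharpness: logit-transform the identification proportions along the continuum and take the slope of a fitted line.

```python
import numpy as np

# Hypothetical proportions of "WHEAT" responses at each of nine
# vowel-duration steps (illustrative values, not the study's data).
steps = np.arange(1, 10)
p_wheat = np.array([0.97, 0.95, 0.90, 0.75, 0.50, 0.25, 0.10, 0.05, 0.03])

# Logit-transform the proportions and fit a straight line; the slope
# of the fitted line indexes how sharply responses switch categories.
logits = np.log(p_wheat / (1 - p_wheat))
slope, intercept = np.polyfit(steps, logits, 1)

# A steeper (larger-magnitude) slope implies a crisper WHEAT/WEED boundary.
print(round(slope, 2))
```

A shallower slope, as reported for the ONH group, corresponds to a more gradual transition between response categories across the continuum.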
- Age-Related Temporal Processing Deficits in Word Segments in Adult Cochlear-Implant Users (Sage, 2019-12-06). Xie, Zilong; Gaskins, Casey R.; Shader, Maureen J.; Gordon-Salant, Sandra; Anderson, Samira; Goupell, Matthew J.
Aging may limit speech understanding outcomes in cochlear-implant (CI) users. Here, we examined age-related declines in auditory temporal processing as a potential mechanism that underlies speech understanding deficits associated with aging in CI users. Auditory temporal processing was assessed with a categorization task for the words dish and ditch (i.e., identify each token as the word dish or ditch) on a continuum of speech tokens with varying silence duration (0 to 60 ms) prior to the final fricative. In Experiments 1 and 2, younger CI (YCI), middle-aged CI (MCI), and older CI (OCI) users participated in the categorization task across a range of presentation levels (25 to 85 dB). Relative to YCI, OCI required longer silence durations to identify ditch and exhibited reduced ability to distinguish the words dish and ditch (shallower slopes in the categorization function). Critically, we observed age-related performance differences only at higher presentation levels. This contrasted with findings from normal-hearing listeners in Experiment 3 that demonstrated age-related performance differences independent of presentation level. In summary, aging in CI users appears to degrade the ability to utilize brief temporal cues in word identification, particularly at high levels. Age-specific CI programming may potentially improve clinical outcomes for speech understanding performance by older CI listeners.
- Read my lips! Perception of speech in noise by preschool children with autism and the impact of watching the speaker’s face (Springer Nature, 2021-01-05). Newman, Rochelle S.; Kirby, Laura A.; Von Holzen, Katie; Redcay, Elizabeth.
Adults and adolescents with autism spectrum disorders show greater difficulties comprehending speech in the presence of noise. Moreover, while neurotypical adults use visual cues on the mouth to help them understand speech in background noise, differences in attention to human faces in autism may affect use of these visual cues. No work has yet examined these skills in toddlers with ASD, despite the fact that they are frequently faced with noisy, multitalker environments.
- Acoustic-Lexical Characteristics of Child-Directed Speech Between 7 and 24 Months and Their Impact on Toddlers' Phonological Processing (Frontiers, 2021-09-24). Cychosz, Margaret; Edwards, Jan R.; Ratner, Nan Bernstein; Eaton, Catherine Torrington; Newman, Rochelle S.
Speech-language input from adult caregivers is a strong predictor of children's developmental outcomes. But the properties of this child-directed speech are not static over the first months or years of a child's life. This study assesses a large cohort of children and caregivers (n = 84) at 7, 10, 18, and 24 months to document (1) how a battery of phonetic, phonological, and lexical characteristics of child-directed speech changes in the first 2 years of life and (2) how input at these different stages predicts toddlers' phonological processing and vocabulary size at 2 years. Results show that most measures of child-directed speech do change as children age, and certain characteristics, like hyperarticulation, actually peak at 24 months. For language outcomes, children's phonological processing benefited from exposure to longer (in phonemes) words, more diverse word types, and enhanced coarticulation in their input. It is proposed that longer words in the input may stimulate children's phonological working memory development, while heightened coarticulation simultaneously introduces important sublexical cues and exposes children to challenging, naturalistic speech, leading to overall stronger phonological processing outcomes.
- Detection and Recognition of Asynchronous Auditory/Visual Speech: Effects of Age, Hearing Loss, and Talker Accent (2022-01). Gordon-Salant, Sandra; Schwartz, Maya; Oppler, Kelsey; Yeni-Komshian, Grace.
This investigation examined age-related differences in auditory-visual (AV) integration as reflected on perceptual judgments of temporally misaligned AV English sentences spoken by native English and native Spanish talkers. In the detection task, it was expected that slowed auditory temporal processing of older participants, relative to younger participants, would be manifest as a shift in the range over which participants would judge asynchronous stimuli as synchronous (referred to as the “AV simultaneity window”). The older participants were also expected to exhibit greater declines in speech recognition for asynchronous AV stimuli than younger participants. Talker accent was hypothesized to influence listener performance, with older listeners exhibiting a greater narrowing of the AV simultaneity window and much poorer recognition of asynchronous AV foreign-accented speech compared to younger listeners. Participant groups included younger and older participants with normal hearing and older participants with hearing loss. Stimuli were video recordings of sentences produced by native English and native Spanish talkers. The video recordings were altered in 50 ms steps by delaying either the audio or video onset. Participants performed a detection task in which they judged whether the sentences were synchronous or asynchronous, and performed a recognition task for multiple synchronous and asynchronous conditions. Both the detection and recognition tasks were conducted at the individualized signal-to-noise ratio (SNR) corresponding to approximately 70% correct speech recognition performance for synchronous AV sentences.
Older listeners with and without hearing loss generally showed wider AV simultaneity windows than younger listeners, possibly reflecting slowed auditory temporal processing in auditory lead conditions and reduced sensitivity to asynchrony in auditory lag conditions. However, older and younger listeners were affected similarly by misalignment of auditory and visual signal onsets on the speech recognition task. This suggests that older listeners are negatively impacted by temporal misalignments for speech recognition, even when they do not notice that the stimuli are asynchronous. Overall, the findings show that when listener performance is equated for simultaneous AV speech signals, age effects are apparent in detection judgments but not in recognition of asynchronous speech.
- Concussion in Women's Flat-Track Roller Derby (Frontiers, 2022-02-14). Stockbridge, Melissa D.; Keser, Zafer; Newman, Rochelle S.
Concussions are common among flat-track roller derby players, a unique and under-studied sport, yet little has been done to assess their frequency or how players can manage injury risk. The purpose of this study is to provide an epidemiological investigation of concussion incidence and experience in a large international sampling of roller derby players. Six hundred sixty-five roller derby players from 25 countries responded to a comprehensive online survey about injury and sport participation. Participants also responded to a battery of psychometric assessment tools targeting risk factors for poor injury recovery (negative bias, social support, mental toughness) and players' thoughts and feelings in response to injury. Per 1,000 athletes, 790.98 concussions were reported. Current players reported an average of 2.2 concussions, while former players reported 3.1 concussions. However, the groups were equivalent once these figures were corrected for differences in years of play (approximately one concussion every 2 years). Other frequent injuries included fractures in extremities and upper limbs, torn knee ligaments, and sprained ankles. We found no evidence that players' position, full-contact scrimmages, or flooring affected the number of concussions. However, neurological history and uncorrected vision were more influential predictors of an individual's number of concussions during roller derby than years of participation or age, though all four contributed significantly. These findings should assist athletes in making informed decisions about participation in roller derby, though more work is needed to understand the nature of risk.
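The headline incidence figure in the abstract above is a simple rate-per-1,000 calculation. The sketch below illustrates the arithmetic; the respondent count (665) and the rate (790.98 per 1,000) come from the abstract, while the raw concussion count is back-calculated from those two figures and is an assumption, not a number taken from the paper.

```python
# Rate-per-1,000 calculation, illustrative only.
respondents = 665   # survey respondents, from the abstract
concussions = 526   # assumed raw count, back-calculated from the reported rate

rate_per_1000 = concussions / respondents * 1000
print(round(rate_per_1000, 2))
```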
- Taking language science to zoom school: Virtual outreach to elementary school students (Wiley, 2022-09-11). Oppenheimer, Kathleen E.; Salig, Lauren K.; Thorburn, Craig A.; Exton, Erika L.
We describe guest speaker presentations that we developed to bring language science to elementary school students via videoconference. By using virtual backgrounds and guided discovery learning, we effectively engage children as young as 7 years in in-depth explorations of language science concepts. We share the core principles that guide our presentations and describe two of our outreach activities, Speech Detectives and Bilingual Barnyard. We report brief survey data from 157 elementary school students showing that they find our presentations interesting and educational. While our pivot to virtual outreach was motivated by the COVID-19 pandemic, it allows us to reach geographically diverse audiences, and we suggest that virtual guest speaker presentations will remain a viable and effective method of public outreach.
- The impact of dialect differences on spoken language comprehension (Cambridge University Press, 2023-05-02). Byrd, Arynn S.; Huang, Yi Ting; Edwards, Jan.
Research has suggested that children who speak African American English (AAE) have difficulty using features produced in Mainstream American English (MAE), but not AAE, to comprehend sentences in MAE. However, past studies mainly examined dialect features, such as verbal -s, that are produced as final consonants with shorter durations in conversational speech, which reduces their phonetic saliency. Therefore, it is unclear whether previous results are due to the low phonetic saliency of these features or to how AAE speakers process MAE dialect features more generally. This study evaluated whether there were group differences in how AAE- and MAE-speaking children used the auxiliary verbs was and were, a dialect feature with increased phonetic saliency but produced differently between the dialects, to interpret sentences in MAE. Participants aged 6;5–10;0 (years;months), who spoke MAE or AAE, completed the DELV-ST, a vocabulary measure (PVT), and a sentence comprehension task. In the sentence comprehension task, participants heard sentences in MAE that had either unambiguous or ambiguous subjects. Sentences with ambiguous subjects were used to evaluate group differences in sentence comprehension. AAE-speaking children were less likely than MAE-speaking children to use the auxiliary verbs was and were to interpret sentences in MAE. Furthermore, dialect density was predictive of Black participants’ sensitivity to the auxiliary verb. This finding is consistent with how the auxiliary verb is produced between the two dialects: was is used to mark both singular and plural subjects in AAE, while MAE uses was for singular and were for plural subjects.
This study demonstrated that even when the dialect feature is more phonetically salient, differences between how verb morphology is produced in AAE and MAE impact how AAE-speaking children comprehend MAE sentences.
- Linking frequency to bilingual switch costs during real-time sentence comprehension (Cambridge University Press, 2023-05-30). Salig, Lauren K.; Valdés Kroff, Jorge R.; Slevc, L. Robert; Novick, Jared M.
Bilinguals experience processing costs when comprehending code-switches, yet the magnitude of the cost fluctuates depending on numerous factors. We tested whether switch costs vary based on the frequency of different types of code-switches, as estimated from natural corpora of bilingual speech and text. Spanish–English bilinguals in the U.S. read single-language and code-switched sentences in a self-paced task. Sentence regions containing code-switches were read more slowly than single-language control regions, consistent with the idea that integrating a code-switch poses a processing challenge. Crucially, more frequent code-switches elicited significantly smaller costs both within and across most classes of switch types (e.g., within verb phrases and when comparing switches at verb-phrase and noun-phrase sites). The results suggest that, in addition to learning distributions of syntactic and semantic patterns, bilinguals develop finely tuned expectations about code-switching behavior – representing one reason why code-switching in naturalistic contexts may not be particularly costly.