Hearing & Speech Sciences Research Works
- Acoustic-Lexical Characteristics of Child-Directed Speech Between 7 and 24 Months and Their Impact on Toddlers' Phonological Processing (Frontiers, 2021-09-24). Cychosz, Margaret; Edwards, Jan R.; Ratner, Nan Bernstein; Eaton, Catherine Torrington; Newman, Rochelle S.
  Speech-language input from adult caregivers is a strong predictor of children's developmental outcomes. But the properties of this child-directed speech are not static over the first months or years of a child's life. This study assesses a large cohort of children and caregivers (n = 84) at 7, 10, 18, and 24 months to document (1) how a battery of phonetic, phonological, and lexical characteristics of child-directed speech changes in the first 2 years of life and (2) how input at these different stages predicts toddlers' phonological processing and vocabulary size at 2 years. Results show that most measures of child-directed speech do change as children age, and certain characteristics, like hyperarticulation, actually peak at 24 months. For language outcomes, children's phonological processing benefited from exposure to longer (in phonemes) words, more diverse word types, and enhanced coarticulation in their input. It is proposed that longer words in the input may stimulate children's phonological working memory development, while heightened coarticulation simultaneously introduces important sublexical cues and exposes children to challenging, naturalistic speech, leading to overall stronger phonological processing outcomes.
- Detection and Recognition of Asynchronous Auditory/Visual Speech: Effects of Age, Hearing Loss, and Talker Accent (2022-01). Gordon-Salant, Sandra; Schwartz, Maya; Oppler, Kelsey; Yeni-Komshian, Grace
  This investigation examined age-related differences in auditory-visual (AV) integration as reflected on perceptual judgments of temporally misaligned AV English sentences spoken by native English and native Spanish talkers. In the detection task, it was expected that slowed auditory temporal processing of older participants, relative to younger participants, would be manifest as a shift in the range over which participants would judge asynchronous stimuli as synchronous (referred to as the "AV simultaneity window"). The older participants were also expected to exhibit greater declines in speech recognition for asynchronous AV stimuli than younger participants. Talker accent was hypothesized to influence listener performance, with older listeners exhibiting a greater narrowing of the AV simultaneity window and much poorer recognition of asynchronous AV foreign-accented speech compared to younger listeners. Participant groups included younger and older participants with normal hearing and older participants with hearing loss. Stimuli were video recordings of sentences produced by native English and native Spanish talkers. The video recordings were altered in 50 ms steps by delaying either the audio or video onset. Participants performed a detection task in which they judged whether the sentences were synchronous or asynchronous, and performed a recognition task for multiple synchronous and asynchronous conditions. Both the detection and recognition tasks were conducted at the individualized signal-to-noise ratio (SNR) corresponding to approximately 70% correct speech recognition performance for synchronous AV sentences.
Older listeners with and without hearing loss generally showed wider AV simultaneity windows than younger listeners, possibly reflecting slowed auditory temporal processing in auditory lead conditions and reduced sensitivity to asynchrony in auditory lag conditions. However, older and younger listeners were affected similarly by misalignment of auditory and visual signal onsets on the speech recognition task. This suggests that older listeners are negatively impacted by temporal misalignments for speech recognition, even when they do not notice that the stimuli are asynchronous. Overall, the findings show that when listener performance is equated for simultaneous AV speech signals, age effects are apparent in detection judgments but not in recognition of asynchronous speech.
- Read my lips! Perception of speech in noise by preschool children with autism and the impact of watching the speaker's face (Springer Nature, 2021-01-05). Newman, Rochelle S.; Kirby, Laura A.; Von Holzen, Katie; Redcay, Elizabeth
  Adults and adolescents with autism spectrum disorders (ASD) show greater difficulties comprehending speech in the presence of noise than their neurotypical peers. Moreover, while neurotypical adults use visual cues on the mouth to help them understand speech in background noise, differences in attention to human faces in autism may affect use of these visual cues. No work has yet examined these skills in toddlers with ASD, despite the fact that they are frequently faced with noisy, multitalker environments.
- Age-Related Temporal Processing Deficits in Word Segments in Adult Cochlear-Implant Users (Sage, 2019-12-06). Xie, Zilong; Gaskins, Casey R.; Shader, Maureen J.; Gordon-Salant, Sandra; Anderson, Samira; Goupell, Matthew J.
  Aging may limit speech understanding outcomes in cochlear-implant (CI) users. Here, we examined age-related declines in auditory temporal processing as a potential mechanism that underlies speech understanding deficits associated with aging in CI users. Auditory temporal processing was assessed with a categorization task for the words dish and ditch (i.e., identify each token as the word dish or ditch) on a continuum of speech tokens with varying silence duration (0 to 60 ms) prior to the final fricative. In Experiments 1 and 2, younger CI (YCI), middle-aged CI (MCI), and older CI (OCI) users participated in the categorization task across a range of presentation levels (25 to 85 dB). Relative to YCI, OCI required longer silence durations to identify ditch and exhibited reduced ability to distinguish the words dish and ditch (shallower slopes in the categorization function). Critically, we observed age-related performance differences only at higher presentation levels. This contrasted with findings from normal-hearing listeners in Experiment 3 that demonstrated age-related performance differences independent of presentation level. In summary, aging in CI users appears to degrade the ability to utilize brief temporal cues in word identification, particularly at high levels. Age-specific CI programming may potentially improve clinical outcomes for speech understanding performance by older CI listeners.
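The "shallower slopes in the categorization function" reported above are typically quantified by fitting a sigmoid to the proportion of ditch responses along the silence-duration continuum. As a minimal illustrative sketch of that idea (not the authors' analysis code; the response proportions and the coarse grid-search fit below are invented for illustration):

```python
import math

def logistic(x, x0, k):
    """Predicted proportion of 'ditch' responses at silence duration x (ms),
    with category boundary x0 (ms) and slope k."""
    return 1.0 / (1.0 + math.exp(-k * (x - x0)))

def fit_logistic(xs, ys):
    """Coarse grid search for (x0, k) minimizing squared error."""
    best = (float("inf"), 0.0, 0.0)
    for x0 in (i * 0.5 for i in range(121)):          # boundary: 0-60 ms
        for k in (j * 0.005 for j in range(1, 201)):  # slope: 0.005-1.0
            err = sum((logistic(x, x0, k) - y) ** 2 for x, y in zip(xs, ys))
            if err < best[0]:
                best = (err, x0, k)
    return best[1], best[2]

# Silence duration before the final fricative (ms); proportions are invented
silence_ms = [0, 10, 20, 30, 40, 50, 60]
listener_a = [0.02, 0.05, 0.15, 0.55, 0.90, 0.97, 0.99]  # steep function
listener_b = [0.10, 0.20, 0.35, 0.55, 0.70, 0.80, 0.88]  # shallow function

x0_a, k_a = fit_logistic(silence_ms, listener_a)
x0_b, k_b = fit_logistic(silence_ms, listener_b)
print(f"listener A: boundary = {x0_a:.1f} ms, slope = {k_a:.3f}")
print(f"listener B: boundary = {x0_b:.1f} ms, slope = {k_b:.3f}")
```

A smaller fitted k means a shallower function, i.e., a less distinct dish/ditch boundary, which is the pattern the study reports for the older CI users.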
- Effects of Age, Cognition, and Neural Encoding on the Perception of Temporal Speech Cues (Frontiers Media, 2019-07-19). Roque, Lindsey; Karawani, Hanin; Gordon-Salant, Sandra; Anderson, Samira
  Older adults commonly report difficulty understanding speech, particularly in adverse listening environments. These communication difficulties may exist in the absence of peripheral hearing loss. Older adults, both with normal hearing and with hearing loss, demonstrate temporal processing deficits that affect speech perception. The purpose of the present study is to investigate aging, cognition, and neural processing factors that may lead to deficits on perceptual tasks that rely on phoneme identification based on a temporal cue: vowel duration. A better understanding of the neural and cognitive impairments underlying temporal processing deficits could lead to more focused aural rehabilitation for improved speech understanding for older adults. This investigation was conducted in younger (YNH) and older normal-hearing (ONH) participants who completed three measures of cognitive functioning known to decline with age: working memory, processing speed, and inhibitory control. To evaluate perceptual and neural processing of auditory temporal contrasts, identification functions for the contrasting word-pair WHEAT and WEED were obtained on a nine-step continuum of vowel duration, and frequency-following responses (FFRs) and cortical auditory-evoked potentials (CAEPs) were recorded to the two endpoints of the continuum. Multiple linear regression analyses were conducted to determine the cognitive, peripheral, and/or central mechanisms that may contribute to perceptual performance. YNH participants demonstrated higher cognitive functioning on all three measures compared to ONH participants. The slope of the identification function was steeper in YNH than in ONH participants, suggesting a clearer distinction between the contrasting words in the YNH participants.
FFRs revealed better response waveform morphology and more robust phase-locking in YNH compared to ONH participants. ONH participants also exhibited earlier latencies for CAEP components compared to the YNH participants. Linear regression analyses revealed that cortical processing significantly contributed to the variance in perceptual performance in the WHEAT/WEED identification functions. These results suggest that reduced neural precision contributes to age-related speech perception difficulties that arise from temporal processing deficits.
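The multiple linear regression described above relates cognitive and neural predictors to perceptual performance (identification-function slope). A minimal sketch of that kind of analysis, using ordinary least squares on invented data (the predictors, coefficients, and sample size below are hypothetical and do not come from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # hypothetical number of participants

# Invented predictors: working memory score, processing speed, CAEP latency (ms)
working_memory = rng.normal(100, 15, n)
proc_speed = rng.normal(50, 10, n)
caep_latency = rng.normal(120, 20, n)

# Invented outcome: identification slope driven mostly by cortical timing
slope = (0.5 - 0.002 * caep_latency + 0.001 * working_memory
         + rng.normal(0, 0.02, n))

# Design matrix with an intercept column, then ordinary least squares
X = np.column_stack([np.ones(n), working_memory, proc_speed, caep_latency])
coef, *_ = np.linalg.lstsq(X, slope, rcond=None)

# Variance in perceptual performance explained by the predictors
pred = X @ coef
r2 = 1 - np.sum((slope - pred) ** 2) / np.sum((slope - slope.mean()) ** 2)
print("coefficients (intercept, WM, speed, CAEP latency):", coef)
print(f"R^2 = {r2:.2f}")
```

In this toy setup the fitted CAEP-latency coefficient comes out negative (longer cortical latencies predict shallower identification slopes), mirroring the direction of the relationship the study reports between cortical processing and perceptual performance.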