Hearing & Speech Sciences Research Works

Permanent URI for this collection: http://hdl.handle.net/1903/1643

Now showing 1 - 8 of 8
  • Item
    Preschoolers rely on rich speech representations to process variable speech
    (Wiley, 2023-04-10) Cychosz, Margaret; Mahr, Tristan; Munson, Benjamin; Newman, Rochelle; Edwards, Jan R.
    To learn language, children must map variable input to categories such as phones and words. How do children process variation and distinguish between variable pronunciations (“shoup” for soup) versus new words? The unique sensory experience of children with cochlear implants, who learn speech through their device's degraded signal, lends new insight into this question. In a mispronunciation sensitivity eyetracking task, children with implants (N = 33) and children with typical hearing (N = 24; 36–66 months; 36F, 19M; all non-Hispanic white) with larger vocabularies processed known words faster. But children with implants were less sensitive to mispronunciations than typical hearing controls. Thus, children of all hearing experiences use lexical knowledge to process familiar words but require detailed speech representations to process variable speech in real time.
  • Item
    Taking language science to zoom school: Virtual outreach to elementary school students
    (Wiley, 2022-09-11) Oppenheimer, Kathleen E.; Salig, Lauren K.; Thorburn, Craig A.; Exton, Erika L.
    We describe guest speaker presentations that we developed to bring language science to elementary school students via videoconference. By using virtual backgrounds and guided discovery learning, we effectively engage children as young as 7 years in in-depth explorations of language science concepts. We share the core principles that guide our presentations and describe two of our outreach activities, Speech Detectives and Bilingual Barnyard. We report brief survey data from 157 elementary school students showing that they find our presentations interesting and educational. While our pivot to virtual outreach was motivated by the COVID-19 pandemic, it allows us to reach geographically diverse audiences, and we suggest that virtual guest speaker presentations will remain a viable and effective method of public outreach.
  • Item
    The impact of dialect differences on spoken language comprehension
    (Cambridge University Press, 2023-05-02) Byrd, Arynn S.; Huang, Yi Ting; Edwards, Jan
    Research has suggested that children who speak African American English (AAE) have difficulty using features produced in Mainstream American English (MAE) but not AAE to comprehend sentences in MAE. However, past studies mainly examined dialect features, such as verbal -s, that are produced as final consonants with shorter durations in conversation, which reduces their phonetic saliency. Therefore, it is unclear if previous results are due to the phonetic saliency of the feature or to how AAE speakers process MAE dialect features more generally. This study evaluated whether there were group differences in how AAE- and MAE-speaking children used the auxiliary verbs was and were, a dialect feature with increased phonetic saliency but produced differently between the dialects, to interpret sentences in MAE. Participants aged 6;5–10;0 (years;months), who spoke MAE or AAE, completed the DELV-ST, a vocabulary measure (PVT), and a sentence comprehension task. In the sentence comprehension task, participants heard sentences in MAE that had either unambiguous or ambiguous subjects. Sentences with ambiguous subjects were used to evaluate group differences in sentence comprehension. AAE-speaking children were less likely than MAE-speaking children to use the auxiliary verbs was and were to interpret sentences in MAE. Furthermore, dialect density was predictive of Black participants' sensitivity to the auxiliary verb. This finding is consistent with how the auxiliary verb is produced in the two dialects: was is used to mark both singular and plural subjects in AAE, while MAE uses was for singular and were for plural subjects. This study demonstrated that even when a dialect feature is more phonetically salient, differences in how verb morphology is produced in AAE and MAE impact how AAE-speaking children comprehend MAE sentences.
  • Item
    Linking frequency to bilingual switch costs during real-time sentence comprehension
    (Cambridge University Press, 2023-05-30) Salig, Lauren K.; Valdés Kroff, Jorge R.; Slevc, L. Robert; Novick, Jared M.
    Bilinguals experience processing costs when comprehending code-switches, yet the magnitude of the cost fluctuates depending on numerous factors. We tested whether switch costs vary based on the frequency of different types of code-switches, as estimated from natural corpora of bilingual speech and text. Spanish–English bilinguals in the U.S. read single-language and code-switched sentences in a self-paced task. Sentence regions containing code-switches were read more slowly than single-language control regions, consistent with the idea that integrating a code-switch poses a processing challenge. Crucially, more frequent code-switches elicited significantly smaller costs both within and across most classes of switch types (e.g., within verb phrases and when comparing switches at verb-phrase and noun-phrase sites). The results suggest that, in addition to learning distributions of syntactic and semantic patterns, bilinguals develop finely tuned expectations about code-switching behavior – representing one reason why code-switching in naturalistic contexts may not be particularly costly.
  • Item
    Concussion in Women's Flat-Track Roller Derby
    (Frontiers, 2022-02-14) Stockbridge, Melissa D.; Keser, Zafer; Newman, Rochelle S.
    Concussions are common among flat-track roller derby players, a unique and under-studied sport, but little has been done to assess how common they are or what players can do to manage injury risk. The purpose of this study is to provide an epidemiological investigation of concussion incidence and experience in a large international sampling of roller derby players. Six hundred sixty-five roller derby players from 25 countries responded to a comprehensive online survey about injury and sport participation. Participants also responded to a battery of psychometric assessment tools targeting risk-factors for poor injury recovery (negative bias, social support, mental toughness) and players' thoughts and feelings in response to injury. Per 1,000 athletes, 790.98 concussions were reported. Current players reported an average of 2.2 concussions, while former players reported 3.1 concussions. However, groups were matched when these figures were corrected for differences in years of play (approximately one concussion every 2 years). Other frequent injuries included fractures in extremities and upper limbs, torn knee ligaments, and sprained ankles. We found no evidence that players' position, full-contact scrimmages, or flooring impacted number of concussions. However, neurological history and uncorrected vision were more influential predictors of an individual's number of concussions during roller derby than years of participation or age, though all four contributed significantly. These findings should assist athletes in making informed decisions about participation in roller derby, though more work is needed to understand the nature of risk.
  • Item
    Acoustic-Lexical Characteristics of Child-Directed Speech Between 7 and 24 Months and Their Impact on Toddlers' Phonological Processing
    (Frontiers, 2021-09-24) Cychosz, Margaret; Edwards, Jan R.; Ratner, Nan Bernstein; Eaton, Catherine Torrington; Newman, Rochelle S.
    Speech-language input from adult caregivers is a strong predictor of children's developmental outcomes. But the properties of this child-directed speech are not static over the first months or years of a child's life. This study assesses a large cohort of children and caregivers (n = 84) at 7, 10, 18, and 24 months to document (1) how a battery of phonetic, phonological, and lexical characteristics of child-directed speech changes in the first 2 years of life and (2) how input at these different stages predicts toddlers' phonological processing and vocabulary size at 2 years. Results show that most measures of child-directed speech do change as children age, and certain characteristics, like hyperarticulation, actually peak at 24 months. For language outcomes, children's phonological processing benefited from exposure to longer (in phonemes) words, more diverse word types, and enhanced coarticulation in their input. It is proposed that longer words in the input may stimulate children's phonological working memory development, while heightened coarticulation simultaneously introduces important sublexical cues and exposes them to challenging, naturalistic speech, leading to overall stronger phonological processing outcomes.
  • Item
    Detection and Recognition of Asynchronous Auditory/Visual Speech: Effects of Age, Hearing Loss, and Talker Accent
    (2022-01) Gordon-Salant, Sandra; Schwartz, Maya; Oppler, Kelsey; Yeni-Komshian, Grace
    This investigation examined age-related differences in auditory-visual (AV) integration as reflected on perceptual judgments of temporally misaligned AV English sentences spoken by native English and native Spanish talkers. In the detection task, it was expected that slowed auditory temporal processing of older participants, relative to younger participants, would be manifest as a shift in the range over which participants would judge asynchronous stimuli as synchronous (referred to as the “AV simultaneity window”). The older participants were also expected to exhibit greater declines in speech recognition for asynchronous AV stimuli than younger participants. Talker accent was hypothesized to influence listener performance, with older listeners exhibiting a greater narrowing of the AV simultaneity window and much poorer recognition of asynchronous AV foreign-accented speech compared to younger listeners. Participant groups included younger and older participants with normal hearing and older participants with hearing loss. Stimuli were video recordings of sentences produced by native English and native Spanish talkers. The video recordings were altered in 50 ms steps by delaying either the audio or video onset. Participants performed a detection task in which they judged whether the sentences were synchronous or asynchronous, and performed a recognition task for multiple synchronous and asynchronous conditions. Both the detection and recognition tasks were conducted at the individualized signal-to-noise ratio (SNR) corresponding to approximately 70% correct speech recognition performance for synchronous AV sentences. Older listeners with and without hearing loss generally showed wider AV simultaneity windows than younger listeners, possibly reflecting slowed auditory temporal processing in auditory lead conditions and reduced sensitivity to asynchrony in auditory lag conditions. However, older and younger listeners were affected similarly by misalignment of auditory and visual signal onsets on the speech recognition task. This suggests that older listeners are negatively impacted by temporal misalignments for speech recognition, even when they do not notice that the stimuli are asynchronous. Overall, the findings show that when listener performance is equated for simultaneous AV speech signals, age effects are apparent in detection judgments but not in recognition of asynchronous speech.
  • Item
    Read my lips! Perception of speech in noise by preschool children with autism and the impact of watching the speaker’s face
    (Springer Nature, 2021-01-05) Newman, Rochelle S.; Kirby, Laura A.; Von Holzen, Katie; Redcay, Elizabeth
    Adults and adolescents with autism spectrum disorders show greater difficulties comprehending speech in the presence of noise. Moreover, while neurotypical adults use visual cues on the mouth to help them understand speech in background noise, differences in attention to human faces in autism may affect use of these visual cues. No work has yet examined these skills in toddlers with ASD, despite the fact that they are frequently faced with noisy, multitalker environments.