Hearing & Speech Sciences

Permanent URI for this community: http://hdl.handle.net/1903/2245

Search Results

Now showing 1 - 10 of 38
  • Item
    Values in American Hearing Healthcare
    (2024) Menon, Katherine Noel; Hoover, Eric C; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The long-term objective of this research is to create a more inclusive, patient-centered hearing healthcare system that aligns with all stakeholders’ diverse values and needs. This dissertation explores the values shaping hearing healthcare through three complementary studies. Chapter 2 analyzes the introduction of over-the-counter (OTC) hearing aids, revealing a values shift from traditional audiology’s focus on accuracy, safety, and subjective benefit to prioritizing access and affordability. Implementing an OTC service delivery model promoted values different from those of traditional audiology; still, the creation of OTC offers affordances for building more patient-centered hearing healthcare systems that reflect stakeholders’ values. Chapter 3 validates a comprehensive list of values in audiology through a national survey of audiologists, confirming alignment with best-practice guidelines. Previous work developed a codebook of values based on textual documents representing best practices in traditional audiology, and it was essential to validate those findings by engaging directly with audiologists. Chapter 4 develops a codebook based on the values of individuals with hearing difficulties, categorizing their concerns into Material, Social, and Healthcare domains. Results from this study highlight the importance of considering the values of individuals with hearing loss, which encompass not only the use of hearing aids and affordable hearing healthcare but also concerns about the effectiveness, usefulness, and social implications of hearing aids. Together, these studies underscore the balance between efforts to improve accessibility and the need to maintain patient-centered outcomes, suggesting that future research should focus on understanding how values intersect with the daily lives and decision-making processes of all people with difficulty hearing.
  • Item
    Adult discrimination of children’s voices over time: Voice discrimination of auditory samples from longitudinal research studies
    (2024) Opusunju, Shelby; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The human voice changes over the lifespan, and these changes are even more pronounced in children: acoustic properties of speech, such as fundamental frequency, amplitude, speech rate, and fluency, change dramatically as children grow and develop (Lee et al., 1999). Previous studies have established that listeners have a generally strong capacity to discriminate between adult speakers, and to identify a speaker’s age, based solely on the voice (Kreiman and Sidtis, 2011; Park, 2019). However, few studies have examined listeners’ capacity to discriminate between the voices of children, particularly as those voices mature over time. This study examines how well adult listeners can discriminate between the voices of young children of the same age and at different ages. Single-word child language samples from different children (N = 6) were obtained from Munson et al. (2021) and used to create closed-set online AX voice discrimination tasks for adult listeners (N = 31). Three tasks examined listeners’ accuracy and sensitivity in identifying whether a voice was that of the same child or a different child under three conditions: 1) between two three-year-olds, 2) between two five-year-olds, and 3) between two children of different ages (three vs. five years old). Listeners performed above chance, in both accuracy and sensitivity, when discriminating between the voices of two three-year-olds and between the voices of two five-year-olds, and performance did not differ significantly between these two tasks. No listener performed above chance when judging whether voices recorded at two different ages belonged to the same child, and performance on this task was significantly poorer than on the other two. These findings demonstrate a sizable gap between adults’ ability to discriminate child voices at a single age and across ages. Possible explanations and implications for understanding child talker discrimination across different ages are discussed.
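    The abstract above reports listener accuracy and sensitivity in same/different (AX) trials. As a point of reference, here is a minimal sketch of how sensitivity is commonly computed under a signal detection framework; the function, counts, and correction are illustrative assumptions, not the study’s actual analysis.

        # Sensitivity (d') for a same/different AX task, assuming hit and
        # false-alarm counts have already been tallied. Example counts are
        # fabricated for illustration.
        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            """d' = z(hit rate) - z(false-alarm rate), with a log-linear
            correction so rates of 0 or 1 do not yield infinite z-scores."""
            hit_rate = (hits + 0.5) / (hits + misses + 1)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # A listener who answers "different" on 42 of 50 different-child trials
        # and on 12 of 50 same-child trials:
        print(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38))

    Note that this is the yes/no form of d'; same/different designs are often scored with a differencing-model correction instead, which the study may well have used.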
  • Item
    Modeling Language Development: How Machine Learning can Enhance Analysis of the Language Environment
    (2024-12-18) Harvey, James; Huang, Yi Ting; Newman, Rochelle; Domanski, Sophie
    Language sampling elicits a representative picture of a child’s language and provides methods for assessing functional communication beyond what is offered by standardized tests. Naturalistic sampling reduces time costs and offers an ideal way to assess differences in home language associated with differences in socioeconomic status (SES). Unfortunately, dense naturalistic recordings present challenges in how to scale analysis and extract meaningful information. This study investigates the application of the Language ENvironment Analysis system (LENA) for sampling home language using technology-assisted transcription and topic modeling. To evaluate the efficacy of transcription, segments were selected based on the amount of meaningful speech they contained, as measured by LENA, and transcribed with Whisper, OpenAI’s automatic speech recognition software. Research assistants then trimmed the text files to retain available adult language, separated by utterance. Results suggest that this method of sampling, technology-assisted transcription, and automated analysis of traditional language metrics reproduces expected associations between parental input, SES, and standardized child vocabulary size. Topic models did not identify activity contexts, likely due to the nature of the input. This research presents a validated pipeline that uses modern approaches to produce dense, representative data while reducing traditional time costs.
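    Because the abstract names specific tools (LENA for segment selection, Whisper for transcription), a minimal sketch of the transcription step may be helpful. It assumes the open-source openai-whisper Python package and a hypothetical file name, and the word-based utterance metric is an illustrative stand-in for the study’s traditional language measures.

        # Transcribe one LENA-selected segment with Whisper and compute a
        # simple utterance-level metric. Assumes: pip install openai-whisper
        import whisper

        model = whisper.load_model("base")            # small model, for speed
        result = model.transcribe("segment_001.wav")  # hypothetical file

        # Whisper returns utterance-like segments with text and timestamps.
        utterances = [seg["text"].strip() for seg in result["segments"]]

        # Mean length of utterance in words (a rough proxy; clinical MLU is
        # usually counted in morphemes, and the study's pipeline also involved
        # manual trimming of the transcripts by research assistants).
        mlu_words = sum(len(u.split()) for u in utterances) / len(utterances)
        print(f"{len(utterances)} utterances, MLU (words) = {mlu_words:.2f}")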
  • Item
    Evaluating the role of acoustic cues in identifying the presence of a code-switch
    (2024) Exton, Erika Lynn; Newman, Rochelle S.; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Code-switching (switching between languages) is a common linguistic behavior in bilingual speech directed to infants and children. In adult-directed speech (ADS), acoustic-phonetic properties of one language may transfer to the other language close to a code-switch point; for example, English stop consonants may be more Spanish-like near a switch. This acoustically natural code-switching may be easier for bilingual listeners to comprehend than code-switching without these acoustic changes; however, it effectively makes the languages more phonetically similar at the point of a code-switch, which could make them harder for an unfamiliar listener to distinguish. The goal of this research was to assess the acoustic-phonetic cues to code-switching available to listeners unfamiliar with the languages by studying the perception and production of these cues. In Experiment 1, Spanish-English bilingual adults (particularly those who hear code-switching frequently), but not English monolingual adults, were sensitive to natural acoustic cues to code-switching in unfamiliar languages and could use them to identify language switches between French and Mandarin. Such cues were particularly helpful when they allowed listeners to anticipate an upcoming language switch (Experiment 2). In Experiment 3, monolingual children appeared unable to consistently identify which language they were hearing. Experiment 4 provides some preliminary evidence that monolingual infants can identify a switch between French and Mandarin, though without addressing the utility of natural acoustic cues for infants. Finally, the acoustic detail of code-switched speech to infants was investigated to evaluate how acoustic properties of bilingual infant-directed speech (IDS) are affected by the presence of, and proximity to, code-switching. Spanish-English bilingual women narrated wordless picture books in IDS and ADS, and the voice onset times (VOTs) of their English voiceless stops were analyzed in code-switching and English-only stories in each register. In ADS only, English voiceless stops that preceded an English-to-Spanish code-switch and were closer to that switch point were produced with more Spanish-like VOTs than more distant tokens. This effect of distance to Spanish on English VOTs was not observed for tokens that followed Spanish in ADS, or in either direction in IDS, suggesting that parents may avoid producing these acoustic cues when speaking to young children.
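    The production experiment relates each token’s VOT to its distance from the switch point. Below is a minimal sketch of one way such a relationship could be quantified, using a simple linear fit over fabricated tokens; the study’s actual statistical models are not described in the abstract.

        # Does English VOT shorten (become more Spanish-like) closer to a
        # code-switch point? Token data below are fabricated for illustration.
        from scipy.stats import linregress

        # (distance from the switch point in words, VOT in ms) -- hypothetical
        tokens = [(1, 38.0), (2, 45.5), (3, 52.0), (5, 61.0), (8, 64.5), (12, 66.0)]

        distances = [d for d, _ in tokens]
        vots = [v for _, v in tokens]

        fit = linregress(distances, vots)
        # A positive slope means shorter VOTs nearer the switch, matching the
        # adult-directed-speech pattern reported above.
        print(f"slope = {fit.slope:.2f} ms/word, p = {fit.pvalue:.3f}")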
  • Item
    UNDERSTANDING HOW AFRICAN AMERICAN ENGLISH-SPEAKING CHILDREN USE INFLECTIONAL VERB MORPHOLOGY IN SENTENCE PROCESSING AND WORD LEARNING
    (2024) Byrd, Arynn S; Edwards, Jan; Huang, Yi Ting; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This research examined how linguistic differences between African American English (AAE) and Mainstream American English (MAE) impact how children process sentences and learn new information. The central hypothesis of this dissertation is that these linguistic differences adversely impact how AAE-speaking children use contrastive inflectional verb morphology (e.g., was/were, third person singular -s) to process and comprehend MAE sentences, as well as to infer word meanings when they depend on dialect-specific parsing of sentence cues. To test this hypothesis, this dissertation conducted three experiments on how linguistic mismatch impacts spoken language comprehension and word learning in school-age MAE- and AAE-speaking children. The first study examined how children used the auxiliary verbs was or were to comprehend MAE sentences in an offline spoken language comprehension task, while the second study asked the same question in an online sentence processing task. The final study examined how children used inflectional verb morphology (i.e., third-person singular -s, was/were) to infer information about novel verbs. Each study examined how participants’ dialect, either MAE or AAE, predicted performance on listening tasks produced in MAE. Furthermore, each study examined how individual differences such as age, dialect density, and vocabulary size influenced children’s performance. Across all studies, results demonstrated that when redundant linguistic cues were unaffected by dialect differences, AAE- and MAE-speaking children used the available cues to process and comprehend spoken language and to infer verb meanings in a similar manner. However, when linguistic redundancy was reduced by perceptual ambiguity, there were group differences in how AAE- and MAE-speaking children used inflectional verb morphology on spoken language tasks. The second study showed that AAE-speaking children were sensitive to contrastive verb morphology in real-time processing, but they were less likely than their MAE-speaking peers to use it as an informative cue to revise initial parses when processing spoken language. The results of the final study indicated that individual characteristics such as age and dialect density influence how dialect impacts learning. These results demonstrate that linguistic mismatch can affect spoken language processes. Furthermore, the findings highlight a complex relationship between the effects of linguistic mismatch and individual differences such as age and dialect density.
  • Item
    EXPLORING THE RELATIONSHIP BETWEEN VERB RETRIEVAL, AGRAMMATISM AND PAUSES
    (2024) Campbell, Lauren; Faroqi-Shah, Yasmeen; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Persons with agrammatic aphasia (a symptom of Broca’s aphasia) tend to speak at a slow rate compared to neurotypical adults and to persons with other aphasia subtypes (Kertesz, 2007). This connection between slow rate and agrammatic aphasia is underexplored. This study examines three key variables impacting speech rate in agrammatic narratives: syntactic impairment (i.e., a diagnosis of agrammatism), verb retrieval, and pauses. Specifically, forty-five narrative (Cinderella) samples (15 agrammatic aphasia, 15 anomic aphasia, and 15 controls) from the AphasiaBank database (MacWhinney et al., 2011) were converted, with their sound files, into Praat TextGrids (Boersma & Weenink, 2023), and the first five qualifying pre-verb pause durations were recorded. The first five qualifying pre-noun pauses were also logged for comparison, as was the overall grammaticality of each targeted utterance. Results showed that the number and duration of pauses differentiated persons with agrammatic aphasia from persons with anomic aphasia and neurotypical controls, yet verb retrieval and the syntactic well-formedness of an utterance did not vary significantly by aphasia type in utterances where verbs were successfully retrieved. Overall, this study did not lend support to the Synergistic Processing Bottleneck model of agrammatic aphasia (Faroqi-Shah, 2023).
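    Since the method hinges on logging the first five qualifying pre-verb pauses from TextGrid interval tiers, a minimal sketch of that extraction follows. To avoid assuming any particular TextGrid library’s API, intervals are represented as (start, end, label) tuples with "" marking silences, and is_verb() is a hypothetical stand-in for the study’s coding criteria.

        # Extract the first five pause durations that immediately precede a verb.

        def is_verb(label: str) -> bool:
            """Placeholder: the real analysis would rely on POS coding."""
            return label.endswith("_V")

        def pre_verb_pauses(intervals, n=5):
            """Durations of silent intervals immediately preceding a verb."""
            pauses = []
            for prev, curr in zip(intervals, intervals[1:]):
                prev_start, prev_end, prev_label = prev
                _, _, curr_label = curr
                if prev_label == "" and is_verb(curr_label):
                    pauses.append(prev_end - prev_start)
                if len(pauses) == n:
                    break
            return pauses

        tier = [(0.0, 0.4, "she"), (0.4, 1.1, ""), (1.1, 1.5, "danced_V"),
                (1.5, 1.8, "all"), (1.8, 2.0, "night")]
        print(pre_verb_pauses(tier))  # one qualifying pause, ~0.7 s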
  • Item
    Temporal Processing in Adults Who Stutter
    (2024-05-10) Wathen, Jasmine; Anderson, Samira; Ratner, Nan Bernstein
    Stuttering is often thought of as simply an impairment in speech production. However, some studies have indicated that people who stutter (PWS) also experience temporal processing impairments that affect speech perception. In particular, previous behavioral and electrophysiology (EEG) studies have demonstrated time delays in processing speech stimuli in PWS. Most research to date has examined these timing delays only at the level of the cerebral cortex, which represents the later stages of processing. Very few studies have examined delays at the level of the brainstem, and no study has looked at processing in both the cortex and the brainstem. This study recruited adults who stutter (AWS) and adults who do not stutter (AWNS) to examine how each group processes speech at both subcortical and cortical levels. Participants completed a perceptual test to determine how well they perceived speech and underwent EEG testing to measure cortical and subcortical electrical activity while listening to speech stimuli. Compared to AWNS, AWS showed poorer neural representations of the speech stimulus in the brainstem and delays at the cortical level. Perceptual testing also suggested that AWS perceive phoneme boundaries in words less accurately than AWNS. Our research suggests that temporal processing deficits are a factor in stuttering and that these deficits arise at early levels of the auditory system. These findings may call for an update of current speech therapy methods to address the timing delays that AWS experience in speech processing.
  • Item
    Determining the Mechanisms of Spoken Language Processing Delay for Children with Cochlear Implants
    (2023) Blomquist, Christina Marie; Edwards, Jan R; Newman, Rochelle S; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The long-term objective of this project was to better understand how shorter auditory experience and spectral degradation of the cochlear implant (CI) signal impact spoken language processing in deaf children with CIs. The specific objective was to use psycholinguistic methods to investigate the mechanisms underlying observed delays in spoken word recognition and in access to networks of semantically related words in the lexicon, both vital components of efficient spoken language comprehension. The first experiment used eye-tracking to investigate the contributions of early auditory deprivation and the degraded CI signal to spoken word recognition delays in children with CIs. Performance of children with CIs was compared to that of typical hearing (TH) control groups matched for either chronological age or hearing age, who heard either clear or vocoded speech. The second experiment investigated semantic processing in the face of a spectrally degraded signal (TH adult listeners presented with vocoded speech) by recording event-related potentials, specifically the N400. Results indicated that children with CIs show slower lexical access and less immediate lexical competition; while early hearing experience supports more efficient recognition, much of the observed delay can be attributed to listening to a degraded signal in the moment, as children with TH demonstrate similar patterns of processing when presented with vocoded speech. However, some group differences remain: children with CIs show slower lexical access and longer-lasting competition, suggesting potential effects of learning from a degraded speech signal. With regard to higher-level semantic processing, TH adult listeners demonstrated more limited access to semantic networks when presented with a degraded speech signal. This finding suggests that uncertainty due to the degraded speech signal may lead to less immediate cascading processing at both the word level and the level of semantic networks. Clinically, these results highlight the importance of early cochlear implantation and of maximizing access to spectral detail in the speech signal for children with CIs. Additionally, some of the delays in spoken language processing may reflect an alternative listening strategy, engaged to reduce the chance of incorrect predictions and thereby avoid costly revision processes.
  • Item
    Stuttering or not? Analysis of language exposure effects on fluency assessment
    (2024-05-03) Ahluwalia, Seetal; Bernstein Ratner, Nan; Faroqi-Shah, Yasmeen; Ortiz, José
    Language exposure is hypothesized to impact bilingual speakers’ levels of typical and stuttering-like disfluency. The current study examined the relationship between English language exposure before school age and bilingual children’s speech fluency during an English task. The sample included 33 Spanish-English bilingual children from the English-MiamiBiling corpus at CHILDES. Participants were asked to narrate Mayer’s (1969) wordless picture book, Frog, Where Are You? Children who spoke only Spanish in the home were labeled “MonoSpanHome,” while children who spoke English and Spanish at home were labeled “BilingHome.” It was hypothesized that children in the “MonoSpanHome” group would be more disfluent than their “BilingHome” peers. The “MonoSpanHome” participants produced more typical disfluencies than their “BilingHome” counterparts; however, the number of stuttering-like disfluencies and the total number of disfluencies were similar between the two groups.
  • Item
    Following the Conversation: Impacts of Set-Shifting and Topic-Shifting in Healthy Adults and Individuals with Traumatic Brain Injury
    (2024-04-26) Vess, Avery; Novick, Jared; Marshall, Kelly
    Difficulty with conversational discourse, marked by problems processing topic structure, is a common characteristic of cognitive-communication disorders in individuals with traumatic brain injury (TBI). Despite progress in foundational word- and sentence-level skills through speech therapy interventions, problems with conversational discourse tend to persist. This persistence suggests a gap in understanding how non-linguistic cognitive processes influence conversation. In this set of experiments, I test whether cognitive mechanisms related to set-shifting contribute to processing topic shifts in conversation. The purpose of Experiment 1 was to determine whether topic switches show the characteristic behavioral signatures of set-shifting that emerge in non-linguistic tasks: longer response onset latencies and decreased information content efficiency. The first experiment showed no differences between responses to new topics and to the same topics on these measures; however, it was unclear whether these results would generalize to shifts in naturalistic conversation or were simply products of the experimental design. In Experiment 2, I examined the impact of topic switches in naturalistic conversation on language production in healthy adults and TBI patients, measuring productivity, semantic complexity, syntactic complexity, and fluency for responses to new topics and to the same topics. I found that topic shifts elicited costs in the number of words per utterance, verbs per utterance, revisions/rephrasings, and filled and unfilled pauses per syllable for both groups. These findings demonstrate that there are costs associated with switching topics that mirror non-linguistic shift costs and may arise from similar mechanisms.