Hearing & Speech Sciences Theses and Dissertations
Permanent URI for this collection: http://hdl.handle.net/1903/2776
Item: Values in American Hearing Healthcare (2024)
Menon, Katherine Noel; Hoover, Eric C; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The long-term objective of this research is to create a more inclusive, patient-centered hearing healthcare system that aligns with all stakeholders' diverse values and needs. This dissertation explores the values shaping hearing healthcare through three complementary studies. Chapter 2 analyzes the introduction of over-the-counter (OTC) hearing aids, revealing a shift in values from traditional audiology's focus on accuracy, safety, and subjective benefit toward prioritizing access and affordability. Implementing an OTC service delivery model promoted values different from those of traditional audiology; still, the creation of OTC offers affordances that enable the design of more patient-centered hearing healthcare systems that reflect stakeholders' values. Chapter 3 validates a comprehensive list of values in audiology through a national survey of audiologists, confirming alignment with best-practice guidelines. Previous work had developed a codebook of values from textual documents representing best practices in traditional audiology, and it was essential to validate those findings by engaging directly with audiologists. Chapter 4 develops a codebook based on the values of individuals with hearing difficulties, categorizing their concerns into Material, Social, and Healthcare domains. Results from this study highlight the importance of considering the values of individuals with hearing loss, which encompass not only the use of hearing aids and affordable hearing healthcare but also concerns about the effectiveness, usefulness, and social implications of hearing aids. Together, these studies underscore the balance between efforts to improve accessibility and the need to maintain patient-centered outcomes, suggesting that future research should focus on understanding how values intersect with the daily lives and decision-making processes of all people with difficulty hearing.

Item: Adult discrimination of children's voices over time: Voice discrimination of auditory samples from longitudinal research studies (2024)
Opusunju, Shelby; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The human voice changes over the lifespan, and these changes are even more pronounced in children. Acoustic properties of speech, such as fundamental frequency, amplitude, speech rate, and fluency, change dramatically as children grow and develop (Lee et al., 1999). Previous studies have established that listeners are generally good at discriminating between adult speakers, and at identifying a speaker's age, based solely on the voice (Kreiman & Sidtis, 2011; Park, 2019). However, few studies have examined listeners' capacity to discriminate between the voices of children, particularly as the voice matures over time. This study examines how well adult listeners can discriminate between the voices of young children of the same age and at different ages. Single-word child language samples from different children (N = 6) were obtained from Munson et al. (2021) and used to create closed-set online AX voice discrimination tasks for adult listeners (N = 31).
Three tasks examined listeners' accuracy and sensitivity in identifying whether a voice was that of the same child or a different child under three conditions: 1) between two children who are both three years old, 2) between two children who are both five years old, and 3) between two children of different ages (three vs. five years old). Listeners performed at above-chance levels of accuracy and sensitivity when discriminating between the voices of three-year-olds and between the voices of five-year-olds, and performance did not differ significantly between these two tasks. No listeners demonstrated above-chance accuracy in discriminating between the voices of a single child at two different ages, and performance on this task was significantly poorer than on the other two. These findings demonstrate a sizable gap between adults' ability to discriminate child voices across two different ages and their ability to do so at a single age. Possible explanations and implications for understanding child talker discrimination across different ages are discussed.

Item: Evaluating the role of acoustic cues in identifying the presence of a code-switch (2024)
Exton, Erika Lynn; Newman, Rochelle S.; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Code-switching (switching between languages) is a common linguistic behavior in bilingual speech directed to infants and children. In adult-directed speech (ADS), acoustic-phonetic properties of one language may transfer to the other language close to a code-switch point; for example, English stop consonants may be more Spanish-like near a switch. This acoustically natural code-switching may be easier for bilingual listeners to comprehend than code-switching without these acoustic changes; however, it effectively makes the languages more phonetically similar at the point of a code-switch, which could make them difficult for an unfamiliar listener to distinguish. The goal of this research was to assess the acoustic-phonetic cues to code-switching available to listeners unfamiliar with the languages by studying the perception and production of these cues. In Experiment 1, Spanish-English bilingual adults (particularly those who hear code-switching frequently), but not English monolingual adults, were sensitive to natural acoustic cues to code-switching in unfamiliar languages and could use them to identify language switches between French and Mandarin. Such cues were particularly helpful when they allowed listeners to anticipate an upcoming language switch (Experiment 2). In Experiment 3, monolingual children appeared unable to continually identify which language they were hearing. Experiment 4 provides some preliminary evidence that monolingual infants can identify a switch between French and Mandarin, though without addressing the utility of natural acoustic cues for infants. The acoustic detail of code-switched speech to infants was also investigated, to evaluate how acoustic properties of bilingual infant-directed speech (IDS) are affected by the presence of and proximity to code-switching. Spanish-English bilingual women narrated wordless picture books in IDS and ADS, and the voice onset times (VOTs) of their English voiceless stops were analyzed in code-switching and English-only stories in each register.
In ADS only, English voiceless stops that preceded an English-to-Spanish code-switch and were closer to that switch point were produced with more Spanish-like voice onset times than more distant tokens. This effect of distance to Spanish on English VOTs did not hold for tokens that followed Spanish in ADS, or in either direction in IDS, suggesting that parents may avoid producing these acoustic cues when speaking to young children.

Item: UNDERSTANDING HOW AFRICAN AMERICAN ENGLISH-SPEAKING CHILDREN USE INFLECTIONAL VERB MORPHOLOGY IN SENTENCE PROCESSING AND WORD LEARNING (2024)
Byrd, Arynn S; Edwards, Jan; Huang, Yi Ting; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

This research examined how linguistic differences between African American English (AAE) and Mainstream American English (MAE) affect how children process sentences and learn new information. The central hypothesis of this dissertation is that these linguistic differences adversely affect how AAE-speaking children use contrastive inflectional verb morphology (e.g., was/were, third-person singular -s) to process and comprehend MAE sentences, and to infer word meanings when those meanings depend on dialect-specific parsing of sentence cues. To test this hypothesis, this dissertation conducted three experiments on how linguistic mismatch affects spoken language comprehension and word learning in school-age MAE- and AAE-speaking children. The first study examined how children used the auxiliary verbs was and were to comprehend MAE sentences in an offline spoken language comprehension task, while the second study asked the same question in an online sentence processing task. The final study examined how children used inflectional verb morphology (i.e., third-person singular -s, was/were) to infer information about novel verbs. Each study examined how participants' dialect, either MAE or AAE, predicted performance on listening tasks produced in MAE, and how individual differences such as age, dialect density, and vocabulary size influenced children's performance. Across all studies, results demonstrated that when redundant linguistic cues were unaffected by dialect differences, AAE- and MAE-speaking children used the available cues to process and comprehend spoken language and to infer verb meanings in a similar manner. However, when linguistic redundancy was reduced by perceptual ambiguity, there were group differences in how AAE- and MAE-speaking children used inflectional verb morphology on spoken language tasks. The second study showed that AAE-speaking children were sensitive to contrastive verb morphology in real-time processing, but they were less likely than their MAE-speaking peers to use it as an informative cue to revise initial parses when processing spoken language. The results of the final study indicated that individual characteristics such as age and dialect density influence how dialect affects a learning process. These results demonstrate that linguistic mismatch can affect spoken language processes.
Furthermore, the findings from this research highlight a complex relationship between the effects of linguistic mismatch and individual differences such as age and dialect density.

Item: EXPLORING THE RELATIONSHIP BETWEEN VERB RETRIEVAL, AGRAMMATISM AND PAUSES (2024)
Campbell, Lauren; Faroqi-Shah, Yasmeen; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Persons with agrammatic aphasia (a symptom of Broca's aphasia) tend to speak at a slow rate compared to neurotypical adults and to persons with other aphasia subtypes (Kertesz, 2007). This connection between slow rate and agrammatic aphasia is underexplored. This study examines three key variables affecting speech rate in agrammatic narratives: syntactic impairment (i.e., a diagnosis of agrammatism), verb retrieval, and pauses. Specifically, forty-five narrative (Cinderella) samples (15 agrammatic aphasia, 15 anomic aphasia, and 15 controls) from the AphasiaBank database (MacWhinney et al., 2011) were converted into Praat TextGrids (Boersma & Weenink, 2023) with sound files, and the first five qualifying pre-verb pause durations were recorded. Additionally, the first five qualifying pre-noun pauses were logged for comparison, along with the overall grammaticality of each targeted utterance. The number of pauses and pause duration differentiated persons with agrammatic aphasia from persons with anomic aphasia and neurotypical controls, yet in utterances where verbs were successfully retrieved, neither verb retrieval nor the syntactic well-formedness of the utterance varied significantly by aphasia type. Overall, this study did not lend support to the Synergistic Processing Bottleneck model of agrammatic aphasia (Faroqi-Shah, 2023).

Item: Determining the Mechanisms of Spoken Language Processing Delay for Children with Cochlear Implants (2023)
Blomquist, Christina Marie; Edwards, Jan R; Newman, Rochelle S; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The long-term objective of this project was to better understand how shorter auditory experience and spectral degradation of the cochlear implant (CI) signal affect spoken language processing in deaf children with CIs. The specific objective was to use psycholinguistic methods to investigate the mechanisms underlying observed delays in spoken word recognition and in access to networks of semantically related words in the lexicon, both vital components of efficient spoken language comprehension. The first experiment used eye-tracking to investigate the contributions of early auditory deprivation and the degraded CI signal to spoken word recognition delays in children with CIs. Performance of children with CIs was compared to that of typical-hearing (TH) control groups matched for either chronological age or hearing age, who heard either clear or vocoded speech. The second experiment investigated semantic processing in the face of a spectrally degraded signal (TH adult listeners presented with vocoded speech) by recording event-related potentials, specifically the N400. Results showed that children with CIs have slower lexical access and less immediate lexical competition; while early hearing experience supports more efficient recognition, much of the observed delay can be attributed to listening to a degraded signal in the moment, as children with TH demonstrate similar patterns of processing when presented with vocoded speech.
However, some group differences remain: children with CIs show slower lexical access and longer-lasting competition, suggesting potential effects of learning from a degraded speech signal. With regard to higher-level semantic processing, TH adult listeners demonstrate more limited access to semantic networks when presented with a degraded speech signal. This finding suggests that uncertainty due to the degraded speech signal may lead to less immediate cascading processing at both the word level and the level of semantic processing. Clinically, these results highlight the importance of early cochlear implantation and of maximizing access to spectral detail in the speech signal for children with CIs. Additionally, some of the delays in spoken language processing may reflect an alternative listening strategy, engaged to reduce the chance of incorrect predictions and thus prevent costly revision processes.

Item: SYNTACTIC AND LEXICAL ALIGNMENT DURING NATURALISTIC CONVERSATIONS AMONGST AFRICAN AMERICAN PARENTS OF 4-YEAR-OLD CHILDREN FROM PROFESSIONAL- AND WORKING-CLASS FAMILIES (2023)
Ogbonna, Chidinma; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Parents play an important role in child language development. This study examines differences in lexical and syntactic alignment in child-directed speech (CDS) between African American mothers and fathers from professional- and working-class families. The Hall (1984) corpus from the Child Language Data Exchange System (CHILDES; MacWhinney, 1991) was used to analyze syntactic and lexical alignment in African American professional- and working-class parent-child dyads (children aged 4;6). We investigated the proportion of overlapping nouns shared between mother-child and father-child dyads, as well as differences between parent-child syntactic complexity scores (i.e., Mean Length of Utterance in words (MLU-w) and Verbs per Utterance (Verbs/utt)). Results revealed no significant differences in lexical or syntactic alignment between the professional- and working-class families; however, fathers produced a significantly higher average proportion of overlapping nouns than mothers.

Item: ISOLATING EFFECTS OF PERCEPTUAL ANALYSIS AND SOCIOCULTURAL CONTEXT ON CHILDREN'S COMPREHENSION OF TWO DIALECTS OF ENGLISH, AFRICAN AMERICAN ENGLISH AND GENERAL AMERICAN ENGLISH (2023)
Erskine, Michelle E; Edwards, Jan; Huang, Yi Ting; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

There is a long-standing gap in literacy achievement between African American and European American students (e.g., NAEP, 2019, 2022). A large body of research has examined the factors that continue to reinforce these performance differences. One variable of long-standing interest to sociolinguists and applied scientists is children's use of different dialects in the classroom. Many African American students speak African American English (AAE), a rule-governed but socially stigmatized dialect of English that differs in phonology, morphosyntax, and pragmatics from General American English (GAE), the dialect of classroom instruction.
Empirical research on dialect variation and literacy achievement has demonstrated that linguistic differences between dialects make it more difficult to learn to read (Buhler et al., 2018; Charity et al., 2004; Gatlin & Wanzek, 2015; Washington et al., 2018, inter alia) and, more recently, more difficult to comprehend spoken language (Byrd et al., 2022; Edwards et al., 2014; Erskine, 2022a; Johnson, 2005; de Villiers & Johnson, 2007; Terry, Hendrick, Evangelou, et al., 2010; Terry, Thomas, Jackson, et al., 2022). The prevailing explanation for these results has been the perceptual analysis hypothesis, a framework asserting that linguistic differences across dialects create challenges in mapping variable speech signals to listeners' stored mental representations (Adank et al., 2009; Clopper, 2012; Clopper & Bradlow, 2008; Cristia et al., 2012). However, spoken language comprehension is more than perceptual analysis; it requires the integration of perceptual information with communicative intent and sociocultural information (speaker identity). To this end, it is proposed that the perceptual analysis hypothesis treats dialect variation as just another form of signal degradation, and that reducing dialect variation to a signal-mapping problem limits our understanding of its contribution to spoken language comprehension. This dissertation proposes that research on spoken language comprehension should integrate frameworks that are more sensitive to the sociocultural aspects of dialect variation, such as the role of linguistic and nonlinguistic cues associated with speakers of different dialects. The dissertation includes four experiments that use the visual world paradigm to explore the effects of dialect variation on spoken language comprehension among children between the ages of 3;0 and 11;11 (years;months) from two linguistic communities: European American speakers of GAE and African American speakers with varying degrees of exposure to AAE and GAE. Chapter 2 (Erskine, 2022a) investigates the effects of dialect variation in auditory-only contexts in two spoken word recognition tasks that vary in linguistic complexity: a) word recognition in simple phrases and b) word recognition in sentences that vary in semantic predictability. Chapter 3 (Erskine, 2022b) examines the effects of visual and auditory speaker identity cues and dialect variation on literal semantic comprehension (i.e., word recognition in semantically facilitating sentences). Lastly, Chapter 4 (Erskine, 2022c) examines the effects of visual and auditory speaker identity cues on children's comprehension of different dialects in a task that evaluates pragmatic inferencing (i.e., scalar implicature). Each study tests the perceptual analysis hypothesis against sociolinguistically informed hypotheses that account for the integration of linguistic and nonlinguistic speaker identity cues as explanations for the observed relationships between dialect variation and spoken language comprehension. Collectively, these studies address the question of how dialect variation affects spoken language comprehension. This dissertation provides evidence that traditional explanations focused on perceptual costs cannot fully account for the correlations typically reported between spoken language comprehension and dialect use.
Additionally, it shows that school-age children rapidly integrate linguistic and nonlinguistic socioindexical cues in ways that meaningfully guide their comprehension of different speakers. The implications of these findings and directions for future research are also addressed.

Item: The Effect of Language Mixing on Word Retrieval in Bilingual Adults with Aphasia (2022)
Nichols, Meghan; Faroqi-Shah, Yasmeen; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Lexical retrieval deficits are a common feature of aphasia, and while much research has been done on bilingual aphasia and on the processes involved in language mixing by healthy bilingual adults, it is not clear whether it is beneficial for bilingual people with aphasia to change languages at moments of lexical retrieval difficulty or more effective to continue the lexical search in one language. The primary aim of this project was to determine whether bilingual people with aphasia demonstrate global and local effects of language mixing. Grammatical categories (i.e., nouns and verbs) were examined separately, and participant- and stimulus-related factors were considered. Preliminary analyses of participants' accuracy and response onset latencies suggest that participants tended to benefit from mixing in terms of speed and accuracy, and that these benefits may be related to their language proficiency and dominance.

Item: Examining Narrative Language in Early Stage Parkinson's Disease and Intermediate Farsi-English Bilingual Speakers (2022)
Lohrasbi, Bushra; Faroqi-Shah, Yasmeen; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

This study examined procedural aspects of language (grammaticality, syntactic complexity, regular past tense verb production), verb use, and the association between motor-speech abilities, language abilities, and intelligibility in early stage Parkinson's Disease (PD) and intermediate Farsi-English bilingual (L2) speakers. Ullman's Declarative-Procedural Model (2001) provided a dual-mechanism framework that justified a theoretical comparison between these two populations. Twenty-four neurologically healthy native speakers of English, twenty-three participants with Parkinson's Disease, and thirteen bilingual Farsi-English speakers completed three narrative picture description tasks and read the first three sentences of the Rainbow Passage. Language samples were transcribed and analyzed to derive measures of morphosyntax and verb use, including grammatical accuracy, grammatical complexity, and the proportions of regular past tense verbs, action verbs, and light verbs. The results showed no evidence of a morphosyntactic or action verb deficit in PD, nor any evidence of a trade-off between morphosyntactic performance and severity of speech motor impairment in PD. L2 speakers had lower scores on grammatical accuracy and on a measure of morphosyntactic complexity but did not differ from monolingual speakers on measures of verb use. Overall, these results show that language abilities (morphosyntax and verb use) are preserved in early stage PD. This study replicates the well-documented finding that morphosyntax is particularly challenging for late bilingual speakers. The results did not support Ullman's (2001) Declarative-Procedural hypothesis of language production in Parkinson's Disease or L2 speakers.