Hearing & Speech Sciences Theses and Dissertations

Permanent URI for this collection: http://hdl.handle.net/1903/2776

Recent Submissions

  • Item
    Evaluating the role of acoustic cues in identifying the presence of a code-switch
    (2024) Exton, Erika Lynn; Newman, Rochelle S.; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Code-switching (switching between languages) is a common linguistic behavior in bilingual speech directed to infants and children. In adult-directed speech (ADS), acoustic-phonetic properties of one language may transfer to the other language close to a code-switch point; for example, English stop consonants may be more Spanish-like near a switch. This acoustically natural code-switching may be easier for bilingual listeners to comprehend than code-switching without these acoustic changes; however, it effectively makes the languages more phonetically similar at the point of a code-switch, which could make them difficult for an unfamiliar listener to distinguish. The goal of this research was to assess the acoustic-phonetic cues to code-switching available to listeners unfamiliar with the languages by studying the perception and production of these cues. In Experiment 1, Spanish-English bilingual adults (particularly those who hear code-switching frequently), but not English monolingual adults, were sensitive to natural acoustic cues to code-switching in unfamiliar languages and could use them to identify language switches between French and Mandarin. Such cues were particularly helpful when they allowed listeners to anticipate an upcoming language switch (Experiment 2). In Experiment 3, monolingual children appeared unable to continually identify which language they were hearing. Experiment 4 provided some preliminary evidence that monolingual infants can identify a switch between French and Mandarin, though without addressing the utility of natural acoustic cues for infants. The acoustic detail of code-switched speech to infants was then investigated to evaluate how acoustic properties of bilingual infant-directed speech (IDS) are affected by the presence of and proximity to code-switching. Spanish-English bilingual women narrated wordless picture books in IDS and ADS, and the voice onset times (VOTs) of their English voiceless stops were analyzed in code-switching and English-only stories in each register. In ADS only, English voiceless stops that preceded an English-to-Spanish code-switch and were closer to that switch point were produced with more Spanish-like voice onset times than more distant tokens. This effect of distance to Spanish on English VOTs did not hold for tokens that followed Spanish in ADS, or in either direction in IDS, suggesting that parents may avoid producing these acoustic cues when speaking to young children.
  • Item
    UNDERSTANDING HOW AFRICAN AMERICAN ENGLISH-SPEAKING CHILDREN USE INFLECTIONAL VERB MORPHOLOGY IN SENTENCE PROCESSING AND WORD LEARNING
    (2024) Byrd, Arynn S; Edwards, Jan; Huang, Yi Ting; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This research examined how linguistic differences between African American English (AAE) and Mainstream American English (MAE) impact how children process sentences and learn new information. The central hypothesis of this dissertation is that these linguistic differences adversely impact how AAE-speaking children use contrastive inflectional verb morphology (e.g., was/were, third person singular -s) to process and comprehend MAE sentences, as well as to infer word meanings when they depend on dialect-specific parsing of sentence cues. To test this hypothesis, this dissertation conducted three experiments on how linguistic mismatch impacts spoken language comprehension and word learning in school-age MAE- and AAE-speaking children. The first study examined how children used the auxiliary verbs was or were to comprehend MAE sentences in an offline spoken language comprehension task. The second study asked the same question using an online sentence processing task. The final study examined how children used inflectional verb morphology (i.e., third-person singular -s, was/were) to infer information about novel verbs. Each study examined how participants' dialect, either MAE or AAE, predicted performance on listening tasks produced in MAE. Furthermore, each study examined how individual differences such as age, dialect density, and vocabulary size influenced children's performance. Across all studies, results demonstrated that when there were redundant linguistic cues that were not impacted by dialect differences, AAE- and MAE-speaking children used available linguistic cues to process and comprehend spoken language and infer verb meanings in a similar manner. However, when linguistic redundancy was decreased due to perceptual ambiguity, there were group differences in how AAE- and MAE-speaking children used inflectional verb morphology on spoken language tasks. The second study showed that AAE-speaking children were sensitive to contrastive verb morphology in real-time processing, but they were less likely than their MAE-speaking peers to use it as an informative cue to revise initial parses when processing spoken language. The results of the final study indicated that individual characteristics such as age and dialect density influence how dialect impacts a learning process. These results demonstrate that linguistic mismatch can affect spoken language processes. Furthermore, the findings from this research highlight a complex relationship between the effects of linguistic mismatch and individual differences such as age and dialect density.
  • Item
    EXPLORING THE RELATIONSHIP BETWEEN VERB RETRIEVAL, AGRAMMATISM AND PAUSES
    (2024) Campbell, Lauren; Faroqi-Shah, Yasmeen; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Persons with agrammatic aphasia (a symptom of Broca's aphasia) tend to speak at a slow rate compared to neurotypical adults and to other aphasia subtypes (Kertesz, 2007). This connection between slow rate and agrammatic aphasia is underexplored. This study examines three key variables impacting speech rate in agrammatic narratives: syntactic impairment (i.e., diagnosis of agrammatism), verb retrieval, and pauses. Specifically, forty-five narrative (Cinderella) samples (15 agrammatic aphasia, 15 anomic aphasia, and 15 controls) from the AphasiaBank database (MacWhinney et al., 2011) were converted into Praat TextGrids (Boersma & Weenink, 2023) with sound files, and the first five qualifying pre-verb pause durations were recorded. Additionally, the first five qualifying pre-noun pauses were logged for comparison, as was the overall grammaticality of each targeted utterance. Results showed that the number and duration of pauses differentiated persons with agrammatic aphasia from persons with anomic aphasia and neurotypical controls, yet verb retrieval and the syntactic well-formedness of an utterance did not significantly vary by aphasia type in utterances where verbs were successfully retrieved. Overall, this study did not lend support to the Synergistic Processing Bottleneck model of agrammatic aphasia (Faroqi-Shah, 2023).
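    As a rough illustration of this kind of measurement, the sketch below pulls pre-verb pause durations from a word-aligned tier of the sort a Praat TextGrid provides. The interval format, the part-of-speech lookup, and the 150 ms minimum pause threshold are all illustrative assumptions, not the study's actual coding protocol.

```python
# Hypothetical sketch: extract pre-verb pause durations from a
# word-aligned tier represented as (start_sec, end_sec, label)
# tuples, with silent pauses labeled "". The "V" tag and the
# 150 ms threshold are illustrative assumptions.

def pre_verb_pauses(intervals, pos_tags, min_pause=0.150, max_count=5):
    """Return durations of the first `max_count` qualifying pauses
    that immediately precede a verb-tagged word."""
    pauses = []
    for i in range(1, len(intervals)):
        _, _, label = intervals[i]
        if label == "" or pos_tags.get(label) != "V":
            continue  # only look at verb-labeled intervals
        p_start, p_end, p_label = intervals[i - 1]
        if p_label == "" and (p_end - p_start) >= min_pause:
            pauses.append(p_end - p_start)
            if len(pauses) == max_count:
                break
    return pauses

# Toy usage: one 300 ms pause before the verb "swept".
tier = [(0.0, 0.4, "she"), (0.4, 0.7, ""), (0.7, 1.1, "swept"),
        (1.1, 1.3, "the"), (1.3, 1.8, "floor")]
tags = {"she": "PRON", "swept": "V", "the": "DET", "floor": "N"}
print(pre_verb_pauses(tier, tags))  # -> [0.3] (approximately)
```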
  • Item
    Determining the Mechanisms of Spoken Language Processing Delay for Children with Cochlear Implants
    (2023) Blomquist, Christina Marie; Edwards, Jan R; Newman, Rochelle S; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The long-term objective of this project was to better understand how shorter auditory experience and spectral degradation of the cochlear implant (CI) signal impact spoken language processing in deaf children with CIs. The specific objective of this research was to utilize psycholinguistic methods to investigate the mechanisms underlying observed delays in spoken word recognition and the access of networks of semantically related words in the lexicon, which are both vital components for efficient spoken language comprehension. The first experiment used eye-tracking to investigate the contributions of early auditory deprivation and the degraded CI signal to spoken word recognition delays in children with CIs. Performance of children with CIs was compared to various typical hearing (TH) control groups matched for either chronological age or hearing age, and who heard either clear or vocoded speech. The second experiment investigated semantic processing in the face of a spectrally degraded signal (TH adult listeners presented with vocoded speech) by recording event-related potentials, specifically the N400. Results showed that children with CIs have slower lexical access and less immediate lexical competition, and that while early hearing experience supports more efficient recognition, much of the observed delay can be attributed to listening to a degraded signal in the moment, as children with TH demonstrate similar patterns of processing when presented with vocoded speech. However, some group differences remain: children with CIs show slower lexical access and longer-lasting competition, suggesting potential effects of learning from a degraded speech signal. With regard to higher-level semantic processing, TH adult listeners demonstrate more limited access of semantic networks when presented with a degraded speech signal. This finding suggests that uncertainty due to the degraded speech signal may lead to less immediate cascading processing at both the word level and higher-level semantic processing. Clinically, these results highlight the importance of early cochlear implantation and of maximizing access to spectral detail in the speech signal for children with CIs. Additionally, it is possible that some of the delays in spoken language processing are the result of an alternative listening strategy that may be engaged to reduce the chance of incorrect predictions, thus preventing costly revision processes.
  • Item
    SYNTACTIC AND LEXICAL ALIGNMENT DURING NATURALISTIC CONVERSATIONS AMONGST AFRICAN AMERICAN PARENTS OF 4-YEAR-OLD CHILDREN FROM PROFESSIONAL- AND WORKING-CLASS FAMILIES
    (2023) Ogbonna, Chidinma; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Parents play an important role in child language development. This study examines differences in lexical and syntactic alignment in child-directed speech (CDS) between African American mothers and fathers from professional- and working-class families. The Hall (1984) corpus from the Child Language Data Exchange System (CHILDES; MacWhinney, 1991) was used to analyze syntactic and lexical alignment in African American professional- and working-class parent-child dyads (children aged 4;6). We investigated the proportion of overlapping nouns shared between mother-child and father-child dyads, as well as differences between parent and child syntactic complexity scores (i.e., Mean Length of Utterance in words (MLU-w) and Verbs per Utterance (Verbs/utt)). Results revealed no significant differences in lexical or syntactic alignment between the professional- and working-class families; however, fathers were found to produce a significantly higher average proportion of overlapping nouns compared to mothers.
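    To make the two measures named above concrete, here is a minimal sketch of MLU-w and a noun-overlap proportion. The whitespace tokenization and the toy noun lists are illustrative assumptions, not the Hall corpus coding scheme.

```python
# Illustrative sketch of two measures from the abstract: mean length
# of utterance in words (MLU-w) and the proportion of overlapping
# nouns between a parent's and a child's speech. Tokenization and
# the noun lists are toy assumptions.

def mlu_w(utterances):
    """Mean length of utterance in words."""
    return sum(len(u.split()) for u in utterances) / len(utterances)

def noun_overlap(parent_nouns, child_nouns):
    """Proportion of the parent's noun types also used by the child."""
    parent = set(parent_nouns)
    return len(parent & set(child_nouns)) / len(parent) if parent else 0.0

parent_utts = ["you see the big dog", "the dog wants a ball"]
print(mlu_w(parent_utts))                             # -> 5.0
print(noun_overlap(["dog", "ball"], ["dog", "cup"]))  # -> 0.5
```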
  • Item
    ISOLATING EFFECTS OF PERCEPTUAL ANALYSIS AND SOCIOCULTURAL CONTEXT ON CHILDREN’S COMPREHENSION OF TWO DIALECTS OF ENGLISH, AFRICAN AMERICAN ENGLISH AND GENERAL AMERICAN ENGLISH
    (2023) Erskine, Michelle E; Edwards, Jan; Huang, Yi Ting; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    There is a long-standing gap in literacy achievement between African American and European American students (e.g., NAEP, 2019, 2022). A large body of research has examined different factors that continue to reinforce performance differences across students. One variable that has been of long-term interest to sociolinguists and applied scientists is children's use of different dialects in the classroom. Many African American students speak African American English (AAE), a rule-governed, but socially stigmatized, dialect of English that differs in phonology, morphosyntax, and pragmatics from General American English (GAE), the dialect of classroom instruction. Empirical research on dialect variation and literacy achievement has demonstrated that linguistic differences between dialects make it more difficult to learn to read (Buhler et al., 2018; Charity et al., 2004; Gatlin & Wanzek, 2015; Washington et al., 2018, inter alia) and, more recently, more difficult to comprehend spoken language (Byrd et al., 2022; Edwards et al., 2014; Erskine, 2022a; Johnson, 2005; de Villiers & Johnson, 2007; Terry, Hendrick, Evangelou, et al., 2010; Terry, Thomas, Jackson, et al., 2022). The prevailing explanation for these results has been the perceptual analysis hypothesis, a framework that asserts that linguistic differences across dialects create challenges in mapping variable speech signals to listeners' stored mental representations (Adank et al., 2009; Clopper, 2012; Clopper & Bradlow, 2008; Cristia et al., 2012). However, spoken language comprehension is more than perceptual analysis, requiring the integration of perceptual information with communicative intent and sociocultural information (speaker identity). To this end, it is proposed that the perceptual analysis hypothesis treats dialect variation as just another form of signal degradation. Simplifying dialect variation to a signal-mapping problem potentially limits our understanding of the contribution of dialect variation to spoken language comprehension. This dissertation proposes that research on spoken language comprehension should integrate frameworks that are more sensitive to the sociocultural aspects of dialect variation, such as the role of linguistic and nonlinguistic cues that are associated with speakers of different dialects. This dissertation includes four experiments that use the visual world paradigm to explore the effects of dialect variation on spoken language comprehension among children between the ages of 3;0 and 11;11 (years;months) from two linguistic communities: European American speakers of GAE and African American speakers with varying degrees of exposure to AAE and GAE. Chapter 2 (Erskine, 2022a) investigates the effects of dialect variation in auditory-only contexts in two spoken word recognition tasks that vary in linguistic complexity: a) word recognition in simple phrases and b) word recognition in sentences that vary in semantic predictability. Chapter 3 (Erskine, 2022b) examines the effects of visual and auditory speaker identity cues on the comprehension of different dialects in a literal semantic comprehension task (i.e., word recognition in semantically facilitating sentences). Lastly, Chapter 4 (Erskine, 2022c) examines the effects of visual and auditory speaker identity cues on children's comprehension of different dialects in a task that evaluates pragmatic inferencing (i.e., scalar implicature).
Each of the studies investigates the validity of the perceptual analysis hypothesis against sociolinguistically informed hypotheses that account for the integration of linguistic and nonlinguistic speaker identity cues as explanations for the relationships observed between dialect variation and spoken language comprehension. Collectively, these studies address the question of how dialect variation impacts spoken language comprehension. This dissertation provides evidence that traditional explanations that focus on perceptual costs are limited in their ability to account for correlations typically reported between spoken language comprehension and dialect use. Additionally, it shows that school-age children rapidly integrate linguistic and nonlinguistic socioindexical cues in ways that meaningfully guide their comprehension of different speakers. The implications of these findings and future research directions are also addressed.
  • Item
    Supportive Messages Perceived and Received in a Therapeutic Setting
    (1994) Barr, Jeanine Rice; Freimuth, Vicki S.; Speech Communication; University of Maryland (College Park, Md.); Digital Repository at the University of Maryland
    This study examines how communication of social support influences the behavioral change process in a particular environment. Specifically, the research question is: How is social support related to commitment to recovery from alcoholism/addiction? A one-group pre-test/post-test design was used with subjects in two addictions treatment centers. Questions were designed to measure changes in individuals' perceptions of the supportiveness of messages received, the network support available to them, uncertainty, and self-esteem. Finally, how these variables predict commitment to recovery was examined. Results showed no relationship between strength of network at time 1 and the supportiveness of messages received. Strength of network support, self-esteem, and uncertainty reduction improved from time 1 to time 2. The major predictor of a patient's commitment to recovery was the level of self-esteem at time 2. However, a strong correlation was found between self-esteem and strength of network at time 2.
  • Item
    The Effect Of Language Mixing on Word Retrieval in Bilingual Adults with Aphasia
    (2022) Nichols, Meghan; Faroqi-Shah, Yasmeen; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Lexical retrieval deficits are a common feature of aphasia, and while much research has been done on bilingual aphasia and on the processes involved in language mixing by healthy bilingual adults, it is not clear whether it is beneficial for bilingual people with aphasia to switch languages in moments of lexical retrieval difficulty or more effective to continue the lexical search in one language. The primary aim of this project was to determine whether bilingual people with aphasia demonstrate global and local effects of language mixing. Grammatical categories (i.e., nouns and verbs) were examined separately, and participant- and stimulus-related factors were considered. Based on preliminary analyses of participants' accuracy and response onset latencies, participants tended to benefit from mixing in terms of speed and accuracy, and these benefits may be related to their language proficiency and dominance.
  • Item
    Examining Narrative Language in Early Stage Parkinson's Disease and Intermediate Farsi-English Bilingual Speakers
    (2022) Lohrasbi, Bushra; Faroqi-Shah, Yasmeen; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This study aimed to examine procedural aspects of language (grammaticality, syntactic complexity, regular past tense verb production), verb use, and the associations among motor speech, language abilities, and intelligibility in early-stage Parkinson's disease (PD) and intermediate Farsi-English bilingual (L2) speakers. Ullman's Declarative-Procedural Model (2001) provided this study with a dual-mechanism model that justified a theoretical comparison between these two populations. Twenty-four neurologically healthy native speakers of English, twenty-three participants with Parkinson's disease, and thirteen bilingual Farsi-English speakers completed three narrative picture description tasks and read the first three sentences of the Rainbow Passage. Language samples were transcribed and analyzed to derive measures of morphosyntax and verb use, including grammatical accuracy, grammatical complexity, and proportions of regular past tense verbs, action verbs, and light verbs. The results did not show any evidence of a morphosyntactic or action verb deficit in PD, nor any evidence of a trade-off between morphosyntactic performance and severity of speech motor impairment in PD. L2 speakers had lower scores on grammatical accuracy and a measure of morphosyntactic complexity, but did not differ from monolingual speakers on measures of verb use. Overall, these results show that language abilities (morphosyntax and verb use) are preserved in early-stage PD. This study replicates the well-documented finding that morphosyntax is particularly challenging for late bilingual speakers. The results did not support Ullman's (2001) Declarative-Procedural hypothesis of language production in Parkinson's disease or L2 speakers.
  • Item
    The Impact of Maternal Negative Language on Children’s Language Development
    (2022) Lee, Hae Ri; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Various features of infant- and child-directed speech (IDS/CDS) are known to have a positive impact on children's language development. Some, such as directive language, appear to be less facilitative. We investigated whether mothers' usage of negative language impacts children's language development. Thirty-three mothers' language samples at 30 months and their children's conversational language samples at 66 months were analyzed to locate operationally defined negative language and imperatives. Five language sample analysis measures were utilized to assess children's expressive language abilities. Inverse relationships between maternal use of negative language and children's language outcome measures were found. This preliminary result suggests that the more children hear negative language at an earlier age, the lower their language outcomes are at a later age. This study was exploratory in nature, and various limitations and implications for future studies are outlined in the paper.
  • Item
    SPECTRAL CONTRASTS PRODUCED BY CHILDREN WITH COCHLEAR IMPLANTS: INVESTIGATING THE IMPACT OF SIGNAL DEGRADATION ON SPEECH ACQUISITION
    (2022) Johnson, Allison Ann; Edwards, Jan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The primary objective of this dissertation was to assess four consonants, /t/, /k/, /s/, and /ʃ/, produced by young children with cochlear implants (CIs). These consonants were chosen because they comprise two place-of-articulation contrasts, which are cued auditorily by spectral information in English, and they cover both early-acquired (/t/, /k/) and late-acquired (/s/, /ʃ/) manners of articulation. Thus, the auditory-perceptual limitations imposed by CIs are likely to impact acquisition of these sounds: because spectral information is particularly distorted, children have limited access to the cues that differentiate these sounds. Twenty-eight children with CIs and a group of peers with normal hearing (NH) who were matched in terms of age, sex, and maternal education levels participated in this project. The experiment required children to repeat familiar words with initial /t/, /k/, /s/, or /ʃ/ following an auditory model and picture prompt. To create in-depth speech profiles and examine variability both within and across children, target consonants were elicited many times in front-vowel and back-vowel contexts. Patterns of accuracy and errors were analyzed based on transcriptions. Acoustic robustness of contrast was analyzed based on correct productions. Centroid frequencies were calculated from the release-burst spectra for /t/ and /k/ and the fricative noise spectra for /s/ and /ʃ/. Results showed that children with CIs demonstrated patterns not observed in children with NH. Findings provide evidence that for children with CIs, speech acquisition is not simply delayed due to a period of auditory deprivation prior to implantation. Idiosyncratic patterns in speech production are explained in part by the limitations of CIs' speech-processing algorithms. The first chapter of this dissertation provides a general introduction. The second chapter includes a validation study for a measure to differentiate /t/ and /k/ in adults' productions. The third chapter analyzes accuracy, errors, and spectral features of /t/ and /k/ across groups of children with and without CIs. The fourth chapter analyzes /s/ and /ʃ/ across groups of children, as well as the spectral robustness of both the /t/-/k/ and the /s/-/ʃ/ contrasts across adults and children. The final chapter discusses future directions for research and clinical applications for speech-language pathologists.
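    The centroid measure mentioned above is an amplitude-weighted mean frequency. A minimal numpy sketch is below; the window, FFT size, and analysis band are illustrative assumptions rather than the dissertation's exact analysis settings.

```python
# Minimal sketch of a spectral centroid of the kind computed from
# burst and fricative noise spectra: the amplitude-weighted mean
# frequency of the magnitude spectrum within an analysis band.
import numpy as np

def spectral_centroid(signal, fs, fmin=500.0, fmax=10000.0):
    windowed = signal * np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(windowed))          # magnitude spectrum
    freqs = np.fft.rfftfreq(len(windowed), d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)          # restrict the band
    return np.sum(freqs[band] * spectrum[band]) / np.sum(spectrum[band])

# Toy check: a pure 4 kHz tone yields a centroid near 4000 Hz.
fs = 44100
t = np.arange(int(0.02 * fs)) / fs
print(spectral_centroid(np.sin(2 * np.pi * 4000.0 * t), fs))  # ~4000.0
```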
  • Item
    INFLUENCE OF SUPPORTIVE CONTEXT AND STIMULUS VARIABILITY ON RAPID ADAPTATION TO NON-NATIVE SPEECH
    (2021) Bieber, Rebecca; Gordon-Salant, Sandra; Anderson, Samira; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Older listeners, particularly those with age-related hearing loss, report a high level of difficulty in perception of non-native speech when queried in clinical settings. In an increasingly global society, addressing these challenges is an important component of providing auditory care and rehabilitation to this population. Prior literature shows that younger listeners can quickly adapt to both unfamiliar and challenging auditory stimuli, improving their perception over a short period of exposure. Prior work has suggested that a protocol including higher variability of the speech materials may be most beneficial for learning; variability within the stimuli may serve to provide listeners with a larger range of acoustic information to map onto higher-level lexical representations. However, there is also evidence that increased acoustic variability is not beneficial for all listeners. Listeners also benefit from the presence of semantic context during speech recognition tasks. It is less clear, however, whether older listeners derive more benefit than younger listeners from supportive context; some studies find increased benefit for older listeners, while others find that the context benefit is similar in magnitude across age groups. This project comprises a series of experiments utilizing behavioral and electrophysiologic measures designed to examine the contributions of acoustic variability and semantic context in relation to speech recognition during the course of rapid adaptation to non-native English speech. Experiment 1 examined the effects of increasing stimulus variability on behavioral measures of rapid adaptation. The results indicated that stimulus variability impacted overall levels of recognition, but did not affect rate of adaptation. This was confirmed in Experiment 2, which also showed that degree of semantic context influenced rate of adaptation, but not overall performance levels. In Experiment 3, younger and older normal-hearing adults showed similar rates of adaptation to a non-native talker regardless of context level, though talker accent and context level interacted to the detriment of older listeners' speech recognition. When cortical responses were examined, younger and older normal-hearing listeners showed similar predictive processing effects for both native and non-native speech.
  • Item
    THE RELATIONSHIP BETWEEN LANGUAGE EXPERIENCE AND PERFORMANCE ON LANGUAGE ASSESSMENT MEASURES IN TYPICALLY-DEVELOPING SPANISH-ENGLISH BILINGUAL CHILDREN
    (2021) Otarola-Seravalli, Daniella; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This study aimed to better understand the factors that affect bilingual children's assessment performance and to compare the effects of language experience on different types of measures. English language sample measures (i.e., Index of Productive Syntax, Mean Length of Utterance in morphemes, number of Brown's morphemes, and Vocabulary Diversity) and English/Spanish nonword repetition (NWR) from 29 children with varying degrees of English and Spanish language experience were analyzed. Language experience, age, and baseline language abilities were identified as factors that influence and predict performance on language samples. Additionally, NWR accuracy was not significantly correlated with language experience, suggesting that NWR ability is not strongly influenced by language-specific knowledge. These preliminary findings suggest that NWR, even in a child's second language, is a relatively unbiased tool. Future studies should compare the role of language experience on different measures in other languages.
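    NWR tasks are commonly scored as percent phonemes correct. The sketch below computes such a score via Levenshtein alignment of target and response phoneme strings; the symbols and the scoring rule are illustrative assumptions, not this study's exact protocol.

```python
# Hedged sketch of one common NWR score, percent phonemes correct
# (PPC), computed by aligning target and response phoneme sequences
# with edit distance. Phoneme symbols here are toy assumptions.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def ppc(target, response):
    """Percent phonemes correct, penalizing edits against the target."""
    errors = levenshtein(target, response)
    return max(0.0, 100.0 * (len(target) - errors) / len(target))

print(ppc(list("bamitu"), list("bamito")))  # -> 83.33... (one error in six)
```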
  • Item
    Utterance-level predictors of stuttering-like, stall, and revision disfluencies in the speech of young children who do and do not stutter
    (2021) Garbarino, Julianne; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Disfluencies are generally divided into two types: stuttering-like disfluencies (SLDs), which are characteristic of the speech of people who stutter, and typical disfluencies (TDs), which are produced by nearly all speakers. In several studies, TDs have been further divided into stalls and revisions; stalls (fillers, repetitions) are thought to be prospective, occurring due to glitches in planning upcoming words and structures, while revisions (word and phrase revisions, word fragments) are thought to be retrospective, occurring when a speaker corrects language produced in error. This dissertation involved the analysis of 15,782 utterances produced by 32 preschool-age children who stutter (CWS) and 32 matched children who do not stutter (CWNS). The first portion of this dissertation focused on how syntactic factors relate to disfluency. Disfluencies (of all three types) were more likely to occur when utterances were ungrammatical. The disfluency types thought a priori to relate to planning (SLDs and stalls) occurred significantly more often before errors, which is consistent with these disfluencies occurring, in part, due to difficulty planning the error-containing portion of the utterance. Previous findings of a distributional dichotomy between stalls and revisions were not replicated. Both stalls and revisions increased in likelihood in ungrammatical utterances, as the length of the utterance increased, and as the language level of the child who produced the utterance increased. This unexpected result suggests that both stalls and revisions are more likely to occur in utterances that are harder to plan (those that are ungrammatical and/or longer), and that as children's language develops, so do the skills they need to produce both stalls and revisions. The second part of this dissertation assessed the evidence base for the widespread recommendation that caregivers of young CWS should avoid asking them questions, as CWS have been thought to stutter more often when answering questions. CWS were, in fact, less likely to stutter when answering questions than in other utterance types. Given this finding, the absence of previous evidence connecting question-answering to stuttering, and the potential benefits of asking children questions, clinicians should reconsider the recommendation for caregivers of CWS to reduce their question-asking.
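    The three-way split described above lends itself to a simple coding scheme. Below is a toy sketch of such a tally; the event labels are hypothetical annotations, not CHAT/CLAN codes or the dissertation's exact taxonomy.

```python
# Toy sketch of tallying the SLD / stall / revision split described
# above. The event labels are hypothetical annotations.
DISFLUENCY_TYPES = {
    "part-word repetition": "SLD",
    "prolongation": "SLD",
    "block": "SLD",
    "filler": "stall",
    "whole-word repetition": "stall",
    "phrase revision": "revision",
    "word revision": "revision",
    "word fragment": "revision",
}

def tally(events):
    counts = {"SLD": 0, "stall": 0, "revision": 0}
    for event in events:
        counts[DISFLUENCY_TYPES[event]] += 1
    return counts

print(tally(["filler", "block", "word fragment", "filler"]))
# -> {'SLD': 1, 'stall': 2, 'revision': 1}
```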
  • Item
    Effects of Age, Hearing Loss and Cognition on Discourse Comprehension and Speech Intelligibility Performance
    (2020) Schurman, Jaclyn; Gordon-Salant, Sandra; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Discourse comprehension requires listeners to interpret the meaning of an incoming message, integrate the message into memory and use the information to respond appropriately. Discourse comprehension is a skill required to effectively communicate with others in real time. The overall goal of this research is to determine the relative impact of multiple environmental and individual factors on discourse comprehension performance for younger and older adults with and without hearing loss using a clinically feasible testing approach. Study 1 focused on the impact of rapid speech on discourse comprehension performance for younger and older adults with and without hearing loss. Study 2 focused on the impact of background noise and masker type on discourse comprehension performance for younger and older adults with and without hearing loss. The influences of cognitive function and speech intelligibility were also of interest. The impact of these factors was measured using a self-selection paradigm in both studies. Listeners were required to self-select a time-compression ratio or signal-to-noise ratio (SNR) where they could understand and effectively answer questions about the discourse comprehension passages. Results showed that comprehension accuracy performance was held relatively constant across groups and conditions, but the time-compression ratios and SNRs varied significantly. Results in both studies demonstrated significant effects of age and hearing loss on the self-selection of listening rate and SNR. This result suggests that older adults are at a disadvantage for rapid speech and in the presence of background noise during a discourse comprehension task compared to younger adults. Older adults with hearing loss showed an additional disadvantage compared to older normal-hearing listeners for both difficult discourse comprehension tasks. Cognitive function, specifically processing speed and working memory, was shown to predict self-selected time-compression ratio and SNR. Understanding the effects of age, hearing loss and cognitive decline on discourse comprehension performance may eventually help mitigate these effects in real world listening situations.
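    To see how a self-selected SNR translates into stimulus construction, here is a hedged sketch of mixing speech and noise at a target SNR using RMS scaling; the function names and parameters are ours, not the study's materials.

```python
# Hedged sketch: scale a noise signal so that the speech-to-noise
# RMS ratio equals a listener-selected SNR, then mix. Equal-length
# float arrays are assumed.
import numpy as np

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def mix_at_snr(speech, noise, snr_db):
    """Return (mixture, scaled_noise) at the requested SNR in dB."""
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20.0))
    return speech + gain * noise, gain * noise

# Toy usage: add white noise to a 440 Hz tone at +5 dB SNR.
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 440.0 * t)
noise = np.random.default_rng(0).standard_normal(fs)
mixed, scaled = mix_at_snr(speech, noise, snr_db=5.0)
print(20 * np.log10(rms(speech) / rms(scaled)))  # -> 5.0
```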
  • Item
    Automatic Syntactic Processing in Agrammatic Aphasia: The Effect of Grammatical Violations
    (2020) Kim, Minsun; Faroqi-Shah, Yasmeen; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This study aimed to examine syntactic processing in agrammatic aphasia. We hypothesized that agrammatic individuals' automatic syntactic processing would be preserved, as measured by a word monitoring task; that their knowledge of syntactic constraints would be impaired, as measured by a sentence judgment task; and that their performance would vary by type of syntactic violation. The study found that sentence processing in agrammatism differed based on the type of violation in both tasks: preserved for semantic and tense violations and impaired for word category violations. However, there was no correlation between the two tasks. Furthermore, single-subject analyses showed that automatic syntactic processing for word category violations does not seem to be impaired in aphasia. Based on these findings, this study supports the view that knowledge of syntactic constraints and automatic processing may be relatively independent abilities. Findings suggest that individuals with agrammatic aphasia may have preserved automatic syntactic processing.
  • Item
    Effects of talker familiarity on speech understanding and cognitive effort in complex environments.
    (2020) Cohen, Julie; Gordon-Salant, Sandra; Brungart, Douglas S.; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The long-term goal of this project is to understand the cognitive mechanisms responsible for familiar voice (FV) benefit in real-world environments, and to develop means to exploit the FV benefit to increase saliency of attended speech for older adults with hearing loss. Older adults and those with hearing loss have greater difficulty in noisy environments than younger adults, due in part to a reduction in available cognitive resources. When older listeners are in a challenging environment, their reduced cognitive resources (i.e., working memory and inhibitory control) can result in increased listening effort to maintain speech understanding performance. Both younger and older listeners were tested in this study to determine whether the familiar voice benefit varies with listener age under various listening conditions. Study 1 examined whether a FV improves speech understanding and working memory during a dynamic speech understanding task in a real-world setting for couples of younger and older adults. Results showed that both younger and older adults exhibited a talker familiarity benefit to speech understanding performance, but performance on a test of working memory capacity did not vary as a function of talker familiarity. Study 2 examined whether a FV improves speech understanding in a simulated cocktail-party environment in a lab setting by presenting multi-talker stimuli that were either monotic or dichotic. Both younger normal-hearing (YNH) and older normal-hearing (ONH) groups exhibited a familiarity benefit in monotic and dichotic listening conditions. However, results also showed that the talker familiarity benefit in the monotic conditions varied as a function of talker identification accuracy. When talker identification was correct, speech understanding was similar when listening to a familiar masker or when both voices were unfamiliar. However, when talker identification was incorrect, listening to a familiar masker resulted in a decline in speech understanding. Study 3 examined whether a FV improves performance on a measure of auditory working memory. ONH listeners with higher working memory capacity exhibited a benefit in performance when listening to a familiar vs. unfamiliar target voice. Additionally, performance on the 1-back test varied as a function of working memory capacity and inhibitory control. Taken together, talker familiarity is a beneficial cue that both younger and older adults can utilize when listening in complex environments, such as a restaurant or a crowded gathering. Listening to a familiar voice can improve speech understanding in noise, particularly when the noise is composed of speech. However, this benefit did not impact performance on a high memory load task. Understanding the role that familiar voices may have in the allocation of cognitive resources could result in improved aural rehabilitation strategies and may ultimately facilitate improvements in partner communication in complex real-world environments.
  • Item
    The Role of Age and Bilingualism on Perception of Vocoded Speech
    (2020) Waked, Arifi Noman; Goupell, Matthew J; Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This dissertation examines the roles of age and bilingualism in the perception of vocoded speech, in order to determine whether bilingual individuals, children, and bilingual individuals with later ages of second language acquisition show greater difficulties in vocoded speech perception. Measures of language skill and verbal inhibition were also examined in relation to vocoded speech perception. Two studies were conducted, each of which had two participant language groups: monolingual English speakers and bilingual Spanish-English speakers. The first study also explored the role of age at the time of testing by including both monolingual and bilingual children (8-10 years) and monolingual and bilingual adults (18+ years). As such, this study included four total groups of adult and child language pairs. Participants were tested on vocoded stimuli simulating speech as perceived through an 8-channel CI in conditions of both deep (0-mm shift) and shallow (6-mm shift) insertion of the electrode array. Between testing trials, participants were trained on the more difficult, 6-mm shift condition. The second study explored the role of age of second language acquisition in native speakers of Spanish (18+ years) first exposed to English at ages ranging from 0 to 12 years. This study also included a control group of monolingual English speakers (18+ years). This study examined perception of target lexical items presented either in isolation or at the end of sentences. Stimuli in this study were either unaltered or vocoded to simulate speech as heard through an 8-channel CI at 0-mm shift. Items presented in isolation were divided into differing levels of difficulty based on frequency and neighborhood density. Target items presented at the ends of sentences were divided into differing levels of difficulty based on the degree of semantic context provided by the sentence. No effects of age at testing or age of acquisition were found. In the first study, there was also no effect of language group. All groups improved with training and showed significant improvement between pre- and post-test speech perception scores in both conditions of shift. In the second study, all participants were significantly negatively impacted by vocoding; however, bilingual participants showed greater difficulty in perception of vocoded lexical items presented in isolation relative to their monolingual peers. This group difference was not found in sentence conditions, where all participants significantly benefited from greater semantic context. From this, we can conclude that bilingual individuals can make use of semantic context to perceive vocoded speech similarly to their monolingual peers. Neither language skills nor verbal inhibition, as measured in these studies, were found to significantly impact speech perception scores in any of the tested conditions across groups.
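    For readers unfamiliar with vocoded-speech simulations, the sketch below implements a generic n-channel noise vocoder of the kind used to simulate CI hearing. The studies above used 8 channels; the corner frequencies, filter orders, and envelope cutoff here are illustrative assumptions, and the spectral shift manipulation is omitted.

```python
# Minimal sketch of an n-channel noise vocoder: split the signal
# into log-spaced bands, extract each band's amplitude envelope,
# and use the envelopes to modulate band-limited noise. Assumes
# fs is high enough that the top band edge is below Nyquist.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, lo=100.0, hi=7000.0):
    edges = np.geomspace(lo, hi, n_channels + 1)   # band edges (Hz)
    carrier = np.random.default_rng(0).standard_normal(len(signal))
    env_sos = butter(4, 50.0, btype="low", fs=fs, output="sos")
    out = np.zeros(len(signal))
    for k in range(n_channels):
        band_sos = butter(4, [edges[k], edges[k + 1]],
                          btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)
        envelope = sosfiltfilt(env_sos, np.abs(hilbert(band)))
        envelope = np.clip(envelope, 0.0, None)    # envelopes are nonnegative
        out += envelope * sosfiltfilt(band_sos, carrier)
    return out
```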
  • Item
    EFFECTS OF INTERRUPTING NOISE AND SPEECH REPAIR MECHANISMS IN ADULT COCHLEAR-IMPLANT USERS
    (2020) Jaekel, Brittany Nicole; Goupell, Matthew J; Newman, Rochelle S; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The long-term objective of this project is to help cochlear-implant (CI) users achieve better speech understanding in noisy, real-world listening environments. The specific objective of this research was to evaluate why speech repair ("restoration") mechanisms are often atypical or absent in this population. Restoration allows for improved speech understanding when signals are interrupted with noise, at least among normal-hearing listeners. These experiments measured how CI device factors (noise-reduction algorithms, compression) and listener factors (peripheral auditory encoding, linguistic skills) affected restoration mechanisms. We hypothesized that device factors reduce opportunities to restore speech; noise in the restoration paradigm must act as a plausible masker in order to prompt the illusion of intact speech, and CIs are designed to attenuate noise. We also hypothesized that CI users, when listening with an ear with better peripheral auditory encoding and provided with a semantic cue, would show improved restoration ability. The interaction of high-quality bottom-up acoustic information with top-down linguistic knowledge is integral to the restoration paradigm, and thus restoration could be possible if CI users listen to noise-interrupted speech with a "better ear" and have opportunities to utilize their linguistic knowledge. We found that CI users generally failed to restore speech regardless of device factors, ear presentation, and semantic cue availability. For CI users, interrupting noise apparently serves as an interferer rather than a promoter of restoration. The most common concern among CI users is difficulty understanding speech in noisy listening conditions; our results indicate that one reason for this difficulty could be that CI users are unable to utilize tools like restoration to process noise-interrupted speech effectively.
  • Item
    USE OF MULTIMODAL COMMUNICATION IN PLAY INTERACTIONS WITH CHILDREN WITH AUTISM
    (2020) Rain, Avery; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In typical adult-child interaction, adults tend to coordinate gesture and other nonverbal modes of communication with their verbalizations (multimodal communication). This study explored the effectiveness of multimodal communication with young children with autism spectrum disorders (ASD) in encouraging child responses. Maternal use of verbal, nonverbal, and multimodal initiations, and the child's subsequent response or lack of response, were examined in fifty video-recorded mother-child play interactions. Results indicated that mothers initiated multimodally at similar rates with children with lower and higher expressive language levels. Child response rates to multimodal communication initiations were higher than response rates to verbal-only or nonverbal-only initiations; this finding was consistent across low and high expressive language groups. Additionally, a significant positive correlation was found between maternal wait time after initiation and overall child response rate. These findings have important ramifications for clinical practice and parent training.