Hearing & Speech Sciences Theses and Dissertations

Recent Submissions

  • Item
    (2023) Ogbonna, Chidinma; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Parents play an important role in child language development. This study examines differences in lexical and syntactic alignment in child-directed speech (CDS) between African American mothers and fathers from the professional and working classes. The Hall (1984) corpus from the Child Language Data Exchange System (CHILDES; MacWhinney, 1991) was used to analyze syntactic and lexical alignment in African American professional- and working-class parent-child dyads (children aged 4;6). We investigated the proportion of overlapping nouns shared between mother-child and father-child dyads, as well as differences between parent-child syntactic complexity scores (i.e., Mean Length of Utterance in words (MLU-w) and Verbs per Utterance (Verbs/utt)). Results revealed no significant differences in lexical or syntactic alignment between the professional- and working-class families; however, fathers produced a significantly higher average proportion of overlapping nouns than mothers.
  • Item
    (2023) Erskine, Michelle E; Edwards, Jan; Huang, Yi Ting; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    There is a long-standing gap in literacy achievement between African American and European American students (e.g., NAEP, 2019, 2022). A large body of research has examined different factors that continue to reinforce performance differences across students. One variable that has been of long-term interest to sociolinguists and applied scientists is children’s use of different dialects in the classroom. Many African American students speak African American English (AAE), a rule-governed, but socially stigmatized, dialect of English that differs in phonology, morphosyntax, and pragmatics from General American English (GAE), the dialect of classroom instruction. Empirical research on dialect variation and literacy achievement has demonstrated that linguistic differences between dialects make it more difficult to learn to read (Buhler et al., 2018; Charity et al., 2004; Gatlin & Wanzek, 2015; Washington et al., 2018, inter alia) and, more recently, more difficult to comprehend spoken language (Byrd et al., 2022; Edwards et al., 2014; Erskine, 2022a; Johnson, 2005; de Villiers & Johnson, 2007; JM Terry, Hendrick, Evangelou, et al., 2010; JM Terry, Thomas, Jackson, et al., 2022). The prevailing explanation for these results has been the perceptual analysis hypothesis, a framework that asserts that linguistic differences across dialects create challenges in mapping variable speech signals to listeners’ stored mental representations (Adank et al., 2009; Clopper, 2012; Clopper & Bradlow, 2008; Cristia et al., 2012). However, spoken language comprehension is more than perceptual analysis, requiring the integration of perceptual information with communicative intent and sociocultural information (speaker identity). In effect, the perceptual analysis hypothesis treats dialect variation as just another form of signal degradation.
Simplifying dialect variation to a signal-mapping problem potentially limits our understanding of its contribution to spoken language comprehension. This dissertation proposes that research on spoken language comprehension should integrate frameworks that are more sensitive to the sociocultural aspects of dialect variation, such as the role of linguistic and nonlinguistic cues associated with speakers of different dialects. This dissertation includes four experiments that use the visual world paradigm to explore the effects of dialect variation on spoken language comprehension among children between the ages of 3;0 and 11;11 (years;months) from two linguistic communities: European American speakers of GAE and African American speakers with varying degrees of exposure to AAE and GAE. Chapter 2 (Erskine, 2022a) investigates the effects of dialect variation in auditory-only contexts in two spoken word recognition tasks that vary in linguistic complexity: a) word recognition in simple phrases and b) word recognition in sentences that vary in semantic predictability. Chapter 3 (Erskine, 2022b) examines the effects of visual and auditory speaker identity cues on literal semantic comprehension of different dialects (i.e., word recognition in semantically facilitating sentences). Lastly, Chapter 4 (Erskine, 2022c) examines the effects of visual and auditory speaker identity cues on children’s comprehension of different dialects in a task that evaluates pragmatic inferencing (i.e., scalar implicature). Each study tests the perceptual analysis hypothesis against sociolinguistically informed hypotheses that account for the integration of linguistic and nonlinguistic speaker identity cues as explanations for the relationships observed between dialect variation and spoken language comprehension.
Collectively, these studies address the question of how dialect variation impacts spoken language comprehension. This dissertation provides evidence that traditional explanations focused on perceptual costs are limited in their ability to account for the correlations typically reported between spoken language comprehension and dialect use. Additionally, it shows that school-age children rapidly integrate linguistic and nonlinguistic socioindexical cues in ways that meaningfully guide their comprehension of different speakers. The implications of these findings and directions for future research are also addressed.
  • Item
    Supportive Messages Perceived and Received in a Therapeutic Setting
    (1994) Barr, Jeanine Rice; Freimuth, Vicki S.; Speech Communication; University of Maryland (College Park, Md.); Digital Repository at the University of Maryland
    This study examines how communication of social support influences the behavioral change process in a particular environment. Specifically, the research question is: How is social support related to commitment to recovery from alcoholism/addiction? A one-group pre-test/post-test research design was used with subjects in two addictions treatment centers. Questions were designed to measure changes in individuals' perception of the supportiveness of messages received, the network support available to them, their uncertainty, and their self-esteem. Finally, how these variables predict commitment to recovery was examined. Results showed no relationship between strength of network at time 1 and the supportiveness of messages received. Strength of network support, self-esteem, and uncertainty reduction improved from time 1 to time 2. The major predictor of a patient's commitment to recovery was the level of self-esteem at time 2. However, a strong correlation was found between self-esteem and strength of network at time 2.
  • Item
    The Effect of Language Mixing on Word Retrieval in Bilingual Adults with Aphasia
    (2022) Nichols, Meghan; Faroqi-Shah, Yasmeen; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Lexical retrieval deficits are a common feature of aphasia, and while much research has been done on bilingual aphasia and on the processes involved in language mixing by healthy bilingual adults, it is not clear whether it is beneficial for bilingual people with aphasia to switch languages in moments of lexical retrieval difficulty or more effective to continue the lexical search in one language. The primary aim of this project was to determine whether bilingual people with aphasia demonstrate global and local effects of language mixing. Grammatical categories (i.e., nouns and verbs) were examined separately, and participant- and stimulus-related factors were considered. Based on preliminary analyses of participants’ accuracy and response onset latencies, it appears that participants tended to benefit from mixing in terms of speed and accuracy and that their results may be related to their language proficiency and dominance.
  • Item
    Examining Narrative Language in Early Stage Parkinson's Disease and Intermediate Farsi-English Bilingual Speakers
    (2022) Lohrasbi, Bushra; Faroqi-Shah, Yasmeen; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This study aimed to examine procedural aspects of language (grammaticality, syntactic complexity, regular past tense verb production), verb use, and the association between motor-speech, language abilities, and intelligibility in Early Stage Parkinson's Disease (PD) and Intermediate Farsi-English Bilingual Speakers (L2). Ullman’s Declarative-Procedural Model (2001) provided this study with a dual-mechanism model that justified a theoretical comparison between these two populations. Twenty-four neurologically healthy native speakers of English, twenty-three Parkinson’s Disease participants, and thirteen bilingual Farsi-English speakers completed three narrative picture description tasks and read the first three sentences of the Rainbow Passage. Language samples were transcribed and analyzed to derive measures of morphosyntax and verb use, including grammatical accuracy, grammatical complexity, and proportions of regular past tense, action verbs and light verbs. The results did not show any evidence of morphosyntactic or action verb deficit in PD. Neither was there any evidence of a trade-off between morphosyntactic performance and severity of speech motor impairment in PD. L2 speakers had lower scores on grammatical accuracy and a measure of morphosyntactic complexity, but did not differ from monolingual speakers on measures of verb use. Overall, these results show that language abilities (morphosyntax and verb use) are preserved in early stage PD. This study replicates the well-documented finding that morphosyntax is particularly challenging for late bilingual speakers. The results did not support Ullman’s Declarative-Procedural (2001) hypothesis of language production in Parkinson’s Disease or L2 speakers.
  • Item
    The Impact of Maternal Negative Language on Children’s Language Development
    (2022) Lee, Hae Ri; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Various features of infant- and child-directed speech (IDS/CDS) are known to have a positive impact on children’s language development. Some, such as directive language, appear to be less facilitative. We investigated whether mothers’ use of negative language impacts children’s language development. Thirty-three mothers’ language samples at 30 months and their children’s conversational language samples at 66 months were analyzed to locate operationally defined negative language and imperatives. Five language sample analysis measures were used to assess children’s expressive language abilities. Inverse relationships between maternal use of negative language and children’s language outcome measures were found. This preliminary result suggests that the more negative language children hear at an earlier age, the lower their language outcomes are at a later age. This study was exploratory in nature, and various limitations and implications for future studies are outlined in the paper.
  • Item
    (2022) Johnson, Allison Ann; Edwards, Jan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The primary objective of this dissertation was to assess four consonants, /t/, /k/, /s/, and /ʃ/, produced by young children with cochlear implants (CIs). These consonants were chosen because they comprise two place-of-articulation contrasts, which are cued auditorily by spectral information in English, and they cover both early-acquired (/t/, /k/) and late-acquired (/s/, /ʃ/) manners of articulation. Thus, the auditory-perceptual limitations imposed by CIs are likely to impact acquisition of these sounds: because spectral information is particularly distorted, children have limited access to the cues that differentiate these sounds. Twenty-eight children with CIs and a group of peers with normal hearing (NH) who were matched in terms of age, sex, and maternal education levels participated in this project. The experiment required children to repeat familiar words with initial /t/, /k/, /s/, or /ʃ/ following an auditory model and picture prompt. To create in-depth speech profiles and examine variability both within and across children, target consonants were elicited many times in front-vowel and back-vowel contexts. Patterns of accuracy and errors were analyzed based on transcriptions. Acoustic robustness of contrast was analyzed based on correct productions. Centroid frequencies were calculated from the release-burst spectra for /t/ and /k/ and the fricative noise spectra for /s/ and /ʃ/. Results showed that children with CIs demonstrated patterns not observed in children with NH. Findings provide evidence that for children with CIs, speech acquisition is not simply delayed due to a period of auditory deprivation prior to implantation. Idiosyncratic patterns in speech production are explained in part by the limitations of CIs' speech-processing algorithms. The first chapter of this dissertation provides a general introduction. The second chapter includes a validation study for a measure to differentiate /t/ and /k/ in adults’ productions.
The third chapter analyzes accuracy, errors, and spectral features of /t/ and /k/ across groups of children with and without CIs. The fourth chapter analyzes /s/ and /ʃ/ across groups of children, as well as the spectral robustness of both the /t/-/k/ and the /s/-/ʃ/ contrasts across adults and children. The final chapter discusses future directions for research and clinical applications for speech-language pathologists.
  • Item
    (2021) Bieber, Rebecca; Gordon-Salant, Sandra; Anderson, Samira; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Older listeners, particularly those with age-related hearing loss, report a high level of difficulty in perception of non-native speech when queried in clinical settings. In an increasingly global society, addressing these challenges is an important component of providing auditory care and rehabilitation to this population. Prior literature shows that younger listeners can quickly adapt to both unfamiliar and challenging auditory stimuli, improving their perception over a short period of exposure. Prior work has suggested that a protocol including higher variability of the speech materials may be most beneficial for learning; variability within the stimuli may serve to provide listeners with a larger range of acoustic information to map onto higher-level lexical representations. However, there is also evidence that increased acoustic variability is not beneficial for all listeners. Listeners also benefit from the presence of semantic context during speech recognition tasks. It is less clear, however, whether older listeners derive more benefit than younger listeners from supportive context; some studies find increased benefit for older listeners, while others find that the context benefit is similar in magnitude across age groups. This project comprises a series of experiments utilizing behavioral and electrophysiologic measures designed to examine the contributions of acoustic variability and semantic context to speech recognition during the course of rapid adaptation to non-native English speech. Experiment 1 examined the effects of increasing stimulus variability on behavioral measures of rapid adaptation. The results indicated that stimulus variability impacted overall levels of recognition but did not affect rate of adaptation. This was confirmed in Experiment 2, which also showed that degree of semantic context influenced rate of adaptation, but not overall performance levels.
In Experiment 3, younger and older normal-hearing adults showed similar rates of adaptation to a non-native talker regardless of context level, though talker accent and context level interacted to the detriment of older listeners’ speech recognition. When cortical responses were examined, younger and older normal-hearing listeners showed similar predictive processing effects for both native and non-native speech.
  • Item
    (2021) Otarola-Seravalli, Daniella; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This study aimed to better understand the factors that affect bilingual children’s assessment performance and compare the effects of language experience on different types of measures. English language sample measures (i.e., Index of Productive Syntax, Mean Length of Utterance in morphemes, number of Brown’s morphemes, and Vocabulary Diversity) and English/Spanish nonword repetition (NWR) from 29 children with varying degrees of English and Spanish language experience were analyzed. Language experience, age, and baseline language abilities were identified as factors that influence and predict performance on language samples. Additionally, it was determined that NWR ability was not influenced by language-specific knowledge, due to the lack of significant correlation between nonword repetition accuracy and language experience. These preliminary findings suggest that NWR, even in a child’s second language, is a relatively unbiased tool. Future studies should compare the role of language experience on different measures in other languages.
  • Item
    Utterance-level predictors of stuttering-like, stall, and revision disfluencies in the speech of young children who do and do not stutter
    (2021) Garbarino, Julianne; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Disfluencies are generally divided into two types: stuttering-like disfluencies (SLDs), which are characteristic of the speech of people who stutter, and typical disfluencies (TDs), which are produced by nearly all speakers. In several studies, TDs have been further divided into stalls and revisions; stalls (fillers, repetitions) are thought to be prospective, occurring due to glitches in planning upcoming words and structures, while revisions (word and phrase repetitions, word fragments) are thought to be retrospective, occurring when a speaker corrects language produced in error. This dissertation involved the analysis of 15,782 utterances produced by 32 preschool-age children who stutter (CWS) and 32 matched children who do not stutter (CWNS). The first portion of this dissertation focused on how syntactic factors relate to disfluency. Disfluencies (of all three types) were more likely to occur when utterances were ungrammatical. The disfluency types thought a priori to relate to planning (SLDs and stalls) occurred significantly more often before errors, which is consistent with these disfluencies occurring, in part, due to difficulty planning the error-containing portion of the utterance. Previous findings of a distributional dichotomy between stalls and revisions were not replicated. Both stalls and revisions increased in likelihood in ungrammatical utterances, as the length of the utterance increased, and as the language level of the child who produced the utterance increased. This unexpected result suggests that both stalls and revisions are more likely to occur in utterances that are harder to plan (those that are ungrammatical and/or longer), and that as children’s language develops, so do the skills they need to produce both stalls and revisions.
The second part of this dissertation assessed the evidence base for the widespread recommendation that caregivers of young CWS should avoid asking them questions, as CWS have been thought to stutter more often when answering questions. CWS were, in fact, less likely to stutter when answering questions than in other utterance types. Given this finding, the absence of previous evidence connecting question-answering to stuttering, and the potential benefits of asking children questions, clinicians should reconsider the recommendation for caregivers of CWS to reduce their question-asking.
  • Item
    Effects of Age, Hearing Loss and Cognition on Discourse Comprehension and Speech Intelligibility Performance
    (2020) Schurman, Jaclyn; Gordon-Salant, Sandra; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Discourse comprehension requires listeners to interpret the meaning of an incoming message, integrate the message into memory and use the information to respond appropriately. Discourse comprehension is a skill required to effectively communicate with others in real time. The overall goal of this research is to determine the relative impact of multiple environmental and individual factors on discourse comprehension performance for younger and older adults with and without hearing loss using a clinically feasible testing approach. Study 1 focused on the impact of rapid speech on discourse comprehension performance for younger and older adults with and without hearing loss. Study 2 focused on the impact of background noise and masker type on discourse comprehension performance for younger and older adults with and without hearing loss. The influences of cognitive function and speech intelligibility were also of interest. The impact of these factors was measured using a self-selection paradigm in both studies. Listeners were required to self-select a time-compression ratio or signal-to-noise ratio (SNR) where they could understand and effectively answer questions about the discourse comprehension passages. Results showed that comprehension accuracy performance was held relatively constant across groups and conditions, but the time-compression ratios and SNRs varied significantly. Results in both studies demonstrated significant effects of age and hearing loss on the self-selection of listening rate and SNR. This result suggests that older adults are at a disadvantage for rapid speech and in the presence of background noise during a discourse comprehension task compared to younger adults. Older adults with hearing loss showed an additional disadvantage compared to older normal-hearing listeners for both difficult discourse comprehension tasks. 
Cognitive function, specifically processing speed and working memory, was shown to predict self-selected time-compression ratio and SNR. Understanding the effects of age, hearing loss and cognitive decline on discourse comprehension performance may eventually help mitigate these effects in real world listening situations.
  • Item
    Automatic Syntactic Processing in Agrammatic Aphasia: The Effect of Grammatical Violations
    (2020) Kim, Minsun; Faroqi-Shah, Yasmeen; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This study aimed to examine syntactic processing in agrammatic aphasia. We hypothesized that agrammatic individuals’ automatic syntactic processing would be preserved, as measured by a word monitoring task; that their knowledge of syntactic constraints would be impaired, as measured by a sentence judgment task; and that their performance would vary by type of syntactic violation. The study found that sentence processing in agrammatism differed based on the type of violation in both tasks: preserved for semantic and tense violations and impaired for word category violations. However, there was no correlation between the two tasks. Furthermore, single-subject analyses showed that automatic syntactic processing for word category violations does not appear to be impaired in aphasia. Based on these findings, this study supports the view that knowledge of syntactic constraints and automatic processing are relatively independent abilities. Findings suggest that individuals with agrammatic aphasia may have preserved automatic syntactic processing.
  • Item
    Effects of talker familiarity on speech understanding and cognitive effort in complex environments.
    (2020) Cohen, Julie; Gordon-Salant, Sandra; Brungart, Douglas S.; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The long-term goal of this project is to understand the cognitive mechanisms responsible for the familiar voice (FV) benefit in real-world environments, and to develop means to exploit the FV benefit to increase the saliency of attended speech for older adults with hearing loss. Older adults and those with hearing loss have greater difficulty in noisy environments than younger adults, due in part to a reduction in available cognitive resources. When older listeners are in a challenging environment, their reduced cognitive resources (i.e., working memory and inhibitory control) can result in increased listening effort to maintain speech understanding performance. Both younger and older listeners were tested in this study to determine whether the familiar voice benefit varies with listener age under various listening conditions. Study 1 examined whether a FV improves speech understanding and working memory during a dynamic speech understanding task in a real-world setting for couples of younger and older adults. Results showed that both younger and older adults exhibited a talker familiarity benefit to speech understanding performance, but performance on a test of working memory capacity did not vary as a function of talker familiarity. Study 2 examined whether a FV improves speech understanding in a simulated cocktail-party environment in a lab setting by presenting multi-talker stimuli that were either monotic or dichotic. Both younger normal-hearing (YNH) and older normal-hearing (ONH) groups exhibited a familiarity benefit in monotic and dichotic listening conditions. However, results also showed that the talker familiarity benefit in the monotic conditions varied as a function of talker identification accuracy. When talker identification was correct, speech understanding was similar when listening to a familiar masker or when both voices were unfamiliar. However, when talker identification was incorrect, listening to a familiar masker resulted in a decline in speech understanding.
Study 3 examined if a FV improves performance on a measure of auditory working memory. ONH listeners with higher working memory capacity exhibited a benefit in performance when listening to a familiar vs. unfamiliar target voice. Additionally, performance on the 1-back test varied as a function of working memory capacity and inhibitory control. Taken together, talker familiarity is a beneficial cue that both younger and older adults can utilize when listening in complex environments, such as a restaurant or a crowded gathering. Listening to a familiar voice can improve speech understanding in noise, particularly when the noise is composed of speech. However, this benefit did not impact performance on a high memory load task. Understanding the role that familiar voices may have on the allocation of cognitive resources could result in improved aural rehabilitation strategies and may ultimately facilitate improvements in partner communication in complex real-world environments.
  • Item
    The Role of Age and Bilingualism on Perception of Vocoded Speech
    (2020) Waked, Arifi Noman; Goupell, Matthew J; Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This dissertation examines the role of age and bilingualism in the perception of vocoded speech, in order to determine whether bilingual individuals, children, and bilingual individuals with later ages of second language acquisition show greater difficulties in vocoded speech perception. Measures of language skill and verbal inhibition were also examined in relation to vocoded speech perception. Two studies were conducted, each of which had two participant language groups: monolingual English speakers and bilingual Spanish-English speakers. The first study also explored the role of age at the time of testing by including both monolingual and bilingual children (8-10 years) and monolingual and bilingual adults (18+ years). As such, this study included four total groups of adult and child language pairs. Participants were tested on vocoded stimuli simulating speech as perceived through an 8-channel cochlear implant (CI) in conditions of both deep (0-mm shift) and shallow (6-mm shift) insertion of the electrode array. Between testing trials, participants were trained on the more difficult, 6-mm shift condition. The second study explored the role of age of second language acquisition in native speakers of Spanish (18+ years) first exposed to English at ages ranging from 0 to 12 years, along with a control group of monolingual English speakers (18+ years). This study examined perception of target lexical items presented either in isolation or at the end of sentences. Stimuli were either unaltered or vocoded to simulate speech as heard through an 8-channel CI at 0-mm shift. Items presented in isolation were divided into differing levels of difficulty based on frequency and neighborhood density. Target items presented at the ends of sentences were divided into differing levels of difficulty based on the degree of semantic context provided by the sentence. No effects of age at testing or age of acquisition were found.
In the first study, there was also no effect of language group. All groups improved with training and showed significant improvement between pre- and post-test speech perception scores in both conditions of shift. In the second study, all participants were significantly negatively impacted by vocoding; however, bilingual participants showed greater difficulty in perception of vocoded lexical items presented in isolation relative to their monolingual peers. This group difference was not found in sentence conditions, where all participants significantly benefited from greater semantic context. From this, we can conclude that bilingual individuals can make use of semantic context to perceive vocoded speech similarly to their monolingual peers. Neither language skills nor verbal inhibition, as measured in these studies, were found to significantly impact speech perception scores in any of the tested conditions across groups.
  • Item
    (2020) Jaekel, Brittany Nicole; Goupell, Matthew J; Newman, Rochelle S; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The long-term objective of this project is to help cochlear-implant (CI) users achieve better speech understanding in noisy, real-world listening environments. The specific objective of the proposed research is to evaluate why speech repair (“restoration”) mechanisms are often atypical or absent in this population. Restoration allows for improved speech understanding when signals are interrupted with noise, at least among normal-hearing listeners. These experiments measured how CI device factors like noise-reduction algorithms and compression and listener factors like peripheral auditory encoding and linguistic skills affected restoration mechanisms. We hypothesized that device factors reduce opportunities to restore speech; noise in the restoration paradigm must act as a plausible masker in order to prompt the illusion of intact speech, and CIs are designed to attenuate noise. We also hypothesized that CI users, when listening with an ear with better peripheral auditory encoding and provided with a semantic cue, would show improved restoration ability. The interaction of high-quality bottom-up acoustic information with top-down linguistic knowledge is integral to the restoration paradigm, and thus restoration could be possible if CI users listen to noise-interrupted speech with a “better ear” and have opportunities to utilize their linguistic knowledge. We found that CI users generally failed to restore speech regardless of device factors, ear presentation, and semantic cue availability. For CI users, interrupting noise apparently serves as an interferer rather than a promoter of restoration. The most common concern among CI users is difficulty understanding speech in noisy listening conditions; our results indicate that one reason for this difficulty could be that CI users are unable to utilize tools like restoration to process noise-interrupted speech effectively.
  • Item
    (2020) Rain, Avery; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
In typical adult-child interaction, adults tend to coordinate gesture and other nonverbal modes of communication with their verbalizations (multimodal communication). This study explored the effectiveness of multimodal communication with young children with autism spectrum disorders (ASD) in encouraging child responses. Maternal use of verbal, nonverbal, and multimodal initiations, and the subsequent presence or absence of a child response, were examined in fifty video-recorded mother/child play interactions. Results indicated that mothers initiated multimodally at similar rates with children with lower and higher expressive language levels. Child response rates to multimodal communication initiations were higher than response rates to verbal-only or nonverbal-only initiations; this finding was consistent across low and high expressive language groups. Additionally, a significant positive correlation was found between maternal wait time after initiation and overall child response rate. These findings have important ramifications for clinical practice and parent training.
  • Item
    Investigation of Cognitive and Linguistic Effects of Exercise on Older Adults
    (2020) Crossman, Claire Marjorie; Faroqi-Shah, Yasmeen; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This study examines the effect of a single session of exercise on response speed, inhibitory control, and lexical processing in older adults. A prior study in college-aged adults found faster responses in domain-general processing and lexical recognition after exercise, but not in inhibitory control or lexical retrieval. We hypothesized that older adults, whose overall response times are slower, would show a greater exercise benefit. This study found no changes in any experimental condition relative to a sedentary control condition. Older adults showed practice effects in both the exercise and control conditions. This study shows that the effects of acute exercise in older adults are negligible compared to those in younger adults, at least in the paradigm used here. Findings highlight the importance of using a control task and are consistent with meta-analyses that report small effect sizes associated with acute exercise and the role of other mediating variables.
  • Item
    (2020) Guevara, Sandra Stephanie; Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Code-switching (CS) patterns were investigated in language samples of 14 typically developing Spanish-English bilingual preschool-aged children. CS occurred primarily when the children spoke in Spanish. We investigated code-switched events, vocabulary measures, and disfluencies to better understand whether children utilize code-switching to fill lexical gaps in Spanish, as indexed by disfluencies surrounding the code-switch. Results indicate that children’s spoken vocabulary diversity is not related to code-switching frequency, although their receptive vocabulary skills are negatively correlated with the proportion of code-switched events. We also found no significant relationship between code-switched events and disfluencies across participants. Findings suggest clinical implications for best practice when speech-language pathologists work with bilingual children, as they observe language attrition and code-switching related to language proficiency and dominance.
  • Item
    Auditory Temporal Processing Ability in Cochlear-Implant Users: The Effects of Age and Peripheral Neural Survival
    (2019) Shader, Maureen Joyce; Goupell, Matthew J; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Cochlear implants (CIs) are a valuable tool in the treatment of hearing loss and are considered a safe and effective option for adults of all ages. Nevertheless, older adults with CIs do not always achieve comparable speech recognition performance to younger adults following implantation. The mechanism(s) underlying this age limitation are unknown. It was hypothesized that older CI users would demonstrate age-related deficits in auditory temporal processing ability, which could contribute to an age limitation in CI performance. This is because the ability to accurately encode temporal information is critical to speech recognition through a CI. The current studies were aimed at identifying age-related limitations for processing temporal information using a variety of electrical stimulation parameters with the goal of identifying parameters that could mitigate the negative effects of age on CI performance. Studies 1 and 2 measured auditory temporal processing ability for non-speech signals at the single-electrode level for various electrical stimulation rates. Specifically, Study 1 measured gap detection thresholds, which constitutes a simple, static measurement of temporal processing. Study 2 measured amplitude-modulation detection thresholds, which utilized relatively more complex and dynamic signals. Peripheral neural survival was estimated on each electrode location that was tested in Studies 1 and 2. Study 3 measured phoneme recognition ability for consonant contrasts that varied in discrete temporal cues at multiple stimulation rates and envelope modulation frequencies. Results demonstrated significant effects of age and/or peripheral neural survival on temporal processing ability in each study. However, age and the degree of neural survival were often strongly correlated, with older participants exhibiting poorer neural survival compared to younger participants. 
This result suggested that a substantial reduction in peripheral neural survival accompanies aging in older CI users, and that these factors should be considered together, rather than separately. Parametric variation in the stimulation settings impacted performance for some participants, but this effect was not consistent across participants, nor was it predicted by age or peripheral neural survival.
  • Item
    Intelligibility in Children with Cochlear Implants: The /t/ vs. /k/ Contrast
    (2019) Leonard, Elinora C; Edwards, Jan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Previous research has found that the speech of children with cochlear implants (CIs) is less intelligible than the speech of peers with normal hearing (NH). This claim has been supported by research showing that children with CIs have difficulty with the late-acquired spectral contrast of /s/ vs. /ʃ/: correctly produced words containing these initial consonants are less intelligible when produced by children with CIs relative to children with NH. The current study examined whether a similar result is observed with the early-acquired spectral contrast of /t/ vs. /k/. Crowd-sourced data were used to evaluate the intelligibility of correctly produced /t/- and /k/-initial words from children with CIs and children with NH, presented in multi-talker babble. Results indicated that whole-word productions of children with CIs were less intelligible than productions of children with NH for words beginning with this early-acquired contrast. However, results also indicated that this difference in intelligibility was not dependent on the intelligibility of the initial consonant alone.