Hearing & Speech Sciences Theses and Dissertations

Permanent URI for this collection: http://hdl.handle.net/1903/2776

Search Results

Now showing 1 - 3 of 3
  • Determining the Mechanisms of Spoken Language Processing Delay for Children with Cochlear Implants
    (2023) Blomquist, Christina Marie; Edwards, Jan R; Newman, Rochelle S; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The long-term objective of this project was to better understand how shorter auditory experience and spectral degradation of the cochlear implant (CI) signal impact spoken language processing in deaf children with CIs. The specific objective of this research was to use psycholinguistic methods to investigate the mechanisms underlying observed delays in spoken word recognition and in access to networks of semantically related words in the lexicon, both vital components of efficient spoken language comprehension. The first experiment used eye-tracking to investigate the contributions of early auditory deprivation and the degraded CI signal to spoken word recognition delays in children with CIs. Performance of children with CIs was compared to that of typical-hearing (TH) control groups matched for either chronological age or hearing age, who heard either clear or vocoded speech. The second experiment investigated semantic processing in the face of a spectrally degraded signal (TH adult listeners presented with vocoded speech) by recording event-related potentials, specifically the N400. Results showed that children with CIs exhibit slower lexical access and less immediate lexical competition, and while early hearing experience supports more efficient recognition, much of the observed delay can be attributed to listening to a degraded signal in the moment, as children with TH demonstrate similar patterns of processing when presented with vocoded speech. However, some group differences remain: children with CIs show slower lexical access and longer-lasting competition, suggesting potential effects of learning from a degraded speech signal. With regard to higher-level semantic processing, TH adult listeners demonstrate more limited access to semantic networks when presented with a degraded speech signal. This finding suggests that uncertainty due to the degraded speech signal may lead to less immediate cascading processing at both the word level and the level of semantic processing. Clinically, these results highlight the importance of early cochlear implantation and of maximizing access to spectral detail in the speech signal for children with CIs. Additionally, some of the delays in spoken language processing may result from an alternative listening strategy engaged to reduce the chance of incorrect predictions, thereby preventing costly revision processes.
  • Understanding and remembering pragmatic inferences
    (2018) Kowalski, Alix; Huang, Yi Ting; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This dissertation examines the extent to which sentence interpretations are incrementally encoded in memory. While traditional models of sentence processing assume that comprehension results in a single interpretation, evidence from syntactic parsing indicates that initial misinterpretations are sometimes maintained in memory along with their revised counterparts (e.g., Christianson, Hollingworth, Halliwell & Ferreira, 2001). However, this evidence has largely come from experiments featuring sentences that are presented in isolation and words that are biased toward incorrect syntactic analyses. Because there is typically enough sentential context in natural speech to avoid the incorrect analysis (Roland, Elman, & Ferreira, 2006), it is unclear whether initial interpretations are incrementally encoded in memory when there is sufficient context. The scalar term “some” provides a test case where context is necessary to select between two interpretations, one based on semantics (some and possibly all) and one based on pragmatic inference (some but not all) (Horn, 1989). Although listeners strongly prefer the pragmatic interpretation (e.g., Van Tiel, Van Miltenburg, Zevakhina, & Geurts, 2016), prior research suggests that the semantic meaning is considered before the inference is adopted (Rips, 1975; Noveck & Posada, 2003; Bott & Noveck, 2004; Breheny, Katsos, & Williams, 2006; De Neys & Schaeken, 2007; Huang & Snedeker, 2009, 2011). I used a word-learning and recall task to show that there is evidence of the semantic meaning in the memory representation of sentences featuring “some,” even when the pragmatic interpretation is ultimately adopted. This raises two possibilities: either the memory representation was of poor quality because both interpretations were available during encoding, or the semantic meaning was computed and encoded first and lingered even after the pragmatic interpretation was computed and encoded. Data from a conflict-adaptation experiment revealed a facilitating effect of cognitive control engagement. However, there was still a delay before the pragmatic inference was adopted. This suggests that only the semantic meaning is available initially and that the system failed to override it in memory when the pragmatic interpretation was computed. Taken together, these findings demonstrate the incrementality of memory encoding during sentence processing.
  • Interactions between language experience and cognitive abilities in word learning and word recognition
    (2014) Morini, Giovanna; Newman, Rochelle S; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    There has been much recent interest in the finding of a "bilingual advantage": that bilingualism confers benefits on various non-linguistic cognitive measures, particularly executive control. Yet bilingual children often face a different situation when it comes to language: their profile often diverges negatively from that of monolinguals, potentially leading to classification as language-disordered. This, in turn, contributes to public policies that discourage bilingualism. Most studies have examined ways in which bilinguals are better or worse than monolinguals. However, it is possible that bilinguals simply approach tasks differently, or weight information sources differently, leading to advantages in some tasks and disadvantages in others. This dissertation seeks a principled understanding of this conflict by testing the hypothesis that differences in linguistic exposure and age alter how individuals approach the problem space for learning and comprehending language. To become proficient in a language, learners must process complex acoustic information while relying on higher-level cognitive processes such as working memory and attention. Over the course of development, individuals rely on these skills to acquire an impressive vocabulary and to recognize words even in adverse listening conditions (e.g., when speech is heard in the presence of noise). I present findings from four experiments with monolingual and bilingual adults and toddlers. In adulthood, despite showing advantages in cognitive control, bilinguals appear to be less accurate than monolinguals at identifying familiar words in the presence of white noise. However, the bilingual "disadvantage" identified during word recognition was not present when listeners were asked to acquire novel word-object relations trained either in noise or in quiet. Similar group differences were identified with 30-month-olds during word recognition: bilingual children performed significantly worse than monolinguals, particularly when asked to identify words accompanied by white noise. Unlike the pattern shown by adults, when presented with a word-learning task, monolingual but not bilingual toddlers were able to acquire novel word-object associations. Data from this work thus suggest that age, linguistic experience, and the demands associated with the type of task all play a role in listeners' ability to process speech in noise.