UMD Theses and Dissertations

Permanent URI for this collection: http://hdl.handle.net/1903/3

New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a four-month delay in the appearance of a given thesis/dissertation in DRUM.

More information is available at Theses and Dissertations at University of Maryland Libraries.

Search Results

Now showing 1 - 10 of 10
  • Item
    Adult discrimination of children’s voices over time: Voice discrimination of auditory samples from longitudinal research studies
    (2024) Opusunju, Shelby; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The human voice is subject to change over the lifespan, and these changes are even more pronounced in children. Acoustic properties of speech, such as fundamental frequency, amplitude, speech rate, and fluency, change dramatically as children grow and develop (Lee et al., 1999). Previous studies have established that listeners have a generally strong capacity to discriminate between adult speakers, as well as to identify the age of a speaker, based solely on the voice (Kreiman and Sidtis, 2011; Park, 2019). However, few studies have examined listeners’ capacity to discriminate between the voices of children, particularly as the voice matures over time. This study examines how well adult listeners can discriminate between the voices of young children of the same age and at different ages. Single-word child language samples from different children (N = 6) were obtained from Munson et al. (2021) and used to create closed-set online AX voice discrimination tasks for adult listeners (N = 31). Three tasks examined listeners’ accuracy and sensitivity in identifying whether a voice was that of the same child or a different child under three conditions: 1) between two children who are both three years old, 2) between two children who are both five years old, and 3) between two children of different ages (three vs. five years old). Listeners showed above-chance accuracy and sensitivity in discriminating between the voices of two three-year-olds and between the voices of two five-year-olds; performance did not differ significantly between these two tasks. No listeners demonstrated above-chance accuracy in discriminating between the voices of a single child at two different ages, and performance in this task was significantly poorer than in the previous two. These findings demonstrate that adults are considerably worse at recognizing children's voices across two different ages than at a single age. Possible explanations and implications for understanding child talker discrimination across different ages are discussed.
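The abstract above reports accuracy and sensitivity for a closed-set AX (“same/different”) discrimination task. As a rough illustration, here is a minimal Python sketch of the standard signal-detection sensitivity measure (d′) such a task typically calls for; the function, the log-linear correction, and the toy trial counts are assumptions for illustration, not details taken from the study.

```python
# Minimal d-prime sketch for an AX discrimination task (illustrative only).
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Compute d' with a log-linear correction so rates of 0 or 1
    do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical listener: responds "different" on 40 of 48 different-child
# trials and on 12 of 48 same-child trials.
print(d_prime(hits=40, misses=8, false_alarms=12, correct_rejections=36))
```

A d′ near zero would correspond to the chance-level discrimination the study reports for a single child heard at two different ages.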
  • Item
    A world without words: A non-lexicalist framework for psycho- and neuro-linguistics
    (2024) Krauska, Alexandra; Lau, Ellen; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
In standard models of language production or comprehension, the elements which are retrieved from memory and combined into a syntactic structure are “lemmas” or “lexical items”. Such models implicitly take a “lexicalist” approach, which assumes that lexical items store meaning, syntax, and form together, that syntactic and lexical processes are distinct, and that syntactic structure does not extend below the word level. Over the last several decades, linguistic research examining a typologically diverse set of languages has provided strong evidence against this approach. These findings suggest that syntactic processes apply both above and below the “word” level, and that both meaning and form are partially determined by the syntactic context. This has significant implications for psychological and neurological models of language processing, as well as for the way that we understand different types of aphasia and other language disorders. As a consequence of their lexicalist assumptions, these models struggle to account for many kinds of sentences that speakers produce and comprehend in a variety of languages, including English. In order to move away from lexicalism in psycho- and neuro-linguistics, it is not enough to simply update the syntactic representations of words or phrases; the processing algorithms involved in language production are constrained by the lexicalist representations that they operate on, and thus also need to be reimagined. This dissertation discusses the issues with lexicalism in linguistic theory as well as its implications for psycho- and neuro-linguistics. In addition, I propose a non-lexicalist model of language production, the “WithOut Words” (WOW) model, which does not rely on lemma representations but instead represents lexical knowledge as independent mappings between meaning and syntax and between syntax and form, with a single integrated stage for the retrieval and assembly of syntactic structure. Based on this, the model suggests that neural responses during language production should be modulated not just by the pieces of meaning, syntax, and form, but also by the complexity of the mapping processes which link those separate representations. This prediction is supported by the results of a novel experimental paradigm using electroencephalography (EEG) during language production, which observes greater neural responses for meaning-syntax and syntax-form mapping complexity in two separate time windows. Finally, I re-evaluate the dissociation between regular and irregular verbs in aphasia, which has been used as supporting evidence for a distinction between the grammar and the lexicon. By training recurrent neural networks and measuring their performance after lesioning, I show that the observed clinical data can be accounted for within a single mechanism. By moving away from lexicalist assumptions, the non-lexicalist framework described in this dissertation provides better cross-linguistic coverage and aligns better with contemporary syntactic theory.
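The final study above trains recurrent neural networks and measures their performance after lesioning. The sketch below illustrates the general lesioning logic in PyTorch on a deliberately toy task: train a small recurrent network, zero out a random fraction of its recurrent weights, and compare accuracy before and after. The task, architecture, and lesion fraction are illustrative assumptions, not the dissertation's actual setup.

```python
# Toy lesioning demo: train a tiny RNN, then ablate recurrent weights.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy task: does a binary sequence of length 10 contain more than five 1s?
X = torch.randint(0, 2, (512, 10, 1)).float()
y = (X.sum(dim=(1, 2)) > 5).long()

class TinyRNN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.RNN(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)
    def forward(self, x):
        _, h = self.rnn(x)            # final hidden state
        return self.out(h.squeeze(0))

model = TinyRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):                  # brief full-batch training
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

def accuracy():
    with torch.no_grad():
        return (model(X).argmax(1) == y).float().mean().item()

print("pre-lesion accuracy: ", accuracy())

# "Lesion": zero a random 30% of the recurrent connections in place.
with torch.no_grad():
    w = model.rnn.weight_hh_l0
    w.mul_((torch.rand_like(w) > 0.3).float())

print("post-lesion accuracy:", accuracy())
```

Comparing performance across lesion severities is one way a single mechanism can be made to reproduce graded, dissociation-like clinical patterns.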
  • Item
    HOW BILINGUALS' COMPREHENSION OF CODE-SWITCHES INFLUENCES ATTENTION AND MEMORY
    (2024) Salig, Lauren; Novick, Jared; Slevc, L. Robert; Neuroscience and Cognitive Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Bilinguals sometimes code-switch between their shared languages. While psycholinguistics research has focused on the challenges of comprehending code-switches compared to single-language utterances, bilinguals seem unhindered by code-switching in communication, suggesting benefits that offset the costs. I hypothesize that bilinguals orient their attention to speech content after hearing a code-switch because they draw a pragmatic inference about its meaning. This hypothesis is based on the pragmatic meaningfulness of code-switches, which speakers may use to emphasize information, signal their identity, or ease production difficulties, inter alia. By considering how code-switches may benefit listeners, this research attempts to better align our psycholinguistic understanding of code-switch processing with actual bilingual language use, while also inspiring future work to investigate how diverse language contexts may facilitate learning in educational settings. In this dissertation, I share the results of three pre-registered experiments with Spanish-English bilinguals that evaluate how hearing a code-switch affects attention and memory. Experiment 1a shows that code-switches increase bilinguals’ self-reported attention to speech content and improve memory for that information, compared to single-language equivalents. Experiment 1b demonstrates that this effect requires bilingual experience, as English-speaking monolinguals did not demonstrate increased attention upon hearing a code-switch. Experiment 2 attempts to replicate these results and establish the time course of the attentional effect using an EEG measure previously associated with attentional engagement (alpha power). However, I conclude that alpha power was not a valid measure of attention to speech content in this experiment. In Experiment 3, bilinguals again showed better memory for information heard in a code-switched context, with a larger benefit for those with more code-switching experience and when listeners believed the code-switches were natural (as opposed to inserted randomly, removing the element of speaker choice). This suggests that the memory benefit comes from drawing a pragmatic inference, which likely requires prior code-switching experience and a belief in code-switches’ communicative purpose. These experiments establish that bilingual listeners derive attentional and memory benefits from ecologically valid code-switches—challenging a simplistic interpretation of the traditional finding of “costs.” Further, these findings motivate future applied work assessing if/how code-switches might benefit learning in educational contexts.
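Experiment 2 in the abstract above uses alpha power as an EEG index of attentional engagement. As a hedged illustration of what that measure typically involves, the Python sketch below estimates alpha-band (8-12 Hz) power from one synthetic channel with Welch's method; the sampling rate, band edges, and signal are assumptions, not the experiment's parameters.

```python
# Estimate alpha-band power from a single (synthetic) EEG channel.
import numpy as np
from scipy.signal import welch

fs = 500                                    # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
# Synthetic channel: a 10 Hz alpha rhythm buried in noise.
eeg = np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=fs)  # 1 s windows, 1 Hz resolution
alpha = (freqs >= 8) & (freqs <= 12)
alpha_power = psd[alpha].mean()             # mean PSD in the alpha band
print(f"alpha-band power: {alpha_power:.3f}")
```

Tracking this quantity in a sliding window around the code-switch is one common way to build the kind of attentional time course the experiment sought.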
  • Item
    Determining the Mechanisms of Spoken Language Processing Delay for Children with Cochlear Implants
    (2023) Blomquist, Christina Marie; Edwards, Jan R; Newman, Rochelle S; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The long-term objective of this project was to better understand how shorter auditory experience and spectral degradation of the cochlear implant (CI) signal impact spoken language processing in deaf children with CIs. The specific objective of this research was to utilize psycholinguistic methods to investigate the mechanisms underlying observed delays in spoken word recognition and the access of networks of semantically related words in the lexicon, which are both vital components for efficient spoken language comprehension. The first experiment used eye-tracking to investigate the contributions of early auditory deprivation and the degraded CI signal to spoken word recognition delays in children with CIs. Performance of children with CIs was compared to various typical hearing (TH) control groups matched for either chronological age or hearing age, and who heard either clear or vocoded speech. The second experiment investigated semantic processing in the face of a spectrally degraded signal (TH adult listeners presented with vocoded speech) by recording event-related potentials, specifically the N400. Results showed that children with CIs have slower lexical access and less immediate lexical competition; while early hearing experience supports more efficient recognition, much of the observed delay can be attributed to listening to a degraded signal in the moment, as children with TH demonstrated similar patterns of processing when presented with vocoded speech. However, some group differences remained: children with CIs showed slower lexical access and longer-lasting competition, suggesting potential effects of learning from a degraded speech signal. With regard to higher-level semantic processing, TH adult listeners demonstrated more limited access of semantic networks when presented with a degraded speech signal. This finding suggests that uncertainty due to the degraded speech signal may lead to less immediate cascading processing at both the word level and higher-level semantic processing. Clinically, these results highlight the importance of early cochlear implantation and maximizing access to spectral detail in the speech signal for children with CIs. Additionally, it is possible that some of the delays in spoken language processing are the result of an alternative listening strategy that may be engaged to reduce the chance of incorrect predictions, thus preventing costly revision processes.
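Both experiments above present typical-hearing listeners with vocoded speech to simulate the spectrally degraded CI signal. The sketch below shows the core of a noise vocoder: split the signal into frequency bands, extract each band's amplitude envelope, and use it to modulate band-limited noise. The channel count, filter order, and band edges are illustrative assumptions, not the study's stimulus parameters.

```python
# Minimal noise vocoder: envelope-modulated noise in log-spaced bands.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, lo=100.0, hi=7000.0):
    """Replace each band's spectral fine structure with noise that
    carries only that band's amplitude envelope."""
    edges = np.geomspace(lo, hi, n_channels + 1)    # log-spaced band edges
    noise = np.random.randn(signal.size)
    out = np.zeros_like(signal)
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))            # amplitude envelope
        out += sosfiltfilt(sos, noise) * envelope   # modulate band noise
    return out

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
speechlike = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(speechlike, fs)
```

Fewer channels mean less spectral detail, which is how studies like this one manipulate how CI-like the degraded signal is.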
  • Item
    Knowledge and Processing of Morphosyntactic Variation in African American Language and Mainstream American English
    (2023) Maher, Zachary Kevin; Edwards, Jan; Novick, Jared; Neuroscience and Cognitive Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
As people from different social groups come into contact, they must accommodate differences in morphosyntax (e.g., He seem nice vs. He seems nice) in order to successfully represent and comprehend their interlocutor’s speech. Listeners usually have high comprehension across such differences, but little is known about the mechanisms behind morphosyntactic accommodation. In this dissertation, I asked what listeners know about variation in morphosyntax and how they deploy this knowledge in real-time language processing. As a test case, I focused on regularized subject-verb agreement (e.g., He seem nice, They was happy)—which is common in African American Language (AAL), but not in Mainstream American English (MAE)—and compared how listeners adjust their linguistic expectations depending on what language varieties both they and their interlocutors speak. In Experiment 1, I showed that participants who primarily speak MAE 1) recognize that some speakers use regularized subject-verb agreement, 2) associate regularized subject-verb agreement with AAL, and 3) predict that the subject-verb agreement rules of AAL allow for some patterns (They was happy) but not others (*He were happy). This was accomplished using a novel sentence rating task, where participants heard audio examples of a given language variety, then rated written sentences for how likely a speaker of that variety would be to say them. In Experiment 2, I showed that a similar pool of participants did not merely recognize regularized subject-verb agreement; their knowledge of variation led them to predict that AAL speakers use regularized forms in an acoustically ambiguous context. Participants heard sentences like He sit(s) still, where it is unclear whether the verb includes a verbal -s due to a segmentation ambiguity. They were more likely to transcribe a regularized form (He sit still) when it was spoken by an AAL-speaking voice than when it was spoken by an MAE-speaking voice. Together, these results indicate that listeners have rich mental models of their interlocutors that extend beyond a general awareness of linguistic difference. In Experiment 3, I compared bidialectal speakers of AAL and MAE and monodialectal speakers of MAE. On the rating task from Experiment 1, bidialectal participants showed a greater degree of differentiation between sentences that are grammatical in AAL and sentences that are ungrammatical in AAL, compared to monodialectal participants. However, both groups of participants indicated that ungrammatical sentences are broadly more likely in AAL than MAE, contrary to usage patterns in the world. On the transcription task from Experiment 2, bidialectal participants were overall more likely to transcribe regularized subject-verb agreement, but they differentiated between AAL- and MAE-speaking voices to the same degree as monodialectal participants. Both groups were more likely to use MAE subject-verb agreement (He sits still) than regularized subject-verb agreement (He sit still). These results suggest that bidialectal listeners broadly expect regularized subject-verb agreement to a greater degree than do monodialectal listeners, rather than making stronger predictions about a given speaker. Moreover, while bidialectal listeners have a more granular sense of AAL’s grammatical rules, all listeners still favor MAE, likely reflecting MAE’s dominant status.
In Experiment 4, I asked how listeners use their knowledge of variation in subject-verb agreement to guide real-time interpretation of sentences, again comparing bidialectal and monodialectal participants. Participants heard sentences like The duck(s) swim in the pond, where they must rely on the agreement morphology of the verb to determine whether the subject of the sentence is singular or plural, since a segmentation ambiguity makes it unclear whether the noun ends in -s. In MAE, only a plural interpretation is available, while in AAL, a singular interpretation is also available. Participants’ eye movements were tracked as they looked at and selected images on a screen. Participants were more likely to look at and select a singular image if the sentence was presented in an AAL-speaking voice, compared to an MAE-speaking voice, and bidialectal participants were more likely to look at and select a singular image, compared to monodialectal participants. As with the transcription task in Experiment 3, this suggests that bidialectal participants are broadly more likely to consider the possibility that a speaker uses regularized subject-verb agreement, compared to monodialectal participants, but their linguistic expectations are not more strongly differentiated based on the grammar of their interlocutor. These results make it clear that listeners have mental models of morphosyntactic variation, which can be characterized along a variety of dimensions, including the syntax, semantics, and indexicality (social meaning) of a given variable. This can serve as a foundation for future inquiry into the details of these models and the real-time switching and control dynamics as listeners adjust to different varieties in their environment.
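For forced-choice data like the image selections above, one natural analysis is a logistic regression of singular-image responses on speaker voice and listener group. The sketch below simulates toy trial data and fits that model with statsmodels; every column name, probability, and effect size is a hypothetical stand-in, not a reported estimate from the dissertation.

```python
# Toy logistic regression: response ~ speaker voice + listener group.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
voice = rng.choice(["AAL", "MAE"], n)
group = rng.choice(["bidialectal", "monodialectal"], n)
# Assumed pattern: singular choices more likely for AAL-speaking voices
# and for bidialectal listeners (toy probabilities).
p = 0.2 + 0.3 * (voice == "AAL") + 0.15 * (group == "bidialectal")
chose_singular = rng.binomial(1, p)

trials = pd.DataFrame({"chose_singular": chose_singular,
                       "voice": voice, "group": group})
model = smf.logit("chose_singular ~ C(voice) + C(group)", data=trials).fit()
print(model.summary())
```

A voice-by-group interaction term would test the key question of whether bidialectal listeners differentiate speakers more strongly, rather than simply shifting their baseline expectation.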
  • Item
    CROSS-LINGUISTIC DIFFERENCES IN THE LEARNING OF INFLECTIONAL MORPHOLOGY: EFFECTS OF TARGET LANGUAGE PARADIGM COMPLEXITY
    (2020) Solovyeva, Ekaterina; DeKeyser, Robert M.; Second Language Acquisition and Application; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Inflectional morphology poses significant difficulty for learners of foreign languages. Multiple approaches have attempted to explain this difficulty through one of two lenses. First, inflection has been viewed as one manifestation of syntactic knowledge; its learning has been related to the learning of syntactic structures. Second, the perceptual and semantic properties of the morphemes themselves have been invoked as a cause of difficulty. These groups of accounts presuppose different amounts of abstract knowledge and quite different learning mechanisms. On syntactic accounts, learners possess elaborate architectures of syntactic projections that they use to analyze linguistic input. They do not simply learn morphemes as discrete units in a list—instead, they learn the configurations of feature settings that these morphemes express. On general-cognitive accounts, learners do learn morphemes as units—each with non-zero difficulty and more or less independent of the others. The “more” there is to learn, the worse off the learner. This dissertation paves the way towards integrating the two types of accounts by testing them on cross-linguistic data. This study compares learning rates for languages whose inflectional systems vary in complexity (as reflected in the number of distinct inflectional endings)—German (lowest), Italian (high), and Czech (high, coupled with morpholexical variation). Written learner productions were examined for the accuracy of verbal inflection on dimensions ranging from morphosyntactic (uninflected forms, non-finite forms, use of finite instead of non-finite forms) to morpholexical (errors in root processes, application of wrong verb class templates, or wrong phonemic composition of the root or ending). Error frequencies were modeled using Poisson regression. Complexity affected accuracy differently in different domains of inflection production. Inflectional paradigm complexity was facilitative for learning to supply inflection, and learners of Italian and Czech were not disadvantaged compared to learners of German, despite their paradigms having more distinct elements. However, the complexity of verb class systems and the opacity of morphophonological alternations did result in disadvantages. Learners of Czech misapplied inflectional patterns associated with verb classes more than learners of German; they also failed to recall the correct segments associated with inflections, which resulted in more frequent use of nonexistent forms.
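The abstract above states that error frequencies were modeled with Poisson regression. As a minimal sketch of what that looks like in Python, the code below fits a Poisson model of inflection-error counts by target language, with text length as an exposure term; the simulated error rates, essay lengths, and column names are assumptions for illustration, not the study's data.

```python
# Poisson regression of error counts by language, with word-count exposure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
langs = np.repeat(["German", "Italian", "Czech"], 40)
rates = {"German": 0.020, "Italian": 0.020, "Czech": 0.035}  # toy errors/word
words = rng.integers(150, 400, size=langs.size)              # essay lengths
errors = rng.poisson([rates[l] * w for l, w in zip(langs, words)])

data = pd.DataFrame({"errors": errors, "language": langs, "words": words})
model = smf.poisson("errors ~ C(language, Treatment('German'))",
                    data=data, exposure=data["words"]).fit()
print(model.summary())
```

The exposure term turns the coefficients into (log) error rates per word, so longer texts do not masquerade as less accurate learners.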
  • Item
    Adjunct Control: Syntax and processing
    (2018) Green, Jeffrey Jack; Williams, Alexander; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation analyzes the syntax and processing of adjunct control. Adjunct control is the referential relation between the implicit (PRO) subject of a non-finite adjunct clause and its understood antecedent, as in the temporal adjunct in ‘Holly₁ went to bed [after PRO₁ drinking milk]’, or the rationale clause in ‘August₁ sat on the couch [in order PRO₁ to read library books]’. Adjunct control is often assumed to involve a syntactic ‘Obligatory Control’ (OC) dependency, but I show that some adjuncts also permit what is referred to as ‘Non-Obligatory Control’ (NOC), as in the sentences ‘The food tasted better [after PRO drinking milk]’ and ‘The book was checked out from the library [in order PRO to read it]’, where PRO refers to some unnamed entity. I argue that for some adjuncts, OC and NOC are not in complementary distribution, contrary to assumptions of much prior literature, but in agreement with Landau (2017). Contrary to implicit assumptions of Landau, however, I also show that this OC/NOC duality does not extend to all adjuncts. I outline assumptions that Landau’s theory would have to make in order to accommodate the wider distribution of OC and NOC in adjuncts, but argue that this is better accomplished within the Movement Theory of Control (Hornstein, 1999) by relaxing the assumption that all adjuncts are phases. Even in adjuncts where both OC and NOC are possible, OC is often strongly preferred. I argue that this is in large part due to interpretive biases in processing. As a foundational step in examining what these processing biases are, the second part of this dissertation uses visual-world eye-tracking to compare the time course of interpretation of subject-controlled PRO and overt pronouns in temporal adjuncts. The results suggest that PRO can be interpreted just as quickly as overt pronouns once the relevant bottom-up input is received. These experiments also provide evidence that structural predictions can facilitate reference resolution independent of next-mention predictions.
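The eye-tracking experiments above compare the time course of interpretation for PRO and overt pronouns. A common first step in such visual-world analyses is to bin gaze samples and compute the proportion of looks to the target referent over time by condition; the sketch below does this on simulated samples, with all column names and numbers hypothetical.

```python
# Fixation-proportion time course from (simulated) visual-world samples.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 6000
samples = pd.DataFrame({
    "condition": rng.choice(["PRO", "overt pronoun"], n),
    "time_ms": rng.integers(0, 1200, n),       # time from adjunct onset
    "on_target": rng.binomial(1, 0.55, n),     # gaze currently on target?
})

samples["bin"] = (samples["time_ms"] // 100) * 100   # 100 ms bins
timecourse = (samples.groupby(["condition", "bin"])["on_target"]
              .mean().unstack(level=0))
print(timecourse.head())
```

Divergence points between the two columns of such a table are what license claims like “PRO can be interpreted just as quickly as overt pronouns.”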
  • Item
    SECOND LANGUAGE LEXICAL REPRESENTATION AND PROCESSING OF MANDARIN CHINESE TONES
    (2018) Pelzl, Eric; DeKeyser, Robert; Second Language Acquisition and Application; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation investigates second language (L2) speech learning challenges by testing advanced L2 Mandarin Chinese learners’ tone and word knowledge. We consider L2 speech learning under the scope of three general hypotheses. (1) The Tone Perception Hypothesis: Tones may be difficult for L2 listeners to perceive auditorily. (2) The Tone Representation Hypothesis: Tones may be difficult for L2 listeners to represent effectively. (3) The Tone Processing Hypothesis: Tones may be difficult for L2 listeners to process efficiently. Experiments 1 and 2 test tone perception and representation using tone identification tasks with monosyllabic and disyllabic stimuli with L1 and advanced L2 Mandarin listeners. Results suggest that both groups are highly accurate in identification of tones on isolated monosyllables; however, L2 learners have some difficulty in disyllabic contexts. This suggests that low-level auditory perception of tones presents L2 learners with persistent long-term challenges. Results also shed light on tone representations, showing that both L1 and L2 listeners are able to form abstract representations of third tone allotones. Experiments 3 and 4 test tone representation and processing through the use of online (behavioral and ERP) and offline measures of tone word recognition. Offline results suggest weaknesses in L2 learners’ long-term memory of tones for specific vocabulary. However, even when we consider only trials for which learners had correct and confident explicit knowledge of tones and words, we still see significant differences in accuracy for rejection of tone compared to vowel nonwords in lexical recognition tasks. Using a lexical decision task, ERP measures in Experiment 3 reveal consistent L1 sensitivity to tones and vowels in isolated word recognition, and individual differences among L2 listeners. While some are sensitive to both tone and vowel mismatches, others are sensitive only to vowel mismatches, or to neither. Experiment 4 utilized picture cues to test neural responses tied directly to tone and vowel mismatches. Results suggest strong L1 sensitivity to vowel mismatches. No other significant results were found. The final chapter considers how the three hypotheses shed light on the results as a whole, and how they relate to the broader context of L2 speech learning.
  • Item
    Comparative psychosyntax
    (2015) Chacón, Dustin Alfonso; Phillips, Colin; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Every difference between languages is a “choice point” for the syntactician, psycholinguist, and language learner. The syntactician must describe the differences in representations that the grammars of different languages can assign. The psycholinguist must describe how the comprehension mechanisms search the space of the representations permitted by a grammar to quickly and effortlessly understand sentences in real time. The language learner must determine which representations are permitted in her grammar on the basis of her primary linguistic evidence. These investigations are largely pursued independently, and on the basis of qualitatively different data. In this dissertation, I show that these investigations can be pursued in a way that is mutually informative. Specifically, I show how learnability concerns and sentence processing data can constrain the space of possible analyses of language differences. In Chapter 2, I argue that “indirect learning”, or abstract, cross-construction syntactic inference, is necessary in order to explain how the learner determines which complementizers can co-occur with subject gaps in her target grammar. I show that adult speakers largely converge in the robustness of the that-trace effect, a constraint on the co-occurrence of complementizers and subject gaps observed in languages like English, but unobserved in languages like Spanish or Italian. I show that realistic child-directed speech has very few long-distance subject extractions in English, Spanish, and Italian, implying that learners must be able to distinguish these different hypotheses on the basis of other data. This is more consistent with more conservative approaches to these phenomena (Rizzi, 1982), which do not rely on abstract complementizer agreement like later analyses (Rizzi, 2006; Rizzi & Shlonsky, 2007). In Chapter 3, I show that resumptive pronoun dependencies inside islands in English are constructed in a non-active fashion, which contrasts with recent findings in Hebrew (Keshev & Meltzer-Asscher, ms). I propose that an expedient explanation of these facts is to suppose that resumptive pronouns in English are ungrammatical repair devices (Sells, 1984), whereas resumptive pronouns in island contexts are grammatical in Hebrew. This implies that learners must infer which analysis is appropriate for their grammars on the basis of some evidence in the linguistic environment. However, a corpus study reveals that resumptive pronouns in islands are exceedingly rare in both languages, implying that this difference must be indirectly learned. I argue that theories of resumptive dependencies which analyze resumptive pronouns as incidences of the same abstract construction (e.g., Hayon 1973; Chomsky 1977) license this indirect learning, as long as resumptive dependencies in English are treated as ungrammatical repair mechanisms. In Chapter 4, I compare active dependency formation processes in Japanese and Bangla. These findings suggest that filler-gap dependencies are preferentially resolved with the first position available. In Japanese, this is the most deeply embedded clause, since embedded clauses always precede the embedding verb (Aoshima et al., 2004; Yoshida, 2006; Omaki et al., 2014). Bangla allows a within-language comparison of the relationship between active dependency formation processes and word order, since embedded clauses may precede or follow the embedding verb (Bayer, 1996).
However, the results from three experiments in Bangla are mixed, suggesting a weaker preference for a linearly local resolution of filler-gap dependencies, unlike in Japanese. I propose a number of possible explanations for these facts, and discuss how differences in processing profiles may be accounted for in a variety of ways. In Chapter 5, I conclude the dissertation.
  • Item
    Investigations into the Neural Basis of Structured Representations
    (2004-11-22) Whitney, Carol; Weinberg, Amy; Neuroscience and Cognitive Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The problem of how the brain encodes structural representations is investigated via the formulation of computational theories constrained from the bottom-up by neurobiological factors, and from the top-down by behavioral data. This approach is used to construct models of letter-position encoding in visual word recognition, and of hierarchical representations in sentence parsing. The problem of letter-position encoding entails the specification of how the retinotopic representation of a stimulus (a printed word) is progressively converted into an abstract representation of letter order. Consideration of the architecture of the visual system, letter perceptibility studies, and form-priming experiments led to the SERIOL model, which comprises five layers: (1) a (retinotopic) edge layer, in which letter activations are determined by the acuity gradient; (2) a (retinotopic) feature layer, in which letter activations conform to a monotonically decreasing activation gradient, dubbed the locational gradient; (3) an abstract letter layer, in which letter order is encoded sequentially; (4) a bigram layer, in which contextual units encode letter pairs that fire in a particular order; and (5) a word layer. Because the acuity and locational gradients are congruent to each other in one hemisphere but not the other, formation of the locational gradient requires hemisphere-specific processing. It is proposed that this processing underlies visual-field asymmetries associated with word length and orthographic-neighborhood size. Hemifield lexical-decision experiments in which contrast manipulations were used to modify activation patterns confirmed this account. In contrast to the linear relationships between letters, a parse of a sentence requires hierarchical representations. Consideration of a fixed-connectivity constraint, brain imaging studies, sentence-complexity phenomena, and insights from the SERIOL model led to the TPARRSE model, in which hierarchical relationships are represented by a predefined distributed encoding. This encoding is constructed with the support of working memory, which encodes relationships between phrases via two synchronized sequential representations. The model explains complexity phenomena based on specific proposals as to how information is represented and manipulated in syntactic working memory. In contrast to capacity-based metrics, the TPARRSE model provides a more comprehensive account of these phenomena.
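The SERIOL model's first two layers lend themselves to a small numerical illustration: an acuity gradient that peaks at fixation, and the monotonically decreasing left-to-right locational gradient that must be derived from it. The sketch below uses made-up slopes, not the model's published parameters.

```python
# Illustrative acuity vs. locational gradients for a centrally fixated word.
import numpy as np

word = "TABLE"
offsets = np.arange(len(word)) - len(word) // 2   # letter offsets from fixation
acuity = 1.0 - 0.15 * np.abs(offsets)             # edge layer: falls off with eccentricity
locational = 1.0 - 0.10 * np.arange(len(word))    # feature layer: decreases left to right

for letter, a, g in zip(word, acuity, locational):
    print(f"{letter}: acuity={a:.2f}  locational={g:.2f}")
```

To the left of fixation the acuity gradient rises while the locational gradient must fall; that congruence mismatch is what the model uses to motivate hemisphere-specific processing and the associated visual-field asymmetries.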