UMD Theses and Dissertations

Permanent URI for this collection: http://hdl.handle.net/1903/3

New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a four-month delay before a given thesis/dissertation appears in DRUM.

More information is available at Theses and Dissertations at University of Maryland Libraries.

Search Results

Now showing 1 - 10 of 144
  • Item
    DO MEASURES OF INDIVIDUAL WORDS AND FORMULAIC SEQUENCES TAP INTO THE SAME TRAIT: THE PERSPECTIVE OF ASSESSMENT AND THE CONTRIBUTIONS OF PHONOLOGICAL SHORT-TERM MEMORY AND EXPOSURE
    (2024) Deng, Zhiyuan; Hui, Bronson; Second Language Acquisition and Application; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Nativelike language use is characterized by a high level of formulaicity (Pawley & Syder, 1983; Sinclair, 1991), and formulaic sequences are often believed to be building blocks of language acquisition (Christiansen & Arnon, 2017) and crucial to language fluency (Saito, 2020). Although they consist of multiple words and are analyzable, some researchers have argued that knowledge of formulaic sequences is largely lexical in nature, i.e., stored and processed holistically without recourse to analysis (Wray, 2002). Wray (2008) further proposed a heteromorphic view of the mental lexicon, pushing the boundary of vocabulary to encompass not only individual words but also larger-than-word units such as formulaic sequences. The main purpose of the present study was to test this proposal empirically from the perspective of assessment, i.e., to see whether measures of formulaic sequences tap into the same latent construct underlying measures of individual words. The study also investigated the contributions of phonological short-term memory (PSTM) and exposure to the knowledge of formulaic sequences and individual words. The study was carried out in an English as a Foreign Language (EFL) context: 136 Chinese participants of intermediate to advanced proficiency completed a battery of nine linguistic measures assessing their receptive and controlled productive knowledge of collocations, phrasal verbs, and individual words. In addition, their PSTM capacity was measured by a non-word span test, and their engagement in various types of English-medium activities was measured by an exposure questionnaire. Confirmatory factor analysis and model comparisons were conducted to examine the factor structure of the nine linguistic measures, and a bi-factor solution, with a single latent trait factor underlying all nine linguistic measures and a method-specific grouping factor for the six receptive measures, was selected as the best-fitting model in terms of fit and parsimony. Structural equation modeling further revealed that PSTM, exposure, and length of learning English were all significant predictors of the knowledge of both formulaic sequences and individual words. The three predictors combined explained about 33.4% of the variance in the knowledge of formulaic sequences and 30.9% of the variance in the knowledge of individual words. However, the contributions of PSTM and exposure to the two kinds of knowledge did not differ significantly in magnitude. The results provide psychometric evidence for a heteromorphic mental lexicon, showing that measures of formulaic sequences and individual words tap into the same latent trait.
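    The bi-factor specification at the heart of this analysis can be sketched in a few lines. Below is a minimal, hypothetical illustration using the Python package semopy (lavaan-style syntax); the indicator names rec1-rec6 and prod1-prod3 are placeholders, since the abstract does not name the individual measures.

```python
# Hedged sketch of a bi-factor CFA like the one described above, using
# semopy. Indicator names are hypothetical stand-ins for the study's
# six receptive and three productive measures.
import semopy

MODEL_DESC = """
# General trait factor: loads on all nine measures
G =~ rec1 + rec2 + rec3 + rec4 + rec5 + rec6 + prod1 + prod2 + prod3
# Method-specific grouping factor: the six receptive measures only
REC =~ rec1 + rec2 + rec3 + rec4 + rec5 + rec6
# Bi-factor models conventionally fix the two factors to be orthogonal
G ~~ 0*REC
"""

def fit_bifactor(data):
    """Fit the bi-factor model to a DataFrame with one column per measure."""
    model = semopy.Model(MODEL_DESC)
    model.fit(data)
    # Fit indices (CFI, RMSEA, AIC, ...) for comparison against rival models
    return semopy.calc_stats(model)
```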
  • Item
    DOES MODALITY MATTER? AURAL AND WRITTEN VOCABULARY IN SECOND LANGUAGE LISTENING AND READING COMPREHENSION
    (2024) Iizuka, Takehiro; Hui, Bronson; Second Language Acquisition and Application; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This study examined the significance of the mode of delivery (aural versus written) in second language (L2) vocabulary knowledge and L2 comprehension skills. One of the unique aspects of listening comprehension that sets it apart from reading comprehension is the mode of delivery: language input arrives not visually but aurally. Somewhat surprisingly, this difference has not always been taken into account, and L2 listening studies are in fact more often accompanied by written tests (of, e.g., vocabulary knowledge) than by aural tests. Few studies have systematically examined the impact of modality on comprehension skills and on linguistic variables such as vocabulary, despite the long-standing view that language skills are multimodal. In this study, therefore, I first examined the degree to which aural and written vocabulary are separate constructs, and then how each construct explains listening and reading comprehension skills. By using latent variable modeling, I also addressed limitations of previous studies, including undue influence from measurement error and from the unique characteristics of particular tests. One hundred eighty-five adult Japanese learners of English took four aural and four written English vocabulary tests, with parallel test formats across the modalities to allow comparison. The effect of particular words was averaged out by counterbalancing eight property-matched sets of words. The participants also took listening and reading comprehension tests. The dimensionality of vocabulary knowledge was examined by comparing one-factor and multi-factor models. The unique contributions of aural and written vocabulary knowledge to listening and reading comprehension were evaluated by latent variable path analysis, and the difference in the sizes of aural and written vocabulary knowledge was examined by latent means modeling. The results were nuanced. Modality effects were observed in the sense that (1) a two-factor model of vocabulary knowledge with aural and written factors fit the data significantly better than a one-factor model, (2) aural vocabulary knowledge uniquely explained some variance in listening comprehension skills, and (3) participants' aural vocabulary size was significantly smaller than their written vocabulary size. However, the effects of modality were limited in the sense that (1) the aural and written vocabulary factors were very highly correlated, and (2) the common part of the two factors, general vocabulary knowledge, explained much more variance in both listening and reading comprehension than modality-specific knowledge did. These results suggest that, although aural-versus-written modality effects do exist in L2 vocabulary knowledge and comprehension skills, their practical impact is small compared with that of general vocabulary knowledge, at least in contexts where words are presented in isolation, as in the present study.
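    The dimensionality question here reduces to a model comparison. A hedged sketch in Python with semopy, using placeholder indicator names (aur1-aur4, wri1-wri4) since the abstract does not name the tests: fit one-factor and two-factor models to the same data and compare fit indices.

```python
# Hedged sketch of the one-factor vs. two-factor comparison described above.
import semopy

ONE_FACTOR = """
VOC =~ aur1 + aur2 + aur3 + aur4 + wri1 + wri2 + wri3 + wri4
"""

TWO_FACTOR = """
AURAL   =~ aur1 + aur2 + aur3 + aur4
WRITTEN =~ wri1 + wri2 + wri3 + wri4
AURAL ~~ WRITTEN   # freely estimated correlation (reported as very high)
"""

def compare_dimensionality(data):
    """Fit both models to the same DataFrame and return their fit statistics."""
    results = {}
    for name, desc in [("one-factor", ONE_FACTOR), ("two-factor", TWO_FACTOR)]:
        model = semopy.Model(desc)
        model.fit(data)
        results[name] = semopy.calc_stats(model)
    return results  # compare chi-square, AIC, CFI across the two fits
```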
  • Item
    Adult discrimination of children’s voices over time: Voice discrimination of auditory samples from longitudinal research studies
    (2024) Opusunju, Shelby; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The human voice changes over the lifespan, and these changes are especially pronounced in children. Acoustic properties of speech, such as fundamental frequency, amplitude, speech rate, and fluency, change dramatically as children grow and develop (Lee et al., 1999). Previous studies have established that listeners have a generally strong capacity to discriminate between adult speakers, and to identify a speaker's age, based solely on the voice (Kreiman and Sidtis, 2011; Park, 2019). However, few studies have examined listeners' capacity to discriminate between the voices of children, particularly as the voice matures over time. This study examines how well adult listeners can discriminate between the voices of young children of the same age and of different ages. Single-word child language samples from different children (N = 6) were obtained from Munson et al. (2021) and used to create closed-set online AX voice discrimination tasks for adult listeners (N = 31). Three tasks examined listeners' accuracy and sensitivity in identifying whether a voice was that of the same child or a different child under three conditions: 1) between two children who are both three years old, 2) between two children who are both five years old, and 3) between two children of different ages (three vs. five years old). Listeners performed above chance in accuracy and sensitivity when discriminating between the voices of children at three years old and between children at five years old, with no significant difference between these two tasks. No listeners demonstrated above-chance accuracy in discriminating the voice of a single child at two different ages, and performance in this task was significantly poorer than in the other two. The findings demonstrate a sizable gap between adults' ability to discriminate child voices at a single age and their ability to recognize a child's voice across two different ages. Possible explanations and implications for understanding child talker discrimination across ages are discussed.
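    For context, sensitivity in AX (same/different) tasks like these is commonly summarized as d'. A minimal sketch, assuming the conventional yes/no formula with a log-linear correction; the dissertation's exact scoring procedure is not given in the abstract.

```python
# Hedged sketch of d' for AX discrimination data: "different" responses on
# different-voice trials count as hits, and on same-voice trials as false
# alarms. The log-linear correction avoids infinite z-scores at 0 or 1.
from scipy.stats import norm

def d_prime(hits, n_different, false_alarms, n_same):
    hit_rate = (hits + 0.5) / (n_different + 1)   # log-linear correction
    fa_rate = (false_alarms + 0.5) / (n_same + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical example: 40 of 48 different-voice trials correctly judged
# "different"; 12 "different" responses on 48 same-voice trials.
print(round(d_prime(40, 48, 12, 48), 2))  # ~1.6, well above chance (0)
```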
  • Item
    THE CROSS-LANGUAGE ACTIVATION OF FIRST LANGUAGE (L1) HOMONYM TRANSLATIONS IN SECOND LANGUAGE (L2) PROCESSING: AN INVESTIGATION OF WHETHER L1 TRANSLATIONS ARE ACTIVATED IN L2 SENTENCE CONTEXT
    (2024) Alsalmi, Mona Othman; Jiang, Nan; Second Language Acquisition and Application; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The present study investigated the role of first language (L1) translations in second language (L2) word processing in sentential context by relatively advanced Arabic learners of English. The focus is on cases where a homonymous word in the L1 corresponds to independent words in the L2 (e.g., Arabic قرش corresponds to both English shark and coin). Using the visual-world paradigm, Arabic-English bilinguals and native English participants were auditorily presented with English sentences predictive of a target word (e.g., "shark" in Scuba divers saw the sharp teeth of a giant shark yesterday) while looking at a visual screen. The screen contained one of three critical objects: a target object whose English name corresponded to the target word (shark; Arabic: قرش) in the target condition, an Arabic competitor object whose Arabic name shared the target word's Arabic translation (coin; Arabic: قرش) in the Arabic condition, or an object unrelated to the target word (drums; Arabic: طبل) in the control condition. Compared to native speakers of English, relatively advanced Saudi learners of English made more fixations on the critical objects in the Arabic condition than in the control condition. The study supports the automatic activation of L1 translations when processing sentences in the L2, even in relatively proficient learners, and provides evidence for the verification model of L2 word recognition.
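    For illustration, a minimal sketch of how gaze data from such a visual-world design might be aggregated into condition-by-time fixation proportions; the column names are hypothetical, as the abstract does not describe the analysis code.

```python
# Hedged sketch: aggregate visual-world gaze samples into fixation
# proportions per condition and time bin. Column names are hypothetical.
import pandas as pd

def fixation_proportions(samples: pd.DataFrame) -> pd.DataFrame:
    """samples columns: subject, condition ('target'/'arabic'/'control'),
    time_bin, on_critical (bool: gaze fell on the critical object)."""
    return (samples
            .groupby(["condition", "time_bin"])["on_critical"]
            .mean()                 # proportion of samples on the object
            .unstack("condition"))  # one column per condition, for plotting

# Comparing the 'arabic' column against 'control' across time bins
# corresponds to the competitor effect reported above.
```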
  • Item
    Future reference 'without' future morphology
    (2024) Mendes, Jéssica Viana; Hacquard, Valentine; Santorio, Paolo; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In some languages, present morphology can be used to refer to non-scheduled future events. Since this form of future reference is constrained to certain subordinate environments, like conditional antecedents (‘If John gets a new job, he played his cards right’) and relative clauses (‘Everyone who gets invited to this party is very lucky’), I propose to call the phenomenon the Subordinate Future (SF). Two factors have hindered our understanding of the SF. First, the SF often occurs in modalized sentences, which makes it difficult to tease apart its contribution from that of the environment. Second, present morphology in English can express several readings, so the appearance of this future is not particularly informative. This dissertation brings new intra- and cross-linguistic evidence to bear on the nature and the meaning of the SF. I observe that, in addition to temporal displacement, the SF also introduces modal displacement. I then argue that the source of this modality is a subjunctive mood morpheme, which is silent in English but pronounced in Portuguese. I proceed to decompose the semantics of the subjunctive, proposing that it should be treated as a Heimian indefinite (Heim, 1982) ranging over situations: simply put, the role of the subjunctive is to introduce a situation variable. The motivation for my proposal comes from the behavior of the subjunctive in relative clauses and from the anaphoric pattern of sentences with the SF. In relative clauses, the SF blocks a specific reading of the DP. In addition, the SF seems able to ‘bind’ the situation variable of predicates outside of its domain of c-command, giving rise to modal donkey anaphora. These two facts would be difficult to reconcile with a quantificational treatment of the subjunctive. I then turn to the temporal interpretation of the phenomenon. As Crouch (1993, 1994) observed, this future is also able to anchor the temporal interpretation of clauses outside of its domain of c-command. I propose that this effect is a byproduct of modal donkey anaphora, and demonstrate how casting my proposal in terms of situations provides a natural account of the phenomenon. I conclude with a comparison between my proposal and existing accounts.
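    A schematic way to render the core idea, in my own notation (a hedged reconstruction from the abstract's description, not the dissertation's actual formalism): like a Heimian indefinite over individuals, the subjunctive contributes a free variable, here a situation variable, whose existential force comes from closure elsewhere, which is what predicts binding beyond its c-command domain.

```latex
\documentclass{article}
\usepackage{amsmath,stmaryrd}
\begin{document}
% Hedged sketch: the subjunctive (SUBJ) as a Heimian indefinite over
% situations. It introduces a free situation variable s_i rather than
% quantifying over situations itself; existential force is supplied by
% closure, as with Heimian indefinites over individuals, so donkey-style
% dependencies outside the c-command domain are expected.
\[
\llbracket \text{\textsc{subj}}_i\ \varphi \rrbracket^{g}
  = \varphi\bigl(g(s_i)\bigr),
\qquad s_i \text{ a free situation variable valued by the assignment } g
\]
\end{document}
```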
  • Item
    A world without words: A non-lexicalist framework for psycho- and neuro-linguistics
    (2024) Krauska, Alexandra; Lau, Ellen; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In standard models of language production or comprehension, the elements retrieved from memory and combined into a syntactic structure are “lemmas” or “lexical items”. Such models implicitly take a “lexicalist” approach, which assumes that lexical items store meaning, syntax, and form together, that syntactic and lexical processes are distinct, and that syntactic structure does not extend below the word level. Across the last several decades, linguistic research examining a typologically diverse set of languages has provided strong evidence against this approach. These findings suggest that syntactic processes apply both above and below the “word” level, and that both meaning and form are partially determined by the syntactic context. This has significant implications for psychological and neurological models of language processing, as well as for the way we understand different types of aphasia and other language disorders. As a consequence of their lexicalist assumptions, these models struggle to account for many kinds of sentences that speakers produce and comprehend, in a variety of languages including English. To move away from lexicalism in psycho- and neuro-linguistics, it is not enough to simply update the syntactic representations of words or phrases; the processing algorithms involved in language production are constrained by the lexicalist representations they operate on, and thus also need to be reimagined. This dissertation discusses the issues with lexicalism in linguistic theory as well as its implications in psycho- and neuro-linguistics. In addition, I propose a non-lexicalist model of language production, the “WithOut Words” (WOW) model, which does not rely on lemma representations but instead represents lexical knowledge as independent mappings between meaning and syntax and between syntax and form, with a single integrated stage for the retrieval and assembly of syntactic structure. On this view, neural responses during language production should be modulated not just by the pieces of meaning, syntax, and form, but also by the complexity of the mapping processes that link those separate representations. This prediction is supported by the results of a novel experimental paradigm using electroencephalography (EEG) during language production, which finds greater neural responses for meaning-syntax and syntax-form mapping complexity in two separate time windows. Finally, I re-evaluate the dissociation between regular and irregular verbs in aphasia, which has been used as evidence for a distinction between the grammar and the lexicon. By training recurrent neural networks and measuring their performance after lesioning, I show that the observed clinical data can be accounted for within a single mechanism. By moving away from lexicalist assumptions, the non-lexicalist framework described in this dissertation provides better cross-linguistic coverage and aligns better with contemporary syntactic theory.
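    As a rough illustration of the lesioning logic (not the dissertation's actual networks or data), one can zero out a random proportion of a trained recurrent weight matrix and then re-test the model on regular versus irregular items to see whether a single damaged mechanism reproduces the clinical dissociation.

```python
# Hedged sketch of network lesioning: set a random fraction of a trained
# weight matrix to zero, then re-run the forward pass and compare accuracy
# on regular vs. irregular verbs. Architecture and sizes are stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def lesion(weights: np.ndarray, proportion: float) -> np.ndarray:
    """Return a copy of `weights` with a random `proportion` zeroed out."""
    lesioned = weights.copy()
    mask = rng.random(weights.shape) < proportion
    lesioned[mask] = 0.0
    return lesioned

# e.g., damage 30% of a (hypothetical) trained recurrent weight matrix:
W_rec = rng.standard_normal((64, 64))   # stand-in for trained weights
W_damaged = lesion(W_rec, 0.30)
```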
  • Item
    Moderating Effects of Difficulty on Individual Differences' Prediction of Intensive Second Language Proficiency Attainment
    (2024) Pulupa, Catherine Maria; Hui, Bronson; Second Language Acquisition and Application; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The United States government perennially needs employees proficient in critical foreign languages to communicate with foreign counterparts and maintain relationships worldwide. To fulfill this need, the government devotes significant resources to training federal employees to advanced levels of language proficiency through intensive courses aimed at developing the communicative language skills employees will use in advancing the interests of the United States abroad. Notable proportions of employees nevertheless fail to meet proficiency goals at the end of training, and little is known about which learner individual differences determine whether employees meet those goals. To this end, the current investigation uses multiple analyses to explore and explain the interrelationships between learner individual differences, language difficulty, and proficiency attainment throughout training. The investigation comprises two related analyses. First, a path-analytic approach examines associations between a cognitive measure (aptitude) and non-cognitive measures (motivation, familiarity with curricula, previous advanced second language learning) and student proficiency achievement throughout training. A second analysis builds on the first: the path-analytic model incorporates a measure of the difficulty of the language studied to determine how difficulty influences language learning and ultimate attainment in L2 speaking and reading within the context of individual differences. Results demonstrated a consistent influence of language aptitude on proficiency attainment, along with notable influences of previous L2 acquisition and of the alignment of training with individuals' language-use goals. L2 difficulty moderated the relationships between individual differences and proficiency assessment scores at several points in training. The findings support an understanding of adult L2 acquisition that more fully considers learners' goals and previous L2 experiences, as well as the impact that difficulty can have on individual learners' ability to reach target proficiency.
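    The moderation claim can be illustrated with a minimal regression sketch: difficulty moderating the aptitude-proficiency relationship corresponds to an interaction term. Variable names are hypothetical, and the study itself used a path-analytic model rather than this single-equation simplification.

```python
# Hedged sketch: testing moderation as an interaction in a regression.
import statsmodels.formula.api as smf

def fit_moderation(df):
    """df columns (hypothetical): proficiency, aptitude, difficulty."""
    model = smf.ols("proficiency ~ aptitude * difficulty", data=df).fit()
    # A significant aptitude:difficulty coefficient indicates that the
    # aptitude-proficiency relationship changes with language difficulty.
    return model.summary()
```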
  • Item
    Evaluating the role of acoustic cues in identifying the presence of a code-switch
    (2024) Exton, Erika Lynn; Newman, Rochelle S.; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Code-switching (switching between languages) is a common linguistic behavior in bilingual speech directed to infants and children. In adult-directed speech (ADS), acoustic-phonetic properties of one language may transfer to the other language close to a code-switch point; for example, English stop consonants may be more Spanish-like near a switch. This acoustically natural code-switching may be easier for bilingual listeners to comprehend than code-switching without these acoustic changes; however, it effectively makes the two languages more phonetically similar at the switch point, which could make them harder for an unfamiliar listener to distinguish. The goal of this research was to assess the acoustic-phonetic cues to code-switching available to listeners unfamiliar with the languages by studying the perception and production of these cues. In Experiment 1, Spanish-English bilingual adults (particularly those who hear code-switching frequently), but not English monolingual adults, were sensitive to natural acoustic cues to code-switching in unfamiliar languages and could use them to identify language switches between French and Mandarin. Such cues were particularly helpful when they allowed listeners to anticipate an upcoming language switch (Experiment 2). In Experiment 3, monolingual children appeared unable to continually identify which language they were hearing. Experiment 4 provides some preliminary evidence that monolingual infants can identify a switch between French and Mandarin, though without addressing the utility of natural acoustic cues for infants. The acoustic detail of code-switched speech to infants was then investigated to evaluate how acoustic properties of bilingual infant-directed speech (IDS) are affected by the presence of, and proximity to, code-switching. Spanish-English bilingual women narrated wordless picture books in IDS and ADS, and the voice onset times (VOTs) of their English voiceless stops were analyzed in code-switching and English-only stories in each register. In ADS only, English voiceless stops that preceded an English-to-Spanish code-switch and were closer to that switch point were produced with more Spanish-like VOTs than more distant tokens. This effect of distance to Spanish on English VOTs did not hold for tokens that followed Spanish in ADS, nor in either direction in IDS, suggesting that parents may avoid producing these acoustic cues when speaking to young children.
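    A minimal sketch of how the VOT-by-distance pattern might be tabulated; the column names are hypothetical, since the dissertation's actual analysis code is not shown in the abstract.

```python
# Hedged sketch: mean English voiceless-stop VOT as a function of register,
# direction relative to the switch, and distance to the switch point.
import pandas as pd

def vot_by_distance(tokens: pd.DataFrame) -> pd.DataFrame:
    """tokens columns (hypothetical): register ('IDS'/'ADS'),
    direction ('before'/'after' the switch), distance_to_switch, vot_ms."""
    return (tokens
            .groupby(["register", "direction", "distance_to_switch"])["vot_ms"]
            .agg(["mean", "count"]))

# The reported pattern corresponds to ADS 'before' tokens showing shorter
# (more Spanish-like) mean VOTs only at small distances to the switch.
```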
  • Item
    MODELING ADAPTABILITY MECHANISMS OF SPEECH PERCEPTION
    (2024) Jurov, Nika; Feldman, Naomi H.; Idsardi, William; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Speech is a complex, redundant, and variable signal occurring in a noisy and ever-changing world. How do listeners navigate these complex auditory scenes and continuously, effortlessly understand most of the speakers around them? Studies show that listeners can quickly adapt to new situations, accents, and even distorted speech. Although prior research has established that listeners rely more on some speech cues (also called features or dimensions) than others, it is not yet understood how listeners weight cues flexibly on a moment-to-moment basis when the input deviates from standard speech. This thesis computationally explores flexible cue re-weighting as an adaptation mechanism using real speech corpora. Its computational framework is rate-distortion theory, which models a channel optimized on a trade-off between distortion and rate: on the one hand, the input signal should be reconstructed with minimal error after passing through the channel; on the other, the channel should extract a parsimonious representation of the incoming data. This channel can be implemented as a neural network with a beta variational auto-encoder. We use this model to show that two mechanistic components are needed for adaptation: focus and switch. We first show that focusing on a cue mimics human behavior better than cue weights that simply reflect long-term statistics, as has largely been assumed in prior research. Second, we present a new model that can quickly adapt and switch feature weighting depending on the input at a particular moment. The model's flexibility comes from implementing a cognitive mechanism that has been called "selective attention" with multiple encoders, each serving as a focus on a different part of the signal; we can then choose how much to rely on each focus from moment to moment. Finally, we ask whether cue weighting is informed by the ability to separate noise from speech. To this end we adapt feature-disentanglement adversarial training from vision to disentangle speech (noise) features from noise (speech) labels. Although this does not yield human-like cue weighting, disentanglement produces slightly greater weighting of spectral than temporal information compared with the baselines. Overall, this thesis explores adaptation computationally and offers a possible mechanistic explanation for "selective attention" with focus and switch mechanisms, based on rate-distortion theory. It also argues that cue weighting cannot be determined solely from speech carefully articulated in laboratories or in quiet. Lastly, it explores a way to inform speech models from a cognitive angle, making them more flexible and robust, as human speech perception is.
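    The rate-distortion trade-off described here is, in implementation terms, the beta-VAE training objective. A compact numpy sketch of that objective for a diagonal-Gaussian encoder (illustrative only, not the thesis's actual model code):

```python
# Hedged sketch: beta-VAE loss, trading reconstruction error (distortion)
# against the KL divergence of the encoding from the prior (rate).
import numpy as np

def beta_vae_loss(x, x_hat, mu, log_var, beta=4.0):
    """x, x_hat: inputs and reconstructions, shape (batch, features);
    mu, log_var: encoder outputs, shape (batch, latent_dims)."""
    distortion = np.mean(np.sum((x - x_hat) ** 2, axis=-1))
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions
    rate = np.mean(0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var,
                                axis=-1))
    return distortion + beta * rate  # larger beta favors a more parsimonious code
```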
  • Item
    Local Information in Discourse
    (2024) Kendrick, Jonathan Caleb; Williams, Alexander; Cariani, Fabrizio; Philosophy; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This dissertation argues that the interpretation of modals, expressions like “might,” “should,” and “must,” is constrained by their local context. For epistemic modals, local contexts bound the admissible domains of modal quantification; in Chapter 2, we use this fact to explain why epistemic “must” is weaker than the □ operator of epistemic modal logic. For root (i.e., non-epistemic) modals, local contexts restrict the domain of quantification; in Chapter 3, we show that this yields a solution to the Samaritan Paradox, concerning why deontic modals do not inherit presuppositions under entailment. In Chapter 4, we propose a solution to the “if p, ought p” problem based on default logic: “ought”’s ordering source consists of default rules, and its domain consists of the conclusions of the defaults triggered in the local context.