Biology Theses and Dissertations

Permanent URI for this collection: http://hdl.handle.net/1903/2749

Now showing 1 - 7 of 7
  • Item
    How Bilinguals' Comprehension of Code-Switches Influences Attention and Memory
    (2024) Salig, Lauren; Novick, Jared; Slevc, L. Robert; Neuroscience and Cognitive Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Bilinguals sometimes code-switch between their shared languages. While psycholinguistics research has focused on the challenges of comprehending code-switches compared to single-language utterances, bilinguals seem unhindered by code-switching in communication, suggesting benefits that offset the costs. I hypothesize that bilinguals orient their attention to speech content after hearing a code-switch because they draw a pragmatic inference about its meaning. This hypothesis is based on the pragmatic meaningfulness of code-switches, which speakers may use to emphasize information, signal their identity, or ease production difficulties, inter alia. By considering how code-switches may benefit listeners, this research attempts to better align our psycholinguistic understanding of code-switch processing with actual bilingual language use, while also inspiring future work to investigate how diverse language contexts may facilitate learning in educational settings. In this dissertation, I share the results of three pre-registered experiments with Spanish-English bilinguals that evaluate how hearing a code-switch affects attention and memory. Experiment 1a shows that code-switches increase bilinguals’ self-reported attention to speech content and improve memory for that information, compared to single-language equivalents. Experiment 1b demonstrates that this effect requires bilingual experience, as English-speaking monolinguals did not show increased attention upon hearing a code-switch. Experiment 2 attempts to replicate these results and establish the time course of the attentional effect using an EEG measure previously associated with attentional engagement (alpha power). However, I conclude that alpha power was not a valid measure of attention to speech content in this experiment.
In Experiment 3, bilinguals again showed better memory for information heard in a code-switched context, with a larger benefit for those with more code-switching experience and when listeners believed the code-switches were natural (as opposed to inserted randomly, removing the element of speaker choice). This suggests that the memory benefit comes from drawing a pragmatic inference, which likely requires prior code-switching experience and a belief in code-switches’ communicative purpose. These experiments establish that bilingual listeners derive attentional and memory benefits from ecologically valid code-switches—challenging a simplistic interpretation of the traditional finding of “costs.” Further, these findings motivate future applied work assessing if/how code-switches might benefit learning in educational contexts.
  • Item
    Knowledge and Processing of Morphosyntactic Variation in African American Language and Mainstream American English
    (2023) Maher, Zachary Kevin; Edwards, Jan; Novick, Jared; Neuroscience and Cognitive Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    As people from different social groups come into contact, they must accommodate differences in morphosyntax (e.g., He seem nice vs. He seems nice) in order to successfully represent and comprehend their interlocutor’s speech. Listeners usually have high comprehension across such differences, but little is known about the mechanisms behind morphosyntactic accommodation. In this dissertation, I asked what listeners know about variation in morphosyntax and how they deploy this knowledge in real-time language processing. As a test case, I focused on regularized subject-verb agreement (e.g., He seem nice, They was happy)—which is common in African American Language (AAL), but not in Mainstream American English (MAE)—and compared how listeners adjust their linguistic expectations depending on what language varieties both they and their interlocutors speak. In Experiment 1, I showed that participants who primarily speak MAE 1) recognize that some speakers use regularized subject-verb agreement, 2) judge that regularized subject-verb agreement is associated with AAL, and 3) predict that the subject-verb agreement rules of AAL allow for some patterns (They was happy) but not others (*He were happy). This was accomplished using a novel sentence rating task, where participants heard audio examples of a given language variety, then rated written sentences for how likely a speaker of that variety would be to say them. In Experiment 2, I showed that a similar pool of participants did not merely recognize regularized subject-verb agreement; their knowledge of variation led them to predict that AAL speakers use regularized forms in an acoustically ambiguous context. Participants heard sentences like He sit(s) still, where it is unclear whether the verb includes a verbal -s due to a segmentation ambiguity. They were more likely to transcribe a regularized form (He sit still) when it was spoken by an AAL-speaking voice than when it was spoken by an MAE-speaking voice.
Together, these results indicate that listeners have rich mental models of their interlocutors that extend beyond a general awareness of linguistic difference. In Experiment 3, I compared bidialectal speakers of AAL and MAE and monodialectal speakers of MAE. On the rating task from Experiment 1, bidialectal participants showed a greater degree of differentiation between sentences that are grammatical in AAL and sentences that are ungrammatical in AAL, compared to monodialectal participants. However, both groups of participants indicated that ungrammatical sentences are broadly more likely in AAL than MAE, contrary to usage patterns in the world. On the transcription task from Experiment 2, bidialectal participants were overall more likely to transcribe regularized subject-verb agreement, but they differentiated between AAL- and MAE-speaking voices to the same degree as monodialectal participants. Both groups were more likely to use MAE subject-verb agreement (He sits still) than regularized subject-verb agreement (He sit still). These results suggest that bidialectal listeners broadly expect regularized subject-verb agreement to a greater degree than do monodialectal listeners, rather than making stronger predictions about a given speaker. Moreover, while bidialectal listeners have a more granular sense of AAL’s grammatical rules, all listeners still favor MAE, likely reflecting MAE’s dominant status. In Experiment 4, I asked how listeners use their knowledge of variation in subject-verb agreement to guide real-time interpretation of sentences, again comparing bidialectal and monodialectal participants. Participants heard sentences like The duck(s) swim in the pond, where they must rely on the agreement morphology of the verb to determine whether the subject of the sentence is singular or plural, since a segmentation ambiguity makes it unclear whether the noun ends in -s. 
In MAE, only a plural interpretation is available, while in AAL, a singular interpretation is also available. Participants’ eye-movements were tracked as they looked at and selected images on a screen. Participants were more likely to look at and select a singular image if the sentence was presented in an AAL-speaking voice, compared to an MAE-speaking voice, and bidialectal participants were more likely to look at and select a singular image, compared to monodialectal participants. As with the transcription task in Experiment 3, this suggests that bidialectal participants are broadly more likely to consider the possibility that a speaker uses regularized SVA, compared to monodialectal participants, but their linguistic expectations are not more strongly differentiated based on the grammar of their interlocutor. These results make it clear that listeners have mental models of morphosyntactic variation, which can be characterized along a variety of dimensions, including the syntax, semantics, and indexicality (social meaning) of a given variable. This can serve as a foundation for future inquiry into the details of these models and the real-time switching and control dynamics as listeners adjust to different varieties in their environment.
  • Item
    Auditory Processing of Sequences and Song Syllables in Vocal Learning Birds
    (2021) Fishbein, Adam; Dooling, Robert J; Neuroscience and Cognitive Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The ability to use speech and language is at the core of what it means to be human. How our brains manage this cognitive feat and how it evolved in our species remain mysterious, in part because of how unique speech and language seem to be. We are the only primates who can produce learned vocalizations, but vocal learning is widespread among songbirds and parrots. Just like us, those birds rely on auditory perception to learn their songs and extract information used for communication. Studies using vocal learning birds can thereby help us understand how the brain processes vocal signals and why species differ in vocal communication abilities. But while the melodic patterns of birdsong are striking to the human ear, we cannot assume that song sequences are perceived that way by the birds, nor that the features birds hear are detectable by us. In this dissertation, I investigate how songbirds (focusing on the zebra finch (Taeniopygia guttata)) and parrots (focusing on the budgerigar (Melopsittacus undulatus)) process the sequential patterns and syllable-level details of birdsong, using behavioral auditory discrimination experiments and neurophysiological recordings in the central auditory system. The results show the following: 1) zebra finches and other songbirds are much more sensitive to changes in individual elements than changes in sequence, 2) budgerigars are better than zebra finches at hearing sequence changes but are also limited in their abilities compared to humans, 3) zebra finches are highly sensitive to the acoustic differences in utterances of the same motif syllables, 4) the budgerigar central auditory system encodes sequence more strongly in some respects than the zebra finch central auditory system, and 5) both the zebra finch and budgerigar central auditory systems can encode the rapid acoustic details of sounds well beyond human hearing abilities.
Together, these findings indicate that vocal learning birds may communicate more at the level of syllable details than through sequential patterns, in contrast to human speech. The results also show neurophysiological species differences in sequence processing that could help us understand the differences between humans and other primates in vocal communication.
  • Item
    The use of the domestic dog (Canis familiaris) as a comparative model for speech perception
    (2020) Mallikarjun, Amritha; Newman, Rochelle S; Neuroscience and Cognitive Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Animals have long been used as comparative models for adult human speech perception. However, few animal models have been used to explore developmental speech perception questions. This dissertation encourages the use of domestic dogs as a behavioral model for speech perception processes. Specifically, dog models are suggested for questions about 1) the role and function of underlying processes responsible for different aspects of speech perception, and 2) the effect of language experience on speech perception processes. Chapters 2, 3, and 4 examined the contributions of auditory, attention, and linguistic processing skills to infants’ difficulties understanding speech in noise. It is not known why infants have more difficulties perceiving speech in noise, especially single-talker noise, than adults. Understanding speech in noise relies on infants’ auditory, attention, and linguistic processes. It is methodologically difficult to isolate these systems’ contributions when testing infants. To tease apart these systems, I compared dogs’ name recognition in nine- and single-talker background noise to that of infants. These studies suggest that attentional processes play a large role in infants’ difficulties in understanding speech in noise. Chapter 5 explored the reasons behind infants’ shift from a preference for vowel information (vowel bias) to consonant information (consonant bias) in word identification. This shift may occur due to language exposure, or possessing a particular lexicon size and structure. To better understand the linguistic exposure necessary for consonant bias development, I tested dogs, who have long-term linguistic exposure and a minimal vocabulary. Dogs demonstrated a vowel bias rather than a consonant bias; this suggests that a small lexicon and regular linguistic exposure, plus mature auditory processing, do not lead to consonant bias emergence. 
Overall, these chapters suggest that dog models can be useful for broad questions about systems underlying speech perception and about the role of language exposure in the development of certain speech perception processes. However, the studies faced limitations due to a lack of knowledge about dogs’ underlying cognitive systems and linguistic exposure. More fundamental research is necessary to characterize dogs’ linguistic exposure and to understand their auditory, attentional, and linguistic processes to ask more specific comparative research questions.
  • Item
    Toward a Psycholinguistic Model of Irony Comprehension
    (2018) Adler, Rachel Michelle; Novick, Jared M; Huang, Yi Ting; Neuroscience and Cognitive Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This dissertation examines how listeners reach pragmatic interpretations of irony in real-time. Over four experiments I addressed limitations of prior work by using fine-grained measures of time course, providing strong contexts to support ironic interpretations, and accounting for factors known to be important for other linguistic phenomena (e.g., frequency). Experiment 1 used a visual world eye-tracking paradigm to understand how comprehenders use context and frequency information to interpret irony. While there was an overall delay for ironic utterances compared to literal ones, the speed of interpretation was modulated by frequency. Participants interpreted frequent ironic criticisms (e.g., “fabulous chef” about a bad chef) more quickly than infrequent ironic compliments (e.g., “terrible chef” about a good chef). In Experiment 2A, I tested whether comprehending irony (i.e., drawing a pragmatic inference) differs from merely computing the opposite of an utterance. The results showed that frequency of interpretation (criticisms vs. compliments) did not influence processing speed or overall interpretations for opposites. Thus, processing irony involves more than simply evaluating the truth-value condition of an utterance (e.g., pragmatic inferences about the speaker’s intentions). This was corroborated by Experiment 2B, which showed that understanding irony involves drawing conclusions about speakers in a way that understanding opposites does not. Opposite speakers were considered weirder and more confusing than ironic speakers. Given the delay in reaching ironic interpretations (Exp. 1), Experiments 3 and 4 examined the cognitive mechanics that contribute to inhibiting a literal interpretation of an utterance and/or promoting an ironic one. Experiment 3 tested whether comprehending irony engages cognitive control to resolve among competing representations (literal vs. ironic). 
Results showed that hearing an ironic utterance engaged cognitive control, which then facilitated performance on a subsequent high-conflict Stroop trial. Thus, comprehenders experience conflict between the literal and ironic interpretations. In Experiment 4, however, irony interpretation was not facilitated by prior cognitive control engagement. This may reflect experimental limitations or late-arriving conflict. I end by presenting a model wherein access to the literal and ironic interpretations generates conflict that is resolved by cognitive control. In addition, frequency modulates cue strength and generates delays for infrequent ironic compliments.
  • Item
    Language Science Meets Cognitive Science: Categorization and Adaptation
    (2017) Heffner, Christopher Cullen; Newman, Rochelle S; Idsardi, William J; Neuroscience and Cognitive Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Questions of domain-generality—the extent to which multiple cognitive functions are represented and processed in the same manner—are common topics of discussion in cognitive science, particularly within the realm of language. In the present dissertation, I examine the domain-specificity of two processes in speech perception: category learning and rate adaptation. With regard to category learning, I probed the acquisition of categories of German fricatives by English and German native speakers, finding a bias in both groups towards quicker acquisition of non-disjunctive categories than their disjunctive counterparts. However, a study using an analogous continuum of non-speech sounds, in this case spectrally-rotated musical instrument sounds, did not show such a bias, suggesting that at least some attributes of the phonetic category learning process are unique to speech. For rate adaptation, meanwhile, I first report a study examining rate adaptation in Modern Standard Arabic (MSA), where consonant length is a contrastive part of the phonology; that is, where words can be distinguished from one another by the length of the consonants that make them up. I found that changing the rate of the beginning of a sentence can lead a consonant towards the end of the sentence to change in its perceived duration; a short consonant can sound like a long one, and a long consonant can sound like a short one. An analogous experiment examined rate adaptation in event segmentation, where adaptation-like effects had not previously been explored, using recordings of an actor interacting with a touchscreen. I found that the perception of actions can also be affected by the rate of previously-occurring actions. Listeners adapt to the rate at the beginning of a series of actions when deciding what they saw last in that series of actions. This suggests that rate adaptation follows similar lines across both domains. 
All told, this dissertation leads to a picture of domain-specificity in which both domain-general and domain-specific processes can operate, with domain-specific processes helping to scaffold the use of domain-general processing.
  • Item
    The neural bases of the bilingual advantage in cognitive control: An investigation of conflict adaptation phenomena
    (2014) Teubner-Rhodes, Susan; Dougherty, Michael; Neuroscience and Cognitive Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The present dissertation examines the effects of bilingualism on cognitive control, the ability to regulate attention, particularly in the face of multiple, competing sources of information. Across four experiments, I assess the conflict monitoring theory of the so-called "bilingual advantage", which states that bilinguals are better than monolinguals at detecting conflict between multiple sources of information and flexibly recruiting cognitive control to resolve such competition. In Experiment 1, I show that conflict adaptation, the phenomenon that individuals get better at resolving conflict immediately after encountering conflict, occurs across domains, a pre-requisite to determining whether bilingualism can improve conflict monitoring on non-linguistic tasks. Experiments 2 and 3 compare behavioral and neural conflict adaptation effects in bilinguals and monolinguals. I find that bilinguals are more accurate at detecting initial conflicts and show corresponding increases in activation in neural regions implicated in language-switching. Finally, Experiment 4 extends the bilingual advantage in conflict monitoring to syntactic ambiguity resolution and recognition memory.