Theses and Dissertations from UMD
Permanent URI for this community: http://hdl.handle.net/1903/2
New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a four-month delay in the appearance of a given thesis/dissertation in DRUM.
More information is available at Theses and Dissertations at University of Maryland Libraries.
10 results
Search Results
Item Semantics and pragmatics in a modular mind (2021) McCourt, Michael Sullivan; Williams, Alexander; Philosophy; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation asks how we should understand the distinction between semantic and pragmatic aspects of linguistic understanding within the framework of mentalism, on which the study of language is a branch of psychology. In particular, I assess a proposal on which the distinction between semantics and pragmatics is ultimately grounded in the modularity or encapsulation of semantic processes. While pragmatic processes involved in understanding the communicative intentions of a speaker are non-modular and highly inferential, semantic processes involved in understanding the meaning of an expression are modular and encapsulated from top-down influences of general cognition. The encapsulation hypothesis for semantics is attractive, since it would allow the semantics-pragmatics distinction to cut a natural joint in the communicating mind. However, as I argue, the case in favor of the modularity hypothesis for semantics is not particularly strong. Many of the arguments offered in its support are unsuccessful. I therefore carefully assess the relevant experimental record, in dialogue with parallel debates about modular processing in other domains, such as vision. I point to several observations that raise a challenge for the encapsulation hypothesis for semantics, and I recommend consideration of alternative notions of modularity. However, I also demonstrate some principled strategies that proponents of the encapsulation hypothesis might deploy in order to meet the empirical challenge that I raise. I conclude that the available data neither falsify nor support the modularity hypothesis for semantics, and accordingly I develop several strategies that might be pursued in future work.
It has also been argued that the encapsulation of semantic processing would entail (or otherwise strongly recommend) a particular approach to word meaning. However, in dialogue with the literature on polysemy—a phenomenon whereby a single word can be used to express several related concepts, but not due to generality—I show that such arguments are largely unsuccessful. Again, I develop strategies that might be used, going forward, to adjudicate among the options regarding word meaning within a mentalistic linguistics.
Item The Psycho-logic of Universal Quantifiers (2021) Knowlton, Tyler Zarus; Lidz, Jeffrey; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
A universally quantified sentence like every frog is green is standardly thought to express a two-place second-order relation (e.g., the set of frogs is a subset of the set of green things). This dissertation argues that as a psychological hypothesis about how speakers mentally represent universal quantifiers, this view is wrong in two respects. First, each, every, and all are not represented as two-place relations, but as one-place descriptions of how a predicate applies to a restricted domain (e.g., relative to the frogs, everything is green). Second, while every and all are represented in a second-order way that implicates a group, each is represented in a completely first-order way that does not involve grouping the satisfiers of a predicate together (e.g., relative to individual frogs, each one is green). These “psycho-logical” distinctions have consequences for how participants evaluate sentences like every circle is green in controlled settings. In particular, participants represent the extension of the determiner’s internal argument (the circles), but not the extension of its external argument (the green things).
Moreover, the cognitive system they use to represent the internal argument differs depending on the determiner: Given every or all, participants show signatures of forming ensemble representations, but given each, they represent individual object-files. In addition to psychosemantic evidence, the proposed representations provide explanations for at least two semantic phenomena. The first is the “conservativity” universal: All determiners allow for duplicating their first argument in their second argument without a change in informational significance (e.g., every fish swims has the same truth-conditions as every fish is a fish that swims). This is a puzzling generalization if determiners express two-place relations, but it is a logical consequence if they are devices for forming one-place restricted quantifiers. The second is that every, but not each, naturally invites certain kinds of generic interpretations (e.g., gravity acts on every/#each object). This asymmetry can potentially be explained by details of the interfacing cognitive systems (ensemble and object-file representations). And given that the difference leads to lower-level concomitants in child-ambient speech (as revealed by a corpus investigation), children may be able to leverage it to acquire every’s second-order meaning. This case study on the universal quantifiers suggests that knowing the meaning of a word like every consists not just in understanding the informational contribution that it makes, but in representing that contribution in a particular format.
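As a rough formal gloss of the conservativity point above (this notation is mine, not the dissertation's), the relational analysis must stipulate what the restricted analysis delivers automatically:

```latex
% Relational (two-place) view: a determiner relates two sets.
\[ \llbracket \text{every} \rrbracket(A)(B) = 1 \iff A \subseteq B \]
% Conservativity is then a stipulated universal over determiners D:
\[ D(A)(B) = D(A)(A \cap B) \]
% Restricted (one-place) view: A restricts the domain first, so only
% members of A are ever consulted and the equation above is automatic.
\[ \llbracket \text{every}_A \rrbracket(B) = 1 \iff \forall x \in A :\, B(x) \]
```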
And much like phonological representations provide instructions to the motor planning system, it supports the idea that meaning representations provide (sometimes surprisingly precise) instructions to conceptual systems.
Item Toward a Psycholinguistic Model of Irony Comprehension (2018) Adler, Rachel Michelle; Novick, Jared M; Huang, Yi Ting; Neuroscience and Cognitive Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation examines how listeners reach pragmatic interpretations of irony in real-time. Over four experiments I addressed limitations of prior work by using fine-grained measures of time course, providing strong contexts to support ironic interpretations, and accounting for factors known to be important for other linguistic phenomena (e.g., frequency). Experiment 1 used a visual world eye-tracking paradigm to understand how comprehenders use context and frequency information to interpret irony. While there was an overall delay for ironic utterances compared to literal ones, the speed of interpretation was modulated by frequency. Participants interpreted frequent ironic criticisms (e.g., “fabulous chef” about a bad chef) more quickly than infrequent ironic compliments (e.g., “terrible chef” about a good chef). In Experiment 2A, I tested whether comprehending irony (i.e., drawing a pragmatic inference) differs from merely computing the opposite of an utterance. The results showed that frequency of interpretation (criticisms vs. compliments) did not influence processing speed or overall interpretations for opposites. Thus, processing irony involves more than simply evaluating the truth-value of an utterance (e.g., pragmatic inferences about the speaker’s intentions). This was corroborated by Experiment 2B, which showed that understanding irony involves drawing conclusions about speakers in a way that understanding opposites does not.
Opposite speakers were considered weirder and more confusing than ironic speakers. Given the delay in reaching ironic interpretations (Exp. 1), Experiments 3 and 4 examined the cognitive mechanics that contribute to inhibiting a literal interpretation of an utterance and/or promoting an ironic one. Experiment 3 tested whether comprehending irony engages cognitive control to resolve among competing representations (literal vs. ironic). Results showed that hearing an ironic utterance engaged cognitive control, which then facilitated performance on a subsequent high-conflict Stroop trial. Thus, comprehenders experience conflict between the literal and ironic interpretations. In Experiment 4, however, irony interpretation was not facilitated by prior cognitive control engagement. This may reflect experimental limitations or late-arriving conflict. I end by presenting a model wherein access to the literal and ironic interpretations generates conflict that is resolved by cognitive control. In addition, frequency modulates cue strength and generates delays for infrequent ironic compliments.
Item Language-based Techniques for Practical and Trustworthy Secure Multi-party Computations (2016) Rastogi, Aseem; Hicks, Michael; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Secure Multi-party Computation (MPC) enables a set of parties to collaboratively compute, using cryptographic protocols, a function over their private data in such a way that the participants do not see each other's data; they see only the final output. Typical MPC examples include statistical computations over joint private data, private set intersection, and auctions. While these applications are examples of monolithic MPC, richer MPC applications move between "normal" (i.e., per-party local) and "secure" (i.e., joint, multi-party secure) modes repeatedly, resulting overall in mixed-mode computations.
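To make the "parties see only the output" idea concrete, here is a minimal toy sketch of one classic MPC ingredient, additive secret sharing over a finite field. This is an illustration only: it is not the protocol used by the dissertation's Wysteria language, and the names `share` and `secure_sum` are invented for this sketch.

```python
import random

PRIME = 2**31 - 1  # modulus for the toy protocol's finite field

def share(secret, n=3):
    """Split `secret` into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def secure_sum(inputs):
    """Toy secure sum: each party splits its input into one share per
    party; each party locally sums the shares it holds; combining the
    local partial sums reveals only the grand total, never any input."""
    n = len(inputs)
    all_shares = [share(x, n) for x in inputs]
    # Party i holds the i-th share of every input and sums them locally.
    partials = [sum(s[i] for s in all_shares) % PRIME for i in range(n)]
    return sum(partials) % PRIME

# Three parties learn their combined salary without revealing any one salary.
salaries = [52000, 61000, 47000]
print(secure_sum(salaries))  # 160000
```

Each individual share is a uniformly random field element, so no single party's holdings reveal anything about another party's input; only the final reconstruction step exposes the sum.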
For example, we might use MPC to implement the role of the dealer in a game of mental poker -- the game will be divided into rounds of local decision-making (e.g. bidding) and joint interaction (e.g. dealing). Mixed-mode computations are also used to improve performance over monolithic secure computations. Starting with the Fairplay project, several MPC frameworks have been proposed in the last decade to help programmers write MPC applications in a high-level language, while the toolchain manages the low-level details. However, these frameworks are either not expressive enough to allow writing mixed-mode applications, or they lack formal specification and reasoning capabilities, thereby diminishing the parties' trust in such tools and the programs written using them. Furthermore, none of the frameworks provides a verified toolchain to run the MPC programs, leaving open the potential for security holes that can compromise the privacy of the parties' data. This dissertation presents language-based techniques to make MPC more practical and trustworthy. First, it presents the design and implementation of a new MPC Domain Specific Language, called Wysteria, for writing rich mixed-mode MPC applications. Wysteria provides several benefits over previous languages, including a conceptual single thread of control, generic support for more than two parties, high-level abstractions for secret shares, and a fully formalized type system and operational semantics. Using Wysteria, we have implemented several MPC applications, including, for the first time, a card dealing application. The dissertation next presents Wys*, an embedding of Wysteria in F*, a full-featured verification-oriented programming language. Wys* improves on Wysteria along three lines: (a) It enables programmers to formally verify the correctness and security properties of their programs. As far as we know, Wys* is the first language to provide verification capabilities for MPC programs.
(b) It provides a partially verified toolchain to run MPC programs, and finally (c) It enables the MPC programs to use, with no extra effort, standard language constructs from the host language F*, thereby making it more usable and scalable. Finally, the dissertation develops static analyses that help optimize monolithic MPC programs into mixed-mode MPC programs, while providing privacy guarantees similar to those of the monolithic versions.
Item Syntactic Bootstrapping in the Acquisition of Attitude Verbs (2015) Harrigan, Kaitlyn; Lidz, Jeffrey; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Attitude verbs (e.g., think, want, hope) report mental states. Learning the meanings of attitude verbs may be difficult for language learners for several reasons, including the abstractness of the concepts they refer to and their linguistic properties. In this dissertation, we investigate the learning process for these words by looking at an asymmetry that has been observed in the acquisition trajectory: want, which refers to desires, has been claimed to be acquired before think, which refers to beliefs. We explore this asymmetry in attitude verb acquisition in two ways: by comparing interpretation of think and want, controlling for several methodological differences in the way they have previously been tested; and by investigating children’s sensitivity to syntactic distribution in interpreting and learning attitude verbs. We start with an observation that previous tasks comparing interpretation of think and want often tested these verbs under different experimental conditions. Tests of think required processing additional demands, including a conflict with reality and a conflict with the child’s own mental state.
In Experiments 1-3, we test interpretation of want while adding these additional task demands, and find that children are still adult-like in interpreting want sooner than they have reliably been shown to be adult-like in interpreting think. In Experiment 4, we directly compare think and want in the same experimental context. We still find adult-like behavior with want and not think. These studies demonstrate that the observed asymmetry between think and want reflects a real acquisition asymmetry, and is not due to experimental artifacts. After establishing in Experiments 1-4 that the asymmetry between think and want reflects real acquisition facts, we explore children’s learning mechanism for attitude verbs in Experiments 5 and 6. We test children’s sensitivity to syntactic distribution in hypothesizing an unknown attitude verb’s meaning. In Experiment 5, we find that children use the syntactic complement to interpret sentences with a potentially unknown attitude verb. In Experiment 6, we show that they integrate syntactic information into their semantic representation for this new verb, and continue to hypothesize a meaning based on syntactic frame in future encounters with the same verb.
Item Measuring Predicates (2014) Wellwood, Alexis; Hacquard, Valentine; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Determining the semantic content of sentences, and uncovering regularities between linguistic form and meaning, requires attending to both morphological and syntactic properties of a language with an eye to the notional categories that the various pieces of form express. In this dissertation, I investigate the morphosyntactic devices that English speakers (and speakers of other languages) can use to talk about comparisons between things: comparative sentences with, in English, "more... than", "as... as", "too", "enough", and others.
I argue that a core component of all of these constructions is a unitary element expressing the concept of measurement. The theory that I develop departs from the standard degree-theoretic analysis of the semantics of comparatives in three crucial respects: first, gradable adjectives do not (partially or wholly) denote measure functions; second, degrees are introduced compositionally; and third, the introduction of degrees arises uniformly from the semantics of the expression "much". These ideas mark a return to the classic morphosyntactic analysis of comparatives found in Bresnan (1973), while incorporating and extending semantic insights of Schwarzschild (2002, 2006). Of major interest is how the dimensions for comparison observed across the panoply of comparative constructions vary, and these are analyzed as a consequence of what is measured (individuals, events, states, etc.), rather than which expressions invoke the measurement. This shift in perspective leads to the observation of a number of regularities in the mapping between form and meaning that could not otherwise have been seen. First, the notion of measurement expressed across comparative constructions is familiar from some explications of that concept in measurement theory (e.g. Berka 1983). Second, the distinction between gradable and non-gradable adjectives is formally on a par with that between mass and count nouns, and between atelic and telic verb phrases. Third, comparatives are perceived to be acceptable if the domain for measurement is structured, and to be anomalous otherwise.
Finally, elaborations of grammatical form reflexively affect which dimensions for comparison are available to interpretation.
Item The Syntax of Non-syntactic Dependencies (2013) Larson, Bradley Theodore; Hornstein, Norbert; Lasnik, Howard; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
In this dissertation I explore the nature of interpretive dependencies in human language. In particular I investigate the limits of syntactically mediated interpretive dependencies as well as non-syntactic ones. Broadly speaking, I investigate the limits of grammatical dependencies and note that current theory cannot possibly handle certain dependencies. That certain dependencies evade grammatical explanation requires a rethinking of the representations of those dependencies. The results of this investigation concern the primacy and the purview of the syntax component of the grammar. In short, the purview of syntactic relations is limited to c-command, and if a c-command relation holds between two related elements, a syntactic relation must hold between them, either directly or indirectly. When c-command does not hold between the related elements, a syntactic dependency is not possible and the dependency must hold at a subsequent level of representation. To show this, I explore interpretive dependencies that I argue only superficially resemble standard, syntactically-mediated relations (such as Wh-gap dependencies). I show that these dependencies are not amenable to analysis as syntactically-mediated relations. These include Coordinated-Wh Questions like those explored in Gracanin-Yuksek 2007, Right Node Raising constructions like those explored in Postal 1974, and Across-the-board constructions like those explored in Williams 1978. Each of these involves an interpretive dependency that I claim cannot be derived syntactically.
The above constructions evade explanation via traditional syntactic tools as well as semantic and pragmatic means of analysis. If the above constructions involve dependencies that cannot be construed as syntactically-, semantically-, or pragmatically-mediated, it must be the case that these otherwise normal dependencies are captured via other means, whatever those may be.
Item The Semantics of Proper Names and Other Bare Nominals (2012) Izumi, Yu; Pietroski, Paul M; Philosophy; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This research proposes a unified approach to the semantics of the so-called bare nominals, which include proper names (e.g., `Mary'), mass and plural terms (e.g., `water', `cats'), and articleless noun phrases in Japanese. I argue that bare nominals themselves are monadic predicates applicable to more than one particular, but they can constitute complex referential phrases when located within an appropriate linguistic environment. Bare nominals used as the subjects or objects of sentences are some or other variant of definite descriptions, which are analyzed as non-quantificational, referential expressions. The overarching thesis is that the semantic properties of bare nominal expressions, such as rigidity, are not inherent in the words themselves, but derived from the basic features of complex nominal phrases.
Item On Utterance Interpretation and Metalinguistic-Semantic Competence (2012) Erickson, Kent Wayne; Pietroski, Paul M; Philosophy; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This study explores the role of what I call metalinguistic-semantic competence (MSC) in the processes of utterance interpretation, and in some cases expression interpretation. MSC is so-called because it is grounded in a speaker's explicit knowledge of (or beliefs about) the lexically-encoded meanings of individual words.
More specifically, MSC derives, in part, from having concepts of words--or conceptsW, as I distinguish them--whose representational contents, I propose, are corresponding items in a speaker's mental lexicon. The leading idea is that, once these concepts are acquired, speakers use their conceptsW to form explicit beliefs about the meanings of words in terms of which extralinguistic concepts those words can (and cannot) coherently be used to express in ordinary conversational situations, as constrained by their linguistically-encoded meanings. Or to put the claim differently, I argue that a speaker's explicit conception of word-meanings is a direct conscious reflection of his/her tacit understanding of the various ways in which lexical meanings guide and constrain, without fully determining, what their host words can (and cannot) be used/uttered to talk about in ordinary discourse. Such metalinguistic knowledge, I contend, quite often plays a crucial role in our ability to correctly interpret what other speakers say. The first part of this work details the cognitive mechanisms underlying MSC against the backdrop of a Chomskyan framework for natural language and a Fodorian theory of concepts and their representational contents. The second part explores three ways that MSC might contribute to what I call a speaker's core linguistic-semantic competence. Specifically, I argue that MSC can help explain (i) how competent speakers acquire conceptually underspecified words with their lexical meanings, (ii) the contextual disambiguation of inherently polysemous words, and (iii) the informativeness of true natural language identity statements involving coreferential proper names.
The philosophically relevant conclusion is that if any of these proposals pan out, then MSC constitutes a proper explanandum of semantic theory, and hence of any complete/adequate theory of semantic competence.
Item It's Just Semantics: What Fiction Reveals About Proper Names (2008-04-18) Tiedke, Heidi; Pietroski, Paul M; Philosophy; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Sentences like the following raise puzzles for standard systematic theories about language: (1) Bertrand Russell smoked a pipe. (2) Sherlock Holmes smoked a pipe. Prima facie, these sentences have the same semantic structure and contain expressions of the same semantic type; the only difference between them is that they contain different proper names. Intuitively, (1) and (2) are both true, but they are made true in different ways. Presumably (1) is true because the individual, Bertrand Russell, has or had the property of being a pipe smoker. In contrast, (2) is true for a reason something like this: the sentence 'Holmes smokes a pipe', or an equivalent thereof, or a sentence entailing this sentence, was inscribed in the Holmes novels by Arthur Conan Doyle (2002). I show that the existence of fictional names, and the truths uttered using them, are not adequately explained by any extant account of fictional discourse. A proper explanation involves giving a semantics for names that can account for both referential and fictional uses of proper names. To this end, I argue that names should not be understood as expressions that immediately refer to objects. Rather, names should be understood as expressions that encode information about a speaker's act of introducing novel uses for them. Names are not linked to objects, but to what I call "contexts of introduction". I explain how this allows room for an explanation of fictional names, and how it also accommodates Kripkean uses of proper names.