Linguistics Theses and Dissertations
Permanent URI for this collection: http://hdl.handle.net/1903/2787
Item WORD SENSE DISAMBIGUATION WITHIN A MULTILINGUAL FRAMEWORK (2003) Diab, Mona Talat; Resnik, Philip; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Word Sense Disambiguation (WSD) is the process of resolving the meaning of a word unambiguously in a given natural language context. Within the scope of this thesis, it is the process of marking text with explicit sense labels. What constitutes a sense is a subject of great debate. An appealing perspective aims to define senses in terms of their multilingual correspondences, an idea explored by several researchers (Dyvik 1998; Ide 1999; Resnik & Yarowsky 1999; Chugur, Gonzalo & Verdejo 2002), but to date it has not been given a practical demonstration. This thesis is an empirical validation of these ideas of characterizing word meaning using cross-linguistic correspondences. The idea is that word meaning, or word sense, is quantifiable to the extent that it is uniquely translated in some language or set of languages. Consequently, we address the problem of WSD from a multilingual perspective: we expand the notion of context to encompass multilingual evidence. We devise a new approach to resolving word sense ambiguity in natural language, using a source of information that had never before been exploited on a large scale for WSD. The core of the work presented here builds on exploiting word correspondences across languages for sense distinction. In essence, it is a practical and functional implementation of a basic idea common to research on defining word meanings in cross-linguistic terms. We devise an algorithm, SALAAM (Sense Assignment Leveraging Alignment And Multilinguality), that empirically investigates the feasibility and validity of utilizing translations for WSD.
SALAAM is an unsupervised approach to word sense tagging of large amounts of text, given a parallel corpus (texts in translation) and a sense inventory for one of the languages in the corpus. Using SALAAM, we obtain large amounts of sense-annotated data in both languages of the parallel corpus simultaneously. The quality of the tagging is rigorously evaluated for both languages of the corpus. The unsupervised tagged data produced automatically by SALAAM is further used to bootstrap a supervised WSD learning system, in essence combining supervised and unsupervised approaches to alleviate the resource-acquisition bottleneck of supervised methods. Essentially, SALAAM is extended as an unsupervised approach to WSD within a learning framework; for many of the words disambiguated, SALAAM coupled with the machine learning system rivals the performance of a canonical supervised WSD system that relies on human-tagged data for training. Recognizing the fundamental role of similarity in SALAAM, we investigate different dimensions of semantic similarity as they apply to verbs, since verbs are relatively more complex than nouns, the focus of the previous evaluations. We design a human judgment experiment to obtain human ratings of verbs' semantic similarity. The obtained ratings serve as a reference point for comparing different automated similarity measures that rely on various sources of information.
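The cross-linguistic tagging idea can be illustrated with a minimal sketch. All names below are hypothetical (a toy two-sense inventory and hand-written alignment pairs, not SALAAM's actual resources, which are automatically word-aligned parallel corpora and a full inventory such as WordNet): English words that share a foreign translation are grouped, and the sense label covering the most group members is projected back onto the aligned tokens.

```python
from collections import defaultdict

# Hypothetical toy sense inventory: English word -> candidate sense labels.
SENSE_INVENTORY = {
    "bank":   {"bank.n.01", "bank.n.02"},   # financial institution / river edge
    "shore":  {"bank.n.02"},
    "lender": {"bank.n.01"},
}

def tag_by_translation(aligned_pairs):
    """Group English words that share a foreign translation, then project
    onto each group the sense label that covers the most group members.
    aligned_pairs: iterable of (english_word, foreign_word) alignments."""
    groups = defaultdict(set)
    for en, fo in aligned_pairs:
        groups[fo].add(en)
    tags = {}
    for fo, en_words in groups.items():
        # Each word in the group votes for its candidate senses.
        votes = defaultdict(int)
        for w in en_words:
            for sense in SENSE_INVENTORY.get(w, ()):
                votes[sense] += 1
        if votes:
            best = max(votes, key=votes.get)
            # Project the winning sense onto every compatible aligned pair.
            for w in en_words:
                if best in SENSE_INVENTORY.get(w, ()):
                    tags[(w, fo)] = best
    return tags

tags = tag_by_translation([("bank", "rive"), ("shore", "rive"),
                           ("bank", "banque"), ("lender", "banque")])
```

In this toy run, "bank" receives the river sense when aligned with "rive" (shared with "shore") and the financial sense when aligned with "banque" (shared with "lender"), which is the intuition behind using translations as sense evidence.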
Finally, a cognitively salient model integrating human judgments into SALAAM is proposed as a means of improving its performance on sense disambiguation for verbs in particular and other word types in general.

Item Thematic Relations Between Nouns (2001) Castillo, Juan Carlos; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation explores some of the traditionally labeled possessive relations and proposes a basic syntactic structure that underlies them. The two nouns act as subject and predicate in a small clause, dominated by two functional projections, where reference/agreement and contextual restrictions are checked. Looking first at container-content relations, we propose that the container is always a predicate for the content. Because in our system selection is determined in the small clause and agreement is checked in an AgrP, selection and agreement need not be determined by the same noun. Selection also distinguishes between a container and a content reading. The evidence from extraction shows that container readings are more complex than content readings. We propose that the container reading adds a higher small clause whose predicate is the feature number. Number is thus a predicate, which type-lifts mass terms to count nouns, the way classifiers do in languages without number. Evidence from Spanish and Asturian shows a three-way distinction between absence of number (mass terms), singular and plural. We also propose that nouns are not divided into rigid classes, such as mass/count. Rather, any noun may be used as mass or count, depending on whether number is added to its syntactic derivation or not. An analysis of possessor raising to both nominative and dative in Spanish also supports the idea that nouns are not divided into rigid classes with respect to their ability to enter possessive relations.
Relations such as part/whole and alienable and inalienable possession are all analyzed as small clauses in which the possessor is the subject and the possessed is the predicate. Finally, we propose a universal principle: possessor raising can occur in languages that have a structural Case in a v-projection, in addition to the Case checked by the direct object. This predicts that causative verbs in languages with possessor raising should also allow the Case checking of both the object and the subject of an embedded transitive clause. The prediction is borne out, giving rise to four types of languages, according to their Case system.

Item COMPUTATIONAL ANALYSIS OF THE CONVERSATIONAL DYNAMICS OF THE UNITED STATES SUPREME COURT (2009) Hawes, Timothy; Lin, Jimmy; Resnik, Philip; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The decisions of the United States Supreme Court have far-reaching implications in American life. Using transcripts of Supreme Court oral arguments, this work examines the conversational dynamics of Supreme Court justices and links their conversational interaction with the decisions of the Court and of individual justices. While several studies have looked at the relationship between oral arguments and case variables, to our knowledge none has looked at the relationship between conversational dynamics and case outcomes. Working from this view, we show that the conversation of Supreme Court justices is both predictable and predictive. We aim to show that conversation during Supreme Court cases is patterned, that this patterned conversation is associated with case outcomes, and that this association can be used to make predictions about case outcomes. We present three sets of experiments to accomplish this.
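To illustrate what it means to treat turn-taking as a patterned sequence, here is a minimal sketch; the speaker names and sequences are invented, and this is not the dissertation's actual model (which uses richer discourse and lexical features). A first-order Markov model over speaker turns already captures simple regularities such as justices alternating with counsel.

```python
from collections import defaultdict, Counter

def train_turn_model(speaker_sequences):
    """First-order Markov model over speaker turns: counts of
    next-speaker given current speaker.
    speaker_sequences: list of lists of speaker names."""
    transitions = defaultdict(Counter)
    for seq in speaker_sequences:
        for cur, nxt in zip(seq, seq[1:]):
            transitions[cur][nxt] += 1
    return transitions

def predict_next(transitions, current):
    """Most frequent follower of `current`, or None if unseen."""
    if current not in transitions:
        return None
    return transitions[current].most_common(1)[0][0]

# Invented toy "transcripts" of who speaks in order.
model = train_turn_model([
    ["Roberts", "Counsel", "Scalia", "Counsel", "Roberts", "Counsel"],
    ["Scalia", "Counsel", "Roberts", "Counsel"],
])
```

On this toy data the model predicts that counsel is most often followed by a justice and each justice by counsel; the experiments described above go further by tying such patterns to case outcomes.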
The first examines the order of speakers during oral arguments as a patterned sequence, showing that cohesive elements in the discourse, along with references to individuals, provide significant improvements over our "bag-of-words" baseline in identifying speakers in sequence within a transcript. The second graphically examines the association between speaker turn-taking and case outcomes. The results of this experiment point to interesting and complex relationships between conversational interaction and case variables, such as justices' votes. The third experiment shows that this relationship can be used in the prediction of case outcomes, with accuracy ranging from 62.5% to 76.8% under varying conditions. Finally, we offer recommendations for improved tools for legal researchers interested in the relationship between conversation during oral arguments and case outcomes, and suggestions for how these tools may be applied to more general problems.

Item Fine-Grained Linguistic Soft Constraints on Statistical Natural Language Processing Models (2009) Marton, Yuval Yehezkel; Resnik, Philip; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation focuses on the effective combination of data-driven natural language processing (NLP) approaches with linguistic knowledge sources that are based on manual text annotation or on word grouping according to semantic commonalities. I apply fine-grained linguistic soft constraints -- of syntactic or semantic nature -- to statistical NLP models, evaluated in end-to-end state-of-the-art statistical machine translation (SMT) systems.
The introduction of semantic soft constraints involves intrinsic evaluation on word-pair similarity ranking tasks, extension from words to phrases, application in a novel distributional paraphrase generation technique, and the introduction of a generalized framework in which these soft semantic and syntactic constraints can be viewed as instances and potentially combined. In many cases, fine granularity is key to the successful combination of these soft constraints. I show how to softly constrain SMT models by adding fine-grained weighted features, each preferring translation of only a specific syntactic constituent. Previous attempts using coarse-grained features yielded negative results. I also show how to softly constrain corpus-based semantic models of words ("distributional profiles") to effectively create word-sense-aware models, by using semantic word grouping information found in a manually compiled thesaurus. Previous attempts, using hard constraints and resulting in aggregated, coarse-grained models, yielded lower gains. A novel paraphrase generation technique incorporating these soft semantic constraints is then also evaluated in an SMT system. This paraphrasing technique is based on the Distributional Hypothesis. The main advantage of this novel technique over current "pivoting" techniques for paraphrasing is its independence from parallel texts, which are a limited resource. The evaluation is done by augmenting translation models with paraphrase-based translation rules, where fine-grained scoring of paraphrase-based rules yields significantly higher gains. The model augmentation includes a novel semantic reinforcement component: in many cases there are alternative paths for generating a paraphrase-based translation rule, and each of these paths reinforces a dedicated score for the "goodness" of the new translation rule.
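A distributional profile and the similarity between two profiles can be sketched in a few lines. The toy corpus, window size, and function names below are illustrative assumptions only; the dissertation's actual measures are built from large corpora and further constrained by thesaurus-based sense groupings.

```python
import math
from collections import Counter

def profile(sentences, target, window=2):
    """Distributional profile of `target`: counts of words co-occurring
    within `window` tokens of it, over a whitespace-tokenized corpus."""
    prof = Counter()
    for sent in sentences:
        toks = sent.split()
        for i, tok in enumerate(toks):
            if tok == target:
                for j in range(max(0, i - window),
                               min(len(toks), i + window + 1)):
                    if j != i:
                        prof[toks[j]] += 1
    return prof

def cosine(p, q):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(c * q.get(w, 0) for w, c in p.items())
    norm = math.sqrt(sum(c * c for c in p.values())) * \
           math.sqrt(sum(c * c for c in q.values()))
    return dot / norm if norm else 0.0

p = profile(["the cat sat on the mat", "a cat ate"], "cat")
q = profile(["the dog sat on the mat", "a dog ate"], "dog")
```

Per the Distributional Hypothesis, words with near-identical contexts (here the toy "cat" and "dog") come out highly similar, which is the property the paraphrase generation technique exploits without requiring parallel texts.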
This augmented score is then used as a soft constraint, in a weighted log-linear feature, letting the translation model learn how much to "trust" the paraphrase-based translation rules. The work reported here is the first to use distributional semantic similarity measures to improve the performance of an end-to-end phrase-based SMT system. The unified framework for statistical NLP models with soft linguistic constraints enables, in principle, the combination of semantic and syntactic constraints -- and potentially other constraints, too -- in a single SMT model.

Item BEYOND STATISTICAL LEARNING IN THE ACQUISITION OF PHRASE STRUCTURE (2009) Takahashi, Eri; Lidz, Jeffrey; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The notion that children use statistical distributions present in the input to acquire various aspects of linguistic knowledge has received considerable recent attention. But the role of the learner's initial state has been largely ignored in those studies. What remains unclear is the nature of the learner's contribution. At least two possibilities exist. One is that all learners do is collect and compile accurately predictive statistics from the data, without an antecedently specified set of possible structures (Elman et al. 1996; Tomasello 2000). On this view, the outcome of learning is based solely on the observed input distributions. A second possibility is that learners use statistics to identify particular abstract syntactic representations (Miller & Chomsky 1963; Pinker 1984; Yang 2006). On this view, children have predetermined linguistic knowledge of possible structures, and the acquired representations have deductive consequences beyond what can be derived from the observed statistical distributions alone.
This dissertation examines how the environment interacts with the structure of the learner, and proposes a link between distributional and nativist approaches to language acquisition. To investigate this more general question, we focus on how infants, adults and neural networks acquire the phrase structure of their target language. This dissertation presents seven experiments, which show that adults and infants can project their generalizations to novel structures, while the Simple Recurrent Network fails. Moreover, it will be shown that learners' generalizations go beyond the stimuli, but that those generalizations are constrained in the same ways that natural languages are constrained. This is compatible with the view that statistical learning interacts with an inherent representational system, but incompatible with the view that statistical learning is the sole mechanism by which the existence of phrase structure is discovered. This provides novel evidence that statistical learning interacts with innate constraints on possible representations, and that learners have a deductive power that goes beyond the input data. This suggests that statistical learning is used merely as a method for mapping the surface string to an abstract representation, while innate knowledge specifies the range of possible grammars and structures.

Item Dimensions of Ellipsis: Investigations in Turkish (2009) Ince, Atakan; Lasnik, Howard; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation examines the elliptical structures of (a) sluicing (John called someone, but I don't know who!), (b) fragment answers (A: Who did John call?, B: Mary!), (c) gapping (John is eating ice-cream, and Mary apple pie!), and (d) Right Node Raising (John cooked and Mary ate the apple pie!) in Turkish, and gives a `PF-deletion'-based analysis of all these elliptical structures.
As for sluicing and fragment answers, evidence in support of PF-deletion comes from P-(non-)stranding and Case Matching, respectively. Further, these elliptical structures are island-insensitive in Turkish. As for gapping, this study gives a `movement + deletion' analysis, in which remnants in the second conjunct raise to its left periphery and the rest of the second conjunct is elided. One striking property of gapping in Turkish is that it is a root phenomenon; it cannot occur in complement clauses, for instance. As for Right Node Raising, again a PF-deletion analysis is given: the identical element(s) in the first conjunct is/are elided under identity with the corresponding element(s) in the second conjunct. The striking property of RNR is that its remnants need not be clause-mates, in contrast to other elliptical structures, where remnants can be non-clause-mates only in very specific contexts. This, I suggest, is because PF-deletion in RNR applies at a later derivational stage than in other elliptical structures. At this stage, a syntactic derivation consists of linearized (sub-)lexical forms, with no hierarchical representation. This also suggests that a Markovian system exists in grammar.
In brief, this thesis looks at different elliptical structures in Turkish and gives arguments for PF-deletion in all of them, which has interesting implications for generative theory.

Item On The Way To Linguistic Representation: Neuromagnetic Evidence of Early Auditory Abstraction in the Perception of Speech and Pitch (2009) Monahan, Philip Joseph; Idsardi, William J; Poeppel, David E; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The goal of this dissertation is to show that even at the earliest (non-invasively) recordable stages of auditory cortical processing, we find evidence that cortex is calculating abstract representations from the acoustic signal. Looking across two distinct domains (inferential pitch perception and vowel normalization), I present evidence demonstrating that the M100, an automatic evoked neuromagnetic component that localizes to primary auditory cortex, is sensitive to abstract computations. The M100 typically responds to physical properties of the stimulus in auditory and speech perception and integrates only over the first 25 to 40 ms of stimulus onset, providing a reliable dependent measure that allows us to tap into early stages of auditory cortical processing. In Chapter 2, I briefly present the episodicist position on speech perception and discuss research indicating that the strongest episodicist position is untenable. I then review findings from the mismatch negativity literature, where proposals have been made that the MMN allows access to linguistic representations supported by auditory cortex. Finally, I conclude the chapter with a discussion of previous findings on the M100/N1. In Chapter 3, I present neuromagnetic data showing that the response properties of the M100 are sensitive to the missing fundamental component, using well-controlled stimuli.
These findings suggest that listeners reconstruct the inferred pitch by 100 ms after stimulus onset. In Chapter 4, I propose a novel formant ratio algorithm in which the third formant (F3) is the normalizing factor. The goal of formant ratio proposals is to provide an explicit algorithm that successfully "eliminates" speaker-dependent acoustic variation from auditory vowel tokens. Results from two MEG experiments suggest that auditory cortex is sensitive to formant ratios and that the perceptual system shows heightened sensitivity to tokens located in more densely populated regions of the vowel space. In Chapter 5, I report MEG results suggesting that early auditory cortical processing is sensitive to violations of a phonological constraint on sound sequencing: listeners make highly specific, knowledge-based predictions about rather abstract anticipated properties of the upcoming speech signal, and violations of these predictions are evident in early cortical processing.

Item Island repair and non-repair by PF strategies (2009) Nakao, Chizuru; Hornstein, Norbert R; Lasnik, Howard; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Since Ross (1967), it has been observed that there are configurations from which otherwise unbounded movement operations cannot occur; these configurations are called islands. Ellipsis and resumption are known to have the peculiar property of `repairing' island violations. Each chapter of this thesis discusses a case of ellipsis or resumption to examine in which cases movement out of an island becomes licit under those strategies. Chapter 2 discusses the elliptical construction called sluicing, and argues for the PF-deletion analysis of sluicing (Merchant 2001, originating in Ross 1969). I show that ECP violations created by adjunct sluicing cannot be repaired by sluicing, unlike island violations.
I thus argue that island violations are PF-violations while ECP violations are LF-violations, and that PF-deletion ameliorates only PF-violations. Chapter 3 examines the properties of stripping and argues that stripping is derived by focus movement followed by PF-deletion. I attribute the lack of island repair under ellipsis in stripping to the fact that focus movement is not usually overt in English. Covert movement is driven by a weak feature (Chomsky 1995), but when focused material is included in the PF-deletion site, it undergoes last-resort PF-movement to satisfy the recoverability of deletion. I claim that this PF-movement is incompatible with island repair, speculating that island violations are ameliorated at spell-out, and that post-spell-out movement is `too late' to be repaired. Chapter 4 reviews the properties of Japanese sluicing and introduces Hiraiwa and Ishihara's (2002) analysis, in which Japanese sluicing is derived from what they call the "no da" in-situ focus construction. Under this analysis, the sluiced wh-phrase undergoes focus movement, followed by clausal deletion. I extend the analysis of stripping to Japanese sluicing, claiming that this is another instance of last-resort focus movement at PF, which cannot ameliorate island violations. Chapter 5 discusses the properties of Left Node Raising (LNR) in Japanese. Based on the fact that simple LNR shows properties distinct from the Null Object Construction (NOC), I claim that LNR involves ATB-movement rather than NOC. However, the second gap of LNR behaves like a pronoun only when included inside an island.
I claim that this is an instance of a null resumptive pronoun.

Item The predictive nature of language comprehension (2009) Lau, Ellen Frances; Phillips, Colin; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation explores the hypothesis that predictive processing -- the access and construction of internal representations in advance of the external input that supports them -- plays a central role in language comprehension. Linguistic input is frequently noisy, variable, and rapid, but it is also subject to numerous constraints. Predictive processing could be a particularly useful approach in language comprehension, as predictions based on the constraints imposed by the prior context could allow computation to be speeded and noisy input to be disambiguated. Decades of previous research have demonstrated that the broader sentence context has an effect on how new input is processed, but less progress has been made in determining the mechanisms underlying such contextual effects. This dissertation is aimed at advancing this second goal, by using both behavioral and neurophysiological methods to motivate predictive or top-down interpretations of contextual effects and to test particular hypotheses about the nature of the predictive mechanisms in question. The first part of the dissertation focuses on the lexical-semantic predictions made possible by word and sentence contexts. MEG and fMRI experiments, in conjunction with a meta-analysis of the previous neuroimaging literature, support the claim that an ERP effect classically observed in response to contextual manipulations -- the N400 effect -- reflects facilitation in processing due to lexical-semantic predictions, and that these predictions are realized at least in part through top-down changes in activity in left posterior middle temporal cortex, the cortical region thought to represent lexical-semantic information in long-term memory.
The second part of the dissertation focuses on syntactic predictions. ERP and reaction-time data suggest that the syntactic requirements of the prior context impact processing of the current input very early, and that predicting the syntactic position in which those requirements can be fulfilled may allow the processor to avoid a retrieval mechanism that is prone to similarity-based interference errors. In sum, the results described here are consistent with the hypothesis that a significant amount of language comprehension takes place in advance of the external input, and they suggest future avenues of investigation toward understanding the mechanisms that make this possible.

Item Form, meaning and context in lexical access: MEG and behavioral evidence (2009) Almeida, Diogo; Poeppel, David; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
One of the main challenges in the study of cognition is how to connect brain activity to cognitive processes. In the domain of language, this requires coordination between two different lines of research: theoretical models of linguistic knowledge and language processing on the one side, and the brain sciences on the other. The work reported in this dissertation attempts to link these two lines of research by focusing on one particular aspect of linguistic processing, namely lexical access. The rationale for this focus is that access to the lexicon is a mandatory step in any theory of linguistic computation, and therefore findings about lexical access procedures have consequences for language processing models in general. Moreover, in the domain of brain electrophysiology, past research on event-related brain potentials (ERPs) -- electrophysiological responses taken to reflect the processing of specific kinds of stimuli or specific cognitive processes -- has uncovered different ERPs that have been connected to linguistic stimuli and processes.
One particular ERP, peaking at around 400 ms post-stimulus onset (the N400), has been linked to lexico-semantic processing, but its precise functional interpretation remains controversial: the N400 has been proposed to reflect lexical access procedures as well as higher-order semantic/pragmatic processing. In a series of three MEG experiments, we show that access to the lexicon from print occurs much earlier than previously thought, at around 200 ms, but more research is needed before the same conclusion can be reached about lexical access based on auditory or sign language input. The cognitive activity indexed by the N400 and its MEG analogue is argued to constitute predictive processing that integrates information from linguistic and non-linguistic sources at a later, post-lexical stage.