Center for Advanced Study of Language Research Works
Browsing by title, showing items 1 - 20 of 22.
Item: Analysis of Stopping Active Learning based on Stabilizing Predictions (Association for Computational Linguistics, 2013-08). Bloodgood, Michael; Grothendieck, John.
Within the natural language processing (NLP) community, active learning has been widely investigated and applied in order to alleviate the annotation bottleneck faced by developers of new NLP systems and technologies. This paper presents the first theoretical analysis of stopping active learning based on stabilizing predictions (SP). The analysis has revealed three elements that are central to the success of the SP method: (1) bounds on Cohen's Kappa agreement between successively trained models impose bounds on differences in F-measure performance of the models; (2) since the stop set does not have to be labeled, it can be made large in practice, helping to guarantee that the results transfer to previously unseen streams of examples at test/application time; and (3) good (low variance) sample estimates of Kappa between successive models can be obtained. Proofs of relationships between the level of Kappa agreement and the difference in performance between consecutive models are presented. Specifically, if the Kappa agreement between two models exceeds a threshold T (where T > 0), then the difference in F-measure performance between those models is bounded above by 4(1−T)/T in all cases. If the precision of the positive conjunction of the models is assumed to be p, then the bound can be tightened to 4(1−T)/((p+1)T).
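Both bounds stated in this abstract are simple closed-form expressions. The sketch below (ours, not the paper's code) computes them for a given Kappa threshold:

```python
# Sketch: compute the F-measure difference bounds stated in the abstract
# for a given Kappa agreement threshold T and optional precision p of
# the models' positive conjunction.
def f_measure_diff_bounds(T, p=None):
    if not 0 < T <= 1:
        raise ValueError("T must be in (0, 1]")
    general = 4 * (1 - T) / T                 # holds in all cases
    tightened = None
    if p is not None:
        tightened = 4 * (1 - T) / ((p + 1) * T)
    return general, tightened

# Kappa agreement above 0.99 bounds the F-measure difference by ~0.040;
# assuming positive-conjunction precision 0.85 tightens it to ~0.022.
print(f_measure_diff_bounds(0.99, p=0.85))
```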
Item: Annotating Cognates and Etymological Origin in Turkic Languages (European Language Resources Association, 2012-05). Mericli, Benjamin; Bloodgood, Michael.
Turkic languages exhibit extensive and diverse etymological relationships among lexical items. These relationships make the Turkic languages promising for exploring automated translation lexicon induction by leveraging cognate and other etymological information. However, due to the extent and diversity of the types of relationships between words, it is not clear how to annotate such information. In this paper, we present a methodology for annotating cognates and etymological origin in Turkic languages. Our method strives to balance the amount of research effort the annotator expends with the utility of the annotations for supporting research on improving automated translation lexicon induction.

Item: An Approach to Reducing Annotation Costs for BioNLP (Association for Computational Linguistics, 2008-06). Bloodgood, Michael; Vijay-Shanker, K.
There is a broad range of BioNLP tasks for which active learning (AL) can significantly reduce annotation costs, and a specific AL algorithm we have developed is particularly effective in reducing annotation costs for these tasks. We have previously developed an AL algorithm called ClosestInitPA that works best with tasks that have the following characteristics: redundancy in training material, burdensome annotation costs, Support Vector Machines (SVMs) work well for the task, and imbalanced datasets (i.e., when set up as a binary classification problem, one class is substantially rarer than the other). Many BioNLP tasks have these characteristics, and thus our AL algorithm is a natural approach to apply to BioNLP tasks.

Item: Bucking the Trend: Large-Scale Cost-Focused Active Learning for Statistical Machine Translation (Association for Computational Linguistics, 2010-07). Bloodgood, Michael; Callison-Burch, Chris.
We explore how to improve machine translation systems by adding more translation data in situations where we already have substantial resources. The main challenge is how to buck the trend of diminishing returns that is commonly encountered. We present an active learning-style data solicitation algorithm to meet this challenge. We test it, gathering annotations via Amazon Mechanical Turk, and find that we get an order of magnitude increase in the rate of performance improvement.

Item: Choosing your platform for social media drug research and improving your keyword filter list (2019). Adams, Nikki; Artigiani, Eleanor Erin; Wish, Eric D.
Social media research often has two things in common: Twitter is the platform used, and a keyword filter list is used to extract only relevant Tweets. Here we propose that (a) alternative platforms be considered more often when doing social media research, and (b) regardless of platform, researchers use word embeddings as a type of synonym discovery to improve their keyword filter list, both of which lead to more relevant data. We demonstrate the benefit of these proposals by comparing how successful our synonym discovery method is at finding terms for marijuana and select opioids on Twitter versus a platform that can be filtered by topic, Reddit. We also find words that are not on the U.S. Drug Enforcement Administration (DEA) drug slang list for that year, some of which appear on the list the subsequent year, showing that this method could be employed to find drug terms faster than traditional means.
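As a rough illustration of the embedding-based synonym discovery described in the abstract above (not the authors' exact pipeline), one can train a word2vec model on collected posts and query the nearest neighbors of known seed terms; the corpus file and seed keywords below are placeholder assumptions:

```python
# Sketch: embedding-based synonym discovery for keyword-list expansion.
# Assumes gensim is installed and posts.txt (one post per line) is a
# stand-in for the collected Twitter/Reddit data.
from gensim.models import Word2Vec

with open("posts.txt", encoding="utf-8") as f:
    sentences = [line.lower().split() for line in f]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=5)

seed_terms = ["marijuana", "oxycodone"]  # hypothetical seed keywords
for seed in seed_terms:
    if seed in model.wv:
        # Nearest neighbors in embedding space are candidate synonyms
        # (slang, misspellings) to review and add to the filter list.
        for term, score in model.wv.most_similar(seed, topn=10):
            print(f"{seed} -> {term} ({score:.2f})")
```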
Item: Correcting Errors in Digital Lexicographic Resources Using a Dictionary Manipulation Language (Trojina Institute for Applied Slovene Studies, 2011-11). Zajic, David; Maxwell, Michael; Doermann, David; Rodrigues, Paul; Bloodgood, Michael.
We describe a paradigm for combining manual and automatic error correction of noisy structured lexicographic data. Modifications to the structure and underlying text of the lexicographic data are expressed in a simple, interpreted programming language. Dictionary Manipulation Language (DML) commands identify nodes by unique identifiers, and manipulations are performed using simple commands such as create, move, set text, etc. Corrected lexicons are produced by applying sequences of DML commands to the source version of the lexicon. DML commands can be written manually to repair one-off errors or generated automatically to correct recurring problems. We discuss advantages of the paradigm for the task of editing digital bilingual dictionaries.

Item: Data Cleaning for XML Electronic Dictionaries via Statistical Anomaly Detection (IEEE, 2016). Bloodgood, Michael; Strauss, Benjamin.
Many important forms of data are stored digitally in XML format. Errors can occur in the textual content of the data in the fields of the XML. Fixing these errors manually is time-consuming and expensive, especially for large amounts of data. There is increasing interest in the research, development, and use of automated techniques for assisting with data cleaning. Electronic dictionaries are an important form of data frequently stored in XML format, and they often have errors introduced through a mixture of manual typographical entry errors and optical character recognition errors. In this paper we describe methods for flagging statistical anomalies as likely errors in electronic dictionaries stored in XML format. We describe six systems based on different sources of information. The systems detect errors using various signals in the data, including uncommon characters, text length, character-based language models, word-based language models, tied-field length ratios, and tied-field transliteration models. Four of the systems detect errors based on expectations automatically inferred from content within elements of a single field type; we call these single-field systems. Two of the systems detect errors based on correspondence expectations automatically inferred from content within elements of multiple related field types; we call these tied-field systems. For each system, we provide an intuitive analysis of the type of error that it is successful at detecting. Finally, we describe two larger-scale evaluations using crowdsourcing with Amazon's Mechanical Turk platform and using the annotations of a domain expert. The evaluations consistently show that the systems are useful for improving the efficiency with which errors in XML electronic dictionaries can be detected.
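A minimal sketch of the character language-model signal mentioned above (one of the six; the smoothing and scoring details here are our assumptions, not the paper's exact models): score each field's text by its average per-character log-probability under a bigram model trained on all fields of the same type, and send the lowest-scoring fields for review.

```python
# Sketch: flag anomalous dictionary fields with a character bigram model.
# Trains on all text from one field type and scores each field by average
# per-character log-probability; add-one smoothing is an assumption.
import math
from collections import Counter

def train_char_bigrams(texts):
    bigrams, unigrams = Counter(), Counter()
    for t in texts:
        padded = "^" + t + "$"
        unigrams.update(padded[:-1])
        bigrams.update(zip(padded, padded[1:]))
    vocab = {c for t in texts for c in t} | {"^", "$"}
    return bigrams, unigrams, len(vocab)

def avg_logprob(text, bigrams, unigrams, v):
    padded = "^" + text + "$"
    total = sum(
        math.log((bigrams[(a, b)] + 1) / (unigrams[a] + v))
        for a, b in zip(padded, padded[1:])
    )
    return total / (len(padded) - 1)

fields = ["kitab", "qalam", "daftar", "k1tab#"]  # toy field contents
model = train_char_bigrams(fields)
ranked = sorted(fields, key=lambda t: avg_logprob(t, *model))
print(ranked[0])  # lowest-probability field is the top anomaly candidate
```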
Item: Detecting Structural Irregularity in Electronic Dictionaries Using Language Modeling (Trojina Institute for Applied Slovene Studies, 2011-11). Rodrigues, Paul; Zajic, David; Doermann, David; Bloodgood, Michael; Ye, Peng.
Dictionaries are often developed using tools that save to Extensible Markup Language (XML)-based standards. These standards often allow high-level repeating elements to represent lexical entries, and utilize descendants of these repeating elements to represent the structure within each lexical entry, in the form of an XML tree. In many cases, dictionaries are published that have errors and inconsistencies that are expensive to find manually. This paper discusses a method for dictionary writers to quickly audit structural regularity across entries in a dictionary by using statistical language modeling. The approach learns the patterns of XML nodes that could occur within an XML tree, and then calculates the probability of each XML tree in the dictionary against these patterns to look for entries that diverge from the norm.

Item: Filtering Tweets for Social Unrest (IEEE, 2017-01). Mishler, Alan; Wonus, Kevin; Chambers, Wendy; Bloodgood, Michael.
Since the events of the Arab Spring, there has been increased interest in using social media to anticipate social unrest. While efforts have been made toward automated unrest prediction, we focus on filtering the vast volume of tweets to identify tweets relevant to unrest, which can be provided to downstream users for further analysis. We train a supervised classifier that is able to label Arabic language tweets as relevant to unrest with high reliability. We examine the relationship between training data size and performance and investigate ways to optimize the model building process while minimizing cost. We also explore how confidence thresholds can be set to achieve desired levels of performance.
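A minimal sketch of the filter-then-threshold setup described in the tweet-filtering abstract (the features, model choice, toy data, and threshold are assumptions, not the paper's system): train a text classifier, then pass downstream only the tweets whose relevance probability clears a confidence threshold.

```python
# Sketch: relevance filter with an adjustable confidence threshold.
# TF-IDF features and logistic regression are stand-ins for the paper's
# classifier; in the toy labels, 1 = unrest-relevant, 0 = irrelevant.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = ["protest in the square today", "great recipe for dinner"]
train_labels = [1, 0]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression().fit(vectorizer.fit_transform(train_texts),
                               train_labels)

def filter_tweets(tweets, threshold=0.5):
    """Pass through only tweets whose predicted relevance probability
    clears the threshold; raising it trades recall for precision."""
    probs = clf.predict_proba(vectorizer.transform(tweets))[:, 1]
    return [t for t, p in zip(tweets, probs) if p >= threshold]

print(filter_tweets(["crowds gathering near the square in protest"]))
```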
Item: Interoperable Grammars (2008). Maxwell, Michael; David, Anne.
For languages with significant inflectional morphology, development of a morphological parser is often a prerequisite to further computational linguistic capabilities. We focus on two difficulties for this development: the short lifetime of software such as parsing engines, and the difficulty of porting grammars to new parsing engines. We describe a methodology we have developed to promote portability, using a formal declarative grammar written in XML, which we supplement with a traditional descriptive grammar. The two grammars are combined into a single document using Literate Programming. The formal grammar is designed to be independent of a particular parsing engine's programming language, thus helping solve the software lifetime and portability problems.

Item: Joint Grammar Development by Linguists and Computer Scientists (2008). Maxwell, Michael; David, Anne.
For languages with inflectional morphology, development of a morphological parser is often a bottleneck for further development of computational linguistic capabilities. We focus on two difficulties: first, finding people with expertise in both computer programming and the linguistics of a particular language, and second, the short lifetime of software such as parsers. We then describe a methodology we have developed to split the task of building a parser for a language into two tasks, descriptive grammar development and formal grammar development. The two grammars are combined into a single document using Literate Programming. The formal grammar is designed not to be dependent on a particular parsing engine's programming language, so that it can be readily ported to a new parsing engine, thus helping solve the software lifetime problem.

Item: A Method for Stopping Active Learning Based on Stabilizing Predictions and the Need for User-Adjustable Stopping (Association for Computational Linguistics, 2009-06). Bloodgood, Michael; Vijay-Shanker, K.
A survey of existing methods for stopping active learning (AL) reveals the need for methods that are more widely applicable, more aggressive in saving annotations, and more stable across changing datasets. A new method for stopping AL based on stabilizing predictions is presented that addresses these needs. Furthermore, stopping methods are required to handle a broad range of annotation/performance tradeoff valuations. Despite this, the existing body of work is dominated by conservative methods with little (if any) attention paid to providing users with control over the behavior of stopping methods. The proposed method is shown to fill a gap in the level of aggressiveness available for stopping AL and supports providing users with control over stopping behavior.

Item: A Modality Lexicon and its use in Automatic Tagging (European Language Resources Association, 2010-05). Baker, Kathryn; Bloodgood, Michael; Dorr, Bonnie; Filardo, Nathaniel; Levin, Lori; Piatko, Christine.
This paper describes our resource-building results for an eight-week JHU Human Language Technology Center of Excellence Summer Camp for Applied Language Exploration (SCALE-2009) on Semantically-Informed Machine Translation. Specifically, we describe the construction of a modality annotation scheme, a modality lexicon, and two automated modality taggers that were built using the lexicon and annotation scheme. Our annotation scheme is based on identifying three components of modality: a trigger, a target, and a holder. We describe how our modality lexicon was produced semi-automatically, expanding from an initial hand-selected list of modality trigger words and phrases. The resulting expanded modality lexicon is being made publicly available. We demonstrate that one tagger, a structure-based tagger, achieves precision of around 86% (depending on genre) when tagging a standard LDC data set. In a machine translation application, using the structure-based tagger to annotate English modalities on an English-Urdu training corpus improved the translation quality score for Urdu by 0.3 BLEU points in the face of sparse training data.

Item: A random forest system combination approach for error detection in digital dictionaries (Association for Computational Linguistics, 2012-04-23). Bloodgood, Michael; Ye, Peng; Rodrigues, Paul; Zajic, David; Doermann, David.
When digitizing a print bilingual dictionary, whether via optical character recognition or manual entry, it is inevitable that errors are introduced into the electronic version that is created. We investigate automating the process of detecting errors in an XML representation of a digitized print dictionary using a hybrid approach that combines rule-based, feature-based, and language model-based methods. We investigate combining methods and show that using random forests is a promising approach. We find that in isolation, unsupervised methods rival the performance of supervised methods. Random forests typically require training data, so we investigate how we can apply random forests to combine individual base methods that are themselves unsupervised without requiring large amounts of training data. Experiments reveal empirically that a relatively small amount of data is sufficient and can potentially be further reduced through specific selection criteria.
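As a rough sketch of the system-combination idea in the random forest abstract (the feature layout and toy data are assumptions): the scores emitted by the individual unsupervised detectors become the feature vector for each dictionary entry, and a random forest trained on a small labeled sample combines them into a single error/no-error decision.

```python
# Sketch: combine unsupervised error-detector scores with a random forest.
# Each row holds one entry's scores from three hypothetical base systems
# (rule-based, character LM, word LM); labels mark known errors.
from sklearn.ensemble import RandomForestClassifier

X_train = [
    [0.1, -2.3, -4.0],  # clean entry
    [0.9, -7.8, -9.5],  # known error
    [0.2, -3.1, -3.7],  # clean entry
    [0.8, -6.9, -8.8],  # known error
]
y_train = [0, 1, 0, 1]  # small labeled sample, per the paper's finding

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

# Rank new entries by predicted error probability for manual review.
X_new = [[0.7, -7.0, -9.1], [0.1, -2.5, -3.9]]
print(forest.predict_proba(X_new)[:, 1])
```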
Item: Rapid Adaptation of POS Tagging for Domain Specific Uses (Association for Computational Linguistics, 2006-06). Miller, John; Bloodgood, Michael; Torii, Manabu; Vijay-Shanker, K.
Part-of-speech (POS) tagging is a fundamental component for performing natural language tasks such as parsing, information extraction, and question answering. When POS taggers are trained in one domain and applied in significantly different domains, their performance can degrade dramatically. We present a methodology for rapid adaptation of POS taggers to new domains. Our technique is unsupervised in that a manually annotated corpus for the new domain is not necessary. We use suffix information gathered from large amounts of raw text, as well as orthographic information, to increase the lexical coverage. We present an experiment in the biological domain where our POS tagger achieves results comparable to POS taggers specifically trained for this domain.

Item: Semantically-Informed Syntactic Machine Translation: A Tree-Grafting Approach (2010-10). Baker, Kathryn; Bloodgood, Michael; Callison-Burch, Chris; Dorr, Bonnie; Filardo, Nathaniel; Levin, Lori; Miller, Scott; Piatko, Christine.
We describe a unified and coherent syntactic framework for supporting a semantically-informed syntactic approach to statistical machine translation. Semantically enriched syntactic tags assigned to the target-language training texts improved translation quality. The resulting system significantly outperformed a linguistically naive baseline model (Hiero) and reached the highest scores yet reported on the NIST 2009 Urdu-English translation task. This finding supports the hypothesis (posed by many researchers in the MT community, e.g., in DARPA GALE) that both syntactic and semantic information are critical for improving translation quality, and further demonstrates that large gains can be achieved for low-resource languages with different word order than English.

Item: Statistical Modality Tagging from Rule-based Annotations and Crowdsourcing (Association for Computational Linguistics, 2012-07-13). Prabhakaran, Vinodkumar; Bloodgood, Michael; Diab, Mona; Dorr, Bonnie; Levin, Lori; Piatko, Christine; Rambow, Owen; Van Durme, Benjamin.
We explore training an automatic modality tagger. Modality is the attitude that a speaker might have toward an event or state. One of the main hurdles for training a linguistic tagger is gathering training data. This is particularly problematic for training a tagger for modality because modality triggers are sparse for the overwhelming majority of sentences. We investigate an approach to automatically training a modality tagger where we first gathered sentences based on a high-recall simple rule-based modality tagger and then provided these sentences to Mechanical Turk annotators for further annotation. We used the resulting set of training data to train a precise modality tagger using a multi-class SVM that delivers good performance.

Item: Supplemental Materials for Slevc, Davey, & Linck (2016), A new look at the 'hard problem' of bilingual lexical access: Evidence for language suppression with univalent stimuli (2015-10-14). Slevc, L. Robert; Davey, Nicholas; Linck, Jared A.
This document provides the supporting documentation of the modeling procedure and interim results for the analyses reported in Slevc, Davey, and Linck (2016), A new look at "the hard problem" of bilingual lexical access: Evidence for language suppression with univalent stimuli.

Item: Taking into Account the Differences between Actively and Passively Acquired Data: The Case of Active Learning with Support Vector Machines for Imbalanced Datasets (Association for Computational Linguistics, 2009-06). Bloodgood, Michael; Vijay-Shanker, K.
Actively sampled data can have very different characteristics than passively sampled data. Therefore, it is promising to investigate using different inference procedures during AL than are used during passive learning (PL). This general idea is explored in detail for the focused case of AL with cost-weighted SVMs for imbalanced data, a situation that arises for many HLT tasks. The key idea behind the proposed InitPA method for addressing imbalance is to base cost models during AL on an estimate of overall corpus imbalance computed via a small unbiased sample, rather than on the imbalance in the labeled training data, which is the leading method used during PL.
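A minimal sketch of the cost-weighting idea behind InitPA as summarized above (the estimation step and weighting scheme here are illustrative assumptions, not the published algorithm): estimate class imbalance from a small unbiased sample of the pool, then set the SVM's class costs from that estimate instead of from the actively selected, and therefore biased, labeled set.

```python
# Sketch: set SVM misclassification costs from an estimate of overall
# corpus imbalance rather than from the (biased) actively sampled data.
import random
from sklearn.svm import SVC

def imbalance_weights(unlabeled_pool, oracle, sample_size=100):
    """Label a small unbiased random sample to estimate the positive
    rate, then weight classes inversely to their estimated frequency.
    `oracle` stands in for a human annotator returning 0 or 1."""
    sample = random.sample(unlabeled_pool, sample_size)
    pos_rate = sum(oracle(x) for x in sample) / sample_size
    return {1: 1.0 / pos_rate, 0: 1.0 / (1.0 - pos_rate)}

# Hypothetical usage inside an active-learning loop:
# weights = imbalance_weights(pool, human_annotator)
# clf = SVC(kernel="linear", class_weight=weights).fit(X_labeled, y_labeled)
```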
Item: Translation memory retrieval methods (Association for Computational Linguistics, 2014-04). Bloodgood, Michael; Strauss, Benjamin.
Translation Memory (TM) systems are one of the most widely used translation technologies. An important part of TM systems is the matching algorithm that determines what translations get retrieved from the bank of available translations to assist the human translator. Although detailed accounts of the matching algorithms used in commercial systems cannot be found in the literature, it is widely believed that edit distance algorithms are used. This paper investigates and evaluates the use of several matching algorithms, including the edit distance algorithm that is believed to be at the heart of most modern commercial TM systems. This paper presents results showing how well various matching algorithms correlate with human judgments of helpfulness (collected via crowdsourcing with Amazon's Mechanical Turk). A new algorithm based on weighted n-gram precision that can be adjusted for translator length preferences consistently returns translations judged to be most helpful by translators for multiple domains and language pairs.
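To make the retrieval setup concrete, here is a rough sketch of n-gram-precision-based TM matching (a simplification: uniform weights over n-gram orders are an assumption, and the paper's length-preference adjustment is omitted). Each TM segment is scored by the precision of its n-grams against the query segment, and the best-scoring segment is retrieved.

```python
# Sketch: rank translation-memory entries by weighted n-gram precision
# against the query segment.
from collections import Counter

def ngrams(tokens, n):
    return Counter(zip(*(tokens[i:] for i in range(n))))

def weighted_ngram_precision(candidate, query, max_n=4):
    cand, qry = candidate.split(), query.split()
    score, weight = 0.0, 1.0 / max_n  # uniform weights (an assumption)
    for n in range(1, max_n + 1):
        c, q = ngrams(cand, n), ngrams(qry, n)
        matches = sum(min(count, q[g]) for g, count in c.items())
        score += weight * matches / max(1, sum(c.values()))
    return score

tm_bank = ["the contract is signed", "the contract was cancelled"]
query = "the contract is cancelled"
# Retrieve the TM segment with the highest match score for the query.
print(max(tm_bank, key=lambda s: weighted_ngram_precision(s, query)))
```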