Fine-Grained Linguistic Soft Constraints on Statistical Natural Language Processing Models
dc.contributor.advisor | Resnik, Philip | en_US |
dc.contributor.author | Marton, Yuval Yehezkel | en_US |
dc.contributor.department | Linguistics | en_US |
dc.contributor.publisher | Digital Repository at the University of Maryland | en_US |
dc.contributor.publisher | University of Maryland (College Park, Md.) | en_US |
dc.date.accessioned | 2010-02-19T06:42:53Z | |
dc.date.available | 2010-02-19T06:42:53Z | |
dc.date.issued | 2009 | en_US |
dc.description.abstract | This dissertation focuses on the effective combination of data-driven natural language processing (NLP) approaches with linguistic knowledge sources based on manual text annotation or on word grouping according to semantic commonalities. I apply fine-grained linguistic soft constraints, of a syntactic or semantic nature, to statistical NLP models, evaluated in end-to-end state-of-the-art statistical machine translation (SMT) systems. The introduction of semantic soft constraints involves intrinsic evaluation on word-pair similarity ranking tasks, extension from words to phrases, application in a novel distributional paraphrase generation technique, and the introduction of a generalized framework of which these semantic and syntactic soft constraints can be viewed as instances, and in which they can potentially be combined. In many cases, fine granularity is key to the successful combination of these soft constraints. I show how to softly constrain SMT models by adding fine-grained weighted features, each preferring translation of only a specific syntactic constituent; previous attempts using coarse-grained features yielded negative results. I also show how to softly constrain corpus-based semantic models of words ("distributional profiles") to effectively create word-sense-aware models, using the semantic word groupings found in a manually compiled thesaurus; previous attempts, which used hard constraints and resulted in aggregated, coarse-grained models, yielded lower gains. A novel paraphrase generation technique incorporating these soft semantic constraints is then also evaluated in an SMT system. This paraphrasing technique is based on the Distributional Hypothesis; its main advantage over current "pivoting" techniques for paraphrasing is its independence from parallel texts, which are a limited resource. The evaluation is done by augmenting translation models with paraphrase-based translation rules, where fine-grained scoring of the paraphrase-based rules yields significantly higher gains. The model augmentation includes a novel semantic reinforcement component: in many cases there are alternative paths for generating a paraphrase-based translation rule, and each such path reinforces a dedicated score for the "goodness" of the new rule. This augmented score is then used as a soft constraint, in a weighted log-linear feature, letting the translation model learn how much to "trust" the paraphrase-based translation rules. The work reported here is the first to use distributional semantic similarity measures to improve the performance of an end-to-end phrase-based SMT system. The unified framework for statistical NLP models with soft linguistic constraints enables, in principle, the combination of semantic and syntactic constraints, and potentially other constraints as well, in a single SMT model. | en_US |
dc.identifier.uri | http://hdl.handle.net/1903/9861 | |
dc.subject.pqcontrolled | Language, Linguistics | en_US |
dc.subject.pqcontrolled | Computer Science | en_US |
dc.subject.pquncontrolled | computational linguistics | en_US |
dc.subject.pquncontrolled | hybrid | en_US |
dc.subject.pquncontrolled | paraphrase generation | en_US |
dc.subject.pquncontrolled | semantic distance | en_US |
dc.subject.pquncontrolled | soft constraints | en_US |
dc.subject.pquncontrolled | statistical machine translation | en_US |
dc.title | Fine-Grained Linguistic Soft Constraints on Statistical Natural Language Processing Models | en_US |
dc.type | Dissertation | en_US |
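The "fine-grained weighted features" in the abstract live inside the standard log-linear model used by phrase-based and hierarchical SMT decoders. Below is a minimal sketch of that formulation, with an illustrative per-constituent soft-constraint feature; the feature form shown is an assumption for exposition, not quoted from the dissertation.

    % The decoder picks the translation e of source sentence f that
    % maximizes a weighted sum of M feature functions:
    \hat{e} = \arg\max_{e} \sum_{i=1}^{M} \lambda_i \, h_i(e, f)

    % A fine-grained syntactic soft constraint contributes one feature per
    % constituent label X (e.g., NP, VP, PP), counting how often the
    % derivation's phrase boundaries cross a source constituent of label X:
    h_X(e, f) = -\left|\{\text{source constituents of label } X
                \text{ crossed by the derivation of } e\}\right|

    % Because each weight \lambda_X is tuned separately, the model learns,
    % per label, how strongly to discourage (or even encourage) such
    % crossings, rather than forbidding them as a hard constraint would.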
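The distributional paraphrasing and semantic reinforcement steps can likewise be sketched concretely. The Python below is a toy illustration under stated assumptions: profiles are sparse context-count dictionaries, similarity is cosine, and the goodness score is a plain sum of similarities over paraphrase paths. None of the names, data, or scoring choices are taken from the dissertation's actual implementation.

    # Toy sketch of two ideas from the abstract:
    # (1) paraphrase candidates ranked by cosine similarity of distributional
    #     profiles (the Distributional Hypothesis), and
    # (2) "semantic reinforcement": each paraphrase path that re-derives the
    #     same new translation rule adds to that rule's goodness score.
    import math
    from collections import defaultdict

    def cosine(p, q):
        """Cosine similarity between two sparse context-count profiles."""
        shared = set(p) & set(q)
        dot = sum(p[c] * q[c] for c in shared)
        norm = (math.sqrt(sum(v * v for v in p.values()))
                * math.sqrt(sum(v * v for v in q.values())))
        return dot / norm if norm else 0.0

    def paraphrase_candidates(phrase, profiles, k=5):
        """Rank other phrases by distributional similarity to `phrase`."""
        target = profiles[phrase]
        scored = [(other, cosine(target, prof))
                  for other, prof in profiles.items() if other != phrase]
        return sorted(scored, key=lambda x: -x[1])[:k]

    def reinforce_rules(phrase_table, profiles, threshold=0.3):
        """For each known rule (src, tgt) and each paraphrase src' of src,
        propose the rule (src', tgt); every path that re-derives the same
        proposed rule reinforces its goodness score. (A real system would
        handle rules already in the table separately.)"""
        goodness = defaultdict(float)
        for src, tgt in phrase_table:
            for para, sim in paraphrase_candidates(src, profiles):
                if sim >= threshold:
                    goodness[(para, tgt)] += sim  # paths accumulate
        return goodness

    # Toy usage with hand-built context profiles:
    profiles = {
        "buy":      {"money": 4, "shop": 3, "goods": 2},
        "purchase": {"money": 3, "shop": 2, "goods": 3},
        "acquire":  {"money": 2, "goods": 4, "deal": 1},
    }
    table = [("buy", "acheter"), ("purchase", "acheter")]
    print(reinforce_rules(table, profiles))

In this toy run, the new rule ("acquire", "acheter") is derived twice, once via "buy" and once via "purchase", so its goodness score accumulates both similarities; an accumulated score of this kind is what the abstract describes feeding into a weighted log-linear feature, letting the model learn how much to trust paraphrase-based rules.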