Discourse-Level Language Understanding with Deep Learning

dc.contributor.advisor: Boyd-Graber, Jordan
dc.contributor.advisor: Daumé, Hal
dc.contributor.author: Iyyer, Mohit Nagaraja
dc.contributor.department: Computer Science
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2017-10-14T05:30:16Z
dc.date.available: 2017-10-14T05:30:16Z
dc.date.issued: 2017
dc.description.abstract: Designing computational models that can understand language at a human level is a foundational goal in the field of natural language processing (NLP). Given a sentence, machines are capable of translating it into many different languages, generating a corresponding syntactic parse tree, marking words that refer to people or places, and much more. These tasks are solved by statistical machine learning algorithms, which leverage patterns in large datasets to build predictive models. Many recent advances in NLP are due to deep learning models (parameterized as neural networks), which bypass user-specified features in favor of building representations of language directly from the text. Despite many deep learning-fueled advances at the word and sentence level, however, computers still struggle to understand high-level discourse structure in language, or the way in which authors combine and order different units of text (e.g., sentences, paragraphs, chapters) to express a coherent message or narrative. Part of the reason is data-related: there are no existing datasets for many contextual language-based problems, and some tasks are too complex to be framed as supervised learning problems; for the latter type, we must either resort to unsupervised learning or devise training objectives that simulate the supervised setting. Another reason is architectural: neural networks designed for sentence-level tasks require additional functionality, interpretability, and efficiency to operate at the discourse level. In this thesis, I design deep learning architectures for three NLP tasks that require integrating information across high-level linguistic context: question answering, fictional relationship understanding, and comic book narrative modeling. While these tasks are very different from each other on the surface, I show that similar neural network modules can be used in each case to form contextual representations.
dc.identifier: https://doi.org/10.13016/M2930NW6W
dc.identifier.uri: http://hdl.handle.net/1903/20159
dc.language.iso: en
dc.subject.pqcontrolled: Computer science
dc.subject.pquncontrolled: artificial intelligence
dc.subject.pquncontrolled: creative language
dc.subject.pquncontrolled: deep learning
dc.subject.pquncontrolled: machine learning
dc.subject.pquncontrolled: natural language processing
dc.subject.pquncontrolled: question answering
dc.title: Discourse-Level Language Understanding with Deep Learning
dc.type: Dissertation

Files

Original bundle (1 file)
Name: Iyyer_umd_0117E_18370.pdf
Size: 22.74 MB
Format: Adobe Portable Document Format