A Neurocomputational Model of Grounded Language Comprehension and Production at the Sentence Level

dc.contributor.advisor: Reggia, James A
dc.contributor.author: Monner, Derek
dc.contributor.department: Computer Science
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2012-02-17T06:45:47Z
dc.date.available: 2012-02-17T06:45:47Z
dc.date.issued: 2011
dc.description.abstract: While symbolic and statistical approaches to natural language processing have become undeniably impressive in recent years, such systems still display a tendency to make errors that are inscrutable to human onlookers. This disconnect with human processing may stem from the vast differences in the substrates that underlie natural language processing in artificial systems versus biological systems. To create a more relatable system, this dissertation turns to the more biologically inspired substrate of neural networks, describing the design and implementation of a model that learns to comprehend and produce language at the sentence level. The model's task is to ground simulated speech streams, representing a simple subset of English, in terms of a virtual environment. The model learns to understand and answer full-sentence questions about the environment by mimicking the speech stream of another speaker, much as a human language learner would. It is the only known neural model to date that can learn to map natural language questions to full-sentence natural language answers, where both question and answer are represented sublexically as phoneme sequences. The model addresses important points for which most other models, neural and otherwise, fail to account. First, the model learns to ground its linguistic knowledge using human-like sensory representations, gaining language understanding at a deeper level than that of syntactic structure. Second, analysis provides evidence that the model learns combinatorial internal representations, thus gaining the compositionality of symbolic approaches to cognition, which is vital for computationally efficient encoding and decoding of meaning. The model does this while retaining the fully distributed representations characteristic of neural networks, providing the resistance to damage and graceful degradation that are generally lacking in symbolic and statistical approaches. Finally, the model learns via direct imitation of another speaker, allowing it to emulate human processing with greater fidelity, thus increasing the relatability of its behavior. Along the way, this dissertation develops a novel training algorithm that, for the first time, requires only local computations to train arbitrary second-order recurrent neural networks. This algorithm is evaluated on its overall efficacy, biological feasibility, and ability to reproduce peculiarities of human learning such as age-correlated effects in second language acquisition.
dc.identifier.uri: http://hdl.handle.net/1903/12259
dc.subject.pqcontrolled: Artificial intelligence
dc.subject.pqcontrolled: Language
dc.subject.pquncontrolled: language grounding
dc.subject.pquncontrolled: long short term memory
dc.subject.pquncontrolled: question answering
dc.subject.pquncontrolled: recurrent neural network
dc.subject.pquncontrolled: second language acquisition
dc.title: A Neurocomputational Model of Grounded Language Comprehension and Production at the Sentence Level
dc.type: Dissertation
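
The abstract above frames the model's task as mapping a question, presented as a phoneme sequence, to a full-sentence answer, also produced as a phoneme sequence. The sketch below is only a minimal illustration of that input/output framing, assuming PyTorch, a standard first-order LSTM encoder-decoder, and an invented toy phoneme inventory; it is not the dissertation's architecture, which uses a second-order recurrent network trained with a locally computed algorithm.

```python
# Hypothetical sketch of phoneme-level question-to-answer mapping.
# Assumptions (not from the dissertation): PyTorch, a first-order LSTM
# encoder-decoder, and a toy phoneme inventory invented for illustration.
import torch
import torch.nn as nn

PHONEMES = ["<pad>", "<s>", "</s>", "AA", "B", "D", "IH", "K", "S", "T"]  # toy inventory
VOCAB = {p: i for i, p in enumerate(PHONEMES)}

class PhonemeSeq2Seq(nn.Module):
    def __init__(self, vocab_size: int, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, question: torch.Tensor, answer_in: torch.Tensor) -> torch.Tensor:
        # Encode the question phoneme sequence into a final hidden/cell state.
        _, state = self.encoder(self.embed(question))
        # Decode the answer phonemes conditioned on that state (teacher forcing).
        dec_out, _ = self.decoder(self.embed(answer_in), state)
        return self.out(dec_out)  # per-step logits over the phoneme inventory

# Toy usage: question and shifted answer are index sequences over the inventory.
model = PhonemeSeq2Seq(len(PHONEMES))
question = torch.tensor([[VOCAB["K"], VOCAB["AA"], VOCAB["T"]]])
answer_in = torch.tensor([[VOCAB["<s>"], VOCAB["D"], VOCAB["IH"]]])
logits = model(question, answer_in)
print(logits.shape)  # (1, 3, len(PHONEMES))
```

In the dissertation's setting, both sequences would be grounded in a virtual environment and learned by imitating another speaker's speech stream; the sketch omits grounding entirely and only shows the sequence-to-sequence shape of the task.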

Files

Original bundle
Name: Monner_umd_0117E_12702.pdf
Size: 2.69 MB
Format: Adobe Portable Document Format