A MULTIPLE REPRESENTATIONS MODEL OF THE HUMAN MIRROR NEURON SYSTEM FOR LEARNED ACTION IMITATION

dc.contributor.advisor: Gentili, Rodolphe J
dc.contributor.author: Oh, Hyuk
dc.contributor.department: Neuroscience and Cognitive Science
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2016-02-06T06:36:16Z
dc.date.available: 2016-02-06T06:36:16Z
dc.date.issued: 2015
dc.description.abstract: The human mirror neuron system (MNS) is a fundamental sensorimotor system that plays a critical role in action observation and imitation. Despite a large body of experimental and theoretical MNS studies, the visuospatial transformation between observed and imitated actions has received very limited attention. Therefore, this work proposes a neurobiologically plausible MNS model that examines the dynamics between the fronto-parietal mirror system and the parietal visuospatial transformation system during action observation and imitation. The fronto-parietal network is composed of the inferior frontal gyrus (IFG) and the inferior parietal lobule (IPL), which are postulated to generate the neural commands and the predictions of their sensorimotor consequences, respectively. The parietal regions, identified as the superior parietal lobule (SPL) and the intraparietal sulcus (IPS), are postulated to encode the visuospatial transformation that enables view-independent representations of the observed action. The middle temporal region is postulated to provide view-dependent representations, such as the direction and velocity of the observed action. In this study, the SPL/IPS, IFG, and IPL are modeled with artificial neural networks to simulate the neural mechanisms underlying action imitation. The results reveal that this neural model can replicate relevant behavioral and neurophysiological findings obtained from previous action imitation studies. Specifically, the imitator can replicate the observed actions independently of the spatial relationship with the demonstrator while generating similar synthetic functional magnetic resonance imaging (fMRI) blood oxygenation level-dependent (BOLD) responses in the IFG for both action observation and execution. Moreover, the SPL/IPS can provide view-independent visual representations through mental transformation, with a response time that increases monotonically with the rotation angle. Furthermore, the simulated neural activities reveal the emergence of both view-independent and view-dependent neural populations in the IFG. As a whole, this work suggests computational mechanisms by which visuospatial transformation processes could subserve the MNS for action observation and imitation independently of differences in anthropometry, distance, and viewpoint between the demonstrator and the imitator.
dc.identifier: https://doi.org/10.13016/M2SQ7P
dc.identifier.uri: http://hdl.handle.net/1903/17247
dc.language.iso: en
dc.subject.pqcontrolled: Artificial intelligence
dc.subject.pqcontrolled: Neurosciences
dc.subject.pqcontrolled: Robotics
dc.subject.pquncontrolled: Action Imitation
dc.subject.pquncontrolled: Mirror Neuron System
dc.subject.pquncontrolled: Neural Dynamics
dc.subject.pquncontrolled: Neural Model
dc.subject.pquncontrolled: Synthetic BOLD fMRI
dc.subject.pquncontrolled: View-based Representation
dc.title: A MULTIPLE REPRESENTATIONS MODEL OF THE HUMAN MIRROR NEURON SYSTEM FOR LEARNED ACTION IMITATION
dc.type: Dissertation
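
The abstract above centers on two computational ingredients: a visuospatial (mental-rotation) transformation in SPL/IPS whose response time grows with the rotation angle, and synthetic fMRI BOLD responses generated from simulated IFG activity during observation and execution. The sketch below is a minimal illustration of those two ideas only, not the dissertation's model: it assumes a rigid 2-D rotation applied in fixed increments as a crude response-time proxy and a canonical double-gamma hemodynamic response function for the synthetic BOLD signal; every function name and parameter is hypothetical.

```python
# Illustrative sketch only: NOT the dissertation's neural network model.
# Assumes a rigid 2-D rotation for the SPL/IPS visuospatial transformation
# and a canonical double-gamma HRF for the synthetic BOLD response.
import math
import numpy as np


def mental_rotation(view_dependent_vec, angle_deg, step_deg=10.0):
    """Rotate an observed (view-dependent) movement vector into the imitator's
    frame in small increments; the number of increments is a crude proxy for
    response time, which grows monotonically with the rotation angle."""
    steps = int(math.ceil(abs(angle_deg) / step_deg))
    theta = math.radians(angle_deg)
    rot = np.array([[math.cos(theta), -math.sin(theta)],
                    [math.sin(theta),  math.cos(theta)]])
    return rot @ view_dependent_vec, steps


def canonical_hrf(t, a1=6.0, a2=16.0, b=1.0, c=1/6):
    """Double-gamma hemodynamic response function (SPM-style parameters)."""
    g1 = (t ** (a1 - 1) * b ** a1 * np.exp(-b * t)) / math.gamma(a1)
    g2 = (t ** (a2 - 1) * b ** a2 * np.exp(-b * t)) / math.gamma(a2)
    return g1 - c * g2


def synthetic_bold(neural_activity, dt=0.1):
    """Convolve simulated neural activity with the HRF to obtain a
    synthetic BOLD time course."""
    t = np.arange(0, 30, dt)
    hrf = canonical_hrf(t)
    return np.convolve(neural_activity, hrf)[: len(neural_activity)] * dt


if __name__ == "__main__":
    observed = np.array([1.0, 0.0])            # demonstrator-frame direction
    for angle in (0, 60, 120, 180):            # viewpoint differences
        transformed, rt_steps = mental_rotation(observed, angle)
        print(f"angle={angle:3d} deg  rt_proxy={rt_steps:2d} steps  "
              f"imitator-frame direction={np.round(transformed, 2)}")

    # Identical activity bursts for observation and execution yield similar
    # synthetic BOLD responses, echoing the mirror-like profile in the IFG.
    dt = 0.1
    activity = np.zeros(600)
    activity[50:100] = 1.0                     # observation epoch
    activity[300:350] = 1.0                    # execution epoch
    bold = synthetic_bold(activity, dt)
    print("peak synthetic BOLD:", round(float(bold.max()), 3))
```

The fixed-increment rotation loop echoes the classic mental-rotation finding that response time scales roughly linearly with angular disparity, and the shared activity trace for observation and execution is only meant to show why the synthetic BOLD responses look alike in the two conditions; the actual model architecture and training are described in the dissertation itself.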

Files

Original bundle
Name: Oh_umd_0117E_16646.pdf
Size: 3.5 MB
Format: Adobe Portable Document Format