NEURO-INSPIRED AUGMENTATIONS OF UNSUPERVISED DEEP NEURAL NETWORKS FOR LOW-SWAP EMBODIED ROBOTIC PERCEPTION

dc.contributor.advisor: Perlis, Donald
dc.contributor.author: Shamwell, Earl Jared
dc.contributor.department: Neuroscience and Cognitive Science
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2018-01-23T06:42:58Z
dc.date.available: 2018-01-23T06:42:58Z
dc.date.issued: 2017
dc.description.abstract: Despite the 3-4 saccades the human eye undergoes per second, humans perceive a stable, visually constant world. During a saccade, the projection of the visual world shifts on the retina, and the net displacement on the retina would be identical if the entire visual world were instead shifted. However, humans are able to perceptually distinguish between these two conditions, perceiving a stable world in the first condition and a moving world in the second. Through new analysis, I show how biological mechanisms theorized to enable visual positional constancy implicitly contain rich, egocentric sensorimotor representations, and how, with appropriate modeling and abstraction, artificial surrogates for these mechanisms can enhance the performance of robotic systems. In support of this view, I have developed a new class of neuro-inspired, unsupervised, heterogeneous, deep predictive neural networks that are approximately 5,000%-22,000% faster (depending on the network configuration) than state-of-the-art (SOA) dense approaches while achieving comparable performance. Each model in this new family of network architectures, dubbed LightEfference (LE) (Chapter 2), DeepEfference (DE) (Chapter 2), Multi-Hypothesis DeepEfference (MHDE) (Chapter 3), and Inertial DeepEfference (IDE) (Chapter 4), achieves its substantial runtime performance increase by leveraging the embodied nature of mobile robotics and performing early fusion of freely available heterogeneous sensor and motor/intentional information. With these architectures, I show how embedding extra-visual information meant to encode an estimate of an embodied agent's immediate intention supports efficient computation of visual constancy and odometry, and greatly increases computational efficiency compared to comparable single-modality SOA approaches.
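
The abstract's central architectural idea, early fusion of visual input with a low-dimensional motor/intentional signal trained against an unsupervised photometric objective, can be sketched as follows. This is a minimal illustrative sketch in PyTorch under assumed layer sizes; the names (EarlyFusionNet, photometric_loss) and the affine-warp output are hypothetical stand-ins, not the dissertation's actual LightEfference/DeepEfference implementation.

    # Hypothetical sketch of the early-fusion idea from the abstract:
    # visual features and a motor/inertial "intention" vector are fused
    # early, before a head regresses an image transform. Illustrative only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EarlyFusionNet(nn.Module):
        def __init__(self, motor_dim: int = 6):
            super().__init__()
            # Small visual encoder: grayscale frame -> 512-d feature vector
            self.vision = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),  # 32*4*4 = 512
            )
            # Motor/intentional encoder: e.g. a commanded twist or IMU deltas
            self.motor = nn.Sequential(nn.Linear(motor_dim, 64), nn.ReLU())
            # Early fusion: concatenate both streams, then regress the
            # 6 parameters of a 2x3 affine warp of the source frame
            self.head = nn.Sequential(
                nn.Linear(512 + 64, 128), nn.ReLU(),
                nn.Linear(128, 6),
            )

        def forward(self, image: torch.Tensor, motor: torch.Tensor) -> torch.Tensor:
            fused = torch.cat([self.vision(image), self.motor(motor)], dim=1)
            return self.head(fused).view(-1, 2, 3)  # affine warp parameters

    # Unsupervised photometric training signal (sketch): warp the source
    # frame with the predicted affine and penalize disagreement with the
    # target frame, so no ground-truth labels are required.
    def photometric_loss(net, src, tgt, motor):
        theta = net(src, motor)
        grid = F.affine_grid(theta, src.shape, align_corners=False)
        pred = F.grid_sample(src, grid, align_corners=False)
        return F.l1_loss(pred, tgt)

    if __name__ == "__main__":
        # Example forward pass on dummy data
        net = EarlyFusionNet()
        src, tgt = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
        motor = torch.rand(2, 6)
        print(photometric_loss(net, src, tgt, motor).item())

Fusing the motor vector at the encoder bottleneck, before any decoding, is what lets a network of this shape substitute a cheap extra-visual motion estimate for expensive dense visual matching, which is the efficiency argument the abstract makes.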
dc.identifier: https://doi.org/10.13016/M24B2X64W
dc.identifier.uri: http://hdl.handle.net/1903/20371
dc.language.iso: en
dc.subject.pqcontrolled: Artificial intelligence
dc.subject.pqcontrolled: Neurosciences
dc.subject.pquncontrolled: Bio-Inspired
dc.subject.pquncontrolled: Deep Learning
dc.subject.pquncontrolled: Neuro-Inspired
dc.subject.pquncontrolled: Robotics
dc.subject.pquncontrolled: Unsupervised Deep Learning
dc.subject.pquncontrolled: Visual Odometry
dc.title: NEURO-INSPIRED AUGMENTATIONS OF UNSUPERVISED DEEP NEURAL NETWORKS FOR LOW-SWAP EMBODIED ROBOTIC PERCEPTION
dc.type: Dissertation

Files

Original bundle
Name: Shamwell_umd_0117E_18601.pdf
Size: 10.24 MB
Format: Adobe Portable Document Format