NEURO-INSPIRED AUGMENTATIONS OF UNSUPERVISED DEEP NEURAL NETWORKS FOR LOW-SWAP EMBODIED ROBOTIC PERCEPTION

Date

2017

Abstract

Despite the 3-4 saccades the human eye makes per second, humans perceive a stable, visually constant world. During a saccade, the projection of the visual world shifts across the retina, and the net retinal displacement would be identical if the entire visual world were instead shifted. Humans are nevertheless able to perceptually distinguish between these two conditions, perceiving a stable world in the first and a moving world in the second.

Through new analysis, I show that the biological mechanisms theorized to enable visual positional constancy implicitly contain rich, egocentric sensorimotor representations, and that, with appropriate modeling and abstraction, artificial surrogates for these mechanisms can enhance the performance of robotic systems.

In support of this view, I have developed a new class of neuro-inspired, unsupervised, heterogeneous, deep predictive neural networks that are approximately 5,000%-22,000% faster (depending on the network configuration) than state-of-the-art (SOA) dense approaches while delivering comparable performance.

Each model in this new family of network architectures, dubbed LightEfference (LE) (Chapter 2), DeepEfference (DE) (Chapter 2), Multi-Hypothesis DeepEfference (MHDE) (Chapter 3), and Inertial DeepEfference (IDE) (Chapter 4), achieves its substantial runtime speedup by leveraging the embodied nature of mobile robotics and performing early fusion of freely available heterogeneous sensor and motor/intentional information. With these architectures, I show how embedding extra-visual information that encodes an estimate of an embodied agent's immediate intention supports efficient computation of visual constancy and odometry and greatly increases computational efficiency compared to single-modality SOA approaches.
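To make the early-fusion idea concrete, the sketch below shows one minimal way such a network could fuse a motor/intentional ("efference copy") vector with image features to predict how the visual scene will shift. This is an illustrative assumption, not the dissertation's implementation: the framework (PyTorch), class name, layer sizes, and motor-vector dimensionality are all hypothetical.

    # Minimal sketch: early fusion of a motor/intentional signal with visual
    # features to predict a dense pixel-displacement field. Illustrative only.
    import torch
    import torch.nn as nn

    class EarlyFusionEfferenceNet(nn.Module):
        def __init__(self, motor_dim=6, feat_dim=64):
            super().__init__()
            # Lightweight visual encoder for the source image
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Motor/intentional branch: embeds the efference-copy vector
            self.motor_mlp = nn.Sequential(
                nn.Linear(motor_dim, feat_dim), nn.ReLU(),
            )
            # Decoder predicts a 2-D displacement (flow) field
            self.decoder = nn.Sequential(
                nn.Conv2d(feat_dim, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 2, 3, padding=1),
            )

        def forward(self, image, motor_cmd):
            feats = self.encoder(image)            # (B, F, H/4, W/4)
            motor = self.motor_mlp(motor_cmd)      # (B, F)
            # Early fusion: broadcast the motor embedding over spatial locations
            fused = feats + motor[:, :, None, None]
            return self.decoder(fused)             # predicted flow field

    # Usage: predict the displacement induced by an intended camera motion
    net = EarlyFusionEfferenceNet()
    flow = net(torch.randn(1, 3, 128, 128), torch.randn(1, 6))
    print(flow.shape)  # torch.Size([1, 2, 32, 32])

The design point this sketch illustrates is that the extra-visual signal enters the network early, alongside the visual features, so the visual pathway can be kept small and fast rather than inferring motion from dense image comparisons alone.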
