UMD Theses and Dissertations

Permanent URI for this collection: http://hdl.handle.net/1903/3

New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means there may be a delay of up to four months before a given thesis/dissertation appears in DRUM.

More information is available at Theses and Dissertations at University of Maryland Libraries.

    NEURO-INSPIRED AUGMENTATIONS OF UNSUPERVISED DEEP NEURAL NETWORKS FOR LOW-SWAP EMBODIED ROBOTIC PERCEPTION
    (2017) Shamwell, Earl Jared; Perlis, Donald; Neuroscience and Cognitive Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Despite the 3-4 saccades the human eye undergoes per second, humans perceive a stable, visually constant world. During a saccade, the projection of the visual world shifts on the retina, and the net retinal displacement would be identical if the entire visual world were instead shifted. However, humans are able to perceptually distinguish between these two conditions, perceiving a stable world in the first condition and a moving world in the second. Through new analysis, I show how biological mechanisms theorized to enable visual positional constancy implicitly contain rich, egocentric sensorimotor representations, and how, with appropriate modeling and abstraction, artificial surrogates for these mechanisms can enhance the performance of robotic systems. In support of this view, I have developed a new class of neuro-inspired, unsupervised, heterogeneous, deep predictive neural networks that are approximately 5,000%-22,000% faster (depending on the network configuration) than state-of-the-art (SOA) dense approaches, with comparable performance. Each model in this new family of network architectures, dubbed LightEfference (LE) (Chapter 2), DeepEfference (DE) (Chapter 2), Multi-Hypothesis DeepEfference (MHDE) (Chapter 3), and Inertial DeepEfference (IDE) (Chapter 4), achieves its substantial runtime performance increase by leveraging the embodied nature of mobile robotics and performing early fusion of freely available heterogeneous sensor and motor/intentional information. With these architectures, I show how embedding extra-visual information that encodes an estimate of an embodied agent's immediate intention supports efficient computation of visual constancy and odometry and greatly increases computational efficiency compared to single-modality SOA approaches.
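The early-fusion idea the abstract describes, combining visual features with a motor/efference signal before any learned transform, can be sketched in a few lines. The dimensions, function names, and single linear layer below are illustrative assumptions for the sake of the sketch, not the dissertation's actual LE/DE architecture:

```python
import random

def linear(x, W, b):
    """Apply one dense layer: y_i = sum_j W[i][j] * x[j] + b[i]."""
    return [sum(w * xj for w, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def early_fusion_forward(visual_feat, motor_feat, W, b):
    # Early fusion: concatenate the visual features with the motor/intentional
    # signal BEFORE the learned transform, so a single layer operates on the
    # joint multimodal representation (rather than fusing late, after separate
    # per-modality networks).
    fused = visual_feat + motor_feat
    return linear(fused, W, b)

# Hypothetical sizes: 4 visual features, 2 motor components, 3 output units.
random.seed(0)
visual = [0.1, 0.2, 0.3, 0.4]
motor = [0.5, -0.5]          # e.g. an efference-copy estimate of self-motion
W = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(3)]
b = [0.0, 0.0, 0.0]

out = early_fusion_forward(visual, motor, W, b)
print(len(out))  # 3 output units computed from the fused 6-dim input
```

The point of the sketch is only where the concatenation happens: the motor signal is freely available on an embodied robot, so fusing it at the input costs almost nothing while giving the network direct access to the agent's intention.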