Estimation of Elevation and Azimuth in a Neuromorphic VLSI Bat Echolocation System
Abdalla, Hisham Ahmed Nabil
Horiuchi, Timothy K
Auditory localization is an interesting and challenging problem: the location of the sound source is not spatially encoded in the peripheral sensory system as it is in the visual or somatosensory systems. Instead, it must be computed from the neural representation of the sound reaching both ears. Echolocation is a form of auditory localization; an important distinction, however, is that the sound being localized is an echo of a sound emitted by the animal itself. This dissertation presents a neuromorphic VLSI circuit model of a bat echolocation system. The acoustic cues used in our system are binaural interaural level differences (ILDs) and monaural spectral cues. We have designed an artificial bat head using 3D CAD software and fabricated it with a 3D printer; the artificial bat head generates the acoustic cues necessary for localization. We have designed and fabricated an ultrasonic cochlea chip with 16 cochlear filters and 128 spiking cochlear neurons (eight neurons per cochlear filter). Together, the filters and neurons transform the analog input into a spike-based cochlear representation. We have also designed and fabricated two feature extraction chips: a monaural spectral difference chip and a binaural ILD chip. Together, these chips extract the localization cues from the spike-based cochlear representation. The monaural spectral difference chip consists of 240 spiking neurons; each neuron compares the activity of two cochlear filters within the same ear. The binaural ILD chip consists of 32 spiking neurons (two per cochlear filter) that model the processing that takes place in the lateral superior olive (LSO). We demonstrate that the spatiotemporal pattern of spiking outputs from the feature extraction chips can be decoded to estimate the direction (elevation and azimuth) of an ultrasonic chirp.
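The cue computations described above can be illustrated with a toy software sketch. The actual system is analog VLSI, so the code below is only a hypothetical rate-based model under assumed conventions: each monaural spectral-difference "neuron" compares two of the 16 cochlear filters in one ear (the 16·15/2 = 120 ordered pairs, rectified both ways, gives 240 units, matching the chip's neuron count); each ILD "neuron" receives excitation from one ear and inhibition from the other in the same frequency channel (two per filter, giving 32 units); and the readout here is a simple nearest-template decode, which the dissertation's actual decoding scheme may not use.

```python
import numpy as np

N_FILTERS = 16  # cochlear filters per ear, as in the cochlea chip


def ild_features(left_rates, right_rates):
    """Binaural ILD cue: per-filter activity difference between ears,
    rectified in both directions (two model 'neurons' per filter),
    loosely analogous to the LSO-model neurons on the ILD chip."""
    d = np.asarray(left_rates, float) - np.asarray(right_rates, float)
    return np.concatenate([np.maximum(d, 0.0), np.maximum(-d, 0.0)])  # 32 values


def spectral_difference(rates):
    """Monaural spectral cue: each feature compares the activity of two
    cochlear filters within the same ear (all 120 pairs, rectified both
    ways -> 240 values, matching the chip's 240 spiking neurons)."""
    r = np.asarray(rates, float)
    i, j = np.triu_indices(len(r), k=1)
    d = r[i] - r[j]
    return np.concatenate([np.maximum(d, 0.0), np.maximum(-d, 0.0)])


def decode_direction(features, templates, directions):
    """Hypothetical readout: return the (elevation, azimuth) whose stored
    feature template is nearest (Euclidean distance) to the observed
    feature vector."""
    dists = [np.linalg.norm(features - t) for t in templates]
    return directions[int(np.argmin(dists))]
```

A usage sketch: record one feature vector per calibration direction, then decode a new echo by nearest-template match, e.g. `decode_direction(f, templates, [(-20, -30), (0, 0), (20, 30)])` returns an (elevation, azimuth) pair.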