Mathematics
Permanent URI for this community: http://hdl.handle.net/1903/2261
Search Results: 2 results
Item: Nonlinear Sampling Theory and Efficient Signal Recovery (2020)
Lin, Kung-Ching; Benedetto, John; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Sampling theory investigates the recovery of a signal from partial information about it. One of the simplest and best-known sampling schemes is uniform linear sampling, characterized by the celebrated classical sampling theorem. However, the requirements of uniform linear sampling are not always satisfied, motivating more general sampling theories. In this thesis, we discuss three sampling scenarios: signal quantization, compressive sensing, and deep neural networks.

In signal quantization theory, the inability of digital devices to store analog samples exactly introduces distortion when the signal is reconstructed from its samples, and different quantization schemes have been proposed to minimize this distortion. We adapt a quantization scheme used in analog-to-digital conversion, called signal decimation, to finite-dimensional signals, and in doing so achieve the theoretically optimal decay rate for the reconstruction error.

Compressive sensing investigates the recovery of high-dimensional signals from incomplete samples, which has been proven feasible as long as the signal is sufficiently sparse. To date, the most successful constructions are random rather than deterministic: whereas random constructions allow the sparsity to be almost as large as the ambient dimension, current deterministic constructions require the sparsity to be at most the square root of the ambient dimension. This apparent barrier is the well-known square-root bottleneck. In this thesis, we propose a new explicit sampling scheme as a candidate for deterministic compressive sensing. We present a partial result; the full generality is still work in progress.
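The abstract above contrasts random and deterministic measurement matrices for sparse recovery. As a generic illustration of the random setting (not the deterministic scheme proposed in the thesis), the sketch below recovers a k-sparse vector from far fewer random Gaussian measurements than the ambient dimension, using orthogonal matching pursuit; all names and parameters are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x with y = A x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of y on the columns selected so far.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 128, 40, 4                            # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random Gaussian sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x                                       # m incomplete samples of x
x_hat = omp(A, y, k)
print(np.linalg.norm(x - x_hat))                # near-zero with high probability
```

With Gaussian measurements, exact recovery of a k-sparse vector succeeds with high probability once m is on the order of k log n, which is why random constructions evade the square-root bottleneck described above.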
For deep neural networks, one approximates signals with neural networks, drawing many samples in order to find an optimal approximating network. A common approach is stochastic gradient descent, but because the optimization problem is non-convex, it is unclear whether the resulting network is in fact optimal. We follow an alternative approach that uses derivatives of the signal for stable reconstruction. In this thesis, we focus on non-smooth signals; using weak differentiation, stable reconstruction is readily obtained for one-layer neural networks. The two-layer case is work in progress, and our approach to it is outlined in this thesis.

Item: Sparse Signal Representation in Digital and Biological Systems (2016)
Guay, Matthew; Czaja, Wojciech; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Theories of sparse signal representation, in which a signal is decomposed as the sum of a small number of constituent elements, play increasing roles in both mathematical signal processing and neuroscience, despite the differences between the signal models used in the two domains. After reviewing preliminary material on sparse signal models, I use compressed sensing for the electron tomography of biological structures as a target for exploring the efficacy of sparse signal reconstruction in a challenging application domain. My research in this area addresses a topic of keen interest to the biological microscopy community, and has resulted in tomographic reconstruction software that is competitive with the state of the art in its field. Moving from the linear signal domain into the nonlinear dynamics of neural encoding, I explain the sparse coding hypothesis in neuroscience and its relationship with olfaction in locusts.
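Sparse reconstruction of the kind used in compressed-sensing tomography is commonly posed as an l1-regularized least-squares problem. The following is a minimal, generic sketch of iterative soft-thresholding (ISTA) for that problem; it is not the thesis's reconstruction software, and every parameter is illustrative.

```python
import numpy as np

def ista(A, y, lam, n_iter=1000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L        # gradient step on the smooth term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(7)
m, n, k = 40, 128, 4                           # underdetermined: m < n
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for a projection operator
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = ista(A, y, lam=0.05)
print(np.linalg.norm(x_hat - x_true))          # small: lasso bias plus solver error
```

The soft-thresholding step is what produces a sparse iterate; smaller values of `lam` reduce the shrinkage bias at the cost of a weaker sparsity prior.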
I implement a numerical ODE model of the activity of the neural populations responsible for sparse odor coding in locusts, as part of a project on offset spiking in the Kenyon cells, and explain the validation procedures we devised to assess the model's similarity to the biology. The thesis concludes with the development of a new, simplified model of locust olfactory network activity, which seeks, with some success, to explain statistical properties of the sparse coding processes carried out in the network.
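Firing-rate ODE models of interacting neural populations are typically integrated with explicit time-stepping schemes. The sketch below is a generic two-population (excitatory/inhibitory) rate model driven by a transient stimulus, integrated with forward Euler; it only illustrates the modeling style and is not the locust model developed in the thesis. All constants are made up.

```python
import numpy as np

def rate_dynamics(r, t):
    """Generic E/I firing-rate equations; every constant here is illustrative."""
    E, I = r
    stim = 1.5 if 1.0 <= t <= 2.0 else 0.0                  # transient odor-like pulse
    dE = (-E + np.tanh(1.2 * E - 1.0 * I + stim)) / 0.02    # tau_E = 20 ms
    dI = (-I + np.tanh(1.0 * E)) / 0.05                     # tau_I = 50 ms
    return np.array([dE, dI])

dt, T = 1e-4, 3.0
ts = np.arange(0.0, T, dt)
r = np.zeros(2)                      # start at the resting state
trace = np.empty((ts.size, 2))
for i, t in enumerate(ts):           # forward Euler integration
    trace[i] = r
    r = r + dt * rate_dynamics(r, t)

# The excitatory population responds during the stimulus and relaxes afterwards.
print(trace[:, 0].max(), np.abs(trace[-1]).max())
```

Validation of such a model against biology, as described above, would compare statistics of the simulated responses (onset, offset, and transient amplitude) with recorded population activity.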