Theses and Dissertations from UMD
Permanent URI for this community: http://hdl.handle.net/1903/2
New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a 4-month delay in the appearance of a given thesis/dissertation in DRUM.
More information is available at Theses and Dissertations at University of Maryland Libraries.
Browse
3 results
Search Results
Item
Nonlinear Sampling Theory and Efficient Signal Recovery (2020)
Lin, Kung-Ching; Benedetto, John; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Sampling theory investigates the recovery of a signal from partial information about it. One of the simplest and best-known sampling schemes is uniform linear sampling, characterized by the celebrated classical sampling theorem. However, the requirements of uniform linear sampling are not always satisfied, motivating more general sampling theories. In this thesis, we discuss three sampling scenarios: signal quantization, compressive sensing, and deep neural networks.

In signal quantization theory, the inability of digital devices to store analog samples exactly leads to distortion when the signal is reconstructed from its samples. Various quantization schemes have been proposed to minimize this distortion. We adapt a quantization scheme used in analog-to-digital conversion, called signal decimation, to finite-dimensional signals, and in doing so achieve the theoretically optimal decay rate for the reconstruction error.

Compressive sensing investigates the possibility of recovering high-dimensional signals from incomplete samples, which has been proven feasible as long as the signal is sufficiently sparse. To this point, all of the most successful examples follow from random constructions rather than deterministic ones. Whereas the sparsity of the signal can be almost as large as the ambient dimension for random constructions, current deterministic constructions require the sparsity to be at most the square root of the ambient dimension. This apparent barrier is the well-known square-root bottleneck. In this thesis, we propose a new explicit sampling scheme as a candidate for deterministic compressive sensing. We present a partial result; the full generality is still work in progress.

For deep neural networks, one approximates signals with neural networks, and many samples must be drawn in order to find an optimal approximating network. A common approach is stochastic gradient descent, but the non-convexity of the optimization makes it unclear whether the resulting network is indeed optimal. We follow an alternative approach, utilizing derivatives of the signal for stable reconstruction. In this thesis, we focus on non-smooth signals; using weak differentiation, stable reconstruction is readily obtained for one-layer neural networks. Our approach to the two-layer case, on which we are currently working, is also outlined.

Item
Compressed Sensing Beyond the IID and Static Domains: Theory, Algorithms and Applications (2017)
Kazemipour, Abbas; Wu, Min; Babadi, Behtash; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Sparsity is a ubiquitous feature of many real-world signals, such as natural images and neural spiking activity. Conventional compressed sensing exploits sparsity to recover low-dimensional signal structures in high ambient dimensions from few measurements, assuming i.i.d. measurements are at our disposal. Real-world scenarios, however, typically exhibit non-i.i.d. and dynamic structures and are confined by physical constraints, preventing the applicability of the theoretical guarantees of compressed sensing and limiting its applications. In this thesis, we develop new theory, algorithms, and applications for non-i.i.d. and dynamic compressed sensing under such constraints. In the first part of the thesis, we derive new optimal sampling-complexity tradeoffs for two processes commonly used to model dependent temporal structures: autoregressive processes and self-exciting generalized linear models.
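The compressed-sensing setting the two abstracts above describe — recovering a sparse high-dimensional signal from few random measurements — can be illustrated with a minimal sketch. This is a generic textbook experiment, not code or notation from either thesis: it draws a random Gaussian sensing matrix (the "random construction" the first abstract contrasts with deterministic ones) and recovers an s-sparse signal greedily via orthogonal matching pursuit. All dimensions and names here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 256, 80, 5  # ambient dimension, number of measurements, sparsity

# Random Gaussian sensing matrix: m << n measurements suffice for s-sparse signals.
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Ground-truth s-sparse signal and its compressed measurements.
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)
y = A @ x

# Orthogonal matching pursuit: greedily pick the column most correlated with
# the residual, then least-squares refit on the selected support.
resid, idx = y.copy(), []
for _ in range(s):
    idx.append(int(np.argmax(np.abs(A.T @ resid))))
    coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    resid = y - A[:, idx] @ coef

x_hat = np.zeros(n)
x_hat[idx] = coef
print(float(np.linalg.norm(x_hat - x)))  # reconstruction error
```

With m well above the s log(n) scale, a random Gaussian matrix recovers the true support with high probability; the deterministic constructions discussed in the first abstract aim for comparable guarantees without randomness.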
Our theoretical results successfully recover the temporal dependencies in neural activity, financial data, and traffic data. Next, we develop a new framework for studying temporal dynamics by introducing compressible state-space models, which utilize spatial and temporal sparsity simultaneously. We develop a fast algorithm for optimal inference on such models and prove its optimal recovery guarantees. Our algorithm shows significant improvement in detecting sparse events in biological applications such as spindle detection and calcium deconvolution. Finally, we develop a sparse Poisson image reconstruction technique and the first compressive two-photon microscope, which uses lines of excitation across the sample at multiple angles. We recover diffraction-limited images from relatively few incoherently multiplexed measurements, at a rate of 1.5 billion voxels per second.

Item
Sparse Signal Representation in Digital and Biological Systems (2016)
Guay, Matthew; Czaja, Wojciech; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Theories of sparse signal representation, wherein a signal is decomposed as the sum of a small number of constituent elements, play increasing roles in both mathematical signal processing and neuroscience, despite the differences between the signal models used in the two domains. After reviewing preliminary material on sparse signal models, I use work on compressed sensing for the electron tomography of biological structures as a target for exploring the efficacy of sparse signal reconstruction in a challenging application domain. My research in this area addresses a topic of keen interest to the biological microscopy community and has resulted in tomographic reconstruction software that is competitive with the state of the art in its field.

Moving from the linear signal domain into the nonlinear dynamics of neural encoding, I explain the sparse coding hypothesis in neuroscience and its relationship to olfaction in locusts. I implement a numerical ODE model of the activity of the neural populations responsible for sparse odor coding in locusts, as part of a project involving offset spiking in the Kenyon cells, and I explain the validation procedures we devised to help assess the model's similarity to the biology. The thesis concludes with the development of a new, simplified model of locust olfactory network activity, which seeks, with some success, to explain statistical properties of the sparse coding processes carried out in the network.
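The last abstract's notion of sparse representation — a signal written as the sum of a few dictionary elements — can also be sketched concretely. The example below is a generic illustration, not drawn from the thesis: it builds a signal from four atoms of a hypothetical overcomplete dictionary and recovers a sparse coefficient vector with iterative soft-thresholding (ISTA), a standard solver for the l1-penalized least-squares problem. The dictionary, sizes, and penalty are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 64, 128  # signal length, dictionary size (overcomplete: k > n)

# Hypothetical random dictionary with unit-norm columns (atoms).
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)

# A signal composed of 4 dictionary atoms.
coef_true = np.zeros(k)
coef_true[[3, 40, 77, 100]] = [1.5, -2.0, 1.0, 0.8]
signal = D @ coef_true

# ISTA for min_c (1/2)||D c - signal||^2 + lam * ||c||_1:
# gradient step on the quadratic term, then soft-threshold by step * lam.
lam = 0.05
step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / (largest singular value)^2
c = np.zeros(k)
for _ in range(500):
    z = c - step * (D.T @ (D @ c - signal))
    c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

rel_err = np.linalg.norm(D @ c - signal) / np.linalg.norm(signal)
print(int(np.sum(np.abs(c) > 0.3)), float(rel_err))
```

The recovered coefficient vector is sparse, with its largest entries on the atoms actually used to build the signal; the small bias in the reconstruction comes from the l1 shrinkage.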