Nonlinear Sampling Theory and Efficient Signal Recovery

dc.contributor.advisor: Benedetto, John
dc.contributor.author: Lin, Kung-Ching
dc.contributor.department: Mathematics
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2020-07-08T05:35:19Z
dc.date.available: 2020-07-08T05:35:19Z
dc.date.issued: 2020
dc.description.abstract: Sampling theory investigates the recovery of a signal from partial information about it. One of the simplest and best-known sampling schemes is uniform linear sampling, characterized by the celebrated classical sampling theorem. However, the requirements of uniform linear sampling cannot always be met, motivating more general sampling theories. In this thesis, we discuss three sampling scenarios: signal quantization, compressive sensing, and deep neural networks. In signal quantization theory, the inability of digital devices to store analog samples exactly introduces distortion when the signal is reconstructed from its samples, and various quantization schemes have been proposed to minimize this distortion. We adapt signal decimation, a quantization scheme used in analog-to-digital conversion, to finite-dimensional signals, and in doing so achieve the theoretically optimal decay rate for the reconstruction error. Compressive sensing investigates the possibility of recovering high-dimensional signals from incomplete samples; recovery has been proven feasible provided the signal is sufficiently sparse. To date, the most successful sampling schemes arise from random constructions rather than deterministic ones. Whereas random constructions allow the sparsity to be almost as large as the ambient dimension, current deterministic constructions require the sparsity to be at most the square root of the ambient dimension. This apparent barrier is the well-known square-root bottleneck. In this thesis, we propose a new explicit sampling scheme as a candidate for deterministic compressive sensing. We present a partial result; the fully general case remains work in progress. In the deep neural network setting, one approximates signals with neural networks, and many samples must be drawn to find an optimal approximating network. A common approach is to employ stochastic gradient descent, but because the optimization problem is non-convex, it is unclear whether the resulting network is indeed optimal. We follow an alternative approach that utilizes the derivatives of the signal for stable reconstruction. In this thesis, we focus on non-smooth signals and use weak differentiation to obtain stable reconstruction for one-layer neural networks. The two-layer case is work in progress, and our approach to it is outlined in the thesis.
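As a minimal illustration of the compressive sensing setting described in the abstract, the sketch below recovers a sparse signal from far fewer random linear measurements than its ambient dimension. The random Gaussian measurement matrix and the greedy orthogonal matching pursuit solver are standard textbook choices assumed here for illustration; they are not the thesis's deterministic construction, and the dimensions (n = 128, m = 50, k = 3) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 50, 3  # ambient dimension, number of measurements, sparsity

# A k-sparse signal in R^n: only k of the n entries are nonzero.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# Random Gaussian measurement matrix (a standard random construction).
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x  # m incomplete linear samples of x, with m << n

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A,
    re-fitting the coefficients by least squares at each step."""
    residual = y.copy()
    selected = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        selected.append(j)
        coef, *_ = np.linalg.lstsq(A[:, selected], y, rcond=None)
        residual = y - A[:, selected] @ coef
    xhat = np.zeros(A.shape[1])
    xhat[selected] = coef
    return xhat

xhat = omp(A, y, k)
err = np.linalg.norm(xhat - x) / np.linalg.norm(x)
```

With these dimensions the sparse signal is recovered essentially exactly, illustrating the abstract's point that recovery from incomplete samples is feasible when the signal is sufficiently sparse and the measurements are random.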
dc.identifier: https://doi.org/10.13016/tmyn-0u1m
dc.identifier.uri: http://hdl.handle.net/1903/26073
dc.language.iso: en
dc.subject.pqcontrolled: Mathematics
dc.subject.pquncontrolled: compressed sensing
dc.subject.pquncontrolled: neural networks
dc.subject.pquncontrolled: sampling theory
dc.subject.pquncontrolled: signal quantization
dc.subject.pquncontrolled: signal recovery
dc.title: Nonlinear Sampling Theory and Efficient Signal Recovery
dc.type: Dissertation

Files

Original bundle

Name: Lin_umd_0117E_20684.pdf
Size: 1.77 MB
Format: Adobe Portable Document Format