Mathematics — http://hdl.handle.net/1903/2261 (feed retrieved 2019-04-21)

Locally Recoverable Codes From Algebraic Curves
Ballentine, Sean Frederick
http://hdl.handle.net/1903/21655 (2018)
Locally recoverable (LRC) codes have the property that erased coordinates can be recovered by retrieving a small amount of the information contained in the entire codeword. An LRC code achieves this by making each coordinate a function of a small number of other coordinates. Since some algebraic constructions of LRC codes require that $n \leq q$, where $n$ is the length and $q$ is the size of the field, it is natural to ask whether we can generate codes over a small field from a code over an extension. Trace codes achieve this by taking the field trace of every coordinate of a code. In this thesis, we give necessary and sufficient conditions for when the local recoverability property is retained when taking the trace of certain LRC codes.
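The locality property can be illustrated with a deliberately simple parity-based code — this is a toy sketch of the concept only, not the algebraic-curve constructions of the thesis; the field size and group structure below are illustrative assumptions.

```python
# Toy illustration of locality (not the curve-based codes of the thesis):
# split a message into groups of r symbols over a small prime field and
# append one parity symbol per group.  Any single erased coordinate is
# then a function of only the r other coordinates in its group.

P = 7  # GF(7); chosen only for illustration

def encode(message, r):
    """Append a parity symbol (group sum mod P) to each group of r symbols."""
    assert len(message) % r == 0
    codeword = []
    for i in range(0, len(message), r):
        group = message[i:i + r]
        codeword.extend(group + [sum(group) % P])
    return codeword

def recover(codeword, erased, r):
    """Recover the erased coordinate from the r others in its group."""
    g = erased // (r + 1)                      # group containing the erasure
    group = codeword[g * (r + 1):(g + 1) * (r + 1)]
    local = erased % (r + 1)                   # position within the group
    others = group[:local] + group[local + 1:]
    if local == r:                             # the parity symbol was erased
        return sum(others) % P
    return (others[-1] - sum(others[:-1])) % P  # last entry is the parity

cw = encode([3, 5, 1, 6, 2, 4], r=3)           # length n = 8, locality r = 3
assert all(recover(cw, j, 3) == cw[j] for j in range(len(cw)))
```

Each erasure is repaired by reading only three symbols rather than the whole codeword, which is precisely the locality property the thesis studies in far more powerful constructions.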
This thesis also explores a subfamily of LRC codes with hierarchical locality (H-LRC), which have tiers of recoverability. We provide a general construction of codes with two levels of hierarchy from maps between algebraic curves and present several families arising from quotients of curves by a subgroup of automorphisms. We consider specific examples from rational, elliptic, Kummer, and Artin-Schreier curves, as well as asymptotically good families of H-LRC codes from curves related to the Garcia-Stichtenoth tower.
UNDERSTANDING EXTREME WAVES USING WAVELETS: ANALYSIS, ALGORITHMS, AND NUMERICAL STUDIES
Zakharov, Arseny Maksimovich
http://hdl.handle.net/1903/21623 (2018)
A method for studying extreme-wave solutions of the 1+1D nonlinear Schr\"{o}dinger equation (NLSE) with periodic boundary conditions is presented in this work. Existing methods for solving the NLSE in the periodic case usually require information about the full period, which may not be obtainable when experimental data are collected outside laboratory settings. In addition, some NLSE solutions contain fine details and have extremely long periods, so a very large mesh would be required to simulate the propagation of the wave numerically. Finally, since some solutions experience exponential growth only once in their lifetime, the number of time steps necessary to numerically recreate an extreme or rogue wave may be significant.
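As context for the meshing and time-stepping difficulties described above, direct simulation of the periodic NLSE is typically done with a split-step Fourier scheme. The sketch below uses one common normalization of the focusing equation, $i u_t + \tfrac{1}{2} u_{xx} + |u|^2 u = 0$; the normalization, grid, and initial condition are our illustrative assumptions, not the thesis's conventions.

```python
import numpy as np

# Minimal split-step Fourier propagator for the focusing 1+1D NLSE
#   i u_t + (1/2) u_xx + |u|^2 u = 0
# with periodic boundary conditions.

L, N = 40.0, 512                              # spatial period and grid size
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)    # angular wavenumbers

def step(u, dt):
    """One Strang split step: half linear, full nonlinear, half linear."""
    u = np.fft.ifft(np.exp(-0.5j * k**2 * dt / 2) * np.fft.fft(u))
    u = u * np.exp(1j * np.abs(u)**2 * dt)     # |u| is constant on this substep
    u = np.fft.ifft(np.exp(-0.5j * k**2 * dt / 2) * np.fft.fft(u))
    return u

# Weakly perturbed plane wave: the classic Benjamin-Feir instability setup.
u = (1.0 + 0.01 * np.cos(2 * np.pi * x / L)).astype(complex)
for _ in range(2000):
    u = step(u, dt=1e-3)

# Both substeps are unitary, so the discrete L2 norm is conserved exactly --
# a standard sanity check for the scheme.
norm = np.sqrt(np.sum(np.abs(u)**2) * L / N)
```

Note that resolving a solution with fine spatial detail forces `N` up, and capturing a single late-time growth event forces the step count up, which is exactly the cost the wavelet-based approach of this work is designed to avoid.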
A way to determine whether a solution is stable with respect to small perturbations (in the Benjamin-Feir sense) is available in the literature: one represents the solution using Riemann theta functions that depend on a set of parameters which, in particular, can be used to determine stability. An algorithm for finding those parameters, based on a wavelet representation, is developed here. The existence of wavelet families with compact support allows restricting the analysis of the solution to a given interval, and this approach is found to work for incomplete sets of input data. The implementation of the algorithm requires evaluating the integrals of wavelet triple products (triplets). A method to evaluate those triplets analytically is described, which avoids the need to approximate the wavelets numerically. The triplet values can be precomputed independently of the specific problem, which in turn allows the implemented algorithm to run on desktop computers. To demonstrate the efficiency of the method, various simulations have been performed using data obtained by the research group. The algorithm proved to be efficient and robust, correctly processing the input even with small-to-moderate noise in the signal, unlike curve-fitting methods, which were found to fail in the presence of noise. The analytical basis and algorithms developed in this dissertation can be useful for examining extreme or freak waves that arise in a number of contexts, as well as solutions with localized features in space and time.
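The triplets mentioned above are presumably integrals of the form below (the notation $\psi_i$ for the wavelet basis functions is ours, not necessarily the thesis's):

```latex
% Wavelet triple-product integrals ("triplets"), in assumed notation:
T_{ijk} \;=\; \int \psi_i(x)\,\psi_j(x)\,\psi_k(x)\,dx .
```

Because compactly supported wavelets overlap only when their supports intersect, most such triplets vanish, so the table of values is sparse — consistent with the claim that precomputing it keeps the algorithm within reach of a desktop computer.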
Data Fusion based on the Density Ratio Model
Wang, Chen
http://hdl.handle.net/1903/21602 (2018)
A vast amount of the statistical literature deals with a single sample coming from a distribution, where the problem is to make inferences about that distribution by estimation and testing procedures. Data fusion is the process of integrating multiple data sources in the hope of obtaining more accurate inference than that provided by a single data source, the expectation being that fused data are more informative than the individual inputs. This requires appropriate statistical methods that can draw inference from multiple data sources. The Density Ratio Model allows semiparametric inference about probability distributions from fused data. In this dissertation, we discuss three types of problems based on the Density Ratio Model. First, we consider a system of sensors, each producing data according to some probability distribution; the parametric connection between the distributions allows various hypothesis tests, including that of equidistribution, which are very helpful in detecting abnormalities in mechanical systems. A second data fusion problem is small area estimation, where borrowing strength occurs by using data from all areas where information is available. Finally, real data can be fused with other real data, or even with artificial data: a given sample can be fused with computer-generated data, giving rise to out-of-sample fusion (OSF). This approach is very helpful for estimating a small threshold exceedance probability when the sample is not large enough and consists only of values below the threshold.
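In a common formulation (our notation, not necessarily the dissertation's), the Density Ratio Model ties each sample distribution $g_i$ to a reference distribution $g$ through an exponential tilt:

```latex
% Density Ratio Model: semiparametric link between m distributions and a
% reference g, with a known "distortion" function h and unknown parameters.
\frac{g_i(x)}{g(x)} \;=\; \exp\!\bigl(\alpha_i + \boldsymbol{\beta}_i^{\top} h(x)\bigr),
\qquad i = 1, \dots, m .
```

Under this link, the reference $g$ is left unspecified (the nonparametric part), while the test of equidistribution mentioned above reduces to the parametric hypothesis $\alpha_i = 0,\ \boldsymbol{\beta}_i = 0$.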
ESSAYS IN STATISTICAL ANALYSIS: ISOTONIC REGRESSION AND FILTERING
Xue, Jinhang
http://hdl.handle.net/1903/21582 (2018)
In many real-world applications in optimal information collection and stochastic
approximation, statistical estimators are often constructed to learn the true parameter
value of some utility functions or underlying signals. Many of these estimators
exhibit excellent empirical performance, but full analyses of their consistency
have not previously been available, thus putting decision-makers in somewhat of
a predicament regarding implementation. The goal of this dissertation is to fill
this gap by supplying the missing consistency proofs.
The first part of this thesis considers the consistency of estimating a monotonic
cost function which appears in an optimal learning algorithm that incorporates
isotonic regression with a Bayesian policy known as Knowledge Gradient with
Discrete Priors (KGDP). Isotonic regression deals with regression problems under
order constraints. Previous literature proposed to estimate the cost function by
a weighted sum of a pool of candidate curves, each of which is generated by the
isotonic regression estimator from all previously collected observations, with
the weights calculated by KGDP. Our primary objective is to
establish the consistency of the suggested estimator. Some minor results,
regarding the knowledge gradient algorithm and the isotonic regression estimator
under insufficient observations, are also discussed.
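The standard solver behind the isotonic regression estimator is the pool adjacent violators algorithm (PAVA). The sketch below is a minimal unweighted version for intuition — it is not the thesis's KGDP-weighted candidate-curve estimator.

```python
def pava(y):
    """Pool Adjacent Violators: least-squares fit to y under a
    nondecreasing constraint.  Each block stores [sum, count]."""
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # Merge while the newest block's mean drops below its predecessor's
        # (compared via cross-multiplication to avoid division).
        while (len(blocks) > 1 and
               blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)   # every point in a block gets the block mean
    return fit

# The violating pair (5, 3) is pooled into its mean 4.
print(pava([1, 5, 3, 6]))  # → [1.0, 4.0, 4.0, 6.0]
```

Each merge replaces a descending run by its average, so the output is the closest nondecreasing sequence to the data in the least-squares sense.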
The second part of this thesis focuses on the convergence of the bias-adjusted
Kalman filter (BAKF). The BAKF algorithm is designed to optimize the statistical
estimation of a non-stationary signal that can only be observed with stochastic
noise. The algorithm has numerous applications in dynamic programming and signal
processing. However, a consistency analysis of the process that approximates the
underlying signal has heretofore not been available. We resolve this open issue
by showing that the BAKF stepsize satisfies the well-known conditions on almost
sure convergence of a stochastic approximation sequence, with only one additional
assumption on the convergence rate of the signal compared to those used in the
derivation of the original problem.
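The well-known conditions referred to above are presumably the classical Robbins-Monro conditions on the (possibly random) stepsize sequence $\alpha_n$ of a stochastic approximation scheme:

```latex
% Robbins--Monro stepsize conditions for almost sure convergence:
\sum_{n=0}^{\infty} \alpha_n = \infty \quad \text{a.s.},
\qquad
\sum_{n=0}^{\infty} \alpha_n^{2} < \infty \quad \text{a.s.}
```

The first condition ensures the iterates can travel arbitrarily far from a bad start; the second damps the accumulated noise, and the BAKF result shows its adaptive stepsize satisfies both.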