DRUM Community: Mathematics
http://hdl.handle.net/1903/2261
2015-08-05T10:27:56Z

An Historical and Critical Development of the Theory of Legendre Polynomials Before 1900
http://hdl.handle.net/1903/16857
Title: An Historical and Critical Development of the Theory of Legendre Polynomials Before 1900
Authors: Laden, Hyman N.
1938-01-01T00:00:00Z

Multiscale Analysis and Diffusion Semigroups with Applications
http://hdl.handle.net/1903/16708
Title: Multiscale Analysis and Diffusion Semigroups with Applications
Authors: Yacoubou Djima, Karamatou Adjoke
Abstract: Multiscale (or multiresolution) analysis is used to represent signals or functions at increasingly high resolution. In this thesis, we develop multiresolution representations based on frames, which are overcomplete sets of vectors or functions that span an inner product space.
First, we explore composite frames, which generalize certain representations capable of capturing directionality in data. We show that we can obtain composite frames for L^2(R^n) given two main ingredients: 1) dilation operators based on matrices from admissible subgroups G_A and G, and 2) a generating function that is refinable with respect to G_A and G.
We also construct frame multiresolution analyses (MRA) for L^2-functions of spaces of homogeneous type. In this instance, dilations are represented by operators that come from the discretization of a compact symmetric diffusion semigroup. The eigenvectors shared by elements of the compact symmetric diffusion semigroup can be used to define an orthonormal MRA for L^2. We introduce several frame systems that yield an equivalent MRA, notably composite diffusion frames, which are built with the composition of two "similar" compact symmetric diffusion semigroups.
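The discretized-semigroup idea can be sketched numerically: dyadic powers of a symmetric diffusion operator share a single set of eigenvectors and act as progressively coarser low-pass filters. The following toy construction (the graph, normalization, and scale choices are illustrative assumptions, not the thesis's construction) builds such operators from an affinity matrix:

```python
import numpy as np

def diffusion_scales(W, n_scales=3):
    """Dyadic powers of a symmetric diffusion operator as low-pass filters.

    W is a symmetric nonnegative affinity matrix; T = D^{-1/2} W D^{-1/2}
    is a symmetrized diffusion operator. Its powers T^(2^j) attenuate
    eigencomponents with small eigenvalues, so the operators act at
    increasingly coarse dyadic scales as j grows."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    T = D_inv_sqrt @ W @ D_inv_sqrt
    lam, U = np.linalg.eigh(T)  # eigenvectors shared by every power of T
    return [U @ np.diag(lam ** (2 ** j)) @ U.T for j in range(n_scales)]
```

Because all the powers diagonalize in the same eigenbasis, the non-leading eigenvalues shrink geometrically with the scale index, which is the mechanism the shared-eigenvector MRA exploits.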
The last part of this thesis is an application of Laplacian Eigenmaps (LE) to a biomedical problem: Age-Related Macular Degeneration. LE, a tool in the family of diffusion methods, uses similarities at local scales to provide global analysis of data sets. We propose a novel approach with two steps. First, we apply LE to retinal images, provided by the National Institutes of Health, for feature enhancement and dimensionality reduction. Then, using an original Vectorized Matched Filtering technique, we detect retinal anomalies in eigenimages produced by the LE algorithm.
2015-01-01T00:00:00Z

Stochastic Simulation: New Stochastic Approximation Methods and Sensitivity Analyses
http://hdl.handle.net/1903/16707
Title: Stochastic Simulation: New Stochastic Approximation Methods and Sensitivity Analyses
Authors: Chau, Marie
Abstract: In this dissertation, we propose two new types of stochastic approximation (SA) methods and study the sensitivity of SA and of a stochastic gradient method to various input parameters. First, we summarize the most common stochastic gradient estimation techniques, both direct and indirect, as well as the two classical SA algorithms, Robbins-Monro (RM) and Kiefer-Wolfowitz (KW), followed by some well-known modifications to the step size, output, gradient, and projection operator.
Second, we introduce two new stochastic gradient methods in SA for univariate and multivariate stochastic optimization problems. Under a setting where both direct and indirect gradients are available, our new SA algorithms estimate the gradient using a hybrid estimator, which is a convex combination of a symmetric finite difference-type gradient estimate and an average of two associated direct gradient estimates. We derive variance minimizing weights that lead to desirable theoretical properties and prove convergence of the SA algorithms.
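The hybrid recursion described above can be sketched in a few lines for the univariate case. The step-size choices, the fixed weight `lam`, and the test functions below are illustrative placeholders; the thesis's variance-minimizing weights are not reproduced here:

```python
import numpy as np

def hybrid_gradient(f, grad, x, c, lam, rng):
    """Convex combination of a symmetric finite-difference gradient
    estimate and the average of two direct gradient estimates taken at
    the same perturbed points. lam is the weight on the direct part
    (fixed here; the thesis derives variance-minimizing weights)."""
    fd = (f(x + c, rng) - f(x - c, rng)) / (2.0 * c)
    direct = 0.5 * (grad(x + c, rng) + grad(x - c, rng))
    return (1.0 - lam) * fd + lam * direct

def hybrid_sa(f, grad, x0, n_iter=500, lam=0.5, seed=0):
    """Robbins-Monro-type recursion x_{k+1} = x_k - a_k * g_k, where g_k
    is the hybrid gradient estimate (illustrative step-size sequences)."""
    rng = np.random.default_rng(seed)
    x = x0
    for k in range(1, n_iter + 1):
        a_k = 1.0 / k           # gain sequence
        c_k = 1.0 / k ** 0.25   # finite-difference half-width
        x -= a_k * hybrid_gradient(f, grad, x, c_k, lam, rng)
    return x
```

Minimizing a noisy quadratic with optimum at some point theta, for instance, drives the iterates toward theta while both gradient sources contribute to each update.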
Next, we study the finite-time performance of the KW algorithm and its sensitivity to the step size parameter, along with two of its adaptive variants, namely Kesten's rule and scaled-and-shifted KW (SSKW). We conduct a sensitivity analysis of KW and explore the tightness of a mean-squared error (MSE) bound for quadratic functions, a relevant issue for determining how long to run an SA algorithm. Then, we propose two new adaptive step size sequences inspired by both Kesten's rule and SSKW, which address some of their weaknesses. Instead of using one step size sequence, our adaptive step size is based on two deterministic sequences, and the step size used in the current iteration depends on the perceived proximity of the current iterate to the optimum. In addition, we introduce a method to adaptively adjust the two deterministic sequences.
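Kesten's rule, one of the adaptive variants mentioned above, can be sketched compactly in the scalar case: the step-size index advances only when successive gradient estimates change sign, so the step stays large while the iterate moves consistently in one direction and shrinks once it begins oscillating near the optimum. The initial step `a` and the test problem below are assumptions for illustration; the two-sequence scheme proposed in the thesis is not reproduced here:

```python
import numpy as np

def kesten_sa(grad_est, x0, n_iter=1000, a=1.0, seed=1):
    """Scalar SA with Kesten's step-size rule: use step a/k, but advance
    the index k only when successive gradient estimates change sign,
    a proxy for the iterate oscillating around the optimum."""
    rng = np.random.default_rng(seed)
    x, k, g_prev = x0, 1, None
    for _ in range(n_iter):
        g = grad_est(x, rng)
        if g_prev is not None and g * g_prev < 0:
            k += 1              # sign flip detected: shrink the step
        x -= (a / k) * g
        g_prev = g
    return x
```

On a noisy linear gradient field with root at 3, say, the step size remains at its initial value during the transient phase and then decays as sign flips accumulate near the root.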
Lastly, we investigate the performance of a modified pathwise gradient estimation method that is applied to financial options with discontinuous payoffs and, in particular, used to estimate the Greeks, which measure the rate of change of (financial) derivative prices with respect to underlying market parameters and are central to financial risk management. The newly proposed kernel estimator relies on a smoothing bandwidth parameter. We explore the accuracy of the Greeks with varying bandwidths and investigate the sensitivity of a proposed iterative scheme that generates an estimate of the optimal bandwidth.
2015-01-01T00:00:00Z

STATISTICAL AND OPTIMAL LEARNING WITH APPLICATIONS IN BUSINESS ANALYTICS
http://hdl.handle.net/1903/16669
Title: STATISTICAL AND OPTIMAL LEARNING WITH APPLICATIONS IN BUSINESS ANALYTICS
Authors: Han, Bin
Abstract: Statistical learning is widely used in business analytics to discover structure or exploit patterns from historical data and to build models that capture relationships between an outcome of interest and a set of variables. Optimal learning, on the other hand, solves the operational side of the problem by iterating between decision making and data acquisition/learning. The two problems often go hand-in-hand, exhibiting a feedback loop between statistics and optimization.
We apply this statistical/optimal learning concept to a fundraising marketing campaign problem arising in many non-profit organizations. Many such organizations use direct-mail marketing to cultivate one-time donors and convert them into recurring contributors. Cultivated donors generate much more revenue than new donors, but they also lapse with time, making it important to steadily draw in new donors for cultivation. The direct-mail budget is limited, but better-designed mailings can improve success rates without increasing costs.
We first apply statistical learning to analyze the effectiveness of several design approaches used in practice, based on a massive dataset covering 8.6 million direct-mail communications with donors to the American Red Cross during 2009-2011. We find evidence that mailed appeals are more effective when they emphasize disaster preparedness and training efforts over post-disaster cleanup. Including small cards that affirm donors' identity as Red Cross supporters is an effective strategy, while including gift items such as address labels is not. Finally, very recent acquisitions are more likely to respond to appeals that ask them to contribute an amount similar to their most recent donation, but this approach has an adverse effect on donors with a longer history. We show via simulation that a simple design strategy based on these insights has potential to improve success rates from 5.4% to 8.1%.
Given these findings, however, when a new scenario arises, new data need to be acquired to update our model and decisions; this is studied under the optimal learning framework. The goal becomes discovering a sequential information collection strategy that learns the best campaign design alternative as quickly as possible. A regression structure is used to learn about a set of unknown parameters, alternating with optimization to design new data points. Such problems have been extensively studied in the ranking and selection (R&S) community, but traditional R&S procedures incur high computational costs when the decision space grows combinatorially. We present a value of information procedure for simultaneously learning unknown regression parameters and unknown sampling noise. We then develop an approximate version of the procedure, based on semi-definite programming relaxation, that retains good performance and scales better to large problems. We also prove the asymptotic consistency of the algorithm in the parametric model, a result that has not previously been available even for the known-variance case.
2015-01-01T00:00:00Z