Mathematics
Permanent URI for this community: http://hdl.handle.net/1903/2261
Item: An Exposition of Stochastic Integrals and Their Application to Linearization Coefficients (2009)
Kuykendall, John Bynum; Slud, Eric V; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Stochastic integration is introduced as a tool to address the problem of finding linearization coefficients. Stochastic, off-diagonal integration against a random spectral measure is defined and its properties discussed, followed by a proof that two formulations of Ito's Lemma are equivalent. Diagonals in R^n are defined, and their relationship to partitions of {1, ..., n} is discussed. The intuitive notion of a stochastic integral along a diagonal is formalized and calculated. The relationship between partitions and diagonals is then exploited to apply Moebius inversion to stochastic integrals over different diagonals. Diagonals along which stochastic integrals may be nonzero with positive probability are shown to correspond uniquely to diagrams. This correspondence is used to prove the Diagram Formula. Ito's Lemma and the Diagram Formula are then combined to calculate the linearization coefficients for Hermite polynomials. Finally, future work is suggested that may allow other families of linearization coefficients to be calculated.
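The linearization coefficients mentioned at the end of the Kuykendall abstract are, for probabilists' Hermite polynomials, given by the classical identity He_m(x) He_n(x) = sum over k of binom(m,k) binom(n,k) k! He_{m+n-2k}(x). The sketch below is only a numerical verification aid for that classical identity; it is not code from the thesis and does not reproduce its stochastic-integral derivation. It builds the polynomials from their three-term recurrence and compares coefficients.

```python
import numpy as np
from math import comb, factorial
from numpy.polynomial import Polynomial as P

def hermite_prob(n):
    """Probabilists' Hermite polynomial He_n via the recurrence
    He_{k+1}(x) = x*He_k(x) - k*He_{k-1}(x)."""
    h_prev, h = P([1]), P([0, 1])          # He_0 = 1, He_1 = x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, P([0, 1]) * h - k * h_prev
    return h

def linearization_rhs(m, n):
    """Right-hand side of He_m*He_n = sum_k C(m,k)*C(n,k)*k! * He_{m+n-2k}."""
    total = P([0])
    for k in range(min(m, n) + 1):
        total += comb(m, k) * comb(n, k) * factorial(k) * hermite_prob(m + n - 2 * k)
    return total

m, n = 4, 3
lhs = hermite_prob(m) * hermite_prob(n)
rhs = linearization_rhs(m, n)
print(np.allclose(lhs.coef, rhs.coef))     # True: the linearization identity holds
```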
Item: Regularized Variable Selection in Proportional Hazards Model Using Area under Receiver Operating Characteristic Curve Criterion (2009)
Wang, Wen-Chyi; Yang, Grace L; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The goal of this thesis is to develop a statistical procedure for selecting pertinent predictors among a number of covariates to accurately predict the survival time of a patient. Many variable selection procedures are available in the literature. This thesis focuses on the more recently developed “regularized variable selection procedure”. This procedure, based on a penalized likelihood, can simultaneously address variable selection and parameter estimation, a capability that earlier procedures lack. Specifically, this thesis studies the regularized variable selection procedure in the proportional hazards model for censored survival data. Implementation of the procedure requires judicious determination of the amount of penalty, a regularization parameter λ, on the likelihood, and the development of computationally intensive algorithms. In this thesis, a new criterion for determining λ using the notion of “the area under the receiver operating characteristic curve (AUC)” is proposed. The conventional generalized cross-validation criterion (GCV) is based on the likelihood and its second derivative. Unlike GCV, the AUC criterion is based on the performance of disease classification in terms of patients' survival times. Simulations show that the AUC and GCV criteria perform similarly, but the AUC criterion gives a better interpretation of the survival data. We also establish the consistency and asymptotic normality of the regularized estimators of parameters in the partial likelihood of the proportional hazards model. Some oracle properties of the regularized estimators are discussed under certain sparsity conditions. An algorithm for selecting λ and computing regularized estimates is developed. The developed procedure is then illustrated with an application to the survival data of patients with head and neck cancer. The results show that the proposed method is comparable with the conventional one.

Item: Abundance of escaping orbits in a family of anti-integrable limits of the standard map (2009)
De Simoi, Jacopo; Dolgopyat, Dmitry; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
We give quantitative results about the abundance of escaping orbits in a family of exact twist maps preserving Lebesgue measure on the cylinder T × R; the geometrical features of maps in this family are quite similar to those of the well-known Chirikov-Taylor standard map, and in fact we believe that the techniques presented in this work can be further improved and eventually applied to studying ergodic properties of the standard map itself. We state conditions which ensure that escaping orbits exist and form a set of full Hausdorff dimension. Moreover, under stronger conditions we can prove that such orbits are not charged by the invariant measure. We also prove that, generically, the system presents elliptic islands at arbitrarily high values of the action variable, and we provide estimates for their total measure.
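For readers unfamiliar with the model the De Simoi abstract compares against, the Chirikov-Taylor standard map is easy to experiment with numerically. The sketch below is purely illustrative: the parameter K, the escape threshold, and the iteration counts are arbitrary choices of ours, not values from the thesis. It iterates the map on the cylinder and reports the fraction of random initial conditions whose action variable ever exceeds a threshold, a crude numerical proxy for the escaping orbits discussed above.

```python
import numpy as np

def standard_map(theta, p, K):
    """One step of the Chirikov-Taylor standard map on the cylinder T x R."""
    p_new = p + K * np.sin(theta)
    theta_new = (theta + p_new) % (2 * np.pi)
    return theta_new, p_new

def escape_fraction(K=2.0, n_orbits=2000, n_steps=10_000, threshold=50.0, seed=0):
    """Fraction of random initial conditions whose action |p| ever exceeds `threshold`.
    A rough numerical proxy for 'escaping' behaviour; all constants are arbitrary."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n_orbits)
    p = rng.uniform(-np.pi, np.pi, n_orbits)
    escaped = np.zeros(n_orbits, dtype=bool)
    for _ in range(n_steps):
        theta, p = standard_map(theta, p, K)
        escaped |= np.abs(p) > threshold
    return escaped.mean()

print(escape_fraction())
```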
Item: OPTIMAL APPROXIMATION SPACES FOR SOLVING PROBLEMS WITH ROUGH COEFFICIENTS (2009)
Li, Qiaoluan Helen; Osborn, John E.; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The finite element method has been widely used to solve partial differential equations by both engineers and mathematicians for the last several decades. This is due to its well-known effectiveness when applied to a wide variety of problems. However, it has some practical drawbacks. One of them is the need for meshing. Another is that it uses polynomials as the approximation basis functions. Commonly, polynomials are also used by other numerical methods for partial differential equations, such as the finite difference method and the spectral method. Nevertheless, polynomial approximations are not always effective, especially for problems with rough coefficients. In this dissertation, a suitable approximation space for the solution of elliptic problems with rough coefficients has been found, named the generalized L-spline space. Theoretically, I have developed generalized L-spline approximation spaces, where L is an operator of order m with rough coefficients, have proved the interpolation error estimate, and have also proved that the generalized L-spline space is an optimal approximation space for the problem L*Lu = f for certain operators L, using n-widths as the criterion. Numerically, two problems have been tested, and the resulting error estimates are consistent with the theoretical results. Meshless methods are newly developed numerical methods for solving partial differential equations. These methods partially eliminate the need for meshing and are considered to have great potential. However, the need for effective quadrature schemes is a major issue concerning meshless methods. In our recently published paper, we consider the approximation of the Neumann problem by meshless methods, and show that the approximation is inaccurate if nothing special (beyond accuracy) is assumed about the numerical integration. We then identify a condition, referred to as the zero row sum condition, which, together with accuracy, ensures that the quadrature error is small. The row sum condition can be achieved by changing the diagonal elements of the stiffness matrix. Under the row sum condition we derive an energy norm error estimate for the numerical solution with quadrature. In the dissertation, meshless methods are discussed and the quadrature issue is explained. Two numerical experiments are presented in detail. Both theoretical and numerical results indicate that the error has two components: one due to the meshless approximation and the other due to quadrature.
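The zero row sum condition mentioned in the Li abstract can be imposed after assembly by a simple diagonal correction: each diagonal entry of the approximately integrated stiffness matrix is reset so that its row sums to zero. The snippet below is a generic sketch of that adjustment on a stand-in matrix; it is not code from the dissertation, and the actual meshless assembly and quadrature are not shown.

```python
import numpy as np

def enforce_zero_row_sums(K):
    """Adjust the diagonal of an assembled stiffness matrix so every row sums to zero.
    This is the kind of diagonal correction the zero row sum condition calls for."""
    K = K.copy()
    np.fill_diagonal(K, 0.0)               # discard the old diagonal
    np.fill_diagonal(K, -K.sum(axis=1))    # diagonal = minus the off-diagonal row sums
    return K

# Stand-in "stiffness matrix" with quadrature error: rows do not quite sum to zero.
rng = np.random.default_rng(1)
K = rng.normal(size=(5, 5))
K = K + K.T                                 # keep it symmetric, as a stiffness matrix would be
K_fixed = enforce_zero_row_sums(K)
print(np.abs(K_fixed.sum(axis=1)).max())    # ~0: every row now sums to zero
```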
Item: Meshless Collocation Methods for the Numerical Solution of Elliptic Boundary Valued Problems and the Rotational Shallow Water Equations on the Sphere (2009)
Blakely, Christopher Dallas; Osborn, John E; Baer, Ferdinand; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation has three main goals: 1) to explore the anatomy of the meshless collocation approximation methods that have recently gained attention in the numerical analysis community; 2) to demonstrate numerically why the meshless collocation method should clearly become an attractive alternative to standard finite-element methods, owing to the simplicity of its implementation and its high-order convergence properties; and 3) to propose a meshless collocation method for large-scale computational geophysical fluid dynamics models. We provide numerical verification and validation of the meshless collocation scheme applied to the rotational shallow-water equations on the sphere and demonstrate computationally that the proposed model can compete with existing high-performance methods for approximating the shallow-water equations, such as the SEAM (spectral-element atmospheric model) developed at NCAR. A detailed analysis of the parallel implementation of the model, along with the introduction of parallel algorithmic routines for high-performance simulation of the model, will be given. We analyze the programming and computational aspects of the model using Fortran 90 and the Message Passing Interface (MPI) library, along with software and hardware specifications and performance tests. Details of many aspects of the implementation with regard to performance, optimization, and stabilization will be given. In order to verify the mathematical correctness of the algorithms presented and to validate the performance of the meshless collocation shallow-water model, we conclude the thesis with numerical experiments on some standardized test cases for the shallow-water equations on the sphere using the proposed method.

Item: Class Numbers of Real Cyclotomic Fields of Conductor pq (2009)
Agathocleous, Eleni; Washington, Lawrence; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The class numbers h+ of the real cyclotomic fields are very hard to compute. Methods based on discriminant bounds become useless as the conductor of the field grows, which is why other methods have been developed that approach the problem from different angles. In this thesis we extend a method of Schoof, designed for real cyclotomic fields of prime conductor, to real cyclotomic fields of conductor equal to the product of two distinct odd primes. Our method calculates the index of a specific group of cyclotomic units in the full group of units of the field. This index has h+ as a factor. We then remove from the index the extra factor that does not come from h+, and so we have the order of h+. We apply our method to real cyclotomic fields of conductor < 2000, and we test the divisibility of h+ by all primes < 10000. Finally, we calculate the full order of the l-part of h+ for all odd primes l < 10000.

Item: Novel integro-differential schemes for multiscale image representation (2009)
Athavale, Prashant Vinayak; Tadmor, Eitan; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Multiscale representation of a given image is the problem of constructing a family of images, where each image in this family represents a scaled version of the given image. This is motivated by studies of biological vision. Using the hierarchical multiscale image representation proposed by Tadmor et al. [32], an image is decomposed into a sum of simpler 'slices', which extract more refined information from the previous scales. This approach motivates us to propose a novel integro-differential equation (IDE) for multiscale image representation. We examine various properties of this IDE. The advantage of formulating the IDE this way is that, although it is motivated by a variational approach, we no longer need to be tied to a minimization problem and can modify the IDE to suit our image processing needs. For example, we may need to find different scales in the image while retaining or enhancing prominent edges, which may define boundaries of objects. We propose some edge-preserving modifications to our IDE. One of the important problems in image processing is deblurring a blurred image. Images get blurred for various reasons, such as an unfocused camera lens or relative motion between the camera and the object pictured. The blurring can be modeled with a continuous, linear operator. Recovering a clean image from a blurry one is an ill-posed problem, which is solved using Tikhonov-like regularization. We propose a different IDE to solve the deblurring problem. We also propose a hierarchical multiscale scheme based on the (BV, L1) decomposition proposed by Chan, Esedoglu, Nikolova and Alliney [12, 25, 3]. We finally propose another hierarchical multiscale representation based on a novel weighted (BV, L1) decomposition.
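The hierarchical multiscale representation that the Athavale abstract builds on decomposes an image into a sum of 'slices', each capturing detail at a finer scale than the last. The sketch below is a loose illustration of that idea using off-the-shelf total-variation denoising from scikit-image as the splitting step; the weight schedule and the choice of denoiser are simplifications of ours and do not reproduce the IDE schemes of the thesis.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle

def hierarchical_decomposition(f, n_levels=4, weight0=0.5):
    """Hierarchical multiscale splitting f ~ u_0 + u_1 + ... + u_{n-1} + residual.
    Each level denoises the current residual with a smaller TV weight, so later
    slices pick up progressively finer detail. A simplified stand-in for a
    Tadmor-type hierarchical decomposition, not the IDE of the thesis."""
    slices = []
    residual = f.copy()
    weight = weight0
    for _ in range(n_levels):
        u = denoise_tv_chambolle(residual, weight=weight)  # cartoon part of the residual
        slices.append(u)
        residual = residual - u                            # detail left for the next scale
        weight /= 2.0                                      # finer scale at the next level
    return slices, residual

f = img_as_float(data.camera())
slices, residual = hierarchical_decomposition(f)
reconstruction = np.sum(slices, axis=0) + residual
print(np.allclose(reconstruction, f))   # True by construction: the slices sum back to f
```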
Item: Curves and Their Applications to Factoring Polynomials (2009)
Ozdemir, Enver; Washington, Lawrence C; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
We present new methods for computing square roots and factoring polynomials over finite fields. We also describe a method for computing in the Jacobian of a singular hyperelliptic curve. There is a compact representation of an element in the Jacobian of a smooth hyperelliptic curve over any field. This compact representation leads to an efficient method for computing in Jacobians, known as Cantor's Algorithm. In one part of the dissertation, we show that an extension of this compact representation and of Cantor's Algorithm is possible for singular hyperelliptic curves. This extension leads to the use of singular hyperelliptic curves for the factorization of polynomials and the computation of square roots in finite fields. Our study shows that computing the square root of a number mod p is equivalent to finding any one of a particular set of group elements in the Jacobian of a certain singular hyperelliptic curve. This is also true in the case of polynomial factorization. Therefore the efficiency of our algorithms depends only on the efficiency of the algorithms for computing in the Jacobian of a singular hyperelliptic curve. The algorithms for computing in Jacobians of hyperelliptic curves are very fast, especially for small genus, and this makes our algorithms, in particular the square root algorithms, competitive with other well-known algorithms. In this work we also investigate superelliptic curves for the factorization of polynomials.

Item: Wavelet and frame theory: frame bound gaps, generalized shearlets, Grassmannian fusion frames, and p-adic wavelets (2009)
King, Emily Jeannette; Benedetto, John J; Czaja, Wojciech; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The first wavelet system was discovered by Alfréd Haar one hundred years ago. Since then the field has grown enormously. In 1952, Richard Duffin and Albert Schaeffer synthesized the earlier ideas of a number of illustrious mathematicians into a unified theory, the theory of frames. Interest in frames as intriguing objects in their own right arose when wavelet theory began to surge in popularity. Wavelet and frame analysis is found in such diverse fields as data compression, pseudo-differential operator theory, and applied statistics. We shall explore five areas of frame and wavelet theory: frame bound gaps, smooth Parseval wavelet frames, generalized shearlets, Grassmannian fusion frames, and p-adic wavelets. The phenomenon of a frame bound gap occurs when certain sequences of functions, converging in L^2 to a Parseval frame wavelet, generate systems with frame bounds that are uniformly bounded away from 1. In the 1990s, Bin Han proved the existence of Parseval wavelet frames which are smooth and compactly supported in the frequency domain and which also approximate wavelet set wavelets. We discuss problems that arise when one attempts to generalize his results to higher dimensions. A shearlet system is formed using certain classes of dilations over R^2 that yield directional information about functions in addition to information about scale and position. We employ representations of the extended metaplectic group to create shearlet-like transforms in dimensions higher than 2. Grassmannian frames are in some sense optimal representations of data which will be transmitted over a noisy channel that may lose some of the transmitted coefficients. Fusion frame theory is a very new area that has the potential to be applied to problems in distributed sensing and parallel processing. A novel construction of Grassmannian fusion frames shall be presented. Finally, p-adic analysis is a growing field, and p-adic wavelets are eigenfunctions of certain pseudo-differential operators. A construction of a p-adic wavelet basis using dilations that have not yet been used in p-adic analysis is given.
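The frame bounds that appear in the King abstract are easy to compute for a finite frame: they are the smallest and largest eigenvalues of the frame operator S = F F*, where the columns of F are the frame vectors. The sketch below is a small finite-dimensional illustration, unrelated to the infinite-dimensional wavelet systems studied in the thesis: it uses the three-vector Mercedes-Benz frame in R^2, which is tight, and rescales it to a Parseval frame with both bounds equal to 1.

```python
import numpy as np

def frame_bounds(F):
    """Frame bounds of a finite frame whose vectors are the columns of F.
    They are the extreme eigenvalues of the frame operator S = F F^T."""
    S = F @ F.T
    eigvals = np.linalg.eigvalsh(S)
    return eigvals[0], eigvals[-1]

# Mercedes-Benz frame: three unit vectors in R^2 at 120-degree angles.
angles = np.deg2rad([90, 210, 330])
F = np.vstack([np.cos(angles), np.sin(angles)])   # shape (2, 3), columns are frame vectors

A, B = frame_bounds(F)
print(A, B)                              # both 1.5: a tight frame
print(frame_bounds(np.sqrt(2 / 3) * F))  # both 1.0: rescaled to a Parseval frame
```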
Item: Numerical solution of eigenvalue problems with spectral transformations (2009)
Xue, Fei; Elman, Howard; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This thesis is concerned with inexact eigenvalue algorithms for solving large and sparse algebraic eigenvalue problems with spectral transformations. In many applications, when only a small number of interior eigenvalues are of interest, a spectral transformation is usually employed to map these eigenvalues to dominant ones of the transformed problem so that they can be easily captured. At each step of the eigenvalue algorithm (outer iteration), the matrix-vector product involving the transformed linear operator requires the solution of a linear system of equations, which, for very large matrices, is generally done inexactly by preconditioned iterative linear solvers. In this thesis, we study several efficient strategies to reduce the computational cost of the preconditioned iterative solution (inner iteration) of the linear systems that arise when inexact Rayleigh quotient iteration, subspace iteration, and implicitly restarted Arnoldi methods are used to solve eigenvalue problems with spectral transformations. We provide new insights into a special type of preconditioner with "tuning" that has been studied in the literature and propose new approaches to using tuning for solving the linear systems in this context. We also investigate other strategies specific to eigenvalue algorithms to further reduce the inner iteration counts. Numerical experiments and analysis show that these techniques lead to significant savings in computational cost without affecting the convergence of the outer iterations to the desired eigenpairs.
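To make the inner-outer structure described in the Xue abstract concrete, the sketch below runs a bare-bones inexact Rayleigh quotient iteration on a small sparse symmetric matrix: each outer step forms the Rayleigh quotient and solves the shifted linear system only approximately with a capped GMRES solve (the inner iteration). The test matrix, iteration caps, and the absence of any tuned preconditioner are simplifications of ours, so this does not reproduce the strategies developed in the thesis.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def inexact_rqi(A, x0, n_outer=8, inner_maxiter=20):
    """Inexact Rayleigh quotient iteration: the shifted solve in each outer step
    (the inner iteration) is performed only approximately by a capped GMRES run."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_outer):
        rho = x @ (A @ x)                                      # Rayleigh quotient (shift)
        shifted = A - rho * sp.identity(A.shape[0], format="csr")
        y, _ = spla.gmres(shifted, x, maxiter=inner_maxiter)   # inexact inner solve
        x = y / np.linalg.norm(y)
    rho = x @ (A @ x)                                          # final eigenvalue estimate
    return rho, x

# Small sparse symmetric test matrix: 1-D discrete Laplacian.
n = 200
A = sp.diags([-1, 2, -1], offsets=[-1, 0, 1], shape=(n, n), format="csr")

rng = np.random.default_rng(0)
rho, x = inexact_rqi(A, rng.standard_normal(n))
print(rho, np.linalg.norm(A @ x - rho * x))   # eigenvalue estimate and its residual norm
```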