# Mathematics


## Browse

### Browsing Mathematics by Title

Now showing 1 - 20 of 627



- Absolutely Continuous Spectrum for Parabolic Flows/Maps (2016). Simonelli, Lucia Dora; Forni, Giovanni; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.). This work is devoted to creating an abstract framework for the study of certain spectral properties of parabolic systems. Specifically, we determine under which general conditions to expect the presence of absolutely continuous spectral measures. We use these general conditions to derive results for spectral properties of time-changes of unipotent flows on homogeneous spaces of semisimple groups regarding absolutely continuous spectrum as well as maximal spectral type; the time-changes of the horocycle flow are special cases of this general category of flows. In addition we use the general conditions to derive spectral results for twisted horocycle flows and to rederive spectral results for skew products over translations and Furstenberg transformations.
- Absolutely Periodic Billiard Orbits of Arbitrarily High Order in Smooth Strictly Convex Domains (2023). Callis, Keagan Graham; Kaloshin, Vadim Y; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.). Billiard orbits in smooth (C∞) strictly convex domains in R2 are a special class of smooth area-preserving twist diffeomorphisms of the cylinder. These maps are determined by the domain Ω on which the billiard orbit resides, and properties of the billiard map can thus lead to conclusions about various mathematical objects involving the same domain. For instance, properties of the periodic orbits of the billiard map, such as (1) the degeneracy of a periodic orbit or (2) the measure of the set of all periodic orbits, can lead to conclusions about the asymptotic expansion of the Laplace spectrum of the domain. In this work we show that an arbitrarily small perturbation of the domain in the C∞ norm can create a domain containing a highly degenerate periodic orbit. This result can be viewed as extending the Newhouse phenomena previously obtained within the class of smooth area-preserving diffeomorphisms to the more restricted class of billiard maps. The perturbations are carried over to the class of billiard maps via perturbations of the domain Ω. Thus in this work we also explore how high-order perturbations of the boundary of the domain, specifically of the curvature κ of the boundary, affect the higher-order jets of the corresponding billiard map. The billiard map near the boundary is almost integrable for smooth strictly convex domains. We use this fact to perform a small preliminary perturbation which yields a domain with a periodic orbit containing a (quadratic) homoclinic tangency. The main technique in obtaining the Newhouse phenomena is the generic unfolding of these homoclinic tangencies. We thus show how one is able to unfold these homoclinic tangencies by perturbations of the curvature. At the same time, we show how one is able to perform these perturbations without destroying other billiard orbits under consideration.
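As a concrete illustration of the "almost integrable near the boundary" picture this abstract invokes, here is the one fully integrable model case: the billiard map in the unit disk, where the angle of incidence is conserved and the map is explicit. This is a minimal sketch of the unperturbed model only, not the perturbed domains constructed in the thesis; the function name is ours.

```python
import math

def disk_billiard_step(s, theta):
    """One bounce of the billiard map in the unit disk.

    s is the angular position of the collision point on the boundary and
    theta is the angle the trajectory makes with the tangent. In the disk
    the map is integrable: theta is conserved and the collision point
    advances by the constant amount 2*theta.
    """
    return (s + 2.0 * theta) % (2.0 * math.pi), theta

# theta = pi/5 closes up after five bounces: a 5-periodic (star) orbit
s, theta = 0.0, math.pi / 5
for _ in range(5):
    s, theta = disk_billiard_step(s, theta)
```

Rational multiples of pi give periodic orbits, irrational ones give orbits dense on an invariant circle; the thesis's perturbations of the curvature break exactly this integrable structure.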
- Abstract Elementary Classes with Lowenheim-Skolem Number Cofinal with Omega (2008-08-03). Johnson, Gregory Mitchell; Kueker, David W; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.). An abstract elementary class is a class $\aec$ of structures for some vocabulary $L$ together with a "strong substructure" relation $\prec_{\aec}$ on $\aec$ satisfying certain axioms. Abstract elementary classes include elementary classes with elementary substructure and classes axiomatizable in $L_{\infty,\omega}$ with elementary substructure relative to some fragment of $L_{\infty,\omega}$. For every abstract elementary class there is some number $\kappa$, called the L\"owenheim-Skolem number, so that every structure in the class has a strong substructure of cardinality $\leq \kappa$. We study abstract elementary classes with L\"owenheim-Skolem number $\kappa$, where $\kappa$ is cofinal with $\omega$, which have finite character. We generalize results obtained by Kueker for $\kappa=\omega$. In particular we show that $\aec$ is closed under $L_{\infty,\kappa}$-elementary equivalence and obtain sufficient conditions for $\aec$ to be $L_{\infty,\kappa}$-axiomatizable. The results depend on developing an appropriate concept of $\kappa$-a.e.
- Abundance of escaping orbits in a family of anti-integrable limits of the standard map (2009). De Simoi, Jacopo; Dolgopyat, Dmitry; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.). We give quantitative results about the abundance of escaping orbits in a family of exact twist maps preserving Lebesgue measure on the cylinder T × R; geometrical features of maps of this family are quite similar to those of the well-known Chirikov-Taylor standard map, and in fact we believe that the techniques presented in this work can be further improved and eventually applied to studying ergodic properties of the standard map itself. We state conditions which assure that escaping orbits exist and form a full Hausdorff dimension set. Moreover, under stronger conditions we can prove that such orbits are not charged by the invariant measure. We also prove that, generically, the system presents elliptic islands at arbitrarily high values of the action variable and provide estimates for their total measure.
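The Chirikov-Taylor standard map referenced in this abstract has a well-known explicit form, which makes the notion of an "escaping orbit" (unbounded growth of the action variable) easy to experiment with. A minimal sketch; the parameter value and iteration count below are arbitrary choices of ours, not from the thesis.

```python
import math

def standard_map(theta, p, K):
    """One iteration of the Chirikov-Taylor standard map on the cylinder
    T x R: kick the action p by K*sin(theta), then rotate theta by the
    new action. The map is exact, twist, and preserves Lebesgue measure."""
    p = p + K * math.sin(theta)
    theta = (theta + p) % (2.0 * math.pi)
    return theta, p

# escaping orbits are those whose action |p| grows without bound
theta, p = 1.0, 0.0
for _ in range(1000):
    theta, p = standard_map(theta, p, K=6.0)
```

For K = 0 the map is integrable (p is conserved); for large K typical orbits wander in p, which is the regime where questions about the measure and dimension of escaping sets become delicate.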
- Adapting Swarm Intelligence For The Self-Assembly And Optimization Of Networks (2011). Martin, Charles E; Reggia, James A; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.). While self-assembly is a fairly active area of research in swarm intelligence and robotics, relatively little attention has been paid to the issues surrounding the construction of network structures. Here, methods developed previously for modeling and controlling the collective movements of groups of agents are extended to serve as the basis for self-assembly or "growth" of networks, using neural networks as a concrete application to evaluate this novel approach. One of the central innovations incorporated into the model presented here is having network connections arise as persistent "trails" left behind by moving agents, trails that are reminiscent of pheromone deposits made by agents in ant colony optimization models. The resulting network connections are thus essentially a record of agent movements. The model's effectiveness is demonstrated by using it to produce two large networks that support subsequent learning of topographic and feature maps. Improvements produced by the incorporation of collective movements are also examined through computational experiments. These results indicate that methods for directing collective movements can be extended to support and facilitate network self-assembly. Additionally, the traditional self-assembly problem is extended to include the generation of network structures based on optimality criteria, rather than on target structures that are specified a priori. It is demonstrated that endowing the network components involved in the self-assembly process with the ability to engage in collective movements can be an effective means of generating computationally optimal network structures. This is confirmed on a number of challenging test problems from the domains of trajectory generation, time-series forecasting, and control. Further, this extension of the model is used to illuminate an important relationship between particle swarm optimization, which usually occurs in high-dimensional abstract spaces, and self-assembly, which is normally grounded in real and simulated 2D and 3D physical spaces.
- Adaptive Finite Element Methods for Variational Inequalities: Theory and Applications in Finance (2007-05-29). Zhang, Chensong; Nochetto, Ricardo H; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.). We consider variational inequalities (VIs) in a bounded open domain Ω ⊂ R^d with a piecewise smooth obstacle constraint. To solve VIs, we formulate a fully-discrete adaptive algorithm by using the backward Euler method for time discretization and the continuous piecewise linear finite element method for space discretization. The outline of this thesis is the following. Firstly, we introduce the elliptic and parabolic variational inequalities in Hilbert spaces and briefly review general existence and uniqueness results. Then we focus on a simple but important example of VI, namely the obstacle problem. One interesting application of the obstacle problem is the American-type option pricing problem in finance. We review the classical model as well as some recent advances in option pricing. These models result in VIs with integro-differential operators. Secondly, we introduce two classical numerical methods in scientific computing: the finite element method for elliptic partial differential equations (PDEs) and the Euler method for ordinary differential equations (ODEs). Then we combine these two methods to formulate a fully-discrete numerical scheme for VIs. With mild regularity assumptions, we prove an optimal a priori convergence rate with respect to the regularity of the solution for the proposed numerical method. Thirdly, we derive an a posteriori error estimator and show its reliability and efficiency. The error estimator is localized in the sense that the size of the elliptic residual is only relevant in the approximate noncontact region, and the approximability of the obstacle is only relevant in the approximate contact region. Based on this new a posteriori error estimator, we design a time-space adaptive algorithm and multigrid solvers for the resulting discrete problems. In the end, numerical results for $d=1,2$ show that the error estimator decays with the same rate as the actual error when the space meshsize and the time step tend to zero. Also, the error indicators capture the correct local behavior of the errors in both the contact and noncontact regions.
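The discrete obstacle problem at the heart of this abstract can be illustrated with the simplest possible solver: projected Gauss-Seidel on a uniform 1D grid, which enforces u >= psi by clamping at every update. This is only a sketch of the model problem (the thesis uses adaptive FEM with a posteriori estimators and multigrid); all names and parameter choices here are ours.

```python
def obstacle_gauss_seidel(f, psi, n, sweeps=5000):
    """Projected Gauss-Seidel for the 1D obstacle problem
    -u'' = f on (0,1), u(0) = u(1) = 0, subject to u >= psi.

    f, psi: callables on (0,1); n: number of interior grid points.
    Each update solves the local equation and then projects onto the
    constraint, so the iterate satisfies u >= psi at every step."""
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    u = [max(0.0, psi(xi)) for xi in x]
    for _ in range(sweeps):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            u[i] = max(psi(x[i]), 0.5 * (h * h * f(x[i]) + left + right))
    return x, u
```

With f = -8 and a constant obstacle psi = -0.1, the unconstrained solution 4x(x-1) dips well below the obstacle, so a genuine contact region forms where the computed u sits exactly on psi; the noncontact/contact split is precisely the structure the localized error estimator exploits.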
- The Adelic Differential Graded Algebra for Surfaces (2017). Kelly, Sean James; Ramachandran, Niranjan; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.). For any variety X/k, we consider the Beilinson–Huber adeles A_X as a differential graded k-algebra and examine the category Mod_dg of differential graded A-modules. We characterize the modules associated to certain quasi-coherent sheaves and define an adelic Chern class c(M) for modules which are graded free of rank 1. We study the intersection pairing in terms of a cup product and prove a version of the Bloch–Quillen formula that respects this cup product. Fesenko [6] proved Serre duality and the Riemann–Roch theorem for surfaces using a topological duality on the adeles. On the other hand, Mattuck–Tate [18] and Grothendieck [12] provided proofs of the Riemann hypothesis for curves using the Riemann–Roch theorem for surfaces by studying the graph of the Frobenius morphism on the surface S = C × C. Therefore the combined results of Fesenko and Mattuck–Tate–Grothendieck can be said to provide an adelic proof of the Riemann hypothesis for a curve C over a finite field. We apply the results of this thesis to the adelic intersection pairing, and state a version of the Hodge index theorem which implies the Riemann hypothesis for curves.
- Adiabatic quantum computation: Noise in the adiabatic theorem and using the Jordan-Wigner transform to find effective Hamiltonians (2008-04-22). O'Hara, Michael James; O'Leary, Dianne P; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.). This thesis explores two mathematical aspects of adiabatic quantum computation. Adiabatic quantum computation depends on the adiabatic theorem of quantum mechanics, and (a) we provide a rigorous formulation of the adiabatic theorem with explicit definitions of constants, and (b) we bound error in the adiabatic approximation under conditions of noise and experimental error. We apply the new results to a standard example of violation of the adiabatic approximation, and to a superconducting flux qubit. Further, adiabatic quantum computation requires large ground-state energy gaps throughout a Hamiltonian evolution if it is to solve problems in polynomial time. We identify a class of random Hamiltonians with non-nearest-neighbor interactions and a ground-state energy gap of $\mathcal{O}(1/\sqrt{n})$, where $n$ is the number of qubits. We also identify two classes of Hamiltonians with non-nearest-neighbor interactions whose ground state can be found in polynomial time with adiabatic quantum computing. We then use the Jordan-Wigner transformation to derive equivalent results for Hamiltonians defined using Pauli operators.
- ADJUSTMENT FOR DENSITY METHOD TO ESTIMATE RANDOM EFFECTS IN HIERARCHICAL BAYES MODELS (2018). Cao, Lijuan; Lahiri, Partha; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.). The Adjustment for Density Method (ADM) has received considerable attention in recent years. The method was proposed about thirty years ago for approximating a complex univariate density by a density from the Pearson family of distributions. The ADM has been developed to approximate posterior distributions of hyper-parameters, shrinkage parameters and random effects of a few well-known univariate hierarchical Bayesian models. This dissertation advances the ADM to approximate posterior distributions of hyper-parameters, shrinkage parameters, synthetic probabilities and multinomial probabilities associated with a multinomial-Dirichlet-logit Bayesian hierarchical model. The method is adapted so it can be applied to weighted counts. We carefully propose priors for the hyper-parameters of the multinomial-Dirichlet-logit model so as to ensure propriety of the posteriors of the relevant parameters of the model and to achieve good small-sample properties. Following general guidelines of the ADM for univariate distributions, we devise suitable adjustments to the posterior density of the hyper-parameters so that adjusted posterior modes lie in the interior of the parameter space and the bias in the point estimates is reduced. Beta distribution approximations are employed when approximating the posterior distributions of the individual shrinkage factors, and Dirichlet distribution approximations are used when approximating the posterior distributions of the synthetic probabilities. The parameters of the beta or the Dirichlet posterior density are approximated carefully so that the method approximates the exact posterior densities accurately. Simulation studies demonstrate that our proposed approach to estimating the multinomial probabilities in the multinomial-Dirichlet-logit model is accurate, fast, and has better operating characteristics than other existing procedures. We consider two applications of our proposed hierarchical Bayes model using complex survey data and Big Data. In the first example, we consider small area gender proportions using a binomial-beta-logit model. The proposed method improves on a rival method in terms of smaller margins of error. In the second application, we demonstrate how small area multi-category race proportion estimates, obtained by a direct method applied to Twitter data, can be improved by the proposed method. This dissertation ends with a discussion on future research in the area of ADM.
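The beta approximations this abstract describes are, at their simplest, a moment-matching step: given the mean and variance of a posterior supported on (0,1), pick the Beta(alpha, beta) with those same moments. The sketch below shows only that plain moment match, not the ADM's interior-mode adjustments; the function name and tolerances are ours.

```python
def beta_moment_match(mean, var):
    """Return (alpha, beta) of the Beta distribution whose mean and
    variance match the given values. Uses the identities
      mean = a/(a+b),  var = ab/((a+b)^2 (a+b+1)),
    which invert to a = mean*nu, b = (1-mean)*nu with
    nu = mean*(1-mean)/var - 1. Requires var < mean*(1-mean)."""
    if not (0.0 < mean < 1.0) or not (0.0 < var < mean * (1.0 - mean)):
        raise ValueError("moments incompatible with a Beta distribution")
    nu = mean * (1.0 - mean) / var - 1.0
    return mean * nu, (1.0 - mean) * nu
```

For example, a posterior mean of 0.3 with variance 0.01 matches Beta(6, 14); the ADM refines such fits so that the approximation is accurate in the tails as well.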
- Adversarial Robustness and Robust Meta-Learning for Neural Networks (2020). Goldblum, Micah; Czaja, Wojciech; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.). Despite the overwhelming success of neural networks for pattern recognition, these models behave categorically differently from humans. Adversarial examples, small perturbations which are often undetectable to the human eye, easily fool neural networks, demonstrating that neural networks lack the robustness of human classifiers. This thesis comprises a sequence of three parts. First, we motivate the study of defense against adversarial examples with a case study on algorithmic trading in which robustness may be critical for security reasons. Second, we develop methods for hardening neural networks against an adversary, especially in the low-data regime, where meta-learning methods achieve state-of-the-art results. Finally, we discuss several properties of the neural network models we use. These properties are of interest beyond robustness to adversarial examples, and they extend to the broad setting of deep learning.
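The adversarial examples this abstract refers to are easiest to see on a linear model, where the textbook fast-gradient-sign construction is exact: the gradient of the score w.x in x is w itself, so the worst L-infinity-bounded perturbation moves each coordinate by eps against sign(w_i). This is a standard illustration, not the thesis's attack or defense methods; all names are ours.

```python
def fgsm_linear(x, w, eps):
    """Fast-gradient-sign style perturbation against a linear scorer
    f(x) = <w, x>. Since grad_x f = w, subtracting eps*sign(w) lowers the
    score by eps*||w||_1, the most any eps-bounded (L-infinity)
    perturbation can achieve."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for xi, wi in zip(x, w)]

# a point scored positive (2*1 - 1*1 = 1) flips sign under a small perturbation
x_adv = fgsm_linear([1.0, 1.0], [2.0, -1.0], 0.6)
```

For deep networks the same construction is applied to a linearization of the loss, which is why visually negligible perturbations can change the predicted class.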
- Affine Pavings of Hessenberg Ideal Fibers (2020). Xue, Ke; Brosnan, Patrick; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.). We define certain closed subvarieties of the flag variety, Hessenberg ideal fibers, and prove that they are paved by affines. Hessenberg ideal fibers are a natural generalization of Springer fibers. In type $G_2$, we give explicit descriptions of all Hessenberg ideal fibers, study some of their geometric properties and use them to completely classify Tymoczko's dot actions of the Weyl group on the cohomology of regular semisimple Hessenberg varieties.
- An Agent-Based Modeling Approach to Reducing Pathogenic Transmission in Medical Facilities and Community Populations (2012). Barnes, Sean; Golden, Bruce; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.). The spread of infectious diseases is a significant and ongoing problem in human populations. In hospitals, the cost of patients acquiring infections causes many downstream effects, including longer lengths of stay for patients, higher costs, and unexpected fatalities. Outbreaks in community populations cause more significant problems because they stress the medical facilities that need to accommodate large numbers of infected patients, and they can lead to the closing of schools and businesses. In addition, epidemics often require logistical considerations such as where to locate clinics or how to optimize the distribution of vaccinations and food supplies. Traditionally, mathematical modeling is used to explore transmission dynamics and evaluate potential infection control measures. This methodology, although simple to implement and computationally efficient, has several shortcomings that prevent it from adequately representing some of the most critical aspects of disease transmission. Specifically, mathematical modeling can only represent groups of individuals in a homogeneous manner and cannot model how transmission is affected by the behavior of individuals and the structure of their interactions. Agent-based modeling and social network analysis are two increasingly popular methods that are well-suited to modeling the spread of infectious diseases. Together, they can be used to model individuals with unique characteristics, behavior, and levels of interaction with other individuals. These advantages enable a more realistic representation of transmission dynamics and a much greater ability to provide insight to questions of interest for infection control practitioners. This dissertation presents several agent-based models and network models of the transmission of infectious diseases at scales ranging from hospitals to networks of medical facilities and community populations. By employing these methods, we can explore how the behavior of individual healthcare workers and the structure of a network of patients or healthcare facilities can affect the rate and extent of hospital-acquired infections. After the transmission dynamics are properly characterized, we can then attempt to differentiate between different types of transmission and assess the effectiveness of infection control measures.
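The contrast this abstract draws between homogeneous compartmental models and agent-based models on a contact network can be made concrete with a few lines of simulation: each agent has its own state and its own neighbors, and transmission happens only along network edges. A minimal SIR-style sketch under our own simplifying assumptions (one-step infectious period, independent per-edge transmission), not the dissertation's hospital models.

```python
import random

def simulate_sir(adj, seed_node, beta, steps, rng):
    """Tiny agent-based SIR model on a contact network.

    adj: dict mapping each node to a list of neighbors. Each step, every
    infected agent independently transmits to each susceptible neighbor
    with probability beta, then recovers (states: S, I, R)."""
    state = {v: "S" for v in adj}
    state[seed_node] = "I"
    for _ in range(steps):
        newly_infected = set()
        for v, s in state.items():
            if s == "I":
                for u in adj[v]:
                    if state[u] == "S" and rng.random() < beta:
                        newly_infected.add(u)
        for v in state:
            if state[v] == "I":
                state[v] = "R"
        for u in newly_infected:
            state[u] = "I"
    return state
```

Because the network structure is explicit, one can rewire `adj` (wards, shared healthcare workers, inter-facility transfers) and observe how outbreak size changes, which is exactly the kind of question a homogeneous mixing model cannot pose.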
- Algorithm Development for Hyperspectral Anomaly Detection (2008-08-12). Rosario, Dalton Souza; Chellappa, Ramalingam; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.). This dissertation proposes and evaluates a novel anomaly detection algorithm suite for ground-to-ground, or air-to-ground, applications requiring automatic target detection using hyperspectral (HS) data. Targets are manmade objects in natural background clutter under unknown illumination and atmospheric conditions. The use of statistical models herein is purely for motivation of particular formulas for calculating anomaly output surfaces. In particular, formulas from semiparametrics are utilized to obtain novel forms for output surfaces, and alternative scoring algorithms are proposed to calculate output surfaces that are comparable to those of semiparametrics. Evaluation uses both simulated data and real HS data from a joint data collection effort between the Army Research Laboratory and the Army Armament Research Development & Engineering Center. A data transformation method is presented for use by the two-sample data structure univariate semiparametric and nonparametric scoring algorithms, such that the two-sample data are mapped from their original multivariate space to a univariate domain, where the statistical power of the univariate scoring algorithms is shown to be improved relative to existing multivariate scoring algorithms testing the same two-sample data. An exhaustive simulation experimental study is conducted to assess the performance of different HS anomaly detection techniques, where the null and alternative hypotheses are completely specified, including all parameters, using multivariate normal and mixtures of multivariate normal distributions. Finally, for ground-to-ground anomaly detection applications, where the unknown scales of targets add to the problem complexity, a novel global anomaly detection algorithm suite is introduced, featuring autonomous partial random sampling (PRS) of the data cube. The PRS method is proposed to automatically sample the unknown background clutter in the test HS imagery, and by repeating this process multiple times, one can achieve a desirably low cumulative probability of taking target samples by chance and using them as background samples. This probability is modeled by the binomial distribution family, where the only target-related parameter (the proportion of target pixels potentially covering the imagery) is shown to be robust. PRS requires a suitable scoring algorithm to compare samples, although applying PRS with the new two-step univariate detectors is shown to outperform existing multivariate detectors.
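The binomial model behind the partial random sampling argument in this abstract is simple to compute directly: if targets cover a fraction p of the imagery and a batch of m pixels is drawn at random, the number of target pixels in the batch is Binomial(m, p). A sketch under that stated model (the function name and parameter values are ours):

```python
import math

def prob_at_most_k_targets(m, p, k):
    """Binomial model for partial random sampling: the probability that a
    random batch of m pixels, drawn from imagery whose target fraction is
    p, contains at most k target pixels. Summing the binomial pmf gives
    the chance a batch is 'clean enough' to serve as background."""
    return sum(math.comb(m, j) * p ** j * (1.0 - p) ** (m - j)
               for j in range(k + 1))
```

For instance, with p = 0.1 the probability that a batch of 20 pixels is entirely target-free is 0.9^20 (about 12%), which is why PRS repeats the sampling many times to drive the cumulative contamination probability down.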
- An Algorithmic Approach to Invariant Rational Functions (2014). Hennessy, Angela Marie; Washington, Lawrence C; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.). For any field K and group A acting on K(x0, x1,..., xn-1), the fixed field consists of the elements fixed under the action of A, and is denoted K(x0, x1,..., xn-1)^A. Noether's problem seeks to answer whether K(x0, x1,..., xn-1)^A/K is a purely transcendental field extension. Lenstra gave necessary and sufficient conditions for a purely transcendental extension when A is any finite abelian group; however, his proof is often regarded as non-constructive. Masuda constructed generators of the fixed field when a certain ideal is principal in an integral group ring, and exhibited results when A is a cyclic group of order n <= 7 and n=11. We prove, however, that the ideal in question is not principal when n=13, 17, 19 or 23, and thus Masuda's techniques cannot be used. We use Lenstra's proof as the basis of an algorithm which computes generators of the fixed field. We demonstrate this algorithm for groups of order n=3, 5, 7, 11, 13, 17, 19 and 23. Swan gave the first negative answer to Noether's problem for the cyclic group of order 47 over Q. Leshin quantifies the degree to which a field extension fails to be purely transcendental by defining the degree of irrationality. We use the fact that the class group of the maximal real subfield Q(zeta_23+zeta_23^-1) is trivial to construct a field extension of degree 47 below the fixed field that is purely transcendental over K, thus providing a new bound for the degree of irrationality.
- Algorithms and Generalizations for the Lovasz Local Lemma (2015). Harris, David; Srinivasan, Aravind; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.). The Lovasz Local Lemma (LLL) is a cornerstone principle of the probabilistic method for combinatorics. It shows that one can avoid a large set of "bad events" (forbidden configurations of variables), provided the local conditions are satisfied. The original probabilistic formulation of this principle did not give efficient algorithms. A breakthrough result of Moser & Tardos led to a framework based on resampling variables which turns nearly all applications of the LLL into efficient algorithms. We extend and generalize the algorithm of Moser & Tardos in a variety of ways. We show tighter bounds on the complexity of the Moser-Tardos algorithm, particularly its parallel form. We also give a new, faster parallel algorithm for the LLL. We show that in some cases, the Moser-Tardos algorithm can converge even though the LLL itself does not apply; we give a new criterion (comparable to the LLL) for determining when this occurs. This leads to improved bounds for k-SAT and hypergraph coloring among other applications. We describe an extension of the Moser-Tardos algorithm based on partial resampling, and use this to obtain better bounds for problems involving sums of independent random variables, such as column-sparse packing and packet-routing. We describe a variant of the partial resampling algorithm specialized to approximating column-sparse covering integer programs, a generalization of set-cover. We also give hardness reductions and integrality gaps, showing that our partial resampling based algorithm obtains nearly optimal approximation factors. We give a variant of the Moser-Tardos algorithm for random permutations, one of the few cases of the LLL not covered by the original algorithm of Moser & Tardos. We use this to develop the first constructive algorithms for Latin transversals and hypergraph packing, including parallel algorithms. We analyze the distribution of variables induced by the Moser-Tardos algorithm. We show it has a random-like structure, which can be used to accelerate the Moser-Tardos algorithm itself as well as to cover problems such as MAX k-SAT in which we only partially avoid bad events.
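The Moser-Tardos resampling framework this abstract builds on is short enough to state in full for k-SAT: start from a uniformly random assignment, and while some clause is violated, resample the variables of one violated clause uniformly at random. A minimal sketch (clause-selection order, step cap, and names are our choices; the LLL guarantees fast termination only when clauses share few variables):

```python
import random

def moser_tardos_sat(clauses, n, rng, max_steps=100000):
    """Moser-Tardos resampling for SAT.

    clauses: list of tuples of signed 1-indexed literals (+i means x_i
    true, -i means x_i false). Returns a satisfying assignment as a list
    of bools, or None if the step cap is hit."""
    x = [rng.random() < 0.5 for _ in range(n + 1)]  # x[0] unused

    def satisfied(clause):
        return any(x[abs(lit)] == (lit > 0) for lit in clause)

    for _ in range(max_steps):
        violated = [c for c in clauses if not satisfied(c)]
        if not violated:
            return x[1:]
        # resample every variable of one violated clause, fresh and uniform
        for lit in violated[0]:
            x[abs(lit)] = rng.random() < 0.5
    return None
```

The striking content of the Moser-Tardos theorem is that, under the LLL's local conditions, the expected number of resamplings is linear in the number of clauses, turning the nonconstructive lemma into an algorithm.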
- Algorithms for Solving Linear and Polynomial Systems of Equations over Finite Fields with Applications to Cryptanalysis (2007-06-07). Bard, Gregory Van; Washington, Lawrence C; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.). This dissertation contains algorithms for solving linear and polynomial systems of equations over GF(2). The objective is to provide fast and exact tools for algebraic cryptanalysis and other applications. Accordingly, it is divided into two parts. The first part deals with polynomial systems. Chapter 2 contains a successful cryptanalysis of Keeloq, the block cipher used in nearly all luxury automobiles. The attack is more than 16,000 times faster than brute force, but queries 0.62 × 2^32 plaintexts. The polynomial systems of equations arising from that cryptanalysis were solved via SAT-solvers. Therefore, Chapter 3 introduces a new method of solving polynomial systems of equations by converting them into CNF-SAT problems and using a SAT-solver. Finally, Chapter 4 contains a discussion on how SAT-solvers work internally. The second part deals with linear systems over GF(2), and other small fields (and rings). These occur in cryptanalysis when using the XL algorithm, which converts polynomial systems into larger linear systems. We introduce a new complexity model and data structures for GF(2)-matrix operations. This is discussed in Appendix B but applies to all of Part II. Chapter 5 contains an analysis of "the Method of Four Russians" for multiplication and a variant for matrix inversion, which is faster than Gaussian Elimination by a factor of log n, and can be combined with Strassen-like algorithms. Chapter 6 contains an algorithm for accelerating matrix multiplication over small finite fields. It is feasible but the memory cost is so high that it is mostly of theoretical interest. Appendix A contains some discussion of GF(2)-linear algebra and how it differs from linear algebra in R and C. Appendix C discusses algorithms faster than Strassen's algorithm, and contains proofs that matrix multiplication, matrix squaring, triangular matrix inversion, LUP-factorization, general matrix inversion and the taking of determinants are equicomplex. These proofs are already known, but are here gathered into one place in the same notation.
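The GF(2) matrix operations this abstract concerns have an unusually compact concrete form: addition over GF(2) is XOR, so a whole matrix row can be packed into the bits of one machine word and row addition becomes a single XOR. The sketch below shows only this bit-packed baseline (the Method of Four Russians goes further, precomputing lookup tables of row combinations to shave a log n factor); all names here are ours.

```python
def gf2_matmul(A, B):
    """Multiply matrices over GF(2), with each row packed into a Python
    int (bit j of row i holds entry (i, j)).

    Row i of the product is the XOR of the rows of B selected by the set
    bits of row i of A, since addition in GF(2) is XOR. It is exactly
    this row-combination step that Four-Russians-style table lookups
    accelerate."""
    C = []
    for a in A:
        acc = 0
        j = 0
        while a:
            if a & 1:
                acc ^= B[j]
            a >>= 1
            j += 1
        C.append(acc)
    return C
```

With 64 columns per word, one XOR performs 64 field additions at once, which is why GF(2) linear algebra needs the dedicated complexity model the dissertation introduces rather than counting field operations naively.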
- ALGORITHMS FOR THE ALIGNMENT AND VISUALIZATION OF GENOME MAPPING DATA WITH APPLICATIONS TO STRUCTURAL VARIANT DETECTION (2015). Mendelowitz, Lee M.; Pop, Mihai; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.). Optical mapping and nanocoding are single-molecule restriction mapping systems for interrogating genomic structure at a scale that cannot currently be achieved using DNA sequencing methods. In these mapping experiments, large DNA molecules of approximately 500 kb are stretched, immobilized or confined, and then digested with a restriction endonuclease that cuts or nicks the DNA at its cognate sequence. The cut/nick sites are then observed through fluorescent microscopy, and machine vision is used to estimate the length of the DNA fragments between consecutive sites. This produces, for each molecule, a barcode-like pattern comprising the ordered list of restriction fragment lengths. Despite the promise of the optical mapping and nanocoding systems, there are few open source tools for working with the data generated by these platforms. Most analyses rely on custom in-house software pipelines using proprietary software. In this dissertation we present open source software tools for the alignment and visualization of restriction mapping data. In this work we first present a review of the optical mapping and nanocoding systems and provide an overview of the current methods for aligning and assembling consensus restriction maps and their related applications. Next, we present the Maligner software for the alignment of a query restriction pattern to a reference pattern. Alignment is a fundamental problem which is the first step in many downstream analyses, such as consensus map assembly or structural variant calling. The Maligner software features both a sensitive dynamic programming implementation and a faster but less sensitive index based mode of alignment. We compare the Maligner software to other available tools for the task of aligning a sequence contig assembly to a reference optical map and for aligning single molecule maps to a reference. Next, we present a portable data visualization web application for visualizing pairwise alignments of restriction maps. Finally, we present updates to the Maligner software to support partial alignments of single molecule maps, allowing for the clustering of compatible split map alignments to identify structural variants.
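The core alignment problem this abstract describes, matching two ordered lists of restriction fragment lengths while tolerating missed cut sites, has a natural dynamic-programming formulation: consecutive fragments may be merged (a missed site fuses its two neighbors), at a cost combining size discrepancy and missed sites. This is a generic sketch of that recurrence, not Maligner's actual scoring model; all names, penalties, and limits are ours.

```python
def align_maps(query, ref, miss_penalty=1.0, max_merge=3):
    """Minimal DP for globally aligning two ordered lists of restriction
    fragment lengths. At each step, up to max_merge consecutive fragments
    on either side may be merged (modelling missed cut sites); the cost of
    a match is the absolute size discrepancy plus miss_penalty per missed
    site. Returns the total cost of the best alignment."""
    INF = float("inf")
    n, m = len(query), len(ref)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            for di in range(1, min(max_merge, i) + 1):
                for dj in range(1, min(max_merge, j) + 1):
                    if D[i - di][j - dj] == INF:
                        continue
                    qsum = sum(query[i - di:i])   # merged query fragments
                    rsum = sum(ref[j - dj:j])     # merged reference fragments
                    cost = abs(qsum - rsum) + miss_penalty * (di + dj - 2)
                    D[i][j] = min(D[i][j], D[i - di][j - dj] + cost)
    return D[n][m]
```

For example, aligning [10, 50] against [10, 20, 30] merges the reference's 20 and 30 into one 50 kb fragment at the cost of a single missed site; sizing error models and local (partial) alignment, as in Maligner's split-map mode, are refinements on this same recurrence.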
- ItemAnalysis and correction of compositional bias in sparse sequencing count data(Springer Nature, 2018-11-06) Kumar, M. Senthil; Slud, Eric V.; Okrah, Kwame; Hicks, Stephanie C.; Hannenhalli, Sridhar; Bravo, Héctor CorradaCount data derived from high-throughput deoxyribonucleic acid (DNA) sequencing is frequently used in quantitative molecular assays. Due to properties inherent to the sequencing process, unnormalized count data is compositional, measuring relative and not absolute abundances of the assayed features. This compositional bias confounds inference of absolute abundances. Commonly used count data normalization approaches like library size scaling/rarefaction/subsampling cannot correct for compositional bias, or for any other relevant technical bias that is uncorrelated with library size. We demonstrate that existing techniques for estimating compositional bias fail with sparse metagenomic 16S count data and propose an empirical Bayes normalization approach to overcome this problem. In addition, we clarify the assumptions underlying frequently used scaling normalization methods in light of compositional bias, including scaling methods that were not designed directly to address it.
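The claim that library-size scaling cannot correct compositional bias can be seen in a toy example (the counts and taxa here are invented for illustration): if one feature's absolute abundance doubles between two samples, total-sum scaling shifts the scaled values of every other feature, even though their absolute abundances are unchanged.

```python
import numpy as np

# Hypothetical absolute abundances for five taxa in two samples.
# Sample B is identical to sample A except taxon 0, which doubled.
abs_a = np.array([100.0, 50.0, 50.0, 25.0, 25.0])
abs_b = abs_a.copy()
abs_b[0] *= 2

def library_size_scale(counts):
    """Total-sum scaling: divide each count by the sample's library size.

    The result is a composition (entries sum to 1), so it carries only
    relative, not absolute, abundance information.
    """
    return counts / counts.sum()

rel_a = library_size_scale(abs_a)
rel_b = library_size_scale(abs_b)

# Taxa 1-4 are unchanged in absolute terms, yet every one of their
# scaled values drops in sample B: the change in taxon 0 is spread
# across all features.  This is the compositional bias that library
# size scaling leaves uncorrected.
```

Sparsity compounds the problem: ratio-based scale-factor estimators need features observed in both samples, and sparse 16S count matrices leave few such features, which is the failure mode motivating the paper's empirical Bayes approach.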
- ItemAnalysis and Extension of Non-Commutative NTRU(2007-08-06) Truman, Kathryn; Washington, Lawrence C; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)We discuss the ring-based public-key cryptosystem known as non-commutative NTRU. The original system is defined over the group ring R = Z[D_N] (where D_N is the dihedral group of order 2N) and uses a commutative subring R_0 = {a in R | Y a = a Y}, where Y is an element of order two in D_N. This system was broken by Coppersmith. To do this he uses properties of the subset R_1 = { a in R | Y a = - a Y }. Using R_1 and R_0 he is able to create a 'fake' private key, which in turn allows him to construct a map d: R -> R that breaks the system. The present discussion first analyzes the original system and the attack on it. We also determine which groups the original system can be defined over, and therefore when Coppersmith's attack will work. Second, we extend the system to other group rings, over groups with two non-commuting generators whose orders are primes larger than two. We still work with the ring Z[G] and the subring of elements that commute with Y, the generator of smaller order. We then extend the attack on the system, and this is where the key difference arises. We develop the representation theory of these more general groups so that we can break the system. To break the system we also need to look at the subsets of Z[G] on which conjugation by Y (of order k) multiplies elements by a k^th root of unity. We denote these subsets by R_1, R_2, ..., R_{k-1}. One of the main results is to show that these R_j are principal R_0 modules. This allows us to define a similar map d: R -> R. Since a primitive k^th root of unity does not always exist modulo q, it is sometimes necessary to work in an extension ring to break the system.
Once d: R -> R is constructed, however, it maps into the original group ring, which allows us to break the system. Finally, we give a few examples.
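The subring R_0 and subset R_1 from the original system can be made concrete with a small computation in Z[D_N]. The sketch below is an illustrative assumption of ours, not code from the thesis: it encodes a group-ring element as a dict mapping (i, e) to its integer coefficient, where (i, e) stands for X^i Y^e under the dihedral relations X^N = 1, Y^2 = 1, Y X = X^{-1} Y, and checks that X^2 + X^{-2} commutes with Y (so lies in R_0) while X^2 - X^{-2} anti-commutes (so lies in R_1). The choice N = 7 and the test elements are arbitrary.

```python
N = 7  # a small illustrative dihedral group D_7 (order 2N = 14)

def mul(a, b):
    """Multiply two elements of Z[D_N].

    Keys (i, e) encode X^i Y^e.  Moving X^j past Y uses the dihedral
    relation Y X^j = X^{-j} Y, so (X^i Y^e)(X^j Y^f) equals
    X^{i + (-1)^e j} Y^{e+f mod 2}.
    """
    out = {}
    for (i, e), ca in a.items():
        for (j, f), cb in b.items():
            j2 = j if e == 0 else -j
            key = ((i + j2) % N, (e + f) % 2)
            out[key] = out.get(key, 0) + ca * cb
    return {k: c for k, c in out.items() if c != 0}

Y = {(0, 1): 1}

def commutes_with_Y(a):
    """True iff a is in R_0 = {a : Y a = a Y}."""
    return mul(Y, a) == mul(a, Y)

def anticommutes_with_Y(a):
    """True iff a is in R_1 = {a : Y a = -a Y}."""
    return mul(Y, a) == {k: -c for k, c in mul(a, Y).items()}

# X^2 + X^{-2} lies in R_0; X^2 - X^{-2} lies in R_1.
r0_elt = {(2, 0): 1, (N - 2, 0): 1}
r1_elt = {(2, 0): 1, (N - 2, 0): -1}
```

In the thesis's extension, conjugation by a generator Y of prime order k > 2 multiplies elements of R_j by a k-th root of unity rather than by ±1, which is why an extension ring of Z/qZ can be needed; the sketch above only covers the original k = 2 dihedral case.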
- ItemAn Analysis of Chaum's Voter-Verifiable Election Scheme(2005-05-03) Staub, Julie Ann; Katz, Jonathan; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)Chaum's voter-verifiable election scheme introduces a new direction for electronic voting. The scheme eliminates the need to trust any machinery or authority, relying instead on mathematical proof to certify the trustworthiness of an election. Audits at every stage of the election create transparency that should restore voter confidence in the election process. We survey and categorize the field of electronic voting, and place Chaum's scheme within this context. We then define a framework of formal requirements for a voting system. We present Chaum's scheme itself and give an analysis. Based on our technical analysis, we find the scheme to be secure. However, after considering other implementation concerns, we identify several minor obstacles limiting its widespread adoption in today's elections. Despite this, we believe the substance of the scheme is promising, and that an improved, simpler variant may better suit future elections.