# Mathematics Theses and Dissertations


## Browse

### Recent Submissions

#### Ensemble Kalman Inverse Parameter Estimation for Human and Nature Dynamics Two (2023)

*Karpovich, Maia; Kalnay, Eugenia; Mote, Safa; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)*

Since the widespread development of agriculture 10,000 years ago, and particularly since the Industrial Revolution beginning in the 18th century, the coupled Earth and Human systems have experienced transformative change. The world's population and Gross Domestic Product have each increased by factors of at least eight in the last two centuries, powered by the intensive use of fossil energy and fossil water. This has had dramatic repercussions for the stability of the Earth system, which is threatened by Human system activities such as habitat destruction, global warming, and depletion of Regenerating and Nonrenewable energy resources that increasingly alter environmental feedbacks. To analyze these changes, we have developed the second generation of the Human and Nature Dynamics model, HANDY2. HANDY2 is designed to simulate the dynamics of energy resources and population over the Industrial era from 1700 to 2200, flexibly incorporating real-world observations of population and energy consumption in an expanded suite of mechanisms that track capital investment, labor force allocation, class mobility, and extraction and production technologies. The use of automated Ensemble Kalman Inversion (EnKI) estimation for HANDY2's parameters allows us to accurately capture the rapid 20th-century rise in the use of phytomass and fossil fuels, as well as the global enrichment of Elites that puts pressure on natural resources and Commoners. EnKI-derived HANDY2 ensembles project that current world policies may lead to a collapse in the world's population by 2200, caused by rapid depletion of resources.
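For background, the basic ensemble Kalman inversion update can be sketched on a toy problem. The forward map, data, and noise level below are illustrative assumptions following the standard iteration of Iglesias, Law, and Stuart, not the HANDY2 setup:

```python
import numpy as np

def enki_step(theta, y, forward, gamma, rng):
    """One Ensemble Kalman Inversion update: shift each ensemble
    member toward agreement with the (perturbed) data y."""
    g = np.array([forward(t) for t in theta])        # forward evaluations, shape (J, m)
    t_mean, g_mean = theta.mean(0), g.mean(0)
    dt, dg = theta - t_mean, g - g_mean
    J = theta.shape[0]
    c_tg = dt.T @ dg / J                             # cross-covariance C^{theta,G}
    c_gg = dg.T @ dg / J                             # output covariance C^{G,G}
    kalman = c_tg @ np.linalg.inv(c_gg + gamma)      # Kalman-type gain
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), gamma, size=J)
    return theta + (y_pert - g) @ kalman.T           # updated ensemble

rng = np.random.default_rng(0)
A = np.array([[1.0, 2.0], [3.0, 1.0], [0.5, 0.5]])   # toy linear model G(theta) = A theta
truth = np.array([1.0, -0.5])
gamma = 0.01 * np.eye(3)                             # observation noise covariance
y = A @ truth + rng.multivariate_normal(np.zeros(3), gamma)

theta = rng.normal(0, 2, size=(50, 2))               # initial ensemble of 50 members
for _ in range(20):
    theta = enki_step(theta, y, lambda t: A @ t, gamma, rng)

misfit = np.linalg.norm(A @ theta.mean(0) - y)       # data misfit of the ensemble mean
```

For a linear forward map, the iteration drives the ensemble mean toward the least-squares fit to the data, which is why the misfit shrinks with each step.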
However, this collapse can be prevented by a combination of actions taken to support voluntary family planning, lower economic inequality, and, most importantly, invest in the rapid expansion of Renewable energy extraction.

#### TWO-PHASE FLOW OF COMPRESSIBLE VISCOUS DIBLOCK COPOLYMER FLUID (2023)

*Ye, Anqi; Trivisa, Konstantina; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)*

A diblock copolymer is a linear-chain molecule consisting of two types of monomer. Mathematical models for diblock copolymers can aid researchers in studying the material properties of products such as upholstery foam, adhesive tape, and asphalt additives. Such models incorporate a variety of factors, including concentration difference, connectivity of the subchains, and chemical potential. We consider a flow of two macroscopically immiscible, viscous, compressible diblock copolymer fluids. We first derive this model on the basis of a local dissipation inequality. Second, we prove that weak solutions to this model exist. The proof of existence relies on constructing an approximating system by means of time discretization and vanishing dissipation. We then prove that the solutions to these approximating schemes converge to a solution of the original problem. We also discuss the large-time behavior under a regularity assumption on the limit.

#### Student Choice Among Large Group, Small Group, and Individual Learning Environments in a Community College Mathematics Mini-Course (1986)

*Baldwin, Eldon C.; Davidson, Neil; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, MD)*

This study describes the development and implementation of a model for accommodating preferences for alternative instructional environments.
The study was stimulated by the existence of alternative instructional modes and the absence of a procedure for accommodating individual student differences that utilized these modes. The Choice Model evolved during a series of pilot studies employing three instructional modes: individual (IM), small group (SGM), and large group (LGM). Three instructors were each given autonomy in designing one learning environment, each utilizing her/his preferred instructional mode. One section of a mathematics course was scheduled for one hundred students. On the first day, the class was divided alphabetically into three orientation groups, each assigned to a separate classroom. During the first week, the instructors described their respective environments to each group, using videotaped illustrations from a previous semester. Environmental preferences were then assessed using take-home student questionnaires. In the final pilot, fifty-five students were oriented to all three environments. Each student was then assigned to his/her preferred learning environment. The distribution of environmental preferences was 24% for IM, 44% for SGM, and 33% for LGM. The following student characteristics were also investigated: 1) sex, 2) age, 3) academic background, 4) mathematics achievement, 5) mathematics attitude, 6) mathematics interest, 7) self-concept, 8) communication apprehension, and 9) interpersonal relations orientation. This investigation revealed several suggestive preference patterns: 1) Females and students with weak academic backgrounds tended to prefer the SGM environment. 2) Students with higher levels of communication apprehension tended to avoid the SGM environment. 3) New college students and students with negative mathematics attitudes tended to avoid the IM environment. 4) Students with higher grades in high school tended to prefer the LGM environment.
Student preferences were successfully accommodated, and student evaluations of the Choice Model were generally positive. The literature suggests that opportunities to experience choice in education tend to enhance student growth and development; adaptation and institutionalization of the Model were addressed from this perspective. Additional studies with larger samples were recommended to further investigate environmental preferences with respect to student and instructor characteristics of gender, age, race, socioeconomic background, academic background, and learning style.

#### DISTRIBUTION OF PRIME ORDER IDEAL CLASSES OF QUADRATIC CLASS GROUPS (2023)

*Wedige, Melanka Saroad; Ramachandran, Niranjan; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)*

The Cohen-Lenstra heuristics predict that for any odd prime k, the k-part of quadratic class groups occurs randomly on the space of k-groups with respect to a natural probability measure. However, apart from the first moments of the 3-torsion part of quadratic class groups, consequences of these heuristics remain highly conjectural. The quadratic ideal classes have geometric representations on the modular curve: as CM-points in the case of negative discriminants and as closed primitive geodesics in the case of positive discriminants. We mainly focus on the asymptotic distribution of these geometric objects. As motivation, it is seen that in the case of imaginary quadratic fields, knowledge of the (joint) distribution of k-order CM-points leads, in general, to the resolution of the Cohen-Lenstra conjectures on moments of the k-part of class groups. As a first step, inspired by the work of Duke, Hough conjectured that the k-order CM-points are equidistributed on the modular curve. Although the case k = 3 was resolved by Hough himself, the case k > 3 remains open. In this dissertation, we revisit Hough's conjectures, with empirical evidence.
We reprove the conjecture for k = 3 and, more strongly, show that the result holds along certain subfamilies of imaginary quadratic fields defined by local behaviors of their discriminants. In addition, we study the case k > 3. We introduce a heuristic model and show that it agrees with Hough's conjectures. We also show that the difference between the actual asymptotics and the heuristic model reduces to the distribution of solutions to certain quadratic congruences. Then, again inspired by Duke's work, we investigate an analog for real quadratic fields. Backed by empirical evidence, we conjecture the asymptotic behavior of the length of k-order geodesics on the modular curve. In addition, based on a theorem of Siegel and its proof, we prove results that may shed light on a probable direction for proving these conjectures.

#### GRAPH-BASED DATA FUSION WITH APPLICATIONS TO MAGNETIC RESONANCE IMAGING (2023)

*Emidih, Jeremiah; Czaja, Wojciech; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)*

This thesis is concerned with the development and applications of data fusion methods in the context of Laplacian eigenmaps. Multimodal data can be challenging to work with using classical statistical and signal processing techniques. Graphs provide a reference frame for the study of otherwise structureless data. We combine spectral methods on graphs and geometric data analysis in order to create a novel data fusion model.
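For context, the core single-modality Laplacian-eigenmaps computation that such graph-based methods build on can be sketched as follows (a generic textbook version on synthetic data, not the fusion model itself):

```python
import numpy as np

def laplacian_eigenmaps(X, k=5, dim=2):
    """Embed rows of X into `dim` dimensions using the graph Laplacian
    of a symmetrized k-nearest-neighbor graph."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                 # skip self at position 0
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                                # symmetrize the adjacency
    L = np.diag(W.sum(1)) - W                             # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)                        # eigenvalues in ascending order
    # drop the constant eigenvector (eigenvalue ~0); keep the next `dim` ones
    return vals, vecs[:, 1:dim + 1]

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))          # 40 points in 3D, embedded down to 2D
vals, Y = laplacian_eigenmaps(X)
```

The smallest Laplacian eigenvalue is always zero (the constant vector), so the embedding starts from the second eigenvector.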
We also provide examples of applications of this model to bioinformatics, color transformation and superresolution, and magnetic resonance imaging.

#### A Multifaceted Quantification of Bias in Large Language Models (2023)

*Sotnikova, Anna; Daumé III, Hal; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)*

Language models are rapidly developing, demonstrating impressive capabilities in comprehending, generating, and manipulating text. As they advance, they unlock diverse applications across various domains and become increasingly integrated into our daily lives. Nevertheless, these models, trained on vast and unfiltered datasets, come with a range of potential drawbacks and ethical issues. One significant concern is the potential amplification of biases present in the training data, generating stereotypes and reinforcing societal injustices when language models are deployed. In this work, we propose methods to quantify biases in large language models. We examine stereotypical associations for a wide variety of social groups characterized by both single and intersectional identities. Additionally, we propose a framework for measuring stereotype leakage across different languages within multilingual large language models. Finally, we introduce an algorithm that allows us to optimize human data collection in conditions of high levels of human disagreement.

#### Modeling the fracture of polymer networks (2023)

*Tao, Manyuan; Cameron, Maria; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)*

This dissertation is devoted to modeling the fracture of highly elastic materials that consist of polymer networks, such as elastomers and hydrogels. These polymer materials are composed of long polymer chains of repeating molecular units, which are crosslinked to form a three-dimensional network structure.
A polymer network fractures by breaking covalent bonds, but the experimentally measured strength of a polymer network is orders of magnitude lower than the strength of covalent bonds. In this dissertation, we develop mesoscale models to understand which ingredients are necessary to produce the large reduction in the strength of polymer networks observed in experiments. We hypothesize that the large reduction in strength is caused by statistical variation in the lengths of polymer chains together with a J-shaped stress-stretch relationship. A polymer chain carries entropic forces for most of its extension and carries covalent forces only over a narrow range of extension. As a result, the statistical distribution of chain lengths causes only a small fraction of polymer chains to be highly stressed when the network is near fracture. We test this hypothesis using two mesoscale models: an idealized parallel-chain model and a two-dimensional network model. Both models assume a statistical distribution for the lengths of polymer chains. Polymer chains are represented by freely-jointed chains, which feature a nonlinear J-shaped stress-stretch relationship. The parallel-chain model allows for simple calculations and is amenable to analysis by analytical tools. The network model accounts for the effect of stress concentration and is amenable to numerical simulation. Our models show that the combination of a J-shaped stress-stretch relationship and a distribution of chain lengths leads to a large reduction in strength, while keeping the sample-to-sample variability in strength small. The large scatter in chain lengths causes a reduction in strength by up to two orders of magnitude, which explains a portion of the giant discrepancy between the experimentally measured strength of hydrogels and the strength of covalent bonds. Furthermore, our models demonstrate a power-law relationship between the strength and the scatter in chain lengths.
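The J-shaped entropic force-extension behavior of a freely-jointed chain comes from the Langevin function; a minimal numerical illustration (the generic FJC formula in dimensionless units, not the dissertation's exact parameterization):

```python
import math

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x: fractional extension of a
    freely-jointed chain under dimensionless force x = f*b/(kB*T)."""
    if abs(x) < 1e-6:
        return x / 3.0                # series expansion near 0 avoids cancellation
    return 1.0 / math.tanh(x) - 1.0 / x

def fjc_force(stretch, lo=1e-9, hi=1e4):
    """Invert the Langevin function numerically: the dimensionless force
    needed for a given fractional extension in (0, 1)."""
    for _ in range(200):              # bisection works since L is monotone
        mid = 0.5 * (lo + hi)
        if langevin(mid) < stretch:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# force stays modest for most of the extension, then shoots up near full
# stretch -- the 'J' shape that concentrates stress on the shortest chains
forces = [fjc_force(s) for s in (0.1, 0.5, 0.9, 0.99)]
```

The steep rise near full extension is why only the chains closest to their contour length carry large (covalent-scale) forces.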
We provide an analytical derivation of the power law by taking advantage of the simplicity of the parallel-chain model. In addition to studying macroscopic fracture properties, we investigate the microscopic characteristics and the breaking mechanism of the polymer network using the network model. By examining the characteristics of shortest paths, we find that the links traversed by a large number of shortest paths are more likely to break. Finally, we connect the microstructure of the network to its macroscopic mechanical properties: the strength of the network correlates with the growth of holes during deformation.

#### A VARIATIONAL APPROACH TO CLUSTERING WITH LIPSCHITZ DECISION FUNCTIONS (2023)

*Zhou, Xiaoyu; Slud, Eric; Mathematical Statistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)*

This dissertation proposes an objective-function-based clustering approach that uses Lipschitz functions to represent the clustering function. We establish mathematical properties, including two optimality conditions and a uniqueness result; statistical properties, including two consistency results; and computational developments. This work builds on existing work on Lipschitz classifiers, proceeding from classification to clustering and covering additional theoretical and computational aspects. The mathematical content strongly suggests further analysis of the method, and the general objective function may be of independent interest.

#### The Effect of Behavioral Objectives on Measures of Learning and Forgetting on High School Algebra (1972)

*Loh, Elwood Lockert; Walbesser, Henry H.; Mathematics and Education; Digital Repository at the University of Maryland; University of Maryland (College Park, MD)*

During the past decade, the number of educators who advocate the use of behavioral objectives in education has increased.
The increase in the number of advocates of behavioral objectives has been followed by an increasing awareness of the need for empirical research to give credence to such a viewpoint. At present, there is not a substantial number of research studies in which behavioral objectives have been used as a manipulated variable. In previously reported learning studies in which behavioral objectives were used as an experimental variable, measures of learning and measures of forgetting have been derived from achievement scores. The results obtained in the learning studies have not been uniform in supporting the use of behavioral objectives; however, the results obtained in forgetting studies have consistently supported their use. This two-part study investigated the effect of presenting behavioral objectives to students during the initial phase of a learning program. Six criterion variables were observed: index of learning, rate of learning, index of forgetting, rate of forgetting, index of retention, and index of efficiency. Two two-year Algebra One classes with a total of 52 students were randomly partitioned into two treatment groups for the learning phase of the study. The classes were further randomly partitioned into three retention groups for the forgetting phase of the study. The instructional materials were programmed within the framework of a learning hierarchy. The learning hierarchy facilitated a procedure for separating behaviors not yet possessed by a student from behaviors previously acquired: students were presented with preassessment tasks prior to instruction for each behavior in the hierarchy. If the subject's response to the preassessment task indicated that he possessed the behavior, instruction was not given for that behavior. If the response indicated that the subject had not previously acquired the behavior, instruction was presented.
The measures of the time needed to acquire each behavior were subsequently used to compute the six experimental measures. Three retention periods of 7 calendar days, 14 calendar days, and 15 to 21 calendar days were used for the forgetting phase of the study. The results of the three retention periods were pooled for the two forgetting measures, the index of retention, and the index of efficiency. The data collected in the study were analyzed by six separate tests using a one-way analysis of variance, with a 0.05 level of significance for each test. The following results were obtained:

1. The index of learning for students who were informed of behavioral objectives during the initial phases of the learning program was not greater than the index of learning for students who were not so informed.
2. The rate of learning for students who were informed of behavioral objectives during the initial phases of the learning program was not greater than the rate of learning for students who were not so informed.
3. The index of forgetting for students who were informed of behavioral objectives during the initial phases of the learning program was not less than the index of forgetting for students who were not so informed.
4. The rate of forgetting for students who were informed of behavioral objectives during the initial phases of the learning program was not less than the rate of forgetting for students who were not so informed.
5. The index of retention for students who were informed of behavioral objectives during the initial phases of the learning program was not greater than the index of retention for students who were not so informed.
6. The index of efficiency for students who were informed of behavioral objectives during the initial phases of the learning program was not greater than the index of efficiency for students who were not so informed.
It was concluded that the results of the study do not support the use of behavioral objectives as a procedure for improving either measures of learning or measures of forgetting that are functions of the time needed to reach criterion in a learning program using programmed instruction to teach an algebraic topic to below-average mathematics students in senior high school. It was recommended that further research be conducted to determine a reliable and valid procedure for measuring learning and forgetting. It was also recommended that alternatives to programmed instruction be considered for learning and forgetting studies.

#### DISSECTING TUMOR CLONALITY IN LIVER CANCER: A PHYLOGENY ANALYSIS USING COMPUTATIONAL AND STATISTICAL TOOLS (2023)

*Kacar, Zeynep; Slud, Eric ES; Levy, Doron DL; Mathematical Statistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)*

Liver cancer is a heterogeneous disease characterized by extensive genetic and clonal diversity. Understanding the clonal evolution of liver tumors is crucial for developing effective treatment strategies. This dissertation aims to dissect tumor clonality in liver cancer using computational and statistical tools, with a focus on phylogenetic analysis. Through advancements in defining and assessing phylogenetic clusters, we gain a deeper understanding of the survival disparities and clonal evolution within liver tumors, which can inform the development of tailored treatment strategies and improve patient outcomes. The thesis begins by providing an overview of the sources of heterogeneity in liver cancer and of the relevant data types, from Whole-Exome (WEX) and RNA sequencing (RNA-seq) read counts by gene to derived quantities such as Copy Number Alterations (CNAs) and Single Nucleotide Variants (SNVs). Various tools for deriving copy numbers are discussed and compared, and the comparison of survival distributions is discussed.
The central data analyses of the thesis concern the derivation of distinct clones and clustered phylogeny types from the basic genomic data in three independent cancer cohorts: TCGA-LIHC, TIGER-LC, and NCI-MONGOLIA. The SMASH (Subclone multiplicity allocation and somatic heterogeneity) algorithm is introduced for clonality analysis, followed by a discussion of clustering analysis of nonlinear tumor evolution trees and the construction of phylogenetic trees for the liver cancer cohorts. Drivers of tumor evolution and the immune-cell micro-environment of tumors are also explored. In this research, we employ survival analysis tools to investigate and document survival differences between groups of subjects defined from phylogenetic clusters. Specifically, we introduce the log-rank test and its modifications for generic right-censored survival data, which we then apply to survival follow-up data for the subjects in the studied cohorts, clustered based on their genomic data. The final chapter of this thesis takes a significant step forward by extending an existing methodology for covariate adjustment in the two-sample log-rank test to a K-sample scenario, with a specific focus on the already defined phylogeny cluster groups. This extension is not straightforward because the computation of the test statistic for K samples and its asymptotic null distribution do not follow directly from the two-sample case. Using these extended tools, we conduct an illustrative data analysis with real data from the TIGER-LC cohort, which comprises subjects with analyzed and clustered genomic data, leading to defined phylogenetic clusters associated with two different types of liver cancer.
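For reference, the classical two-sample log-rank statistic that the final chapter extends can be sketched in its standard textbook form (variable names are illustrative; this is not the covariate-adjusted K-sample version developed in the thesis):

```python
import numpy as np

def logrank_z(times1, events1, times2, events2):
    """Two-sample log-rank Z statistic for right-censored data.
    times*: follow-up times; events*: 1 = event observed, 0 = censored."""
    t1, e1 = np.asarray(times1, float), np.asarray(events1)
    t2, e2 = np.asarray(times2, float), np.asarray(events2)
    event_times = np.unique(np.concatenate([t1[e1 == 1], t2[e2 == 1]]))
    num, var = 0.0, 0.0
    for t in event_times:
        n1, n2 = (t1 >= t).sum(), (t2 >= t).sum()   # at risk just before t
        d1 = ((t1 == t) & (e1 == 1)).sum()          # events in group 1 at t
        d2 = ((t2 == t) & (e2 == 1)).sum()
        n, d = n1 + n2, d1 + d2
        num += d1 - d * n1 / n                      # observed minus expected
        if n > 1:                                   # hypergeometric variance term
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return num / np.sqrt(var)

# sanity check: two identical groups give observed = expected, so Z = 0
z = logrank_z([2, 4, 6, 8], [1, 1, 0, 1], [2, 4, 6, 8], [1, 1, 0, 1])
```

Under the null hypothesis of equal hazards, Z is asymptotically standard normal; the K-sample extension replaces the scalar statistic with a quadratic form.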
By applying the extended methodology to this dataset, we aim to effectively assess and validate the survival curves of the defined clusters.

#### NEW STATISTICAL METHODS FOR HIGH-DIMENSIONAL INTERCONNECTED DATA WITH UNIFORM BLOCKS (2023)

*Yang, Yifan; Chen, Shuo; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)*

Empirical analyses of high-dimensional biomedical data, including genomics, proteomics, microbiome, and neuroimaging data, consistently reveal the presence of strong modularity in the dependence patterns. In these analyses, highly correlated features often form a few distinct communities or modules, which can be interconnected with each other. While the interconnected community structure has been extensively studied in biomedical research (e.g., gene co-expression networks), its potential to assist in statistical modeling and inference remains largely unexplored. To address this research gap, we propose novel statistical models and methods that capitalize on the prevalent community structures observed in large covariance and precision matrices derived from high-dimensional, interconnected biomedical data. The first objective of this dissertation is to delve into the algebraic properties of the proposed interconnected community structures at the population level. Specifically, this pattern partitions the population covariance matrix into uniform (i.e., equal variances and covariances) blocks. To accomplish this objective, we introduce a block Hadamard product representation in Chapter 2, which relies on two lower-dimensional "coordinate" matrices and a pre-specified vector. This representation enables explicit expressions for the square or power, determinant, inverse, eigendecomposition, canonical form, and other matrix functions of the original larger-dimensional matrix in terms of these lower-dimensional "coordinate" matrices.
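The uniform-block structure can be illustrated numerically: matrices with equal variances and covariances within each block are closed under inversion, so the precision matrix inherits the same pattern. A small hypothetical example (generic construction, not the thesis code):

```python
import numpy as np

def uniform_block_cov(sizes, diag, within, between):
    """Build a covariance matrix whose (k, l) block is constant:
    variance diag[k] on the diagonal, covariance within[k] inside
    block k, and between[k][l] across blocks k != l."""
    K = len(sizes)
    rows = []
    for k in range(K):
        row = []
        for l in range(K):
            if k == l:
                B = np.full((sizes[k], sizes[k]), within[k])
                np.fill_diagonal(B, diag[k])
            else:
                B = np.full((sizes[k], sizes[l]), between[k][l])
            row.append(B)
        rows.append(row)
    return np.block(rows)

sigma = uniform_block_cov([3, 4], diag=[2.0, 3.0], within=[0.5, 1.0],
                          between=[[0.0, 0.3], [0.3, 0.0]])
omega = np.linalg.inv(sigma)

# the inverse is again uniform-block: all off-diagonal entries of the
# first diagonal block of omega are (numerically) equal
block = omega[:3, :3]
off = block[~np.eye(3, dtype=bool)]
```

Closure under inversion follows because such matrices are exactly those invariant under permutations within each block, a property the inverse inherits; this is what makes closed-form "coordinate" computations possible.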
Estimating a covariance matrix is central to high-dimensional data analysis. Our second objective is to consistently estimate a large covariance or precision matrix having an interconnected community structure with uniform blocks. In Chapter 3, we derive the best unbiased estimators for covariance and precision matrices in closed form and provide theoretical results on their asymptotic properties. Our proposed method improves the accuracy of covariance and precision matrix estimation and demonstrates superior performance compared to competing methods in both simulations and real data analyses. In Chapter 4, our goal is to investigate the effects of alcohol intake (as an exposure) on metabolomic outcome features. However, similar to other omics data, metabolomic outcomes often consist of numerous features that exhibit a structured dependence pattern, such as a co-expression network with interconnected modules. Effectively addressing this dependence structure is crucial for accurate statistical inference and the identification of alcohol-intake-related metabolomic outcomes. Nevertheless, incorporating structured dependence patterns into multivariate outcome regression models remains difficult for accurate estimation and inference. To bridge this gap, we propose a novel multivariate regression model that accounts for the correlations among outcome features using a network structure composed of interconnected modules. Additionally, we derive closed-form estimators of the regression parameters and provide inference tools. Extensive simulation analysis demonstrates that our approach yields much-improved sensitivity with a well-controlled discovery rate when benchmarked against existing multivariate regression models. Confirmatory factor analysis (CFA) models play a crucial role in revealing underlying latent common factors within sets of correlated variables.
However, their implementation often relies on a strong prior theory to categorize variables into distinct classes, which is frequently unavailable (e.g., in omics data analysis scenarios). To address this limitation, in Chapter 5, we propose a novel strategy based on network analysis that allows data-driven discovery to substitute for the lacking prior theory. By leveraging the detected interconnected community structure, our approach offers an elegant statistical interpretation and yields closed-form uniformly minimum variance unbiased estimators for all unknown matrices. To evaluate the effectiveness of our proposed estimation procedure, we compare it to conventional numerical methods and thoroughly validate it through extensive Monte Carlo simulations and real-world applications.

#### Adversarial Robustness and Fairness in Deep Learning (2023)

*Cherepanova, Valeriia; Goldstein, Tom; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)*

While deep learning has led to remarkable advancements across various domains, the widespread adoption of neural network models has brought forth significant challenges, such as vulnerability to adversarial attacks and model unfairness. These challenges have profound implications for privacy, security, and societal impact, requiring thorough investigation and the development of effective mitigation strategies. In this work we address both of these challenges. We study the adversarial robustness of deep learning models and explore defense mechanisms against poisoning attacks. We also explore the sources of algorithmic bias and evaluate existing bias mitigation strategies in neural networks.
Through this work, we aim to contribute to the understanding and enhancement of both the adversarial robustness and the fairness of deep learning systems.

#### Decentralized Transportation Model In Vehicle Sharing (2023)

*Li, Ying; Ryzhov, Ilya; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)*

This dissertation introduces the concept of decentralization to address the rebalancing challenges in bike-sharing systems and proposes a model known as the Decentralized Route Assignment Problem (DRAP) under specific assumptions. The primary contributions of this research include the formulation of the DRAP and the derivation of theoretical results that facilitate its transformation into a lower-dimensional global optimization problem. This transformation enables efficient exploration using modern search methods. An extended version of DRAP, called DRAP-EA, is also proposed for further analysis by introducing more agents into the system. Various solution approaches, such as branch-and-cut, hill climbing, and simulated annealing, are explored and customized to enhance their performance in the context of rebalancing. Two simulated annealing methods, Gurobi with warm start, and an extension of the local search algorithm are implemented on 24 instances derived from a comprehensive case study for experimental evaluation. The experimental results consistently demonstrate the superior performance of the simulated annealing methods.
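As a generic illustration of the simulated annealing approach (standard Metropolis-style acceptance on a toy objective; the cost function and neighborhood below are placeholders, not the DRAP-specific moves):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=2000, seed=42):
    """Minimize `cost` by accepting worse neighbors with probability
    exp(-delta/T), where the temperature T decays geometrically."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    T = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        # always accept improvements; accept uphill moves with Metropolis prob.
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling                       # geometric cooling schedule
    return best, fbest

# toy objective: squared distance to 3 on the integers, with +/-1 moves
best, fbest = simulated_annealing(
    cost=lambda x: (x - 3) ** 2,
    neighbor=lambda x, rng: x + rng.choice((-1, 1)),
    x0=20,
)
```

High early temperature lets the search escape poor regions; as T shrinks, the acceptance rule degenerates to pure hill descent, which is why cooling schedules matter in practice.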
Furthermore, a comparison between SA and SA-PS is conducted, and the obtained solutions are visualized to help further explore the spatial patterns and traffic flows within the bike-sharing system.

#### Statistical Network Analysis of High-Dimensional Neuroimaging Data With Complex Topological Structures (2023)

*Lu, Tong; Chen, Shuo SC; Mathematical Statistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)*

This dissertation contains three projects that collectively tackle statistical challenges in the field of high-dimensional brain connectome data analysis and enhance our understanding of the intricate workings of the human brain. Project 1 proposes a novel network method for detecting brain-disease-related alterations in voxel-pair-level brain functional connectivity with spatial constraints, thus improving spatial specificity and sensitivity. Its effectiveness is validated through extensive simulations and real data applications in nicotine addiction and schizophrenia studies. Project 2 introduces a multivariate multiple imputation method specifically designed for high-dimensional voxel-level neuroimaging data, based on Bayesian models and Markov chain Monte Carlo processes. On both synthetic data and real neurovascular water exchange data extracted from a neuroimaging dataset in a schizophrenia study, our method shows high imputation accuracy and computational efficiency. Project 3 develops a multi-level network model based on graph combinatorics that captures vector-to-matrix associations between brain structural imaging measures and functional connectomic networks. The validity of the proposed model is justified through extensive simulations and a real structure-function imaging dataset from UK Biobank.
These three projects contribute innovative methodologies and insights that advance neuroimaging data analysis, including improvements in spatial specificity, statistical power, imputation accuracy, and computational efficiency in revealing the brain's complex neurological patterns.

#### Liquid Crystal Variational Problems: Modeling, Numerical Analysis, and Computation (2023)

*Bouck, Lucas; Nochetto, Ricardo H; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)*

This dissertation is concerned with the numerical analysis and computation of variational models related to liquid crystals (LCs) and liquid crystal polymer networks (LCNs), as well as the modeling of LCNs. We first present a finite element method and a projection-free gradient flow to minimize the Frank-Oseen energy of nematic liquid crystals. The Frank-Oseen model is a continuum model that represents the liquid crystal with a vector field that must satisfy a nonconvex unit-length constraint pointwise. We prove convergence of minimizers of the discrete problem to minimizers of the continuous problem using the framework of Gamma-convergence. The convergence analysis places no restrictions on the elastic constants and requires no regularity of the solution beyond that needed for the existence of minimizers. Due to the low regularity requirement, the method can capture point defects. We also propose a projection-free gradient flow algorithm to compute critical points of the discrete energy. The gradient flow is conditionally stable under a mild restriction on the numerical parameters. We finally present computations illustrating the influence of the elastic constants on point defects, as well as the influence of external magnetic fields. The second part of this dissertation is concerned with the modeling, numerical analysis, and computation of thin LCNs.
We first begin from a classical 3D energy of LCN and use Kirchhoff-Love asymptotics to derive a reduced 2D membrane model. We then prove many properties of the membrane model, including a pointwise metric condition that zero energy states must satisfy, and construct a formal method to approximate configurations of LCN from higher degree defects that approximately match this pointwise condition. To conclude, we develop a finite element method to minimize the stretching energy. A key component of the discrete energy is a regularization inspired by a bending energy for LCN, which is also derived in this dissertation. We prove convergence of minimizers of the discrete problem to zero energy states of the continuous problem in the spirit of Gamma-convergence. To compute critical points of the discrete problem, we propose a fully implicit gradient flow with Newton sub-iteration and study its superlinear convergence under suitable assumptions. We finish with many simulations that highlight interesting features of LCNs, including configurations arising from LC defects and nonisometric origami.

Item A combinatorial study of affine Deligne-Lusztig varieties(2023) Sadhukhan, Arghya; He, Xuhua; Adams, Jeffrey; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)We consider affine Deligne-Lusztig varieties $X_w(b)$ and certain unions $X(\mu,b)$ in the affine flag variety of a connected reductive group. They were first introduced by Rapoport to facilitate the study of mod-$p$ reduction of Shimura varieties and moduli spaces of shtukas. We improve upon certain existing results in the study of affine Deligne-Lusztig varieties by weakening the hypotheses under which they hold.
Such results include a description of generic Newton points in Iwahori double cosets in the loop group of a split reductive group, covering relations in the associated Iwahori-Weyl group, and a dimension formula for $X(\mu,b)$ in the case of a quasi-split group. As an application of the work on the generic Newton point formula, we obtain a description of the dimension of $X(\mu,b)$ associated with the maximal element $b$ in its natural range, under a mild hypothesis on $\mu$ but no further restrictions on the group.

Item Classification of Closed Conformally Flat Lorentzian 3-Manifolds with Unipotent Holonomy(2023) Lee, Nakyung; Melnick, Karin; Goldman, William; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)A conformally flat manifold is a manifold that is locally conformally equivalent to a flat affine space. In this thesis, we classify closed conformally flat Lorentzian manifolds of dimension three whose holonomy group is unipotent. More specifically, we show that such a manifold is finitely covered by either $S^2\times S^1$ or a parabolic torus bundle. Furthermore, we show that such a manifold is Kleinian, and is essential if and only if it can be covered by $S^2\times S^1$.

Item Proportional Hazards Model for Right Censored Survival Data with Longitudinal Covariates(2023) Shi, Yuyin; Ren, Joan Jian-Jian; Mathematical Statistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)The proportional hazards model is one of the most widely used tools in analyzing survival data. In medical and epidemiological studies, the interrelationship between the time-to-event variable and longitudinal covariates is often the primary research interest.
Thus, joint modeling of survival data and longitudinal data has received much attention in the statistical literature, but it is a considerably difficult problem: the survival time is censored, and the longitudinal covariate process is an unknown and only incompletely observed stochastic process. Up to now, all existing works have made parametric or semi-parametric assumptions on the longitudinal covariate process, and the resulting inferences depend critically on the validity of these unverifiable assumptions. This dissertation does not make any parametric or semi-parametric assumptions on the longitudinal covariate process. We use the empirical likelihood method to derive the maximum likelihood estimator (MLE) for the proportional hazards model based on right censored survival data with longitudinal covariates. A computational algorithm is developed, and our simulation studies show that our MLE performs very well.

Item QUANTUM COMBINATORIAL OPTIMIZATION ALGORITHMS FOR PACKING PROBLEMS IN CLASSICAL COMPUTING AND NETWORKING(2023) Unsal, Cem Mehmet; Oruc, Yavuz A; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)In computer engineering, packing problems play a central role in many aspects of hardware control. The field aims to maximize computer processing speed, network throughput, and dependability in industry applications. Many of these constrained maximization problems can be expressed as packing problems in integer programming when working with restrictions such as latency, memory size, race conditions, power, and component availability. For the most crucial of these integer programming problems, finding a global optimum is NP-hard. Therefore, real-world applications heavily rely on heuristics and meta-heuristics to find good solutions.
With recent developments in quantum meta-heuristic methods and promising results in experimental quantum computing systems, quantum computing is rapidly becoming more relevant for complex real-world combinatorial optimization tasks. This thesis is about applications of quantum combinatorial optimization algorithms in classical computer engineering problems, including novel quantum computing techniques that respect the constraints of state-of-the-art experimental quantum systems. This thesis includes five projects.

FASTER QUANTUM CONCENTRATION VIA GROVER'S SEARCH: One of the most important challenges in information networks is to gather data from a larger set of nodes to a smaller set of nodes. This can be done via the use of a concentrator architecture in the connection topology. This chapter is a proof of concept demonstrating that a quantum-based controller in large interconnection networks can asymptotically perform this task faster. We specifically present quantum algorithms for routing concentration assignments on full-capacity fat-and-slim concentrators, bounded fat-and-slim concentrators, and regular fat-and-slim concentrators. Classically, the concentration assignment takes $O(n)$ time on all these concentrators, where $n$ is the number of inputs. Powered by Grover's quantum search algorithm, our algorithms take $O(\sqrt{nc}\,\ln c)$ time, where $c$ is the capacity of the concentrator. Thus, our quantum algorithms are asymptotically faster than their classical counterparts when $c \ln^2 c = o(n)$. In general, $c = n^{\mu}$ satisfies $c \ln^2 c = o(n)$ for any $0 < \mu < 1$, implying a time complexity of $O(n^{(1+\mu)/2} \ln n)$.

QUANTUM ADVERSARIAL LEARNING IN EMULATION OF MONTE-CARLO METHODS FOR MAX-CUT APPROXIMATION: QAOA IS NOT OPTIMAL: One of the leading candidates for near-term quantum advantage is the class of Variational Quantum Algorithms.
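The asymptotic comparison in the concentration chapter is easy to sanity-check numerically: with capacity $c = n^{\mu}$, the ratio of the quantum bound $\sqrt{nc}\,\ln c$ to the classical bound $n$ should shrink as $n$ grows. A minimal sketch (not from the dissertation; constant factors are ignored, only growth rates matter):

```python
import math

def classical_cost(n):
    # Classical concentration assignment: O(n) on all three
    # families of fat-and-slim concentrators.
    return n

def quantum_cost(n, c):
    # Grover-powered routing bound: O(sqrt(n*c) * ln(c)).
    return math.sqrt(n * c) * math.log(c)

# With c = n^mu (0 < mu < 1), the condition c*ln^2(c) = o(n) holds,
# so the quantum/classical cost ratio should shrink as n grows.
mu = 0.5
ratios = [quantum_cost(n, n**mu) / classical_cost(n)
          for n in (10**3, 10**4, 10**5, 10**6, 10**7)]
assert all(a > b for a, b in zip(ratios, ratios[1:])), "ratio should decrease"
```

The same check with any other exponent $0 < \mu < 1$ shows the same qualitative decay, consistent with the stated $O(n^{(1+\mu)/2}\ln n)$ bound.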
However, these algorithms suffer from classical difficulty in optimizing the variational parameters as the number of parameters increases. Therefore, it is important to understand the expressibility and power of various ansätze to produce target states and distributions. To this end, we apply notions of emulation to Variational Quantum Annealing and the Quantum Approximate Optimization Algorithm (QAOA) to show that variational annealing schedules with equivalent numbers of parameters outperform QAOA. Our Variational Quantum Annealing schedule is based on a novel polynomial parameterization that can be optimized in a similar gradient-free way as QAOA, using the same physical ingredients. To compare the performance of ansatz types, we have developed statistical notions of Monte-Carlo methods. Monte-Carlo methods are computer programs that generate random variables to approximate a target number that is computationally hard to calculate exactly. While the most well-known Monte-Carlo method is Monte-Carlo integration (e.g., Diffusion Monte-Carlo or path-integral quantum Monte-Carlo), QAOA is itself a Monte-Carlo method that finds good solutions to NP-complete problems such as Max-cut. We apply these statistical Monte-Carlo notions to further elucidate the theoretical framework around these quantum algorithms.

SCHEDULING JOBS IN A SHARED HIGH-PERFORMANCE COMPUTER WITH A NISQ COMPUTER: Several quantum approximation algorithms for NP-hard optimization problems have been described in the literature. The properties of quantum approximation algorithms have been well explored for optimization problems of Ising type with 2-local Hamiltonians. A wide range of optimization problems can be mapped to Ising problems. However, the mapping overhead of many problem instances puts them out of the reach of Noisy Intermediate-scale Quantum (NISQ) devices.
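The working definition of a Monte-Carlo method given above, a randomized program that approximates a number that is hard to compute exactly, is classically illustrated by Monte-Carlo estimation of π. A minimal sketch (textbook example, not from the dissertation):

```python
import random

def mc_pi(n, seed=0):
    """Estimate pi by sampling n points in the unit square and
    counting the fraction that land inside the quarter disk."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

est = mc_pi(100_000)
assert abs(est - 3.14159) < 0.05  # stochastic estimate; loose tolerance
```

The estimate concentrates around π as n grows, with error shrinking like $O(1/\sqrt{n})$, which is the statistical behavior the comparison of ansatz types builds on.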
In this chapter, we develop a way of mapping constrained optimization problems to higher-order spin interactions, putting a larger set of problem instances within reach of spin interaction devices with potential NISQ applications. We demonstrate the growth in the practicable set of problem instances by comparing the resource requirements as a function of coupling. As an example, we demonstrate our techniques on the problem of scheduling jobs in a high-performance computer queue with limited memory and CPUs.

PROTEIN STRUCTURES WITH OSCILLATING QPACKER: A significant challenge in designing proteins for therapeutic purposes is determining the structure of a protein, i.e., finding the sidechain identities given a protein backbone. This problem can be easily and efficiently encoded as a quadratic binary optimization problem, and there has been a significant effort in the field of quantum information to solve such problems, both exactly and approximately. An important initiative has applied experimental quantum annealing platforms to solve this problem and obtained promising results. This project optimizes the annealing schedule for the sidechain identity problem, inspired by cutting-edge developments in the algorithmic theory of quantum annealing.

ON THE COMPLEXITY OF GENERALIZED DISCRETE LOGARITHM PROBLEM: The Generalized Discrete Logarithm Problem (GDLP) is an extension of the Discrete Logarithm Problem, where the goal is to find $x \in \mathbb{Z}_s$ such that $g^x \bmod s = y$ for given $g, y \in \mathbb{Z}_s$. The generalized discrete logarithm is similar, but instead of a single base element, it uses a number of base elements that do not necessarily commute. In this chapter, we prove that GDLP is NP-hard for symmetric groups. The lower-bound complexity of GDLP had been an open question since GDLP was defined in 2008, until our proof. Furthermore, we prove that GDLP remains NP-hard even when the base elements are permutations of at most three elements.
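To make the objects above concrete, here is a small sketch (illustrative only, not the dissertation's construction): a brute-force solver for the ordinary discrete logarithm $g^x \bmod s = y$, and a check that permutations moving at most three elements need not commute, which is the source of hardness in the generalized problem.

```python
def dlp_bruteforce(g, y, s):
    """Ordinary discrete log: return x with g**x % s == y, or None."""
    v = 1 % s
    for x in range(s):
        if v == y:
            return x
        v = (v * g) % s
    return None

assert dlp_bruteforce(3, 4, 7) == 4  # since 3^4 = 81 ≡ 4 (mod 7)

# Permutations on {0, 1, 2}, written as tuples with p[i] = image of i.
def compose(p, q):
    """(p ∘ q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

swap01 = (1, 0, 2)  # transposition: moves at most three elements
cycle3 = (1, 2, 0)  # 3-cycle
# Non-commuting base elements are exactly what GDLP allows:
assert compose(swap01, cycle3) != compose(cycle3, swap01)
```

The brute-force search takes time linear in $s$; the hardness results above concern the generalized setting with several such non-commuting bases.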
Lastly, we discuss the implications of our proofs in classical and quantum complexity theory.

Item I: SUFFICIENT CONDITIONS FOR LOCAL SCALING LAWS IN 3-D TURBULENCE II: WELL-POSEDNESS FOR NONLINEAR STOCHASTIC KINETIC EQUATIONS(2023) Papathanasiou, Stavros; Bedrossian, Jacob; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)Incompressible fluids at high Reynolds number quickly transition to turbulence. This implies that precise predictions for many flows in the physical world are extremely difficult, if not outright impossible. The field of statistical fluid mechanics aims to mitigate this difficulty by studying averaged quantities associated with turbulent flows. In the mathematical physics literature, turbulent flow is often described in the language of stochastic partial differential equations. The appropriate models are stochastic perturbations of well-known deterministic equations of fluid mechanics, so that the apparent randomness of turbulent flow is modeled via the tools of stochastic analysis. In the first part of this dissertation, this point of view of stochastic fluid mechanics is employed. We focus on the three-dimensional case, with the goal of obtaining a conditional theorem for Kolmogorov's celebrated 4/5 law to hold in the presence of boundaries. The dimensionality enforces the use of a weak notion of solution to our model; in particular, we work with families of "stationary martingale solutions" to the stochastic Navier-Stokes equations parametrized by the inverse Reynolds number. The main result of the first part of this dissertation provides a sufficient condition for a local version of the 4/5 law in the limit of infinite Reynolds number. In the second part of the dissertation, the focus is shifted to kinetic theory.
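For reference, Kolmogorov's 4/5 law discussed above states, in its classical homogeneous-isotropic form (not quoted from the dissertation; $\varepsilon$ is the mean energy dissipation rate and $\delta_r u_{\parallel}$ the longitudinal velocity increment at separation $r$ in the inertial range):

```latex
\bigl\langle (\delta_r u_{\parallel})^3 \bigr\rangle
  = \Bigl\langle \bigl[(u(x + r\hat{e}) - u(x)) \cdot \hat{e}\bigr]^3 \Bigr\rangle
  = -\tfrac{4}{5}\,\varepsilon\, r.
```

The dissertation's first part concerns a local version of this identity in the presence of boundaries.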
Kinetic equations have played a prominent role in statistical mechanics since the 19th century; typically, the kinetic viewpoint represents an intermediate step of coarse-graining between the particle level, governed by Newtonian or Hamiltonian mechanics, and the hydrodynamic level, governed by continuum or fluid mechanics. The solution of a kinetic equation represents the normalized phase-space density of a large number of particles which might be interacting and potentially diffusing. The evolution of the density of an ensemble of particles interacting electrostatically is modeled by the Vlasov-Poisson equation. Thermal noise on the particles is modeled by the inclusion of a kinetic Fokker-Planck term. To incorporate the effect of macroscopic fluctuating force fields into kinetic modeling, we perturb the Vlasov-Poisson-Fokker-Planck equation by a stochastic kinetic transport term. We modify and exploit a popular scheme of stochastic fluid mechanics relying on the Gyöngy-Krylov lemma and construct local strong solutions to the stochastic Vlasov-Poisson-Fokker-Planck equation.
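For orientation, a standard deterministic form of the Vlasov-Poisson-Fokker-Planck system described above is (conventions and signs vary; this display is illustrative, not taken from the dissertation; $f(t,x,v)$ is the phase-space density, $\sigma, \beta > 0$ diffusion and friction coefficients):

```latex
\partial_t f + v \cdot \nabla_x f + E \cdot \nabla_v f
  = \sigma\,\Delta_v f + \beta\,\nabla_v \!\cdot (v f),
\qquad
E = -\nabla_x \Phi, \qquad -\Delta_x \Phi = \int_{\mathbb{R}^d} f \, dv.
```

The right-hand side is the kinetic Fokker-Planck term modeling thermal noise; the stochastic model studied in the dissertation adds a stochastic kinetic transport perturbation to the left-hand side.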