Mathematics Theses and Dissertations

Recent Submissions

  • Item
    Logarithmic connections on arithmetic surfaces and cohomology computation
    (2022) Dykas, Nathan; Ramachandran, Niranjan; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    De Rham cohomology is important across a broad range of mathematical fields. The good properties of de Rham cohomology on smooth and complex manifolds are also shared by those schemes which most closely resemble complex manifolds, namely schemes that are (1) smooth, (2) proper, and (3) defined over the complex numbers or another field of characteristic zero. In the absence of one or more of those three properties, one observes more pathological behavior. In particular, for affine morphisms $X/S$, the groups $H^i_{\mathrm{dR}}(X/S)$ may be infinitely generated. In this case, when $S = \operatorname{Spec}(k)$ with $\operatorname{char}(k) > 0$, the Cartier isomorphism allows one to view the groups as finite dimensional over a different base: $\mathcal{O}_{X^{(p)}}$. However, when $S$ is the spectrum of a Dedekind ring of mixed characteristic, there is no good substitute for the Cartier isomorphism. In this work we explore a method of calculating the de Rham cohomology of some affine schemes which occur as the complement of certain divisors on arithmetic surfaces over a Dedekind scheme of mixed characteristic. The main tool will be (Koszul) connections on vector bundles, whose primary role is to generalize the exterior derivative $\mathcal{O}_X \xrightarrow{d} \Omega^1_{X/S}$ to a map $\mathcal{F} \xrightarrow{\nabla} \Omega^1_{X/S} \otimes \mathcal{F}$ defined on more general quasi-coherent modules $\mathcal{F}$. Given a suitable arithmetic surface $X$ and divisor $D$ with complement $U = X \setminus D$, the de Rham cohomology $H^1_{\mathrm{dR}}(U/S)$ is infinitely generated. We use a natural filtration $\mathrm{Fil}^\bullet \mathcal{O}_U$ to construct a filtration $\mathrm{Fil}^\bullet H^1_{\mathrm{dR}}(U/S)$. We show that the associated graded of this filtration is a direct sum of finitely generated modules, and we give a formula to calculate them in terms of the structure sheaf $\mathcal{O}_D$ of the divisor as well as the different ideal $\mathcal{D}_D \subset \mathcal{O}_D$ of the finite, flat extension $D/S$.
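    As background (standard definitions, not specific to this thesis): a Koszul connection is determined by the Leibniz rule, and its logarithmic variant permits simple poles along the divisor. In the notation above,

        \nabla : \mathcal{F} \longrightarrow \Omega^1_{X/S} \otimes_{\mathcal{O}_X} \mathcal{F},
        \qquad \nabla(f\,s) = df \otimes s + f\,\nabla s
        \quad (f \in \mathcal{O}_X,\ s \in \mathcal{F});

    a logarithmic connection along $D$ instead takes values in $\Omega^1_{X/S}(\log D) \otimes \mathcal{F}$, so $\nabla$ may acquire simple poles along $D$ while remaining regular on $U = X \setminus D$.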
  • Item
    Sensitivity Analysis and Stochastic Optimizations in Stochastic Activity Networks
    (2022) Wan, Peng; Fu, Michael C; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Activity networks are a powerful tool for modeling and analysis in project management and in many other applications, such as circuit design and parallel computing. An activity network can be represented by a directed acyclic graph with one source node and one sink node. The directed arcs between nodes represent the precedence relationships between different activities in the project. In a stochastic activity network (SAN), the arc lengths are random variables. This dissertation studies stochastic gradient estimators for SANs using Monte Carlo simulation, and their application to network optimization problems. A new algorithm called Threshold Arc Criticality (TAC) for estimating the arc criticalities of stochastic activity networks is proposed. TAC is based on the following result: given the lengths of all arcs in a SAN except for the one arc of interest, that arc is on the critical path (longest path) if and only if its length is greater than a threshold. By applying Infinitesimal Perturbation Analysis (IPA) to TAC, an unbiased estimator of the derivative of the arc criticalities with respect to parameters of the arc length distributions can be derived. The stochastic derivative estimator can be used for sensitivity analysis of arc criticalities via simulation. Using TAC, a new IPA gradient estimator of the first and second moments of project completion time (PCT) is proposed. Combining the new PCT stochastic gradient estimator with a Taylor series approximation, a functional estimation procedure for estimating the change in PCT moments caused by a large perturbation in an activity duration's distribution parameter is proposed and applied to optimization problems involving time-cost tradeoffs. In activity networks, crashing an activity means reducing the activity's duration (deterministic or stochastic) by a given percentage at an associated cost. A crashing plan for a project aims to shorten the PCT by reducing the durations of a set of activities under a limited budget. A disruption is an event that occurs at an uncertain time; examples include natural disasters, electrical outages, and labor strikes. For an activity network, a disruption may cause delays in unfinished activities. Previous work formulates finding the optimal crashing plan of an activity network under a single disruption as a two-stage stochastic mixed-integer programming problem and applies sample average approximation to find the optimal solution. In this dissertation, a new stochastic gradient estimator is derived, and a gradient-based simulation optimization algorithm is applied to the problem of optimizing crashing under disruption.
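    The threshold characterization invites a direct Monte Carlo sketch. The Python fragment below is our illustration, not code from the dissertation: the names (tac_criticality, dag_longest) are ours, the plain indicator estimator is our choice (the abstract does not specify the estimator form), and a well-formed network in which every arc lies on some source-to-sink path is assumed.

        import random
        from collections import defaultdict

        def dag_longest(start, topo, out_arcs, lengths, skip=None):
            # Longest-path distances from `start` in a DAG, visiting nodes in
            # topological order; `skip` removes one arc from consideration.
            dist = {start: 0.0}
            for u in topo:
                if u not in dist:
                    continue
                for arc, v in out_arcs[u]:
                    if arc == skip:
                        continue
                    cand = dist[u] + lengths[arc]
                    if cand > dist.get(v, float("-inf")):
                        dist[v] = cand
            return dist

        def tac_criticality(topo, arcs, samplers, target, source, sink, n=10000):
            # Monte Carlo estimate of P(arc `target` lies on the longest path).
            out_arcs, rev_arcs = defaultdict(list), defaultdict(list)
            for a, (u, v) in arcs.items():
                out_arcs[u].append((a, v))
                rev_arcs[v].append((a, u))
            tu, tv = arcs[target]                # tail and head of the target arc
            rev_topo = list(reversed(topo))
            hits = 0
            for _ in range(n):
                lengths = {a: samplers[a]() for a in arcs}
                fwd = dag_longest(source, topo, out_arcs, lengths, skip=target)
                bwd = dag_longest(sink, rev_topo, rev_arcs, lengths, skip=target)
                # Best route forced through target, excluding target's own length
                # (in a DAG no source->tu or tv->sink path can reuse the arc):
                through = fwd[tu] + bwd[tv]
                # Best route that avoids target entirely:
                around = fwd.get(sink, float("-inf"))
                # Threshold test; ties have probability zero for continuous lengths.
                if lengths[target] > around - through:
                    hits += 1
            return hits / n

        # Toy diamond network s -> {1, 2} -> t with i.i.d. exponential durations;
        # by symmetry the criticality of arc "a" should be about 0.5.
        arcs = {"a": ("s", "1"), "b": ("1", "t"), "c": ("s", "2"), "d": ("2", "t")}
        samplers = {a: (lambda: random.expovariate(1.0)) for a in arcs}
        print(tac_criticality(["s", "1", "2", "t"], arcs, samplers, "a", "s", "t"))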
  • Item
    Working in Reverse: Advancing Inverse Optimization in the Fields of Equilibrium and Infrastructure Modeling
    (2022) Allen, Stephanie Ann; Gabriel, Steven A; Dickerson, John P; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Transportation and infrastructure modeling allows us to pursue societal aims such as improved disaster management, traffic flow, and water allocation. Equilibrium programming enables us to represent the entities involved in these applications, such as transportation users and market players, so that we can learn more about their dynamics. However, determining the parameters in these models can be difficult because the entities involved in these equilibrium processes may not be able to articulate or disclose the parameterizations that motivate them. The field of inverse optimization (IO) offers a potential solution: it takes observed equilibria of these systems and uses them to parameterize equilibrium models. In this dissertation, we explore the use of inverse optimization to parameterize multiple new or understudied subclasses of equilibrium problems and extend inverse optimization to new infrastructure domains. In the first project, our contribution is to propose that IO can be used to parameterize cost functions in multi-stage stochastic programs for disaster management and be incorporated into disaster support systems. We demonstrate in most of our experiments that using IO to obtain the hidden cost parameters for travel on a road network changes the protection decisions made on that network, compared to using the mean of the parameter range for the hidden parameters (also referred to as "uniform cost"). The protection decisions made under the IO cost parameterizations versus the true cost parameterizations are similar for most of the experiments, lending credibility to the IO parameterizations. In the second project, we extend a well-known framework in the IO community to jointly convex generalized Nash equilibrium problems (GNEPs). We demonstrate the utility of this framework in a multi-player transportation game in which we vary the number of players, the capacity level, and the network topology, and we run experiments assuming both identical and differing costs among players. Our promising results provide evidence that our work could be used to regulate traffic flow toward aims such as the reduction of emissions. In the final project, we explore the general parameterization of the constant vector in linear complementarity problems (LCPs), which are mathematical expressions that can represent optimization, game-theoretic, and market models (Gabriel et al., 2012). Unlike the limited previous work on inverse optimization for LCPs, we characterize theoretical considerations for the inverse optimization problem for LCPs, prove that a previously proposed IO solution model can be dramatically simplified, and handle the case of multiple solution data points. Additionally, we apply our results on LCPs and IO to a water market allocation case study, an application not previously explored in the IO literature, and we find that charging an additional tax on the upstream players enables the market to reach a system optimum. In sum, this dissertation contributes to the inverse optimization literature by expanding its reach in the equilibrium problem domain and by reaching new infrastructure applications.
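    To make the LCP piece concrete: an LCP is specified by a matrix $M$ and a constant vector $q$, and asks for $x \ge 0$ with $w = Mx + q \ge 0$ and $x^\top w = 0$; the inverse problem observes a solution $x^*$ and recovers a $q$ consistent with it. The sketch below is our own toy formulation, not the dissertation's model: it assumes a squared-error objective, a single observation, and a hypothetical function name. With $x^*$ fixed, the feasibility conditions on $q$ decompose coordinate-wise, so the nearest feasible $q$ to a prior guess has a closed form.

        import numpy as np

        def inverse_lcp_q(M, x_obs, q_prior):
            # Find the q closest (in squared error) to q_prior such that x_obs
            # solves the LCP: x >= 0, w = M x + q >= 0, x^T w = 0.
            Mx = M @ x_obs
            q = np.asarray(q_prior, dtype=float).copy()
            for i in range(q.size):
                if x_obs[i] > 0:          # complementarity forces w_i = 0
                    q[i] = -Mx[i]
                else:                     # only need w_i >= 0; stay near the prior
                    q[i] = max(q[i], -Mx[i])
            return q

        # Toy check: 2x2 LCP with an observed solution.
        M = np.array([[2.0, 1.0], [1.0, 2.0]])
        x_obs = np.array([1.0, 0.0])
        q = inverse_lcp_q(M, x_obs, q_prior=np.array([0.0, 0.0]))
        print(q, M @ x_obs + q)   # w >= 0, with w[0] == 0 since x_obs[0] > 0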
  • Item
    Causal Survival Analysis – Machine Learning Assisted Models: Structural Nested Accelerated Failure Time Model and Threshold Regression
    (2022) Chen, Yiming; Lee, Mei-Ling; Mathematical Statistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Time-varying confounding of the intervention complicates causal survival analysis when the data are collected longitudinally. Traditional survival models that only adjust for time-dependent covariates yield biased causal conclusions about the intervention effect. Some techniques have been developed to address this challenge; nevertheless, these existing methods may still lack power and suffer a heavy computational burden on high-dimensional data with a temporally connected structure. The first part of this dissertation focuses on one of the methods that deal with time-varying confounding, the Structural Nested Model and its associated G-estimation. Two neural networks (GE-SCORE and GE-MIMIC) are proposed to estimate the Structural Nested Accelerated Failure Time Model. The proposed algorithms can provide less biased, individualized estimates of the causal effect of an intervention. The second part explores the causal interpretations and applications of the First-Hitting-Time-based Threshold Regression Model driven by a Wiener process. Moreover, a neural network extension of this specific type of Threshold Regression (TRNN) is explored for the first time.
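    For orientation (a standard fact about this model class, not a result of the dissertation): in first-hitting-time threshold regression, a latent process $Y(t) = y_0 + \mu t + \sigma W(t)$ starts at $y_0 > 0$, and the event occurs when $Y$ first reaches zero. The hitting time $T$ follows an inverse Gaussian law:

        f(t \mid y_0, \mu, \sigma^2)
          = \frac{y_0}{\sqrt{2\pi \sigma^2 t^3}}
            \exp\!\left( -\frac{(y_0 + \mu t)^2}{2 \sigma^2 t} \right),
        \qquad t > 0,

    with $P(T < \infty) = 1$ when $\mu \le 0$ and $P(T < \infty) = \exp(-2 y_0 \mu / \sigma^2)$ when $\mu > 0$; covariates typically enter through regression links on $y_0$ and $\mu$.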
  • Item
    Bayesian Methods and Their Application in Neuroimaging Data
    (2022) Ge, Yunjiang; Kedem, Benjamin; Chen, Shuo; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The functional magnetic resonance imaging (fMRI) technique is widely used in the medical field because it allows in vivo investigation of human cognition, emotions, and behaviors at the neural level. One primary objective is to study brain activation, which can be achieved through a conventional two-stage approach: individualized voxel-specific modeling in the first stage and group-level inference in the second. Existing methods generally rely on pre-determined parameters or domain knowledge, which may not properly incorporate the unique features of different studies or cohorts, and this leaves gaps in the inference for activated regions. This dissertation focuses on Bayesian approaches that fill these gaps in statistical inference at all levels while accounting for the information carried by the data. Cluster-wise statistical inference is the most widely used technique for fMRI data analysis. It consists of two steps: i) primary thresholding, which excludes less significant voxels by a pre-specified cut-off (e.g., p<0.001); and ii) cluster-wise thresholding, often based on counting the number of intra-cluster voxels that surpass a voxel-level statistical significance threshold. The selection of the primary threshold is critical because it determines both the statistical power and the false discovery rate. However, in most existing statistical packages, the primary threshold is selected based on prior knowledge (e.g., p<0.001) without considering the information in the data. Thus, in the first project, we propose a data-driven approach that algorithmically selects the optimal primary threshold based on an empirical Bayes framework. We evaluate the proposed model using extensive simulation studies and real fMRI data. In the simulations, we show that our method can effectively increase statistical power while controlling the false discovery rate. We then investigate the brain response to the dose effect of chlorpromazine in patients with schizophrenia by analyzing fMRI scans, obtaining consistent results. In Chapter 3, we focus on controlling the family-wise error rate (FWER) by conducting cluster-level inference. The cluster-extent measure can be sub-optimal in terms of power and false positive rate because the supra-threshold voxel count neglects the voxel-wise significance levels and ignores the dependence between voxels. Based on the information a cluster carries, we propose a new Integrated Cluster-wise significance Measure (ICM) for determining cluster-level significance in cluster-wise fMRI analysis by integrating cluster extent, voxel-level significance (e.g., p-values), and the dependence between within-cluster voxels. We develop a computationally efficient strategy for ICM based on probabilistic approximation theories, so the computational load of ICM-based cluster-wise inference (e.g., permutation tests) is affordable. We validate the proposed method via extensive simulations and apply it to two fMRI data sets; the results demonstrate that ICM improves power with well-controlled FWER. The preceding chapters focus on cluster-extent thresholding, but Bayesian hierarchical models can also efficiently handle high-dimensional neuroimaging data. Existing methods provide voxel-specific inference and inference over pre-determined regions of interest (ROIs); however, activation clusters may span multiple ROIs or vary across studies and cohorts. To provide inference that bridges voxels, unknown activation clusters, targeted regions, and the whole brain, we propose the Dirichlet Process Mixture model with Spatial Constraint (DPMSC) in Chapter 4. The spatial constraint is based on the Euclidean distance between two voxels in brain space. With this constraint applied at each iteration of Markov chain Monte Carlo (MCMC), DPMSC can efficiently remove single-voxel or small noise clusters and produce a contiguous cluster whose voxels belong to the same component of the mixture model. Finally, we provide a real-data example and simulation studies based on various dataset features.
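    The conventional cluster-extent pipeline that the first two projects refine can be sketched in a few lines. This is our generic illustration, not the dissertation's code: the primary p-value cut-off and the minimum extent are fixed placeholders here, and they are exactly the quantities the empirical Bayes and ICM approaches would replace with data-driven choices.

        import numpy as np
        from scipy import ndimage
        from scipy.stats import norm

        def cluster_extent_inference(z_map, p_primary=0.001, min_extent=10):
            # Step i): primary thresholding of a voxel-wise z-statistic map.
            z_cut = norm.isf(p_primary)          # one-sided z cut-off
            supra = z_map > z_cut
            # Step ii): label 3-D connected components and apply an extent rule.
            labels, n_clusters = ndimage.label(supra)
            surviving = [k for k in range(1, n_clusters + 1)
                         if (labels == k).sum() >= min_extent]
            return labels, surviving

        # Toy example on a pure-noise volume (no cluster is expected to survive):
        rng = np.random.default_rng(0)
        z_map = rng.standard_normal((16, 16, 16))
        labels, surviving = cluster_extent_inference(z_map)
        print(len(surviving), "clusters survive the extent rule")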