Mathematics
Permanent URI for this community: http://hdl.handle.net/1903/2261
Search Results (88 results)
Item: Modeling Imatinib-Treated Chronic Myelogenous Leukemia and the Immune System (2019)
Peters, Cara Disa; Levy, Doron; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Chronic myelogenous leukemia (CML) can be considered a chronic condition thanks to the development of tyrosine kinase inhibitors in the early 2000s. Most CML patients are able to manage the disease, but unending treatment can affect quality of life. The focus of much clinical research has thus shifted to treatment cessation, and many clinical trials have demonstrated that treatment-free remission is possible. While many questions remain about the criteria for selecting cessation candidates, much evidence indicates that the immune system plays a significant role. Mathematical modeling provides a complementary component to clinical research. Existing models describe well the dynamics of CML in the first phase of treatment, where most patients experience a biphasic decline in the BCR-ABL ratio. The Clapp model is one of the first to incorporate the immune system and to capture the oscillations in the BCR-ABL ratio that often occur later in therapy. However, these models are far from being usable in a predictive manner and do not fully capture the dynamics surrounding treatment cessation. Based on clinical research demonstrating the importance of immune response, we hypothesize that a mathematical model of CML should include a more detailed description of the immune system. We therefore present a new model that extends the Clapp model. The model is fit to patient data and shown to be a good qualitative description of CML dynamics. With this model it can be shown that treatment-free remission is possible. However, the model introduces new parameters that must be correctly identified in order for it to have predictive power. We next consider the parameter identification problem. Since the dynamics of CML can be viewed in two phases, the biphasic decline in and later oscillations of the BCR-ABL ratio, we hypothesize that parameter values may differ over the course of treatment, and we identify which parameters are most variable by refitting the model to different windows of data. We find that parameters associated with immune response and regulation are the most difficult to identify and could be key to selecting good treatment cessation candidates. To increase the predictive power of our model, we consider data assimilation techniques, which are used successfully in weather forecasting. The extended Kalman filter (EKF) is used to assimilate CML patient data. Although we determine that the EKF is not the ideal technique for our model, we show that data assimilation methods in general hold promise in the search for a predictive model of CML. To have the most success, new techniques should be considered, data should be collected more frequently, and immune assay data should be made available.
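To make the data-assimilation step concrete, here is a minimal extended Kalman filter predict/update cycle for assimilating a scalar BCR-ABL-type measurement into a small ODE model. The two-state dynamics, observation map, and all parameter values are hypothetical placeholders, not the dissertation's CML-immune model; the sketch only illustrates the EKF mechanics.

```python
import numpy as np

# Hypothetical 2-state model: x = [leukemic cells L, immune cells Z].
# Placeholder dynamics, NOT the dissertation's CML-immune model.
def f(x, dt):
    L, Z = x
    dL = 0.03 * L * (1 - L / 1e6) - 1e-4 * L * Z   # net leukemic growth minus immune kill
    dZ = 1.0 + 5e-7 * L * Z - 0.05 * Z             # immune stimulation and natural decay
    return x + dt * np.array([dL, dZ])             # forward-Euler step (dt assumed small)

def jac_f(x, dt, eps=1e-4):
    """Finite-difference Jacobian of the one-step map f."""
    fx, F = f(x, dt), np.zeros((x.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps * max(1.0, abs(x[j]))
        F[:, j] = (f(xp, dt) - fx) / (xp[j] - x[j])
    return F

def h(x):
    # Scalar observation: log10 of a BCR-ABL-like ratio (placeholder form).
    return np.array([np.log10(x[0] / 1e6 + 1e-12)])

def jac_h(x, eps=1e-4):
    hx, H = h(x), np.zeros((1, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps * max(1.0, abs(x[j]))
        H[:, j] = (h(xp) - hx) / (xp[j] - x[j])
    return H

def ekf_step(x, P, y, Q, R, dt):
    """One predict/update cycle of the extended Kalman filter."""
    F = jac_f(x, dt)
    x_pred = f(x, dt)
    P_pred = F @ P @ F.T + Q
    H = jac_h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new

# One assimilation cycle with a hypothetical measurement of the log ratio.
x, P = np.array([5e5, 50.0]), np.diag([1e10, 25.0])
Q, R = np.diag([1e8, 1.0]), np.array([[0.01]])
x, P = ekf_step(x, P, y=np.array([-1.0]), Q=Q, R=R, dt=1.0)
print(x)
```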
Item: Developments in Lagrangian Data Assimilation and Coupled Data Assimilation to Support Earth System Model Initialization (2019)
Sun, Luyu; Carton, James A.; Penny, Stephen G.; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The air-sea interface is one of the most physically active interfaces of the Earth's environment and significantly impacts the dynamics of both the atmosphere and the ocean. In this doctoral dissertation, developments are made to two types of Data Assimilation (DA) applied to this interface: Lagrangian Data Assimilation (LaDA) and Coupled Data Assimilation (CDA). LaDA is a DA method that assimilates position information measured from Lagrangian instruments such as Argo floats and surface drifters. To make better use of this Lagrangian information, an augmented-state LaDA method is proposed using the Local Ensemble Transform Kalman Filter (LETKF), which updates the ocean state (T/S/U/V) both at the surface and at depth by directly assimilating the drifter locations. The algorithm is first tested using "identical twin" Observing System Simulation Experiments (OSSEs) in a simple double-gyre configuration with the Geophysical Fluid Dynamics Laboratory (GFDL) Modular Ocean Model version 4.1 (MOM4p1). Results from these experiments show that with a proper choice of localization radius, the estimation of the state is improved not only at the surface but throughout the upper 1000 m. The impacts of localization radius and model error on the estimation accuracy of both fluid and drifter states are investigated. Next, the algorithm is applied to a realistic eddy-resolving model of the Gulf of Mexico (GoM) using Modular Ocean Model version 6 (MOM6) numerics, which is related to the 1/4-degree resolution configuration in transition to operational use at NOAA/NCEP. Atmospheric forcing is used to produce the nature-run simulation, with forcing ensembles created using the spread provided by the 20th Century Reanalysis version 3 (20CRv3). To support the examination of the proposed LaDA algorithm, an updated online drifter module adapted to MOM6 is developed, which resolves software issues present in the older MOM4p1 and MOM5 versions of MOM. In addition, new capabilities are added, such as output of the intermediate trajectories and of the interpolated variables: temperature, salinity, and velocity. The twin experiments with the GoM also show that the proposed algorithm has a positive impact on estimation of the ocean state variables when drifter positions are assimilated together with surface temperature and salinity. Lastly, an investigation of CDA explores the influence of different couplings on improving the simultaneous estimation of atmosphere and ocean state variables. Synchronization theory for drive-response systems is applied together with the determination of Lyapunov exponents (LEs) as an indication of error convergence within the system. A demonstration is presented using the Ensemble Transform Kalman Filter on the simplified Modular Arbitrary-Order Ocean-Atmosphere Model, a three-layer truncated quasi-geostrophic model. Results show that strongly coupled data assimilation is robust in producing more accurate state estimates and forecasts than traditional approaches to data assimilation.
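The augmented-state idea can be illustrated with the ensemble transform analysis step that the LETKF performs, written here globally without the localization loop. This is a generic NumPy sketch, not the MOM4p1/MOM6 implementation: the augmented state vector is assumed to stack ocean variables with the model-predicted drifter positions, so that assimilating an observed drifter location updates both.

```python
import numpy as np

def etkf_analysis(Xf, Yf, y_obs, R, inflation=1.0):
    """
    One (global) ensemble transform Kalman filter analysis step.
    Xf    : (n_state, k) forecast ensemble; the augmented state stacks ocean
            variables and predicted drifter positions.
    Yf    : (n_obs, k) observation-space forecasts H(x_i); for LaDA these
            include the modeled drifter positions themselves.
    y_obs : (n_obs,) observed drifter positions (plus any other observations).
    R     : (n_obs, n_obs) observation error covariance.
    """
    k = Xf.shape[1]
    x_mean = Xf.mean(axis=1)
    X_pert = Xf - x_mean[:, None]
    y_mean = Yf.mean(axis=1)
    Y_pert = Yf - y_mean[:, None]

    Rinv_Y = np.linalg.solve(R, Y_pert)                  # R^{-1} Y'
    A = (k - 1) / inflation * np.eye(k) + Y_pert.T @ Rinv_Y
    evals, evecs = np.linalg.eigh(A)                     # A is symmetric positive definite
    Pa_tilde = evecs @ np.diag(1.0 / evals) @ evecs.T    # analysis covariance in ensemble space
    W_a = evecs @ np.diag(np.sqrt((k - 1) / evals)) @ evecs.T
    w_mean = Pa_tilde @ (Y_pert.T @ np.linalg.solve(R, y_obs - y_mean))

    W = W_a + w_mean[:, None]                            # mean update plus spread update
    return x_mean[:, None] + X_pert @ W                  # analysis ensemble, (n_state, k)
```

In the full LETKF this analysis is repeated grid point by grid point, using only the observations that fall within the chosen localization radius.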
Item: Topics in Stochastic Optimization (2019)
Sun, Guowei; Fu, Michael C; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
In this thesis, we work with three topics in stochastic optimization: ranking and selection (R&S), multi-armed bandits (MAB), and stochastic kriging (SK). For R&S, we first consider the problem of making inferences about all candidates based on samples drawn from one. Then we study the design of efficient allocation algorithms for problems where the selection objective is more complex than the simple expectation of a random output. In MAB, we use autoregressive processes to capture possible temporal correlations in the unknown reward processes and study the effect of such correlations on the regret bounds of various bandit algorithms. Lastly, for SK, we design a dynamic experimental design procedure that establishes a good global fit by efficiently allocating the simulation budget across the design space. The first two chapters of the thesis address variations of the R&S problem. In Chapter 1, we consider the problem of choosing the best design alternative under a small simulation budget, where making inferences about all alternatives from a single observation could enhance the probability of correct selection. We propose a new selection rule that exploits the relative similarity between pairs of alternatives and show that it improves selection performance, evaluated by the probability of correct selection, compared to selection based on collected sample averages. We illustrate its effectiveness by applying our selection index to simulated R&S problems under two well-known budget allocation policies. In Chapter 2, we present two sequential allocation frameworks for selecting from a set of competing alternatives when the decision maker cares about more than just the simple expected reward. The frameworks are built on general parametric reward distributions and assume that the selection objective, which we refer to as utility, can be expressed as a function of the governing reward distributional parameters. The first algorithm, which we call utility-based OCBA (UOCBA), uses the delta method to find the asymptotic distribution of a utility estimator and establishes the asymptotically optimal allocation by solving the corresponding constrained optimization problem. The second, which we refer to as the utility-based value of information (UVoI) approach, is a variation of Bayesian value of information (VoI) techniques for efficient learning of the utility. We establish the asymptotic optimality of both allocation policies and illustrate the performance of the two algorithms through numerical experiments. Chapter 3 considers the restless bandit problem, where the rewards on the arms are stochastic processes with strong temporal correlations that can be characterized by the well-known stationary autoregressive moving-average (ARMA) time series models. We argue that, despite the statistical stationarity of the reward processes, a linear improvement in cumulative reward can be obtained by exploiting the temporal correlation, compared to policies that work under the independent-reward assumption. We introduce the notion of a temporal exploration-exploitation trade-off, where a policy has to balance learning recent information to track the evolution of all reward processes against using currently available predictions to gain better immediate reward. We prove a regret lower bound characterized by the bandit problem complexity and the correlation strength along the time index, and we propose policies that achieve a matching upper bound. Lastly, Chapter 4 proposes a fully sequential experimental design procedure for the stochastic kriging (SK) methodology of fitting unknown response surfaces from simulation experiments. The procedure first estimates the current SK model performance by jackknifing the existing data points. Then an additional SK model is fitted to the jackknife error estimates to capture the landscape of the current SK model's performance. Methodologies for balancing the exploration-exploitation trade-off in Bayesian optimization are employed to select the next simulation point. Compared to existing experimental design procedures, which rely on the posterior uncertainty estimates from the fitted SK model to evaluate model performance, our method is robust to the SK model specification. We design a dynamic allocation algorithm, which we call kriging-based dynamic stochastic kriging (KDSK), and illustrate its performance through two numerical experiments.
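Chapter 1 evaluates the new selection index under "two well-known budget allocation policies"; one standard example of such a policy is the classical optimal computing budget allocation (OCBA) rule. The sketch below implements the textbook OCBA ratios from sample means and standard deviations, as a generic point of reference; it is not the UOCBA or UVoI algorithm developed in the thesis, and the example numbers are made up.

```python
import numpy as np

def ocba_allocation(means, stds, total_budget, minimize=False):
    """
    Classical OCBA budget split: returns how many of `total_budget` additional
    replications to give each design, based on current means and std devs.
    """
    means = np.asarray(means, dtype=float)
    stds = np.asarray(stds, dtype=float)
    b = np.argmin(means) if minimize else np.argmax(means)
    delta = means - means[b]                      # gaps to the current best design
    ratios = np.ones_like(means)
    nonbest = np.arange(len(means)) != b
    # N_i proportional to (s_i / delta_i)^2 for the non-best designs
    ratios[nonbest] = (stds[nonbest] / delta[nonbest]) ** 2
    # N_b = s_b * sqrt( sum_{i != b} (N_i / s_i)^2 )
    ratios[b] = stds[b] * np.sqrt(np.sum((ratios[nonbest] / stds[nonbest]) ** 2))
    alloc = total_budget * ratios / ratios.sum()
    return np.floor(alloc).astype(int)

# Example with five designs and made-up summary statistics.
means = [1.0, 1.2, 1.5, 1.45, 0.9]
stds = [0.3, 0.25, 0.4, 0.35, 0.3]
print(ocba_allocation(means, stds, total_budget=1000))
```

As expected, most of the budget goes to the apparent best design and its closest competitor, which is exactly the behavior a similarity-based selection index would be layered on top of.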
Item: Low-Rank Solution Methods for Discrete Parametrized Partial Differential Equations (2019)
Su, Tengfei; Elman, Howard C; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Stochastic partial differential equations are widely used to model physical problems with uncertainty. For numerical treatment, the stochastic Galerkin discretization in general gives rise to large, coupled algebraic systems that are computationally expensive to solve. In this thesis, we develop efficient iterative algorithms to reduce the costs by taking advantage of the structure of the systems and computing low-rank approximations to the discrete solutions. We demonstrate this idea on three types of problems: (i) the stochastic diffusion equation, in which the diffusion coefficient is a random field; (ii) a collection of stochastic eigenvalue problems arising from models of diffusion and fluid dynamics; and (iii) a stochastic version of the time-dependent incompressible Navier-Stokes equations with an uncertain viscosity. These problems range from a relatively straightforward linear elliptic problem, for which we are able to obtain rigorous results on convergence rates for solvers, to more complex models that involve eigenvalue computations and nonlinear, time-dependent computations. For the diffusion problem, we propose a low-rank multigrid method for solving the linear system obtained from the stochastic Galerkin discretization. In the algorithm, the iterates are represented as low-rank matrices, which makes the associated computations much cheaper. We conduct a rigorous error analysis for the convergence of the low-rank multigrid method. Numerical experiments show significant cost savings from low-rank approximation. We also design a low-rank variant of the inverse subspace iteration algorithm for stochastic eigenvalue problems. We apply low-rank iterative methods to efficiently solve the large algebraic systems required at each step of the algorithm, and we show that the costs of other computations, including the Gram-Schmidt process and the Rayleigh quotient, are also greatly reduced. The accuracy of the solutions and the efficiency of the algorithm are illustrated in numerical tests. For the time-dependent Navier-Stokes problem, we consider an all-at-once formulation in which the discrete solutions at all time steps are represented in a three-dimensional tensor. In the nonlinear iteration, we compute low-rank tensor approximations to obtain further reductions in memory and computation. Effective mean-based preconditioners are derived for the all-at-once systems. The low-rank algorithm is able to handle large problems efficiently.
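The computational kernel behind such low-rank iterative methods is re-compression: each iterate is kept in factored form X = U V^T and truncated after every operation so the rank stays small. The sketch below shows only that generic truncation step (QR of the factors plus an SVD of the small core), with a toy example; it is not the stochastic Galerkin multigrid or subspace iteration of the thesis.

```python
import numpy as np

def truncate(U, V, tol=1e-8, max_rank=None):
    """
    Re-compress a matrix stored in factored form X = U @ V.T without forming X:
    QR-factor both factors, SVD the small core, and keep only the singular
    values above a relative tolerance.
    """
    Qu, Ru = np.linalg.qr(U)                 # U: (m, r)
    Qv, Rv = np.linalg.qr(V)                 # V: (n, r)
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)
    keep = s > tol * s[0]
    if max_rank is not None:
        keep[max_rank:] = False
    k = int(keep.sum())
    U_new = Qu @ W[:, :k] * s[:k]            # absorb singular values into the left factor
    V_new = Qv @ Zt[:k, :].T
    return U_new, V_new

# Example: compress a redundant rank-40 factorization of a rank-5 matrix.
m, n, r_true = 500, 300, 5
L = np.random.randn(m, r_true)
R = np.random.randn(n, r_true)
G = np.random.randn(r_true, 40)              # pad with redundant directions
U, V = L @ G, R @ np.linalg.pinv(G).T        # U @ V.T equals L @ R.T up to round-off
U2, V2 = truncate(U, V, tol=1e-10)
err = np.linalg.norm(U2 @ V2.T - L @ R.T) / np.linalg.norm(L @ R.T)
print(U2.shape[1], err)                      # expect rank 5 and error near machine precision
```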
Item: Harmonic Analysis and Machine Learning (2018)
Pekala, Michael; Czaja, Wojciech; Levy, Doron; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation considers data representations that lie at the intersection of harmonic analysis and neural networks. The unifying theme of this work is the goal of robust and reliable machine learning. Our specific contributions include a new variant of scattering transforms based on a Haar-type directional wavelet, a new study of deep neural network instability in the context of remote sensing problems, and new empirical studies of biomedical applications of neural networks.

Item: ERROR ANALYSIS OF NUMERICAL METHODS FOR NONLINEAR GEOMETRIC PDEs (2019)
Li, Wenbo; Nochetto, Ricardo H; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation presents the numerical treatment of two classes of nonlinear geometric problems: fully nonlinear elliptic PDEs and nonlinear nonlocal PDEs. For the fully nonlinear elliptic PDEs, we study three problems: Monge-Ampère equations, computation of convex envelopes, and optimal transport with quadratic cost. We develop two-scale methods for both the Monge-Ampère equation and the convex envelope problem with Dirichlet boundary conditions, and we prove rates of convergence in the $L^{\infty}$ norm for them. Our technique hinges on the discrete comparison principle, the construction of barrier functions, and geometric properties of the problems. We also derive error estimates for numerical schemes for the optimal transport problem with quadratic cost, which can be written as a so-called second boundary value problem for the Monge-Ampère equation. This includes a new weighted $L^2$ error estimate for the fully discrete linear programming method based on quantitative stability estimates for optimal plans. For the nonlinear nonlocal PDEs, we focus on the computation and numerical analysis of nonlocal minimal graphs of order $s \in (0,1/2)$ in a bounded domain. This can be reinterpreted as a Dirichlet problem for a nonlocal, nonlinear, degenerate operator of order $s + 1/2$, whose numerical treatment is in its infancy. We propose a finite element discretization, prove its convergence, and derive error estimates for two different notions of error. Several interesting numerical experiments are also presented and discussed, which might shed some light on theoretical questions about this emerging research topic.
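One of the model problems above, the convex envelope, has a particularly transparent discrete analogue in one dimension: the lower convex hull of the sampled data. The sketch below computes that hull with a standard monotone-chain scan; it is only this elementary 1D construction, offered as orientation, and not the two-scale method analyzed in the dissertation.

```python
import numpy as np

def convex_envelope_1d(x, g):
    """
    Lower convex hull of the sampled data (x_i, g_i): a discrete analogue of
    the convex envelope, computed with a monotone-chain stack scan.
    Assumes x is strictly increasing.
    """
    hull = []                                   # indices of hull vertices
    for i in range(len(x)):
        while len(hull) >= 2:
            j, k = hull[-2], hull[-1]
            # drop k if it lies on or above the chord from j to i
            if (g[k] - g[j]) * (x[i] - x[j]) >= (g[i] - g[j]) * (x[k] - x[j]):
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(x, x[hull], g[hull])       # envelope evaluated at the grid points

# Example: envelope of a sampled double well; the envelope is flat between the wells.
x = np.linspace(-2.0, 2.0, 401)
g = (x**2 - 1.0) ** 2
u = convex_envelope_1d(x, g)
assert np.all(u <= g + 1e-12)
```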
Item: Stochastic processes on graphs: learning representations and applications (2019)
Bohannon, Addison Woodford; Balan, Radu V; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
In this work, we are motivated by the problem of discriminating multivariate time series that have an underlying graph topology. Graph signal processing has developed various tools for the analysis of scalar signals on graphs. Here, we extend the existing techniques to design filters for multivariate time series that have non-trivial spatiotemporal graph topologies. We show that such a filtering approach can discriminate signals that cannot be discriminated by competing approaches. Then, we consider how to identify spatiotemporal graph topology from signal observations. Specifically, we consider a generative model that yields a bilinear inverse problem with an observation-dependent left multiplication. We propose two algorithms for solving the inverse problem and provide probabilistic guarantees on recovery. We apply the technique to identify spatiotemporal graph components in electroencephalogram (EEG) recordings. The identified components are shown to discriminate between various cognitive task conditions in the data.

Item: Central Compact-Reconstruction WENO Methods (2018)
Cooley, Kilian; Baeder, James D; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
High-order compact upwind schemes produce block-tridiagonal systems because the reconstruction is performed in the characteristic variables, which is necessary to avoid spurious oscillations. Consequently, they are less efficient than their non-compact counterparts except on high-frequency features. Upwind schemes have many practical drawbacks as well, so it is desirable to have a compact scheme that is more computationally efficient at all wavenumbers and does not require a characteristic decomposition. This goal cannot be achieved by upwind schemes, so we turn to central schemes, which by design require neither a Riemann solver nor a determination of upwind directions by characteristic decomposition. In practice, however, central schemes of fifth or higher order apparently need the characteristic decomposition to fully avoid spurious oscillations. The literature provides no entirely convincing explanation for this fact; however, a comparison of two central WENO schemes suggests one. Pursuing that possibility leads to the first main contribution of this work, the development of a fifth-order, central compact-reconstruction WENO (CCRWENO) method. That method retains the key advantages of central and compact schemes but does not entirely avoid characteristic variables, as was desired. The second major contribution is to establish that the role of characteristic variables is to make the flux Jacobians within a stencil more diagonally dominant, having ruled out some plausible alternative explanations. The CCRWENO method cannot inherently improve the diagonal dominance without compromising its key advantages, so strategies are explored for modifying the CCRWENO solution to prevent the spurious oscillations. Directions for future investigation and improvement are proposed.
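The central and compact-reconstruction variants discussed above are built on the same nonlinear-weight machinery as standard fifth-order WENO. As a point of reference only (this is the textbook Jiang-Shu scheme, not the CCRWENO method itself), the sketch below computes the smoothness indicators and nonlinear weights for one left-biased interface reconstruction.

```python
import numpy as np

def weno5_reconstruct(v, eps=1e-6):
    """
    Classical (Jiang-Shu) fifth-order WENO reconstruction of the left-biased
    interface value v_{i+1/2} from the five cell averages
    v = (v_{i-2}, v_{i-1}, v_i, v_{i+1}, v_{i+2}).
    """
    vm2, vm1, v0, vp1, vp2 = v
    # Third-order candidate reconstructions on the three sub-stencils.
    p0 = (2 * vm2 - 7 * vm1 + 11 * v0) / 6.0
    p1 = (-vm1 + 5 * v0 + 2 * vp1) / 6.0
    p2 = (2 * v0 + 5 * vp1 - vp2) / 6.0
    # Smoothness indicators.
    b0 = 13/12 * (vm2 - 2*vm1 + v0)**2 + 0.25 * (vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12 * (vm1 - 2*v0 + vp1)**2 + 0.25 * (vm1 - vp1)**2
    b2 = 13/12 * (v0 - 2*vp1 + vp2)**2 + 0.25 * (3*v0 - 4*vp1 + vp2)**2
    # Nonlinear weights biased toward the smooth sub-stencils.
    d = np.array([0.1, 0.6, 0.3])
    alpha = d / (eps + np.array([b0, b1, b2]))**2
    w = alpha / alpha.sum()
    return w @ np.array([p0, p1, p2])

# Smooth data recovers the expected value; a jump automatically
# down-weights the sub-stencils that cross it.
print(weno5_reconstruct([1.0, 1.0, 1.0, 1.0, 1.0]))   # -> 1.0
print(weno5_reconstruct([0.0, 0.0, 0.0, 1.0, 1.0]))
```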
Item: UNDERSTANDING EXTREME WAVES USING WAVELETS: ANALYSIS, ALGORITHMS, AND NUMERICAL STUDIES (2018)
Zakharov, Arseny Maksimovich; Balachandran, Balakumar; Trivisa, Konstantina; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
A method for studying extreme wave solutions of the 1+1D nonlinear Schrödinger equation (NLSE) with periodic boundary conditions is presented in this work. Existing methods for solving the NLSE in the periodic case usually require information about the full period, and obtaining that information may not always be possible when experimental data are collected outside laboratory settings. In addition, some NLSE solutions contain fine details and have extremely long periods, so a very large mesh would be required to simulate the propagation of the wave numerically. Finally, since some solutions experience exponential growth only once in their lifetime, the number of time steps necessary to numerically recreate an extreme or rogue wave may be significant. A way to determine whether a solution is stable with respect to small perturbations (in the Benjamin-Feir sense) is available in the literature; it relies on representing a solution using Riemann theta functions that depend on a set of parameters which, in particular, can be used to determine stability. An algorithm for finding those parameters is developed here, based on a wavelet representation. The existence of wavelet families with compact support allows the analysis of the solution to be restricted to a given interval, and this approach is found to work even for incomplete sets of input data. The implementation of the algorithm requires the evaluation of integrals of wavelet triple products (triplets). A method to evaluate those triplets analytically is described, which avoids the need to approximate the wavelets numerically. The triplet values can be precomputed independently of the specific problem, which in turn allows the implemented algorithm to run on desktop computers. To demonstrate the efficiency of the method, various simulations have been performed using data obtained by the research group. The algorithm proved to be efficient and robust, correctly processing the input data even with small-to-moderate noise in the signal, unlike curve-fitting methods, which were found to fail in the presence of noise in the input. The analytical basis and algorithms developed in this dissertation can be useful for examining extreme or freak waves that arise in a number of contexts, as well as solutions with localized features in space and time.
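For orientation, extreme-wave solutions of the focusing NLSE with periodic boundary conditions (written here in one common normalization, i u_t + u_xx + 2|u|^2 u = 0) are routinely simulated with a split-step Fourier method. The sketch below is that standard solver, seeded with a small modulation of a plane wave so the Benjamin-Feir instability can develop; it is a generic illustration, not the wavelet-based analysis developed in the dissertation.

```python
import numpy as np

# Focusing NLSE  i u_t + u_xx + 2 |u|^2 u = 0  on a periodic domain,
# solved with a Strang split-step Fourier scheme.
L = 8 * np.pi                 # spatial period
N = 512                       # number of Fourier modes
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

# Plane wave plus a tiny modulation: Benjamin-Feir-unstable initial data.
u = (1.0 + 1e-3 * np.cos(2 * np.pi * x / L)).astype(complex)

dt = 1e-3
lin = np.exp(-1j * k**2 * dt)                             # exact linear step in Fourier space
for step in range(20000):                                 # integrate to t = 20
    u *= np.exp(1j * 2 * np.abs(u)**2 * dt / 2)           # half nonlinear step
    u = np.fft.ifft(lin * np.fft.fft(u))                  # full linear step
    u *= np.exp(1j * 2 * np.abs(u)**2 * dt / 2)           # half nonlinear step

# The modulation should grow and focus into peaks well above the unit background.
print("max |u| at t = 20:", np.abs(u).max())
```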
Item: HYBRID ROUTING MODELS UTILIZING TRUCKS OR SHIPS TO LAUNCH DRONES (2018)
Poikonen, Stefan Allan; Golden, Bruce L; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Technological advances for unmanned aerial vehicles, commonly referred to as drones, have opened the door to a number of new and interesting applications in areas including the military, healthcare, communications, cinematography, emergency response, and logistics. However, limitations due to battery capacity, maximum take-off weight, the finite range of wireless communications, and legal regulations have restricted the effective operational range of drones in many practical applications. Several hybrid operational models, involving one or more drones launching from a larger vehicle that may be a ship, truck, or airplane, have emerged to help mitigate these range limitations. In particular, the drones use the larger vehicle as both a mobile depot and a recharging or refueling platform. In this dissertation, we describe routing models that leverage the tandem of one or more drones with a larger vehicle. In these models, there is generally a set of targets that should be visited in an efficient (usually time-minimizing) manner. By using multiple vehicles, targets may be visited in parallel, thereby reducing the total time to visit all targets. The vehicle routing problem with drones (VRPD) and the traveling salesman problem with a drone (TSP-D) consider hybrid truck-and-drone models of delivery, where the goal is to minimize the time required to deliver a set of packages to their respective customers and return the truck(s) and drone(s) to the origin depot. In both problems, the drone can carry one homogeneous package at a time. Theoretical analysis, exact solution methods, heuristic solution methods, and computational results are presented. In the mothership and drone routing problem (MDRP), we consider the case where the larger launch vehicle is free to move in Euclidean space (the open seas) and launches a drone to visit one target location at a time before returning to the ship to pick up new cargo or refuel. The mothership and high-capacity drone routing problem (MDRP-HC) is a generalization of the MDRP that allows the drone to visit multiple targets consecutively before returning to the ship. The MDRP and MDRP-HC contain elements of both combinatorial and continuous optimization. In the multi-visit drone routing problem (MVDRP), a drone can visit multiple targets consecutively before returning to the truck, subject to energy constraints that take into account the weight of the packages carried by the drone.
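A simple way to see the parallelism described above is to take a fixed truck tour and greedily offload individual customers to the drone whenever that shortens the completion time. The sketch below does exactly that for Euclidean travel times under the one-package-at-a-time assumption; it is a toy heuristic for illustration, not one of the exact or heuristic methods developed in the dissertation, and all speeds and coordinates are made up.

```python
import numpy as np

def dist(a, b):
    return float(np.hypot(*(np.asarray(a) - np.asarray(b))))

def nearest_neighbor_tour(depot, customers):
    """Simple truck-only tour (customer visit order) starting from the depot."""
    remaining = list(range(len(customers)))
    tour, cur = [], depot
    while remaining:
        j = min(remaining, key=lambda i: dist(cur, customers[i]))
        remaining.remove(j)
        tour.append(j)
        cur = customers[j]
    return tour

def greedy_drone_offload(depot, customers, tour, truck_speed=1.0, drone_speed=2.0):
    """
    Greedy TSP-D-style improvement: while the truck drives from i to k, the
    drone may serve the intermediate customer j (one package at a time), so
    that leg takes max(truck i->k, drone i->j->k).  Returns the completion
    time and the customers served by drone.
    """
    pts = [depot] + [customers[i] for i in tour] + [depot]
    t_truck = lambda a, b: dist(a, b) / truck_speed
    t_drone = lambda a, b: dist(a, b) / drone_speed

    total, drone_jobs, pos = 0.0, [], 0
    while pos < len(pts) - 1:
        if pos + 2 < len(pts):                       # a customer j exists between i and k
            i, j, k = pts[pos], pts[pos + 1], pts[pos + 2]
            drive = t_truck(i, j) + t_truck(j, k)
            fly = max(t_truck(i, k), t_drone(i, j) + t_drone(j, k))
            if fly < drive:
                total += fly
                drone_jobs.append(tour[pos])         # pts[pos + 1] is customers[tour[pos]]
                pos += 2                             # truck (and drone) are now at k
                continue
        total += t_truck(pts[pos], pts[pos + 1])
        pos += 1
    return total, drone_jobs

# Example on random customer locations.
rng = np.random.default_rng(0)
depot = (0.0, 0.0)
customers = [tuple(p) for p in rng.uniform(-10, 10, size=(12, 2))]
tour = nearest_neighbor_tour(depot, customers)
t, jobs = greedy_drone_offload(depot, customers, tour)
print(f"completion time {t:.2f}, drone-served customers {jobs}")
```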