Computer Science Theses and Dissertations

Permanent URI for this collection: http://hdl.handle.net/1903/2756

Now showing 1 - 10 of 99
  • Modeling Imatinib-Treated Chronic Myelogenous Leukemia and the Immune System
    (2019) Peters, Cara Disa; Levy, Doron; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Chronic myelogenous leukemia can be considered a chronic condition thanks to the development of tyrosine kinase inhibitors in the early 2000s. Most CML patients are able to manage the disease, but unending treatment can affect quality of life. The focus of much clinical research has thus transitioned to treatment cessation, where many clinical trials have demonstrated that treatment-free remission is possible. While many open questions surround the criteria for selecting cessation candidates, much evidence indicates that the immune system plays a significant role. Mathematical modeling provides a complementary component to clinical research. Existing models describe well the dynamics of CML in the first phase of treatment, where most patients experience a biphasic decline in the BCR-ABL ratio. The Clapp model is one of the first to incorporate the immune system and capture the oscillations in the BCR-ABL ratio that often occur later in therapy. However, these models are far from capable of being used in a predictive manner and do not fully capture the dynamics surrounding treatment cessation. Based on clinical research demonstrating the importance of immune response, we hypothesize that a mathematical model of CML should include a more detailed description of the immune system. We therefore present a new model that extends the Clapp model. The model is then fit to patient data and determined to be a good qualitative description of CML dynamics. With this model it can be shown that treatment-free remission is possible. However, the model introduces new parameters that must be correctly identified in order for it to have predictive power. We next consider the parameter identification problem.
Since the dynamics of CML can be considered in two phases, the biphasic decline in and later oscillations of the BCR-ABL ratio, we hypothesize that parameter values may differ over the course of treatment, and we identify which parameters are most variable by refitting the model to different windows of data. We determine that parameters associated with immune response and regulation are the most difficult to identify and could be key to selecting good treatment cessation candidates. To increase the predictive power of our model, we consider data assimilation techniques, which are used successfully in weather forecasting. The extended Kalman filter (EKF) is used to assimilate CML patient data. Although we determine that the EKF is not the ideal technique for our model, we show that data assimilation methods in general hold promise in the search for a predictive model of CML. To have the most success, new techniques should be considered, data should be collected more frequently, and immune assay data should be made available.
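The extended Kalman filter mentioned above can be illustrated with a minimal sketch. The two-compartment decay model below is purely illustrative (the decay factors, noise levels, and the sum-of-compartments observation are assumptions for demonstration, not the dissertation's fitted CML model); it only shows the generic predict/update cycle by which noisy measurements are assimilated into a state estimate.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter.
    x, P: prior state estimate and covariance; z: new measurement;
    f/h: dynamics/measurement maps; F/H: their Jacobians."""
    # Predict
    x_pred = f(x)
    F_x = F(x)
    P_pred = F_x @ P @ F_x.T + Q
    # Update
    y = z - h(x_pred)                      # innovation
    H_x = H(x_pred)
    S = H_x @ P_pred @ H_x.T + R           # innovation covariance
    K = P_pred @ H_x.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_x) @ P_pred
    return x_new, P_new

# Toy biphasic-decline model (illustrative only): two leukemic compartments
# decaying at assumed fast and slow per-step rates; the observed "ratio"
# is their sum.
a, b = 0.8, 0.98
f = lambda x: np.array([a * x[0], b * x[1]])
F = lambda x: np.diag([a, b])
h = lambda x: np.array([x[0] + x[1]])
H = lambda x: np.array([[1.0, 1.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[0.05**2]])

rng = np.random.default_rng(0)
x_true = np.array([50.0, 50.0])
x_est, P = np.array([40.0, 60.0]), np.eye(2)   # deliberately wrong prior
for _ in range(40):
    x_true = f(x_true) + rng.normal(0, 1e-2, 2)
    z = h(x_true) + rng.normal(0, 0.05, 1)
    x_est, P = ekf_step(x_est, P, z, f, F, h, H, Q, R)

print(abs(h(x_est)[0] - h(x_true)[0]))  # small residual after assimilation
```

With sparse clinical sampling the same cycle applies; the abstract's point is that richer (e.g. immune assay) data would make such state estimates far more informative.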
  • Mathematical Models of Underlying Dynamics in Acute and Chronic Immunology
    (2019) Wyatt, Asia Alexandria; Levy, Doron; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    During an immune response, it is understood that there are key differences between the cells and cytokines present in a primary response and those present in subsequent responses. Specifically, after a primary response, memory cells are present and drive the clearance of antigen in later immune responses. When comparing acute infections to chronic infections, there are also differences in the dynamics of the immune system. In this dissertation, we develop three mathematical models to explore these differences in the immune response to acute and chronic infections through the creation, activation, regulation, and long-term maintenance of T cells. We mimic this biological behavior through the use of delay differential equation (DDE) models. The first model explores the dynamics of adaptive immunity in primary and secondary responses to acute infections. It is shown that while we observe similar amounts of antigen stimulation in both immune responses, with the incorporation of memory T cells we see an increase in both the number of effector T cells present and the speed of activation of the immune system in the secondary response. We conclude that our model is robust and can be applied to study different types of antigen, from viral to bacterial. Extending our work to chronic infections, we develop our second and third models to explore breast cancer dormancy and T cell exhaustion. For our breast cancer dormancy model, we find that our model behaves similarly to acute infections, but with constant antigen stimulation. Moreover, we observe the importance of immune protection for the long-term survival of breast cancer cells. In our third model we find that while memory T cells play a major role in the effectiveness of the immune system in acute infections, in chronic infections, over long periods of time, T cell exhaustion prevents proper immune function and clearance of antigen.
We also observe how the lack of long-term maintenance of memory T cells plays an important role in the final outcome of the system. Finally, we propose two potential extensions of the three models developed: creating a simplified acute infection model and creating a combined model of breast cancer dormancy with T cell exhaustion.
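A DDE with a discrete delay can be integrated by the method of steps with a history buffer. The toy model below (equations, parameters, and the "shorter delay, stronger activation" stand-in for memory T cells are all assumptions, not the dissertation's models) sketches why delayed effector activation lets antigen peak much higher in a primary response than in a memory-assisted one.

```python
import numpy as np

def simulate(tau, p, T=60.0, dt=0.01):
    """Euler method-of-steps for a toy delayed immune-response model:
       A' = r*A - k*A*E   (antigen grows, is cleared by effectors)
       E' = p*A(t - tau) - d*E   (effectors activated after delay tau)."""
    r, k, d = 0.5, 0.05, 0.1          # illustrative rates
    n = int(T / dt)
    lag = int(tau / dt)
    A = np.empty(n + 1); E = np.empty(n + 1)
    A[0], E[0] = 1.0, 0.0             # history A(t <= 0) treated as A[0]
    for i in range(n):
        A_del = A[max(i - lag, 0)]    # delayed antigen level
        A[i + 1] = max(A[i] + dt * (r*A[i] - k*A[i]*E[i]), 0.0)
        E[i + 1] = E[i] + dt * (p*A_del - d*E[i])
    return A, E

# A "secondary" response: shorter effective delay and stronger activation,
# standing in for the presence of memory T cells (parameters assumed).
A_primary, _ = simulate(tau=5.0, p=0.2)
A_memory,  _ = simulate(tau=1.0, p=0.6)
print(A_primary.max(), A_memory.max())  # memory keeps the antigen peak far lower
```

The qualitative behavior (faster activation, lower antigen burden with memory) is the feature the abstract describes; quantitative conclusions require the dissertation's actual models.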
  • ERROR ANALYSIS OF NUMERICAL METHODS FOR NONLINEAR GEOMETRIC PDEs
    (2019) Li, Wenbo; Nochetto, Ricardo H; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This dissertation presents the numerical treatment of two classes of nonlinear geometric problems: fully nonlinear elliptic PDEs and nonlinear nonlocal PDEs. For the fully nonlinear elliptic PDEs, we study three problems: Monge-Amp\`{e}re equations, computation of convex envelopes and optimal transport with quadratic cost. We develop two-scale methods for both the Monge-Amp\`{e}re equation and the convex envelope problem with Dirichlet boundary conditions, and prove rates of convergence in the $L^{\infty}$ norm for them. Our technique hinges on the discrete comparison principle, construction of barrier functions and geometric properties of the problems. We also derive error estimates for numerical schemes of the optimal transport problem with quadratic cost, which can be written as a so-called second boundary value problem for the Monge-Amp\`{e}re equation. This contains a new weighted $L^2$ error estimate for the fully discrete linear programming method based on quantitative stability estimates for optimal plans. For the nonlinear nonlocal PDEs, we focus on the computation and numerical analysis of nonlocal minimal graphs of order $s \in (0,1/2)$ in a bounded domain. This can be reinterpreted as a Dirichlet problem for a nonlocal, nonlinear, degenerate operator of order $s + 1/2$, whose numerical treatment is in its infancy. We propose a finite element discretization and prove its convergence together with error estimates for two different notions of error. Several interesting numerical experiments are also presented and discussed, which might shed some light on theoretical questions about this emerging research topic.
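As a one-dimensional analogue of the convex envelope computation studied above, the sketch below finds the largest convex function below sampled data via the lower convex hull of the graph (a monotone-chain scan). This is only a 1-D illustration of the object being approximated; it is not the dissertation's two-scale method, whose convergence analysis in the $L^{\infty}$ norm is the actual contribution.

```python
import numpy as np

def convex_envelope_1d(x, u):
    """Largest convex function below the data (x_i, u_i), evaluated at the
    nodes: take the lower convex hull of the graph, scanning left to right."""
    hull = []  # indices of hull vertices
    for i in range(len(x)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or above the chord i0 -> i
            if (u[i1]-u[i0])*(x[i]-x[i0]) >= (u[i]-u[i0])*(x[i1]-x[i0]):
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(x, x[hull], u[hull])

x = np.linspace(-1.0, 1.0, 201)
u = (x**2 - 0.25)**2          # double well; envelope is flat between the wells
env = convex_envelope_1d(x, u)
print(env[100])               # value at x = 0 lies on the flat part
```

In higher dimensions no such direct scan exists, which is why iterative two-scale schemes and comparison-principle arguments are needed.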
  • PROBLEMS ORIGINATING FROM THE PLANNING OF AIR TRAFFIC MANAGEMENT INITIATIVES
    (2018) Estes, Alexander; Ball, Michael O; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    When weather affects the ability of an airport to accommodate flights, a ground delay program is used to control the rate at which flights arrive at the airport. This prevents excessive congestion at the airport. In this thesis, we discuss several problems arising from the planning of these programs. Each of these problems provides insight that can be applied in a broader setting, and in each case we develop generalizations of these results in a wider context. We show that a certain type of greedy policy is optimal for planning a ground delay program when no air delays are allowed. More generally, we characterize the conditions under which policies are optimal for a dynamic stochastic transportation problem. We also provide results that ensure that certain assignments are optimal, and we apply these results to the problem of matching drivers to riders in an on-demand ride service. When flights are allowed to take air delays, then a greedy policy is no longer optimal, but flight assignments can be produced by solving an integer program. We establish the strength of an existing formulation of this problem, and we provide a new, more scalable formulation that has the same strength properties. We show that both of these methods satisfy a type of equity property. These formulations are a special case of a dynamic stochastic network flow problem, which can be modeled as a deterministic flow problem on a hypergraph. We provide strong formulations for this general class of hypergraph flow problems. Finally, we provide a method for summarizing a dataset of ground delay programs. This summarization consists of a small subset of the original data set, whose elements are referred to as "representative" ground delay programs. More generally, we define a new class of data exploration methods, called "representative region selection" methods. We provide a framework for evaluating the quality of these methods, and we demonstrate statistical properties of these methods.
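The greedy policy for a ground delay program can be sketched as a ration-by-schedule style slot assignment: process reduced-rate arrival slots in order and give each to the earliest-scheduled available flight. This is a simplified caricature (flight names, times, and the availability rule are invented for illustration), not the thesis' exact optimal policy or its stochastic setting.

```python
def assign_slots(flights, slots):
    """Greedy (ration-by-schedule style) assignment: walk the arrival slots
    in order and give each slot to the earliest-scheduled flight that can
    use it. flights: {name: scheduled_arrival}; slots: sorted slot times.
    A flight can use a slot no earlier than its scheduled arrival."""
    pending = sorted(flights, key=lambda f: flights[f])  # by scheduled time
    assignment = {}
    for t in slots:
        for f in pending:
            if flights[f] <= t:          # flight is available by slot time
                assignment[f] = t
                pending.remove(f)
                break
    return assignment

flights = {"AA1": 10, "UA2": 12, "DL3": 15}   # hypothetical schedule
slots = [11, 13, 14, 16]                      # reduced arrival capacity
print(assign_slots(flights, slots))
# → {'AA1': 11, 'UA2': 13, 'DL3': 16}
```

When flights may also absorb air delay, this first-come greedy rule loses optimality, which is why the thesis turns to integer programming formulations for that case.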
  • Modeling the Transfer of Drug Resistance in Solid Tumors
    (2018) Becker, Matthew Harrington; Levy, Doron; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    ABC efflux transporters are a key factor leading to multidrug resistance in cancer. Overexpression of these transporters significantly decreases the efficacy of anti-cancer drugs. Along with selection and induction, drug resistance may be transferred between cells, which is the focus of this dissertation. Specifically, we consider the intercellular transfer of P-glycoprotein (P-gp), a well-known ABC transporter that has been shown to confer resistance to many common chemotherapeutic drugs. In a recent paper, Durán et al. studied the dynamics of mixed cultures of resistant and sensitive NCI-H460 (human non-small cell lung cancer) cell lines [1]. As expected, the experimental data showed a gradual increase in the percentage of resistant cells and a decrease in the percentage of sensitive cells. The experimental work was accompanied by a mathematical model that assumed P-gp transfer from resistant cells to sensitive cells, rendering them temporarily resistant. The mathematical model provided a reasonable fit to the experimental data. In this dissertation we develop three new mathematical models for the transfer of drug resistance between cancer cells. Our first model is based on incorporating a resistance phenotype into a model of cancer growth [2]. The resulting model for P-gp transfer, written as a system of integro-differential equations, follows the dynamics of proliferating, quiescent, and apoptotic cells, with a varying resistance phenotype. We show that this model provides a good match to the dynamics of the experimental data of [1]. The mathematical model further suggests that resistant cancer cells have a slower division rate than the sensitive cells. Our second model is a reaction-diffusion model with sensitive, resistant, and temporarily resistant cancer cells occupying a 2-dimensional space. We use this model as another extension of [1].
We show that this model, with competition and diffusion in space, provides an even better fit to the experimental data of [1]. We incorporate a cytotoxic drug and study the effects of varying treatment protocols on the size and makeup of the tumor. We show that constant infusion leads to a small but highly resistant tumor, while small doses do not do enough to control the overall growth of the tumor. Our final model extends [3], an integro-differential equation with resistance modeled as a continuous variable and a Boltzmann-type integral describing the transfer of P-gp expression. We again extend the model to a 2-dimensional spatial domain and incorporate competition-inhibited growth. The resulting model, written as a single partial differential equation, shows that over time the resistance transfer leads to a uniform distribution of resistance levels, which is consistent with the results of [3]. We include a cytotoxic agent and determine that, as with our second model, it alone cannot successfully eradicate the tumor. We briefly present a second extension wherein we include two distinct transfer rules. We show that there is no qualitative difference between the single-transfer-rule and two-transfer-rule models.
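The flavor of such a reaction-diffusion transfer model can be sketched in one spatial dimension (the dissertation's models are 2-D and fitted to data; every equation and parameter below is an illustrative assumption): logistic growth with a shared carrying capacity, diffusion, and contact-driven transfer of resistance from resistant to sensitive cells.

```python
import numpy as np

def step(S, R, dt, dx, D, rs, rr, beta, K):
    """One explicit Euler step of a 1-D reaction-diffusion caricature:
    logistic growth with shared capacity K, diffusion D, and transfer of
    resistance from resistant (R) to sensitive (S) cells on contact."""
    lap = lambda u: (np.roll(u, 1) - 2*u + np.roll(u, -1)) / dx**2  # periodic
    total = S + R
    transfer = beta * S * R
    S2 = S + dt * (D*lap(S) + rs*S*(1 - total/K) - transfer)
    R2 = R + dt * (D*lap(R) + rr*R*(1 - total/K) + transfer)
    return np.clip(S2, 0, None), np.clip(R2, 0, None)

# All parameter values below are illustrative, not fitted to [1].
n, dx, dt = 100, 0.1, 0.001
D, rs, rr, beta, K = 0.01, 1.0, 0.8, 0.5, 1.0  # resistant cells divide slower
S = 0.5 * np.ones(n)
R = np.zeros(n); R[45:55] = 0.05               # small resistant focus
f0 = R.sum() / (S.sum() + R.sum())
for _ in range(5000):
    S, R = step(S, R, dt, dx, D, rs, rr, beta, K)
f1 = R.sum() / (S.sum() + R.sum())
print(f0, f1)   # resistant fraction grows, as in the mixed-culture data
```

Even with a slower resistant division rate, the transfer term drives the resistant fraction upward, which is the qualitative trend the experimental data of [1] show.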
  • Decision Making Under Uncertainty: New Models and Applications
    (2018) Jie, Cheng; Fu, Michael C; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In decision-making-under-uncertainty problems, an agent takes an action on the environment and obtains a non-deterministic outcome. Such problem settings arise in applied research fields such as financial engineering, business analytics, and speech recognition. The goal of the research is to design an automated algorithm for an agent to follow in order to find an optimal action according to his/her preferences. Typically, the criterion for selecting an optimal action/policy is a performance measure, determined jointly by the agent's preference and the random mechanism of the agent's surrounding environment. The random mechanism is reflected through a random variable of the outcomes attained by a given action, and the agent's preference is captured by a transformation on the potential outcomes from the set of possible actions. Many decision-making-under-uncertainty problems formulate the performance measure as an objective function and develop optimization schemes for that objective function. Although the idea seems straightforward at a high level, many challenges, both conceptual and computational, arise in the process of finding the optimal action. The thesis studies a special class of performance measures defined based on Cumulative Prospect Theory (CPT), which has been used as an alternative to expected-utility-based performance measures for evaluating human-centric systems. The first part of the thesis designs a simulation-based optimization framework for the CPT-based performance measure. The framework includes a sample-based estimator for the CPT-value and stochastic approximation algorithms for searching for the optimal action/policy. We prove that, under reasonable assumptions, the CPT-value estimator is asymptotically consistent and our optimization algorithms converge asymptotically to the optimal point.
The second part of the thesis introduces an abstract dynamic programming framework whose transitional measure is defined through the CPT-value. We also provide sufficient conditions under which CPT-driven dynamic programming attains a unique optimal solution. Empirical experiments presented in the last part of the thesis illustrate that the CPT-estimator is consistent and that the CPT-based performance measure may lead to an optimal policy very different from those obtained using traditional expected utility.
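A sample-based CPT-value estimator can be sketched from order statistics: apply a utility to gains and losses separately, and weight each order statistic by increments of a probability-weighting function on the empirical tail distribution. The exact estimator in the thesis may differ; the Tversky–Kahneman-style weighting and utility parameters below are standard illustrative choices, not the thesis' specification.

```python
import numpy as np

def cpt_value(samples, u_plus, u_minus, w_plus, w_minus):
    """Sample-based CPT-value estimator: utilities of order statistics,
    weighted by increments of the probability-weighting functions applied
    to the empirical upper tail (gains) and lower tail (losses)."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    gains = u_plus(np.maximum(x, 0.0))       # u(0) = 0, so non-gains drop out
    losses = u_minus(np.maximum(-x, 0.0))
    wg = w_plus((n + 1 - i) / n) - w_plus((n - i) / n)   # upper-tail weights
    wl = w_minus(i / n) - w_minus((i - 1) / n)           # lower-tail weights
    return float(np.sum(gains * wg) - np.sum(losses * wl))

# Tversky-Kahneman style weighting (gamma < 1 overweights rare outcomes)
# and a concave value function; loss side scaled by an aversion factor.
w = lambda p, g=0.65: p**g / (p**g + (1 - p)**g) ** (1 / g)
u = lambda x: x**0.88

rng = np.random.default_rng(1)
samples = rng.normal(0.5, 1.0, 10000)
v = cpt_value(samples, u, lambda x: 2.25 * u(x), w, w)
print(v)   # generally differs from the plain sample mean of u(X)
```

With identity utilities and identity weighting, the weights telescope to 1/n and the estimator reduces to the ordinary sample mean, which is a quick sanity check on the construction.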
  • Frontiers in Lattice Cryptography and Program Obfuscation
    (2017) Apon, Daniel Christopher; Katz, Jonathan; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In this dissertation, we explore the frontiers of the theory of cryptography along two lines. In the first direction, we explore Lattice Cryptography, the primary sub-area of post-quantum cryptographic research. Our first contribution is the construction of a deniable attribute-based encryption scheme from lattices. A deniable encryption scheme is secure not only against eavesdropping attacks, as required by semantic security, but also against stronger coercion attacks performed after the fact. An attribute-based encryption scheme allows ``fine-grained'' access to ciphertexts, allowing a decryption access policy to be embedded in ciphertexts and keys. We achieve both properties simultaneously for the first time from lattices. Our second contribution is the construction of a digital signature scheme that enjoys both short signatures and a completely tight security reduction from lattices. As a matter of independent interest, we give an improved method for randomized inversion of the G gadget matrix, which reduces the noise growth rate in homomorphic evaluations performed in a large number of lattice-based cryptographic schemes, without incurring the high cost of sampling discrete Gaussians. In the second direction, we explore Cryptographic Program Obfuscation. A program obfuscator is a type of cryptographic software compiler that outputs executable code with the guarantee that ``whatever can be hidden about the internal workings of program code, is hidden.'' Indeed, program obfuscation can be viewed as a ``universal and cryptographically-complete'' tool. Our third contribution is the first full-scale implementation of secure program obfuscation in software. Our toolchain takes code written in a C-like programming language, specialized for cryptography, and produces secure, obfuscated software. Our fourth contribution is a new cryptanalytic attack against a variety of ``early'' program obfuscation candidates.
We provide a general, efficiently-testable property for any two branching programs, called partial inequivalence, which we show is sufficient for launching an ``annihilation attack'' against several obfuscation candidates based on Garg-Gentry-Halevi multilinear maps.
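The G gadget matrix mentioned above, and the *deterministic* baseline for inverting it, can be sketched concretely. Note the dissertation's contribution is an improved *randomized* inversion with better noise behavior; the bit-decomposition below is only the standard deterministic baseline it improves upon.

```python
import numpy as np

def gadget_matrix(n, q):
    """G = I_n tensor (1, 2, 4, ..., 2^(k-1)) with k = ceil(log2 q)."""
    k = int(np.ceil(np.log2(q)))
    g = 2 ** np.arange(k)                     # the gadget row (1, 2, ..., 2^(k-1))
    return np.kron(np.eye(n, dtype=int), g)   # shape (n, n*k)

def g_inverse(v, q):
    """Deterministic bit decomposition: maps each entry of v to its k bits
    (LSB first), so that G @ g_inverse(v) == v (mod q). The randomized
    inversion studied in the dissertation replaces this with a sampled
    short preimage; this sketch shows only the deterministic baseline."""
    k = int(np.ceil(np.log2(q)))
    bits = [(int(x) >> j) & 1 for x in v for j in range(k)]
    return np.array(bits, dtype=int)

q, n = 256, 3
G = gadget_matrix(n, q)
v = np.array([200, 7, 255])
x = g_inverse(v, q)
print((G @ x) % q)   # recovers v modulo q
```

Because the preimage has {0,1} entries, multiplying noise vectors by G^{-1}(v) keeps noise growth small; the randomized variant trades a slightly larger preimage norm for better statistical properties without discrete-Gaussian sampling.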
  • Capturing the Large Scale Behavior of Many Particle Systems Through Coarse-Graining
    (2017) Punshon-Smith, Samuel; Levermore, Charles D; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This dissertation is concerned with two areas of investigation: the first is understanding the mathematical structures behind the emergence of macroscopic laws and the effects of small-scale fluctuations; the second involves the rigorous mathematical study of such laws and related questions of well-posedness. To address these areas, the dissertation consists of two parts. Part I concerns the theory of coarse-graining of many-particle systems. We first investigate the mathematical structure behind the Mori-Zwanzig (projection operator) formalism by introducing two perturbative approaches to coarse-graining of systems that have an explicit scale separation. One concerns systems with little dissipation, while the other concerns systems with strong dissipation. In both settings we obtain an asymptotic series of `corrections' to the limiting description which are small with respect to the scaling parameter; these corrections represent the effects of small scales. We determine that only certain approximations give rise to dissipative effects in the resulting evolution. Next we apply this framework to the problem of coarse-graining the locally conserved quantities of a classical Hamiltonian system. By lumping conserved quantities into a collection of mesoscopic cells, we obtain, through a series of approximations, a stochastic particle system that resembles a discretization of the non-linear equations of fluctuating hydrodynamics. We study this system in the case that the transport coefficients are constant and prove well-posedness of the stochastic dynamics. Part II concerns the mathematical description of models where the underlying characteristics are stochastic. Such equations can model, for instance, the dynamics of a passive scalar in a random (turbulent) velocity field or the statistical behavior of a collection of particles subject to random environmental forces.
First, we study general well-posedness properties of the stochastic transport equation with rough diffusion coefficients. Our main result is strong existence and uniqueness under certain regularity conditions on the coefficients, and uses the theory of renormalized solutions of transport equations adapted to the stochastic setting. Next, in work undertaken with collaborator Scott Smith, we study the Boltzmann equation with a stochastic forcing. The noise is white in time and colored in space and describes the effects of random environmental forces on a rarefied gas undergoing instantaneous, binary collisions. Under a cut-off assumption on the collision kernel and a coloring hypothesis for the noise coefficients, we prove the global existence of renormalized (DiPerna/Lions) martingale solutions to the Boltzmann equation for large initial data with finite mass, energy, and entropy. Our analysis includes a detailed study of weak martingale solutions to a class of linear stochastic kinetic equations. Tightness of the appropriate quantities is proved by an extension of the Skorohod theorem to non-metric spaces.
  • Nonlinear Analysis of Phase Retrieval and Deep Learning
    (2017) Zou, Dongmian; Balan, Radu V; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Nonlinearity causes information loss. The phase retrieval problem, or phaseless reconstruction problem, seeks to reconstruct a signal from the magnitudes of linear measurements. With a more complicated design, convolutional neural networks use nonlinearity to extract useful features. We can model both problems in a frame-theoretic setting. In the presence of noise, it is important to study the stability of the phaseless reconstruction and of the feature-extraction part of convolutional neural networks. We prove Lipschitz properties in both cases. In the phaseless reconstruction problem, we show that phase retrievability implies a bi-Lipschitz reconstruction map, which can be extended to the Euclidean space to accommodate noise while remaining stable. In the deep learning problem, we set up a general framework for convolutional neural networks and provide an approach for computing the Lipschitz constants.
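The two sides of the bi-Lipschitz property can be probed numerically. For real signals, a global sign is invisible to the phaseless map, so the relevant distance between signals is min(||x − y||, ||x + y||). The sketch below (random Gaussian frame; a numerical probe, not a proof) checks the upper Lipschitz bound, which always holds by the reverse triangle inequality, and empirically observes that the lower bound stays bounded away from zero on random pairs.

```python
import numpy as np

def phaseless(A, x):
    """The phaseless measurement map x -> (|<a_i, x>|)_i."""
    return np.abs(A @ x)

rng = np.random.default_rng(2)
n, m = 8, 32
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random frame, m > 2n - 1
op = np.linalg.norm(A, 2)                  # operator norm of A

ratios = []
for _ in range(200):
    x, y = rng.normal(size=n), rng.normal(size=n)
    d = np.linalg.norm(phaseless(A, x) - phaseless(A, y))
    gap = min(np.linalg.norm(x - y), np.linalg.norm(x + y))
    # Upper bound: || |Ax| - |Ay| || <= ||A|| * ||x - y|| always holds
    # (coordinatewise reverse triangle inequality).
    assert d <= op * np.linalg.norm(x - y) + 1e-9
    ratios.append(d / gap)
print(min(ratios))   # strictly positive: no collision observed up to sign
```

Phase retrievability of the frame is exactly what upgrades this empirical observation to a uniform lower Lipschitz constant, which is the theorem the abstract describes.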
  • APPROXIMATION ALGORITHMS FOR FACILITY LOCATION AND CLUSTERING PROBLEMS
    (2017) Trinh, Khoa; Srinivasan, Aravind; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Facility Location (FL) problems are among the most fundamental problems in combinatorial optimization. FL problems are also closely related to Clustering problems. Generally, we are given a set of facilities, a set of clients, and a symmetric distance metric on these facilities and clients. The goal is to ``open'' the ``best'' subset of facilities, subject to certain budget constraints, and connect all clients to the opened facilities so that some objective function of the connection costs is minimized. In this dissertation, we consider generalizations of classical FL problems. Since these problems are NP-hard, we aim to find good approximate solutions in polynomial time. We study the classic $k$-median problem, which asks for a subset of at most $k$ facilities such that the sum of connection costs of all clients to their closest facilities is as small as possible. Our main result is a $2.675$-approximation algorithm for this problem. We also consider the Knapsack Median (KM) problem, a generalization of the $k$-median problem. In the KM problem, each facility is assigned an opening cost. A feasible set of opened facilities must have total opening cost at most a given budget. The main technical challenge here is that the natural LP relaxation has unbounded integrality gap. We propose a $17.46$-approximation algorithm for the KM problem. We also show that, after a preprocessing step, the integrality gap of the residual instance is bounded by a constant. The next problem is a generalization of the $k$-center problem, called the Knapsack Center (KC) problem, which has a similar budget constraint to the KM problem. Here we want to minimize the maximum distance from any client to its closest opened facility. The KC problem is well known to be $3$-approximable. However, the current approximation algorithms for KC are deterministic, and it is not hard to construct instances in which almost all clients have the worst-possible connection cost.
Unfairness also arises in this context: certain clients may consistently get connected to distant centers. We design a randomized algorithm which guarantees that the expected connection cost of ``most'' clients will be at most $(1+2/e) \approx 1.74$ times the optimal radius while the worst-case distance remains the same. We also show a similar result for the $k$-center problem: all clients have expected approximation ratio about $1.592$, with a deterministic upper bound of $3$ in the worst case. It is well known that a few \emph{outliers} (very distant clients) may result in a very large optimal radius in center-type problems. One way to deal with this issue is to cover only some $t$ out of $n$ clients in the so-called robust model. In this thesis, we give tight approximation algorithms for both the robust $k$-center and robust matroid center problems. We also introduce a lottery model in which each client $j$ wants to be covered with probability at least $p_j \in [0,1]$. We then give randomized approximation algorithms for center-type problems in this model which match the worst-case bounds of the robust model and only slightly violate the coverage and fairness constraints. Several of our results for FL problems in this thesis rely on novel dependent rounding schemes. We develop these rounding techniques in the general setting and show that they guarantee new correlation properties. Given the wide applicability of standard dependent rounding, we believe that our new techniques are of independent interest.
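The standard dependent rounding primitive underlying such results can be sketched as follows: repeatedly pick two fractional coordinates and randomly shift mass between them so that their sum and each marginal are preserved, until at most one coordinate remains fractional. This is the generic pipage-style scheme, not the dissertation's new schemes with stronger correlation properties.

```python
import random

def dependent_round(y, rng=random.Random(0)):
    """Pipage-style dependent rounding: pick two fractional coordinates
    i, j and shift mass between them, preserving y_i + y_j exactly and
    each marginal E[Y_i] = y_i, until at most one coordinate is fractional.
    Unlike independent rounding, the total sum is (essentially) preserved."""
    y = list(y)
    eps = 1e-12
    frac = lambda v: eps < v < 1 - eps
    while True:
        idx = [i for i, v in enumerate(y) if frac(v)]
        if len(idx) < 2:
            return [round(v) if not frac(v) else v for v in y]
        i, j = idx[0], idx[1]
        a = min(1 - y[i], y[j])    # feasible push: y_i up, y_j down
        b = min(y[i], 1 - y[j])    # feasible push: y_i down, y_j up
        if rng.random() < b / (a + b):   # probabilities chosen so E[Y_i] = y_i
            y[i], y[j] = y[i] + a, y[j] - a
        else:
            y[i], y[j] = y[i] - b, y[j] + b

y = [0.5, 0.25, 0.75, 0.5]        # fractional LP solution with integral sum 2
Y = dependent_round(y)
print(Y, sum(Y))                  # each Y_i in {0,1}; total mass preserved
```

Each iteration makes at least one of the two chosen coordinates integral, so the loop terminates; since the sum is preserved step by step, an integral starting sum forces a fully integral output, exactly the property exploited when rounding LP solutions for facility-location problems.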