Computer Science Theses and Dissertations

Permanent URI for this collection: http://hdl.handle.net/1903/2756

Search Results

Now showing 1 - 10 of 17
  • Item
    MATRIX REDUCTION IN NUMERICAL OPTIMIZATION
    (2011) Park, Sungwoo; O'Leary, Dianne P; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Matrix reduction by eliminating some terms in the expansion of a matrix has been applied to a variety of numerical problems in many different areas. Since matrix reduction serves different purposes in different problems, the reduced matrices also have different meanings. In regression problems in statistics, the reduced parts of the matrix are considered to be noise or observation error, so the given raw data are purified by the matrix reduction. In factor analysis and principal component analysis (PCA), the reduced parts are regarded as idiosyncratic (unsystematic) factors, which are not shared by multiple variables in common. In solving constrained convex optimization problems, the reduced terms correspond to unnecessary (inactive) constraints which do not help in the search for an optimal solution. In using matrix reduction, it is both critical and difficult to determine how, and how much, to reduce the matrix. This decision is important because it determines the quality of the reduced matrix and the final solution. If we reduce too much, fundamental properties are lost; if we reduce too little, we cannot expect much benefit from the reduction. It is also a difficult decision because the criteria for the reduction must be based on the particular type of problem. In this study, we investigate matrix reduction for three numerical optimization problems. First, the total least squares problem uses matrix reduction to remove noise in observed data that follow an underlying linear model. We propose a new method that makes the matrix reduction successful under relaxed noise assumptions. Second, we apply matrix reduction to the problem of estimating a covariance matrix of stock returns, used in financial portfolio optimization. We summarize all the previously proposed estimation methods in a common framework and present a new and effective Tikhonov method.
Third, we present a new algorithm to solve semidefinite programming problems, adaptively reducing inactive constraints. In the constraint reduction, the Schur complement matrix for the Newton equations is the object of the matrix reduction. For all three problems, we propose appropriate criteria to determine the intensity of the matrix reduction. In addition, we verify the correctness of our criteria by experimental results and mathematical proof.
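As a concrete illustration of the kind of reduction involved, a rank-k truncated SVD discards the trailing terms of a matrix expansion; in the regression setting described above, the discarded terms play the role of noise. A minimal sketch (the data, the noise level, and the choice of k here are illustrative, not the dissertation's method):

```python
import numpy as np

def reduce_rank(A, k):
    """Keep the k dominant terms of the SVD expansion of A.

    The discarded terms play the role of noise or observation error;
    choosing k is the critical "how much to reduce" decision.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
signal = np.outer(rng.normal(size=20), rng.normal(size=8))  # rank-1 "model"
noisy = signal + 0.01 * rng.normal(size=signal.shape)       # observation error
cleaned = reduce_rank(noisy, 1)                             # purified data
```

By the Eckart-Young theorem, `cleaned` is the best rank-1 approximation of `noisy` in the Frobenius norm, so its distance to the underlying signal is at most twice the noise norm.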
  • Item
    Compartmental and Simulation Models for Evaluating MedKit Prepositioning Strategies for Anthrax Attack Response
    (2011) Houck, Michelle Lee; Herrmann, Jeffrey W; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Since the 2001 anthrax attacks, public health officials have become concerned with planning for a potential large-scale attack. Researchers have worked to model attack scenarios in order to evaluate various response policies. One proposed response policy is to preemptively distribute antibiotics to the public in the form of MedKits before an attack occurs. Despite numerous models and studies, no model has examined the effect of distributing MedKits on the expected number of deaths in an attack. We develop a discrete-time compartmental difference equation model to analyze the policy. The results show that distributing any number of MedKits reduces the expected number of deaths in all scenarios tested. We analyze under what circumstances the MedKits have the largest life-saving impact. We also develop a stochastic transition model to demonstrate the accuracy of the MedKits model results.
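The compartmental difference-equation idea can be sketched in a few lines. The compartments, rates, and the way MedKit holders bypass the mass-dispensing delay below are all illustrative assumptions, not the model's actual structure or parameters:

```python
def simulate(days, exposed, medkit_frac, symptom_rate=0.2,
             death_rate=0.3, dispense_rate=0.25):
    """Expected deaths in a toy discrete-time compartmental model.

    MedKit holders are assumed to start antibiotics immediately and be
    protected; everyone else waits for mass dispensing (all rates are
    hypothetical per-day fractions).
    """
    unprotected = exposed * (1.0 - medkit_frac)  # must wait for dispensing
    sick = 0.0      # symptomatic and (in this toy) untreated
    deaths = 0.0
    for _ in range(days):
        newly_sick = symptom_rate * unprotected
        unprotected -= newly_sick
        sick += newly_sick
        deaths += death_rate * sick
        sick *= 1.0 - death_rate
        unprotected *= 1.0 - dispense_rate  # mass dispensing treats the rest
    return deaths

no_kits = simulate(60, 10000, medkit_frac=0.0)
half_kits = simulate(60, 10000, medkit_frac=0.5)
```

Even this crude sketch reproduces the qualitative finding quoted above: any positive MedKit fraction lowers expected deaths, since it shrinks the pool exposed to the dispensing delay.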
  • Item
    DATA ASSIMILATION OF THE GLOBAL OCEAN USING THE 4D LOCAL ENSEMBLE TRANSFORM KALMAN FILTER (4D-LETKF) AND THE MODULAR OCEAN MODEL (MOM2)
    (2011) Penny, Stephen G.; Kalnay, Eugenia; Carton, Jim; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The 4D Local Ensemble Transform Kalman Filter (4D-LETKF), originally designed for atmospheric applications, has been adapted and applied to the Geophysical Fluid Dynamics Laboratory's (GFDL) Modular Ocean Model (MOM2). This new ocean assimilation system provides an estimation of the evolving errors in the global oceanic domain for all state variables. Multiple configurations of LETKF have been designed to manage observation coverage that is sparse relative to the model resolution. An Optimal Interpolation (OI) method, implemented through the Simple Ocean Data Assimilation (SODA) system, has also been applied to MOM2 for use as a benchmark. Retrospective 7-year analyses using the two systems are compared for validation. The oceanic 4D-LETKF assimilation system is demonstrated to be an effective method for data assimilation of the global ocean as determined by comparisons of global and regional "observation minus forecast" RMS, as well as comparisons with temperature/salinity relationships and independent observations of altimetry and velocity.
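For context, the ensemble transform at the core of an LETKF-type filter can be sketched as a single global analysis step. The real 4D-LETKF applies this transform in local regions and over a time window; the ensemble, observation operator, and error covariance below are toy assumptions:

```python
import numpy as np

def etkf_update(Xb, y, H, R):
    """One global ensemble transform Kalman filter analysis step.

    Xb : (n, m) background ensemble, y : (p,) observations,
    H : (p, n) linear observation operator, R : (p, p) obs-error covariance.
    """
    n, m = Xb.shape
    xb = Xb.mean(axis=1)
    Xp = Xb - xb[:, None]                 # background perturbations
    Yp = H @ Xp                           # perturbations in observation space
    C = Yp.T @ np.linalg.inv(R)
    Pa = np.linalg.inv((m - 1) * np.eye(m) + C @ Yp)
    w = Pa @ C @ (y - H @ xb)             # mean-update weights
    vals, vecs = np.linalg.eigh((m - 1) * Pa)
    W = vecs @ np.diag(np.sqrt(vals)) @ vecs.T   # symmetric square root
    return xb[:, None] + Xp @ (w[:, None] + W)   # analysis ensemble

rng = np.random.default_rng(1)
Xb = rng.normal(size=(3, 5))               # 3 state variables, 5 members
H = np.array([[1.0, 0, 0], [0, 1.0, 0]])   # observe first two variables
y = np.array([0.5, -0.2])
R = 0.1 * np.eye(2)
Xa = etkf_update(Xb, y, H, R)
```

The analysis mean moves toward the observations by construction; localization (the "L" in LETKF) repeats this small-matrix computation independently around each grid point, which is what makes the method tractable for a global ocean model.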
  • Item
    Prioritizing Patients: Stochastic Dynamic Programming for Surgery Scheduling and Mass Casualty Incident Triage
    (2011) Herring, William L.; Herrmann, Jeffrey W; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The research presented in this dissertation contributes to the growing literature on applications of operations research models to problems in healthcare through the development and analysis of mathematical models for two fundamental problems facing nearly all hospitals: the single-day surgery scheduling problem and planning for triage in the event of a mass casualty incident. Both of these problems can be understood as sequential decision-making processes aimed at prioritizing between different classes of patients under significant uncertainty and are modeled using stochastic dynamic programming. Our study of the single-day surgery scheduling problem represents the first model to capture the sequential nature of the operating room (OR) manager's decisions during the transition between the generality of cyclical block schedules (which allocate OR time to surgical specialties) and the specificity of schedules for a particular day (which assign individual patients to specific ORs). A case study of the scheduling system at the University of Maryland Medical Center highlights the importance of the decision to release unused blocks of OR time and use them to schedule cases from the surgical request queue (RQ). Our results indicate that high quality block release and RQ decisions can be made using threshold-based policies that preserve a specific amount of OR time for late-arriving demand from the specialties on the block schedule. The development of mass casualty incident (MCI) response plans has become a priority for hospitals, and especially emergency departments and trauma centers, in recent years. Central to all MCI response plans is the triage process, which sorts casualties into different categories in order to facilitate the identification and prioritization of those who should receive immediate treatment. 
Our research relates MCI triage to the problem of scheduling impatient jobs in a clearing system and extends earlier research by incorporating the important trauma principle that patients' long-term (post-treatment) survival probabilities deteriorate the longer they wait for treatment. Our results indicate that the consideration of deteriorating survival probabilities during MCI triage decisions, in addition to previously studied patient characteristics and overall patient volume, increases the total number of expected survivors.
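The sequential prioritization idea can be illustrated with a toy dynamic-programming recursion for triage under deteriorating survival. The two patient classes, baseline survival probabilities, and per-period decay rates below are hypothetical, not the dissertation's model:

```python
from functools import lru_cache

# Hypothetical classes: survival probability if treated immediately, and a
# multiplicative per-period decay while waiting (the deterioration principle).
P0 = {"immediate": 0.8, "delayed": 0.9}
DECAY = {"immediate": 0.7, "delayed": 0.95}

@lru_cache(maxsize=None)
def best(n_imm, n_del, t=0):
    """Maximum expected survivors with one treatment slot per period."""
    choices = []
    if n_imm:
        choices.append(P0["immediate"] * DECAY["immediate"] ** t
                       + best(n_imm - 1, n_del, t + 1))
    if n_del:
        choices.append(P0["delayed"] * DECAY["delayed"] ** t
                       + best(n_imm, n_del - 1, t + 1))
    return max(choices) if choices else 0.0
```

With these numbers the policy question is nontrivial: the "immediate" class has a lower baseline survival probability but deteriorates much faster, so treating it first maximizes expected survivors.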
  • Item
    Estimation of Expected Returns, Time Consistency of A Stock Return Model, and Their Application to Portfolio Selection
    (2010) Ma, Huaqiang; Madan, Dilip B; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Longer-horizon returns are modeled by two approaches, which have different impacts on skewness and excess kurtosis. The Levy approach, which treats the random variable at a longer horizon as the sum of i.i.d. random variables from shorter horizons, tends to decrease skewness and excess kurtosis at a faster rate along the time horizon than the real data imply. On the other hand, the scaling approach keeps skewness and excess kurtosis constant along the time horizon. A combination of these two approaches may perform better than either one alone. This empirical work employs the mixed approach to study returns at five time scales, from one hour to two weeks. At all time scales, the mixed model outperforms the other two in terms of the KS test and numerous statistical distances. Traditionally, the expected return is estimated from historical data through the classic asset pricing models and their variations. However, because realized returns are so volatile, attaining relatively accurate estimates requires decades or even longer periods of data. Furthermore, it is questionable to extrapolate the expected return from historical data because the return is determined by future uncertainty. Therefore, instead of using historical data, the expected return should be estimated from data representing future uncertainty, such as the option prices used in our method. A numeraire portfolio links the option prices to the expected return through its striking feature: any contingent claim's price, if denominated by this portfolio, is the conditional expectation of its denominated future payoffs under the physical measure. It therefore contains information about the expected return. In this study, the expected returns are estimated from the option calibration through the numeraire portfolio pricing method.
The results are compared to the realized returns through a linear regression model, which shows that the difference between the two returns is insensitive to the major risk factors. This demonstrates that the numeraire portfolio pricing method provides a good estimator of the expected return. Modern portfolio theory is well developed; however, various aspects of its implementation are open to question: for example, the expected return is not properly estimated using historical data, and the return distribution is assumed to be Gaussian, which does not reflect the empirical facts. The results from the first two studies can be applied to this problem. The portfolio constructed using this estimated expected return is superior to reference portfolios whose expected returns are estimated from historical data. Furthermore, this portfolio also outperforms the market index, SPX.
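The contrast between the two approaches follows from cumulant additivity: for a sum of n i.i.d. returns, skewness shrinks like n^(-1/2) and excess kurtosis like 1/n, whereas the scaling approach holds both constant. A quick check (the hourly moments and the hours-per-week count are illustrative numbers, not fitted values):

```python
def aggregate(skew, exkurt, n):
    """Skewness and excess kurtosis of a sum of n i.i.d. returns.

    Cumulants add across i.i.d. terms, so the third standardized moment
    scales as n**-0.5 and the fourth (excess) as 1/n.
    """
    return skew / n ** 0.5, exkurt / n

hourly_skew, hourly_exkurt = -0.5, 6.0          # illustrative hourly moments
weekly = aggregate(hourly_skew, hourly_exkurt, n=40)  # ~40 trading hours/week
```

This is the "too fast" decay the abstract refers to: empirical weekly returns typically keep more skewness and kurtosis than the pure i.i.d.-sum calculation predicts, which motivates mixing in the scaling approach.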
  • Item
    Mathematical modeling of drug resistance and cancer stem cells dynamics
    (2010) Tomasetti, Cristian; Levy, Doron; Dolgopyat, Dmitry; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In this dissertation we consider the dynamics of drug resistance in cancer and the related issue of the dynamics of cancer stem cells. Our focus is restricted to resistance caused by random genetic point mutations. A very simple system of ordinary differential equations allows us to obtain results comparable to those found in the literature, with one important difference: we show that the amount of resistance generated before the beginning of treatment, and present at some given time afterward, always depends on the turnover rate, no matter how many drugs are used. Previous work in the literature indicated no dependence on the turnover rate in the single-drug case but a strong dependence in the multi-drug case. We develop a new methodology to derive estimates of the probability of developing resistance to drugs by the time a tumor is diagnosed, and of the expected number of drug-resistant cells found at detection if resistance is present at detection. Our modeling methodology may be seen as more general than previous approaches in the sense that, at least for the wild-type population, we make assumptions only about its averaged behavior (no Markov property, for example). Importantly, the heterogeneity of the cancer population is taken into account. Moreover, in the case of chronic myeloid leukemia (CML), which is a cancer of the white blood cells, we are able to infer the preferred mode of division of the hematopoietic cancer stem cells, predicting a large shift from asymmetric division to symmetric renewal. We extend our results by relaxing the assumption on the average growth of the tumor, thus going beyond the standard exponential case, and show that our results may also be a good approximation for much more general tumor growth models.
Finally, after reviewing the basic modeling assumptions and main results found in the mathematical modeling literature on chronic myeloid leukemia (CML), we formulate a new hypothesis on the effects that the drug Imatinib has on leukemic stem cells. Based on this hypothesis, we obtain new insights on the dynamics of the development of drug resistance in CML.
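The abstract does not spell out its "very simple system of ordinary differential equations"; a generic two-compartment sensitive/resistant sketch with a point-mutation term, under assumed rates and simple Euler integration, looks like this:

```python
def grow(sens, res, birth, death, mut, dt, steps):
    """Euler integration of a generic sensitive/resistant cell model.

    Sensitive cells acquire resistance at rate `mut` per division; all
    rates are illustrative, not the dissertation's parameters. The
    difference (birth - death) is the net growth rate, and `death`/`birth`
    together set the turnover the abstract highlights.
    """
    for _ in range(steps):
        new_sens = sens + dt * ((birth - death) * sens - mut * birth * sens)
        new_res = res + dt * ((birth - death) * res + mut * birth * sens)
        sens, res = new_sens, new_res
    return sens, res

s, r = grow(sens=1e6, res=0.0, birth=1.0, death=0.5,
            mut=1e-7, dt=0.01, steps=1000)
```

The turnover dependence the dissertation establishes enters through `birth` and `death` separately, not just through their difference: a tumor with high turnover performs more divisions (and hence more mutation events) per unit of net growth.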
  • Item
    Detecting and Correcting Errors in Genome Assemblies
    (2010) Subramanian, Poorani; Yorke, James A; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Genome assemblies have various types of deficiencies or misassemblies. This work is aimed at detecting and correcting a type of misassembly that we call a Compression/Expansion (CE) misassembly, whereby a section of sequence has been erroneously omitted from or inserted into the assembly. Other types of deficiencies include gaps in the genome sequence. We developed a statistic for identifying Compression/Expansion misassemblies, called the CE statistic, based on examining the placement of mate pairs of reads in the assembly. In addition, we developed an algorithm aimed at closing gaps and validating and/or correcting CE misassemblies detected by the CE statistic. This algorithm is similar to a shooting algorithm used in solving two-point boundary value problems in partial differential equations, so we call it the Shooting Method. The Shooting Method finds all possible ways to assemble a local region of the genome contained between two target reads. We use a combination of the CE statistic and the Shooting Method to detect and correct some CE misassemblies and close gaps in genome assemblies. We tested our techniques on both faux and real data. Applying them to 22 bacterial draft assemblies for which the finished genome sequence is known, we were able to identify 5 out of 8 real CE misassemblies. We applied the Shooting Method to a de novo assembly of the Bos taurus genome made from Sanger data, closing 9,863 gaps out of 58,386. This added 8.34 Mbp of sequence to the assembly and resulted in a 7% increase in N50 contig size.
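A CE-style statistic can be sketched as a z-score on the mate-pair insert sizes spanning a position. The sign convention and the library parameters below are illustrative assumptions, not the exact definition used in the dissertation:

```python
def ce_statistic(spanning_inserts, lib_mean, lib_std):
    """Z-score of mate-pair insert sizes spanning an assembly position.

    If mates map closer together than the library mean (strongly negative
    z), sequence may have been omitted (compression); if farther apart
    (strongly positive z), sequence may have been inserted (expansion).
    """
    n = len(spanning_inserts)
    local_mean = sum(spanning_inserts) / n
    return (local_mean - lib_mean) / (lib_std / n ** 0.5)

# Hypothetical 3 kbp library: the four spanning pairs all map "too close".
z = ce_statistic([2800, 2900, 2850, 2750], lib_mean=3000, lib_std=300)
```

In practice such a statistic is computed at every position from all mate pairs spanning it, and positions beyond a threshold are flagged for validation by the Shooting Method.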
  • Item
    COMPUTATIONALLY TRACTABLE STOCHASTIC INTEGER PROGRAMMING MODELS FOR AIR TRAFFIC FLOW MANAGEMENT
    (2010) Glover, Charles N.; Ball, Michael O; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    A primary objective of Air Traffic Flow Management (ATFM) is to ensure the orderly flow of aircraft through airspace, while minimizing the impact of delays and congestion on airspace users. A fundamental challenge of ATFM is the vulnerability of the airspace to changes in weather, which can lower the capacities of different regions of airspace. Considering this uncertainty along with the size of the airspace system, we arrive at a very complex problem. The development of efficient algorithms to solve ATFM problems is an important and active area of research. Responding to predictions of bad weather requires the solution of resource allocation problems that assign a combination of ground delay and route adjustments to many flights. Since there is much uncertainty associated with weather predictions, stochastic models are necessary. We address some of these problems using integer programming (IP). In general, IP models can be difficult to solve; however, if "strong" IP formulations can be found, then problems can be solved quickly by state-of-the-art IP solvers. We start by describing a multi-period stochastic integer program for the single-airport stochastic dynamic ground holding problem. We then show that its linear programming relaxation yields integer optimal solutions. This is a fairly unusual property for IP formulations, and it can significantly reduce the complexity of the corresponding problems. The proof is achieved by defining a new class of matrices with the Monge property and showing that the formulation presented belongs to this class. To further improve computation times, we develop alternative compact formulations. These formulations are extended to show that they can also be used to model different concepts of equity and fairness as well as efficiency. We explore simple rationing methods and other heuristics for these problems, both to provide fast solution times and because these methods can embody inherent notions of fairness.
The initial models address problems that seek to restrict flow into a single airport. These are extended to problems where stochastic weather affects en route traffic. Strong formulations and efficient solutions are obtained for these problems as well.
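The classical (submodular) Monge condition on a matrix C requires C[i][j] + C[i+1][j+1] <= C[i][j+1] + C[i+1][j] for every adjacent 2x2 submatrix; the dissertation defines a new class of matrices with this property, so the checker below covers only the textbook condition:

```python
def is_monge(C):
    """Check the classical Monge condition on a rectangular matrix C.

    Monge structure is what typically lets greedy/LP approaches recover
    integer optima, as in the integrality result described above.
    """
    rows, cols = len(C), len(C[0])
    return all(C[i][j] + C[i + 1][j + 1] <= C[i][j + 1] + C[i + 1][j]
               for i in range(rows - 1) for j in range(cols - 1))

# Squared-difference costs are a standard Monge family.
monge_example = [[(i - j) ** 2 for j in range(4)] for i in range(4)]
```

For example, `[[2, 0], [0, 2]]` violates the inequality at its only 2x2 window (2 + 2 > 0 + 0), so it is not Monge.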
  • Item
    Integer Programming-Based Heuristics for Vehicle Routing Problems
    (2010) Gulczynski, Damon John; Golden, Bruce; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The vehicle routing problem (VRP) has been an active field of study by operations researchers for over 50 years. Many practical applications have been presented in the literature, and many solution techniques have been developed. We discuss, develop, and computationally test integer programming-based heuristics for several variants of the standard VRP. We use integer programming to model the split delivery VRP with minimum delivery amounts, the multi-depot split delivery VRP, the period VRP, the standard VRP, and the multi-depot VRP. We apply our heuristics to benchmark problems from the literature and generate many new problems with high-quality, visually estimated solutions. Our heuristics produce high-quality solutions in a reasonable amount of computer time. Overall, our new IP-based heuristics are very competitive with the best methods found in the VRP literature to date.
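For context, the kind of construction heuristic that improvement methods (IP-based or otherwise) typically start from is a capacity-respecting nearest-neighbor sweep. This is a classical baseline, not the authors' algorithm, and the instance below is made up:

```python
import math

def nearest_neighbor_routes(depot, customers, demands, capacity):
    """Greedy VRP construction: from the current stop, always visit the
    nearest unrouted customer that still fits in the vehicle.

    Assumes every single demand fits within `capacity`; customers are
    2-D coordinate tuples and demands is keyed by customer.
    """
    unrouted = set(customers)
    routes = []
    while unrouted:
        route, load, pos = [], 0, depot
        while True:
            feasible = [c for c in unrouted if load + demands[c] <= capacity]
            if not feasible:
                break  # vehicle full (or nothing left): start a new route
            nxt = min(feasible, key=lambda c: math.dist(pos, c))
            route.append(nxt)
            load += demands[nxt]
            unrouted.discard(nxt)
            pos = nxt
        routes.append(route)
    return routes

depot = (0.0, 0.0)
customers = [(0, 1), (0, 2), (5, 0), (6, 0)]
demands = {c: 1 for c in customers}
routes = nearest_neighbor_routes(depot, customers, demands, capacity=2)
```

An IP-based heuristic of the kind tested in the dissertation would then reassign customers among such routes (or to depots, periods, or split deliveries) by solving small integer programs, rather than relying on greedy moves alone.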
  • Item
    MOLECULAR DYNAMICS SIMULATION OF DICARBOXYLIC ACID COATED AQUEOUS AEROSOL: STRUCTURE AND PROCESSING OF WATER VAPOR
    (2010) Ma, Xiaofei; Zachariah, Michael R; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Low molecular weight dicarboxylic acids constitute a significant fraction of water-soluble organic aerosols in the atmosphere. They potentially contribute to the formation of cloud condensation nuclei (CCN) and are involved in a series of chemical reactions occurring in the atmosphere. In this work, molecular dynamics simulations were used to probe the structure and the interfacial properties of dicarboxylic acid coated aqueous aerosols. Low molecular weight dicarboxylic acids of various chain lengths and water solubilities were chosen to coat a water droplet consisting of 2440 water molecules. For the malonic acid coated aerosol, the surface acid molecules dissolved into the water core and formed an ordered structure due to the hydrophobic interactions. For the other nanoaerosols, coated with low-solubility acids, phase separation between the water and acid molecules was observed. To study the water processing of the coated aerosols, the water vapor accommodation factors were calculated.
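Molecular dynamics simulations of this kind rest on pairwise interaction potentials; the 12-6 Lennard-Jones form is the textbook example (written here in generic reduced units, not the force field actually used in the study):

```python
def lj_energy(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair energy: 4*eps*((sigma/r)**12 - (sigma/r)**6).

    epsilon sets the well depth, sigma the distance at which the energy
    crosses zero; the minimum sits at r = 2**(1/6) * sigma with energy
    -epsilon.
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)
```

In a full simulation, forces derived from sums of such pair terms (plus electrostatics) drive the time integration that produces the droplet structures and the vapor-uptake statistics described above.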