Theses and Dissertations from UMD

Permanent URI for this community: http://hdl.handle.net/1903/2

New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date, so there may be up to a four-month delay before a given thesis/dissertation appears in DRUM.

More information is available at Theses and Dissertations at University of Maryland Libraries.

Browse

Search Results

Now showing 1 - 4 of 4
  • Item
    Formalized Application of Systems Engineering Processes to the Development of the Purple Line SharePoint Test Tracking Tool
    (2020) Mehta, Hanish Gaurang; Baras, John; Systems Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Testing & Commissioning (T&C) for a $2 billion project generally involves more than ten thousand tests, and the Purple Line Light Rail system being constructed in Maryland is no exception. The Purple Line is expected to require at least twenty thousand tests during its T&C phase over the next 3-4 years. Given the number of tests, their prerequisites, the resources involved (manpower, equipment, facilities), and the test reporting procedures required to comply with Maryland Transit Administration (MTA) requirements, the Purple Line Transit Constructors (PLTC) identified the need for an online system to log and track tests. This thesis focuses on the formalized application of systems engineering processes, in accordance with ISO/IEC/IEEE 15288:2015, to the development of this test tracking tool. The stakeholder requirements given by MTA and PLTC are converted into system requirements, and a test plan for the tool is developed in parallel. The tool is designed by PLTC in collaboration with a subcontractor to meet the system requirements and will be tested before going live. (An illustrative data-model sketch appears after this list.)
  • Item
    THE SYNDEMIC EFFECT OF PSYCHOSOCIAL AND STRUCTURAL FACTORS ON HIV TESTING AMONG BLACK MEN AND THE MODERATING EFFECT OF SEXUAL IDENTITY
    (2018) Turpin, Rodman Emory; Dyer, Typhanye; Epidemiology and Biostatistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Black populations experience the highest incidence and prevalence of HIV in the United States. It has been posited that numerous structural and psychosocial factors contribute to HIV disparities among Black populations; these factors can also adversely affect healthcare utilization, including HIV testing. Given the burden of HIV among Black men, especially Black gay and bisexual men, it is important to consider possible barriers to HIV testing in this population. Syndemic theory posits that social and structural conditions mutually reinforce one another and cumulatively affect disease outcomes. While syndemic theory has been applied to HIV acquisition, this framework has not been utilized for HIV testing. We tested for a syndemic of depression, poverty, and lack of healthcare access affecting HIV testing, and tested sexual identity as a moderator of healthcare access, in a nationally representative sample of Black men. Participants with 2 or 3 syndemic factors were significantly more likely to have never been HIV tested than those with 0 or 1 (49.2% vs. 31.7%). Having 3 syndemic factors was associated with a greater prevalence of never having been HIV tested (aPR=1.46, 95% CI 1.09, 1.95). Gay/bisexual identity moderated the association between health insurance and ever having been HIV tested in adjusted models (aPR=4.36; 95% CI 1.40, 13.62), with a lack of health insurance being associated with HIV testing among gay/bisexual participants only (aPR=4.84, 95% CI 1.19, 19.70). Using latent class analysis, four syndemic classes were identified as significant predictors of having never been HIV tested. In adjusted log-binomial models, compared to the class with the lowest proportion of syndemic factors, the highest prevalence of never having been HIV tested was found in the class with the highest proportions of syndemic component factors (aPR=2.27, 95% CI 1.83, 2.82). Overall, there is evidence of a syndemic of depression, poverty, and lack of healthcare access that negatively affects HIV testing among Black men, with lack of healthcare access being a significantly greater barrier to HIV testing among gay/bisexual men than among heterosexual men. (A minimal log-binomial modeling sketch appears after this list.)
  • Item
    Nonparametric Estimation and Testing of Interaction in Generalized Additive Models
    (2011) Li, Bo; Smith, Paul J; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The additive model overcomes the "curse of dimensionality" in general nonparametric regression problems, in the sense that it achieves the optimal rate of convergence for a one-dimensional smoother. Meanwhile, compared to the classical linear regression model, it is more flexible in allowing an arbitrary smooth functional relationship between each individual regressor and the conditional mean of the response variable Y given X. However, if the true model is not additive, the estimates may be seriously biased by assuming the additive structure. In this dissertation, generalized additive models (with a known link function) containing second-order interaction terms are considered. We present an extension of the existing marginal integration estimation approach for additive models with the identity link. The corresponding asymptotic normality of the estimators is derived for the univariate component functions and interaction functions. A test statistic for testing the significance of the interaction terms is developed. We obtain the asymptotics for the test functional and local power results. Monte Carlo simulations are conducted to examine the finite-sample performance of the estimation and testing procedures. We code our own local polynomial pre-smoother with fixed bandwidths and apply it in the integration method. The widely used LOESS function with fixed spans is also used as a pre-smoother. Both methods provide comparable results in estimation and are shown to work well with properly chosen smoothing parameters. With small and moderate sample sizes, implementing the test procedure based on the asymptotics may produce inaccurate results. Hence a wild bootstrap procedure is provided to obtain empirical critical values for the test. The test procedure performs well in fitting the correct quantiles under the null hypothesis and shows strong power against the alternative. (A wild-bootstrap sketch appears after this list.)
  • Item
    Large Scale Distributed Testing for Fault Classification and Isolation
    (2010) Fouche, Sandro Maleewatana; Porter, Adam A; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Developing confidence in the quality of software is an increasingly difficult problem. As the complexity and integration of software systems increase, the tools and techniques used to perform quality assurance (QA) tasks must evolve with them. To date, several quality assurance tools have been developed to help ensure quality in modern software, but several limitations remain to be overcome. Among the challenges faced by current QA tools are (1) the increased use of distributed software solutions, (2) limited test resources and constrained time schedules, and (3) failures that are difficult to replicate and may occur only rarely. While existing distributed continuous quality assurance (DCQA) tools and techniques, including our own Skoll project, begin to address these issues, new and novel approaches are needed to meet these challenges. This dissertation explores three strategies to do this. First, I present an improved version of our Skoll distributed quality assurance system. Skoll provides a platform for executing sophisticated, long-running QA processes across a large number of distributed, heterogeneous computing nodes. This dissertation details changes to Skoll resulting in a more robust, configurable, and user-friendly implementation for both the client and server components. Additionally, this dissertation details infrastructure development done to support the evaluation of DCQA processes using Skoll -- specifically, the design and deployment of a dedicated 120-node computing cluster for evaluating DCQA practices. The techniques and case studies presented in the latter parts of this work leveraged the improvements to Skoll as their testbed. Second, I present techniques for automatically classifying test execution outcomes based on an adaptive-sampling classification technique, along with a case study on the Java Architecture for Bytecode Analysis (JABA) system. One common need for these techniques is the ability to distinguish test execution outcomes (e.g., to collect only data corresponding to some behavior or to determine how often and under which conditions a specific behavior occurs). Most current approaches, however, do not perform any kind of classification of remote executions and either focus on easily observable behaviors (e.g., crashes) or assume that outcomes' classifications are externally provided (e.g., by the users). In this work, I present an empirical study on JABA in which we automatically classified execution data into passing and failing behaviors using adaptive association trees. Finally, I present a long-term case study of the highly configurable MySQL open-source project. Real-world software systems can have configuration spaces that are too large to test exhaustively, but that nonetheless contain subtle interactions that lead to failure-inducing system faults. In the literature, covering arrays, in combination with classification techniques, have been used to effectively sample these large configuration spaces and to detect problematic configuration dependencies. Applying this approach in practice, however, is tricky because testing time and resource availability are unpredictable. Therefore, we developed and evaluated an alternative approach that incrementally builds covering array schedules. This approach begins at a low strength and then iteratively increases strength as resources allow, reusing previous test results to avoid duplicated effort. The results are test schedules that allow for successful classification with fewer test executions and that require less test-subject-specific information to develop. (A sketch of the incremental-strength idea appears after this list.)
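
For the Purple Line test-tracking thesis above: the abstract does not describe the tool's internals, so the following is only a rough sketch of how a test record with prerequisites, resources, and a reporting flag might be modeled. Every name here is hypothetical; nothing below comes from the actual PLTC/MTA tool.

from dataclasses import dataclass, field
from enum import Enum
from typing import List, Set

class TestStatus(Enum):
    PLANNED = "planned"
    BLOCKED = "blocked"          # prerequisites not yet satisfied
    IN_PROGRESS = "in progress"
    PASSED = "passed"
    FAILED = "failed"

@dataclass
class TestRecord:
    test_id: str                                                   # e.g. a hypothetical "TC-00123"
    title: str
    prerequisite_ids: List[str] = field(default_factory=list)      # tests that must pass first
    required_resources: List[str] = field(default_factory=list)    # manpower, equipment, facilities
    status: TestStatus = TestStatus.PLANNED
    report_submitted: bool = False                                  # reporting step for the transit authority

    def is_ready(self, passed_tests: Set[str]) -> bool:
        """A test can be scheduled once every prerequisite test has passed."""
        return all(p in passed_tests for p in self.prerequisite_ids)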
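
For the HIV-testing syndemic dissertation above: the sketch below shows how an adjusted prevalence ratio (aPR) can be estimated with a log-binomial model, the model family named in the abstract. The data are simulated purely for illustration and the variable names are assumptions; this is not the dissertation's analysis. In practice, log-binomial models can fail to converge, in which case a Poisson model with robust standard errors is a common fallback.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    # count of syndemic factors present (e.g. depression, poverty, no healthcare access)
    "syndemic_count": rng.integers(0, 4, size=n),
    "age": rng.integers(18, 60, size=n),
})
# Simulated outcome: prevalence of "never HIV tested" rises with the syndemic count.
p = np.clip(0.15 * 1.3 ** df["syndemic_count"], 0, 0.95)
df["never_tested"] = rng.binomial(1, p)

# Log-binomial GLM: binomial family with a log link, so exp(coefficient) is a prevalence ratio.
model = smf.glm(
    "never_tested ~ C(syndemic_count) + age",
    data=df,
    family=sm.families.Binomial(link=sm.families.links.Log()),
).fit()

pr = np.exp(model.params)        # prevalence ratios
ci = np.exp(model.conf_int())    # 95% confidence intervals
print(pd.concat([pr.rename("PR"), ci], axis=1))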
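
For the interaction-testing dissertation above: the sketch below illustrates the wild-bootstrap calibration step the abstract mentions. To keep the example short and runnable, the additive (null) and interaction fits are ordinary least-squares surrogates; in the dissertation these would be nonparametric marginal-integration estimates, and the test functional would be the one derived there.

import numpy as np

rng = np.random.default_rng(1)
n = 400
x1, x2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
y = np.sin(x1) + x2**2 + 0.5 * x1 * x2 + rng.normal(0, 0.3, n)   # true model has an interaction

def fit_additive(y, x1, x2):
    X = np.column_stack([np.ones(n), x1, x2, x1**2, x2**2])       # additive surrogate fit
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

def fit_interaction(y, x1, x2):
    X = np.column_stack([np.ones(n), x1, x2, x1**2, x2**2, x1 * x2])   # adds interaction term
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

def stat(y, x1, x2):
    """Test functional: drop in residual sum of squares when the interaction is added."""
    return np.sum((y - fit_additive(y, x1, x2)) ** 2) - np.sum((y - fit_interaction(y, x1, x2)) ** 2)

t_obs = stat(y, x1, x2)
m0 = fit_additive(y, x1, x2)                 # null (additive) fit
resid = y - m0
boot = []
for _ in range(500):
    v = rng.choice([-1.0, 1.0], size=n)      # Rademacher wild-bootstrap weights
    y_star = m0 + v * resid                  # resample under the additive null
    boot.append(stat(y_star, x1, x2))
crit = np.quantile(boot, 0.95)               # empirical 5% critical value
print(f"observed statistic {t_obs:.3f}, bootstrap critical value {crit:.3f}")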
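
For the distributed-testing dissertation above: the sketch below is a toy greedy heuristic that conveys the "incremental strength" idea described in the abstract. It covers all 2-way option combinations first, then raises the strength to 3-way while counting combinations already covered by earlier runs. It is not the dissertation's actual scheduling algorithm, and the option names are made up.

from itertools import combinations, product

options = {"opt_a": [0, 1], "opt_b": [0, 1, 2], "opt_c": [0, 1], "opt_d": [0, 1]}
names = sorted(options)
all_configs = [dict(zip(names, vals)) for vals in product(*(options[n] for n in names))]

def tuples_of(config, t):
    """All t-way (option, value) combinations exercised by one configuration."""
    return {tuple(sorted((n, config[n]) for n in group))
            for group in combinations(names, t)}

def extend_schedule(already_run, t):
    """Greedily add configurations until every t-way combination is covered."""
    needed = set()
    for cfg in all_configs:
        needed |= tuples_of(cfg, t)
    covered = set()
    for cfg in already_run:                       # reuse results from earlier, lower-strength runs
        covered |= tuples_of(cfg, t)
    schedule = list(already_run)
    while not needed <= covered:
        best = max(all_configs, key=lambda c: len(tuples_of(c, t) - covered))
        schedule.append(best)
        covered |= tuples_of(best, t)
    return schedule

pairwise = extend_schedule([], t=2)               # low-strength schedule first
threeway = extend_schedule(pairwise, t=3)         # raise strength as resources allow
print(len(all_configs), "exhaustive configs vs", len(pairwise), "for pairwise vs", len(threeway), "for 3-way")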