Theses and Dissertations from UMD
Permanent URI for this community: http://hdl.handle.net/1903/2
New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be a delay of up to four months before a given thesis/dissertation appears in DRUM.
More information is available at Theses and Dissertations at University of Maryland Libraries.
33 results
Search Results
Item INVESTIGATING MODEL SELECTION AND PARAMETER RECOVERY OF THE LATENT VARIABLE AUTOREGRESSIVE LATENT TRAJECTORY (LV-ALT) MODEL FOR REPEATED MEASURES DATA: A MONTE CARLO SIMULATION STUDY (2023) Houser, Ari; Harring, Jeffrey R; Human Development; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Over the past several decades, a number of highly generalized models have been developed that can reduce, through parameter constraints, to a variety of classical models. One such framework, the Autoregressive Latent Trajectory (ALT) model, is a combination of two classical approaches to longitudinal modeling: the autoregressive or simplex family, in which trait scores at one occasion are regressed on scores at a previous occasion, and latent trajectory or growth curve models, in which individual trajectories are specified by a set of latent factors (typically a slope and an intercept) whose values vary across the population. The Latent Variable-Autoregressive Latent Trajectory (LV-ALT) model has recently been proposed as an extension of the ALT model in which the traits of interest are latent constructs measured by one or more indicator variables. The LV-ALT is presented as a framework by which one may compare the fit of a chosen model to alternative possibilities, or which one may use to empirically guide the selection of a model in the absence of theory, prior research, or standard practice. To date, however, there has not been any robust analysis of the efficacy or usefulness of the LV-ALT model for this purpose. This study uses Monte Carlo simulation to evaluate the efficacy of the basic formulation of the LV-ALT model (univariate latent growth process, single indicator variable) in identifying the true model, model family, and key characteristics of the model under manipulated conditions of true model parameters, sample size, measurement reliability, and missing data. The performance of the LV-ALT model for model selection is mixed. Under most manipulated conditions, the best-fitting of nine candidate models was different from the generating model, and the cost of model misspecification for parameter recovery included significant increases in bias and loss of precision in parameter estimation. As a general rule, the LV-ALT should not be relied upon to empirically select a specific model, or to choose between several theoretically plausible models in the autoregressive or latent growth families. Larger sample size, greater measurement reliability, larger parameter magnitude, and a constant autoregressive parameter are associated with a greater likelihood of correct model selection.

Item ACCOUNTING FOR STUDENT MOBILITY IN SCHOOL RANKINGS: A COMPARISON OF ESTIMATES FROM VALUE-ADDED AND MULTIPLE MEMBERSHIP MODELS (2023) Cassiday, Kristina; Stapleton, Laura M; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Student mobility is a reality, but it is not always taken into account in the value-added modeling approaches used to determine school accountability rankings. Multiple membership modeling can account for student mobility in a multilevel framework, but it is more computationally demanding and requires specialized knowledge and software packages that may not be available in state and district departments of education.
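For context, the multiple membership model referred to here weights each mobile student's school effects by exposure time. In one common schematic notation (assumed here, not taken from the dissertation):

\[
y_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + \sum_{j \in \mathrm{sch}(i)} w_{ij}\, u_j + e_i, \qquad \sum_{j \in \mathrm{sch}(i)} w_{ij} = 1,
\]

where \(y_i\) is the outcome for student \(i\), \(\mathrm{sch}(i)\) is the set of schools the student attended, \(u_j\) is the random effect for school \(j\), and the weights \(w_{ij}\) are typically proportional to the time spent in each school. A non-mobile student has a single weight of one, recovering the conventional multilevel value-added model.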
The purpose of this dissertation was to compare how different multilevel value-added modeling approaches perform at various levels of mobility, in order to provide recommendations to state and district administrators about the types of models best suited to their data. To accomplish this task, a simulation study was conducted, manipulating the percentage of mobility in the dataset and the similarity of the sender and receiver schools of mobile students. Traditional gain score and covariate adjustment models were run, along with comparable multiple membership models, to determine the extent to which school effect estimates and school accountability rankings were affected, and to investigate the conditions under which a multiple membership model would produce a meaningful increase in accuracy to justify its computational demand. Additional comparisons were made on the relative bias of the fixed effect coefficients and of the random effect variance components, as well as the relative bias of their standard errors. The multiple membership models with schools proportionally weighted by time spent were the better-fitting models across all conditions. All multiple membership models recovered the intercept and the school-level residual variance better than the other models. However, when considering school accountability rankings, the proportion of school quintile shifts was close to equal across traditional and multiple membership models that were structurally similar to each other. This finding suggests that a multiple membership model is preferable for providing the most accurate parameter and standard error estimates; however, if school accountability rankings are of primary interest, a traditional value-added model (VAM) performs as well as a multiple membership model. An empirical data analysis was conducted to demonstrate how to prepare data, properly run these various models, and interpret the results, along with a discussion of issues to consider when selecting a model. Recommendations are provided on how to select a model, informed by the findings from the simulation portion of the study.

Item TACKLING PERFORMANCE AND SECURITY ISSUES FOR CLOUD STORAGE SYSTEMS (2022) Kang, Luyi; Jacob, Bruce; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Building data-intensive applications and emerging computing paradigms (e.g., Machine Learning (ML), Artificial Intelligence (AI), and the Internet of Things (IoT)) in cloud computing environments is becoming the norm, given the many advantages in scalability, reliability, security, and performance. However, under rapid changes in applications, system middleware, and underlying storage devices, service providers face new challenges in delivering performance and security isolation in the context of resources shared among multiple tenants. The gap between the decades-old storage abstraction and modern storage devices keeps widening, calling for software/hardware co-designs to achieve more effective performance and security protections. This dissertation rethinks the storage subsystem from the device level to the system level and proposes new designs at different levels to tackle performance and security issues for cloud storage systems. In the first part, we present an event-based SSD (Solid State Drive) simulator that models modern protocols, firmware, and storage backends in detail.
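As an illustration of the event-based style such simulators use, here is a minimal discrete-event engine (a generic sketch, not the dissertation's simulator; all names are invented):

```python
import heapq
from dataclasses import dataclass, field
from typing import Callable

@dataclass(order=True)
class Event:
    time: float                              # simulated time, e.g. microseconds
    action: Callable = field(compare=False)  # callback to run at that time

class EventSimulator:
    """Minimal discrete-event engine: a time-ordered queue of callbacks."""
    def __init__(self):
        self.now = 0.0
        self._queue = []

    def schedule(self, delay, action):
        heapq.heappush(self._queue, Event(self.now + delay, action))

    def run(self):
        while self._queue:
            event = heapq.heappop(self._queue)
            self.now = event.time
            event.action()

# Toy usage: a hypothetical flash read that completes 50 us after issue.
sim = EventSimulator()
sim.schedule(50.0, lambda: print(f"read done at t={sim.now} us"))
sim.run()
```

A production SSD simulator layers device-specific state (channels, dies, queues, firmware logic) on top of exactly this kind of loop.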
The proposed simulator can capture the nuances of SSD internal states under various I/O workloads, which helps researchers understand the impact of various SSD designs and workload characteristics on end-to-end performance. In the second part, we study the security challenges of shared in-storage computing infrastructures. Many cloud providers offer isolation at multiple levels to secure data and instances; however, security measures in emerging in-storage computing infrastructures have not been studied. We first investigate the attacks that could be conducted by offloaded in-storage programs in a multi-tenancy cloud environment. To defend against these attacks, we build a lightweight Trusted Execution Environment, IceClave, to enable security isolation between in-storage programs and internal flash management functions. We show that, while enforcing security isolation in the SSD controller with minimal hardware cost, IceClave still keeps the performance benefit of in-storage computing by delivering up to 2.4x better performance than the conventional host-based trusted computing approach. In the third part, we investigate the performance interference problem caused by other tenants' I/O flows. We demonstrate that I/O resource sharing can often lead to performance degradation and instability. The block device abstraction fails to expose SSD parallelism or to pass along application requirements. To this end, we propose a software/hardware co-design to enforce performance isolation by bridging the semantic gap. Our design can significantly improve QoS (Quality of Service) by reducing throughput penalties and tail latency spikes. Lastly, we explore more effective I/O control to address contention in the storage software stack. We illustrate that the state-of-the-art resource control mechanism, Linux cgroups, is insufficient for controlling I/O resources. Inappropriate cgroup configurations may even hurt the performance of co-located workloads in memory-intensive scenarios. We add kernel support for limiting page cache usage per cgroup and achieving I/O proportionality.

Item Predicting the magnetic field of the three-meter spherical Couette experiment (2021) Burnett, Sarah; Lathrop, Daniel P; Ide, Kayo; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The magnetohydrodynamics of Earth has been explored at the University of Maryland and the Institute of Geosciences in Grenoble, France, through experiments, numerical models, and machine learning. The interaction between Earth's magnetic field and its outer core is emulated in the laboratory using the three-meter spherical Couette device, which is filled with liquid sodium driven by two independently rotating concentric shells and subject to an external dipole magnetic field. Recently, the experiment has undergone modifications to increase the helical flows in the poloidal direction, bringing it closer to the convection-driven geodynamo flows of Earth. The experiment has 31 surface Hall probes that sparsely measure the external magnetic field. The numerical model, XSHELLS, solves the coupled Navier-Stokes and induction equations numerically to give a full picture of the internal velocity and magnetic fields; however, it cannot resolve all of the turbulence. In this thesis we aim to improve the prediction of magnetic fields in the experiment by performing studies on both experimental and simulation data.
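For reference, the induction equation that XSHELLS couples to the Navier-Stokes equations has the standard magnetohydrodynamic form (standard notation, not specific to this thesis):

\[
\frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{u} \times \mathbf{B}) + \eta \nabla^{2} \mathbf{B},
\]

where \(\mathbf{B}\) is the magnetic field, \(\mathbf{u}\) the velocity field, and \(\eta\) the magnetic diffusivity of the liquid sodium. The first term describes advection and stretching of field lines by the flow; the second, their resistive decay.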
First, we analyze the simulation data to assess the viability of using the measured external magnetic field to represent the internal dynamics of the velocity and magnetic fields. These simulations also elucidate the internal behavior of the experiment for the first time. Next, we compare the experimental magnetic field measurements with the extrapolated surface magnetic field measurements from simulations using principal component analysis, matching all parameters but the level of turbulence. Our goal is to see (i) whether the eigenvectors corresponding to the largest eigenvalues are comparable and (ii) how the surface measurements of the simulation couple with the internal measurements, which are not accessible in the experiment. Next, we apply several machine learning techniques to assess the feasibility of using the current probe setup to predict the magnetic field in time. In the second-to-last chapter, we assess potential locations for magnetic field measurements. These studies provide insight into the measurements required to predict Earth's magnetic field.

Item The complexity of simulating quantum physics: dynamics and equilibrium (2021) Deshpande, Abhinav; Gorshkov, Alexey V; Fefferman, Bill; Physics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Quantum computing is the offspring of quantum mechanics and computer science, two great scientific fields founded in the 20th century. Quantum computing is a relatively young field and is recognized as having the potential to revolutionize science and technology in the coming century. The primary question in this field is essentially which problems are feasible with potential quantum computers and which are not. In this dissertation, we study this question with a physical bent of mind. We apply tools from computer science and mathematical physics to study the complexity of simulating quantum systems. In general, our goal is to identify parameter regimes under which simulating quantum systems is easy (efficiently solvable) or hard (not efficiently solvable). This study leads to an understanding of the features that make certain problems easy or hard to solve. We also gain physical insight into the behavior of the systems being simulated. In the first part of this dissertation, we study the classical complexity of simulating quantum dynamics. In general, the systems we study transition from being easy to simulate at short times to being harder to simulate at later times. We argue that the transition timescale is a useful measure for various Hamiltonians and is indicative of the physics behind the change in complexity. We illustrate this idea for a specific bosonic system, obtaining a complexity phase diagram that delineates the regimes in which the system is easy or hard to simulate. We also prove that the phase diagram is robust, supporting our statement that the phase diagram is indicative of the underlying physics. In the next part, we study open quantum systems from the point of view of their potential to encode hard computational problems. We study a class of fermionic Hamiltonians subject to Markovian noise described by Lindblad jump operators and illustrate how certain Lindblad operators can sometimes induce computational complexity into the problem. Specifically, we show that these operators can implement entangling gates, which can be used for universal quantum computation.
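For reference, Markovian dynamics described by Lindblad jump operators takes the standard master-equation form (standard notation, in units with \(\hbar = 1\); the \(L_k\) are the jump operators):

\[
\frac{d\rho}{dt} = -i[H,\rho] + \sum_k \left( L_k \rho L_k^{\dagger} - \tfrac{1}{2}\left\{ L_k^{\dagger} L_k,\, \rho \right\} \right),
\]

where \(\rho\) is the density matrix of the system and \(H\) its Hamiltonian. The observation above is that the jump operators \(L_k\), usually regarded as pure noise, can themselves implement entangling operations.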
We also study a system of bosons with Gaussian initial states subject to photon loss and detected using photon-number-resolving measurements. We show that such systems can remain hard to simulate exactly and retain a relic of the "quantumness" present in the lossless system. Finally, in the last part of this dissertation, we study the complexity of simulating a class of equilibrium states, namely ground states. We give complexity-theoretic evidence to identify two structural properties that can make ground states easier to simulate: the existence of a spectral gap and the existence of a classical description of the ground state. Our findings complement and guide efforts in the search for efficient algorithms.

Item EXPERIMENTAL CHARACTERIZATION OF ATMOSPHERIC TURBULENCE SUPPORTED BY ADVANCED PHASE SCREEN SIMULATIONS (2020) Paulson, Daniel A; Davis, Christopher C; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Characterization of optical propagation through the turbulent lower atmosphere has been a topic of scientific investigation for decades and has important engineering applications in the fields of free-space optical communications, remote sensing, and directed energy. Traditional theories, starting with early radio science, have flowed down from the assumption of three-dimensional statistical symmetry of so-called fully developed, isotropic turbulence. More recent experimental results have demonstrated that anisotropy and irregular frequency-domain characteristics are regularly observed near boundaries of the atmosphere, and similar findings have been reported in the computational fluid dynamics literature. We have used a multi-aperture transmissometer in field testing to characterize atmospheric transparency, refractive index structure functions, and turbulence anisotropy near atmospheric boundaries. Additionally, we have fielded arrays of resistive temperature detector probes alongside optical propagation paths to provide direct measurements of temperature and refractive index statistics supporting the optical turbulence observations. We back up these experimental observations with a modified algorithm for modeling optical propagation through atmospheric turbulence. Our new phase screen approach utilizes a randomized spectral sampling algorithm to emulate the turbulence energy spectrum, improve the modeling of low-frequency fluctuations, and improve convergence with theory. We have used the new algorithm to investigate open theoretical topics, such as the behavior of beam statistics in the strong-fluctuation regime as functions of anisotropy parameters, and energy spectrum power-law behavior. These results can be leveraged to develop new approaches for the characterization of atmospheric optical turbulence.
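To make the phase screen machinery concrete, below is a minimal FFT-based Kolmogorov phase screen generator in the textbook spectral-sampling style (the conventional fixed-grid method, not the randomized sampling variant developed in this dissertation; the function name and parameter values are illustrative):

```python
import numpy as np

def kolmogorov_phase_screen(n=256, dx=0.01, r0=0.1, seed=None):
    """Generate an n x n phase screen (radians) by FFT spectral sampling.

    dx : grid spacing [m];  r0 : Fried parameter [m].
    Uses the Kolmogorov phase power spectrum
        Phi(f) = 0.023 * r0**(-5/3) * f**(-11/3),  f in cycles/m.
    """
    rng = np.random.default_rng(seed)
    df = 1.0 / (n * dx)                # frequency grid spacing [cycles/m]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    f = np.sqrt(fxx**2 + fyy**2)
    f[0, 0] = np.inf                   # suppress the undefined DC component
    psd = 0.023 * r0**(-5.0 / 3.0) * f**(-11.0 / 3.0)
    # Complex Gaussian white noise shaped by the square root of the PSD.
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    screen = np.fft.ifft2(noise * np.sqrt(psd) * df) * n * n
    return np.real(screen)

phase = kolmogorov_phase_screen(seed=1)
print(phase.shape, phase.std())
```

The known weakness of this fixed-grid version is that it under-represents fluctuations at frequencies below 1/(n*dx); randomized spectral sampling of the kind described above is one way to recover that low-frequency content.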
Item Handling of Missing Data with Growth Mixture Models (2019) Lee, Daniel Yangsup; Harring, Jeffrey R; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The recent growth of applications of growth mixture models for inference with longitudinal data has introduced a wide range of research dedicated to testing different aspects of the model. One area of research that has not drawn much attention, however, is the performance of growth mixture models with missing data and under the various methods for dealing with them. Missing data are usually an inconvenience that must be addressed in any data analysis scenario, and growth mixture models are no exception. While the literature on various other aspects of growth mixture models has grown, not much research has been conducted on the consequences of mishandling missing data. Although the literature on missing data has generally accepted the use of modern missing data handling techniques, these techniques are not free of problems, nor have they been comprehensively tested in the context of growth mixture models. The purpose of this dissertation is to apply the various missing data handling techniques to growth mixture models and, using Monte Carlo simulation, to provide guidance on the specific conditions under which particular methods will produce accurate and precise parameter estimates; such estimates are typically compromised when simple, ad hoc, or incorrect missing data handling approaches are used.

Item Theoretical Studies of the Workings of Processive Molecular Motors (2017) Vu, Huong Thuy; Thirumalai, Devarajan; Biophysics (BIPH); Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Processive molecular motors, such as kinesins, myosins, and helicases, take multiple discrete steps on linear polar tracks such as microtubules, filamentous actin, and DNA/RNA substrates. Insights into the mechanisms and functions of this important class of biological motors have been obtained through observations from single-molecule experiments and structural studies. Such information includes the distributions of n, the number of steps a motor takes before dissociating, and v, the motor velocity, in the presence and absence of an external resistive force, from single-molecule experiments, as well as structures of motors in different states and under different conditions. Based on the available data, this thesis focuses on using both analytical and computational theoretical tools to investigate the workings of processive motors. The two examples of processive motors considered here are kinesins, which walk on microtubules while transporting vesicles, and helicases, which translocate on DNA/RNA substrates while unwinding the helix. New physical principles and predictions related to their motility emerge from the proposed theories. The most significant results reported in this thesis are as follows. Exact and approximate equations for the velocity distribution, P(v), and the run-length distribution, P(n), have been derived. Application of the theory to kinesins shows that P(v) is non-Gaussian and bimodal at high resistive forces. This unexpected behavior is a consequence of the discrete spacing between the alpha/beta tubulins, the building blocks of the microtubule. In the case of helicases, we demonstrate that P(v) for the typical helicases T7 and T4 shows signatures of heterogeneity, inferred from large variations in velocity from molecule to molecule. The theory is used to propose experiments that would distinguish between different physical bases for the heterogeneity.
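For intuition about these distributions, consider the simplest kinetic picture (an illustrative limiting case, not the thesis's full theory): a motor that steps forward at rate \(k\) and detaches at rate \(\gamma\) takes a geometrically distributed number of steps,

\[
P(n) = \left(\frac{k}{k+\gamma}\right)^{\!n}\frac{\gamma}{k+\gamma}, \qquad \langle n \rangle = \frac{k}{\gamma},
\]

so the mean run length is set by the ratio of stepping to detachment rates. The exact and approximate results in the thesis concern richer kinetic schemes, for which this single-rate picture is the simplest limiting case.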
We generated a one-microsecond atomic simulation trajectory capturing the docking process of the neck linker, a crucial element deemed to be important in the motility of Kinesin-1; the conformational change in the neck linker is important for force generation in this type of motor. The simulations revealed new conformations of the neck linker that have not been noted in previous structural studies of Kinesin-1, but which are demonstrated to be relevant to another superfamily member, Kinesin-5. By comparing the simulation results with currently available data, we suggest that the two families might actually share more similarities in the neck-linker docking process than previously thought.

Item EXPERIMENTAL EVALUATION AND SIMULATION RESEARCH ON NOVEL VARIABLE REFRIGERANT FLOW SYSTEM (2017) Lin, Xiaojie; Radermacher, Reinhard; Srebric, Jelena; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The variable refrigerant flow (VRF) system is a popular building air-conditioning system that can provide cooling or heating to individual rooms independently. The system is called a "variable refrigerant flow" system because of its capability to regulate the refrigerant flow via precise control of variable-speed compressors and of the electronic expansion valve in each indoor unit. In this dissertation, an advanced VRF system that can provide space cooling, space heating, and water heating is experimentally evaluated in the cooling and heating seasons for both heat-recovery operation and water-heating operation. The VRF system is simulated in EnergyPlus and validated with experimental data. Based on the deviation analysis and a literature review, it is found that the existing VRF model could not fully reflect the operating characteristics of VRF systems, leading to high uncertainty in cooling/heating energy and energy consumption. A new VRF model is therefore proposed and validated in this research, resulting in a model uncertainty of less than 5%. Based on the new model, the seasonal performance of an energy-saving control strategy and of the concept of chilled-water storage is investigated. Meanwhile, to resolve the mismatch between a building's thermal load and the cooling/heating capability of the VRF system, a new VRF system with phase change material (PCM) based thermal energy storage (TES) is proposed. The new VRF system utilizes a single TES device to support operation in both the cooling and heating seasons. The performance of the new VRF system with PCM-based TES is investigated and compared to that of the baseline VRF system. It is found that the new VRF system with PCM-based TES can achieve both energy-efficiency and demand-response goals in the cooling and heating seasons. Based on this comparison, the effects of operation strategies and grid incentive programs are discussed. Finally, an economic analysis of the new VRF system with PCM-based TES, based on annual performance, is carried out.
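For a rough sense of how PCM storage sizing enters such an analysis, the storable energy of a PCM-based TES charged across its melting point is commonly estimated with the textbook sensible-plus-latent relation (a generic relation, not a formula from the dissertation):

\[
Q = m \left[ c_{p,s}\,(T_m - T_1) + L + c_{p,l}\,(T_2 - T_m) \right],
\]

where \(m\) is the PCM mass, \(L\) its latent heat of fusion, \(T_m\) the melting temperature, and \(c_{p,s}\), \(c_{p,l}\) the solid and liquid specific heats. The latent term \(mL\) is what lets a compact device shift a substantial fraction of the cooling or heating load.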
Item On agent-based modeling: Multidimensional travel behavioral theory, procedural models and simulation-based applications (2015) Xiong, Chenfeng; Zhang, Lei; Civil Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

This dissertation proposes a theoretical framework for modeling multidimensional travel behavior based on artificially intelligent agents, search theory, procedural (dynamic) models, and bounded rationality. For decades, despite the number of heuristic explanations for different results, the fact that "almost no mathematical theory exists which explains the results of the simulations" has remained one of the large drawbacks of the agent-based computational process approach. This is partly a side effect of its special feature that "no analytical functions are required." Among the rapidly growing literature devoted to departures from rational-behavior assumptions, this dissertation makes an effort to embed a sound theoretical foundation for the computational process approach and for agent-based microsimulation of transportation systems. The theoretical contribution is threefold: (1) it theorizes multidimensional knowledge updating, search start/stopping criteria, and search/decision heuristics, components that are formulated or empirically modeled and integrated in a unified and coherent approach; (2) it models procedural and dynamic agent-based decision-making, in which agents not only make decisions but also decide how and when to make those decisions; and (3) it replaces the conventional user equilibrium with a dynamic behavioral user equilibrium (BUE). Search start/stop criteria are defined such that the modeling process eventually leads to a steady state that is structurally different from user equilibrium (UE) or dynamic user equilibrium (DUE). The theory is supported by empirical observations, and the derived quantitative models are tested by agent-based simulation on a demonstration network. The model in its current form incorporates short-term behavioral dimensions: travel mode, departure time, pre-trip routing, and en-route diversion. Based on research needs and data availability, other dimensions can be added to the framework. The proposed model is successfully integrated with a dynamic traffic simulator (DTALite, a lightweight dynamic traffic assignment and simulation engine) and then applied to a mid-size study area in White Flint, Maryland. Results obtained from the integration corroborate the behavioral richness, computational efficiency, and convergence properties of the proposed theoretical framework. The model is then applied to a number of problems in transportation planning, operations, and optimization, which highlights the capability of the proposed theory to estimate rich behavioral dynamics and its potential for large-scale implementation. Future research should experiment with integration with activity-based models, land-use development, energy consumption estimators, and the like, to fully develop the potential of the agent-based model.
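To give a flavor of what a behavioral user equilibrium looks like operationally, here is a toy day-to-day route-choice loop in which boundedly rational agents switch routes only when the perceived gain exceeds an indifference threshold (a deliberately simplified sketch under assumed parameter values, not the dissertation's estimated model):

```python
import random

def bpr_time(flow, capacity, free_flow=10.0):
    """BPR-style congestible travel time (minutes) on one route."""
    return free_flow * (1.0 + 0.15 * (flow / capacity) ** 4)

N_AGENTS = 1000
THRESHOLD = 0.5               # indifference band (minutes), assumed
SWITCH_PROB = 0.2             # behavioral inertia, damps oscillation
capacities = [600.0, 400.0]   # two parallel routes, hypothetical values
choice = [random.randrange(2) for _ in range(N_AGENTS)]

for day in range(100):
    flows = [choice.count(r) for r in (0, 1)]
    times = [bpr_time(f, c) for f, c in zip(flows, capacities)]
    switchers = 0
    for i in range(N_AGENTS):
        other = 1 - choice[i]
        # Boundedly rational rule: switch only if the perceived saving
        # exceeds the indifference threshold (and not every day).
        if (times[choice[i]] - times[other] > THRESHOLD
                and random.random() < SWITCH_PROB):
            choice[i] = other
            switchers += 1
    if switchers == 0:        # behavioral steady state: no agent can
        break                 # gain more than THRESHOLD by switching
print(f"day {day}: flows={flows}, times={[round(t, 2) for t in times]}")
```

The steady state reached here is a BUE in miniature: flows stop changing not because travel times are exactly equalized, as in a classical UE, but because no agent perceives a gain large enough to act on.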