Theses and Dissertations from UMD
Permanent URI for this community: http://hdl.handle.net/1903/2
New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a four-month delay in the appearance of a given thesis/dissertation in DRUM.
More information is available at Theses and Dissertations at University of Maryland Libraries.
9 results
Search Results
Item: SENSITIVITY ANALYSIS AND STOCHASTIC OPTIMIZATIONS IN STOCHASTIC ACTIVITY NETWORKS (2022)
Wan, Peng; Fu, Michael C; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Activity networks are a powerful tool for modeling and analysis in project management and in many other applications, such as circuit design and parallel computing. An activity network can be represented by a directed acyclic graph with one source node and one sink node. The directed arcs between nodes in an activity network represent the precedence relationships between different activities in the project. In a stochastic activity network (SAN), the arc lengths are random variables. This dissertation studies stochastic gradient estimators for SANs using Monte Carlo simulation, and the application of those estimators to network optimization problems. A new algorithm called Threshold Arc Criticality (TAC) for estimating the arc criticalities of stochastic activity networks is proposed. TAC is based on the following result: given the lengths of all arcs in a SAN except for the one arc of interest, that arc is on the critical path (longest path) if and only if its length is greater than a threshold. By applying Infinitesimal Perturbation Analysis (IPA) to TAC, an unbiased estimator of the derivative of the arc criticalities with respect to parameters of the arc length distributions can be derived. The stochastic derivative estimator can be used for sensitivity analysis of arc criticalities via simulation.

Using TAC, a new IPA gradient estimator of the first and second moments of project completion time (PCT) is proposed. Combining the new PCT stochastic gradient estimator with a Taylor series approximation, a functional estimation procedure for estimating the change in PCT moments caused by a large perturbation in an activity duration's distribution parameter is proposed and applied to optimization problems involving time-cost tradeoffs. In activity networks, crashing an activity means reducing the activity's duration (deterministic or stochastic) by a given percentage with an associated cost. A crashing plan for a project aims to shorten the PCT by reducing the durations of a set of activities under a limited budget. A disruption is an event that occurs at an uncertain time; examples include natural disasters, electrical outages, and labor strikes. For an activity network, a disruption may cause delays in unfinished activities. Previous work formulates finding the optimal crashing plan of an activity network under a single disruption as a two-stage stochastic mixed-integer programming problem and applies a sample average approximation technique to find the optimal solution. In this dissertation, a new stochastic gradient estimator is derived and a gradient-based simulation optimization algorithm is applied to the problem of optimizing crashing under disruption.
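To make the threshold result concrete, here is a minimal Monte Carlo sketch of a TAC-style criticality estimate, written for this summary rather than taken from the dissertation. It uses a hypothetical five-arc "bridge" network with exponentially distributed arc lengths (the means are made up): every arc except the bridge arc a23 is sampled, the threshold beyond which a23 would join the longest path is computed, and the criticality is estimated as the average tail probability of a23 exceeding that threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical mean durations for a five-arc bridge network on nodes 1..4.
means = {"a12": 3.0, "a13": 5.0, "a23": 2.0, "a24": 6.0, "a34": 4.0}

# Sample every arc except the arc of interest, a23.
a12 = rng.exponential(means["a12"], n)
a13 = rng.exponential(means["a13"], n)
a24 = rng.exponential(means["a24"], n)
a34 = rng.exponential(means["a34"], n)

# Paths 1-2-4 and 1-3-4 avoid a23; path 1-2-3-4 is the only one through it.
longest_without = np.maximum(a12 + a24, a13 + a34)
through_without_arc = a12 + a34          # 1-2-3-4 path length with a23 removed

# Threshold result: a23 lies on the critical path iff its length exceeds this value.
threshold = np.maximum(longest_without - through_without_arc, 0.0)

# Conditional-expectation estimator: average the exponential tail probability P(X > threshold).
criticality = np.mean(np.exp(-threshold / means["a23"]))
print(f"estimated criticality of arc a23: {criticality:.4f}")
```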
Item: MODELING AND SIMULATION OF NOVEL MEDICAL RESPONSE SYSTEMS FOR OUT-OF-HOSPITAL CARDIAC ARREST (2020)
Lancaster, Greg James; Herrmann, Jeffrey W; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Sudden Cardiac Arrest (SCA) is the leading cause of death in the United States, resulting in 350,000 deaths annually. SCA survival requires immediate medical treatment with a defibrillatory shock and cardiopulmonary resuscitation. The fatality rate for out-of-hospital cardiac arrest is 90%, due in part to the reliance on Emergency Medical Services (EMS) to provide treatment. A substantial improvement in survival could be realized by applying early defibrillation to cardiac arrest victims. Automated External Defibrillators (AEDs) allow lay rescuers to provide early defibrillation before the arrival of EMS. However, very few out-of-hospital cardiac arrests are currently treated with AEDs. Novel response concepts are being explored to reduce the time to defibrillation, including mobile citizen responders dispatched by a cell phone app to nearby cardiac arrest locations and the use of drones to deliver AEDs to a cardiac arrest scene. A small number of pilot studies of these systems are currently in progress; however, the effectiveness of these systems remains largely unknown.

This research presents a modeling and simulation approach to predict the effectiveness of various response concepts, with comparison to the existing standard of EMS response. The model uses a geospatial Monte Carlo sampling approach to simulate the random locations of a cardiac arrest within a geographical region, as well as both random and fixed origin locations of responding agents. The model predicts the response time of EMS, mobile dispatched responders, or drone AED delivery based on the distance traveled and the mode of transit, while accounting for additional system factors such as dispatch time, availability of equipment, and the reliability of the responders. Response times are translated to a likelihood of survival for each simulated case using a logistic regression model. Sensitivity analysis and response surface designed experiments were performed to characterize the important factors for response time predictions. Simulations of multiple types of systems in an example region are used to compare potential survival improvements. Finally, a cost analysis of the different systems is presented along with a decision analysis approach, which demonstrates how the method can be applied based on the needs and budgets of a municipality.
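As a rough illustration of the geospatial sampling idea, the sketch below simulates random arrest locations in a hypothetical square region served by a single fixed EMS station, converts straight-line distances into response times, and maps those times to survival probabilities through a placeholder logistic curve. The region size, travel speed, dispatch delay, and logistic coefficients are illustrative assumptions, not values from the dissertation (which also models transit modes, responder reliability, and equipment availability).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical 10 km x 10 km region; a single EMS station fixed at the center.
arrests = rng.uniform(0.0, 10.0, size=(n, 2))
station = np.array([5.0, 5.0])

dispatch_min = rng.normal(1.5, 0.5, n).clip(min=0.0)   # assumed dispatch delay (minutes)
speed_kmh = 50.0                                        # assumed average travel speed
dist_km = np.linalg.norm(arrests - station, axis=1)     # straight-line, not road, distance
response_min = dispatch_min + dist_km / speed_kmh * 60.0

# Placeholder logistic survival model: survival odds fall as response time grows.
beta0, beta1 = 0.3, -0.35                               # illustrative coefficients only
p_survive = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * response_min)))

print(f"mean response time: {response_min.mean():.1f} min")
print(f"predicted survival rate: {p_survive.mean():.1%}")
```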
Item: MODELING AND VALIDATION OF NEUTRON ACTIVATION AND GAMMA-RAY SPECTROSCOPY MEASUREMENTS AS AN EXPLORATORY TOOL FOR NUCLEAR FORENSIC ANALYSIS (2018)
Goodell, John; Mignerey, Alice C; Chemistry; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The continued success of nuclear forensic analysis relies on the development of new material and process signatures. However, the unique safety hazards and strict controls concerning nuclear materials and operations limit the practicality of experimental scenarios. To bypass these limitations, the nuclear science community is increasingly reliant on simulation-based tools. In this dissertation, neutron activation and gamma-ray spectroscopy measurements are simulated to explore the activation network of stainless steel and its components using two neutron sources. The goal is to identify nuclides or ratios that are indicative of the neutron source and to test their measurability in complex samples. The neutron sources are a critical assembly, providing fission-spectrum neutrons, and a beryllium (Be) neutron converter, producing neutrons through various deuteron-induced reactions. Simulated neutron energy distributions are calculated using the Monte Carlo N-Particle (MCNP) radiation transport code. Neutron activation has an inherent neutron energy dependence, making nuclide production rates contingent on the neutron energy distribution. Activation calculations performed by hand and with the FISPACT-II code are compared against experiments to validate the neutron energy distributions and assess available reaction cross-section data. Additionally, ratios of activation products common to both neutron sources are investigated to determine whether they are indicative of the neutron source.

Gamma-ray spectroscopy with high-purity germanium (HPGe) detectors is the leading passive assay technique for radioactive samples, providing detailed qualitative and quantitative information while preserving sample integrity. A simple HPGe detector is modeled using MCNP to assess the measurability of different activation product ratios. The HPGe model is validated against its real counterpart to determine whether its level of complexity is sufficient for this work. Activation calculations were able to validate the critical assembly neutron energy distribution but showed significant errors in the Be converter model. Additionally, validation of the activation calculations identified shortcomings in the 60Ni(n,p)60Co reaction cross section. Absent interferences, HPGe simulation performance was equivalent to that of the real detector. The HPGe model also showed that decay time can affect measurement accuracy when significant interferences are present. Activation product ratios identified in this work that are indicative of the neutron source are 57Co/54Mn, 51Cr/54Mn, 57Co/59Fe, and 51Cr/59Fe.
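One reason decay time matters for ratio measurements is that the two nuclides in a ratio decay at different rates, so a measured activity ratio drifts between the end of irradiation and the measurement. The toy calculation below is not from the dissertation: it shows this drift for the 51Cr/59Fe pair using approximate half-lives (standard nuclear data values quoted from memory; verify before relying on them), with arbitrary placeholder starting activities.

```python
import numpy as np

# Approximate half-lives in days (values should be checked against evaluated nuclear data).
half_life = {"Co-57": 271.8, "Mn-54": 312.1, "Cr-51": 27.7, "Fe-59": 44.5}
lam = {k: np.log(2) / v for k, v in half_life.items()}

def activity_ratio(a0_num, a0_den, nuc_num, nuc_den, t_days):
    """Activity ratio after t_days of decay, given end-of-irradiation activities."""
    return (a0_num * np.exp(-lam[nuc_num] * t_days)) / (a0_den * np.exp(-lam[nuc_den] * t_days))

# Illustrative end-of-irradiation activities (arbitrary units, not from the dissertation).
for t in (0, 30, 90, 180):
    r = activity_ratio(1.0, 1.0, "Cr-51", "Fe-59", t)
    print(f"t = {t:3d} d : Cr-51/Fe-59 = {r:.3f}")
```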
Item: Evaluating Risks of Dam-Reservoir Systems Using Efficient Importance Sampling (2016)
Deng, Qianli; Baecher, Gregory B; Civil Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The occurrence frequency of failure events serves as a critical index of the safety status of dam-reservoir systems. Although overtopping is the most common failure mode with significant consequences, this type of event, in most cases, has a small probability. Estimating such rare-event risks for dam-reservoir systems with crude Monte Carlo (CMC) simulation techniques requires a prohibitively large number of trials, and hence significant computational resources, to reach satisfactory accuracy; otherwise, the estimates would not be accurate enough. To reduce the computational expense and improve the efficiency of risk estimation, an importance sampling (IS) based simulation approach is proposed in this dissertation to address the overtopping risks of dam-reservoir systems. Deliverables of this study mainly include the following five aspects: 1) the reservoir inflow hydrograph model; 2) the dam-reservoir system operation model; 3) the CMC simulation framework; 4) the IS-based Monte Carlo (ISMC) simulation framework; and 5) a comparison of the overtopping risk estimates from the CMC and ISMC simulations. In a broader sense, this study meets the following three expectations: 1) to address the natural stochastic characteristics of the dam-reservoir system, such as the reservoir inflow rate; 2) to build up the fundamental CMC and ISMC simulation frameworks of the dam-reservoir system in order to estimate the overtopping risks; and 3) to compare the simulation results and the computational performance in order to demonstrate the advantages of ISMC simulation.

The estimated overtopping probabilities could be used to guide future dam safety investigations and studies, and to supplement conventional analyses in decision making on dam-reservoir system improvements. At the same time, the proposed ISMC simulation methodology is reasonably robust and is shown to improve overtopping risk estimation. The more accurate estimates, smaller variance, and reduced CPU time expand the application of Monte Carlo (MC) techniques to evaluating rare-event risks for infrastructure.
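The variance advantage of importance sampling over crude Monte Carlo for rare events can be seen in a toy one-dimensional example. The sketch below is illustrative only and does not use the dissertation's hydrograph or reservoir operation models: the "overtopping" event is simply a standard normal variable exceeding a high threshold, and the IS estimator shifts the sampling mean to that threshold and reweights by the likelihood ratio.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)
n = 100_000
c = 4.5   # hypothetical standardized overtopping threshold (illustrative only)

# Crude Monte Carlo: very few (often zero) samples land beyond the threshold.
x = rng.standard_normal(n)
p_cmc = np.mean(x > c)

# Importance sampling: sample from N(c, 1) and reweight back to N(0, 1).
y = rng.standard_normal(n) + c
w = np.exp(-c * y + 0.5 * c * c)          # likelihood ratio N(0,1)/N(c,1)
hits = (y > c) * w
p_is = hits.mean()
se_is = hits.std(ddof=1) / np.sqrt(n)

print(f"exact    : {0.5 * erfc(c / sqrt(2)):.2e}")
print(f"crude MC : {p_cmc:.2e}")
print(f"IS       : {p_is:.2e} (std. err. {se_is:.2e})")
```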
Item: GLOBAL AND REGIONAL REFERENCE MODELS FOR PREDICTING THE GEONEUTRINO FLUX AT SNO+, SUDBURY, CANADA (2013)
Huang, Yu; McDonough, William F; Rudnick, Roberta L; Geology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Determining the radiogenic heat power that is driving plate tectonics and mantle convection is fundamentally important to understanding the Earth's heat budget and its thermal and chemical evolution. The radiogenic heat power is coupled to the chemical composition of the Bulk Silicate Earth (BSE), which has been debated for decades. Geoneutrinos produced by beta-minus decays in the U, Th and K decay systems are correlated with the radiogenic heat power in the Earth. Measured geoneutrino signals at different locations can be used to investigate the distributions and abundances of U and Th, given appropriate reference Earth models. Here I construct both a global and a regional-scale reference model to predict the geoneutrino signal at the SNO+ detector in Sudbury, Canada. The primary objective of this dissertation is to predict the geoneutrino detection rate for this soon-to-be-operational detector and to evaluate its asymmetric uncertainty, caused by the log-normal distributions of U and Th in the crust. The focus of both models is on the geoneutrino signal from the continental crust, which determines SNO+'s sensitivity to the mantle geoneutrino signal, which in turn is key to testing different BSE compositional models. The total geoneutrino signal at SNO+ is predicted to be 40 +6/−4 TNU by combining the global and regional reference model predictions and assuming that the contribution from the continental lithospheric mantle and the convecting mantle is 9 TNU. It is not feasible for SNO+, on its own, to provide an experimental result that will determine the mantle geoneutrino signal and refine different BSE compositional models, because of the large uncertainty associated with the crustal contribution. The regional crust study presented here lowers the uncertainty on the geoneutrino signal originating from the bulk crust relative to the global reference model prediction (30.7 +6.0/−4.2 TNU vs. 34.0 +6.3/−5.7 TNU). A future goal is to increase the resolution of the model in the area proximal to the detector (e.g., 50 km by 50 km), which will further reduce the uncertainty. To obtain useful data on the mantle geoneutrino signal, detections of geoneutrinos carried out on the oceans, such as the proposed ocean-bottom Hanohano experiment, will be of significant scientific value.

Item: Assessing the uncertainty of emergy analyses with Monte Carlo simulations (2012)
Hudson, Amy; Tilley, David R; Environmental Science and Technology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Crop production systems were used to show the presence and propagation of uncertainty in emergy analyses and the effect of source variance on the variance of the yield unit emergy value (UEV). Data on energy/masses and UEVs for each source and yield were collected from the emergy literature and considered as inputs for the Monte Carlo simulation. The inputs were assumed to follow normal, lognormal, or uniform probability distributions. Using these inputs and a tabular method, two models ran Monte Carlo simulations to generate yield UEVs. Supplemental Excel files elucidate the Monte Carlo simulations' calculations. The nitrogen fertilizer UEV and the net topsoil loss energy were the inputs with the largest impact on the variance of the yield's UEV. These two sources also make the largest emergy contributions to the yield and should be the focus of a manager intent on reducing total system uncertainty. The selection of a statistical distribution had an impact on the yield UEV, and thus these analyses may need to remain system- or even source-specific.

Item: Characterization of gradient estimators for stochastic activity networks (2011)
Manterola, Renato Mauricio; Fu, Michael C; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This thesis aims to characterize the statistical properties of Monte Carlo simulation-based gradient estimation techniques for performance measures in stochastic activity networks (SANs), using the estimators' variance as the comparison criterion. When analyzing SANs, both performance measures and their sensitivities (gradient, Hessian) are important. This thesis focuses on analyzing three direct gradient estimation techniques: infinitesimal perturbation analysis, the score function or likelihood ratio method, and weak derivatives. To investigate how the statistical properties of the different gradient estimation techniques depend on characteristics of the SAN, we carry out both theoretical analyses and numerical experiments. The objective of these studies is to provide guidelines for selecting which technique to use for particular classes of SANs, based on features such as complexity, size, shape and interconnectivity. The results reveal that a specific weak derivatives-based method with common random numbers outperforms the other direct techniques in nearly every network configuration tested.
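For a feel of how the choice of estimator affects variance, the sketch below compares IPA and score-function (likelihood ratio) estimators of the derivative of expected completion time with respect to a mean activity duration, on a trivial two-activity parallel network with exponential durations. The parameters are arbitrary, and the thesis itself studies much richer network topologies as well as weak-derivative estimators; this is only an illustration of the comparison criterion.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
theta1, theta2 = 2.0, 3.0        # hypothetical mean durations of two parallel activities

x1 = rng.exponential(theta1, n)
x2 = rng.exponential(theta2, n)
pct = np.maximum(x1, x2)         # completion time of the two-activity parallel network

# IPA: differentiate the sample path; only realizations where x1 is critical contribute.
ipa = (x1 / theta1) * (x1 > x2)

# Score function / likelihood ratio: weight the performance by d/d(theta1) of log-density of x1.
score = (x1 - theta1) / theta1**2
lr = pct * score

for name, est in (("IPA", ipa), ("LR ", lr)):
    print(f"{name}: estimate {est.mean():.4f}, variance {est.var(ddof=1):.4f}")
```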
Item: Surface modification of metal oxide nanoparticles by capillary condensation and its application (2006-08-15)
Kim, Seonmin; Ehrman, Sheryl H; Chemical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Titania nanoparticles were modified using tetraethyl orthosilicate in a capillary condensation process, and nanoparticles interconnected by silica layers were obtained. The amount of silica in the layer was tunable by adjusting the saturation conditions, and the thickness of the layer generally increased as the saturation ratio increased up to a saturation ratio of 1.0. However, layer thickness was significantly affected by the geometric dimensions of the space between nanoparticles. Grand canonical Monte Carlo simulation was utilized to study the role of particle surface curvature and the gap space between particles in capillary condensation. The curvature of the particle surface and the gap space played a crucial role in meniscus formation. Simulation results suggested that the effect of gap space becomes significant at saturation ratios less than 0.8, and that the effect of particle curvature is important near a saturation ratio of 1.0. From these results, these effects need to be considered for the formation of silica layers with a specific thickness. In order to further investigate the effect of the added silica layer, the photocatalytic activity of surface-modified bi-phase titania nanoparticles was characterized on the basis of a pseudo-first-order kinetic model. The results indicate that surface modification can enhance the photoactivity of the original titania nanoparticles when an optimal amount of silica is present on the surface of the nanoparticles.

Item: Generalized Confirmatory Factor Mixture Modeling: A Tool for Assessing Factorial Invariance Across Unspecified Populations (2004-04-30)
Gagne, Phill; Hancock, Gregory R; Measurement, Statistics and Evaluation
Mixture modeling is an increasingly popular analysis in applied research settings. Confirmatory factor mixture modeling can be used to test for the presence of multiple populations that differ on one or more parameters of a factor model in a sample lacking a priori information about population membership. There have, however, been considerable difficulties regarding convergence and parameter recovery in confirmatory factor mixture models. The present study uses a Monte Carlo simulation design to expand upon a previous study by Lubke, Muthén, & Larsen (2002), which investigated the effects on convergence and bias of introducing intercept heterogeneity across latent classes, a break from the standard approach of intercept invariance in confirmatory factor modeling when the mean structure is modeled. Using convergence rates and percent bias as outcome measures, eight design characteristics of confirmatory factor mixture models were manipulated to investigate their effects on model performance: N, mixing proportion, number of indicators, factor saturation, number of heterogeneous intercepts, location of intercept heterogeneity, magnitude of intercept heterogeneity, and the difference between the latent means (Δκ) of the two modeled latent classes. A small portion of the present study examined another break from standard practice by including models with noninvariant factor loadings. Higher rates of convergence and lower bias in the parameter estimates were found for models with intercept and/or factor loading noninvariance than for models that were completely invariant. All manipulated model conditions affected convergence and bias, often in the form of interaction effects, with the most influential facets after the presence of heterogeneity being N and Δκ, both having a direct relation with convergence rates and an inverse relation with bias magnitude. The findings of the present study can be used to some extent to inform design decisions by applied researchers, but breadth of conditions was prioritized over depth, so the results are better suited to guiding future methodological research into confirmatory factor mixture models. Such research might consider the effects of larger Ns in models with complete invariance of intercepts and factor loadings, smaller values of Δκ in the presence of noninvariance, and additional levels of loading heterogeneity within latent classes.
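The two outcome measures used above, convergence rate and percent bias, can be illustrated with a toy replication study. The sketch below substitutes a simple two-component Gaussian mixture (fit with scikit-learn) for a confirmatory factor mixture model, so it mirrors only the bookkeeping of such a simulation study, not its models or design conditions; the sample sizes and the true mean difference are arbitrary assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
n_reps, n_per_class = 200, 150
true_delta = 1.0                      # assumed true separation between latent class means

converged, estimates = 0, []
for _ in range(n_reps):
    # Generate a two-class sample whose class means differ by true_delta.
    x = np.concatenate([rng.normal(0.0, 1.0, n_per_class),
                        rng.normal(true_delta, 1.0, n_per_class)]).reshape(-1, 1)
    gm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(x)
    if gm.converged_:
        converged += 1
        estimates.append(abs(gm.means_[0, 0] - gm.means_[1, 0]))

# Percent bias of the estimated mean difference across converged replications.
pct_bias = 100.0 * (np.mean(estimates) - true_delta) / true_delta
print(f"convergence rate: {converged / n_reps:.1%}")
print(f"percent bias in the mean difference: {pct_bias:+.1f}%")
```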