Theses and Dissertations from UMD

Permanent URI for this community: http://hdl.handle.net/1903/2

New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a four-month delay before a given thesis/dissertation appears in DRUM.

More information is available on the Theses and Dissertations at University of Maryland Libraries page.


Search Results

Now showing 1 - 10 of 10
  • MONTE CARLO SIMULATIONS OF BRILLOUIN SCATTERING IN TURBID MEDIA
    (2023) Lashley, Stephanie; Chembo, Yanne K; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Brillouin microscopy is a non-invasive, label-free optical elastography method for measuring the mechanical properties of cells. It provides information on the longitudinal modulus and viscosity of a medium, which can be indicators of traumatic brain injury, cancerous tumors, or fibrosis. All optical techniques face difficulties when imaging turbid media, and Monte Carlo simulations are considered the gold standard for modeling these scenarios. Brillouin microscopy adds a unique challenge to this problem because of the angular dependence of the scattering event. This thesis extends a traditional Monte Carlo simulation software package by adding the capability to simulate Brillouin scattering in turbid media, providing a way to test strategies for mitigating the effects of multiple elastic scattering without the time and cost of physical experiments. Experimental results have suggested methods to alleviate the problems caused by multiple elastic scattering, and this thesis verifies the simulation results against those experimental findings.
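    As a minimal illustration of the extension described above, the Python sketch below folds an angle-dependent Brillouin shift into a toy Monte Carlo scattering loop; the scattering parameters, event probability, and shift magnitude are all assumed for illustration and are not taken from the thesis software.

```python
# Minimal sketch, not the thesis software: a photon random walk in a turbid
# medium in which each scattering event is either elastic (Henyey-Greenstein)
# or Brillouin, whose frequency shift depends on the scattering angle.
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

P_BRILLOUIN = 1e-3   # probability that a given event is Brillouin (assumed)
G = 0.9              # Henyey-Greenstein anisotropy factor (assumed)
OMEGA_B = 5.0        # Brillouin shift at 180 degrees, GHz (illustrative)

def hg_cos_theta(g):
    """Sample cos(theta) of a scattering event (Henyey-Greenstein)."""
    xi = rng.random()
    return (1 + g**2 - ((1 - g**2) / (1 - g + 2 * g * xi))**2) / (2 * g)

shifts = []
for _ in range(10_000):                      # photons
    freq_shift = 0.0
    for _ in range(50):                      # scattering events per photon
        cos_t = hg_cos_theta(G)
        if rng.random() < P_BRILLOUIN:
            # The Brillouin shift scales with sin(theta/2) -- the angular
            # dependence that makes turbid media hard for Brillouin imaging.
            freq_shift += OMEGA_B * np.sqrt((1 - cos_t) / 2)
    shifts.append(freq_shift)

print("mean accumulated Brillouin shift (GHz):", np.mean(shifts))
```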
  • MODEL-BASED SYSTEMS ENGINEERING APPLIED TO THE TRAJECTORY PLANNING FOR AUTONOMOUS VEHICLES
    (2018) Bansal, Siddharth; Baras, John S.; Systems Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
A passing maneuver is a complex driving maneuver, and it becomes more challenging in oncoming traffic. In this study, a passing scenario with three vehicles is considered in which car 1, an Autonomous Vehicle (AV), is moving behind car 2 in the same lane. The third vehicle is part of the oncoming traffic in the adjacent lane. The primary goal is to model and evaluate a measurement-based decision-making strategy for the AV that satisfies driving safety constraints. This strategy is based on optimal control, with the objective of performing the passing maneuver safely. To evaluate the efficiency of the decision-making strategy (the probability of safely completing the passing maneuver), a model of the system was developed that treats all three cars as point masses. Two binary variables, each representing the collaborative nature of cars 2 and 3, were defined. These variables indicate whether the two vehicles will collaborate with the AV once they learn of its intention to overtake. Lastly, a sensitivity study and a trade-off study were performed to determine optimal design parameters for the AV's measurement system and decision-making strategy.
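    The flavor of the point-mass model can be conveyed with a simple constant-speed feasibility check, sketched below in Python. The function, the safety margin, and all numbers are illustrative assumptions, not the dissertation's optimal-control formulation.

```python
# Illustrative sketch, not the thesis model: a point-mass feasibility check
# for the three-car passing scenario. Car 1 (the AV) follows car 2; car 3 is
# oncoming in the adjacent lane. Constant speeds and all values are assumed.
def passing_is_safe(v1, v2, v3, gap12, gap13, pass_margin=20.0):
    """Return True if the AV (car 1) can clear car 2 before meeting car 3.

    v1, v2, v3   -- speeds in m/s (car 3 travels toward the AV)
    gap12, gap13 -- current longitudinal gaps in m
    pass_margin  -- extra distance the AV must gain past car 2 (assumed)
    """
    if v1 <= v2:
        return False                              # cannot close the gap at all
    t_pass = (gap12 + pass_margin) / (v1 - v2)    # time to overtake car 2
    t_meet = gap13 / (v1 + v3)                    # time until car 3 arrives
    return t_pass < t_meet

# AV at 30 m/s, lead car at 25 m/s, oncoming car at 25 m/s 400 m away:
# prints False -- the oncoming car arrives before the pass can complete.
print(passing_is_safe(v1=30.0, v2=25.0, v3=25.0, gap12=30.0, gap13=400.0))
```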
  • Bayesian Estimation of the Inbreeding Coefficient for Single Nucleotide Polymorphism Using Complex Survey Data
    (2015) Xue, Zhenyi; Lahiri, Partha; Li, Yan; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
In genome-wide association studies (GWAS), single nucleotide polymorphisms (SNPs) are often used as genetic markers to study gene-disease association. Some large-scale health sample surveys have recently started collecting genetic data. There is now growing interest in developing statistical procedures using genetic survey data. This calls for innovative statistical methods that incorporate both genetic sampling and statistical (survey) sampling. Under simple random sampling, the traditional estimator of the inbreeding coefficient is given by 1 - (number of observed heterozygotes) / (number of expected heterozygotes). Genetic data quality control reports published by the National Health and Nutrition Examination Survey (NHANES) and the Health and Retirement Study (HRS) use this simple estimator, which serves as a reasonable quality control tool for identifying problems such as genotyping error. There is, however, a need to improve on this estimator by considering different features of the complex survey design. The main goal of this dissertation is to fill this important research gap. First, a design-based estimator and its associated jackknife standard error estimator are proposed. Secondly, a hierarchical Bayesian methodology is developed using the effective sample size and genotype counts. Lastly, a Bayesian pseudo-empirical likelihood estimator is proposed that uses the expected number of heterozygotes in the estimating equation as a constraint when maximizing the pseudo-empirical likelihood. One advantage of the proposed Bayesian methodology is that the prior distribution can be used to restrict the parameter space induced by the general inbreeding model. The proposed estimators are evaluated using Monte Carlo simulation studies. Moreover, the proposed estimates of the inbreeding coefficients of SNPs from the APOC1 and BDNF genes are compared using data from the 2006 Health and Retirement Study.
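    The traditional estimator quoted above is straightforward to compute from genotype counts, with the Hardy-Weinberg expectation supplying the denominator; a minimal Python sketch follows. The genotype counts are illustrative, not survey data.

```python
# A minimal sketch of the traditional (simple-random-sampling) estimator:
# F-hat = 1 - observed heterozygotes / expected heterozygotes, where the
# expectation is taken under Hardy-Weinberg equilibrium. Counts are
# illustrative, not from NHANES or HRS.
def inbreeding_coefficient(n_aa, n_ab, n_bb):
    """Estimate F for one SNP from genotype counts (AA, AB, BB)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)        # frequency of allele A
    expected_het = 2 * p * (1 - p) * n     # HWE expectation
    return 1 - n_ab / expected_het

print(inbreeding_coefficient(n_aa=360, n_ab=480, n_bb=160))  # 0.0 under HWE
```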
  • Wind Power Development in the United States: Effects of Policies and Electricity Transmission Congestion
    (2013) Hitaj, Claudia; McConnell, Kenneth E; Agricultural and Resource Economics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
In this dissertation, I analyze the drivers of wind power development in the United States as well as the relationship between renewable power plant location and transmission congestion and emissions levels. I first examine how government renewable energy incentives and access to the electricity grid affect investment in wind power plants across counties from 1998 to 2007. The results indicate that the federal production tax credit, state-level sales tax credits, and production incentives play an important role in promoting wind power. In addition, higher wind power penetration levels can be achieved by bringing more parts of the electricity transmission grid under independent system operator regulation. I conclude that state and federal government policies play a significant role in wind power development, both by providing financial support and by improving physical and procedural access to the electricity grid. Second, I examine the effect of renewable power plant location on electricity transmission congestion levels and system-wide emissions levels in a theoretical model and a simulation study. A new renewable plant takes the effect of congestion on its own output into account but ignores the effect of its marginal contribution to congestion on output from existing plants, which results in curtailment of renewable power. Though pricing congestion removes the externality and reduces curtailment, I find that in the absence of a price on emissions, pricing congestion may in some cases actually increase system-wide emissions. The final part of my dissertation deals with an econometric issue that emerged from the empirical analysis of the drivers of wind power. I study the effect of the degree of censoring on random-effects Tobit estimates in finite samples, with a particular focus on severe censoring, when the percentage of uncensored observations falls to 1 to 5 percent. The results show that the Tobit model performs well even at 5 percent uncensored observations, with the bias in the Tobit estimates remaining at or below 5 percent. Under severe censoring (1 percent uncensored observations), large biases appear in the estimated standard errors and marginal effects. These biases are generally reduced as the sample size increases in both N and T.
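    The censoring regimes studied in the final chapter can be sketched with a toy random-effects Tobit data-generating process, as below; the panel dimensions, coefficients, and intercept are assumptions chosen so that only a few percent of observations are uncensored.

```python
# Illustrative sketch of a random-effects Tobit data-generating process in the
# severe-censoring regime: the latent outcome is observed only when positive,
# and the intercept is tuned so that roughly 1-5 percent of observations are
# uncensored. All parameter values are assumed.
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 10                          # panel dimensions (assumed)
alpha = rng.normal(0, 1, size=(N, 1))   # random effects, one per unit
x = rng.normal(0, 1, size=(N, T))
latent = -3.5 + 1.0 * x + alpha + rng.normal(0, 1, size=(N, T))
y = np.maximum(latent, 0.0)             # Tobit type-I censoring at zero

print("share uncensored:", (y > 0).mean())  # roughly a few percent
```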
  • Exploring Equilibrium Systems with Nonequilibrium Simulations
    (2012) Ballard, Andrew James; Jarzynski, Christopher; Chemical Physics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Equilibrium sampling is at the core of computational thermodynamics, aiding our understanding of various phenomena in the natural sciences, including phase coexistence, molecular solvation, and protein folding. Despite the widespread development of novel sampling strategies over the years, efficient simulation of large, complex systems remains a challenge. While the majority of current methods, such as simulated tempering, replica exchange, and Monte Carlo methods, rely solely on equilibrium techniques, recent results in statistical physics have uncovered the possibility of sampling equilibrium states through nonequilibrium simulations. In our first study we present a new replica exchange sampling strategy, "Replica Exchange with Nonequilibrium Switches," which uses nonequilibrium simulations to enhance equilibrium sampling. In our method, trial swap configurations between replicas are generated through nonequilibrium switching simulations that drive the replicas toward each other in phase space. By means of these switching simulations we can increase the effective overlap between replicas, enhancing the probability that swap moves are accepted and ultimately leading to more effective sampling of the underlying energy landscape. Simulations on model systems reveal that our method can be beneficial in the case of low replica overlap, matching the efficiency of traditional replica exchange while using fewer processors. We also demonstrate how our method can be applied to the calculation of solvation free energies. In a second, separate study, we investigate the dynamics leading to the dissociation of Na-Cl in water. Here we employ tools of rare event sampling to deduce the role of the surrounding water molecules in promoting the dissociation of the ion pair. We first study the thermodynamic forces leading to dissociation, finding it to be driven energetically and opposed entropically. In further analysis of the system dynamics, we deduce (a) the spatial extent over which solvent fluctuations influence dissociation, (b) the role of sterics and electrostatics, and (c) the importance of inertia in enhancing the reaction probability.
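    A schematic of the swap step, in the spirit of the method described above, is sketched below: the swap is accepted with a Metropolis-like probability in the total dimensionless switching work. The exact acceptance rule and the switching protocol here are simplified assumptions, not the paper's derivation.

```python
# Schematic sketch in the spirit of "Replica Exchange with Nonequilibrium
# Switches": trial configurations are driven between replicas by nonequilibrium
# switching simulations, and the swap is accepted with a Metropolis-like
# probability in the summed dimensionless work. Simplified for illustration.
import math, random

def accept_swap(work_ab, work_ba):
    """Metropolis-style test on the total dimensionless switching work."""
    total_work = work_ab + work_ba
    return random.random() < min(1.0, math.exp(-total_work))

# Zero-work switches are always accepted; costly (large positive work)
# switches are accepted only rarely, as in ordinary Metropolis sampling.
print(accept_swap(0.0, 0.0))   # True
print(accept_swap(5.0, 4.0))   # almost always False
```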
  • A Comparison of Methods for Testing for Interaction Effects in Structural Equation Modeling
    (2010) Weiss, Brandi A.; Harring, Jeffrey R.; Hancock, Gregory R.; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The current study aimed to determine the best method for estimating latent variable interactions as a function of the size of the interaction effect, sample size, the loadings of the indicators, the size of the relation between the first-order latent variables, and normality. Data were simulated from known population parameters and analyzed using nine latent variable methods of testing for interaction effects. Evaluation criteria for comparing the methods included the proportion of relative bias, the standard deviation of parameter estimates, the mean standard error estimate, the ratio of the mean standard error estimate to the standard deviation of parameter estimates, the percentage of converged solutions, Type I error rates, and empirical power. It was found that when data were normally distributed and the sample size was 250 or more, the constrained approach resulted in the least biased estimates of the interaction effect, the most accurate standard error estimates, high convergence rates, and adequate Type I error rates and power. However, when sample sizes were small and the loadings were of adequate size, the latent variable scores approach may be preferable to the constrained approach. When data were severely non-normal, all of the methods were biased, had inaccurate standard error estimates, low power, and high Type I error rates. Thus, when data were non-normal, relative rather than absolute comparisons were made among the approaches. In relative terms, the marginal maximum likelihood approach performed the least poorly of the methods for estimating the interaction effect, but it requires sample sizes of 500 or greater. However, when data were non-normal, latent moderated structure analysis resulted in the least biased estimates of the first-order effects, with bias similar to that of the marginal maximum likelihood approach. Recommendations are made for researchers who wish to test for latent variable interaction effects.
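    The simulation design can be sketched as a data-generating process with two correlated first-order latent variables, an interaction term, and noisy indicators, as below; all loadings, path coefficients, and the sample size are assumed for illustration and are not the study's actual population parameters.

```python
# Minimal sketch (assumed parameter values) of a latent-interaction
# data-generating model: two correlated first-order latent variables, a
# structural equation with an interaction term, and observed indicators
# built from factor loadings plus measurement error.
import numpy as np

rng = np.random.default_rng(2)
n = 250                                    # one of the studied sample sizes
cov = [[1.0, 0.3], [0.3, 1.0]]             # first-order latent correlation
xi = rng.multivariate_normal([0, 0], cov, size=n)

# Structural model: eta = g1*xi1 + g2*xi2 + g3*(xi1*xi2) + disturbance
eta = 0.4 * xi[:, 0] + 0.4 * xi[:, 1] + 0.2 * xi[:, 0] * xi[:, 1] \
      + rng.normal(0, 1, size=n)

loading = 0.7                              # indicator quality (assumed)
x_ind = loading * np.repeat(xi, 3, axis=1) + rng.normal(0, 0.5, (n, 6))
y_ind = loading * eta[:, None] + rng.normal(0, 0.5, (n, 3))
print(x_ind.shape, y_ind.shape)            # (250, 6) (250, 3)
```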
  • A MULTISCALE MODEL FOR AN ATOMIC LAYER DEPOSITION PROCESS
    (2010) Dwivedi, Vivek Hari; Adomaitis, Raymond A; Chemical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Atomic layer deposition (ALD) is a deposition technique suitable for the controlled growth of thin films. During ALD, precursor gases are supplied to the reactor in an alternating sequence, producing individual atomic layers through self-limiting reactions. Thin films are grown conformally, with atomic layer control, over surfaces with topographical features. A very promising material system for ALD growth is aluminum oxide, which is highly desirable for both its physical and electronic characteristics: it has a very high band gap (~9 eV) and a high dielectric constant (k ~ 9). The precursors for aluminum oxide ALD range over aluminum halides, alkyls, and alkoxides for the aluminum-containing molecules; for the oxygen-containing molecules, choices include oxygen, water, hydrogen peroxide, and ozone. In this work a multiscale simulation is presented in which aluminum oxide is deposited inside anodic aluminum oxide (AAO) pores for the purpose of tuning the pore diameter. Controlling the pore diameter is an important step in the conversion of AAO into nanostructured catalytic membranes (NCM). Shrinking the pore size to a desired radius allows control of the residence time of molecules entering the pore and provides a method for molecular filtration. Furthermore, pore diameter control would allow optimization of precursor doses, making this a green process. Inherently, the ALD of AAO is characterized by two time scales: film growth occurs on the order of minutes to hours, while surface reactions are nearly instantaneous. Likewise, there are two length scales: film thickness and composition on the order of nanometers, and pore length on the order of microns. The surface growth is modeled with a lattice Monte Carlo simulation, while the diffusion of the precursor gas along the length of the pore is modeled with a Knudsen-diffusion-based transport model.
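    The pore-transport half of the model rests on the standard Knudsen diffusion coefficient, D_K = (2/3) r sqrt(8RT/(pi M)); the short calculation below evaluates it for assumed, representative values of pore radius, temperature, and precursor molar mass.

```python
# Illustrative calculation, not the dissertation code: the Knudsen diffusion
# coefficient for precursor transport down an AAO pore,
# D_K = (2/3) * r * sqrt(8 R T / (pi * M)). Pore radius, temperature, and the
# trimethylaluminum molar mass below are assumed, representative values.
import math

R = 8.314            # gas constant, J/(mol K)
T = 450.0            # K, a typical ALD temperature (assumed)
M = 0.0721           # kg/mol, trimethylaluminum (assumed precursor)
r = 20e-9            # m, pore radius (assumed)

mean_speed = math.sqrt(8 * R * T / (math.pi * M))   # mean molecular speed, m/s
d_knudsen = (2.0 / 3.0) * r * mean_speed            # m^2/s
print(f"Knudsen diffusivity: {d_knudsen:.3e} m^2/s")  # ~4.8e-06 m^2/s
```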
  • Modeling and validation of dosimetry measurement assumptions within the Armed Forces Radiobiology Research Institute TRIGA Mark F reactor and associated exposure facilities using Monte Carlo techniques
    (2009) Hall, Donald Edward; Modarres, Mohammad; Al-Sheikhly, Mohamad; Nuclear Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The TRIGA Mark F reactor at the Armed Forces Radiobiology Research Institute in Bethesda, Maryland, is a 1-megawatt steady-state reactor that can also be operated in pulse mode at powers of up to 2500 megawatts. It is characterized by a moveable core and two large exposure rooms, rather than the thermal column or beam ports found in most research reactors. A detailed model of the reactor and the associated exposure facilities was developed using the Monte Carlo N-Particle (MCNP) and Monte Carlo N-Particle Extended (MCNPX) software programs. The model was benchmarked against operational data from the reactor, including total core excess reactivity, control rod worths, and nominal fuel element worths. The model was then used to simulate burnup in individual fuel elements to determine the effect of core movement within the reactor pool on individual element burnup. Movement of the core with respect to the two exposure rooms was modeled to determine its effect on the radiation fields and the gamma and neutron fluxes within the rooms. Additionally, the model was used to demonstrate the effectiveness of the gadolinium paint applied within the exposure rooms to reduce the thermal neutron flux and the consequent Ar-41 production. The model showed good agreement with measured benchmark data across all applied metrics and additionally provided confirmation of data on dose rates within the exposure rooms. It also showed that, while there was some variation in the burnup of individual fuel elements depending on core position within the reactor pool, the overall effect was negligible for effective fuel management. Finally, the model demonstrated explicitly that the use of gadolinium paint within the exposure rooms was, and remains, an effective way of reducing the thermal flux and the subsequent Ar-41 production. The paint also produces a much steeper neutron flux gradient within the exposure rooms than would be obtained if neutrons were allowed to thermalize within the wood walls lining the rooms and then reenter the exposure facilities.
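    The link between thermal flux and Ar-41 production that the gadolinium paint exploits can be seen in a back-of-the-envelope activation estimate, sketched below; the flux, air density, and cross-section values are assumed, representative numbers, not results from the MCNP model.

```python
# Back-of-the-envelope sketch of why suppressing the thermal flux suppresses
# Ar-41: the activation rate of Ar-40 in the room air is proportional to the
# thermal neutron flux. The flux, air density, and cross section below are
# assumed, illustrative values.
phi_thermal = 1e8           # thermal neutron flux, n/cm^2/s (assumed)
sigma = 0.66e-24            # Ar-40 (n,gamma) cross section, cm^2 (~0.66 b)
argon_fraction = 0.0093     # Ar volume fraction in air
air_number_density = 2.5e19 # molecules/cm^3 at room conditions (approx.)

n_ar40 = air_number_density * argon_fraction * 0.996  # Ar-40 abundance ~99.6%
production_rate = phi_thermal * sigma * n_ar40        # Ar-41 atoms/cm^3/s
print(f"Ar-41 production: {production_rate:.2e} atoms/cm^3/s")
# Cutting the thermal flux with gadolinium paint scales this rate down linearly.
```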
  • IN-SITU MEASUREMENT OF EPITHELIAL TISSUE OPTICAL PROPERTIES: DEVELOPMENT AND IMPLEMENTATION OF DIFFUSE REFLECTANCE SPECTROSCOPY TECHNIQUES
    (2009) Wang, Quanzeng; Wang, Nam Sun; Chemical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Cancer is a severe threat to human health. Early detection is considered the best way to increase the chance of survival. While the traditional cancer detection method, biopsy, is invasive, noninvasive optical diagnostic techniques are revolutionizing the way cancer is diagnosed. Reflectance spectroscopy is one of these optical techniques showing promise as a diagnostic tool for pre-cancer detection. When a neoplasia occurs, morphologic and biochemical changes take place in the tissue, which in turn alter its optical properties and reflectance spectra. Therefore, a pre-cancer can be detected by extracting optical properties from reflectance spectra. This dissertation describes the construction of a fiberoptic-based reflectance system and the development of a series of modeling studies. The research aims to establish an improved understanding of the optical properties of mucosal tissues by analyzing reflectance signals at different wavelengths. The ultimate goal is to reveal the potential of reflectance-based optical diagnosis of pre-cancer. The research is detailed in Chapters 3 through 5; although related to one another, each chapter was designed to ultimately become a journal paper. In Chapter 3, a multi-wavelength fiberoptic system was constructed, evaluated, and implemented to determine internal tissue optical properties at ultraviolet A and visible wavelengths. A condensed Monte Carlo model was deployed to simulate light-tissue interaction and generate spatially distributed reflectance data. These data were used to train an inverse neural network model to extract tissue optical properties from reflectance. Optical properties of porcine mucosal and liver tissues were then measured. In Chapter 4, the condensed Monte Carlo method was extended so that it can rapidly simulate reflectance from a single illumination-detection fiber, enabling the calculation of large data sets. The model was used to study spectral reflectance changes due to breast cancer. The effect of adding an illumination-detection fiber to a linear-array fiber probe for optical property determination was also evaluated. In Chapter 5, the extraction of optical properties from two-layer tissues was investigated, along with the relationship between spatially resolved reflectance distributions and the optical properties of two-layer tissue. Based on all the aforementioned studies, the spatially resolved reflectance system coupled with condensed Monte Carlo and neural network models was found to be objective and appeared to be sensitive and accurate in quantitatively assessing optical property changes in mucosal tissues.
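    The forward-then-inverse workflow of Chapter 3 can be sketched conceptually: a forward model maps optical properties to spatially resolved reflectance, and a neural network is trained on that synthetic data to invert the mapping. In the Python sketch below, a simple analytic falloff stands in for the condensed Monte Carlo forward model, and all ranges, functional forms, and network settings are assumptions.

```python
# Conceptual sketch of the inverse-model step: a toy analytic forward model
# (standing in for the condensed Monte Carlo simulation) generates spatially
# resolved reflectance for known optical properties, and a neural network is
# trained to invert that mapping. All values below are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
radii = np.linspace(0.5, 5.0, 8)            # source-detector distances, mm

def toy_reflectance(mu_a, mu_s):
    """Stand-in forward model: diffusion-like exponential falloff."""
    mu_eff = np.sqrt(3 * mu_a * (mu_a + mu_s))
    return np.exp(-mu_eff * radii) / radii**2

# Columns: mu_a in [0.01, 1.0] 1/mm, reduced mu_s' in [5, 20] 1/mm (assumed)
props = rng.uniform([0.01, 5.0], [1.0, 20.0], size=(2000, 2))
refl = np.array([toy_reflectance(a, s) for a, s in props])

inverse_net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
inverse_net.fit(np.log(refl), props)        # reflectance -> optical properties
# Should recover values near (0.1, 10.0) for a held-out test curve:
print(inverse_net.predict(np.log(toy_reflectance(0.1, 10.0)[None, :])))
```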
  • Integrated Methodology for Thermal-Hydraulics Uncertainty Analysis (IMTHUA)
    (2007-01-25) Pour-Gol-Mohamad, Mohammad; Modarres, Mohammad; Mosleh, Ali; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation describes a new integrated uncertainty analysis methodology for "best estimate" thermal-hydraulics (TH) codes such as RELAP5. The main thrust of the methodology is to utilize all available types of data and information in an effective way to identify important sources of uncertainty and to assess the magnitude of their impact on the uncertainty of the TH code output measures. The proposed methodology is fully quantitative and uses a Bayesian approach to quantify the uncertainties in the predictions of TH codes. The methodology also uses the data and information for a more informed, evidence-based ranking and selection of TH phenomena through a modified PIRT method. The modification considers the importance of the various TH phenomena as well as their uncertainty importance. In identifying and assessing uncertainties, the proposed methodology treats the TH code as a white box, explicitly treating internal sub-model uncertainties and propagating such model uncertainties through the code structure along with the various input parameters. The TH code output is further corrected through Bayesian updating with available experimental data from integrated test facilities. The methodology utilizes data directly or indirectly related to the code output to account implicitly for missed or screened-out sources of uncertainty. It uses an efficient Monte Carlo sampling technique for the propagation of uncertainty based on modified Wilks sampling criteria. The methodology is demonstrated on the LOFT facility for a 200% cold-leg LBLOCA transient scenario.
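    The classical Wilks criterion behind the sampling scheme fixes the number of code runs needed for a tolerance-limit statement; the dissertation uses a modified version, but the base rule is a one-line calculation, sketched below.

```python
# Minimal sketch of the classical Wilks criterion underlying the sampling
# scheme: the smallest number of code runs N such that the largest observed
# output bounds the gamma-quantile with confidence beta, i.e.
# 1 - gamma**N >= beta. (The dissertation uses a *modified* criterion; this
# shows only the first-order, one-sided base rule.)
import math

def wilks_sample_size(gamma=0.95, beta=0.95):
    """First-order, one-sided Wilks sample size."""
    return math.ceil(math.log(1 - beta) / math.log(gamma))

print(wilks_sample_size())  # 59 runs for the familiar 95/95 statement
```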