Theses and Dissertations from UMD
Permanent URI for this community: http://hdl.handle.net/1903/2
New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date, so a given thesis or dissertation may take up to four months to appear in DRUM.
More information is available at Theses and Dissertations at University of Maryland Libraries.
Search Results
13 results
Item Characterization and Modeling of Two-Phase Heat Transfer in Chip-Scale Non-Uniformly Heated Microgap Channels (2010) Ali, Ihab A.; Bar-Cohen, Avram; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

A chip-scale, non-uniformly heated microgap channel, 100 to 500 microns in height, with dielectric fluid HFE-7100 providing direct single- and two-phase liquid cooling for a thermal test chip with localized heat flux reaching 100 W/cm2, is experimentally characterized and numerically modeled. Single-phase heat transfer and hydraulic characterization is performed to establish the baseline single-phase performance of the microgap channel and to validate the mesh-intensive CFD numerical model developed for the test channel. Convective heat transfer coefficients for HFE-7100 flowing in a 100-micron microgap channel reached 9 kW/m2K at a fluid velocity of 6.5 m/s. Despite the highly non-uniform boundary conditions imposed on the microgap channel, the CFD simulation agreed with the experimental data to within 5%, while the discrepancy with the predictions of the classical, "ideal" channel correlations in the literature reached 20%. A detailed investigation of two-phase heat transfer in non-ideal microgap channels, with developing flow and significant non-uniformities in heat generation, was performed. Significant temperature non-uniformities were observed with non-uniform heating: the wall temperature gradient exceeded 30°C with a heat flux gradient of 3-30 W/cm2 for the quadrant-die heating pattern, compared to a 20°C gradient and a 7-14 W/cm2 heat flux gradient for the uniform heating pattern, at 25 W heat input and 1500 kg/m2s mass flux. Using an inverse computation technique to determine the heat flow into the wetted microgap channel, average wall heat transfer coefficients were found to vary in a complex fashion with channel height, flow rate, heat flux, and heating pattern, and to typically display an inverse parabolic segment of a previously observed M-shaped variation with quality for two-phase thermal transport. Examination of heat transfer coefficients sorted by flow regime yielded overall agreement to within 31% between the predictions of the Chen correlation and the 24 data points classified as annular flow, using a recently proposed intermittent/annular transition criterion. A semi-numerical first-order technique using the Chen correlation was found to yield acceptable prediction accuracy (17%) for the wall temperature distribution and hot spots in non-uniformly heated "real world" microgap channels cooled by two-phase flow. Heat transfer coefficients in the 100-micron channel were found to reach an annular-flow peak of ~8 kW/m2K at G = 1500 kg/m2s and a vapor quality of x = 10%. In a 500-micron channel, the annular heat transfer coefficient reached 9 kW/m2K at 270 kg/m2s mass flux and 14% vapor quality. The peak two-phase HFE-7100 heat transfer coefficients were nearly 2.5-4 times higher (at similar mass fluxes) than the single-phase HFE-7100 values and sometimes exceeded the cooling capability associated with water under forced convection. An alternative classification of heat transfer coefficients, based on the variable slope of the observed heat transfer coefficient curve, was found to yield good agreement with the Chen correlation predictions in the pseudo-annular flow regime (22%) but fell to 38% agreement with the Shah correlation for data in the pseudo-intermittent flow regime.
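As a quick order-of-magnitude companion to the single-phase baseline above, the sketch below applies the classical Dittus-Boelter correlation to a 100-micron HFE-7100 gap at 6.5 m/s. The fluid properties are representative room-temperature assumptions rather than values from the thesis, and, as the abstract notes, such "ideal" channel correlations deviated from the microgap measurements by as much as 20%, so only rough agreement with the reported 9 kW/m2K should be expected.

    # Dittus-Boelter estimate (Nu = 0.023 Re^0.8 Pr^0.4) for a thin microgap.
    # HFE-7100 properties near 25 C are assumed, representative values only.
    rho = 1510.0    # density, kg/m^3 (assumed)
    mu = 5.8e-4     # dynamic viscosity, Pa*s (assumed)
    cp = 1183.0     # specific heat, J/(kg*K) (assumed)
    k = 0.069       # thermal conductivity, W/(m*K) (assumed)

    gap = 100e-6    # channel height, m
    velocity = 6.5  # mean fluid velocity, m/s

    d_h = 2.0 * gap                  # hydraulic diameter of a wide parallel-plate gap
    re = rho * velocity * d_h / mu   # Reynolds number
    pr = mu * cp / k                 # Prandtl number
    nu = 0.023 * re**0.8 * pr**0.4   # Nusselt number, turbulent correlation
    h = nu * k / d_h                 # heat transfer coefficient, W/(m^2*K)
    print(f"Re = {re:.0f}, Pr = {pr:.1f}, h = {h / 1000:.1f} kW/m2K")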
Item Flux Maps Obtained from Core Geometry Approximations: Monte Carlo Simulations and Benchmark Measurements for a 250 kW TRIGA Reactor (2009) Mohamed, Ali Bellou; Al-Sheikhly, Mohamad; Silverman, Joseph; Material Science and Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Two MCNP models (detailed and approximated) of the University of Maryland Training Reactor were created. The detailed model attempted to simulate the reactor according to engineering specifications, while the simplified model eliminated all structural materials above and below the core. Neutron flux spectrum calculations within the core showed that the results of the two models agreed to within 0.5%. It was concluded that reactors equipped with standard TRIGA fuels enriched to 20 percent in uranium-235 can be modeled with all structures above and below the core eliminated entirely, without increasing the error due to geometry modeling simplifications of the core. In TRIGA reactors supplied with standard TRIGA fuels enriched to 20 percent in U-235, the graphite reflectors above and below the fuel act as "neutron energy regulators": neutrons reflected back into the core through the graphite reflectors quickly become thermalized even if their energies were altered by the change in material properties above and below the core. Results from both MCNP models agree well with measured data. It was also found that simplification of the target geometry leads to substantial uncertainty in the calculated results. The neutron energy spectrum, thermal flux, and total flux were calculated at the thermal column access plug face, in the pneumatic transfer system rabbit, and at the top and bottom sections of the most central fuel element. The thermal flux and the total flux at the thermal column access plug face both agreed with measured data within a 5% uncertainty. The thermal flux, fast flux, and total flux in the rabbit differ from the measured data by 18.8%, 35%, and 5.7%, respectively. The relatively high uncertainty (in the neutron energy distribution but not the total neutron flux) was attributed to the use of air as the target irradiated inside the rabbit: for such a thin target (15 mg/cm2), a precise neutron balance between reflection and absorption events is difficult to obtain, which alters the thermal or fast flux values. The contribution of this work to reactor users is a virtual reactor model that compares well with experiment. Experiments utilizing the reactor's experimental facilities (thermal column, through tube, pneumatic transfer system rabbit, and beam ports) can now be optimized before they are executed. The contribution of this work to the research reactor community is that research reactors equipped with standard TRIGA fuels can be modeled with core geometry approximations, such as those adopted in this work, without affecting the precision and accuracy of the Monte Carlo calculations.
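The benchmarking reported above comes down to relative differences between calculated and measured fluxes. The sketch below shows that comparison; the flux magnitudes are hypothetical placeholders chosen only so the relative differences reproduce the percentages quoted in the abstract.

    # Model-vs-measurement comparison for the rabbit position. The "calculated"
    # and "measured" flux values below are hypothetical placeholders; only the
    # resulting relative differences (18.8%, 35%, 5.7%) match the abstract.
    def rel_diff(calculated, measured):
        """Relative difference of a calculated value from a measurement, percent."""
        return 100.0 * abs(calculated - measured) / measured

    rabbit = {                       # flux in n/(cm^2*s): (calculated, measured)
        "thermal flux": (4.75e12, 4.00e12),
        "fast flux":    (2.70e12, 2.00e12),
        "total flux":   (7.40e12, 7.00e12),
    }
    for name, (calc, meas) in rabbit.items():
        print(f"{name}: {rel_diff(calc, meas):.1f}% difference from measurement")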
Item A DATA-INFORMED MODEL OF PERFORMANCE SHAPING FACTORS FOR USE IN HUMAN RELIABILITY ANALYSIS (2009) Groth, Katrina M.; Mosleh, Ali; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Many Human Reliability Analysis (HRA) models use Performance Shaping Factors (PSFs) to incorporate human elements into system safety analysis and to calculate the Human Error Probability (HEP). Current HRA methods rely on different sets of PSFs that range from a few to over 50, with varying degrees of interdependency among them. This interdependency is observed in almost every set of PSFs, yet few HRA methods offer a way to account for it. The methods that do address interdependencies generally do so by varying multipliers in linear or log-linear formulas; these relationships could be more accurately represented in a causal model of PSF interdependencies. This dissertation introduces a methodology to produce a Bayesian Belief Network (BBN) of interactions among PSFs. It also presents a set of fundamental guidelines for the creation of a PSF set, a hierarchy of PSFs developed specifically for causal modeling, and a set of models developed using currently available data. The models, methodology, and PSF set were developed using nuclear power plant data from two sources: information collected by the University of Maryland for the Information-Decision-Action model [1] and data from the Human Events Repository and Analysis (HERA) database [2], currently under development by the United States Nuclear Regulatory Commission. Creation of the methodology, the PSF hierarchy, and the models was an iterative process that incorporated information from available data, current HRA methods, and expert workshops. The fundamental guidelines are the result of insights gathered while developing the methodology; these guidelines were applied to the final PSF hierarchy. The PSF hierarchy reduces overlap among the PSFs so that patterns of dependency observed in the data can be attributed to PSF interdependencies rather than overlapping definitions. It includes multiple levels of generic PSFs that can be expanded or collapsed for different applications. The model development methodology employs correlation and factor analysis to systematically collapse the PSF hierarchy and form the model structure. Factor analysis is also used to identify Error Contexts (ECs): specific PSF combinations that together produce an increased probability of human error (versus the net effect of the PSFs acting alone). Three models were created to demonstrate how the methodology can be used to provide different types of data-informed insights. By employing Bayes' Theorem, the resulting model can be used to replace linear calculations for HEPs used in Probabilistic Risk Assessment. When additional data become available, the methodology can be used to produce updated causal models to further refine HEP values.
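To make the causal-model idea concrete, here is a minimal sketch that marginalizes a two-PSF Bayesian network into a single HEP. The structure, priors, and conditional probability table are invented for illustration, not taken from the dissertation; the disproportionate joint-degradation entry mirrors the error-context notion described above.

    # Marginal HEP from a toy two-PSF Bayesian Belief Network (all numbers assumed).
    from itertools import product

    p_degraded = {"stress": 0.2, "training_gap": 0.1}  # priors for degraded PSF states

    # P(error | stress degraded?, training degraded?) -- illustrative CPT in which
    # the joint degradation is worse than the net effect of each PSF acting alone.
    cpt = {
        (False, False): 0.001,
        (True, False): 0.010,
        (False, True): 0.005,
        (True, True): 0.200,
    }

    hep = 0.0
    for stress, gap in product([False, True], repeat=2):
        p_state = ((p_degraded["stress"] if stress else 1 - p_degraded["stress"])
                   * (p_degraded["training_gap"] if gap else 1 - p_degraded["training_gap"]))
        hep += p_state * cpt[(stress, gap)]
    print(f"Marginal HEP = {hep:.5f}")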
Item Modeling and validation of dosimetry measurement assumptions within the Armed Forces Radiobiology Research Institute TRIGA Mark F reactor and associated exposure facilities using Monte Carlo techniques (2009) Hall, Donald Edward; Modarres, Mohammad; Al-Sheikhly, Mohamad; Nuclear Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The TRIGA Mark F reactor at the Armed Forces Radiobiology Research Institute in Bethesda, Maryland is a 1-megawatt steady-state reactor that can also be operated in pulse mode at a power of up to 2500 megawatts. It is characterized by a moveable core and two large exposure rooms, rather than the thermal column or beam ports found in most research reactors. A detailed model of the reactor and the associated exposure facilities was developed using the Monte Carlo N-Particle (MCNP) and Monte Carlo N-Particle Extended (MCNPX) software programs. The model was benchmarked against operational data from the reactor, including total core excess reactivity, control rod worths, and nominal fuel element worths. The model was then used to simulate burnup within individual fuel elements to determine the effect of core movement within the reactor pool on individual element burnup. Movement of the core with respect to the two exposure rooms was modeled to determine its effect on the radiation fields and gamma and neutron fluxes within the exposure rooms. Additionally, the model was used to demonstrate the effectiveness of gadolinium paint within the exposure rooms in reducing thermal neutron production and subsequent Ar-41 production. The model showed good agreement with measured benchmark data across all applied metrics and additionally provided confirmation of data on dose rates within the exposure rooms. It also showed that, while there was some variation of burnup within individual fuel elements based on core position within the reactor pool, the overall effect was negligible for effective fuel management within the core. Finally, the model demonstrated explicitly that the use of gadolinium paint within the exposure rooms was, and remains, an effective way of reducing the thermal flux and subsequent Ar-41 production within the exposure rooms. It also demonstrated that the gadolinium paint resulted in a much steeper neutron flux gradient within the exposure rooms than would have been obtained had neutrons been allowed to thermalize within the wood walls lining the rooms and then reenter the exposure facilities.

Item A Predictive Model of Nuclear Power Plant Crew Decision-Making and Performance in a Dynamic Simulation Environment (2009) Coyne, Kevin; Mosleh, Ali; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The safe operation of complex systems such as nuclear power plants requires close coordination between the human operators and plant systems. In order to maintain an adequate level of safety following an accident or other off-normal event, operators are often called upon to perform complex tasks during dynamic situations with incomplete information. The safety of such complex systems can be greatly improved if the conditions that could lead operators to make poor decisions and commit erroneous actions during these situations can be predicted and mitigated. The primary goal of this research project was the development and validation of a cognitive model capable of simulating nuclear plant operator decision-making during accident conditions. Dynamic probabilistic risk assessment methods can improve the prediction of human error events by providing rich contextual information and an explicit consideration of feedback arising from man-machine interactions. The Accident Dynamics Simulator paired with the Information, Decision, and Action in a Crew context cognitive model (ADS-IDAC) shows promise for predicting situational contexts that might lead to human error events, particularly knowledge-driven errors of commission. ADS-IDAC generates a discrete dynamic event tree (DDET) by applying simple branching rules that reflect variations in crew responses to plant events and system status changes. Branches can be generated to simulate slow or fast procedure execution speed, skipping of procedure steps, reliance on memorized information, activation of mental beliefs, variations in control inputs, and equipment failures. Complex operator mental models of plant behavior that guide crew actions can be represented within the ADS-IDAC mental belief framework and used to identify situational contexts that may lead to human error events. This research increased the capabilities of ADS-IDAC in several key areas. The ADS-IDAC computer code was improved to support additional branching events and provide a better representation of the IDAC cognitive model. An operator decision-making engine capable of responding to dynamic changes in situational context was implemented. The IDAC human performance model was fully integrated with a detailed nuclear plant model in order to realistically simulate plant accident scenarios. Finally, the improved ADS-IDAC model was calibrated, validated, and updated using actual nuclear plant crew performance data. This research led to the following general conclusions: (1) A relatively small number of branching rules can efficiently capture a wide spectrum of crew-to-crew variability. (2) Compared to traditional static risk assessment methods, ADS-IDAC can provide a more realistic and integrated assessment of human error events by directly determining the effect of operator behaviors on plant thermal-hydraulic parameters. (3) The ADS-IDAC approach provides an efficient framework for capturing actual operator performance data such as timing of operator actions, mental models, and decision-making activities.
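The branching-rule idea lends itself to a compact illustration. The toy discrete dynamic event tree below spawns alternative crew responses from a few rules and enumerates the resulting scenario likelihoods; the rules and probabilities are illustrative assumptions, not ADS-IDAC's actual rule set.

    # Toy DDET: each branching rule offers alternative crew responses with
    # probabilities; enumerating the combinations yields scenario likelihoods.
    from itertools import product

    branching_rules = {
        "procedure_pace":   [("nominal", 0.70), ("slow", 0.15), ("fast", 0.15)],
        "step_skipped":     [("no", 0.90), ("yes", 0.10)],
        "relies_on_memory": [("no", 0.80), ("yes", 0.20)],
    }

    scenarios = []
    for combo in product(*branching_rules.values()):
        labels = tuple(label for label, _ in combo)
        prob = 1.0
        for _, p in combo:
            prob *= p
        scenarios.append((labels, prob))

    # Even a small rule set spans a wide spectrum of crew-to-crew variability.
    print(f"{len(scenarios)} branches, total probability "
          f"{sum(p for _, p in scenarios):.3f}")
    for labels, p in sorted(scenarios, key=lambda s: -s[1])[:3]:
        print(labels, f"p = {p:.3f}")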
Item Towards A Formal And Scalable Approach For Quantifying Software Reliability At Early Development Stages (2009) Kong, Wende; Smidts, Carol; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Problems that originate in early development stages can have a lasting influence on the reliability, safety, and cost of a software system. The requirements document, which is usually available at the requirements analysis stage, must be correct, unambiguous, and complete if the rest of the development effort is to succeed. The ability to identify faults in requirements and predict the reliability of a software system early in its development can help organizations make informed decisions about corrective actions and improve the system's quality in a cost-effective manner. A review of the literature reveals that existing approaches are unsuited to providing trustworthy reliability prediction, either because they ignore the requirements documents or because they detect requirements faults in an informal and fairly sketchy way. This study explores the use of a preselected software reliability measurement for early software fault detection and reliability prediction. This measurement, originally a black-box testing technique, was broadly recognized for its ability to detect incomplete and ambiguous requirements, although no information was found in the literature about how to take advantage of its power. This study mathematically formalized the measurement to enhance its rigor, repeatability, and scalability, and further extended it into an effective requirements-fault detection technique. An automation-oriented algorithm was developed for quantifying the impact of the detected requirements faults on software reliability. The feasibility and scalability of the proposed approach for early fault detection and reliability prediction were examined using two real applications. The results clearly confirmed its feasibility and usefulness, particularly when no failure data is available and other methods are not applicable. Scalability barriers were also identified in the approach. An empirical study was thus conducted to gain insight into the nature of these technical barriers, and as an attempt to overcome them, a set of rules was proposed based on the observed patterns. Finally, a preliminary controlled experiment was conducted to evaluate the usability of the proposed rules. This study will enable software project stakeholders to effectively detect requirements faults and assess the quality of requirements early in development, and ultimately lead to improved software reliability if the identified faults are removed in time. Software project practitioners, regulators, and policy makers involved in the certification of software systems can benefit most from the techniques proposed in this study.
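As one simple way to connect a count of detected requirements faults to an early reliability figure (a stand-in, not the dissertation's formalism), the sketch below folds the remaining fault count into the failure intensity of an exponential reliability model; the per-fault intensity is an illustrative assumption.

    # Early reliability estimate from remaining requirements faults, assuming an
    # exponential model R(t) = exp(-N_remaining * lambda_per_fault * t).
    import math

    def early_reliability(faults_detected, faults_fixed, lam_per_fault, mission_time):
        remaining = faults_detected - faults_fixed
        return math.exp(-remaining * lam_per_fault * mission_time)

    # 12 faults detected at the requirements stage, 9 removed in time;
    # assumed 1e-4 failures/hour contributed by each remaining fault.
    r = early_reliability(12, 9, 1e-4, 1000.0)
    print(f"R(1000 h) = {r:.3f}")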
Item Integrated Propulsion and Power Modeling for Bimodal Nuclear Thermal Rockets (2007-10-08) Clough, Joshua; Lewis, Mark J; Aerospace Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Bimodal nuclear thermal rocket (BNTR) engines have been shown to reduce the weight of space vehicles to the Moon, Mars, and beyond by utilizing a common reactor for propulsion and power generation. These savings lead to reduced launch vehicle costs and/or increased mission safety and capability. Experimental work in the Rover/NERVA program demonstrated the feasibility of NTR systems for trajectories to Mars, and numerous recent studies have demonstrated the economic and performance benefits of BNTR operation. Relatively little, however, is known about the reactor-level operation of a BNTR engine. The objective of this dissertation is to develop a numerical BNTR engine model in order to study the feasibility and component-level impact of utilizing a NERVA-derived reactor as a heat source for both propulsion and power. The primary contribution is to provide the first-of-its-kind model and analysis of a NERVA-derived BNTR engine. Numerical component models have been modified and created for the NERVA reactor fuel elements and tie tubes, including 1-D coolant thermodynamics and radial thermal conduction with heat generation. A BNTR engine system model has been created to design and analyze an engine employing an expander-cycle nuclear rocket and a Brayton-cycle power generator using the same reactor. Design-point results show that a 316 MWt reactor produces a thrust and specific impulse of 66.6 kN and 917 s, respectively. The same reactor can be run at 73.8 kWt to produce the necessary 16.7 kW of electric power with a Brayton-cycle generator. This demonstrates the feasibility of BNTR operation with a NERVA-derived reactor but also indicates that the reactor control system must be able to operate with precision across a wide power range, and that transient analysis of reactor decay heat merits future investigation. Results also identify a significant reactor pressure-drop limitation during propulsion and power-generation operation caused by poor tie tube thermal conductivity. This leads to the conclusion that, while BNTR operation is possible with a NERVA-derived reactor, doing so requires careful consideration of the Brayton-cycle design point and fuel element survivability.
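The quoted design point can be sanity-checked with the standard rocket relations. The sketch below recovers the implied propellant mass flow from thrust and specific impulse, then the bulk temperature rise from reactor power; the hydrogen specific heat is a representative assumption, and the estimate ignores nozzle losses and property variation. The resulting temperature rise, on the order of 3000 K, is consistent with the ~900 s class of hydrogen-propellant specific impulse.

    # Back-of-envelope check of the quoted design point using standard rocket
    # relations; ignores nozzle losses and temperature-dependent properties.
    G0 = 9.80665           # standard gravity, m/s^2

    thrust = 66.6e3        # N (from the abstract)
    isp = 917.0            # s (from the abstract)
    reactor_power = 316e6  # W thermal (from the abstract)

    mdot = thrust / (isp * G0)       # propellant mass flow, kg/s
    q_per_kg = reactor_power / mdot  # heat added per kg of propellant, J/kg

    cp_h2 = 14.3e3                   # J/(kg*K), hydrogen, assumed representative
    delta_t = q_per_kg / cp_h2       # implied bulk propellant temperature rise, K
    print(f"mdot = {mdot:.1f} kg/s, q = {q_per_kg / 1e6:.1f} MJ/kg, dT = {delta_t:.0f} K")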
Item Reliability-Based Design Of Piping: Internal Pressure, Gravity, Earthquake, and Thermal Expansion (2007-08-09) Avrithi, Kleio; Ayyub, Bilal M.; Civil Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Although reliability theory has offered the means for reasonably accounting for the design uncertainties of structural components, limited effort has been made to estimate and control the probability of failure for mechanical components such as piping. The ASME B&PV Code, Section III, used today for the design of safety piping in nuclear plants, is based on the traditional Allowable Stress Design (ASD) method. This dissertation can be considered a first step toward the reliability-based design of nuclear safety piping. Design equations are developed according to the Load and Resistance Factor Design (LRFD) method. The loads addressed are sustained weight, internal pressure, and dynamic loading (e.g., earthquake). The dissertation provides load combinations and a database of statistical information on basic variables (strength of steel, geometry, and loads). Uncertainties associated with selected ultimate-strength prediction models (burst or yielding due to internal pressure, and the ultimate bending moment capacity) are quantified for piping. The procedure is based on evaluation of experimental results cited in the literature. Partial load and resistance factors are computed for the load combinations and for selected values of the target reliability index, β, and design examples demonstrate the computations. A probability-based method specifically for Class 2 and 3 piping is proposed by considering only cyclic moment loading (e.g., thermal expansion). The study's conclusions and suggestions can guide future research.
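The LRFD format replaces ASD's single safety factor with partial factors on the resistance and on each load. A minimal sketch of the resulting design check follows; the factor and load values are illustrative assumptions, not the partial factors computed in the dissertation.

    # LRFD-style check: phi * Rn >= sum(gamma_i * Qi). All numbers are assumed.
    def lrfd_ok(phi, r_n, factored_loads):
        """True if factored resistance covers the factored load combination."""
        demand = sum(gamma * q for gamma, q in factored_loads)
        return phi * r_n >= demand

    # Pipe section with nominal moment capacity 160 kN*m, resistance factor 0.9,
    # checked against dead-weight, pressure, and earthquake moments (kN*m).
    loads = [(1.2, 40.0),  # gamma_D * dead-weight moment
             (1.2, 25.0),  # gamma_P * pressure-induced moment
             (1.0, 55.0)]  # gamma_E * earthquake moment
    print(lrfd_ok(phi=0.9, r_n=160.0, factored_loads=loads))  # True: 144 >= 133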
Item Radiation-induced Dechlorination of PCBs and Chlorinated Pesticides and the Destruction of the Hazardous Organic Solvents in Waste Water (2007-05-02) Chaychian, Mahnaz; Al-Sheikhly, Mohamad; Chemical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

This dissertation presents research on the approach, feasibility, and mechanisms of using high-energy electrons for the dechlorination of polychlorinated biphenyls (PCBs) in transformer oil, PCBs and chlorinated pesticides in marine sediment, and hazardous organic solvents in waste water. The remediation of the organic contaminants by ionizing radiation is achieved by means of both reduction and oxidation processes: PCBs in transformer oil and in marine sediment can be effectively dechlorinated by reduction, while toxic organic compounds in water are removed by oxidation. The complete conversion of 2,2',6,6'-tetrachlorobiphenyl (PCB 54) in transformer oil to benign products is achieved without degradation of the oil itself. It requires 200 kGy of gamma irradiation of transformer oil containing PCB 54 (0.27 mg/g) to achieve >99% destruction of the PCB. Analysis of samples obtained as a function of dose demonstrates gradual degradation of PCB 54 and successive formation and degradation of trichloro-, dichloro-, and monochlorobiphenyl, leading to the environmentally acceptable products biphenyl and inorganic chloride. The mechanisms and kinetics of reductive degradation, obtained by pulse radiolysis studies, are discussed. Radiolysis may be of practical interest because the transformer oil may be re-used following treatment with little or no clean-up. Radiolytic degradation of aqueous suspensions of PCBs in marine sediments in the presence of isopropanol and food-grade surfactants was also studied. Additives such as an alcohol were necessary to enhance the radiolytic yield and the dechlorination of PCBs. Conditions are demonstrated under which surfactants can be an effective approach for the enhanced remediation of chlorinated compounds in organic-rich environments such as marine sediments. The results presented on the radiolytic treatment of marine sediment in the presence of additives for the degradation of PCBs advance the chemistry of this costly process, which may prove to be competitive with available alternatives. Also presented are results from a study of the oxidative and reductive effects of electron-beam irradiation on the concentrations of six organic solvents in water, prepared to mimic a pharmaceutical waste stream. Saturation with ozone did not sufficiently lower the unacceptably high dose requirements to meet environmental standards.

Item PROBABILISTIC MODELS TO ESTIMATE FIRE-INDUCED CABLE DAMAGE AT NUCLEAR POWER PLANTS (2007-04-10) Valbuena, Genebelin R; Modarres, Mohammad; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Even though numerous PRAs have shown that fire can be a major contributor to nuclear power plant risk, some specific areas of knowledge, such as the prediction of fire-induced damage to electrical cables and circuits and its potential effects on plant safety, remain poorly understood, largely for lack of approaches and models that support consistent, objective assessment. This report discusses three different models for estimating the likelihood of fire-induced cable damage given a specified fire profile: the kinetic model, the heat transfer model, and the IR "K Factor" model. These models are not only based on statistical analysis of data available in the open literature; to the greatest extent possible, they use physics-based principles to describe the underlying mechanisms of failure that take place among electrical cables upon heating by external fires. The characterization of cable damage, and consequently the loss of functionality of electrical cables in fire, is a complex phenomenon that depends on a variety of intrinsic factors, such as cable materials and dimensions, and extrinsic factors, such as electrical and mechanical loads on the cables, heat flux severity, and exposure time. Some of these factors are difficult to estimate even in a well-characterized fire, not only because of variability in material composition and physical arrangement, but also because of the lack of objective frameworks and theoretical models for studying the behavior of polymeric wire insulation under dynamic external thermal insults. The results of this research will 1) help to develop a consistent framework for predicting the likelihood of fire-induced cable failure modes, and 2) provide guidance for evaluating and/or reducing the risk associated with these failure modes in existing and new power plant facilities. Among the models evaluated, the physics-based heat transfer model takes into account the properties and characteristics of the cables and cable materials and the characteristics of the thermal insult. This model can be used to estimate the probability of cable damage under different thermal conditions.
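To illustrate the flavor of a physics-based cable damage estimate (a drastic simplification of the heat transfer model discussed above, not the report's actual model), the sketch below treats a cable segment as a lumped thermal mass exposed to hot fire gases and solves for the time to reach an assumed jacket damage temperature; every numerical value is an assumption.

    # Lumped-capacitance cable heat-up: m*cp*dT/dt = h*A*(T_fire - T).
    import math

    def time_to_damage(t_fire, t0, t_damage, h, area, mass, cp):
        """Time (s) for the lumped cable temperature to reach t_damage."""
        tau = mass * cp / (h * area)                  # thermal time constant, s
        theta = (t_fire - t_damage) / (t_fire - t0)   # remaining temperature ratio
        if theta <= 0:
            return float("inf")  # fire too cool to ever reach the threshold
        return -tau * math.log(theta)

    # 1 m of 25 mm cable (1.1 kg, PVC-like cp) in a 400 C gas layer, starting
    # at 40 C, with an assumed 205 C jacket damage threshold.
    t = time_to_damage(t_fire=400.0, t0=40.0, t_damage=205.0,
                       h=15.0, area=math.pi * 0.025 * 1.0, mass=1.1, cp=1500.0)
    print(f"Estimated time to damage threshold: {t / 60:.1f} min")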