Mechanical Engineering
Permanent URI for this community: http://hdl.handle.net/1903/2263
Search Results (6 items)
Item: Characterization and Modeling of Two-Phase Heat Transfer in Chip-Scale Non-Uniformly Heated Microgap Channels (2010)
Ali, Ihab A.; Bar-Cohen, Avram; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
A chip-scale, non-uniformly heated microgap channel, 100 to 500 microns in height, with the dielectric fluid HFE-7100 providing direct single- and two-phase liquid cooling for a thermal test chip with localized heat flux reaching 100 W/cm2, is experimentally characterized and numerically modeled. Single-phase heat transfer and hydraulic characterization is performed to establish the single-phase baseline performance of the microgap channel and to validate the mesh-intensive CFD numerical model developed for the test channel. Convective heat transfer coefficients for HFE-7100 flowing in a 100-micron microgap channel reached 9 kW/m2K at a fluid velocity of 6.5 m/s. Despite the highly non-uniform boundary conditions imposed on the microgap channel, the CFD model simulation gave excellent agreement with the experimental data (to within 5%), while the discrepancy with the predictions of the classical "ideal" channel correlations in the literature reached 20%. A detailed investigation of two-phase heat transfer in non-ideal microgap channels, with developing flow and significant non-uniformities in heat generation, was performed. Significant temperature non-uniformities were observed with non-uniform heating: the wall temperature gradient exceeded 30°C with a heat flux gradient of 3-30 W/cm2 for the quadrant-die heating pattern, compared to a 20°C gradient and a 7-14 W/cm2 heat flux gradient for the uniform heating pattern, at 25 W heat input and 1500 kg/m2s mass flux. Using an inverse computation technique to determine the heat flow into the wetted microgap channel, average wall heat transfer coefficients were found to vary in a complex fashion with channel height, flow rate, heat flux, and heating pattern, and to typically display an inverse parabolic segment of a previously observed M-shaped variation with quality for two-phase thermal transport. Examination of heat transfer coefficients sorted by flow regime yielded an overall agreement of 31% between the predictions of the Chen correlation and the 24 data points classified as being in annular flow, using a recently proposed intermittent/annular transition criterion. A semi-numerical first-order technique, using the Chen correlation, was found to yield acceptable prediction accuracy (17%) for the wall temperature distribution and hot spots in non-uniformly heated "real world" microgap channels cooled by two-phase flow. Heat transfer coefficients in the 100-micron channel were found to reach an annular-flow peak of ~8 kW/m2K at G = 1500 kg/m2s and a vapor quality of x = 10%. In a 500-micron channel, the annular heat transfer coefficient was found to reach 9 kW/m2K at 270 kg/m2s mass flux and a 14% vapor quality. The peak two-phase HFE-7100 heat transfer coefficient values were nearly 2.5-4 times higher (at similar mass fluxes) than the single-phase HFE-7100 values and sometimes exceeded the cooling capability associated with water under forced convection. An alternative classification of heat transfer coefficients, based on the variable slope of the observed heat transfer coefficient curve, was found to yield good agreement with the Chen correlation predictions in the pseudo-annular flow regime (22%) but to fall to 38% when compared with the Shah correlation for data in the pseudo-intermittent flow regime.
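For context on the comparison baseline, the sketch below outlines the standard superposition form of the Chen correlation (a convective term scaled by an enhancement factor F plus a nucleate-boiling term scaled by a suppression factor S). The function name and argument list are illustrative assumptions; saturated HFE-7100 property values must be supplied by the user, and the dissertation's own implementation and its intermittent/annular transition criterion are not reproduced here.

```python
# Sketch of the classical Chen (1966) flow-boiling correlation. All property
# arguments are saturated-liquid/vapor values for the fluid and pressure of
# interest (placeholders, not measured HFE-7100 data).
def chen_htc(G, x, D_h, dT_sat, dP_sat,
             rho_l, rho_v, mu_l, mu_v, k_l, cp_l, sigma, h_fg):
    """Two-phase heat transfer coefficient h_tp = F*h_conv + S*h_nb, in W/m2K.
    dT_sat is the wall superheat; dP_sat is the corresponding vapor-pressure rise."""
    Re_l = G * (1.0 - x) * D_h / mu_l          # liquid-fraction Reynolds number
    Pr_l = cp_l * mu_l / k_l                   # liquid Prandtl number

    # Convective (macro) contribution: Dittus-Boelter on the liquid fraction
    h_conv = 0.023 * Re_l**0.8 * Pr_l**0.4 * k_l / D_h

    # Enhancement factor F from the Martinelli parameter
    inv_Xtt = (x / (1.0 - x))**0.9 * (rho_l / rho_v)**0.5 * (mu_v / mu_l)**0.1
    F = 1.0 if inv_Xtt <= 0.1 else 2.35 * (inv_Xtt + 0.213)**0.736

    # Nucleate-boiling (micro) contribution: Forster-Zuber pool boiling
    h_nb = (0.00122 * k_l**0.79 * cp_l**0.45 * rho_l**0.49
            / (sigma**0.5 * mu_l**0.29 * h_fg**0.24 * rho_v**0.24)
            * dT_sat**0.24 * dP_sat**0.75)

    # Suppression factor S based on the two-phase Reynolds number
    S = 1.0 / (1.0 + 2.53e-6 * (Re_l * F**1.25)**1.17)

    return F * h_conv + S * h_nb
```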
Item: A Data-Informed Model of Performance Shaping Factors for Use in Human Reliability Analysis (2009)
Groth, Katrina M.; Mosleh, Ali; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Many Human Reliability Analysis (HRA) models use Performance Shaping Factors (PSFs) to incorporate human elements into system safety analysis and to calculate the Human Error Probability (HEP). Current HRA methods rely on different sets of PSFs that range from a few to over 50, with varying degrees of interdependency among the PSFs. This interdependency is observed in almost every set of PSFs, yet few HRA methods offer a way to account for dependency among PSFs. The methods that do address interdependencies generally do so by varying different multipliers in linear or log-linear formulas. These relationships could be more accurately represented in a causal model of PSF interdependencies. This dissertation introduces a methodology to produce a Bayesian Belief Network (BBN) of interactions among PSFs. The dissertation also presents a set of fundamental guidelines for the creation of a PSF set, a hierarchy of PSFs developed specifically for causal modeling, and a set of models developed using currently available data. The models, methodology, and PSF set were developed using nuclear power plant data available from two sources: information collected by the University of Maryland for the Information-Decision-Action model [1] and data from the Human Events Repository and Analysis (HERA) database [2], currently under development by the United States Nuclear Regulatory Commission. Creation of the methodology, the PSF hierarchy, and the models was an iterative process that incorporated information from available data, current HRA methods, and expert workshops. The fundamental guidelines are the result of insights gathered during the process of developing the methodology; these guidelines were applied to the final PSF hierarchy. The PSF hierarchy reduces overlap among the PSFs so that patterns of dependency observed in the data can be attributed to PSF interdependencies rather than to overlapping definitions. It includes multiple levels of generic PSFs that can be expanded or collapsed for different applications. The model development methodology employs correlation and factor analysis to systematically collapse the PSF hierarchy and form the model structure. Factor analysis is also used to identify Error Contexts (ECs), specific PSF combinations that together produce an increased probability of human error (versus the net effect of the PSFs acting alone). Three models were created to demonstrate how the methodology can be used to provide different types of data-informed insights. By employing Bayes' Theorem, the resulting model can be used to replace the linear calculations for HEPs used in Probabilistic Risk Assessment. When additional data become available, the methodology can be used to produce updated causal models to further refine HEP values.
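As a purely illustrative sketch of the kind of calculation such a BBN enables (not the dissertation's model or data), the snippet below builds a conditional probability table over two hypothetical binary PSFs, marginalizes it to an HEP, and applies Bayes' Theorem to infer the likely PSF state given an observed error; all probabilities are invented placeholders.

```python
# Two hypothetical PSFs with placeholder priors
p_stress_high = 0.2          # P(stress = high)
p_training_low = 0.3         # P(training = low)

# CPT for P(error | PSF states). The (True, True) entry illustrates an
# "error context": the joint effect exceeds what a multiplicative model gives.
hep_cpt = {
    (False, False): 1e-3,
    (True,  False): 5e-3,
    (False, True):  4e-3,
    (True,  True):  5e-2,
}

# Marginal HEP = sum over PSF states of P(error | states) * P(states),
# assuming the two PSFs are independent a priori
hep = 0.0
for stress in (False, True):
    for low_training in (False, True):
        p_states = ((p_stress_high if stress else 1 - p_stress_high) *
                    (p_training_low if low_training else 1 - p_training_low))
        hep += hep_cpt[(stress, low_training)] * p_states

# Bayes' Theorem: probability that stress was high, given that an error occurred
p_stress_given_error = sum(
    hep_cpt[(True, t)] * p_stress_high *
    (p_training_low if t else 1 - p_training_low)
    for t in (False, True)
) / hep

print(f"marginal HEP = {hep:.4g}, P(stress high | error) = {p_stress_given_error:.3f}")
```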
Item: A Predictive Model of Nuclear Power Plant Crew Decision-Making and Performance in a Dynamic Simulation Environment (2009)
Coyne, Kevin; Mosleh, Ali; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The safe operation of complex systems such as nuclear power plants requires close coordination between the human operators and plant systems. In order to maintain an adequate level of safety following an accident or other off-normal event, the operators are often called upon to perform complex tasks during dynamic situations with incomplete information. The safety of such complex systems can be greatly improved if the conditions that could lead operators to make poor decisions and commit erroneous actions during these situations can be predicted and mitigated. The primary goal of this research project was the development and validation of a cognitive model capable of simulating nuclear plant operator decision-making during accident conditions. Dynamic probabilistic risk assessment methods can improve the prediction of human error events by providing rich contextual information and an explicit consideration of feedback arising from man-machine interactions. The Accident Dynamics Simulator paired with the Information, Decision, and Action in a Crew context cognitive model (ADS-IDAC) shows promise for predicting situational contexts that might lead to human error events, particularly knowledge-driven errors of commission. ADS-IDAC generates a discrete dynamic event tree (DDET) by applying simple branching rules that reflect variations in crew responses to plant events and system status changes. Branches can be generated to simulate slow or fast procedure execution speed, skipping of procedure steps, reliance on memorized information, activation of mental beliefs, variations in control inputs, and equipment failures. Complex operator mental models of plant behavior that guide crew actions can be represented within the ADS-IDAC mental belief framework and used to identify situational contexts that may lead to human error events. This research increased the capabilities of ADS-IDAC in several key areas. The ADS-IDAC computer code was improved to support additional branching events and provide a better representation of the IDAC cognitive model. An operator decision-making engine capable of responding to dynamic changes in situational context was implemented. The IDAC human performance model was fully integrated with a detailed nuclear plant model in order to realistically simulate plant accident scenarios. Finally, the improved ADS-IDAC model was calibrated, validated, and updated using actual nuclear plant crew performance data. This research led to the following general conclusions: (1) A relatively small number of branching rules are capable of efficiently capturing a wide spectrum of crew-to-crew variability. (2) Compared to traditional static risk assessment methods, ADS-IDAC can provide a more realistic and integrated assessment of human error events by directly determining the effect of operator behaviors on plant thermal-hydraulic parameters. (3) The ADS-IDAC approach provides an efficient framework for capturing actual operator performance data such as the timing of operator actions, mental models, and decision-making activities.
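The branching idea can be illustrated with a toy discrete dynamic event tree generator; the rule names, outcomes, and split fractions below are invented placeholders and do not reflect ADS-IDAC's actual branching logic or probabilities.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    history: list = field(default_factory=list)   # sequence of (rule, outcome)
    probability: float = 1.0

# Each branching rule lists possible crew-response outcomes and split fractions
branching_rules = [
    ("procedure_execution_speed", {"nominal": 0.7, "slow": 0.2, "fast": 0.1}),
    ("procedure_step",            {"performed": 0.95, "skipped": 0.05}),
    ("information_source",        {"reads_indicator": 0.8, "relies_on_memory": 0.2}),
]

# Grow the tree: every active scenario splits at each branch point, and the
# branch probabilities multiply down the tree
scenarios = [Scenario()]
for rule_name, outcomes in branching_rules:
    next_generation = []
    for parent in scenarios:
        for outcome, fraction in outcomes.items():
            next_generation.append(Scenario(parent.history + [(rule_name, outcome)],
                                            parent.probability * fraction))
    scenarios = next_generation

# 3 * 2 * 2 = 12 end states; their probabilities sum to 1.0
for s in sorted(scenarios, key=lambda s: -s.probability)[:3]:
    print(f"{s.probability:.3f}  {[o for _, o in s.history]}")
```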
Item: Towards a Formal and Scalable Approach for Quantifying Software Reliability at Early Development Stages (2009)
Kong, Wende; Smidts, Carol; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Problems which originate in early development stages can have a lasting influence on the reliability, safety, and cost of a software system. The requirements document, which is usually available at the requirements analysis stage, must be correct, unambiguous, and complete if the rest of the development effort is to succeed. The ability to identify faults in requirements and predict the reliability of a software system early in its development can help organizations make informed decisions about corrective actions and improve the system's quality in a cost-effective manner. A review of the literature reveals that existing approaches are ill-suited to providing trustworthy reliability predictions, either because they ignore the requirements documents or because they detect requirements faults in an informal and fairly sketchy way. This study explores the use of a preselected software reliability measurement for early software fault detection and reliability prediction. This measurement, originally a black-box testing technique, was broadly recognized for its ability to detect incomplete and ambiguous requirements, although no information was found in the literature about how to take advantage of its power. This study mathematically formalized the measurement to enhance its rigor, repeatability, and scalability and further extended it into an effective requirements fault detection technique. An automation-oriented algorithm was developed for quantifying the impact of the detected requirements faults on software reliability. The feasibility and scalability of the proposed approach for early fault detection and reliability prediction were examined using two real applications. The results clearly confirmed its feasibility and usefulness, particularly when no failure data are available and other methods are not applicable. Scalability barriers were also identified in the approach. An empirical study was thus conducted to gain insight into the nature of these technical barriers, and in an attempt to overcome them, a set of rules was proposed based on the observed patterns. Finally, a preliminary controlled experiment was conducted to evaluate the usability of the proposed rules. This study will enable software project stakeholders to effectively detect requirements faults and assess the quality of requirements early in development, and will ultimately lead to improved software reliability if the identified faults are removed in time. Software project practitioners, regulators, and policy makers involved in the certification of software systems can benefit most from the techniques proposed in this study.
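As a loose, hypothetical illustration of the general idea of turning detected requirements faults into a reliability estimate (the dissertation's actual measurement and algorithm are not detailed in this abstract and are not reproduced here), one simple mapping weights each unremoved fault by how often the affected requirement is exercised under an assumed operational profile:

```python
# Illustrative only: each tuple is (requirement id, probability the requirement
# is exercised per demand, probability an exercised fault leads to failure).
# All values are invented placeholders.
detected_faults = [
    ("REQ-03", 0.40, 0.5),
    ("REQ-07", 0.10, 0.9),
    ("REQ-12", 0.05, 0.2),
]

# Per-demand reliability assuming independent fault activation
p_ok = 1.0
for _req, p_exercised, p_fail_given_exercised in detected_faults:
    p_ok *= 1.0 - p_exercised * p_fail_given_exercised

print(f"estimated per-demand reliability = {p_ok:.3f}")
```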
Item: Probabilistic Models to Estimate Fire-Induced Cable Damage at Nuclear Power Plants (2007-04-10)
Valbuena, Genebelin R.; Modarres, Mohammad; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Even though numerous PRAs have shown that fire can be a major contributor to nuclear power plant risk, some specific areas of knowledge related to this issue, such as the prediction of fire-induced damage to electrical cables and circuits and its potential effects on the safety of the nuclear power plant, remain a practical enigma, particularly because of the lack of approaches and models for performing consistent and objective assessments. This report discusses three different models for estimating the likelihood of fire-induced cable damage given a specified fire profile: the kinetic model, the heat transfer model, and the IR "K Factor" model. These models are not only based on statistical analysis of data available in the open literature but also, to the greatest extent possible, use physics-based principles to describe the underlying failure mechanisms that take place in electrical cables heated by external fires. The characterization of cable damage, and consequently of the loss of functionality of electrical cables in fire, is a complex phenomenon that depends on a variety of intrinsic factors, such as cable materials and dimensions, and extrinsic factors, such as the electrical and mechanical loads on the cables, heat flux severity, and exposure time. Some of these factors are difficult to estimate even in a well-characterized fire, not only because of the variability related to unknown material composition and physical arrangements, but also because of the lack of objective frameworks and theoretical models for studying the behavior of polymeric cable insulation under dynamic external thermal insults. The results of this research will 1) help develop a consistent framework for predicting the likelihood of fire-induced cable failure modes and 2) provide guidance for evaluating and/or reducing the risk associated with these failure modes in existing and new power plant facilities. Among the models evaluated, the physics-based heat transfer model takes into account the properties and characteristics of the cables and cable materials, and the characteristics of the thermal insult. This model can be used to estimate the probability of cable damage under different thermal conditions.
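A minimal sketch of the flavor of physics-based reasoning described above, assuming a lumped-capacitance cable heated by a prescribed exposure temperature and an uncertain damage threshold, is shown below; the parameter values, fire profile, and threshold distribution are illustrative placeholders, not the dissertation's heat transfer model.

```python
import random

def peak_cable_temperature(t_end, dt, T0, exposure_T, hA_over_mc):
    """Integrate dT/dt = hA/(m*cp) * (T_exposure(t) - T) with explicit Euler."""
    T, t, peak = T0, 0.0, T0
    while t <= t_end:
        T += dt * hA_over_mc * (exposure_T(t) - T)
        peak = max(peak, T)
        t += dt
    return peak

# Placeholder fire profile: ramp from 25 C to 330 C over 300 s, then hold
exposure = lambda t: 25.0 + min(t / 300.0, 1.0) * 305.0

peak_T = peak_cable_temperature(t_end=900.0, dt=1.0, T0=25.0,
                                exposure_T=exposure, hA_over_mc=0.01)

# Damage threshold uncertainty, illustratively N(330 C, 30 C)
random.seed(0)
n_trials = 10_000
damaged = sum(peak_T > random.gauss(330.0, 30.0) for _ in range(n_trials))
print(f"peak cable temperature = {peak_T:.0f} C, "
      f"estimated damage probability = {damaged / n_trials:.2f}")
```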
Item: Integrated Methodology for Thermal-Hydraulics Uncertainty Analysis (IMTHUA) (2007-01-25)
Pour-Gol-Mohamad, Mohammad; Modarres, Mohammad; Mosleh, Ali; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation describes a new integrated uncertainty analysis methodology for "best estimate" thermal-hydraulics (TH) codes such as RELAP5. The main thrust of the methodology is to utilize all available types of data and information in an effective way to identify important sources of uncertainty and to assess the magnitude of their impact on the uncertainty of the TH code output measures. The proposed methodology is fully quantitative and uses the Bayesian approach for quantifying the uncertainties in the predictions of TH codes. The methodology also uses the data and information for a more informed, evidence-based ranking and selection of TH phenomena through a modified PIRT method; the modification considers the importance of the various TH phenomena as well as their uncertainty importance. In identifying and assessing uncertainties, the proposed methodology treats the TH code as a white box, thus explicitly treating internal sub-model uncertainties and propagating such model uncertainties through the code structure along with the various input parameters. The TH code output is further corrected through Bayesian updating with available experimental data from integrated test facilities; data directly or indirectly related to the code output are used to account implicitly for missed or screened-out sources of uncertainty. The proposed methodology uses an efficient Monte Carlo sampling technique for the propagation of uncertainty, based on modified Wilks sampling criteria. The methodology is demonstrated on the LOFT facility for a 200% cold-leg LBLOCA transient scenario.
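For reference, the classical one-sided Wilks criterion (which the dissertation modifies in a way not detailed in this abstract) sizes the Monte Carlo sample as follows; the sketch below reproduces only the standard textbook formula.

```python
from math import comb

def wilks_confidence(n, coverage, order=1):
    """Confidence that the order-th largest of n runs bounds the coverage quantile."""
    return sum(comb(n, j) * coverage**j * (1 - coverage)**(n - j)
               for j in range(0, n - order + 1))

def wilks_sample_size(coverage=0.95, confidence=0.95, order=1):
    """Smallest n satisfying the one-sided Wilks tolerance-limit criterion."""
    n = order
    while wilks_confidence(n, coverage, order) < confidence:
        n += 1
    return n

print(wilks_sample_size(order=1))  # 59 code runs for a 95%/95% first-order limit
print(wilks_sample_size(order=2))  # 93 code runs for a 95%/95% second-order limit
```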