Reliability Engineering Theses and Dissertations

Permanent URI for this collection: http://hdl.handle.net/1903/33173

  • Item
    A CAUSAL INFORMATION FUSION MODEL FOR ASSESSING PIPELINE INTEGRITY IN THE PRESENCE OF GROUND MOVEMENT
    (2024) Schell, Colin Andrew; Groth, Katrina M; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Pipelines are the primary transportation method for natural gas and oil in the United States, making them critical infrastructure to maintain. However, ground movement hazards, such as landslides and ground subsidence, can deform pipelines and potentially lead to the release of hazardous materials. According to the Pipeline and Hazardous Materials Safety Administration (PHMSA), from 2004 to 2023, ground-movement-related pipeline failures resulted in $413 million in damages. The dynamic nature of ground movement makes it necessary to collect pipeline and ground monitoring data and to actively model and predict pipeline integrity. Conventional stress-based methods struggle to predict pipeline failure in the presence of the large longitudinal strains that result from ground movement. This has prompted many industry analysts to use strain-based design and assessment (SBDA) methods to manage pipeline integrity in the presence of ground movement. However, due to the complexity of ground movement hazards and their variable effects on pipeline deformation, current strain-based pipeline integrity models are applicable only in specific ground movement scenarios and cannot synthesize complementary data sources. This makes it costly and time-consuming for pipeline companies to protect their pipeline networks from ground movement hazards. To close these gaps, this research took significant steps toward the development of a causal information fusion model for assessing pipeline integrity in a variety of ground movement scenarios that result in permanent ground deformation. We developed a causal framework that categorizes and describes how different risk-influencing factors (RIFs) affect pipeline reliability, drawing on academic literature, joint industry projects, PHMSA projects, pipeline data, and input from engineering experts. This framework is the foundation of the information fusion model, which leverages SBDA methods, Bayesian network (BN) models, pipeline monitoring data, and ground monitoring data to calculate the probability of failure and the additional longitudinal strain needed to fail the pipeline. The information fusion model was then applied to several case studies with different contexts and data to compare model-based recommendations to the actions taken by decision makers. In these case studies, the proposed model leveraged the full extent of the data available at each site and produced conclusions similar to those reached by decision makers. These results demonstrate that the model can be used in a variety of ground movement scenarios that result in permanent ground deformation and exemplify the comprehensive insights that come from using an information fusion approach for assessing pipeline integrity. The proposed model lays the foundation for the development of advanced decision-making tools that can enable operators to identify at-risk pipeline segments that require site-specific integrity assessments and to efficiently manage the reliability of their pipelines in the presence of ground movement.
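    As a toy illustration of the causal fusion idea, the sketch below wires a minimal discrete Bayesian network from ground movement through longitudinal strain to failure and performs the kind of multi-directional inference a BN supports. The node set, states, and all probabilities are hypothetical placeholders, not values from the dissertation.

    ```python
    # Minimal BN sketch: ground movement -> strain level -> failure.
    # All names and probabilities are illustrative placeholders.
    from itertools import product

    # Prior over ground-movement severity
    p_g = {"none": 0.90, "slow": 0.08, "rapid": 0.02}
    # P(longitudinal strain level | ground movement)
    p_s_given_g = {
        "none":  {"low": 0.98, "high": 0.02},
        "slow":  {"low": 0.70, "high": 0.30},
        "rapid": {"low": 0.20, "high": 0.80},
    }
    # P(failure | strain level)
    p_f_given_s = {"low": 0.001, "high": 0.05}

    # Marginal probability of failure by total enumeration
    p_fail = sum(p_g[g] * p_s_given_g[g][s] * p_f_given_s[s]
                 for g, s in product(p_g, p_f_given_s))

    # Posterior over ground movement given an observed failure (Bayes' rule),
    # the multi-directional inference that distinguishes BNs from one-way models
    post_g = {g: sum(p_g[g] * p_s_given_g[g][s] * p_f_given_s[s]
                     for s in p_f_given_s) / p_fail
              for g in p_g}
    print(f"P(failure) = {p_fail:.5f}")
    print({g: round(p, 3) for g, p in post_g.items()})
    ```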
  • Item
    TOPOLOGICAL ANALYSIS OF DISTANCE WEIGHTED NORTH AMERICAN RAILROAD NETWORK: EFFICIENCY, ECCENTRICITY, AND RELATED ATTRIBUTES
    (2023) Elsibaie, Sherief; Ayyub, Bilal M.; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The North American railroad system can be well represented by a network with 302,943 links (track segments) and 250,388 nodes (stations, junctions, waypoints, and other points of interest), based on publicly accessible geographical information obtained from the Bureau of Transportation Statistics (BTS) and the Federal Railroad Administration (FRA). From this large network, a slightly more consolidated subnetwork representing the major freight railroads and Amtrak was selected for analysis. Recent advances in network and graph theory and in all-pairs shortest-path algorithms make it feasible to compute certain characteristics of large networks with reduced computation time and resources. The network characteristics of interest for network-level risk and resilience studies include node efficiency, node eccentricity, and attributes derived from those measures, such as network arithmetic efficiency, the network geometric central node, radius and diameter, and distribution measures of the node characteristics. Rail distance weighting factors, representing the length of each rail line as derived from BTS data, are mapped to the corresponding links and used as link weights for computing all-pairs shortest paths and the subsequent characteristics. This study also compares the characteristics of North American railroad infrastructure subnetworks divided by Class I carriers, which are the largest railroad carriers as classified by the Surface Transportation Board (STB) by annual operating revenue, and which together comprise most of the North American railroad network. These network characteristics can be used to inform the placement of resources and to plan for natural hazard and disaster scenarios. They relate to many practical applications, such as a network's efficiency in distributing traffic and its ability to recover from disruptions. The primary contribution of this thesis is the novel characterization of a detailed network representation of the North American railroad network and its Class I carrier subnetworks, with established as well as novel network characteristics.
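    A small-scale sketch of the measures named above, using networkx on a hypothetical five-node graph; the rail mileages are placeholders standing in for the BTS-derived link weights used in the thesis.

    ```python
    # Toy version of the distance-weighted analysis: all-pairs shortest
    # paths, eccentricity, radius/diameter, and efficiency measures.
    import networkx as nx

    G = nx.Graph()
    # (node_a, node_b, rail distance in miles) -- placeholder values
    edges = [("A", "B", 120), ("B", "C", 80), ("C", "D", 200),
             ("A", "C", 150), ("D", "E", 60)]
    G.add_weighted_edges_from(edges, weight="miles")

    # All-pairs shortest-path lengths with rail distance as link weight
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="miles"))

    # Node eccentricity: greatest shortest-path distance from the node
    ecc = {v: max(d.values()) for v, d in dist.items()}
    radius, diameter = min(ecc.values()), max(ecc.values())
    central_node = min(ecc, key=ecc.get)  # a geometric central node

    # Node efficiency: mean inverse distance to every other node
    n = G.number_of_nodes()
    eff = {v: sum(1.0 / d for u, d in dist[v].items() if u != v) / (n - 1)
           for v in dist}
    # Network (arithmetic) efficiency: mean of the node efficiencies
    net_eff = sum(eff.values()) / n

    print(f"radius={radius}, diameter={diameter}, central node={central_node}")
    print(f"network efficiency={net_eff:.4f}")
    ```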
  • Item
    Root Cause Analysis Of Adverse Events Using A Human Reliability Analysis Approach
    (2022) Johnson, David Michael; Vaughn-Cooke, Monifa; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Large-scale analysis of adverse event data is challenging due to the unstructured nature of event reporting and the narrative textual data in adverse event repositories. This issue is further complicated for human error adverse events, which are routinely treated as a root cause instead of as initiating events in a causal chain. Human error events are commonly misunderstood and underreported, which hinders the analysis of trends and the identification of risk mitigation strategies across industries. Currently, the prevailing means of human error investigation is the analysis of accident and incident data that are not designed around a framework of human cognition or psychomotor function. Existing approaches lack a theoretical foundation with sufficient cognitive granularity to identify the root causes of human error. This research provides a cognitive task decomposition to standardize the investigation, reporting, and analysis of human error adverse event data in narrative textual form. The proposed method includes a qualitative structure to answer six questions (when, who, what, where, how, why) that are critical to comprehensively understanding the events surrounding human error. The process is accomplished in four main stages: (1) develop guidelines for a cognitively driven adverse event investigation; (2) perform a baseline cognitive task analysis (when) to document relevant stakeholders (who), products or processes (what), and environments (where) based on a taxonomy of cognitive and psychomotor function; (3) identify deviations from the baseline task analysis in the form of unsafe acts (how) using a human error classification; and (4) develop a root cause mapping to identify the performance shaping factors (PSFs) (why) for each unsafe act. The outcome of the proposed method will advance the fields of risk analysis and regulatory science by providing a standardized and repeatable process to input and analyze human error in adverse event databases. The method provides a foundation for more effective human error trending and accident analysis at a greater level of cognitive granularity. Applying this method to adverse event risk mitigation can inform prospective strategies such as resource allocation and system design, with the ultimate long-term goal of reducing the human contribution to risk.
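    The six-question decomposition lends itself to a structured record in place of free-text narratives. The sketch below shows one hypothetical way to encode such a record; the field names and example taxonomy labels are our own illustration, not the dissertation's schema.

    ```python
    # Illustrative structured record for the six-question decomposition
    # (when, who, what, where, how, why). All labels are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class HumanErrorEvent:
        task_step: str        # when: step in the baseline cognitive task analysis
        stakeholder: str      # who: operator, maintainer, clinician, ...
        artifact: str         # what: product or process involved
        environment: str      # where: physical/operational context
        unsafe_act: str       # how: deviation class from a human error taxonomy
        psfs: list = field(default_factory=list)  # why: performance shaping factors

    # One entry that would otherwise live in an unstructured narrative
    event = HumanErrorEvent(
        task_step="verify dosage (decision step)",
        stakeholder="nurse",
        artifact="infusion pump",
        environment="ICU, night shift",
        unsafe_act="decision error: wrong rule applied",
        psfs=["time pressure", "interface ambiguity"],
    )
    print(event)
    ```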
  • Item
    A PHYSICS-INFORMED NEURAL NETWORK FRAMEWORK FOR BIG MACHINERY DATA IN PROGNOSTICS AND HEALTH MANAGEMENT FOR COMPLEX ENGINEERING SYSTEMS
    (2022) Cofre Martel, Sergio Manuel Ignacio; Modarres, Mohammad; Lopez Droguett, Enrique; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Big data analysis and data-driven models (DDMs) have become essential tools in prognostics and health management (PHM). Despite this, several challenges remain in successfully applying these techniques to complex engineering systems (CESs). Indeed, current state-of-the-art applications are treated as black-box algorithms, where research efforts have focused on developing complex DDMs while overlooking or neglecting the importance of the data preprocessing stages prior to training these models. Guidelines for adequately preparing data sets collected from CESs to train DDMs in PHM are frequently unclear or nonexistent. Furthermore, these DDMs do not consider prior knowledge of the system's physics of degradation, which gives little to no control over the data interpretation in reliability applications such as maintenance planning. In this context, this dissertation presents a physics-informed neural network (PINN) architecture for remaining useful life (RUL) estimation based on big machinery data (BMD) collected from sensor monitoring networks (SMNs) in CESs. The main outcomes of this work are twofold. First, a systematic guide to preprocessing BMD for diagnostics and prognostics tasks is developed based on expert knowledge and data science techniques. Second, a PINN-inspired PHM framework is proposed for RUL estimation through an open-box approach that explores the system's physics of degradation through partial differential equations (PDEs). The PINN-RUL framework aims to discover the system's underlying physics-related behaviors, which could provide valuable information for creating more trustworthy PHM models. The data preprocessing and RUL estimation frameworks are validated through three case studies, including the C-MAPSS benchmark data set and two data sets corresponding to real CESs. Results show that the proposed preprocessing methodology can effectively generate data sets for supervised PHM models for CESs. Furthermore, the proposed PINN-RUL framework provides an interpretable latent variable that can capture the system's degradation dynamics. This is a step forward in increasing the interpretability of prognostic models, by mapping the RUL estimation to the latent space and implementing it as a state-of-health classifier. The PINN-RUL framework is flexible in that available physics-based models can be incorporated into its architecture. As such, this framework takes a step toward bridging the gap between statistics-based and physics-based PHM methods.
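    A conceptual sketch of what a PINN-style training objective for RUL can look like: a data-fit term plus a physics residual on a latent health variable. The network sizes and the assumed degradation law dh/dt = -k are placeholders, not the dissertation's actual PDE; PyTorch is used here only for its automatic differentiation.

    ```python
    # PINN-flavored loss sketch: supervised RUL error + residual of an
    # assumed degradation ODE on a latent health variable h(t).
    import torch

    torch.manual_seed(0)
    net = torch.nn.Sequential(
        torch.nn.Linear(2, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 2),  # outputs: [RUL estimate, latent health h]
    )

    def pinn_loss(t, sensors, rul_true, k=0.05):
        t = t.requires_grad_(True)
        out = net(torch.cat([t, sensors], dim=1))
        rul_hat, h = out[:, :1], out[:, 1:]
        # Data-fit term on the supervised RUL labels
        data = torch.mean((rul_hat - rul_true) ** 2)
        # Physics residual: latent health assumed to decay as dh/dt = -k
        dh_dt = torch.autograd.grad(h.sum(), t, create_graph=True)[0]
        physics = torch.mean((dh_dt + k) ** 2)
        return data + physics

    # One optimization step on synthetic data
    t, sensors = torch.rand(64, 1), torch.rand(64, 1)
    rul_true = 1.0 - t  # toy linear ground truth
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    loss = pinn_loss(t, sensors, rul_true)
    loss.backward(); opt.step()
    print(f"loss = {loss.item():.4f}")
    ```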
  • Item
    A BAYESIAN NETWORK PERSPECTIVE ON THE ELEMENTS OF A NUCLEAR POWER PLANT MULTI-UNIT SEISMIC PROBABILISTIC RISK ASSESSMENT
    (2021) DeJesus Segarra, Jonathan; Bensi, Michelle T.; Modarres, Mohammad; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Nuclear power plants (NPPs) generated about 10% of the world's electricity in 2020 and about one-third of the world's low-carbon electricity. Probabilistic risk assessments (PRAs) are used to estimate the risk posed by NPPs, generate insights related to strengths and vulnerabilities, and support risk-informed decision-making related to safety and reliability. While PRAs are typically carried out on a reactor-by-reactor basis, the Fukushima Dai-ichi accident highlighted the need to also consider multi-unit accidents. To properly characterize the risks of reactor core damage and subsequent radiation release at a multi-unit site, it is necessary to account for dependencies among reactors arising from the possibility that adverse conditions affect multiple units concurrently. For instance, the seismic hazard is one of the most critical threats to NPP structures, systems, and components (SSCs) because it challenges their redundancy. Seismic PRAs comprise three elements: seismic hazard analysis, fragility evaluation, and systems analysis. This dissertation presents a Bayesian network (BN) perspective on the elements of a multi-unit seismic PRA (MUSPRA), outlining a MUSPRA approach that accounts for the dependencies across NPP reactor units. BNs offer the following advantages: a graphical representation that enables transparency and facilitates communicating modeling assumptions; efficiency in modeling complex dependencies; the ability to accommodate differing probability distribution assumptions; and support for multi-directional inference, which allows the efficient calculation of joint and conditional probability distributions for all random variables in the BN. The proposed MUSPRA approach considers the spatial variability of ground motions (hazard analysis), the dependent seismic performance of SSCs (fragility evaluation), and efficient BN modeling of systems (systems analysis). Considering the spatial variability of ground motions is an improvement over the typical assumption that ground motions across an NPP site are perfectly correlated. The presented method for modeling the dependent seismic performance of SSCs improves on the current "perfectly dependent or independent" treatment and provides system failure probability results that comply with theoretical bounds. Accounting for these dependencies in a systematic manner makes the MUSPRA more realistic and should therefore provide confidence in its results (calculated metrics) and risk insights.
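    The effect of seismic dependence can be illustrated with a small Monte Carlo experiment: two units whose log-capacities are correlated fail together far more often than an independence assumption predicts. The lognormal fragility parameters, demand distribution, and correlation value below are hypothetical.

    ```python
    # Monte Carlo sketch of why dependence matters in multi-unit seismic
    # risk: joint failure probability of two units vs. capacity correlation.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    median_cap, beta = 1.0, 0.4                    # capacity median (g), log-std
    demand = rng.lognormal(np.log(0.5), 0.5, n)    # site-wide seismic demand (g)

    def unit_failures(rho):
        # Capacities of two units with correlated log-capacities
        cov = beta**2 * np.array([[1.0, rho], [rho, 1.0]])
        caps = np.exp(rng.multivariate_normal([np.log(median_cap)] * 2, cov, n))
        return demand[:, None] > caps              # unit fails if demand > capacity

    for rho in (0.0, 0.8):
        f = unit_failures(rho)
        both = np.mean(f[:, 0] & f[:, 1])
        print(f"rho={rho}: P(both units fail) ~ {both:.4f}")
    ```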
  • Item
    THERMODYNAMIC AND INFORMATION ENTROPY-BASED PREDICTION AND DETECTION OF FATIGUE FAILURES IN METALLIC AND COMPOSITE MATERIALS USING ACOUSTIC EMISSION AND DIGITAL IMAGE CORRELATION
    (2021) Karimian, Seyed Fouad; Modarres, Mohammad; Bruck, Hugh A.; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Although assumed to be identical, manufactured components always exhibit some variability in their performance while in service. This variability can be seen in their degradation paths and times to failure even when they are tested under identical conditions. In engineering structures and some components, fatigue is among the most common degradation mechanisms and has been studied extensively over the past century. A common characteristic of fatigue life models is their reliance on observable or measurable markers of damage, such as crack length or modulus reduction. However, these markers become pronounced and detectable only toward the end of a component's or structure's life. Therefore, more advanced techniques are needed to better account for a structure's fatigue degradation. Several methods based on non-destructive testing techniques have been developed over the past decades to decrease the uncertainty in fatigue degradation assessments. These methods seek to exploit the data collected by sensors during the operational life of a structure or component, so that the assessment of the health state can be constantly updated based on operational conditions, enabling condition-based monitoring and maintenance. However, these methods are mostly context-dependent and limited to specific experimental conditions. Therefore, a method to effectively characterize and measure fatigue damage evolution at multiple length scales based on the fundamental concept of entropy is studied in this dissertation. The two entropy-based indices used are thermodynamic entropy and information entropy. The objectives of this dissertation are to develop new methods for fatigue damage detection and failure prediction in metallic and fiber-reinforced polymer (FRP) laminated composite materials by using acoustic emission (AE) and digital image correlation (DIC) techniques and converting their measurements into the information and thermodynamic entropy gains caused by fatigue damage: (1) develop and experimentally validate fatigue damage detection, failure prediction, and prognosis approaches based on the information entropy of AE signal waveforms in both metallic and FRP laminated composite materials; (2) develop and experimentally validate fatigue damage detection, failure prediction, and prognosis approaches based on thermodynamic entropy using the DIC technique in both metallic and FRP laminated composite materials; and (3) develop a framework for remaining useful life (RUL) estimation of metallic and FRP laminated composite structures based on the two entropic measures.
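    As a sketch of the information-entropy index, the snippet below computes the Shannon entropy of a waveform's amplitude distribution, using a synthetic signal in place of real acoustic-emission data; the bin count is an arbitrary choice.

    ```python
    # Shannon (information) entropy of a waveform's amplitude histogram,
    # the kind of index applied to AE signals in the dissertation.
    import numpy as np

    def shannon_entropy(waveform, bins=64):
        # Histogram of amplitudes -> empirical probability distribution
        counts, _ = np.histogram(waveform, bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]                       # drop empty bins (0 log 0 := 0)
        return -np.sum(p * np.log2(p))     # entropy in bits

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 1.0, 4096)
    quiet = 0.05 * rng.standard_normal(t.size)                      # baseline noise
    burst = quiet + np.sin(2 * np.pi * 150 * t) * np.exp(-30 * t)   # AE-like burst

    # Damage-related bursts shift the waveform's entropy relative to noise
    print(f"noise: {shannon_entropy(quiet):.3f} bits")
    print(f"burst: {shannon_entropy(burst):.3f} bits")
    ```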
  • Item
    Data Requirements to Enable PHM for Liquid Hydrogen Storage Systems from a Risk Assessment Perspective
    (2021) Correa Jullian, Camila Asuncion; Groth, Katrina M; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Quantitative Risk Assessment (QRA) aids the development of risk-informed safety codes and standards, which are employed to reduce risk in a variety of complex technologies, such as hydrogen systems. Currently, the lack of reliability data limits the use of QRAs for fueling stations equipped with bulk liquid hydrogen storage systems. In turn, this hinders the ability to develop the rigorous safety codes and standards needed to allow worldwide deployment of these stations. Prognostics and Health Management (PHM) and the analysis of condition-monitoring data emerge as an alternative to support risk assessment methods. Through a QRA-based analysis of a liquid hydrogen storage system, the core elements for the design of a data-driven PHM framework are addressed from a risk perspective. This work focuses on identifying the data collection requirements that would strengthen current risk analyses and enable data-driven approaches to improve the safety and risk assessment of liquid hydrogen fueling infrastructure.
  • Item
    COST-EFFECTIVE PROGNOSTICS AND HEALTH MONITORING OF LOCALLY DAMAGED PIPELINES WITH HIGH CONFIDENCE LEVEL
    (2020) Aria, Amin; Modarres, Mohammad; Azarm, Shapour; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Localized pipeline damage, caused by degradation processes such as corrosion, is prevalent, can result in pipeline failure, and is expensive to monitor. To prevent pipeline failure, many Prognostics and Health Monitoring (PHM) approaches have been developed in which sensor networks for online data gathering and human inspections for offline data gathering are used separately. In this dissertation, a two-level (segment- and integrated-level) PHM approach for locally damaged pipelines is proposed in which both of these degradation data gathering schemes (i.e., detection methods) are considered simultaneously. The segment-level approach, in which damage behavior is considered to be uniform, consists of a static and a dynamic phase. In the static phase, a new optimization problem for the health monitoring layout design of locally damaged pipelines is formulated. The solution to this problem is an optimal configuration (or layout) of degradation detection methods with a minimized health monitoring cost and a maximized likelihood of damage detection. In the dynamic phase, considering the optimal layout, an online fusion of high-frequency sensor data and low-frequency inspection information is conducted to estimate and then update the pipeline's remaining useful life (RUL). Subsequently, the segment-level optimization formulation is modified to improve its scalability and to facilitate updating layouts in light of the online RUL estimates. Finally, at the integrated level, the modified segment-level approach is used along with Stochastic Dynamic Programming (SDP) to produce an optimal set of layouts for a long pipeline consisting of multiple segments with different damage behaviors. Experimental data and several notional examples are used to demonstrate the performance of the proposed approaches. Synthetically generated damage data are used in two examples to show that the proposed segment-level layout optimization yields a more robust solution than single detection approaches and deterministic methods. For the dynamic segment-level phase, acoustic emission sensor signals and microscopic images from a set of fatigue crack experiments are used to show that combining sensor- and image-based damage size estimates improves the accuracy of RUL estimation. Lastly, using synthetically generated damage data for three hypothetical pipeline segments, it is shown that the integrated-level approach provides an optimal set of layouts across several pipeline segments.
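    The static-phase layout problem can be caricatured as a tiny constrained search: pick a mix of online sensors and offline inspections that meets a detection target at minimum cost. The costs, per-detector miss probabilities, independence assumption, and 0.99 target below are all hypothetical.

    ```python
    # Toy layout optimization: minimize monitoring cost subject to a
    # target probability of detecting local damage.
    from itertools import product

    SENSOR_COST, INSPECTION_COST = 2.0, 5.0          # cost units per detector
    P_MISS_SENSOR, P_MISS_INSPECTION = 0.30, 0.10    # per-detector miss probability

    best = None
    for n_sensors, n_inspections in product(range(6), range(4)):
        cost = n_sensors * SENSOR_COST + n_inspections * INSPECTION_COST
        # Detection assuming independent detectors: 1 - product of misses
        p_detect = 1 - (P_MISS_SENSOR ** n_sensors) * (P_MISS_INSPECTION ** n_inspections)
        if p_detect >= 0.99 and (best is None or cost < best[0]):
            best = (cost, n_sensors, n_inspections, p_detect)

    cost, ns, ni, pd = best
    print(f"layout: {ns} sensors + {ni} inspections, cost={cost}, P(detect)={pd:.4f}")
    ```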
  • Item
    DEVELOPING HYBRID PHM MODELS FOR PIPELINE PITTING CORROSION, CONSIDERING DIFFERENT TYPES OF UNCERTAINTY AND CHANGES IN OPERATIONAL CONDITIONS
    (2019) Heidarydashtarjandi, Roohollah; Groth, Katrina M; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Pipelines are the most efficient and reliable way to transport oil and gas in large quantities. Pipeline infrastructure represents a high capital investment and, if it fails, a source of environmental hazards and a potential threat to life. Among the different pipeline failure mechanisms, pitting corrosion is of most concern because of the high growth rate of pits. In this dissertation, two hybrid prognostics and health management (PHM) models are developed to evaluate the degradation level of piggable pipelines due to internal pitting corrosion. These models are able to incorporate data from multiple sensors and physics-of-failure (POF) knowledge of the internal pitting corrosion process. The dissertation covers both the case in which a pipeline segment's pit density is low and the case in which it is high. In addition, it takes into account four types of uncertainty: epistemic uncertainty, variability in temporal aspects, spatial heterogeneity, and inspection errors. For a pipeline segment with a low pit density, a hybrid defect-based algorithm is developed to estimate the probability distribution of the maximum depth of each individual pit on that segment. This algorithm is the first to consider changes in operational conditions in internal pitting corrosion degradation modeling. To this end, a two-phase similarity-based data fusion algorithm is developed to fuse POF knowledge, in-line inspection (ILI) data, and online inspection (OLI) data. In the first phase, a hierarchical Bayesian method based on a non-homogeneous gamma process is used to fuse POF knowledge and ILI data on multiple pits, and augmented particle filtering is used to fuse POF knowledge and OLI data from an active reference pit. The results are used to define a similarity index between each ILI pit and the OLI pit. In the second phase, this similarity index is used to generate dummy observations of depth for each ILI pit based on the inspection data of the OLI pit. Those dummy observations are used in augmented particle filtering to estimate the remaining useful life (RUL) of the segment after a change in operational conditions when no new ILI data are available. For a pipeline segment with a high pit density, a hybrid population-based algorithm is developed to estimate the probability density function of the maximum depth of the pit population on that segment. This algorithm eliminates the need for a matching procedure, which is computationally expensive and prone to error when the pit density is high. The algorithm takes into account three types of measurement uncertainty: sizing error, probability of detection (POD), and probability of false call (POFC). In addition, the initiation of new pits between the last ILI and a prediction time is modeled using a homogeneous Poisson process, while the non-linearity of the pitting corrosion process and the POF knowledge of this process are modeled using a non-homogeneous gamma process. The estimates from these two algorithms are used in a series system to estimate the reliability of a long pipeline with multiple segments, where the pit density is low in some segments and high in others. The output of this research can be used to find the optimal maintenance action and time for each segment and the optimal next ILI time for the whole pipeline, ultimately decreasing the cost of unpredicted failures and unnecessary maintenance activities.
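    A minimal sketch of one POF ingredient named above: pit depth grown as a non-homogeneous gamma process. The power-law shape function and all parameters (wall thickness, shape, scale) are chosen for illustration only, not taken from the dissertation.

    ```python
    # Non-homogeneous gamma process for pit-depth growth: independent
    # gamma-distributed increments with cumulative shape a(t) = alpha0 * t**kappa.
    import numpy as np

    rng = np.random.default_rng(42)
    WALL = 10.0                             # wall thickness (mm); failure at full depth
    alpha0, kappa, beta = 0.8, 1.2, 0.25    # shape function and scale parameters

    def simulate_depth(times, n_paths=10_000):
        """Sample monotone pit-depth paths at the given times (years)."""
        a = alpha0 * np.asarray(times, dtype=float) ** kappa  # cumulative shape a(t)
        da = np.diff(a, prepend=0.0)                          # per-interval shape
        increments = rng.gamma(da, beta, (n_paths, len(times)))
        return increments.cumsum(axis=1)

    years = np.arange(1, 41)
    depths = simulate_depth(years)
    # Probability the pit has penetrated the wall by each year
    p_fail = (depths >= WALL).mean(axis=0)
    first = years[np.argmax(p_fail > 0.1)] if (p_fail > 0.1).any() else None
    print(f"P(failure) at 40 y: {p_fail[-1]:.3f}; first year with P > 0.1: {first}")
    ```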
  • Item
    QUANTIFYING AND PREDICTING USER REPUTATION IN A NETWORK SECURITY CONTEXT
    (2019) Gratian, Margaret Stephanie; Cukier, Michel; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Reputation has long been an important factor for establishing trust and evaluating the character of others. Though subjective by definition, it has recently emerged in the field of cybersecurity as a metric to quantify and predict the nature of domain names, IP addresses, files, and more. Implicit in the use of reputation to enhance cybersecurity is the assumption that past behaviors and the opinions of others provide insight into the expected future behavior of an entity, which can be used to proactively identify potential threats. Despite the plethora of work in industry and academia on reputation in cyberspace, proposed methods are often presented as black boxes and lack scientific rigor, reproducibility, and validation. Moreover, despite widespread recognition that cybersecurity solutions must consider the human user, there is limited work focusing on user reputation in a security context. This dissertation presents a mathematical interpretation of user cyber reputation and a methodology for evaluating reputation in a network security context. A user's cyber reputation is defined as the most likely probability that the user demonstrates a specific characteristic on the network, based on evidence. The methodology for evaluating user reputation proceeds in three phases: characteristic definition and evidence collection; reputation quantification and prediction; and reputation model validation and refinement. The methodology is illustrated through a case study on a large university network, where network traffic data are used as evidence to determine the likelihood that a user becomes infected or remains uninfected on the network. A separate case study explores social media as an alternate source of data for evaluating user reputation. User-reported account compromise data are collected from Twitter and used to predict whether a user will self-report compromise. This case study uncovers user cybersecurity experiences and victimization trends and demonstrates the feasibility of using social media to enhance understanding of users from a security perspective. Overall, this dissertation presents an exploration of the complicated space of cyber identity. As new threats to security, user privacy, and information integrity continue to manifest, the need for reputation systems and techniques to evaluate and validate online identities will continue to grow.
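    One natural reading of "the most likely probability the user demonstrates a specific characteristic, based on evidence" is the mode of a Beta posterior over a Bernoulli parameter. The sketch below implements that reading under an assumed Beta prior with made-up counts; it is our interpretation of the definition, not necessarily the dissertation's exact estimator.

    ```python
    # Reputation as a MAP estimate: Beta(a, b) prior + binomial evidence
    # gives a Beta posterior whose mode is the "most likely probability".
    def reputation(successes, failures, prior_a=1.0, prior_b=1.0):
        """MAP estimate of the probability a user shows the characteristic."""
        a, b = prior_a + successes, prior_b + failures
        if a <= 1 and b <= 1:
            raise ValueError("posterior mode undefined without evidence")
        return (a - 1) / (a + b - 2)   # mode of Beta(a, b)

    # Evidence: days a user's traffic triggered infection alerts vs. clean days
    print(f"reputation(infected) = {reputation(successes=3, failures=57):.3f}")
    ```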