Mechanical Engineering

Permanent URI for this community: http://hdl.handle.net/1903/2263

Search Results

Now showing 1 - 3 of 3
  • Organizational Interface Failures: Causes and Impacts on System Risk
    (2017) Pires, Thiago Tinoco; Mosleh, Ali; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Organizational interfaces (OIs) exist when two or more organizations interact to achieve objectives that would not be possible or feasible if they operated independently. When organizations become interdependent, an entirely new class of vulnerabilities emerges, and understanding these vulnerabilities is vital. Ideally, probabilistic risk assessments (PRAs) account for the reliability of hardware, software, humans, and the interfaces among them; yet from a reliability and PRA perspective, very little methodology is available for estimating how likely OI problems are to contribute to risk. The objectives of this work are to address the following questions: 1) Are OIs important contributors to risk? 2) In what ways do OIs fail? 3) Can a causal model of OI failures be developed? 4) Can the reliability discipline be improved to incorporate the effects of OI failures? The importance of OIs as contributors to risk was confirmed by investigating past accidents in different industrial and service sectors and identifying evidence of how OI failures played a role. A set of OI characteristics was proposed to explain how deficiencies or enhancements in those characteristics can lead to, or mitigate and prevent, OI failures. These characteristics are derived from insights gained from the accidents reviewed and from a review of organizational behavior theories and models. The OI characterization was used to propose a Bayesian belief network (BBN) causal model of OI failures for communication transfer. The model is built from a study conducted to gather empirical evidence on whether OI failures depend on the OI characteristics; the evidence was collected through a survey questionnaire on the causal factors of OI failures. The OI characterization was also used to develop an OI Failure Mode and Effects Analysis (OI-FMEA) as a tool for incorporating the effects of OI failures into system failure analysis (an illustrative FMEA scoring sketch appears after this results list). The OI-FMEA was exercised to test whether it enhances current Dynamic Positioning FMEA practices in the deepwater oil and gas well drilling industry. The exercise demonstrated that the OI-FMEA concepts are a powerful tool for identifying serious risk scenarios not previously recognized.
  • Bayesian Belief Network and Fuzzy Logic Adaptive Modeling of Dynamic System: Extension and Comparison
    (2010) Cheng, Ping Danny; Modarres, Mohammad; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The purpose of this thesis is to develop, expand, compare, and contrast two methodologies, Bayesian belief networks (BBN) and fuzzy logic modeling (FLM), which are used to model the dynamics of physical system behavior and are instrumental in better understanding the POF. The thesis begins with an introduction to the proposed approaches for modeling complex physical systems, followed by a brief literature review of FLM and BBN. An existing pump system [3] is used as a case study, in which the net positive suction head available (NPSHA) data obtained from the BBN and FLM applications are compared with the outputs of a mathematical model (a minimal NPSHA calculation sketch appears after this results list). Based on these findings, discussion and analysis are provided, including identification of the respective strengths and weaknesses of the two methodologies. Finally, further extensions and improvements to this research are discussed at the end of the thesis.
  • Hybrid Causal Logic Methodology for Risk Assessment
    (2007-11-27) Wang, Chengdong; Mosleh, Ali; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Probabilistic Risk Assessment (PRA) is being increasingly used in a number of industries such as nuclear, aerospace, and chemical process, to name a few. PRA characterizes risk in terms of three questions: (1) What can go wrong? (2) How likely is it? (3) What are the consequences? PRA studies answer these questions by systematically postulating and quantifying undesired scenarios in a highly integrated, top-down fashion. The PRA process for technological systems typically includes the following steps: objective and scope definition, system familiarization, identification of initiating events, scenario modeling, quantification, uncertainty analysis, sensitivity analysis, importance ranking, and data analysis. Fault trees and event trees are widely used tools for risk scenario analysis in PRAs of technological systems, and this methodology is most suitable for systems made of hardware components.

    A more comprehensive treatment of the risks of technical systems needs to consider the entire environment within which such systems are designed and operated. This environment includes the physical environment, the socio-economic environment, and in some cases the regulatory and oversight environment. The technical system, supported by an organization of people in charge of its operation, sits at the intersection of these environments. To develop a more comprehensive risk model for these systems, an important step is to extend the modeling capabilities of conventional PRA methodology to include risks associated with human activities and organizational factors, in addition to hardware and software failures and adverse conditions of the physical environment. The causal modeling should also extend to the influence of regulatory and oversight functions.

    This research offers such a methodology. It proposes a multi-layered modeling approach so that the most appropriate techniques are applied to the different individual domains of the system. The approach is called the Hybrid Causal Logic (HCL) methodology. The main layers include: (a) a model to define the safety/risk context, built with the event sequence diagram (ESD) method, which helps define the kinds of accidents and incidents that can occur in relation to the system being considered; (b) a model that captures the behavior of the physical system (hardware, software, and environmental factors) as possible causes of, or contributing factors to, the accidents and incidents delineated by the event sequence diagrams, built with common system modeling techniques such as fault trees (FT); and (c) a model that extends the causal chain of events to its potential human and organizational roots, built with Bayesian belief networks (BBN). Bayesian belief networks are particularly useful because they do not require complete knowledge of the relation between causes and effects. The integrated model is therefore a hybrid causal model with corresponding sets of taxonomies and analytical and computational procedures (a minimal fault-tree/BBN linkage sketch appears after this results list). In this research, a methodology to combine fault trees, event trees or event sequence diagrams, and Bayesian belief networks has been introduced. Since such hybrid models involve significant interdependencies, the nature of those dependencies is first determined to pave the way for developing proper algorithmic solutions of the logic model.

    Major achievements of this work are: (1) development of the Hybrid Causal Logic model concept and quantification algorithms; (2) development and testing of a computer implementation of the algorithms (collaborative work); (3) development and implementation of algorithms for HCL-based importance measures, an uncertainty propagation method for the BBN models, and algorithms for qualitative-quantitative Bayesian belief networks; and (4) development and testing of the Integrated Risk Information System (IRIS) software based on the HCL methodology.
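
As a companion to the first item above, here is a minimal, illustrative sketch of the FMEA-style scoring idea an OI-FMEA builds on: failure modes of an organizational interface are rated for severity, occurrence, and detection and ranked by risk priority number (RPN). The interfaces, failure modes, scores, and the `rpn` helper below are hypothetical examples, not the scales or worksheets defined in the dissertation.

```python
from dataclasses import dataclass

@dataclass
class OIFailureMode:
    """One failure mode of an organizational interface (illustrative only)."""
    interface: str      # which organizational interface is involved
    failure_mode: str   # how the interface can fail (e.g., message not passed on)
    severity: int       # 1 (negligible) .. 10 (catastrophic)
    occurrence: int     # 1 (rare) .. 10 (frequent)
    detection: int      # 1 (almost certainly detected) .. 10 (undetectable)

    def rpn(self) -> int:
        """Classic FMEA risk priority number: severity x occurrence x detection."""
        return self.severity * self.occurrence * self.detection

# Hypothetical failure modes for interfaces in a deepwater drilling operation.
modes = [
    OIFailureMode("operator <-> drilling contractor", "handover report omitted", 8, 4, 6),
    OIFailureMode("operator <-> drilling contractor", "conflicting procedures used", 7, 3, 5),
    OIFailureMode("contractor <-> service company", "alarm escalation delayed", 9, 2, 7),
]

# Rank by RPN, highest (most urgent) first.
for m in sorted(modes, key=lambda m: m.rpn(), reverse=True):
    print(f"RPN {m.rpn():4d}  {m.interface}: {m.failure_mode}")
```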
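
For the second item, a minimal sketch of the deterministic quantity being compared: NPSHA for a pump fed from an open tank. The fluid properties, elevations, and friction loss below are assumed example values rather than the pump system of reference [3], and the formula is the standard textbook expression, not the thesis's specific mathematical model.

```python
def npsha_m(p_atm_pa: float, p_vapor_pa: float, rho_kg_m3: float,
            h_static_m: float, h_friction_m: float, g: float = 9.81) -> float:
    """Net positive suction head available, in meters of liquid.

    Standard textbook form: pressure head at the liquid surface minus vapor
    pressure head, plus static suction head, minus suction-line friction loss.
    """
    return (p_atm_pa - p_vapor_pa) / (rho_kg_m3 * g) + h_static_m - h_friction_m

# Assumed example: water at 20 C drawn from an open tank 2 m above the pump,
# with 0.8 m of friction loss in the suction line.
value = npsha_m(p_atm_pa=101_325.0, p_vapor_pa=2_339.0,
                rho_kg_m3=998.0, h_static_m=2.0, h_friction_m=0.8)
print(f"NPSHA ~ {value:.2f} m")   # roughly 11.3 m for these inputs
```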
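
For the third item, a minimal sketch of the hybrid linkage idea: a two-node Bayesian belief network supplies the probability of one basic event in a small fault tree, so an organizational condition influences the hardware-level top-event probability. The node names, probabilities, and independence assumptions are illustrative only, not the HCL quantification algorithms or the IRIS implementation.

```python
# Tiny BBN: an organizational condition ("poor maintenance oversight") influences
# the probability that a valve fails on demand.
p_poor_oversight = 0.2
p_valve_fail = {True: 0.05, False: 0.01}   # P(valve fails | oversight poor?)

# Marginalize the BBN to get the basic-event probability fed into the fault tree.
p_valve = (p_poor_oversight * p_valve_fail[True]
           + (1 - p_poor_oversight) * p_valve_fail[False])

# Small fault tree, with basic events assumed independent:
# TOP = (valve fails AND pump fails) OR sensor fails
p_pump = 0.02
p_sensor = 0.001

p_and = p_valve * p_pump                      # AND gate
p_top = 1 - (1 - p_and) * (1 - p_sensor)      # OR gate (no rare-event approximation)

print(f"P(valve fails) = {p_valve:.4f}")
print(f"P(top event)   = {p_top:.6f}")
```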