Mechanical Engineering
Permanent URI for this community: http://hdl.handle.net/1903/2263
Search Results (9 items)
Item: DYNAMIC PROGNOSTIC HEALTH MANAGEMENT FOR RESPONSE TIME BASED REMAINING USEFUL LIFE PREDICTION OF SOFTWARE SYSTEMS (2022)
Islam, Mohammad Rubyet; Sandborn, Peter A.; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Prognostics and Health Management (PHM) is an engineering discipline focused on predicting the future point at which systems or components will no longer perform as intended. The prediction is often articulated as a Remaining Useful Life (RUL). PHM has been widely applied to hardware systems in the electronics and non-electronics domains but has not been explored for software applications. While software does not decay over time, it can degrade over release cycles. Software degradation is a common problem faced by legacy systems. Today, software health management is confined to diagnostic assessments that identify problems. In contrast, prognostic assessment potentially indicates what problems will become detrimental to the operation of the system in the future. Relevant research areas such as software defect prediction, software reliability prediction, predictive maintenance of software, software degradation, and software performance prediction exist, but all of these represent diagnostic models built upon historical data; none of them can predict an RUL for software. This dissertation addresses the application of PHM concepts to software systems for fault prediction and RUL estimation. Specifically, this dissertation addresses how PHM can be used to make decisions for software systems such as version updates/upgrades, module changes, rejuvenation, maintenance schedules, and abandonment. This dissertation presents a method to prognostically and continuously predict the RUL of a software system based on usage parameters (e.g., the numbers and categories of releases) and performance parameters (e.g., response time). The model developed in this dissertation has been validated by comparison with actual data generated using test beds. Statistical validation (regression validation) has also been carried out. A case study is presented based on publicly available data for the Bugzilla application. Controlled test beds for multiple Bugzilla releases were prepared as standard staging environments to populate the relevant data. This case study demonstrates that PHM concepts can be applied to software systems, and that an RUL can be calculated to make decisions on software management.
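The response-time-based RUL idea described in this abstract can be pictured with a minimal sketch: fit a degradation trend to per-release response-time measurements and extrapolate to a failure threshold. This is not the dissertation's model; the linear trend, the 400 ms threshold, and the sample data below are assumptions made purely for illustration.

```python
import numpy as np

def estimate_rul(release_index, response_time_ms, threshold_ms):
    """Illustrative RUL estimate: fit a linear degradation trend to mean
    response time per release and extrapolate to the release index at which
    the trend crosses the failure threshold."""
    slope, intercept = np.polyfit(release_index, response_time_ms, deg=1)
    if slope <= 0:
        return float("inf")  # no degradation trend observed; RUL unbounded
    crossing = (threshold_ms - intercept) / slope  # release index at threshold
    return max(crossing - release_index[-1], 0.0)  # remaining releases

# Hypothetical data: mean response time measured on a test bed per release.
releases = np.array([1, 2, 3, 4, 5, 6])
resp_ms = np.array([210.0, 220.0, 228.0, 241.0, 255.0, 262.0])
print(estimate_rul(releases, resp_ms, threshold_ms=400.0))
```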
Item: Towards Trust and Transparency in Deep Learning Systems through Behavior Introspection & Online Competency Prediction (2021)
Allen, Julia Filiberti; Gabriel, Steven A.; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Deep neural networks are naturally “black boxes”, offering little insight into how or why they make decisions. These limitations diminish the likelihood that such systems will be adopted for important tasks and as trusted teammates. We employ introspective techniques to abstract machine activation patterns into human-interpretable strategies and identify relationships between environmental conditions (why), strategies (how), and performance (result) on both a deep reinforcement learning two-dimensional pursuit game application and an image-based deep supervised learning obstacle recognition application. Pursuit-evasion games have been studied for decades under perfect information and analytically derived policies for static environments. We incorporate uncertainty in a target's position via simulated measurements and demonstrate a novel continuous deep reinforcement learning approach against speed-advantaged targets. The resulting approach was tested under many scenarios, and its performance exceeded that of a baseline course-aligned strategy. We manually observed the separation of learned pursuit behaviors into strategy groups and manually hypothesized environmental conditions that affected performance. These manual observations motivated the automation and abstraction of condition, performance, and strategy relationships. Next, we found that deep network activation patterns could be abstracted into human-interpretable strategies for two separate deep learning approaches. We characterized machine commitment by introducing a novel measure and revealed significant correlations between machine commitment, strategies, environmental conditions, and task performance. As such, we motivated online exploitation of machine behavior estimation for competency-aware intelligent systems. Finally, we realized online prediction capabilities for conditions, strategies, and performance. Our competency-aware machine learning approach is easily portable to new applications due to its Bayesian nonparametric foundation, wherein all inputs are transformed into the same compact data representation. In particular, image data is transformed into a probability distribution over features extracted from the data. The resulting transformation forms a common representation for comparing two images, possibly from different types of sensors. By uncovering relationships between environmental conditions (why), machine strategies (how), and performance (result), and by giving rise to online estimation of machine competency, we increase transparency and trust in machine learning systems, contributing to the overarching explainable artificial intelligence initiative.
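The abstraction of activation patterns into strategies can be illustrated with a minimal sketch that clusters hidden-layer activation vectors and relates the resulting groups to task performance. The dissertation's approach rests on a Bayesian nonparametric foundation; the k-means clustering, synthetic activations, and cluster count below are illustrative assumptions only.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-in for hidden-layer activations collected per episode.
activations = rng.normal(size=(500, 64))   # 500 episodes x 64 units
performance = rng.normal(size=500)         # per-episode task performance

# Group activation patterns into a small number of candidate "strategies".
strategies = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(activations)

# Relate strategies to performance: mean performance per strategy cluster.
for s in range(4):
    print(s, performance[strategies == s].mean())
```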
Item: A COMPREHENSIVE EVALUATION OF FEATURE-BASED MALICIOUS WEBSITE DETECTION (2020)
McGahagan, John Francis; Cukier, Michel; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Although the internet enables many important functions of modern life, it is also a ground for nefarious activity by malicious actors and cybercriminals. For example, malicious websites facilitate phishing attacks, malware infections, data theft, and disruption. A major component of cybersecurity is to detect and mitigate attacks enabled by malicious websites. Although prior researchers have presented promising results, specifically in the use of website features to detect malicious websites, malicious website detection continues to pose major challenges. This dissertation presents an investigation into feature-based malicious website detection. We conducted six studies on malicious website detection, with a focus on discovering new features for malicious website detection, challenging assumptions about features from prior research, comparing the importance of the features for malicious website detection, building and evaluating detection models over various scenarios, and evaluating malicious website detection models across different datasets and over time. We evaluated this approach on various datasets, including: a dataset composed of several threats from industry; a dataset derived from the Alexa top one million domains and supplemented with open-source threat intelligence information; and a dataset consisting of websites gathered repeatedly over time. Results led us to postulate that new, unstudied features could be incorporated to improve malicious website detection models, since, in many cases, models built with new features outperformed models built from features used in prior research and did so with fewer features. We also found that features discovered using feature selection could be applied to other datasets with minor adjustments. In addition, we demonstrated that the performance of detection models decreased over time; we measured the change of websites in relation to our detection model; and we demonstrated the benefit of re-training in various scenarios.
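A minimal sketch of feature-based detection as described above: train a classifier on a feature table from one snapshot of websites, evaluate it on a later snapshot to observe how performance changes over time, and rank feature importance. The random-forest model, the synthetic feature matrix, and the F1 metric are assumptions for illustration, not the dissertation's specific models or datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)

# Hypothetical feature matrix: one row per website, columns are features
# (e.g., host-based, content-based, or network-based measurements).
X_old = rng.normal(size=(2000, 30))          # older snapshot
y_old = rng.integers(0, 2, 2000)             # 1 = malicious, 0 = benign
X_new = rng.normal(size=(500, 30))           # snapshot gathered later
y_new = rng.integers(0, 2, 500)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_old, y_old)

# Evaluate on data gathered later to observe performance change over time.
print("F1 on later snapshot:", f1_score(y_new, model.predict(X_new)))

# Rank features by importance to compare against features from prior research.
top = np.argsort(model.feature_importances_)[::-1][:5]
print("Most important feature indices:", top)
```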
Item: Some Guidelines for Risk Assessment of Vulnerability Discovery Processes (2019)
Movahedi, Yazdan; Cukier, Michel; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Software vulnerabilities can be defined as software faults that can be exploited as a result of security attacks. Security researchers have used data from vulnerability databases to study trends in the discovery of new vulnerabilities, to propose models for fitting the discovery times, and to predict when new vulnerabilities may be discovered. Estimating the discovery times for new vulnerabilities is useful for both vendors and end-users, as it can help with resource allocation strategies over time. Among the research conducted on vulnerability modeling, only a few studies have tried to provide a guideline about which model should be used in a given situation. In other words, assuming the vulnerability data for a software system is given, the research questions are the following: Is there any feature in the vulnerability data that could be used to identify the most appropriate models for that dataset? Which models are more accurate for modeling the vulnerability discovery process? Can the total number of publicly known exploited vulnerabilities be predicted using all vulnerabilities reported for a given software system? To answer these questions, we propose to characterize the vulnerability discovery process using several common software reliability/vulnerability discovery models, also known as Software Reliability Models (SRMs)/Vulnerability Discovery Models (VDMs). We consider different aspects of vulnerability modeling, including curve fitting and prediction. Some existing SRMs/VDMs lack accuracy in the prediction phase. To remedy the situation, three strategies are considered: (1) Finding a new approach for analyzing vulnerability data using common models: we examine the effect of data manipulation techniques (i.e., clustering, grouping) on vulnerability data and investigate whether they lead to more accurate predictions. (2) Developing a new model that has better curve-fitting and prediction capabilities than current models. (3) Developing a new method to predict the total number of publicly known exploited vulnerabilities using all vulnerabilities reported for a given software system. The dissertation is intended to contribute to the science of software reliability analysis and presents some guidelines for vulnerability risk assessment that could be integrated as part of security tools, such as Security Information and Event Management (SIEM) systems.
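Curve fitting of a vulnerability discovery process can be sketched by fitting one common SRM, the Goel-Okumoto model, to cumulative vulnerability counts and extrapolating. The model choice, the monthly counts, and the starting parameters below are illustrative assumptions; the dissertation compares several SRMs/VDMs rather than prescribing this one.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Goel-Okumoto mean value function: expected cumulative discoveries by time t."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical monthly cumulative vulnerability counts for one software product.
t = np.arange(1, 25)
cumulative = np.array([2, 5, 9, 14, 18, 23, 27, 30, 34, 36, 39, 41,
                       43, 45, 46, 48, 49, 50, 51, 52, 52, 53, 53, 54])

(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, cumulative, p0=(60.0, 0.1))
print("Estimated total vulnerabilities:", a_hat)
print("Predicted cumulative count at month 36:", goel_okumoto(36, a_hat, b_hat))
```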
Item: QUANTIFYING AND PREDICTING USER REPUTATION IN A NETWORK SECURITY CONTEXT (2019)
Gratian, Margaret Stephanie; Cukier, Michel; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Reputation has long been an important factor for establishing trust and evaluating the character of others. Though subjective by definition, it has recently emerged in the field of cybersecurity as a metric to quantify and predict the nature of domain names, IP addresses, files, and more. Implicit in the use of reputation to enhance cybersecurity is the assumption that past behaviors and the opinions of others provide insight into the expected future behavior of an entity, which can be used to proactively identify potential threats to cybersecurity. Despite the plethora of work in industry and academia on reputation in cyberspace, proposed methods are often presented as black boxes and lack scientific rigor, reproducibility, and validation. Moreover, despite widespread recognition that cybersecurity solutions must consider the human user, there is limited work focusing on user reputation in a security context. This dissertation presents a mathematical interpretation of user cyber reputation and a methodology for evaluating reputation in a network security context. A user’s cyber reputation is defined as the most likely probability that the user demonstrates a specific characteristic on the network, based on evidence. The methodology for evaluating user reputation is presented in three phases: characteristic definition and evidence collection; reputation quantification and prediction; and reputation model validation and refinement. The methodology is illustrated through a case study on a large university network, where network traffic data is used as evidence to determine the likelihood that a user becomes infected or remains uninfected on the network. A separate case study explores social media as an alternate source of data for evaluating user reputation. User-reported account compromise data is collected from Twitter and used to predict whether a user will self-report compromise. This case study uncovers user cybersecurity experiences and victimization trends and emphasizes the feasibility of using social media to enhance understanding of users from a security perspective. Overall, this dissertation presents an exploration into the complicated space of cyber identity. As new threats to security, user privacy, and information integrity continue to manifest, the need for reputation systems and techniques to evaluate and validate online identities will continue to grow.
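One way to read the definition of reputation as the most likely probability that a user demonstrates a characteristic, based on evidence, is as the mode of a Beta posterior under Bernoulli evidence. The sketch below follows that reading; the uniform prior, the evidence counts, and the function name are assumptions made for illustration and are not taken from the dissertation.

```python
def reputation_map(successes, failures, alpha=1.0, beta=1.0):
    """Most likely probability (posterior mode) that a user exhibits a
    characteristic, under a Beta(alpha, beta) prior and Bernoulli evidence."""
    a = alpha + successes
    b = beta + failures
    if a > 1 and b > 1:
        return (a - 1.0) / (a + b - 2.0)   # mode of Beta(a, b)
    return a / (a + b)                     # fall back to the posterior mean

# Hypothetical evidence: days a user's traffic showed no signs of infection
# versus days flagged by a network monitor.
print(reputation_map(successes=45, failures=5))
```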
Item: PRIVACY IN DISTRIBUTED MULTI-AGENT COLLABORATION: CONSENSUS AND OPTIMIZATION (2018)
Gupta, Nirupam; Chopra, Nikhil; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Distributed multi-agent collaboration is an interactive algorithm that enables agents in a multi-agent system (MAS) to achieve a pre-defined collaboration objective in a distributed manner, such as agreeing upon a common value (commonly referred to as distributed consensus) or optimizing the aggregate cost of the MAS (commonly referred to as distributed optimization). Agents participating in a typical distributed multi-agent collaboration algorithm can lose privacy of their inputs (containing private information) to a passive adversary in two ways. The adversary can learn about agents' inputs either by corrupting some of the agents that are participating in the collaboration algorithm or by eavesdropping on the communication links between the agents during an execution of the collaboration algorithm. Privacy of the agents' inputs in the former case is referred to as internal privacy, and privacy of the agents' inputs in the latter case is referred to as external privacy. This dissertation proposes a protocol for preserving internal privacy in two particular distributed collaborations: distributed average consensus and distributed optimization. It is shown that the proposed protocol can preserve the internal privacy of sufficiently well-connected honest agents (agents that are not corrupted by the adversary) against adversarial agents (agents that are corrupted by the adversary), without affecting the collaboration objective. This dissertation also investigates a model-based scheme, as an alternative to cryptographic encryption, for external privacy in distributed collaboration algorithms that can be modeled as linear time-invariant networked control systems. It is demonstrated that the model-based scheme preserves external privacy, without affecting the collaboration objective, if the system parameters of the networked control system that equivalently models the distributed collaboration algorithm satisfy certain conditions. Unlike cryptographic encryption, the model-based scheme does not rely on secure generation and distribution of keys amongst the agents for guaranteeing external privacy.
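The flavor of internal-privacy-preserving average consensus can be conveyed with a minimal sketch in which neighboring agents exchange zero-sum random offsets before running ordinary consensus, so the network average is unchanged while individual inputs are masked. This well-known masking idea is used only for illustration here; it is not the protocol proposed in the dissertation, and the cycle topology, noise scale, and weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Undirected cycle over 5 agents; x holds each agent's private input.
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]
x = np.array([3.0, 7.0, 1.0, 9.0, 5.0])

# Masking step: for each edge (i, j), the pair agrees on a random offset r;
# agent i adds +r and agent j adds -r, so the network sum (and hence the
# average) is unchanged while individual masked values hide the true inputs.
masked = x.copy()
for i, j in edges:
    r = rng.normal(scale=10.0)
    masked[i] += r
    masked[j] -= r

# Standard average-consensus iterations on the masked values
# (Metropolis weights: each node on the cycle has degree 2, so 1/3 per edge).
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0 / 3.0
np.fill_diagonal(W, 1.0 - W.sum(axis=1))

z = masked.copy()
for _ in range(200):
    z = W @ z

print("True average:", x.mean(), "Consensus value:", z[0])
```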
Item: Computational Foundations for Safe and Efficient Human-Robot Collaboration in Assembly Cells (2016)
Morato, Carlos W.; Gupta, Satyandra K.; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Humans and robots have complementary strengths in performing assembly operations. Humans are very good at perception tasks in unstructured environments. They are able to recognize and locate a part from a box of miscellaneous parts. They are also very good at complex manipulation in tight spaces. Their sensory characteristics, motor abilities, knowledge, and skills give humans the ability to react to unexpected situations and resolve problems quickly. In contrast, robots are very good at pick-and-place operations and are highly repeatable in placement tasks. Robots can perform tasks at high speeds and still maintain precision in their operations. Robots can also operate for long periods of time. Robots are also very good at applying high forces and torques. Typically, robots are used in mass production. Small-batch and custom production operations predominantly use manual labor. The high labor cost is making it difficult for small and medium manufacturers to remain cost competitive in high-wage markets. These manufacturers are mainly involved in small-batch and custom production. They need to find a way to reduce the labor cost in assembly operations. Purely robotic cells will not be able to provide them the necessary flexibility. Creating hybrid cells where humans and robots can collaborate in close physical proximity is a potential solution. The underlying idea behind such cells is to decompose assembly operations into tasks such that humans and robots can collaborate by performing sub-tasks that are suitable for them. Realizing hybrid cells that enable effective human and robot collaboration is challenging. This dissertation addresses the following three computational issues involved in developing and utilizing hybrid assembly cells:
- We should be able to automatically generate plans to operate hybrid assembly cells to ensure efficient cell operation. This requires generating feasible assembly sequences and instructions for robots and human operators, respectively. Automated planning poses the following two challenges. First, generating operation plans for complex assemblies is challenging. The complexity can come from the combinatorial explosion caused by the size of the assembly or from the complex paths needed to perform the assembly. Second, generating feasible plans requires accounting for robot and human motion constraints. The first objective of the dissertation is to develop the underlying computational foundations for automatically generating plans for the operation of hybrid cells. It addresses both assembly complexity and motion constraint issues.
- The collaboration between humans and robots in the assembly cell will only be practical if human safety can be ensured during the assembly tasks that require collaboration between humans and robots. The second objective of the dissertation is to evaluate different options for real-time monitoring of the state of the human operator with respect to the robot and to develop strategies for taking appropriate measures to ensure human safety when the planned move by the robot may compromise the safety of the human operator. In order to be competitive in the market, the developed solution will have to include considerations about cost without significantly compromising quality.
- In the envisioned hybrid cell, we will be relying on human operators to bring the part into the cell. If the human operator makes an error in selecting the part or fails to place it correctly, the robot will be unable to correctly perform the task assigned to it. If the error goes undetected, it can lead to a defective product and inefficiencies in the cell operation. The reason for human error can be either confusion due to poor-quality instructions or the human operator not paying adequate attention to the instructions. In order to ensure smooth and error-free operation of the cell, we will need to monitor the state of the assembly operations in the cell. The third objective of the dissertation is to identify and track parts in the cell and automatically generate instructions for taking corrective actions if a human operator deviates from the selected plan. Potential corrective actions may involve re-planning if it is possible to continue assembly from the current state. Corrective actions may also involve issuing warnings and generating instructions to undo the current task.

Item: An Explanatory Model of Motivation for Cyber-Attacks Drawn from Criminological Theories (2013)
Mandelcorn, Seymour Mordechai; Modarres, Mohammad; Mosleh, Ali; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
A new influence model for cybersecurity is presented that deals with security attacks and the implementation of security measures from an attacker's perspective. The underlying hypothesis of this model is that the criminological theories of Rational Choice, Desire for Control, and Low Self-Control are relevant to cybercrime and thereby aid in understanding its basic motivation. The model includes the roles of Consequences and Moral Beliefs, such as Shame and Embarrassment, together with Formal Sanctions in deterring cybercrime, as well as the role of Defense Posture in limiting the Opportunity to attack and increasing the likelihood that an attacker will be detected and exposed. One of the motivations of the study was the observation that few attempts have been made to understand cybercrime in the context of typical crime because: (a) an attacker may consider his actions victimless due to the remoteness of the victim; (b) it is easy to commit cybercrimes due to the opportunities afforded by the Internet and its accessibility, and the readily available tools and knowledge for an attack; and (c) the vagueness of cybercrime laws makes prosecution difficult. In developing the model, information from studies of classical crime was related to cybercrime, allowing for analysis of past cyber-attacks and, subsequently, prevention of future IS attacks or mitigation of their effects. The influence model's applicability is demonstrated by applying it to case studies of actual information attacks that were prosecuted through the United States courts and whose judges' opinions are used as statements of fact. An additional demonstration of the use and face validity of the model is the mapping of the model to the results of major annual computer crime surveys and reports. The model is useful in qualitatively explaining "best practices" in protecting information assets and in suggesting emphasis on security practices based on similar results in general criminology.

Item: Application of Stochastic Reliability Modeling to Waterfall and Feature Driven Development Software Development Lifecycles (2011)
Johnson, David Michael; Modarres, Mohammad; Smidts, Carol S.; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
There are many techniques for performing software reliability modeling. In the software development environment, some models use the stochastic nature of fault introduction and fault removal to predict reliability. This thesis analyzes a stochastic approach to software reliability modeling and its performance on two distinct software development lifecycles. The derivation of the model is applied to each lifecycle, and contrasts between the lifecycles are shown. Actual data collected from industry projects illustrates the performance of the model for each lifecycle. Actual software development fault data is used in select phases of each lifecycle for comparison with the model-predicted fault data. Various enhancements to the model are presented and evaluated, including optimization of the parameters based on partial observations.
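The stochastic treatment of fault introduction and removal across lifecycle phases can be illustrated with a small Monte Carlo sketch: faults are introduced in each phase according to a Poisson draw and removed according to a binomial draw, and the residual fault count is tracked. The phase names, rates, and removal probabilities are assumptions for illustration; they are not the thesis's model parameters or its industry data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative per-phase parameters for a waterfall-style lifecycle:
# expected faults introduced in each phase and the probability that a
# residual fault is detected and removed during that phase.
phases = ["requirements", "design", "implementation", "test", "operation"]
introduction_rate = [8, 15, 40, 5, 2]        # expected new faults per phase
removal_probability = [0.2, 0.3, 0.4, 0.7, 0.5]

def simulate_residual_faults(n_runs=10000):
    """Monte Carlo estimate of residual faults after all lifecycle phases."""
    residual = np.zeros(n_runs)
    for lam, p in zip(introduction_rate, removal_probability):
        residual += rng.poisson(lam, size=n_runs)           # stochastic fault introduction
        residual -= rng.binomial(residual.astype(int), p)   # stochastic fault removal
    return residual

runs = simulate_residual_faults()
print("Mean residual faults after all phases:", runs.mean())
```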