Mechanical Engineering Theses and Dissertations
Permanent URI for this collection: http://hdl.handle.net/1903/2795
Recent Submissions
Item Equilibrium Programming for Improved Management of Water-Resource Systems (2024) Boyd, Nathan Tyler; Gabriel, Steven A; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Effective water-resources management requires the joint consideration of multiple decision-makers as well as the physical flow of water in both built and natural environments. Traditionally, game-theory models were developed to explain the interactions of water decision-makers such as states, cities, industries, and regulators. These models account for socio-economic factors such as water supply and demand, but they often lack insight into how water or pollution should be physically managed with respect to overland flow, streams, reservoirs, and infrastructure. Conversely, optimization-based models have accounted for these physical features but usually assume a single decision-maker who acts as a central planner. Equilibrium programming, developed in the field of operations research, provides a solution to this modeling dilemma. First, it can incorporate the optimization problems of multiple decision-makers into a single model. Second, it can also model the socio-economic interactions of these decision-makers, such as a market for balancing water supply and demand. Equilibrium programming has been widely applied to energy problems, but a few recent works have begun to explore applications in water-resource systems. These works model water-allocation markets subject to the flow of water supply from upstream to downstream, as well as the nexus of water-quality management with energy markets. This dissertation applies equilibrium programming to a broader set of physical characteristics and socio-economic interactions than these recent works. Chapter 2 also focuses on the flow of water from upstream to downstream but incorporates markets for water recycling and reuse. Chapter 3 also focuses on water-quality management but uses a credit market to implement water-pollution regulations in a globally optimal manner. Chapter 4 explores alternative conceptions of socio-economic interaction beyond market-based approaches; specifically, social learning is modeled as a means to lower the cost of water-treatment technologies. This dissertation's research contributions are significant to both the operations research community and the water-resources community. For the operations research community, the models in this dissertation could serve as archetypes for future research into equilibrium programming and water-resource systems; Chapter 1 organizes the research in terms of three themes: stream, land, and sea. For the water-resources community, this dissertation could make equilibrium programming more relevant in practice: Chapter 2 applies it to the Duck River Watershed (Tennessee, USA), and Chapter 3 applies it to the Anacostia River Watershed (Washington DC and Maryland, USA). The results also reinforce the importance of the relationships between socio-economic interactions and physical features in water-resource systems. However, the risk aversion of the players plays an important mediating role in the significance of these relationships. Future research could investigate mechanisms for the emergence of altruistic decision-making to improve equity among the players in water-resource systems.
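As a schematic illustration of the equilibrium-programming structure described above (a generic sketch in my own notation, not the dissertation's formulation): each player i solves an individual optimization problem, and stacking every player's KKT optimality conditions together with a market-clearing condition yields a single mixed complementarity problem (MCP).

```latex
\[
\max_{x_i \ge 0}\; \pi_i(x_i, x_{-i}, p)
\quad \text{s.t.} \quad g_i(x_i) \le 0 \quad (\lambda_i),
\]
\[
0 \le x_i \;\perp\; -\nabla_{x_i}\pi_i + \lambda_i^{\top}\nabla_{x_i} g_i \;\ge\; 0,
\qquad
0 \le \lambda_i \;\perp\; -g_i(x_i) \;\ge\; 0,
\qquad
p \ \text{free}, \quad \sum_i d_i(p) = \sum_i s_i(p).
\]
```

Solving all players' conditions simultaneously, rather than one central planner's problem, is what lets market interactions such as water allocation or pollution-credit trading be represented alongside physical flow constraints.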
Item MOLD PROCESS INDUCED RESIDUAL STRESS PREDICTION USING CURE EXTENT DEPENDENT VISCOELASTIC BEHAVIOR (2024) Phansalkar, Sukrut Prashant; Han, Bongtae; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Epoxy molding compounds (EMC) are widely used in the encapsulation of semiconductor packages. Encapsulation protects the package from physical damage or corrosion due to harsh environments. Molding processes produce residual stresses in encapsulated components. These combine with the stresses caused by coefficient of thermal expansion (CTE) mismatch to dictate the final warpage at room and reflow temperatures, which must be controlled for fabrication of the redistribution layer (RDL) as well as for yield during assembly. During the molding process, the EMC is continuously curing and its mechanical properties continue to evolve, specifically the equilibrium modulus and the relaxation modulus: the former defines behavior after complete relaxation, while the latter describes the transient behavior. It is thus necessary to measure the cure-dependent viscoelastic properties of EMC to determine mold-induced residual stresses accurately. This is the motivation for this thesis. In this thesis, a set of novel methodologies is developed and implemented to quantify a complete set of cure-dependent viscoelastic properties, including the fully cured properties. Firstly, an advanced numerical scheme has been developed to quantify the cure kinetics of thermosets with both single- and dual-reaction systems. Secondly, a unique methodology is proposed to measure the viscoelastic bulk modulus, K(t,T), of EMC using hydrostatic testing; the methodology is implemented with a unique test setup based on inert gas. The results of viscoelastic testing, along with the shear modulus (G) and bulk modulus (K) master curves and temperature-dependent shift factors a(T) of fully cured EMC, are presented. Thirdly, a novel test methodology utilizing monotonic testing has been developed to measure two sets of equilibrium moduli of EMC as a function of cure extent, p. The main challenge is that the equilibrium moduli can be measured accurately only when the EMC has fully relaxed, and the temperatures for complete relaxation typically lie above the glass transition temperature, Tg(p), where the curing rate is also high. A special measurement procedure is developed, through which the EMC moduli above Tg can be determined quickly at a near-isocure state. Viscoelastic testing of partially cured EMC then follows to determine the cure-dependent shift factors of EMC; the test durations must be very long (several hours), so testing is conducted below Tg(p) of the EMC, where the reaction is slow (under diffusion control). Finally, a numerical scheme that can utilize the measured cure-dependent viscoelastic properties is developed. It is implemented to predict the residual stress evolution of molded packages during and after molding processes.
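For reference, relaxation behavior of the kind measured here is commonly represented by a Prony series with time-temperature superposition; the generic textbook form is shown below (the coefficients and the WLF shift form are illustrative, not the thesis's fitted results).

```latex
\[
G(t,T) = G_{\infty} + \sum_{i=1}^{N} G_i \, e^{-t/(a_T \tau_i)},
\qquad
\log a_T = \frac{-C_1\,(T - T_{\mathrm{ref}})}{C_2 + T - T_{\mathrm{ref}}},
\]
```

with an analogous series for the bulk modulus K(t,T); in a cure-dependent formulation, the coefficients and shift factors additionally become functions of the cure extent p.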
Item ANALYSIS OF THE LIFE-CYCLE COST AND CAPABILITY TRADEOFFS ASSOCIATED WITH THE PROCUREMENT AND SUSTAINMENT OF OPEN SYSTEMS (2024) Chen, Shao-Peng; Sandborn, Peter PS; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
System openness refers to the extent to which system components can be independently integrated, removed, managed, or replaced without adversely impacting the system. Openness (of a system and/or architecture), though intuitively understood, remains difficult to quantify in terms of its value for safety-, mission-, and infrastructure-critical systems. Examples of these critical systems include aircraft, rail, industrial controls, power generation, and defense systems; all are characterized by large procurement costs, large life-cycle sustainment costs, and very long support lives (e.g., it is not uncommon for these systems to be supported for 30 or more years). Generally, it is taken for granted that the use of open systems decreases the total life-cycle cost of a system. Leveraging existing open technology, including commercial-off-the-shelf (COTS) components, avoids many costs associated with designing custom components and reduces the time required for development and, eventually, refresh of a system. The use of open systems helps mitigate the effects of obsolescence, lengthens the system's support life, and allows for the incremental insertion of new technologies. Component design reuse also eliminates redundant components, thus reducing logistical costs. However, building systems from open standards and commercially available components often relies on generalized technology containing unnecessary additional functionality, which increases the system's complexity and adds new failure paths and additional qualification overhead. In other cases, it may be necessary to modify COTS components to meet performance requirements, thereby adding costs. In addition, the enterprise that manages the system often has no control over the supply chain for COTS components, which adds supply-disruption risk and introduces the risk of compromised components. Previous efforts to establish the value of openness have relied on highly qualitative analyses, with the results often articulated as intangible "openness scores". Such approaches do not provide sufficient information to make a business case or to understand the conditions under which life-cycle cost avoidance can be maximized (or whether there even is a cost avoidance). This dissertation is focused on creating a general model for quantifying the relationship between system openness and life-cycle cost that can be used to optimize system openness strategies for critical systems, an outcome that could significantly reduce system sustainment costs. This work is composed of the following tasks: 1) mining of public-source materials to solidify what is known and believed about the relationship between open-systems attributes and life-cycle costs; 2) development of a multivariate model and associated simulation that quantifies the relationship between openness and life-cycle cost for systems composed of hardware and software; 3) a case study of the life-cycle cost difference between two implementations of the same system with differing levels of openness (the US Navy A-RCI sonar system on Los Angeles and Ohio class submarines is the case study system for this dissertation); and 4) generalization of the understanding of the relationship between system life-cycle cost and openness through a generic analytic model. This model efficiently estimates the relationship between relative life-cycle cost and system openness, considering relevant parameters, and enables the determination of optimum system openness without the need for running a detailed simulation.
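To make the openness/life-cycle-cost tension concrete, the toy Monte Carlo sketch below contrasts procurement, obsolescence, and sustainment costs at different openness levels. Every parameter and functional form here is hypothetical, chosen only to illustrate the tradeoff the dissertation quantifies rigorously; it is not the dissertation's model.

```python
import random

def life_cycle_cost(openness, years=30, trials=2000):
    """Toy model: higher openness cheapens refreshes and reduces redesigns,
    but adds COTS modification cost and qualification overhead
    (all magnitudes hypothetical)."""
    total = 0.0
    for _ in range(trials):
        cost = 100.0 * (1.0 + 0.3 * openness)          # procurement, incl. COTS mods
        for _year in range(years):
            if random.random() < 0.05 * (1.0 - 0.5 * openness):
                cost += 40.0 * (1.0 - 0.6 * openness)  # obsolescence-driven redesign/refresh
            cost += 2.0 + 1.0 * openness               # sustainment + qualification overhead
        total += cost
    return total / trials

for w in (0.0, 0.5, 1.0):
    print(f"openness={w:.1f}  mean relative LCC={life_cycle_cost(w):.1f}")
```

Even in this toy, whether openness pays off depends on the relative sizes of the overhead and obsolescence parameters, which is exactly why a quantitative, system-specific model is needed.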
Item VOLUMETRIC SOLAR ABSORBING FLUIDS AND THEIR APPLICATIONS IN TWO-PHASE THERMOSYPHON (2024) Zhou, Jian; Yang, Bao; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
A two-phase thermosyphon is a passive system that utilizes gravity to transfer working fluids; the working fluid must undergo vaporization and condensation in the same system. Two-phase thermosyphons can also be used as solar collectors. Traditional solar collectors utilize surface absorbers to convert incident solar radiation into thermal energy, but those systems feature a large temperature difference between the surface absorbers and the heat transfer fluids, resulting in a reduction in overall thermal efficiency. Volumetric solar absorbing fluids serve as both solar absorbers and heat transfer fluids, thereby significantly improving the overall efficiency of solar collectors. Compared to pure fluids, nanofluids possess both enhanced thermal conductivity and enhanced solar absorption capacity as volumetric absorbing fluids. However, nanofluids serving as volumetric solar absorbing fluids have so far been reported to work only at relatively low temperatures and in a single-phase heat transfer regime, due to stability issues. This research investigates the possibility of using nanofluids, especially graphene oxide (GO) nanofluids, as volumetric solar absorbing fluids in two-phase thermosyphons. Despite their reputation as among the most stable and solar-absorptive nanofluids, graphene oxide nanofluids still deteriorate quickly under boiling-condensation processes (~100 °C): the solar transmittance of the GO nanofluids declines from 38 to 4% during the first 24 h of testing. Further investigation shows that the stability deterioration is caused by thermal reduction of the GO nanoparticles, mainly featuring de-carboxylation and de-hydroxylation. A commercial dye named acid black 52, when dissolved in water, exhibits excellent broadband solar absorption and stability: it remains stable for over 199 days in a two-phase thermosyphon, and its transmittance in the solar spectral region varies by less than 9%. The stability of the acid black 52 aqueous solution is further confirmed by a 191-day enhanced radiation test, in which it shows less than 5% transmittance change in the solar spectral region.
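The transmittance figures quoted above are conventionally tied to the fluid's absorption through the Beer-Lambert law and a solar-spectrum weighting (standard relations; the symbols are mine):

```latex
\[
\tau(\lambda) = \frac{I(\lambda)}{I_0(\lambda)} = e^{-\kappa(\lambda) L},
\qquad
T_{\mathrm{solar}} = \frac{\int \tau(\lambda)\, E_\lambda(\lambda)\, \mathrm{d}\lambda}{\int E_\lambda(\lambda)\, \mathrm{d}\lambda},
\]
```

where κ(λ) is the spectral absorption coefficient of the fluid, L the optical path length, and E_λ the solar spectral irradiance; a drop in solar transmittance therefore signals an increase in volumetric absorption, while drift over time signals instability.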
Item OPTIMAL PROBING OF BATTERY CYCLES FOR MACHINE LEARNING-BASED MODEL DEVELOPMENT (2024) Nozarijouybari, Zahra; Fathy, Hosam HF; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation examines the problem of optimizing the selection of the datasets and experiments used for parameterizing machine learning-based electrochemical battery models. The key idea is that data selection, or "probing," can empower such models to achieve greater fidelity. The dissertation is motivated by the potential of battery models to enable the prediction and optimization of battery performance and control strategies. The literature presents multiple battery modeling approaches, including equivalent-circuit, physics-based, and machine learning models. Machine learning is particularly attractive in the battery systems domain thanks to its flexibility and its ability to model battery performance and aging dynamics. Moreover, there is growing interest in hybrid models that combine the benefits of machine learning with the simplicity of equivalent-circuit models, the predictiveness of physics-based models, or both. The focus of this dissertation is on both hybrid and purely data-driven battery models, and the overarching question guiding it is: how does the selection of the datasets and experiments used for parameterizing these models affect their fidelity and parameter identifiability? Parameter identifiability is a fundamental concept from information theory that refers to the degree to which one can accurately estimate a given model's parameters from input-output data. There is substantial existing research on battery parameter identifiability, and an important lesson from this literature is that the design of a battery experiment can affect parameter identifiability significantly. Hence, test trajectory optimization has the potential to substantially improve model parameter identifiability. The literature explores this lesson for equivalent-circuit and physics-based battery models; however, there is a noticeable gap regarding identifiability analysis and optimization for machine learning-based and hybrid battery models. To address this gap, the dissertation makes four novel contributions. The first is an extensive survey of the machine learning-based battery modeling literature, highlighting the critical need for information-rich and clean datasets for parameterizing data-driven battery models. The second is a K-means clustering-based algorithm for detecting outlier patterns in experimental battery cycling data; this algorithm is used to pre-clean experimental cycling datasets for laboratory-fabricated lithium-sulfur (Li-S) batteries, enabling higher-fidelity fitting of a neural network model to these datasets. The third is a novel algorithm for optimizing the cycling of a lithium iron phosphate (LFP) battery to maximize the parameter identifiability of a hybrid model of this battery; this algorithm significantly improves the resulting model's Fisher identifiability in simulation. The final contribution applies such test trajectory optimization to the experimental cycling of commercial LFP cells, showing that test trajectory optimization is effective not just at improving parameter identifiability, but also at probing and uncovering higher-order battery dynamics not incorporated in the initial baseline model. Collectively, these four contributions show the degree to which selecting battery cycling datasets and experiments for richness and cleanness can enable higher-fidelity data-driven and hybrid modeling for multiple battery chemistries.
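As an illustration of the kind of K-means-based outlier screening described in the second contribution, the sketch below flags cycles that sit unusually far from their cluster centroid. The feature matrix, cluster count, and three-sigma threshold are all hypothetical choices, not the dissertation's.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature matrix: one row per battery cycle
# (e.g., capacity, mean voltage, coulombic efficiency, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[::50] += 6.0  # inject a few synthetic outlier cycles

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
threshold = dist.mean() + 3.0 * dist.std()  # flag far-from-centroid cycles
outliers = np.where(dist > threshold)[0]
print(f"flagged {outliers.size} candidate outlier cycles")
```

Cycles flagged this way can be inspected or removed before model fitting, which is the pre-cleaning role the algorithm plays ahead of neural network training.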
Item Locomotion on Granular Media: Reduced-Order Models (2024) Nikiforidou, Christina; Balachandran, Balakumar; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Deciphering and predicting the dynamics of locomotion over compliant terrain is garnering increasing interest because of its relevance to search and rescue missions, planetary exploration, and robot navigation on diverse surfaces. In the realm of assistive devices for individuals with limited ankle mobility, it is valuable for an assistive device to be usable in both indoor and outdoor environments. In this thesis research, the Dynamic Data-Driven Application Systems (DDDAS) framework is employed to determine lower-extremity trajectories for supporting the operation of a robotic device on uneven terrains. With this framework, data obtained from simulations are combined with noisy sensor measurements to predict the contact forces of a leg interacting with granular media. For simulating the dynamical responses of legged locomotion on granular media, two models are used: a reduced-order model based on Resistive Force Theory (RFT) and a high-fidelity model based on the Smoothed Particle Hydrodynamics (SPH) method. The results obtained with the two models are compared for various leg morphologies to assess how well these models capture the complex contact interactions between a robot appendage and granular media. After data collection from simulations, the efficiency of the proposed data-driven framework is illustrated and discussed through test cases involving the gait responses of robotic appendages interacting with granular material. Preliminary experiments of foot interactions with granular media have also been conducted, and these data have also been considered in the DDDAS framework. The present work can serve as a basis for further developing the utility of the DDDAS framework for robot device operations, with a particular emphasis on assistive robots. The combination of data generated from off-line simulations and real-time data from sensors on these assistive robots can help the robots adapt better to different terrains.
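In granular RFT, the reduced-order model's resistive force on an intruding leg element is commonly written as a depth-scaled surface integral of empirically fitted stress functions (schematic form; the symbols are mine):

```latex
\[
\mathbf{F} \;=\; \int_{S} \boldsymbol{\alpha}(\beta, \gamma)\, |z| \, \mathrm{d}A,
\]
```

where α = (α_x, α_z) are resistive stresses per unit depth determined empirically for the medium, β and γ are the orientation and velocity angles of each surface element, and |z| is its intrusion depth. The SPH model, by contrast, resolves the grain-scale flow directly, which is why the two are compared across leg morphologies.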
Item Metareasoning Strategies to Correct Navigation Failures of Autonomous Ground Robots (2024) Molnar, Sidney Leigh; Herrmann, Jeffrey; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Due to the complexity of autonomous systems, theoretically perfect path planning algorithms sometimes fail because of emergent behaviors that arise when interacting with different perception, mapping, and goal-planning subprocesses. These failures prevent mission success, especially in complex environments that the robot has not previously explored. To overcome these failures, many researchers have sought to develop parameter-learning methods to improve either mission success or path planning convergence. Metareasoning, which can be simply described as "thinking about thinking," offers another possible solution for mitigating these planning failures. This project offers a novel metareasoning approach that uses different methods of monitoring and control to detect and overcome path planning irregularities that contribute to path planning failures. All methods were implemented as part of the ARL ground autonomy stack, which uses both global and local path planning ROS nodes. The proposed monitoring methods include: listening to messages published by the planning algorithms themselves; evaluating the environmental context the robot is in; expected-progress methods, which use the robot's movement capabilities to evaluate the progress made from a milestone checkpoint; and fixed-radius methods, which use user-selected parameters based on mission objectives to evaluate the progress made from a milestone checkpoint. The proposed control policies are: metric-based sequential policies, which use benchmark robot performance metrics to select the order in which planner combinations are launched; context-based pairs policies, which evaluate what happens when switching between only two planner combinations; and a restart policy, which simply relaunches a new instance of the same planner combination. The study evaluated which monitoring and control policies, when paired, improved navigation performance and which degraded it, based on how close the robot was able to get to the final mission goal. Although specific methods were evaluated, the contributions of the project extend beyond the results by offering both a template for metareasoning approaches to navigation and replicable algorithms that may be applied to any autonomous ground robot system. Additionally, this thesis presents ideas for further research to determine the conditions under which metareasoning will improve navigation.
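The sketch below illustrates one monitor/policy pairing in the spirit of the expected-progress monitor and metric-based sequential policy. The robot interface, thresholds, and planner names are all hypothetical stand-ins, not the ARL stack's API.

```python
import time

class DummyRobot:
    """Hypothetical stand-in for the robot/planner interface."""
    def __init__(self):
        self._dist = 20.0
    def distance_to_goal(self):
        self._dist = max(0.0, self._dist - 0.1)  # simulate slow progress
        return self._dist
    def launch(self, planner):
        print(f"launching planner combination: {planner}")

def expected_progress_monitor(robot, window_s=0.5, min_progress_m=0.5):
    """Report failure if progress from the last milestone falls short of
    what the robot's movement capabilities predict (thresholds illustrative)."""
    start = robot.distance_to_goal()
    time.sleep(window_s)
    return (start - robot.distance_to_goal()) < min_progress_m

def metric_based_sequential_policy(robot, planner_combinations):
    """Launch planner combinations in benchmark-ranked order until the
    monitor stops reporting failure; None means all were exhausted."""
    for planner in planner_combinations:
        robot.launch(planner)
        if not expected_progress_monitor(robot):
            return planner
    return None

print(metric_based_sequential_policy(DummyRobot(), ["global+localA", "global+localB"]))
```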
Item DESIGN NOVELTY EVALUATION THROUGH ORDINAL EMBEDDING: COMPARISON OF NOVELTY AND TRIPLET ERRORS (2024) Keeler, Matthew Garrett; Fuge, Mark; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
A practical and well-studied method for computing the novelty of a design is to construct an embedding from a collection of pairwise comparisons between items (called triplets) and use distances within that embedding to compute which designs are farthest from the center. These triplet comparisons are posed in the form "Is Design A closer to Design B or Design C?" and inform the placement of designs in the similarity-space embedding. This method of creating an embedding from non-metric relationship comparisons is known as ordinal embedding. Unfortunately, ordinal embedding methods can require a large number of triplets before their primary error measure, the proportion of violated triplet comparisons, converges. But if the goal is accurate novelty estimation, is it really necessary to fully minimize all triplet violations? Can useful information about the novelty of all or some items be extracted using fewer triplets than existing convergence rates on the saturation of triplet violations might imply? This thesis addresses this question by studying the relationship between triplet violation error and novelty score error when using ordinal embeddings. We find that estimating the novelty of a set of items via ordinal embedding can require significantly fewer human-provided triplets than are needed for the triplet error to converge, and that this effect is modulated by the type of triplet sampling method (random versus uncertainty-informed active sampling). Having learned this, we propose a custom metric we call the Expected Model Change (EMC), which we use to observe when novelty information in the embedding has stopped updating under newly labeled triplets, so that conservative bounding functions need not be used. Moreover, to avoid the dangers of low accuracy in selecting the dimension of the ordinal embedding, we propose using the EMC to tune the embedding dimension to an appropriate value. In this framework, we explore the convergence properties of ordinal embeddings reconstructed from triplets taken from a variety of synthetic and real-world design spaces.
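Concretely, a triplet "A is closer to B than to C" imposes the ordinal constraint below on the embedding points x, the triplet error is the fraction of collected triplets an embedding violates, and a design's novelty can be scored by its distance from the embedding's center (notation mine):

```latex
\[
\|x_A - x_B\|^2 < \|x_A - x_C\|^2,
\qquad
\mathrm{novelty}(i) = \Big\|\, x_i - \tfrac{1}{n}\textstyle\sum_{j=1}^{n} x_j \,\Big\|.
\]
```

The thesis's central observation is that the second quantity can stabilize well before the first has finished converging.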
Item Denoising the Design Space: Diffusion Models for Accelerated Airfoil Shape Optimization (2024) Diniz, Cashen; Fuge, Mark D; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Generative models offer the possibility of accelerating, and potentially substituting for, parts of the often expensive traditional design optimization process. We present Aero-DDM, a novel application of a latent denoising diffusion model (DDM) capable of generating airfoil geometries conditioned on flow parameters and an area constraint. Additionally, we create a novel, diverse dataset of optimized airfoil designs that better reflects a realistic design space than previous work has. Aero-DDM is applied to this dataset, and key metrics are assessed both statistically and with an open-source computational fluid dynamics (CFD) solver to determine the performance of the generated designs. We compare our approach to an optimal-transport GAN and demonstrate that our model can generate designs with superior performance statistically, in aerodynamic benchmarks, and in warm-start scenarios. We also extend our diffusion model approach, demonstrating that the number of steps required for inference can be reduced by as much as ~86% compared to an optimized version of the baseline inference process, without meaningful degradation in design quality, simply by using the initial design to start the denoising process.

Item The Natural Response of Uniform and Nonuniform Plates in Air and Partially Submerged in a Quiescent Water Body (2024) Fishman, Edwin Barry; Duncan, James; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The free vibration of three horizontally oriented aluminum plates (0.4 m wide, 1.08 m long) is studied experimentally under two fluid conditions: one with the plate surrounded by air, called the Air case, and the other with the bottom plate surface in contact with a large undisturbed pool of water, called the Half-Wet case. Measurements of the out-of-plane deflection of the upper surfaces of the plates are made using cinematic Digital Image Correlation (DIC) over the center portion of the surface and optical tracking of the center point. Three plate geometries and boundary conditions are studied: a uniform plate of 6.35 mm thickness pinned at the two opposite narrow ends (designated UP), a uniform plate of 4.83 mm thickness simply supported at one narrow end and clamped at the opposite end (UC), and a stepped plate with thickness varying from 12.7 mm to 6.35 mm along its 1.08 m length, pinned at two opposite narrow ends (SP). The plate's free response is induced using an impact hammer at three locations along the center-line of the plate. Video frames of the motion of the upper surface of the plate are collected from stereoscopic cameras and processed using DaVis-Strainmaster and MATLAB to extract full-field displacements as a function of time. Two-degree-of-freedom displacements of the plate center are also collected by tracking a target attached to the center of the plate's lower surface. Time and frequency response plots are presented for comparison between the Half-Wet and Air cases and for analysis of their dynamics. It is found that the added mass of the water results in lower measured natural frequencies and modified mode shapes. In the Air case, these results are compared to mode shapes and frequencies produced in Creo Simulate and found to agree. Further experiments are discussed.
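A standard first-order way to express the added-mass effect observed in the Half-Wet case relates wet and dry natural frequencies through an added virtual mass incremental (AVMI) factor; this is a common approximation for plates in contact with water, not necessarily the analysis used in the thesis:

```latex
\[
f_{\mathrm{wet}} = \frac{f_{\mathrm{dry}}}{\sqrt{1 + \beta}},
\]
```

so any β > 0 from water contact lowers each measured natural frequency, consistent with the reported measurements.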
Item Second Wave Mechanics (2024) Fabbri, Anthony; Herrmann, Jeffrey W; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The COVID-19 pandemic exhibited well-documented "waves" of the virus's progression, which can be analyzed to predict future wave behavior. This thesis describes a data analysis algorithm for analyzing pandemic behavior and other, similar problems. The approach separates the linear and sinusoidal elements of a pandemic in order to predict the behavior of future "waves" of infection from previous ones, creating a very long-term prediction of a pandemic. Common wave-shape patterns can also be identified to infer mutations that have occurred recently but are not yet widely recognized, and thereby predict the remaining course of a wave. By considering only the patterns in the data that could plausibly have acted in tandem to generate the observed results, many false patterns can be eliminated, and hidden variables can therefore be estimated with a high degree of probability. Similar mathematical relationships can reveal hidden variables in other underlying differential equations.
Item A Framework for Remaining Useful Life Prediction and Optimization for Complex Engineering Systems (2024) Weiner, Matthew Joesph; Azarm, Shapour; Groth, Katrina M; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Remaining useful life (RUL) prediction plays a crucial role in maintaining the operational efficiency, reliability, and performance of complex engineering systems. Recent efforts have primarily focused on individual components or subsystems, neglecting the intricate relationships between components and their impact on system-level RUL (SRUL). This gap in predictive methodologies has prompted the need for an integrated approach that addresses the complex nature of these systems while optimizing performance with respect to these predictive indicators. This thesis introduces a novel methodology for predicting and optimizing SRUL and demonstrates how the predicted SRUL can be used to optimize system operation. The approach incorporates various types of data, including condition-monitoring sensor data and component reliability data. The methodology leverages probabilistic deep learning (PDL) techniques to predict component RUL distributions based on sensor data, and on component reliability data when sensor data are not available. Furthermore, an equation node-based Bayesian network (BN) is employed to capture the complex causal relationships between components and predict the SRUL. Finally, system operation is optimized using a multi-objective genetic algorithm (MOGA), in which SRUL is treated both as a constraint and as an objective function, with the other objective relating to mission completion time. The validation process includes a thorough examination of the component-level methodology using the C-MAPSS data set. The practical application of the proposed methodology is demonstrated through a case study involving an unmanned surface vessel (USV), which incorporates all aspects of the methodology, including system-level validation through qualitative metrics. Evaluation metrics are employed to quantify and qualify both component- and system-level results, as well as the results from the optimizer, providing a comprehensive understanding of the proposed approach's performance. This thesis makes several main contributions. These include a new deep learning structure for component-level PHM, which uses a hybrid loss function for a multi-layer long short-term memory (LSTM) regression model to predict RUL with a given confidence interval while also considering the complex interactions among components. Another contribution is a new framework for computing SRUL from these predicted component RULs, in which a Bayesian network performs logic operations to determine the SRUL. These contributions advance the field of PHM and also provide a practical application in engineering. The ability to accurately predict and manage the RUL of components within a system has profound implications for maintenance scheduling, cost reduction, and overall system reliability. The integration of the proposed method with an optimization algorithm closes the loop, offering a comprehensive solution for offline planning and SRUL prediction and optimization. The results of this research can be used to enhance the efficiency and reliability of engineering systems, leading to more informed decision-making.
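For intuition about how component RULs roll up to a system-level value, the sketch below Monte Carlo-samples hypothetical component RUL distributions and combines them with a series-system min(); in the thesis, this combination step is instead performed by the equation node-based Bayesian network, and all distributions here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
rul_pump   = rng.normal(1200.0, 150.0, n)  # hypothetical component RULs, hours
rul_motor  = rng.normal(1500.0, 200.0, n)
rul_sensor = rng.normal(900.0, 100.0, n)

# Series logic: the system fails when its first component fails.
srul = np.minimum.reduce([rul_pump, rul_motor, rul_sensor])
print(f"SRUL mean = {srul.mean():.0f} h, "
      f"5th percentile = {np.percentile(srul, 5):.0f} h")
```

A percentile of the resulting SRUL distribution is the kind of quantity the MOGA can then treat as a constraint or objective alongside mission completion time.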
Item TOWARDS AUTOMATION OF HEMORRHAGE DIAGNOSTICS AND THERAPEUTICS (2024) Chalumuri, Yekanth Ram; Hahn, Jin-Oh; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The main aim of this thesis is to advance the development of algorithms and methodologies that improve hemorrhage diagnostics and therapeutics in low-resource settings. The first objective is to develop algorithms to detect internal hemorrhage using non-invasive, multi-modal physiological sensing. We developed a machine learning algorithm that can classify various types of hypovolemia and is shown to perform better than algorithms based primarily on vital signs. To address the limitations of data-driven approaches, we explored physics-based approaches to detecting internal hemorrhage. In silico analysis showed that our physics-based algorithms can detect hemorrhage even when it is being compensated by fluid resuscitation. The second objective is to advance the regulatory aspects of physiological closed-loop control systems for maintaining blood pressure at a desired value during hemorrhage and resuscitation. Physiological closed-loop control systems offer an exciting opportunity to treat hemorrhage in low-resource settings but often face regulatory challenges due to safety concerns. A physics-based model with rigorous validation can improve the regulatory standing of such systems, but current validation techniques are very naive. We developed a physics-based model that can predict hemodynamics during hemorrhage and resuscitation and validated it using a framework based on sampled digital twins. We then used the validated model and virtual patient generator to evaluate their efficacy in predicting the closed-loop controller metrics of unseen experimental data. To summarize, we sought to improve hemorrhage care through novel algorithm development and through the in silico validation and evaluation of computational models that can be used to treat hemorrhage.
Item ON DATA-BASED MAPPING AND NAVIGATION OF UNMANNED GROUND VEHICLES (2024) Herr, Gurtajbir Singh; Chopra, Nikhil; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Unmanned ground vehicles (UGVs) have seen tremendous advancement in their capabilities and applications over the past two decades. With several key algorithmic and hardware breakthroughs and advances in deep learning, UGVs are quickly becoming ubiquitous, finding applications as self-driving cars, for remote site inspections, and in hospitals and shopping malls, among others. Motivated by their large-scale adoption, this dissertation aims to enable the navigation of UGVs in complex environments. First, a supervised learning-based navigation algorithm that utilizes model predictive control (MPC) to provide training data is developed. Improving MPC performance through data-based modeling of complex vehicle dynamics is then addressed. Finally, the dissertation deals with detecting and registering transparent objects that may deteriorate navigation performance. Navigation in dynamic environments poses unique challenges, particularly due to limited knowledge of the decisions made by other agents and their objectives. A solution is proposed that utilizes an MPC-based planner as an "expert" to generate high-quality motion commands for a car-like robot operating in a simulated dynamic environment. These commands are then used to train a deep neural network, which learns to navigate. The deep learning-based planner is further enhanced with safety margins to improve its effectiveness in collision avoidance. The performance of the proposed method is showcased through simulations and real-world experiments, demonstrating its superiority in terms of obstacle avoidance and successful mission completion. This research has practical implications for the development of safer and more efficient autonomous vehicles. Many real-world applications rely on MPC to control UGVs due to its safety guarantees and constraint-satisfaction properties. However, the performance of such MPC-based solutions is heavily reliant on the accuracy of the motion model. This dissertation addresses this challenge by exploring a data-based approach to discovering vehicle dynamics. Unlike existing physics-based models that require extensive testing setups and manual tuning for new platforms and driving surfaces, our approach leverages the universal differential equations (UDE) framework to identify unknown dynamics from vehicle data. This approach, which makes no assumptions about the unknown dynamics terms and directly models the vector field, is then deployed to showcase its efficacy, opening up new possibilities for more accurate and adaptable motion models for UGVs. With the increasing adoption of glass and other transparent materials, UGVs must be able to detect and register such objects for reliable navigation. Unfortunately, these objects are not easily detected by LiDARs and cameras. In this dissertation, algorithms for detecting and including glass objects in a Graph SLAM framework were studied. A simple and computationally inexpensive glass detection scheme is utilized, and the methodology for incorporating the identified objects into the occupancy grid maintained by such a framework is then presented. The issue of "drift accumulation," which can affect mapping performance when operating in large environments, is also addressed.
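The UDE idea mentioned above can be summarized as augmenting a trusted motion model with a learned residual vector field (schematic form; f_known and g_theta are my generic placeholders):

```latex
\[
\dot{\mathbf{x}} \;=\; f_{\mathrm{known}}(\mathbf{x}, \mathbf{u}) \;+\; g_{\theta}(\mathbf{x}, \mathbf{u}),
\]
```

where x is the vehicle state, u the control input, f_known encodes whatever vehicle physics is retained, and the neural network g_θ is trained so that simulated trajectories match logged vehicle data, directly modeling the unknown portion of the vector field rather than assuming its structure.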
Item De-conflicting management of fluid resuscitation and intravenous medication infusion (2024) Yin, Weidi; Hahn, Jin-Oh; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The treatment of combat casualties frequently involves the infusion of multiple drugs (e.g., sedatives, opioids, and vasopressors) in addition to fluid resuscitation. Usually, fluid resuscitation is performed first to restore the patient's volume state, followed by the infusion of drugs to optimize hemodynamics and/or relieve pain. In some circumstances, however, fluid and drugs must be infused simultaneously. Simultaneous administration of fluid and intravenous drugs presents a practical challenge related to the interactions between them. On one hand, the infused fluid dilutes the drugs by lowering their plasma concentrations, thereby weakening their intended clinical effects. On the other hand, the clinical effects of the intravenously administered drugs on hemodynamics can interfere with the therapeutic goal of fluid resuscitation. Yet the vast majority of existing work on closed-loop control of fluid resuscitation and intravenous drug infusion has focused on either treatment alone, while methodologies and algorithms applicable to their simultaneous administration have not been rigorously investigated. In the context of control engineering, this problem might appear to be simply a multivariable control problem. Nevertheless, the intricacy and nonlinearity of the system dynamics, in conjunction with limited sensor measurements, make it highly challenging. Hence, our work to analyze the conflicts between multiple treatments and to develop an algorithmic framework to overcome such conflicts represents a major step toward the realization of complex automated medical care, with a potentially significant impact on human wellbeing. The main objective of this thesis is to investigate de-conflicting management of fluid resuscitation and medication infusion, which is twofold: first, to develop a mechanistic understanding of the interactions and interferences between the two treatments, and second, to devise novel solutions to address the challenges. To achieve the first goal, we developed an integrated mathematical model of the cardiovascular system and a pharmacokinetic-pharmacodynamic (PK-PD) model of the drugs. This study involved constructing the model based on current knowledge of physiology and of isolated and interactive drug effects, identifying parameters using real-world data to verify and validate the model, and rigorously analyzing the results to demonstrate that multiple medical treatments can endanger the safety of patient care unless the treatments are properly controlled. To accomplish the second goal, we designed a strategy that realizes safety-assured control of multiple treatments, involving model-based hemodynamic monitoring, robust nonlinear dynamic feedback control, safety-assurance control design, and treatment target mediation. For the controller design, we used a two-degree-of-freedom PID controller for the fluid loop and, for the drug loops, a PID controller with guaranteed absolute stability based on the circle criterion and linear matrix inequalities (LMI). This dissertation considers a 2-input 2-output model (fluid resuscitation and propofol sedation), as well as a more sophisticated 3-input 2-output model (fluid resuscitation and propofol sedation with PHP vasopressor treatment), as case studies. The proposed methods worked well on both models; in addition, having more inputs provides more flexibility in controller design.
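For reference, a two-degree-of-freedom PID law of the kind used for the fluid loop weights the setpoint separately in the proportional and derivative terms; this is the textbook form, with gains and setpoint weights not taken from the dissertation:

```latex
\[
u(t) = K_p\big(b\,r(t) - y(t)\big)
     + K_i \int_0^t \big(r(\tau) - y(\tau)\big)\, \mathrm{d}\tau
     + K_d \,\frac{\mathrm{d}}{\mathrm{d}t}\big(c\,r(t) - y(t)\big),
\]
```

where r is the reference, y the measured output, and the weights b and c let the designer shape setpoint tracking independently of disturbance rejection.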
Item MACHINE LEARNING IN SCARCE DATA REGIME FOR DESIGN AND DISCOVERY OF MATERIALS (2024) BALAKRISHNAN, SANGEETH; Chung, Peter; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
In recent years, data-driven approaches based on machine learning have emerged as a promising route to rapid and efficient estimation of structure-property-performance relationships, leading to the discovery of advanced materials. However, the cost and time required to obtain relevant data have limited the application of these methods to the few classes of materials for which extensive property data are available. Moreover, material property prediction poses its own unique set of challenges, in part due to: 1) the complex nonlinear response of materials across space and time domains, 2) inherent variability in material composition and processing conditions from the atomic to the macroscopic scales, and 3) the need for accurate, rapid, and less expensive predictive models for accelerated materials discovery. This dissertation develops three novel machine learning frameworks for constructing targeted models and discovering novel materials when available data are limited, and it highlights the future directions and challenges of such approaches. In the first approach, we develop data-driven methods to estimate material properties under shock compression. A novel featurization approach combining synthetic and physical features is developed, showing substantial improvements in machine learning model performance; the effects of feature engineering, model choices, and uncertainty in the experimental data are investigated. In the second approach, we develop a novel joint embedding framework that enables transfer learning, with the target of locally optimizing the shock-wave properties of nitrogen-rich molecules. This work is motivated by the need to overcome challenges in translating machine learning approaches to domains with a relative lack of domain-specific data. However, the properties studied in the second approach do not consider the factors needed to assemble a complete material system. Therefore, in the third and final approach, we investigate material systems whose system-level properties are determined by various upstream design factors, such as the composition of raw materials, manufacturing variability, and considerations involved in assembling the system, and we propose a stacked ensemble learning framework for making statistical inferences about system properties.

Item Automated Simulation and the Discovery of Mechanical Devices (2024) Chiu, Kevin; Fuge, Mark; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Automatically designing or finding novel devices that accomplish new or existing functions remains one of the greatest unsolved problems in Design Automation. In part, this is due to 1) the interplay of physical form and usage, 2) the emergence of complex behaviors from combinations of simple geometries, and 3) the sparsity and instability of "interesting" physical phenomena with small changes in the design space, which have historically stymied past efforts, since most approaches required 1) human intuition and creativity, 2) infeasibly large amounts of computational power, or 3) a priori targeted desired behavior. In contrast, this dissertation takes a data-driven approach to addressing the general question, "What device functionality emerges organically from knowledge of various physical laws?" To make this high-level question more precise, the dissertation tackles three interrelated sub-questions that address challenges arising when deploying data-driven methods on function discovery tasks. First, to generate diverse and high-quality datasets from which an algorithm might find novel behavior, it asks, "How do we enumerate possible boundary conditions for a given physical law that can lead to well-defined solutions of a given partial differential equation?" Chapter 3 proposes a type-based indexing scheme, and two properties of that scheme, that can generate valid Finite Element Method (FEM) formulations, resulting in a three-fold increase in the number of simulations generated from a limited set of boundary conditions. Chapter 4 proposes a regression formulation for predicting physical realizability in Stokes flow simulations, as estimated from the magnitude of the pressure field. Second, the dissertation asks, "How do we encapsulate the emergence of complex behaviors from interactions between different components?" Chapter 5 proposes reframing this question as an error regression, using graph neural networks to adjust for the "error," i.e., emergent behavior, incurred by composing multiple basis Navier-Stokes simulations into one large simulation. Lastly, given solution field data, the dissertation asks, "Under what conditions can we detect novel device behaviors through computer-driven simulation and exploration?" Chapter 6 proposes a boundary representation method and a modified hierarchical clustering approach, called Silhouette-optimized Hierarchical Density-Based Spatial Clustering of Applications with Noise (SHDBSCAN), to identify clusters of fluidic devices with similar behaviors. This chapter shows that the solution field representation has a significantly stronger impact on detecting novel device behaviors than the clustering algorithm used, but that a significant challenge lies in capturing "interesting" behavior in the design space in the first place. Overall, this dissertation illuminates promising simulation methods for automating functional discovery and initial work on using data-driven methods to analyze such data. It also highlights several challenges, including the curse of dimensionality, that plague such approaches.
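The Chapter 5 error-regression idea can be written schematically as follows (my notation, not the dissertation's): the full solution field is approximated as a composition of basis simulations plus a learned correction for emergent interaction effects,

```latex
\[
u_{\mathrm{full}} \;\approx\; \mathcal{C}(u_1, \dots, u_n) \;+\; \varepsilon_{\theta}(\mathcal{G}),
\]
```

where C composes the basis Navier-Stokes solutions, G is a graph over the interacting components, and ε_θ is the graph neural network regressed on the composition error.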
Item PHYSICS-INFORMED DEEP LEARNING FRAMEWORK FOR PROBABILISTIC MODELING OF ENVIRONMENTALLY INDUCED DEGRADATION (2024) Habibollahi Najaf Abadi, Hamidreza; Modarres, Mohammad; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Evaluating the degradation behavior and estimating the lifetime of engineering systems and structures is crucial to ensuring their safe and reliable operation. Deep learning (DL) models, which take the form of multi-layer neural networks (NNs), have been widely used for the prognostics of such systems and structures, primarily by estimating their degradation intensity and remaining useful life (RUL). Although DL prognostic models have shown promising performance, they have limitations that need to be considered. Firstly, they learn only the data patterns, without consideration of the governing physics of degradation. Excluding physics, together with the lack of interpretability of DL models, makes them prone to violating physical laws while showing a good fit to the training data; this can lead to weak generalization, particularly when predicting situations outside the training dataset. Secondly, they require significant data for sufficient training, which may not always be available. To estimate degradation and lifetime, NNs are typically trained in a supervised setting using labeled data that, ideally, have been collected at different levels of degradation up to the failure point. However, collecting such data is usually expensive and time-consuming, particularly for durable systems with long lifetimes, as material degradation (e.g., corrosion, fatigue, wear, or creep) is often slow. There is therefore a need for a model that is interpretable, follows the underlying physics of degradation occurring in real-world conditions, and can be trained with limited data. This dissertation proposes a novel data-driven framework to address these limitations: the disregard of physics, the lack of interpretability, and the need for big data in DL prognostic models. The framework comprises two NNs: a physics discovery NN, which models the underlying physics of degradation, and a predictive NN, which makes probabilistic predictions of degradation intensity. The physics discovery NN guides the predictive NN and forces it to follow the underlying physics of degradation, which results in better life estimations. In this way, less data is required for sufficient training, as the physics discovery model acts as a constraint and limits the search space for the parameters of the predictive model during training. Additionally, integrating state-of-the-art feature importance measurement methods into the physics discovery model makes it possible to identify the primary environmental factors that significantly impact the degradation process, enhancing interpretability by shedding light on the dominant factors influencing the system's degradation. The application of the proposed approach is demonstrated through two case studies based on publicly available degradation datasets. The outcome of this research can be used to develop a prognostics and health management system that facilitates a low-cost, high-performance predictive maintenance strategy for systems experiencing environmentally induced degradation. The proposed method can also guide data collection in the field by revealing the influential factors that play crucial roles in system degradation, and it offers valuable benefits to designers, enabling them to incorporate appropriate preventive and mitigation strategies into their designs.
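A physics-guided training objective of the general kind described here can be written as a composite loss (a generic form; the dissertation's exact terms may differ):

```latex
\[
\mathcal{L}(\theta) \;=\; \mathcal{L}_{\mathrm{data}}(\theta) \;+\; \lambda\, \mathcal{L}_{\mathrm{physics}}(\theta),
\]
```

where L_data measures prediction error against measured degradation, L_physics penalizes departures from the degradation law captured by the physics discovery NN, and λ sets how strongly the physics constrains the predictive NN's parameter search.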
Item ENERGY ANALYSIS OF A METRO TRANSIT SYSTEM FOR SUSTAINABILITY AND EFFICIENCY IMPROVEMENT (2023) Higgins, Jordan Andrew; Ohadi, Michael; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The industrial sector in the US accounted for 33% of overall energy consumption and 23% of total GHG emissions in 2022, underscoring the need for energy efficiency and decarbonization in this sector. This study identifies common opportunities and challenges encountered while performing energy audits for the State of Maryland public transportation maintenance complex and proposes site-specific energy efficiency measures. Using performance indices such as Energy Use Intensity (EUI) and load factor derived from end-use energy data, together with walkthrough observations from the energy audits, energy efficiency measures specific to each facility were formulated to improve overall energy performance. Additionally, energy modeling helped pinpoint further energy efficiency improvements that could significantly improve energy performance and reduce on-site GHG emissions. Among the energy conservation measures considered, the re-sizing and decarbonization of HVAC equipment makes the greatest contribution to energy and GHG savings, with a 100% decrease in natural gas use, a 37% decrease in annual electricity use, and a net decrease of 272 Mton CO2. This study aims to highlight the similarities and differences among existing transportation and maintenance facilities and the applicable technologies, and to serve as a guide that streamlines energy audits for transportation maintenance facilities by demonstrating the most common energy efficiency measures and the subsequent achievable savings.

Item TOPOLOGICAL ANALYSIS OF DISTANCE WEIGHTED NORTH AMERICAN RAILROAD NETWORK: EFFICIENCY, ECCENTRICITY, AND RELATED ATTRIBUTES (2023) Elsibaie, Sherief; Ayyub, Bilal M.; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The North American railroad system can be well represented by a network with 302,943 links (track segments) and 250,388 nodes (stations, junctions, waypoints, and other points of interest), based on publicly accessible geographical information obtained from the Bureau of Transportation Statistics (BTS) and the Federal Railroad Administration (FRA). From this large network, a slightly more consolidated subnetwork representing the major freight railroads and Amtrak was selected for analysis. Recent improvements in network and graph theory and in all-pairs shortest-path algorithms make it more feasible to compute certain characteristics of large networks with reduced computation time and resources. The network characteristics at issue for supporting network-level risk and resilience studies include node efficiency, node eccentricity, and other attributes derived from those measures, such as network arithmetic efficiency, the network geometric central node, radius, and diameter, and some distribution measures of the node characteristics. Rail distance weighting factors, representing the length of each rail line as derived from BTS data, are mapped to the corresponding links and used as link weights for computing all-pairs shortest paths and the subsequent characteristics. This study also compares the characteristics of North American railroad infrastructure subnetworks divided by Class I carriers, which are the largest railroad carriers classified by the Surface Transportation Board (STB) by annual operating revenue and which together comprise most of the North American railroad network. These network characteristics can be used to inform the placement of resources and to plan for natural hazard and disaster scenarios. They relate to many practical applications, such as a network's capacity to distribute traffic efficiently and its ability to recover from disruptions. The primary contribution of this thesis is the novel characterization of a detailed network representation of the North American railroad network and Class I carrier subnetworks, with established as well as novel network characteristics.
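The node- and network-level measures named above have standard graph-theoretic definitions in terms of the distance-weighted shortest-path metric d(i, j) on a network of n nodes:

```latex
\[
E(G) = \frac{1}{n(n-1)} \sum_{i \neq j} \frac{1}{d(i,j)},
\qquad
\varepsilon(i) = \max_{j} d(i,j),
\qquad
r(G) = \min_{i} \varepsilon(i),
\qquad
D(G) = \max_{i} \varepsilon(i),
\]
```

where E is the (arithmetic) network efficiency, ε(i) the eccentricity of node i, r the radius, and D the diameter; the geometric central node is the node attaining the minimum eccentricity.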