Mechanical Engineering Theses and Dissertations
Permanent URI for this collection: http://hdl.handle.net/1903/2795
Recent Submissions
Item DEVELOPMENT OF A NEAR-ISOTHERMAL COMPRESSION PROCESS UTILIZING LIQUID PISTON TECHNOLOGY FOR TRANSCRITICAL CO2 CYCLE (2024) Lee, Cheng-Yi; Radermacher, Reinhard; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Compressors are critical components in various industries and are commonly used in numerous applications. With increasing concerns about global warming, there has been a significant focus on improving compressor efficiency, a subject of extensive research and innovation. This is particularly true for the heating, ventilation, air conditioning, and refrigeration industries, since almost every household relies on air conditioning and refrigeration systems. Despite the many technologies proposed and investigated for efficiency improvements, substantial potential remains for further advancement. The compression process is often modeled as an isentropic process, in which a significant portion of the work supplied during compression goes into raising the gas temperature, increasing the overall input power required. This inefficiency highlights the ongoing need for innovative solutions to reduce the input power of compressors. This dissertation focuses on the experimental development and theoretical investigation of a near-isothermal liquid piston compressor in a transcritical CO2 cycle. The liquid piston compressor employs a column of liquid instead of a traditional mechanical piston to compress gas. This design offers high volumetric efficiency and allows for flexible compression chamber geometries. The design rejects both the heat of compression and part of the gas's internal energy through a compressor-integrated gas cooler during compression. Three successively improved prototypes were constructed to validate the concept and enhance the system. A proof-of-concept test facility was fabricated to demonstrate the feasibility of this design.
Furthermore, a complete refrigeration cycle system incorporating the liquid piston compressor was developed. Based on the experimental results, an improved second prototype was built and sent to the Helix Innovation Center of Copeland for field testing. The results show that an isothermal efficiency of 93.5% was achieved in the proof-of-concept tests with a self-manufactured copper bare-tube heat exchanger as the compression chamber. An isothermal efficiency of 90% was observed in the first system prototype with a microchannel heat exchanger, and 89% in the second system prototype. The highest compressor coefficient of performance (COP) achieved was 1.82 in the second system prototype. This performance was observed under an average suction pressure of 3,800 kPa and a gas cooler pressure of 10,000 kPa at a 35°C ambient temperature. Simulations revealed that the near-isothermal liquid piston compressor can achieve high isothermal efficiency through heat transfer across the compression chamber and into the chamber's thermal mass. The technology's potential applications extend beyond refrigeration to compressed air energy storage, hydrogen storage, and compressed natural gas systems. These applications were investigated and discussed, highlighting the versatility and potential impact of this compressor design. The liquid piston compressor developed in this study exhibits substantial potential for reducing compression work, as supported by both experimental data and simulation modeling. The integrated gas cooler facilitates near-isothermal compression by effectively rejecting both the heat of compression and part of the gas's internal energy. This heat rejection enhances compression efficiency and improves overall system performance. Future work will prioritize selecting a hydraulic fluid with minimal CO2 solubility to mitigate degassing issues during compression.
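For intuition on the isothermal-efficiency figures above, the ideal-gas compression work can be compared directly; a minimal sketch (ideal-gas relations, so real-gas CO2 behavior near the critical point is neglected; the swept volume and specific-heat ratio are illustrative assumptions, only the two pressures come from the abstract):

```python
import math

def isothermal_work(p1, v1, p2):
    """Ideal-gas isothermal compression work, W = p1*V1*ln(p2/p1)."""
    return p1 * v1 * math.log(p2 / p1)

def isentropic_work(p1, v1, p2, k):
    """Ideal-gas isentropic compression work,
    W = p1*V1*k/(k-1)*((p2/p1)^((k-1)/k) - 1)."""
    return p1 * v1 * k / (k - 1) * ((p2 / p1) ** ((k - 1) / k) - 1)

p1, p2 = 3800e3, 10000e3   # suction / gas-cooler pressures from the abstract, Pa
v1 = 1e-4                  # hypothetical swept volume, m^3
k = 1.3                    # rough specific-heat ratio for CO2 vapor (assumption)

w_iso = isothermal_work(p1, v1, p2)
w_isen = isentropic_work(p1, v1, p2, k)
print(f"isothermal: {w_iso:.1f} J, isentropic: {w_isen:.1f} J")
# Isothermal efficiency of an actual process would be w_iso / w_actual.
print(f"ideal saving over isentropic: {(1 - w_iso / w_isen) * 100:.1f}%")
```

The isothermal work is the lower bound that the 93.5%, 90%, and 89% efficiencies are measured against.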
Additionally, currently available pumps do not adequately meet the requirements of the transcritical CO2 cycle, so developing a semi-hermetic pump will be crucial for the next generation of transcritical CO2 liquid piston compressors. Finally, integrating this pump with an optimized gas cooler and achieving a size comparable to traditional compressors will be essential to making the developed device commercially competitive.
Item TUBE-LOAD MODEL AS A DIGITAL TWIN FOR ABDOMINAL AORTIC ANEURYSM PATIENTS (2024) Kim, Donghyeon; Hahn, Jin-Oh; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Abdominal aortic aneurysm (AAA) is a life-threatening condition characterized by abnormal dilation of the aorta, which can lead to vessel rupture and high mortality rates (>80%). Alarmingly, AAA is often asymptomatic and can remain undetected until it reaches a critical size or ruptures. Current methods for diagnosing and monitoring AAA, such as ultrasound, CT, and MRI, are effective but too expensive for regular use and require specialized operators. These limitations hinder the widespread use of imaging-based techniques for regular AAA screening and surveillance, creating a need for more accessible, affordable, and convenient tools to detect AAA in its early stages, monitor its progression, and assess treatment efficacy. This thesis explores the potential of the tube-load (TL) model to non-invasively monitor AAA progression by analyzing arterial pressure waveforms, which change in response to aneurysm-induced alterations in aortic geometry and mechanical properties. These changes are captured and revealed by the parameters of the TL model.
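A minimal sketch of the kind of proximal-to-distal transfer a tube-load model describes, assuming a single uniform tube with wave delay tau and a constant real reflection coefficient gamma (a strong simplification; fitted TL models use richer, frequency-dependent terminal loads):

```python
import numpy as np

def tube_load_transfer(p_prox, fs, tau, gamma):
    """Apply the tube-load transfer function
    H(w) = (1 + gamma) * exp(-j*w*tau) / (1 + gamma * exp(-2j*w*tau))
    to a proximal pressure waveform sampled at fs Hz. H(0) = 1, so mean
    pressure is preserved; gamma shapes the reflected-wave augmentation."""
    n = len(p_prox)
    w = 2 * np.pi * np.fft.rfftfreq(n, d=1.0 / fs)
    H = (1 + gamma) * np.exp(-1j * w * tau) / (1 + gamma * np.exp(-2j * w * tau))
    return np.fft.irfft(np.fft.rfft(p_prox) * H, n=n)

fs = 100.0
t = np.arange(200) / fs
prox = 90 + 20 * np.sin(2 * np.pi * t)          # synthetic 1 Hz pressure wave, mmHg
dist = tube_load_transfer(prox, fs, tau=0.05, gamma=0.4)
print(round(prox.mean(), 2), round(dist.mean(), 2))  # means match since H(0) = 1
```

Changes in aortic geometry shift the fitted tau and gamma, which is the physiological signal the thesis exploits.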
To evaluate the TL model's capability to monitor AAA, we applied it to carotid and femoral artery tonometry waveforms collected from 79 subjects, including both controls and AAA subjects, as well as a subset of 35 AAA subjects before and after endovascular repair (EVAR) surgery. Our analysis showed that the TL model could fit the waveforms from pre-EVAR AAA subjects as accurately as those from controls and post-EVAR subjects. Moreover, the TL model parameters exhibited physiologically explainable changes consistent with the structural changes of the aorta associated with AAA and its treatment. These findings suggest that the TL model has potential as a digital twin to enable convenient and cost-effective personalized AAA monitoring.
Item Energy Absorbing Cellular Structures for Crashworthiness Applications (2024) Murray, Colleen; Wereley, Norman; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Energy absorbing materials are utilized in many applications. Aircraft, automobiles, and helmets all use energy absorbing materials to ensure the safety of the individual during an impact event. The seats in aircraft are made from a material that minimizes the force transferred from the impact to the occupant. In a similar manner, the material in the front of an automobile is designed to absorb the energy from an impact event and redistribute it so as to minimize the energy experienced by the main cabin. Helmets perform in the same way: by taking the impact and distributing the load to protect the wearer. The materials used in these applications are tailored to the needs of the application, particularly through the density and strength of the material. Using cellular structures allows for more control over the design for energy absorbing applications, particularly when looking to increase the performance of the material.
There are three options for increasing the energy absorption in materials for crashworthiness applications: decrease the initial peak force while holding the mean crush stress constant, increase the mean crush stress while holding the peak force constant, or decrease the peak force while increasing the mean crush stress. In a force-displacement diagram, the area under the curve is the amount of energy that a material can absorb during an impact. By decreasing the initial peak force, it begins to equilibrate with the mean crush force, resulting in higher energy absorption. The structures that have historically been relied on for these applications are cellular structures: structures in which one phase consists of voids filled with air or fluid. As Lakes describes in his work, foams, honeycombs, and lattices are categorized as such; the voids allow the materials to reach physical limits beyond those of the solid material alone. With the improvements in technology, it is important to re-assess these structures to determine whether they too can be additively manufactured and remain as effective in their original crashworthiness applications as before. Throughout this work, different methods of additive manufacturing are used to create honeycomb structures specifically for energy absorption applications. Each of these studies focuses on a different attribute that additive manufacturing can help improve in energy absorption materials. In this dissertation, four case studies involving the out-of-plane compression of additively manufactured honeycomb are discussed. The first chapter centers on the application of viscoelastic thermoplastic polyurethane (TPU) as a potential material of choice for energy absorption. TPU has the ability to achieve significant deformation and return to its original shape within a matter of minutes. This material is of interest due to the desire to re-use helmet liners and other safety mechanisms rather than buying new ones.
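The area-under-the-curve and peak-versus-mean-crush ideas above translate directly into two standard metrics; a minimal sketch on made-up crush data:

```python
import numpy as np

def crush_metrics(force, disp):
    """Energy absorbed = area under the force-displacement curve (trapezoidal
    rule), and crush force efficiency = mean crush force / initial peak force.
    A CFE near 1.0 means the peak has equilibrated with the mean crush."""
    force = np.asarray(force, dtype=float)
    disp = np.asarray(disp, dtype=float)
    energy = float(np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(disp)))
    mean_force = energy / (disp[-1] - disp[0])
    return energy, mean_force / float(force.max())

# Hypothetical quasi-static crush trace: sharp initial peak, then a plateau.
disp = np.linspace(0.0, 0.02, 9)                                        # m
force = [0.0, 1200.0, 600.0, 650.0, 640.0, 660.0, 650.0, 655.0, 900.0]  # N
energy, cfe = crush_metrics(force, disp)
print(f"absorbed energy: {energy:.2f} J, crush force efficiency: {cfe:.2f}")
```

Lowering the 1200 N peak toward the ~650 N plateau is exactly the role of the buckling initiators studied later in the dissertation.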
This work also examines the effect that adding buckling initiators has on the structure's energy absorption under quasi-static conditions. The next chapter centers on these TPU honeycombs undergoing dynamic testing. Crashworthiness materials experience impact velocities of roughly 10-15 m/s (22-35 mph). These tests differ from the previous ones in that the velocity is no longer constant: as the impactor falls, the velocity changes, whereas the quasi-static tests were completed at a constant velocity. This set of dynamic tests is most representative of in-service applications; however, the performance of these materials changes drastically, as discussed. In some applications, a viscoelastic plastic will not be able to absorb the energy from the impact, and a stiffer material is necessary. To provide an alternative for these applications, acrylonitrile butadiene styrene (ABS) was studied, since it is a commonly used plastic in additive manufacturing. Once again, honeycombs were manufactured and tested under out-of-plane, uniaxial, quasi-static compression. The samples were studied to determine the effects of buckling initiator location as well as the effect of the inscribed diameter. For this, samples were manufactured with an internal diameter of 10, 15, or 20 mm. The buckling initiators were located at one-half height, three-quarters height, or the top of the samples to determine the design that enables the best energy absorption. The final study recognizes that traditional honeycomb has been manufactured using metals like aluminum and steel; by moving toward additively manufactured honeycomb, this work has focused on polymeric honeycomb instead. Metallic additive manufacturing methods require drastic safety precautions. A safer alternative is proposed in this last study: combining stereolithography and electroplating.
Here, an isotropic material forms the core of the structure, with a thin layer (about 150 μm) of metal creating a ductile outer layer. These samples demonstrate a ductile failure, as opposed to their plastic-only counterparts, which experience a brittle failure. The energy absorption performance is then characterized as a function of buckling initiator height as well.
Item Equilibrium Programming for Improved Management of Water-Resource Systems (2024) Boyd, Nathan Tyler; Gabriel, Steven A; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Effective water-resources management requires the joint consideration of multiple decision-makers as well as the physical flow of water in both built and natural environments. Traditionally, game-theory models were developed to explain the interactions of water decision-makers such as states, cities, industries, and regulators. These models account for socio-economic factors such as water supply and demand. However, they often lack insight into how water or pollution should be physically managed with respect to overland flow, streams, reservoirs, and infrastructure. Conversely, optimization-based models have accounted for these physical features but usually assume a single decision-maker who acts as a central planner. Equilibrium programming, which was developed in the field of operations research, provides a solution to this modeling dilemma. First, it can incorporate the optimization problems of multiple decision-makers into a single model. Second, it can also model the socio-economic interactions of these decision-makers, such as a market for balancing water supply and demand. Equilibrium programming has been widely applied to energy problems, but a few recent works have begun to explore applications in water-resource systems.
These works model water-allocation markets subject to the flow of water supply from upstream to downstream, as well as the nexus of water-quality management with energy markets. This dissertation applies equilibrium programming to a broader set of physical characteristics and socio-economic interactions than these recent works. Chapter 2 also focuses on the flow of water from upstream to downstream but incorporates markets for water recycling and reuse. Chapter 3 also focuses on water-quality management but uses a credit market to implement water-pollution regulations in a globally optimal manner. Chapter 4 explores alternative conceptions of socio-economic interaction beyond market-based approaches; specifically, social learning is modeled as a means to lower the cost of water-treatment technologies. This dissertation's research contributions are significant to both the operations research community and the water-resources community. For the operations research community, the models in this dissertation could serve as archetypes for future research into equilibrium programming and water-resource systems. For instance, Chapter 1 organizes the research in this dissertation around three themes: stream, land, and sea. For the water-resources community, this dissertation could make equilibrium programming more relevant in practice. Chapter 2 applies equilibrium programming to the Duck River Watershed (Tennessee, USA), and Chapter 3 applies it to the Anacostia River Watershed (Washington, DC and Maryland, USA). The results also reinforce the importance of the relationships between socio-economic interactions and physical features in water-resource systems. However, the risk aversion of the players plays an important mediating role in the significance of these relationships.
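As a toy illustration of the market-based equilibria these chapters build (a stylized single market solved analytically, not the dissertation's coupled equilibrium programs):

```python
def clear_water_market(a, b, c):
    """Stylized equilibrium: each user i has inverse demand p = a[i] - b[i]*q_i,
    and a supplier has marginal cost p = c*Q. At equilibrium all users face a
    common price and supply equals total demand:
        sum_i (a_i - p)/b_i = p/c  =>  p = (sum a_i/b_i) / (1/c + sum 1/b_i)."""
    num = sum(ai / bi for ai, bi in zip(a, b))
    den = 1.0 / c + sum(1.0 / bi for bi in b)
    p = num / den
    q = [(ai - p) / bi for ai, bi in zip(a, b)]
    return p, q

# Two hypothetical water users and one supplier (illustrative coefficients).
p, q = clear_water_market(a=[10.0, 8.0], b=[1.0, 2.0], c=0.5)
print(f"clearing price: {p:.2f}, allocations: {[round(x, 2) for x in q]}")
```

The equilibrium programs in the dissertation replace this closed-form clearing with complementarity conditions coupled to physical flow constraints.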
Future research could investigate mechanisms for the emergence of altruistic decision-making to improve equity among the players in water-resource systems.
Item MOLD PROCESS INDUCED RESIDUAL STRESS PREDICTION USING CURE EXTENT DEPENDENT VISCOELASTIC BEHAVIOR (2024) Phansalkar, Sukrut Prashant; Han, Bongtae; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Epoxy molding compounds (EMC) are widely used in the encapsulation of semiconductor packages. Encapsulation protects the package from physical damage or corrosion due to harsh environments. Molding processes produce residual stresses in encapsulated components. These combine with the stresses caused by coefficient of thermal expansion (CTE) mismatch to dictate the final warpage at room and reflow temperatures, which must be controlled for fabrication of the redistribution layer (RDL) as well as for yield during assembly. During the molding process, the EMC is continuously curing and its mechanical properties continue to evolve; more specifically, the equilibrium modulus and the relaxation modulus. The former defines behavior after complete relaxation, while the latter describes the transient behavior. It is thus necessary to measure the cure-dependent viscoelastic properties of EMC to determine mold-induced residual stresses accurately. This is the motivation for this thesis. In this thesis, a set of novel methodologies is developed and implemented to quantify a complete set of cure-dependent viscoelastic properties, including the fully cured properties. Firstly, an advanced numerical scheme is developed to quantify the cure kinetics of thermosets with both single- and dual-reaction systems. Secondly, a unique methodology is proposed to measure the viscoelastic bulk modulus, K(t,T), of EMC using hydrostatic testing. The methodology is implemented with a unique test setup based on inert gas.
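The relaxation behavior measured here is commonly represented as a Prony series with shift factors; a minimal sketch with illustrative constants (not measured EMC data):

```python
import numpy as np

def relaxation_modulus(t, g_inf, g_i, tau_i, a_T):
    """Generalized-Maxwell (Prony series) relaxation modulus with
    time-temperature superposition:
        G(t) = G_inf + sum_i G_i * exp(-t / (a_T * tau_i)),
    where a_T is the shift factor. Cure-dependent behavior enters the same
    way, as an additional cure-extent shift of the time axis."""
    t = np.asarray(t, dtype=float)
    return g_inf + sum(g * np.exp(-t / (a_T * tau)) for g, tau in zip(g_i, tau_i))

t = np.logspace(-2, 4, 7)  # s
G = relaxation_modulus(t, g_inf=0.5, g_i=[5.0, 2.0], tau_i=[1.0, 100.0], a_T=1.0)
print(G)  # decays from ~7.5 toward the 0.5 equilibrium modulus (arbitrary units)
```

The equilibrium modulus measured above Tg(p) corresponds to the g_inf term; the transient terms are what the long partially-cured tests pin down.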
The results of the viscoelastic testing are presented, along with the shear modulus (G) and bulk modulus (K) master curves and the temperature-dependent shift factors a(T) of fully-cured EMC. Thirdly, a novel test methodology utilizing monotonic testing is developed to measure two sets of equilibrium moduli of EMC as a function of cure extent, p. The main challenge for the measurements is that the equilibrium moduli can be measured accurately only when the EMC has fully relaxed. Complete relaxation typically occurs above the glass transition temperature, Tg(p), where the curing rate is also high. A special measurement procedure is developed through which the EMC moduli above Tg can be determined quickly at a near-isocure state. Viscoelastic testing of partially-cured EMC then follows to determine the cure-dependent shift factors of EMC. The test durations have to be very long (several hours), so this testing is conducted below Tg(p) of the EMC, where the reaction is slow (under diffusion control). Finally, a numerical scheme that utilizes the measured cure-dependent viscoelastic properties is developed and implemented to predict the residual stress evolution of molded packages during and after molding processes.
Item ANALYSIS OF THE LIFE-CYCLE COST AND CAPABILITY TRADEOFFS ASSOCIATED WITH THE PROCUREMENT AND SUSTAINMENT OF OPEN SYSTEMS (2024) Chen, Shao-Peng; Sandborn, Peter PS; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
System openness refers to the extent to which system components can be independently integrated, removed, managed, or replaced without adversely impacting the system. Openness (of a system and/or architecture), though intuitively understood, remains difficult to quantify in terms of its value for safety-, mission-, and infrastructure-critical systems.
Examples of these critical systems include aircraft, rail, industrial controls, power generation, and defense systems, all of which are characterized by large procurement costs, large life-cycle sustainment costs, and very long support lives (e.g., it is not uncommon for these systems to be supported for 30 or more years). Generally, it is taken for granted that the use of open systems decreases the total life-cycle cost of a system. Leveraging existing open technology, including commercial-off-the-shelf (COTS) components, avoids many costs associated with designing custom components and reduces the time required for development and eventual refresh of a system. The use of open systems helps mitigate the effects of obsolescence, lengthens the system's support life, and allows for the incremental insertion of new technologies. Component design reuse also eliminates redundant components, thus reducing logistical costs. However, building systems from open standards and commercially available components often relies on the use of generalized technology containing unnecessary additional functionality, which increases the system's complexity and adds new failure paths and additional qualification overhead. In other cases, it may be necessary to modify COTS components to meet performance requirements, thereby adding costs. In addition, the enterprise that manages the system often has no control over the supply chain for COTS components, which adds supply-disruption risk and introduces the risk of compromised components. Previous efforts to establish the value of openness have relied on highly qualitative analyses, with the results often articulated as intangible "openness scores". Such approaches do not provide sufficient information to make a business case or to understand the conditions under which life-cycle cost avoidance can be maximized (or whether there even is a cost avoidance).
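The tradeoff sketched above can be made concrete with a simple discounted life-cycle cost comparison; the numbers below are purely hypothetical and illustrative, not A-RCI data:

```python
def lifecycle_cost(acquisition, annual_sustainment, refresh_cost,
                   refresh_every, years, rate):
    """Discounted life-cycle cost: acquisition, plus yearly sustainment,
    plus a technology refresh every `refresh_every` years, all discounted
    at `rate`. A toy stand-in for the dissertation's multivariate model."""
    cost = acquisition
    for y in range(1, years + 1):
        d = (1 + rate) ** -y
        cost += annual_sustainment * d
        if y % refresh_every == 0:
            cost += refresh_cost * d
    return cost

# Hypothetical: the open system is cheaper to acquire and refresh but carries
# extra qualification overhead in sustainment; the custom system is the reverse.
open_sys = lifecycle_cost(10.0, 1.2, 2.0, 5, 30, 0.05)
custom = lifecycle_cost(16.0, 1.0, 5.0, 8, 30, 0.05)
print(f"open: {open_sys:.1f}, custom: {custom:.1f} (relative cost units)")
```

Whether the open system wins depends on the parameter values, which is exactly why a quantitative model beats an intangible "openness score".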
This dissertation is focused on creating a general model for quantifying the relationship between system openness and life-cycle cost that can be used to optimize system openness strategies for critical systems, an outcome that could significantly reduce system sustainment costs. This work is composed of the following tasks: 1) Mining of public-source materials to solidify what is known and believed about the relationship between open-systems attributes and life-cycle costs. 2) Development of a multivariate model and associated simulation that quantifies the relationship between openness and life-cycle cost for systems composed of hardware and software. 3) A case study of the life-cycle cost difference between two implementations of the same system with differing levels of openness (the US Navy A-RCI sonar system on Los Angeles and Ohio class submarines is the case study system for this dissertation). 4) Generalization of the understanding of the relationship between system life-cycle cost and openness through a generic analytic model. This model efficiently estimates the relationship between relative life-cycle cost and system openness, considering the relevant parameters, and enables the determination of optimum system openness without the need to run a detailed simulation.
Item VOLUMETRIC SOLAR ABSORBING FLUIDS AND THEIR APPLICATIONS IN TWO-PHASE THERMOSYPHON (2024) Zhou, Jian; Yang, Bao; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
A two-phase thermosyphon is a passive system utilizing gravity to transfer working fluids. The working fluid of a two-phase thermosyphon must undergo vaporization and condensation in the same system. Two-phase thermosyphons can also be used as solar collectors.
Traditional solar collectors utilize surface absorbers to convert incident solar radiation into thermal energy, but those systems feature a large temperature difference between the surface absorbers and the heat transfer fluids, resulting in a reduction in overall thermal efficiency. Volumetric solar absorbing fluids serve both as solar absorbers and as heat transfer fluids, thereby significantly improving the overall efficiency of solar collectors. Compared to pure fluids, nanofluids possess both enhanced thermal conductivity and enhanced solar absorption capacity as volumetric absorbing fluids. However, nanofluids serving as volumetric solar absorbing fluids have so far been reported to work only at relatively low temperatures and in a single-phase heat transfer regime, due to stability issues. This research investigates the possibility of using nanofluids, especially graphene oxide (GO) nanofluids, as volumetric solar absorbing fluids in two-phase thermosyphons. Despite their reputation as among the most stable and solar-absorptive nanofluids, graphene oxide nanofluids still deteriorate quickly under boiling-condensation processes (~100 °C): the solar transmittance of the GO nanofluids declines from 38% to 4% during the first 24 h of testing. Further investigation shows that the stability deterioration is caused by the thermal reduction of the GO nanoparticles, mainly featuring de-carboxylation and de-hydroxylation. A commercial dye named acid black 52, when dissolved in water, exhibits excellent broadband solar absorption properties and excellent stability: it remains stable for over 199 days in a two-phase thermosyphon, and its transmittance in the solar spectral region varies by less than 9%.
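The reported transmittance figures can be related to an effective absorption coefficient via the Beer-Lambert law; a single-wavelength sketch with an assumed path length (the abstract's figures are solar-spectrum-weighted, so this is only an order-of-magnitude reading):

```python
import math

def transmittance(alpha, path_length):
    """Beer-Lambert law: fraction of incident radiation transmitted through
    a volumetric absorbing fluid, T = exp(-alpha * L)."""
    return math.exp(-alpha * path_length)

L = 0.01  # m, hypothetical receiver depth (assumption)
alpha_fresh = -math.log(0.38) / L   # effective alpha implied by T = 38%
alpha_aged = -math.log(0.04) / L    # and by T = 4% after 24 h of boiling
print(f"effective absorption coefficient ratio: {alpha_aged / alpha_fresh:.2f}")
```

Read this way, the 38% to 4% decline corresponds to the effective absorption coefficient roughly tripling as the GO is thermally reduced.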
The stability of the acid black 52 aqueous solution was further confirmed with a 191-day enhanced radiation test, in which it showed less than 5% transmittance change in the solar spectral region.
Item OPTIMAL PROBING OF BATTERY CYCLES FOR MACHINE LEARNING-BASED MODEL DEVELOPMENT (2024) Nozarijouybari, Zahra; Fathy, Hosam HF; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation examines the problem of optimizing the selection of the datasets and experiments used for parameterizing machine learning-based electrochemical battery models. The key idea is that data selection, or "probing," can empower such models to achieve greater fidelity. The dissertation is motivated by the potential of battery models to enable the prediction and optimization of battery performance and control strategies. The literature presents multiple battery modeling approaches, including equivalent circuit, physics-based, and machine learning models. Machine learning is particularly attractive in the battery systems domain thanks to its flexibility and its ability to model battery performance and aging dynamics. Moreover, there is growing interest in the literature in hybrid models that combine the benefits of machine learning with either the simplicity of equivalent circuit models or the predictiveness of physics-based models, or both. The focus of this dissertation is on both hybrid and purely data-driven battery models. The overarching question guiding the dissertation is: how does the selection of the datasets and experiments used for parameterizing these models affect their fidelity and parameter identifiability? Parameter identifiability is a fundamental concept from information theory that refers to the degree to which one can accurately estimate a given model's parameters from input-output data. There is substantial existing research in the literature on battery parameter identifiability.
An important lesson from this literature is that the design of a battery experiment can affect parameter identifiability significantly. Hence, test trajectory optimization has the potential to substantially improve model parameter identifiability. The literature explores this lesson for equivalent circuit and physics-based battery models; however, there is a noticeable gap regarding identifiability analysis and optimization for machine learning-based and hybrid battery models. To address this gap, the dissertation makes four novel contributions to the literature. The first contribution is an extensive survey of the machine learning-based battery modeling literature, highlighting the critical need for information-rich and clean datasets for parameterizing data-driven battery models. The second contribution is a K-means clustering-based algorithm for detecting outlier patterns in experimental battery cycling data. This algorithm is used to pre-clean the experimental cycling datasets for laboratory-fabricated lithium-sulfur (Li-S) batteries, thereby enabling higher-fidelity fitting of a neural network model to these datasets. The third contribution is a novel algorithm for optimizing the cycling of a lithium iron phosphate (LFP) battery to maximize the parameter identifiability of a hybrid model of this battery. This algorithm succeeds in significantly improving the resulting model's Fisher identifiability in simulation. The final contribution focuses on the application of such test trajectory optimization to the experimental cycling of commercial LFP cells. This work shows that test trajectory optimization is effective not just at improving parameter identifiability, but also at probing and uncovering higher-order battery dynamics not incorporated in the initial baseline model.
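A minimal illustration of why the input trajectory matters for Fisher identifiability, using a placeholder one-parameter cell model rather than the dissertation's hybrid LFP model:

```python
import numpy as np

def fisher_information(currents, theta, sigma, h=1e-6):
    """Fisher information of theta for a toy cell model v(i) = ocv - theta*i
    (theta = internal resistance) with i.i.d. Gaussian voltage noise:
        F = (1/sigma^2) * sum_k (dv_k/dtheta)^2,
    with sensitivities computed by central differences. The model and
    numbers are placeholders for illustration only."""
    def v(i, th):
        return 3.3 - th * i
    sens = (v(currents, theta + h) - v(currents, theta - h)) / (2 * h)
    return float(np.sum(sens ** 2)) / sigma ** 2

rest = np.zeros(100)        # resting cell: no excitation, no information
pulse = np.full(100, 2.0)   # 2 A pulses excite the resistive voltage drop
print(fisher_information(rest, 0.05, 0.01),
      fisher_information(pulse, 0.05, 0.01))
```

A rest trajectory yields zero information about the resistance, while the pulsed trajectory does not; trajectory optimization generalizes this idea to richer models and parameter vectors.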
Collectively, these four contributions show the degree to which selecting battery cycling datasets and experiments for richness and cleanness can enable higher-fidelity data-driven and hybrid modeling for multiple battery chemistries.
Item Locomotion on Granular Media: Reduced-Order Models (2024) Nikiforidou, Christina; Balachandran, Balakumar; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Deciphering and predicting the dynamics of locomotion over compliant terrain is garnering increasing interest because of its relevance to search and rescue missions, planetary exploration, and robot navigation on diverse surfaces. In the realm of assistive devices for individuals with limited ankle mobility, it is valuable for an assistive device to be usable in both indoor and outdoor environments. In this thesis research, the Dynamic Data-Driven Application Systems (DDDAS) framework is employed to determine lower-extremity trajectories for supporting the operation of a robotic device on uneven terrains. With this framework, data obtained from simulations is combined with noisy sensor measurements to predict the contact forces of a leg interacting with granular media. For the simulation of the dynamical responses of legged locomotion on granular media, two models are used: a reduced-order model based on Resistive Force Theory (RFT) and a high-fidelity model based on the Smoothed Particle Hydrodynamics (SPH) method. The results obtained with the two models are compared for various leg morphologies to assess how well these models capture the complex contact interactions between a robot appendage and granular media.
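Resistive Force Theory reduces granular contact to depth-dependent stresses summed over the intruder; a heavily simplified sketch (generic linear-in-depth resistance only, omitting RFT's empirically fit orientation- and motion-angle scaling functions):

```python
import numpy as np

def rft_vertical_force(depths, areas, k_sigma):
    """Simplest RFT-style estimate: the stress on each surface element is
    linear in its depth below the free surface, sigma = k_sigma * z, and the
    total force is sum(sigma * dA) over intruding elements. Elements above
    the surface (negative depth) contribute nothing. k_sigma is a generic
    penetration-resistance constant (an assumption, fit per medium in RFT)."""
    depths = np.asarray(depths, dtype=float)
    areas = np.asarray(areas, dtype=float)
    return float(np.sum(k_sigma * np.clip(depths, 0.0, None) * areas))

# Hypothetical flat foot discretized into 4 elements, 2 cm deep in sand.
F = rft_vertical_force(depths=[0.02] * 4, areas=[5e-4] * 4, k_sigma=2.5e6)
print(f"resistive force: {F:.1f} N")
```

The appeal of the reduced-order model is exactly this kind of cheap per-element sum, versus tracking millions of particles in an SPH simulation.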
After data collection from simulations, the efficiency of the proposed data-driven framework is illustrated and discussed by examining test cases that involve the gait responses of robotic appendages interacting with granular material. Preliminary experiments on foot interactions with granular media have also been conducted, and these data have also been considered in the DDDAS framework. The present work can serve as a basis for further developing the utility of the DDDAS framework for robotic device operations, with a particular emphasis on assistive robots. The combination of data generated from off-line simulations and real-time data from sensors on these assistive robots can help the robots adapt better to different terrains.
Item Metareasoning Strategies to Correct Navigation Failures of Autonomous Ground Robots (2024) Molnar, Sidney Leigh; Herrmann, Jeffrey; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Due to the complexity of autonomous systems, theoretically perfect path planning algorithms sometimes fail due to emergent behaviors that arise when interacting with different perception, mapping, and goal planning subprocesses. These failures prevent mission success, especially in complex environments that the robot has not previously explored. To overcome these failures, many researchers have sought to develop parameter learning methods to improve either mission success or path planning convergence. Metareasoning, which can be simply described as "thinking about thinking," offers another possible solution for mitigating these planning failures. This project offers a novel metareasoning approach that uses different methods of monitoring and control to detect and overcome path planning irregularities that contribute to path planning failures.
All methods for the approaches were implemented as part of the ARL ground autonomy stack, which uses both global and local path planning ROS nodes. The proposed monitoring methods include: listening to messages published to the system by the planning algorithms themselves; evaluating the environmental context that the robot is in; the expected-progress methods, which use the robot's movement capabilities to evaluate the progress made from a milestone checkpoint; and the fixed-radius methods, which use user-selected parameters based on mission objectives to evaluate the progress made from a milestone checkpoint. The proposed control policies are the metric-based sequential policies, which use benchmark robot performance metrics to select the order in which the planner combinations are launched; the context-based pairs policies, which evaluate what happens when switching between only two planner combinations; and the restart policy, which simply relaunches a new instance of the same planner combination. The study evaluated which monitoring and control policies, when paired, contributed to improved navigation performance and which contributed to degraded navigation performance, by evaluating how close the robot was able to get to the final mission goal. Although specific methods were evaluated, the contributions of the project extend beyond the results by offering both a template for metareasoning approaches to navigation and replicable algorithms that may be applied to any autonomous ground robot system.
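The expected-progress idea above can be sketched as a simple monitor; the interface and thresholds below are hypothetical, not the ARL stack's parameters:

```python
def expected_progress_monitor(dist_to_goal_log, window, min_progress):
    """Toy expected-progress monitor: compare the reduction in distance-to-goal
    over the last `window` updates since a milestone against the minimum
    progress the robot's movement capability implies it should have made.
    Returning True signals the metareasoner to intervene (e.g., switch or
    restart the planner combination)."""
    if len(dist_to_goal_log) < window + 1:
        return False  # not enough history since the milestone checkpoint
    progress = dist_to_goal_log[-window - 1] - dist_to_goal_log[-1]
    return progress < min_progress

log = [10.0, 9.9, 9.9, 9.85, 9.85, 9.84]   # meters to goal: nearly stalled
print(expected_progress_monitor(log, window=5, min_progress=1.0))  # True -> intervene
```

The control policies then decide *what* to do once the monitor fires, which is the pairing the study evaluates.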
Additionally, this thesis presents ideas for further research to determine under which conditions metareasoning will improve navigation.

Item DESIGN NOVELTY EVALUATION THROUGH ORDINAL EMBEDDING: COMPARISON OF NOVELTY AND TRIPLET ERRORS(2024) Keeler, Matthew Garrett; Fuge, Mark; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)A practical and well-studied method for computing the novelty of a design is to construct an embedding from a collection of relative comparisons among items (called triplets) and use distances within that embedding to compute which designs are farthest from the center. These triplet comparisons are posed in the form "Is Design A closer to Design B or Design C?" and inform the placement of designs in the similarity-space embedding. This method of creating an embedding from non-metric relationship comparisons is known as ordinal embedding. Unfortunately, ordinal embedding methods can require a large number of triplets before their primary error measure (the proportion of violated triplet comparisons) converges. But if our goal is accurate novelty estimation, is it really necessary to fully minimize all triplet violations? Can we extract useful information regarding the novelty of all or some items using fewer triplets than existing convergence rates on the saturation of triplet violations might imply? This thesis addresses these questions by studying the relationship between triplet violation error and novelty score error when using ordinal embeddings. We find that estimating the novelty of a set of items via ordinal embedding can require significantly fewer human-provided triplets than are needed for the triplet error to converge, and that this effect is modulated by the type of triplet sampling method (random versus uncertainty-informed active sampling).
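The two error measures compared in this work, center-distance novelty and the triplet violation rate, can be written compactly. This is a minimal pure-Python sketch, not the thesis's implementation:

```python
import math

def novelty_scores(embedding):
    """Novelty of each design = distance from the centroid of the
    similarity-space embedding."""
    dim = len(embedding[0])
    center = [sum(p[d] for p in embedding) / len(embedding) for d in range(dim)]
    return [math.dist(p, center) for p in embedding]

def triplet_error(embedding, triplets):
    """Proportion of triplets (a, b, c), read as 'a is closer to b than
    to c', that the embedding violates."""
    violations = sum(
        1 for a, b, c in triplets
        if math.dist(embedding[a], embedding[b]) >= math.dist(embedding[a], embedding[c])
    )
    return violations / len(triplets)
```

The thesis's question can then be phrased as: how quickly does `novelty_scores` stabilize as triplets accumulate, relative to how quickly `triplet_error` converges?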
Having learned this, we propose a custom metric we call the "Expected Model Change" (EMC), which we use to observe when the novelty information in the embedding has stopped updating under newly labeled triplets, so that conservative bounding functions need not be used. Moreover, to avoid the risks of choosing the dimension of the ordinal embedding poorly, we propose using the Expected Model Change to tune the embedding dimension to an appropriate value. In this framework, we explore the convergence properties of ordinal embeddings reconstructed from triplets drawn from a variety of synthetic and real-world design spaces.

Item Denoising the Design Space: Diffusion Models for Accelerated Airfoil Shape Optimization(2024) Diniz, Cashen; Fuge, Mark D; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)Generative models offer the possibility of accelerating, and potentially substituting for, parts of the often expensive traditional design optimization process. We present Aero-DDM, a novel application of a latent denoising diffusion model (DDM) capable of generating airfoil geometries conditioned on flow parameters and an area constraint. Additionally, we create a novel, diverse dataset of optimized airfoil designs that better reflects a realistic design space than previous work has done. Aero-DDM is applied to this dataset, and key metrics are assessed both statistically and with an open-source computational fluid dynamics (CFD) solver to determine the performance of the generated designs. We compare our approach to an optimal transport GAN and demonstrate that our model can generate designs with superior performance statistically, in aerodynamic benchmarks, and in warm-start scenarios.
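The denoising process can be warm-started from an existing design rather than pure noise, skipping part of the reverse schedule. The toy linear schedule and blend-toward-target "denoiser" below are assumptions for illustration only, not Aero-DDM's trained network:

```python
def noise_schedule(num_steps):
    """Toy linear noise schedule from high to zero (an assumption,
    not Aero-DDM's actual schedule)."""
    return [1.0 - (t + 1) / num_steps for t in range(num_steps)]

def toy_denoiser(x, target, level):
    """Stand-in for the trained network: blend the sample toward a clean
    'design' as the noise level drops."""
    return [level * xi + (1.0 - level) * ti for xi, ti in zip(x, target)]

def sample(x_init, target, num_steps, skip_frac=0.0):
    """Warm start: begin the reverse process from an existing design
    (x_init) and run only the last (1 - skip_frac) of the schedule."""
    schedule = noise_schedule(num_steps)
    start = round(skip_frac * num_steps)
    x = x_init
    for level in schedule[start:]:
        x = toy_denoiser(x, target, level)
    return x, num_steps - start
```

With `skip_frac=0.8`, only a fifth of the inference steps run, which is the mechanism (in toy form) behind the step reduction reported below.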
We also extend our diffusion model approach and demonstrate that the number of steps required for inference can be reduced by as much as ~86%, compared to an optimized version of the baseline inference process, without meaningful degradation in design quality, simply by using the initial design to start the denoising process.

Item The Natural Response of Uniform and Nonuniform Plates in Air and Partially Submerged in a Quiescent Water Body(2024) Fishman, Edwin Barry; Duncan, James; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)The free vibration of three horizontally oriented aluminum plates (0.4 m wide, 1.08 m long) is studied experimentally under two fluid conditions: one with the plate surrounded by air, called the Air case, and the other with the bottom plate surface in contact with a large undisturbed pool of water, called the Half-Wet case. Measurements of the out-of-plane deflection of the upper surfaces of the plates are made using cinematic Digital Image Correlation (DIC) over the center portion of the surface and optical tracking of the center point. Three plate geometries and boundary conditions are studied: a uniform plate with 6.35 mm thickness pinned at the two opposite narrow ends (designated UP), a uniform plate with 4.83 mm thickness simply supported at one narrow end and clamped at the opposite end (UC), and a stepped plate with thickness varying from 12.7 mm to 6.35 mm along its 1.08 m length pinned at the two opposite narrow ends (SP). The plate's free response is induced using an impact hammer at three locations along the center-line of the plate. Video frames of the motion of the upper surface of the plate are collected from stereoscopic cameras and processed using DaVis-Strainmaster and MATLAB to extract full-field displacements as a function of time.
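As a toy illustration of extracting a natural frequency from such a displacement time series: zero-crossing counting stands in for the frequency-response analysis, and the single-mode formula f = sqrt(k/m)/(2π) shows why fluid loading (added mass) lowers the measured frequency. Both helpers are sketches, not the experimental processing pipeline:

```python
import math

def dominant_frequency(signal, dt):
    """Crude frequency estimate (Hz) by counting sign changes: each full
    oscillation period contains two zero crossings."""
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    duration = dt * (len(signal) - 1)
    return crossings / (2.0 * duration)

def natural_frequency(stiffness, mass, added_mass=0.0):
    """Single-mode estimate f = sqrt(k / m) / (2*pi); fluid loading enters
    as added mass in the denominator, lowering the frequency."""
    return math.sqrt(stiffness / (mass + added_mass)) / (2.0 * math.pi)
```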
Two-degree-of-freedom displacements of the plate center are also collected by tracking a target attached to the center of the plate's lower surface. Time and frequency response plots are presented for comparison between the Half-Wet and Air cases and for analysis of their dynamics. It is found that the added mass of the water results in lower measured natural frequencies and modified mode shapes. In the Air case, these results are compared to mode shapes and frequencies produced in Creo Simulate and found to agree. Further experiments are discussed.

Item Second Wave Mechanics(2024) Fabbri, Anthony; Herrmann, Jeffrey W; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)The COVID-19 pandemic exhibited well-documented "waves" of the virus's progression, which can be analyzed to predict future wave behavior. This thesis describes a data analysis algorithm for analyzing pandemic behavior and other, similar problems. The approach splits the linear and sinusoidal elements of a pandemic in order to predict the behavior of future "waves" of infection from previous ones, creating a very long-term prediction of a pandemic. Common wave shape patterns can also be identified to infer mutations that have occurred recently but are not yet widely known, and so predict the remaining course of the wave. By considering only the patterns in the data that could plausibly have acted in tandem to generate the observed results, many false patterns can be eliminated, and hidden variables can therefore be estimated with high probability.
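The linear/sinusoidal split at the heart of this approach can be sketched with an ordinary least-squares detrend, a simplified stand-in for the thesis's algorithm:

```python
import math

def fit_line(t, y):
    """Closed-form least-squares fit of y ~ a + b*t."""
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(y) / n
    b = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) \
        / sum((ti - tbar) ** 2 for ti in t)
    return ybar - b * tbar, b

def split_linear_sinusoidal(t, y):
    """Separate a series into a linear trend and the oscillatory residual;
    the residual carries the wave-to-wave pattern used for prediction."""
    a, b = fit_line(t, y)
    residual = [yi - (a + b * ti) for ti, yi in zip(t, y)]
    return (a, b), residual
```

Once the trend is removed, the residual "waves" can be compared against each other to extrapolate the next one.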
Similar mathematical relationships can reveal hidden variables in other underlying differential equations.

Item A Framework for Remaining Useful Life Prediction and Optimization for Complex Engineering Systems(2024) Weiner, Matthew Joesph; Azarm, Shapour; Groth, Katrina M; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)Remaining useful life (RUL) prediction plays a crucial role in maintaining the operational efficiency, reliability, and performance of complex engineering systems. Recent efforts have primarily focused on individual components or subsystems, neglecting the intricate relationships between components and their impact on system-level RUL (SRUL). This gap in predictive methodologies has prompted the need for an integrated approach that addresses the complex nature of these systems while optimizing performance with respect to these predictive indicators. This thesis introduces a novel methodology for predicting and optimizing SRUL and demonstrates how the predicted SRUL can be used to optimize system operation. The approach incorporates various types of data, including condition monitoring sensor data and component reliability data. The methodology leverages probabilistic deep learning (PDL) techniques to predict component RUL distributions from sensor data, and from component reliability data when sensor data are not available. Furthermore, an equation node-based Bayesian network (BN) is employed to capture the complex causal relationships between components and predict the SRUL. Finally, system operation is optimized using a multi-objective genetic algorithm (MOGA), in which SRUL is treated both as a constraint and as an objective function, with the other objective relating to mission completion time. The validation process includes a thorough examination of the component-level methodology using the C-MAPSS data set.
The practical application of the proposed methodology is demonstrated through a case study involving an unmanned surface vessel (USV), which incorporates all aspects of the methodology, including system-level validation through qualitative metrics. Evaluation metrics are employed to assess both component- and system-level results, as well as the results from the optimizer, providing a comprehensive understanding of the proposed approach's performance. This thesis makes several main contributions. These include a new deep learning structure for component-level PHM, which uses a hybrid loss function for a multi-layer long short-term memory (LSTM) regression model to predict RUL with a given confidence interval while also considering the complex interactions among components. Another contribution is a new framework for computing SRUL from these predicted component RULs, in which a Bayesian network performs logic operations to determine the SRUL. These contributions advance the field of PHM and also provide a practical application in engineering. The ability to accurately predict and manage the RUL of components within a system has profound implications for maintenance scheduling, cost reduction, and overall system reliability. The integration of the proposed method with an optimization algorithm closes the loop, offering a comprehensive solution for offline planning and SRUL prediction and optimization.
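The logic-operation step, computing SRUL from predicted component RULs, can be illustrated with the usual series/parallel reliability rules (min/max). The equation node-based BN in the thesis generalizes this deterministic toy to distributions:

```python
def system_rul(structure, component_ruls):
    """Evaluate a nested series/parallel reliability structure, e.g.
    ('series', 'engine', ('parallel', 'pump_a', 'pump_b')).
    A series chain fails at its first component failure (min), while a
    redundant group survives until its last member fails (max).
    Component and structure names here are hypothetical."""
    if isinstance(structure, str):
        return component_ruls[structure]
    op, *children = structure
    values = [system_rul(child, component_ruls) for child in children]
    return min(values) if op == "series" else max(values)
```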
The results of this research can be used to enhance the efficiency and reliability of engineering systems, leading to more informed decision-making.

Item TOWARDS AUTOMATION OF HEMORRHAGE DIAGNOSTICS AND THERAPEUTICS(2024) Chalumuri, Yekanth Ram; Hahn, Jin-Oh; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)The main aim of this thesis is to advance the algorithms and methodologies that support hemorrhage diagnostics and therapeutics in low-resource settings. The first objective is to develop algorithms to detect internal hemorrhage using non-invasive, multi-modal physiological sensing. We developed a machine learning algorithm that can classify various types of hypovolemia and showed that it outperforms algorithms based primarily on vital signs. To address the limitations of data-driven approaches, we also explored physics-based approaches to detecting internal hemorrhage. In silico analysis showed that our physics-based algorithms can detect hemorrhage even when it is being compensated by fluid resuscitation. The second objective is to advance the regulatory aspects of physiological closed-loop control systems for maintaining blood pressure at a desired value during hemorrhage and resuscitation. Physiological closed-loop control systems offer an exciting opportunity to treat hemorrhage in low-resource settings but often face regulatory challenges due to safety concerns. A physics-based model with rigorous validation can improve the regulatory standing of such systems, but current validation techniques are naive. We developed a physics-based model that can predict hemodynamics during hemorrhage and resuscitation and validated it using a framework based on sampled digital twins.
We then used the validated model and virtual patient generator to evaluate how well they predict the closed-loop controller metrics of unseen experimental data. In summary, we sought to improve hemorrhage care through novel algorithm development and through in silico validation and evaluation of computational models that can be used to treat hemorrhage.

Item ON DATA-BASED MAPPING AND NAVIGATION OF UNMANNED GROUND VEHICLES(2024) Herr, Gurtajbir Singh; Chopra, Nikhil; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)Unmanned ground vehicles (UGVs) have seen tremendous advancement in their capabilities and applications in the past two decades. With several key algorithmic and hardware breakthroughs and advancements in deep learning, UGVs are quickly becoming ubiquitous, finding applications as self-driving cars, in remote site inspections, and in hospitals and shopping malls, among others. Motivated by their large-scale adoption, this dissertation aims to enable the navigation of UGVs in complex environments. First, a supervised learning-based navigation algorithm that uses model predictive control (MPC) to provide training data is developed. Improving MPC performance through data-based modeling of complex vehicle dynamics is then addressed. Finally, this dissertation deals with detecting and registering transparent objects that may degrade navigation performance. Navigation in dynamic environments poses unique challenges, particularly due to limited knowledge of the decisions made by other agents and their objectives. This dissertation proposes a solution that uses an MPC-based planner as an "expert" to generate high-quality motion commands for a car-like robot operating in a simulated dynamic environment. These commands are then used to train a deep neural network, which learns to navigate.
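The expert-to-learner pipeline just described can be sketched as behavior cloning. Here a nearest-neighbor lookup stands in for the deep neural network, and the "expert" is any callable; both are assumptions for illustration, not the dissertation's architecture:

```python
import math

def collect_expert_data(mpc_expert, states):
    """Roll the 'expert' planner over sampled states to build
    (state, command) training pairs -- the supervision for the learner."""
    return [(s, mpc_expert(s)) for s in states]

def nearest_neighbor_policy(dataset):
    """Stand-in for the trained deep network: return the expert command
    recorded at the closest stored state (1-nearest-neighbor)."""
    def policy(state):
        _, command = min(dataset, key=lambda pair: math.dist(pair[0], state))
        return command
    return policy
```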
The deep learning-based planner is further enhanced with safety margins to improve its effectiveness in collision avoidance. The performance of the proposed method is demonstrated through simulations and real-world experiments, showing its superiority in terms of obstacle avoidance and successful mission completion. This research has practical implications for the development of safer and more efficient autonomous vehicles. Many real-world applications rely on MPC to control UGVs due to its safety guarantees and constraint satisfaction properties. However, the performance of such MPC-based solutions is heavily reliant on the accuracy of the motion model. This dissertation addresses this challenge by exploring a data-based approach to discovering vehicle dynamics. Unlike existing physics-based models that require extensive testing setups and manual tuning for new platforms and driving surfaces, our approach leverages the universal differential equations (UDE) framework to identify unknown dynamics from vehicle data. This approach, which makes no assumptions about the form of the unknown dynamics terms and directly models the vector field, is then deployed to showcase its efficacy, opening up new possibilities for more accurate and adaptable motion models for UGVs. With the increasing adoption of glass and other transparent materials, UGVs must be able to detect and register such surfaces for reliable navigation. Unfortunately, these objects are not easily detected by LiDARs and cameras. In this dissertation, algorithms for detecting and including glass objects in a Graph SLAM framework are studied. A simple and computationally inexpensive glass detection scheme is utilized, and the methodology for incorporating the identified objects into the occupancy grid maintained by the framework is then presented.
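A glass check of the simple, computationally inexpensive kind described can be sketched from a LiDAR intensity profile. The peak-over-background heuristic below is an assumption modeled on common intensity-based glass detectors (glass tends to produce a strong, narrow specular return near normal incidence), not necessarily the dissertation's exact scheme:

```python
def detect_glass(intensities, peak_threshold, window=2):
    """Flag scan indices whose return intensity stands out from the local
    background by more than `peak_threshold` -- a toy specular-peak test."""
    hits = []
    for i in range(window, len(intensities) - window):
        neighborhood = intensities[i - window:i + window + 1]
        background = (sum(neighborhood) - intensities[i]) / (len(neighborhood) - 1)
        if intensities[i] - background > peak_threshold:
            hits.append(i)
    return hits
```

Flagged beams can then be written into the occupancy grid as obstacles so the planner treats the glass as solid.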
The issue of "drift accumulation," which can degrade mapping performance when operating in large environments, is also addressed.

Item De-conflicting management of fluid resuscitation and intravenous medication infusion(2024) Yin, Weidi; Hahn, Jin-Oh; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)The treatment of combat casualties frequently involves the infusion of multiple drugs (e.g., sedatives, opioids, and vasopressors) in addition to fluid resuscitation. Usually, fluid resuscitation is performed first to restore the patient's volume state, followed by the infusion of drugs that optimize hemodynamics and/or relieve pain. In some circumstances, however, fluid and drugs must be infused simultaneously. Simultaneous administration of fluid and intravenous drugs presents a practical challenge related to the interactions between them. On one hand, the infused fluid dilutes the drugs by lowering their plasma concentration, thereby weakening their intended clinical effects. On the other hand, the clinical effects of the intravenously administered drugs on hemodynamics can interfere with the therapeutic goal of fluid resuscitation. Yet the vast majority of existing work on closed-loop control has focused on either fluid resuscitation or intravenous drug infusion alone, and methodologies and algorithms applicable to simultaneous administration of fluid and intravenous drugs have not been rigorously investigated. In the context of control engineering, this might be considered simply a multivariable control problem. Nevertheless, the intricacy and nonlinearity of the system dynamics, in conjunction with limited sensor measurements, make this problem highly challenging.
Hence, our work to analyze the conflicts between multiple treatments and to develop an algorithmic framework to overcome them can represent a major step toward the realization of complex automated medical care, with a significant impact on human wellbeing. The main objective of this thesis is to investigate de-conflicting management of fluid resuscitation and medication infusion, which is twofold: first, to develop a mechanistic understanding of the interactions and interferences between the two treatments, and second, to devise novel solutions that address the resulting challenges. To achieve the first goal, we developed an integrated mathematical model of the cardiovascular system and a pharmacokinetic-pharmacodynamic (PK-PD) model of the drugs. This involved constructing the model from current knowledge of physiology and of isolated and interactive drug effects, identifying parameters using real-world data to verify and validate the model, and rigorously analyzing the results to demonstrate that multiple medical treatments can endanger patient safety unless they are properly controlled. To accomplish the second goal, we designed a strategy that realizes safety-assured control of multiple treatments, involving model-based hemodynamic monitoring, robust nonlinear dynamic feedback control, safety assurance control design, and treatment target mediation. For the controller design, we used a 2-degree-of-freedom PID controller for the fluid loop and, for the drug loops, a PID controller with guaranteed absolute stability based on the circle criterion and linear matrix inequalities (LMI). This dissertation considers a 2-input, 2-output model (fluid resuscitation and propofol sedation) as well as a more sophisticated 3-input, 2-output model (fluid resuscitation and propofol sedation with PHP vasopressor treatment) as case studies. The proposed methods worked well on both models.
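The fluid-loop controller can be illustrated with a textbook discrete PID. This sketch omits the second degree of freedom (setpoint weighting) and the circle-criterion/LMI stability analysis used for the drug loops; gains and the toy plant in the usage note are illustrative:

```python
class PID:
    """Textbook discrete PID loop (not the dissertation's tuned 2-DOF
    controller; a minimal sketch of the feedback structure)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Driving a first-order surrogate plant (dx/dt = -x + u) toward a blood-pressure-like setpoint shows the integral action removing steady-state error.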
In addition, having more inputs provides more flexibility in controller design.

Item MACHINE LEARNING IN SCARCE DATA REGIME FOR DESIGN AND DISCOVERY OF MATERIALS(2024) BALAKRISHNAN, SANGEETH; Chung, Peter; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)In recent years, data-driven approaches based on machine learning have emerged as a promising method for rapid and efficient estimation of structure-property-performance relationships, leading to the discovery of advanced materials. However, the cost and time required to obtain relevant data have limited the application of these methods to the few classes of materials for which extensive property data are available. Moreover, material property prediction poses its own unique set of challenges, due in part to: 1) the complex nonlinear response of materials across space and time domains, 2) inherent variability in material composition and processing conditions from the atomic to the macroscopic scale, and 3) the need for accurate, rapid, and inexpensive predictive models for accelerated material discovery. This dissertation develops three novel machine learning frameworks for constructing targeted learning models and discovering novel materials when the available data are limited, and it highlights the future directions and challenges of such approaches. In the first approach, we develop data-driven methods to estimate material properties under shock compression. A novel featurization approach combining synthetic and physical features was developed, showing substantial improvements in machine learning model performance. The effects of feature engineering, model choices, and uncertainty in the experimental data were investigated.
In the second approach, we develop a novel joint embedding framework that enables transfer learning, with the target of locally optimizing the shock wave properties of nitrogen-rich molecules. This work is motivated by the need to overcome challenges in translating machine learning approaches to domains with relatively little domain-specific data. However, the properties studied in the second approach do not consider the factors needed to assemble a complete material system. Therefore, in the third and final approach, we investigate material systems whose system-level properties are determined by various upstream design factors, such as the composition of raw materials, manufacturing variability, and considerations involved in assembling the system. We propose a stacked ensemble learning framework to make statistical inferences about the system properties.

Item Automated Simulation and the Discovery of Mechanical Devices(2024) Chiu, Kevin; Fuge, Mark; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)Automatically designing or finding novel devices that accomplish new or existing functions remains one of the greatest unsolved problems in Design Automation. In part, this is due to 1) the interplay of physical form and usage, 2) the emergence of complex behaviors from combinations of simple geometries, and 3) the sparsity and instability of "interesting" physical phenomena under small changes in the design space. These factors have historically stymied past efforts, since most approaches required 1) human intuition and creativity, 2) infeasibly large amounts of computational power, or 3) a priori knowledge of the desired behavior.
In contrast, this dissertation takes a data-driven approach to addressing the general question "What device functionality emerges organically from knowledge of various physical laws?" To make this high-level question more precise, this dissertation tackles three interrelated sub-questions that address challenges arising when deploying data-driven methods on function discovery tasks. First, to generate diverse, high-quality datasets from which an algorithm might find novel behavior, this dissertation asks, "How do we enumerate possible boundary conditions for a given physical law that lead to well-defined solutions of a given partial differential equation?" Chapter 3 proposes a type-based indexing scheme, and two properties of that scheme, that can generate valid Finite Element Method (FEM) formulations, resulting in a three-fold increase in the number of simulations generated from our limited set of boundary conditions. Chapter 4 proposes a regression formulation for predicting physical realizability in Stokes flow simulations, as estimated from the magnitude of the pressure field. Second, this dissertation asks, "How do we encapsulate the emergence of complex behaviors from interactions between different components?" Chapter 5 proposes reframing this question as an error regression, using graph neural networks to adjust for the "error" (i.e., emergent behavior) incurred by composing multiple basis Navier-Stokes simulations into one large simulation. Lastly, given solution field data, this dissertation asks, "Under what conditions can we detect novel device behaviors through computer-driven simulation and exploration?" Chapter 6 proposes a boundary representation method and a modified hierarchical clustering approach, called Silhouette-optimized Hierarchical Density-Based Spatial Clustering of Applications with Noise (SHDBSCAN), to identify clusters of fluidic devices with similar behaviors.
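The silhouette-optimized selection step can be illustrated in miniature: score candidate clusterings (e.g., density-based clustering runs at different settings) by mean silhouette and keep the best. This pure-Python sketch is illustrative, not the chapter's SHDBSCAN implementation:

```python
import math

def silhouette(points, labels):
    """Mean silhouette coefficient: for each point, (b - a) / max(a, b),
    where a is its mean distance within its own cluster and b its mean
    distance to the nearest other cluster."""
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    def mean_dist(p, members):
        return sum(math.dist(p, q) for q in members) / len(members)
    total = 0.0
    for p, l in zip(points, labels):
        own = [q for q in clusters[l] if q is not p]
        if not own:
            continue  # singleton clusters contribute 0 by convention
        a = mean_dist(p, own)
        b = min(mean_dist(p, clusters[m]) for m in clusters if m != l)
        total += (b - a) / max(a, b)
    return total / len(points)

def best_labeling(points, candidate_labelings):
    """Silhouette-optimized model selection: keep the clustering that
    scores highest on the same points."""
    return max(candidate_labelings, key=lambda lab: silhouette(points, lab))
```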
This chapter shows that the solution field representation has a significantly stronger impact on detecting novel device behaviors than the choice of clustering algorithm, but that a major challenge lies in capturing "interesting" behavior in the design space in the first place. Overall, this dissertation illuminates promising simulation methods for automating functional discovery and presents initial work on using data-driven methods to analyze the resulting data. It also highlights several challenges, including the curse of dimensionality, that plague such approaches.