Electrical & Computer Engineering Theses and Dissertations

Permanent URI for this collection: http://hdl.handle.net/1903/2765


Recent Submissions

Now showing 1 - 20 of 1133
  • Item
    INGESTIBLE BIOIMPEDANCE SENSING DEVICE FOR GASTROINTESTINAL TRACT MONITORING
    (2024) Holt, Brian Michael; Ghodssi, Reza; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Gastrointestinal (GI) diseases, such as inflammatory bowel disease (IBD), result in dilated adherens and tight junctions, altering mucosal tissue permeability. Few techniques have been developed for in situ monitoring of local mucosal barrier integrity, and none are capable of non-invasive measurement beyond the esophagus. In this work, this technology gap is addressed through the development of a noise-resilient, flexible bioimpedance sensor-integrated ingestible device containing electronics for low-power, four-wire impedance measurement and Bluetooth-enabled wireless communication. Through electrochemical deposition of a conductive polymeric film, the sensor charge transfer capacity is increased 51.4-fold, enabling low-noise characterization of excised intestinal tissues with integrated potentiostat circuitry for the first time. A rodent trial is performed, demonstrating successful differentiation of healthy and permeable mouse colonic tissues using the developed device. In accordance with established mucosal barrier evaluation methodologies, mucosal impedance was reduced by between 20.3 ± 9.0% and 53.6 ± 10.7% of its baseline value in response to incrementally induced tight junction dilation. Ultimately, this work addresses the fundamental challenges of electrical resistance techniques hindering localized, non-invasive IBD diagnostics. Through the development of a simple and reliable bioimpedance sensing module, the device marks significant progress towards explicit quantification of “leaky gut” patterns in the GI tract.
  • Item
    Active Power Decoupling (APD) Converter for PV Microinverter Applications
    (2024) Shen, Yidi; Khaligh, Alireza; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Amid the global challenge of climate change, the demand for renewable energy is continuously growing. Photovoltaic (PV) power and its integration into the utility grid are gaining increasing traction. To lower the levelized cost of energy (LCOE) of PV systems, enhance the adoption of PV applications, and ensure the delivery of high-quality power to the utility grid, there is a growing need for reliable, cost-effective, efficient, and compact PV inverters. One key challenge in single-phase PV systems is the short lifetime and poor reliability of electrolytic capacitors used for decoupling the double line frequency (DLF) power. To eliminate the less reliable electrolytic capacitor, the active power decoupling (APD) technique is widely adopted. Various topologies can be used for APD, but the proper selection of topology, modulation scheme, and circuit components, along with the control strategy, determines the efficiency, power density, reliability, and cost of the overall PV microinverter. This Ph.D. dissertation proposes an APD converter circuit suitable for PV microinverters, designed for optimized efficiency, power density, and cost. The proposed APD converter is controlled to achieve good power decoupling performance and to optimize the system's maximum power point tracking (MPPT) efficiency. The proposed APD converter circuit is analyzed in the low-frequency domain for power flow and in the high-frequency domain for modulation strategy, where different topologies are considered, taking into account the voltage and current ratings of active devices and decoupling capacitors. Two modulation approaches, continuous conduction mode (CCM) and critical conduction mode (CRM), are compared, considering detailed zero voltage switching (ZVS) operation and different loss mechanisms. Parametric design and multi-objective optimization are performed for CCM and CRM to select circuit components and switching frequency for each modulation strategy to minimize power loss, volume, and costs. With the results of multi-objective optimization, Pareto-optimal designs for CCM and CRM are analyzed in terms of the impact of various circuit elements, namely: switching device output capacitance and on-state resistance, inductor winding turns and core geometries, as well as capacitor dimensions and capacitance. With the optimal CCM- and CRM-operated APD realizations, closed-loop control algorithms are designed, and the corresponding system characteristics are compared. A simple pulse width modulation (PWM) based control strategy that does not rely on zero-crossing detection (ZCD) is proposed to implement closed-loop CRM modulation. In addition, advanced control technologies, including double sampling-based average current control, current observer-based reduced sensor control, and sensorless predictive control, are proposed to improve APD converter performance, reduce system complexity, and lower circuit cost. The proposed APD converter operation is extended to different application scenarios, including burst-mode operation and non-sinusoidal power delivery, for example in systems with non-linear circuit components, non-linear local loads, or non-ideal grids. A feed-forward control solution is proposed to enable power decoupling for non-sinusoidal power with improved control accuracy and reduced closed-loop design burden. The circuit design, associated analyses, and control approaches are validated by the design, development, and testing of 400 VA APD hardware prototypes.
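    For context on the double line frequency issue, the following standard single-phase power relation (background, not taken from the dissertation) shows why a decoupling buffer is needed: with grid voltage v(t) = V cos(ωt) and injected current i(t) = I cos(ωt − φ), the instantaneous AC-side power contains a constant term plus a component pulsating at twice the line frequency.

    ```latex
    p(t) = v(t)\,i(t) = \frac{VI}{2}\cos\varphi \;+\; \frac{VI}{2}\cos\!\left(2\omega t - \varphi\right)
    ```

    The first term is the average power drawn from the PV source; the second, double-line-frequency term is what an APD stage absorbs so that only a small, reliable capacitance is needed on the DC side.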
  • Item
    Understanding and Improving Reliability of Predictive and Generative Deep Learning Models
    (2024) Kattakinda, Priyatham; Feizi, Soheil; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Deep learning models are prone to acquiring spurious correlations and biases during training and adversarial attacks during inference. In the context of predictive models, this results in inaccurate predictions relying on spurious features. Our research delves into this phenomenon specifically concerning objects placed in uncommon settings, where they are not conventionally found in the real world (e.g., a plane on water or a television in a cave). We introduce the "FOCUS: Familiar Objects in Common and Uncommon Settings" dataset which aims to stress-test the generalization capabilities of deep image classifiers. By leveraging the power of modern search engines, we deliberately gather data containing objects in common and uncommon settings in a wide range of locations, weather conditions, and times of day. Our comprehensive analysis of popular image classifiers on the FOCUS dataset reveals a noticeable decline in performance when classifying images in atypical scenarios. FOCUS consists only of natural images, which are extremely challenging to collect because, by definition, objects are rarely found in unusual settings. To address this challenge, we introduce an alternative dataset named Diffusion Dreamed Distribution Shifts (D3S). D3S comprises synthetic images generated through StableDiffusion, utilizing text prompts and image guides derived from placing a sample foreground image onto a background template image. This scalable approach allows us to create 120,000 images featuring objects from all 1000 ImageNet classes set against 10 diverse backgrounds. Due to the incredible photorealism of the diffusion model, our images are much closer to natural images than previous synthetic datasets. To alleviate the problem of spurious correlations, we propose two methods of learning richer and more robust image representations. In the first approach, we harness the foreground and background labels within D3S to learn a foreground (background) representation resistant to changes in background (foreground). This is achieved by penalizing the mutual information between the foreground (background) features and the background (foreground) labels. We demonstrate the efficacy of these representations by training classifiers on a task with strong spurious correlations. Thus far, our focus has centered on predictive models, scrutinizing the robustness of the learned object representations, particularly when the contextual surroundings are unconventional. In the second approach, we propose to use embeddings of objects and their relationships extracted using off-the-shelf image segmentation models and text encoders respectively as input tokens to a transformer. This leads to remarkably richer features that improve performance on downstream tasks such as image retrieval. Large language models are also prone to failures during inference. Given the widespread use of LLMs, understanding the propensity of these models to fail given adversarial inputs is crucial. To that end, we propose a series of fast adversarial attacks called BEAST that uses beam search to add adversarial tokens to a given input prompt. These attacks induce hallucination, cause the models to jailbreak and facilitate unintended membership inference from model outputs. Our attacks are fast and are executable in relatively compute-constrained environments.
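    The abstract describes penalizing the mutual information between foreground features and background labels (and vice versa). The dissertation's exact estimator is not given here; the sketch below uses one common proxy for such penalties, an adversarial background-label classifier trained through a gradient-reversal layer, purely to illustrate the training signal. The module names, backbone choice, and loss weighting are illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; negates (and scales) the gradient on backward."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_out):
            return -ctx.lam * grad_out, None

    class ForegroundBranch(nn.Module):
        """Foreground feature extractor with (i) a foreground classifier and
        (ii) an adversarial background-label head acting as a penalty proxy."""
        def __init__(self, backbone, feat_dim, n_fg_classes, n_bg_classes, lam=1.0):
            super().__init__()
            self.backbone = backbone            # e.g., a ResNet trunk returning (B, feat_dim); assumed
            self.fg_head = nn.Linear(feat_dim, n_fg_classes)
            self.bg_adversary = nn.Linear(feat_dim, n_bg_classes)
            self.lam = lam

        def forward(self, x, fg_label, bg_label):
            feat = self.backbone(x)             # (B, feat_dim)
            fg_loss = F.cross_entropy(self.fg_head(feat), fg_label)
            # Gradient reversal: the adversary learns to predict the background label,
            # while the backbone is pushed to make that prediction hard, discouraging
            # background information from leaking into the foreground features.
            rev = GradReverse.apply(feat, self.lam)
            bg_loss = F.cross_entropy(self.bg_adversary(rev), bg_label)
            return fg_loss + bg_loss
    ```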
  • Item
    Enhancement and Robustness of Large Timely Gossip Networks
    (2024) Kaswan, Priyanka; Ulukus, Sennur; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In this thesis, we explore the subject of fast dissemination of real-time data from a source to multiple users, critical for time-sensitive applications such as autonomous driving systems, internet of things (IoT), augmented reality (AR), virtual reality (VR), and real-time content sharing in social networks, which feature dense interconnected networks of devices and humans. In the face of network resource limitations and increasingly dynamic data generated by various sources in these networks, it is imperative that all nodes have timely information, i.e., the latest possible updates about the source nodes at all times, for seamless functioning of these networks. Although it might seem straightforward to transmit changing data at high speed to all users, practical challenges such as limited bandwidth, server servicing speed, and intermittent connectivity hinder this approach. Therefore, to achieve the goal of timeliness, measured by metrics such as the age of information, this thesis leverages gossip algorithms, which are decentralized algorithms whose popularity stems from their efficiency and scalability for information dissemination in such constrained and uncertain networks. This thesis aims to deepen our understanding of the capabilities and limitations of timely gossip networks of various topologies in managing large volumes of dynamic data. Just as importantly, this thesis explores the unique threats and vulnerabilities these next-generation networks face in the evolving landscape of 5G and 6G technologies. We open our analysis of this subject by first exploring efficient strategies for timely dissemination of time-varying data files from a source to a user via a simple network of parallel relays. We find that consolidating file update rates to the minimum number of relays improves timeliness, contrary to using all relays for each file. By solving an auxiliary single-cache problem and adapting its solution to our multi-cache network, we provide a sub-optimal solution, such that the upper bound on its gap to the optimal policy is independent of the number of files. Next, we explore complex network topologies and examine the resilience of gossip networks as a function of their connectivity to jamming attacks. We analyze the average version age in the presence of $\tilde{n}$ jammers for both connectivity-constrained ring and connectivity-rich fully connected topologies of $n$ nodes. Our findings reveal that a ring network is robust against up to $\sqrt{n}$ jammers, while a fully connected network withstands $n\log{n}$ jammers, showing that higher connectivity enhances resilience to jamming. To maximize age deterioration in the network, the jammers should attack in a manner that consolidates all remaining unjammed links into a dense cluster of the fewest possible nodes, leaving a larger number of nodes isolated. Next, we uncover a new type of attack in age-based networks, called timestomping, where an adversary manipulates timestamps of information packets, causing nodes to discard fresh packets for stale ones. We show that in fully connected networks, a single infected node can increase the expected age from $O(\log n)$ to $O(n)$, highlighting how full connectivity can expedite adversarial impacts. Conversely, in unidirectional ring networks with sparse inter-node connectivity, we find that the adversarial impact on node age scaling is confined by its distance from the adversary, maintaining an age scaling of $O(\sqrt{n})$ for a significant fraction of the network.
Then, we demonstrate how the age-specific nature of file exchange protocols also makes gossip networks susceptible to the propagation of misinformation. We consider networks where packets can potentially get mutated during inter-node gossiping, creating misinformation. Nodes prefer the latest versions of information; however, when a receiving node encounters both accurate information and misinformation for the same version, we consider two models: one where truth prevails over misinformation and another where misinformation prevails over truth. Using stochastic hybrid systems (SHS) modeling, we examine the expected fraction of nodes with correct information and the version age. We show that higher or lower gossiping rates effectively reduce misinformation when truth prevails, whereas moderate rates increase its spread. Conversely, misinformation prevalence rises with increased gossiping under the misinformation-prevailing scenario. Then, we consider the balance between freshness and reliability of information in an age-based gossip network, where two sources (reliable and unreliable) disseminate updates to $n$ network nodes. Nodes wish to have fresh information; however, they prefer packets that originate at the reliable source and are willing to sacrifice their version age of information by up to $G$ versions to switch from an unreliable packet to a reliable packet. We show that increasing $G$ reduces unreliable packets but raises the network version age, revealing a freshness-reliability trade-off. Next, we develop a theory of timeliness for non-Poisson updating and study cache-aided networks where the inter-update times on the links are not necessarily exponentially distributed. We characterize the expressions for instantaneous age and version age in arbitrary networks, then derive their closed-form expressions in the case of tree networks, where they exhibit an additive structure. In addition, we analyze age of information in networks where update processes on the links become sparse as network size increases, noting that in symmetric fully connected networks, expected age scales as $O(\log{n})$. Then, we study a system where a group of users, interested in closely tracking a time-varying event and maintaining their expected version ages below a threshold, choose between preferentially relying on gossip from their neighbors and directly subscribing to a server publishing about the event, in order to meet the timeliness requirements. The server wishes to maximize its profit by boosting subscriptions from users and minimizing event sampling frequency to reduce costs, setting up a Stackelberg game between the server and the users. We analyze equilibrium strategies in both directed and undirected networks, finding that well-connected networks have fewer subscribers, since well-connected users dissuade their multiple neighboring nodes from subscribing. Next, we consider a gossip network of $n$ users hosting a library of files, such that each file is initially present at exactly one node, designated as the file source. The source gets updated with newer versions of the file according to an arbitrary distribution in real-time, and the other users in the network wish to acquire the latest possible version of the file. We present a class of gossip protocols that achieve $O(1)$ age at a typical node in a single-file system and $O(n)$ age at a typical node for a given file in an $n$-file system. We show that file slicing and network coding based protocols fall under the presented class of protocols.
Finally, we further explore timestomping attacks in a simplified communication model, where a source attempts to minimize the age of a user, but due to a power constraint, the source can only transmit updates directly to the user for a fraction of timeslots over a fixed time horizon. A cache node, which can afford more frequent transmissions, lies between the source and the user; however, the communication link between the cache and the user is under attack by a timestomping adversary. We formulate this adversarial cache updating problem as an online learning problem and study the achievable competitive ratios for this problem.
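    As a concrete illustration of the version-age metric analyzed throughout this thesis, the following Gillespie-style simulation (an illustration written for this summary, not code from the dissertation) estimates the time-averaged version age of a typical node under Poisson updating and gossiping, for a ring versus a fully connected topology. All rates, network sizes, and the simulation horizon are placeholder assumptions.

    ```python
    import random

    def simulate_version_age(neighbors, lam_e=1.0, lam_s=1.0, lam_g=1.0, t_end=5000.0):
        """Gillespie simulation of version age in a gossip network.

        neighbors[i] -- list of nodes that node i pushes its version to
        lam_e -- rate at which the source generates new versions
        lam_s -- total rate at which the source pushes to the network
        lam_g -- total gossip rate of each node (split across its neighbors)
        Returns the network-averaged, time-averaged version age.
        """
        n = len(neighbors)
        v_src = 0
        v = [0] * n                      # version currently held by each node
        total_rate = lam_e + lam_s + n * lam_g
        t, age_integral = 0.0, 0.0
        while t < t_end:
            dt = random.expovariate(total_rate)
            age_integral += dt * sum(v_src - vi for vi in v) / n
            t += dt
            u = random.uniform(0.0, total_rate)
            if u < lam_e:                            # source generates a new version
                v_src += 1
            elif u < lam_e + lam_s:                  # source pushes to a uniformly random node
                v[random.randrange(n)] = v_src
            else:                                    # a random node gossips to one of its neighbors
                i = random.randrange(n)
                j = random.choice(neighbors[i])
                v[j] = max(v[j], v[i])
        return age_integral / t

    if __name__ == "__main__":
        n = 50
        ring = [[(i + 1) % n] for i in range(n)]                    # unidirectional ring
        full = [[j for j in range(n) if j != i] for i in range(n)]  # fully connected
        print("ring:", simulate_version_age(ring))
        print("full:", simulate_version_age(full))
    ```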
  • Item
    Learning Autonomous Underwater Navigation with Bearing-Only Data
    (2024) Robertson, James; Duraiswami, Ramani; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Recent applications of deep reinforcement learning in controlling maritime autonomous surface vessels have shown promise for integration into maritime transportation. These have the potential to reduce at-sea incidents such as collisions and groundings, which are largely attributed to human error. With this in mind, the goal of this work is to evaluate how well a similar deep reinforcement learning agent could perform the same task in submarines, but using passive SONAR rather than the ranging data provided by active RADAR aboard surface vessels. A simulated submarine outfitted with a passive spherical, hull-mounted SONAR sensor is placed into contact scenarios under the control of a reinforcement learning agent and directed to make its way to a navigational waypoint while avoiding interfering surface vessels. To see how this best translates to lower-power autonomous vessels (as opposed to warship submarines), no estimate of the range of the surface vessels is maintained, in order to reduce computing requirements. Inspired by my time aboard U.S. Navy submarines, I provide the agent with only the simulated passive SONAR data. I show that this agent is capable of navigating to a waypoint while avoiding crossing, overtaking, and head-on surface vessels, and thus could provide a recommended course to a submarine contact management team in ample time, since the maneuvers made by the agent are not instantaneous, in contrast to the assumptions of traditional target tracking with bearing-only data. Additionally, an in-progress plugin for Epic Games’ Unreal Engine is presented with the ability to simulate underwater acoustics inside the 3D development software. Unreal Engine is a powerful 3D game engine that is incredibly flexible and capable of being integrated into many different forms of scientific research. This plugin could provide researchers with the ability to conduct useful simulations in intuitively designed 3D environments.
  • Item
    POLYMORPHIC CIRCUITS: THE IDENTIFICATION OF POSSIBLE SOURCES AND APPLICATIONS
    (2024) Dunlap, Timothy; Qu, Gang; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Polymorphic gates are gates whose function depends on some external or environmental conditions. While there has been research into both the creation and applications of polymorphic gates, much remains unknown. This dissertation, motivated by the recent security applications of polymorphic gates, seeks a systematic approach to generating polymorphic gates. Its contributions include a polymorphic interoperability framework, the first study on the source of polymorphism, time-based polymorphic gates, and polymorphism in subthreshold design. Polymorphic circuits are commonly created with evolutionary algorithms [3]. Because the evolutionary algorithm operates in ways that are not always obvious, the precise mechanisms of polymorphism are not immediately clear in the resulting gates and have not been reported before. This dissertation, for the first time, identifies multiple structures that impact the polymorphic nature of the gates, which sheds light on how to create polymorphic gates. This discovery is based on a categorization methodology that evaluates the quality of polymorphic gates and finds the robust ones for further investigation of polymorphism. By combining the discovered structures with the evolutionary algorithm, high-quality polymorphic gates can be generated faster, as demonstrated in the subthreshold design domain. Time-based polymorphism was discovered during the timing analysis of evolved polymorphic circuits while searching for the sources of polymorphism. This occurs when the function of the circuit depends on the sample rate of the circuit, and arises because some input combinations do not quickly reach the output values they move towards. Therefore, when the circuit is running at a different clock frequency, it may exhibit different functionality. This is time-based polymorphism. As one application of polymorphic gates, this dissertation presents a framework that can enhance the fault coverage of any fault testing method by utilizing polymorphic gates. The proposed framework starts with any traditional fault testing approach, and when it becomes less effective in covering uncovered faults, it employs a gate replacement strategy to selectively replace certain standard logic gates with polymorphic gates of specific polymorphism. This concept is demonstrated in the dissertation with examples of a D flip-flop and the ISCAS85 C17 benchmark. This work has high practical value in subthreshold design, where circuit manufacturing defects increase significantly. In summary, this dissertation presents multiple contributions to the study of polymorphic circuits. It discovers multiple sources of polymorphism and how the results of an evolutionary algorithm can be filtered into higher-quality solutions. It also examines time-based polymorphism as a new form of polymorphism with security applications. Finally, an enhancement to stuck-at fault testing using polymorphic gates is presented. This allows for easier testing of corner cases that are hard to detect using traditional methodologies and holds promise for improving the reliability of testing, particularly in the subthreshold domain.
  • Item
    Learning in Large Multi-Agent Systems
    (2024) Kara, Semih; Martins, Nuno C; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In this dissertation, we study a framework of large-scale multi-agent strategic interactions. The agents are nondescript and use a learning rule to repeatedly revise their strategies based on their payoffs. Within this setting, our results are structured around three main themes: (i) guaranteed learning of Nash equilibria, (ii) the inverse problem, i.e., estimating the payoff mechanism from the agents' strategy choices, and (iii) applications to the placement of electric vehicle charging stations. In the traditional setup, the agents' inter-revision times follow identical and independent exponential distributions. We expand on this by allowing these intervals to depend on the agents' strategies or have Erlang distributions. These extensions enhance the framework's modeling capabilities, enabling it to address problems such as task allocation with varying service times or multiple stages. We also explore a third generalization, concerning the accessibility among strategies. The majority of the existing literature assumes that the agents can transition between any two strategies, whereas we allow only certain alternatives to be accessible from certain others. This adjustment further improves the framework's modeling capabilities, such as by incorporating constraints on strategy switching related to spatial and informational factors. For all of these extensions, we use Lyapunov's method and passivity-based techniques to find conditions on the revision rates, learning rule, and payoff mechanism that ensure the agents learn to play a Nash equilibrium of the payoff mechanism. For our second class of problems, we adopt a multi-agent inverse reinforcement learning perspective. Here, we assume that the learning rule is known but, unlike in existing work, the payoff mechanism is unknown. We propose a method to estimate the unknown payoff mechanism from sample path observations of the populations' strategy profile. Our approach is two-fold: we estimate the agents' strategy transitioning probabilities, which we then use - along with the known learning rule - to obtain a payoff mechanism estimate. Our findings regarding the estimation of transitioning probabilities are general, while for the second step, we focus on linear payoff mechanisms and three well-known learning rules (Smith, replicator, and Brown-von Neumann-Nash). Additionally, under certain assumptions, we show that we can use the payoff mechanism estimate to predict the Nash equilibria of the unknown mechanism and forecast the strategy profile induced by other rules. Lastly, we contribute to a traffic simulation tool by integrating electric vehicles, their charging behaviors, and charging stations. This simulation tool is based on spatial-queueing principles and, although less detailed than some microscopic simulators, it runs much faster and accurately represents traffic rules. Using this tool, we identify optimal charging station locations (on real roadway networks) that minimize the overall traffic.
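    For context, two of the named learning rules have standard mean-field forms, shown below (textbook expressions, not equations taken from the dissertation), where x_i is the fraction of agents playing strategy i and F_i(x) is its payoff:

    ```latex
    \text{Replicator:}\quad \dot{x}_i = x_i\Big(F_i(x) - \sum_j x_j F_j(x)\Big),
    \qquad
    \text{Smith:}\quad \dot{x}_i = \sum_j x_j\,[F_i(x) - F_j(x)]_+ \;-\; x_i \sum_j [F_j(x) - F_i(x)]_+ .
    ```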
  • Item
    Studies in Differential Privacy and Federated Learning
    (2024) Zawacki, Christopher Cameron; Abed, Eyad H; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In the late 20th century, Machine Learning underwent a paradigm shift from model-driven to data-driven design. Rather than relying on field-specific models, advances in sensors, data storage, and computing power enabled the collection of increasing amounts of data. The abundance of new data allowed researchers to fit flexible models directly to observed data. The influx of information made possible numerous advances, including the development of novel medicines, increases in market efficiency, and the proliferation of vast sensor networks. However, not all data should be freely accessible. Sensitive medical records, personal finances, and private IDs are all currently stored on digital devices across the world with the expectation that they remain private. At the same time, such data is frequently instrumental in the development of predictive models. Since the beginning of the 21st century, researchers have recognized that traditional methods of anonymizing data are inadequate for protecting client identities. This dissertation's primary focus is the advancement of two fields of data privacy: Differential Privacy and Federated Learning. Differential Privacy is one of the most successful modern privacy methods. By injecting carefully structured noise into a dataset, Differential Privacy obscures individual contributions while allowing researchers to extract meaningful information from the aggregate. Within this methodology, the Gaussian mechanism is one of the most common privacy mechanisms due to its favorable properties, such as the ability of each client to apply noise locally before transmission to a server. However, the use of this mechanism yields only an approximate form of Differential Privacy. This dissertation introduces the first in-depth analysis of the Symmetric alpha-Stable (SaS) privacy mechanism, demonstrating its ability to achieve pure Differential Privacy while retaining local applicability. Based on these findings, the dissertation advocates for using the SaS privacy mechanism in protecting the privacy of client data. Federated Learning is a sub-field of Machine Learning which trains Machine Learning models across a collection (federation) of client devices. This approach aims to protect client privacy by limiting the type of information that clients transmit to the server. However, this distributed environment poses challenges such as non-uniform data distributions and inconsistent client update rates, which reduce the accuracy of trained models. To overcome these challenges, we introduce Federated Inference, a novel algorithm that we show is consistent in federated environments. That is, even when the data is unevenly distributed and the clients' responses to the server are staggered in time (asynchronous), the algorithm is able to converge to the global optimum. We also present a novel result in system identification in which we extend a method known as Dynamic Mode Decomposition to accommodate input-delayed systems. This advancement enhances the accuracy of identifying and controlling systems relevant to privacy-sensitive applications such as smart grids and autonomous vehicles. Privacy is increasingly pertinent, especially as investments in computer infrastructure constantly grow in order to cater to larger client bases. Privacy failures impact an ever-growing number of individuals. This dissertation reports on our efforts to advance the data privacy toolkit through novel methods and analysis while navigating the challenges of the field.
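    To make the contrast with the Gaussian mechanism concrete, the sketch below draws symmetric alpha-stable (SaS) noise with SciPy and adds it to an aggregate query before release. The calibration of the noise scale to the query sensitivity and privacy budget follows the dissertation's analysis and is not reproduced here; the parameter values below are placeholders for illustration only.

    ```python
    import numpy as np
    from scipy.stats import levy_stable

    def sas_private_sum(values, alpha=1.5, scale=1.0):
        """Release the sum of `values` perturbed by symmetric alpha-stable noise.

        alpha -- stability parameter in (0, 2); beta = 0 gives a symmetric law
                 (alpha = 2 recovers the Gaussian family, alpha = 1 the Cauchy).
        scale -- noise scale; in a real deployment this must be calibrated to the
                 query sensitivity and the privacy budget (placeholder here).
        """
        noise = levy_stable.rvs(alpha, 0.0, loc=0.0, scale=scale)
        return float(np.sum(values)) + float(noise)

    # Each client could equivalently add SaS noise locally before transmission,
    # since sums of independent alpha-stable variables (same alpha) remain alpha-stable.
    print(sas_private_sum([0.2, 1.0, 0.7, 0.3]))
    ```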
  • Item
    SYMMETRIC-KEY CRYPTOGRAPHY AND QUERY COMPLEXITY IN THE QUANTUM WORLD
    (2024) Bai, Chen; Katz, Jonathan; Alagic, Gorjan; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Quantum computers are likely to have a significant impact on cryptography. Many commonly used cryptosystems will be completely broken once large quantum computers are available. Since quantum computers can solve the factoring problem in polynomial time, the security of RSA would not hold against quantum computers. For symmetric-key cryptosystems, the primary quantum attack is key recovery via Grover search, which provides a quadratic speedup. One way to address this is to double the key length. However, recent results have shown that doubling the key length may not be sufficient in all cases. Therefore, it is crucial to understand the security of various symmetric-key constructions against quantum attackers. In this thesis, we give the first proof of post-quantum security for certain symmetric primitives. We begin with a fundamental block cipher, the Even-Mansour cipher, and the tweakable Even-Mansour construction. Our research shows that both are secure in a realistic quantum attack model. For example, we prove that 2^{n/3} quantum queries are necessary to break the Even-Mansour cipher. We also consider the practical applications that our work implies. Using our framework, we derive post-quantum security proofs for three concrete symmetric-key schemes: Elephant (an Authenticated Encryption (AE) finalist of NIST’s lightweight cryptography standardization effort), Chaskey (an ISO-standardized Message Authentication Code), and Minalpher (an AE second-round candidate of the CAESAR competition). In addition, we consider the two-sided permutation inversion problem in the quantum query model. In this problem, given an image y and quantum oracle access to a permutation P (and its inverse oracle), the goal is to find its pre-image x such that P(x)=y. We prove an optimal lower bound \Omega(\sqrt{2^n}) for this problem against an adaptive quantum adversary. Moreover, we apply our lower bound above to show that a natural encryption scheme constructed from random permutations is secure against quantum attacks.
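    For reference, the (classical) Even-Mansour construction studied in the thesis builds a cipher from a single public permutation P and two secret whitening keys: E(x) = P(x XOR k1) XOR k2. The toy sketch below instantiates it over n-bit blocks with a randomly sampled permutation; it only illustrates the construction itself, not the quantum security analysis.

    ```python
    import secrets
    import random

    def even_mansour(n=8, seed=0):
        """Toy Even-Mansour cipher over n-bit blocks: E(x) = P(x ^ k1) ^ k2."""
        size = 1 << n
        perm = list(range(size))
        random.Random(seed).shuffle(perm)        # public random permutation P
        inv = [0] * size
        for x, y in enumerate(perm):
            inv[y] = x                           # P^{-1}, needed for decryption
        k1, k2 = secrets.randbelow(size), secrets.randbelow(size)  # secret keys

        def enc(x):
            return perm[x ^ k1] ^ k2

        def dec(y):
            return inv[y ^ k2] ^ k1

        return enc, dec

    enc, dec = even_mansour()
    assert all(dec(enc(x)) == x for x in range(256))
    ```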
  • Item
    SYNPLAY: IMPORTING REAL-WORLD DIVERSITY FOR A SYNTHETIC HUMAN DATASET
    (2024) Yim, Jinsub; Bhattacharyya, Shuvra S.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In response to the growing demand for large-scale training data, synthetic datasets have emerged as practical solutions. However, existing synthetic datasets often fall short of replicating the richness and diversity of real-world data. Synthetic Playground (SynPlay) is introduced as a new synthetic human dataset that aims to bring out the diversity of human appearance in the real world. In this thesis, we focus on two factors to achieve a level of diversity that has not yet been seen in previous works: i) realistic human motions and poses, and ii) multiple camera viewpoints towards human instances. We first use a game engine and its library-provided elementary motions to create games where virtual players can take less-constrained and natural movements while following the game rules (i.e., rule-guided motion design as opposed to detail-guided design). We then augment the elementary motions with real human motions captured with a motion capture device. To render various human appearances in the games from multiple viewpoints, we use seven virtual cameras encompassing the ground and aerial views, capturing abundant aerial-vs-ground and dynamic-vs-static attributes of the scene. Through extensive and carefully designed experiments, we show that using SynPlay in model training leads to enhanced accuracy over existing synthetic datasets for human detection and segmentation. Moreover, the benefit of SynPlay becomes even greater for tasks in the data-scarce regime, such as few-shot and cross-domain learning tasks. These results clearly demonstrate that SynPlay can be used as an essential dataset with rich attributes of complex human appearances and poses suitable for model pretraining.
  • Item
    Autonomous Robot Navigation in Challenging Real-World Indoor and Outdoor Environments
    (2024) Sathyamoorthy, Adarsh Jagan; Manocha, Dr. Dinesh; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The use of autonomous ground robots for various indoor and outdoor applications has burgeoned over the years. In indoor settings, their applications include serving as waiters in hotels, helpers in hospitals, cleaners in airports and malls, transporters of goods in warehouses, and surveillance robots. In unstructured outdoor settings, they have been used for exploration in off-road environments, search and rescue, package delivery, etc. To successfully accomplish these tasks, robots must overcome several challenges and navigate to their goal. In this dissertation, we present several novel algorithms for learning-based perception combined with model-based autonomous navigation in real-world indoor and outdoor environments. The presented algorithms address the problems of avoiding collisions in dense crowds (1 to 2 persons per square meter), reducing the occurrence of the freezing robot problem, navigating in a socially compliant manner without being obtrusive to humans, and avoiding transparent obstacles in indoor settings. In outdoor environments, they address challenges in estimating the traversability of off-road terrains and vegetation, and understanding explicit social rules (e.g., crossing streets using crosswalks). The presented algorithms are designed to operate in real-time using the limited computational capabilities on-board real wheeled and legged robots such as the Turtlebot 2, Clearpath Husky, and Boston Dynamics Spot. Furthermore, the algorithms have been evaluated in real-world environments with dense crowds, transparent obstacles, off-road terrains, and vegetation such as tall grass, bushes, and trees. They have demonstrated significant improvements over state-of-the-art algorithms in various test scenarios in terms of several metrics, such as increasing success rates by at least 50% (robot avoids collisions and reaches its goal), lowering freezing rates by at least 80% (robot does not halt/oscillate indefinitely), increasing pedestrian friendliness by up to 100%, and reducing vibrations experienced in off-road terrains by up to 22%. The first part of this dissertation deals with socially-compliant navigation approaches for crowded indoor environments. The initial methods focus on collision avoidance, handling the freezing robot problem in crowds of varying densities by tracking individual pedestrians, and modeling regions the robot must avoid based on their future positions. Subsequent works expand on these models by considering pedestrian group behaviors. The next part of this dissertation focuses on outdoor navigation methods that estimate the traversability of various terrains and complex vegetation (e.g., pliable obstacles such as tall grass) using perception inputs to navigate on safe and stable terrains. The final part of the dissertation elaborates on methods designed for detecting and navigating complex obstacles in indoor and outdoor environments. It also explores a technique leveraging recent advancements in large vision language models for navigation in both settings. All proposed methods have been implemented and evaluated on real wheeled and legged robots.
  • Item
    Quantum Dots in Photonic Crystals for Hybrid Integrated Silicon Photonics
    (2024) Rahaman, Mohammad Habibur; Waks, Edo Prof.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Quantum dots are excellent sources of on-demand single photons and can function as stable quantum memories. Additionally, advanced fabrication techniques of III-V materials and various hybrid integration methods make quantum dots an ideal candidate for integration into fiber- and silicon-based photonic circuits. However, efficiently extracting and integrating quantum dot emissions into fiber- and silicon-based photonic circuits, particularly with high efficiency and low power consumption, presents a continued challenge. This dissertation addresses this challenge by utilizing photonic crystals to couple quantum dot emissions into fiber- and silicon-based photonic circuits. In this dissertation, we first demonstrate an efficient fiber-coupled single photon source at the telecom C-band using InAs/InP quantum dots coupled to a nanobeam photonic crystal. The tapered nanobeam structure facilitates directional emission that is mode-matched to a lensed fiber, resulting in a collection efficiency of up to 65% from the nanobeam to a single-mode fiber. Using this approach, we demonstrate a bright single photon source with a 575 ± 5 kcps count rate. Additionally, we observe a single photon purity of 0.015 ± 0.03 and Hong-Ou-Mandel interference from emitted photons with a visibility of 0.84 ± 0.06. A high-quality-factor photonic crystal cavity is needed to further improve the brightness of the single-photon source through Purcell enhancement. However, photonic crystal cavities often suffer from low quality factors due to fabrication imperfections that create surface states and optical absorption. To address this challenge, we employed atomic layer deposition-based surface passivation of the InP photonic crystal nanobeam cavities to improve the quality factor. We demonstrated 140% higher quality factors by applying a coating of Al2O3 via atomic layer deposition to terminate dangling bonds and reduce surface absorption. Additionally, changing the deposition thickness enabled precise tuning of the cavity mode wavelength without compromising the quality factor. This Al2O3 atomic layer deposition approach holds great promise for optimizing nanobeam cavities, which are well-suited for integration with a wide range of photonic applications. Finally, we propose a hybrid Si-GaAs photonic crystal cavity design that operates at telecom wavelengths and can be fabricated without the need for careful alignment. The hybrid cavity consists of a patterned silicon waveguide that is coupled to a wider GaAs slab featuring InAs quantum dots. We show that by changing the width of the silicon cavity waveguide, we can engineer hybrid modes and control the degree of coupling to the active material in the GaAs slab. This provides the ability to tune the cavity quality factor while balancing the device’s optical gain and nonlinearity. With this design, we demonstrate cavity mode confinement in the GaAs slab without directly patterning it, enabling strong interaction with the embedded quantum dots for applications such as low-power-threshold lasing and optical bistability (156 nW and 18.1 µW, respectively). In addition to classical applications, this cavity is promising for alignment-free, large-scale integration of single photon sources in a silicon chip.
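    The role of the cavity quality factor mentioned above can be read off the textbook Purcell factor, which sets the spontaneous-emission (and hence brightness) enhancement available to an emitter resonant with a cavity of quality factor Q and mode volume V (a standard expression, quoted here only as background):

    ```latex
    F_P = \frac{3}{4\pi^{2}}\left(\frac{\lambda}{n}\right)^{3}\frac{Q}{V}
    ```

    Surface passivation that raises Q without enlarging the mode volume therefore directly increases the attainable Purcell enhancement.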
  • Item
    Advances in Concrete Cryptanalysis of Lattice Problems and Interactive Signature Schemes
    (2024) Kippen, Hunter Michael; Dachman-Soled, Dana; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Advanced cryptography that goes beyond what is currently deployed to service our basic internet infrastructure is continuing to see widespread adoption. The enhanced functionality achieved by these schemes frequently yields an increase in complexity. Solely considering the asymptotic security of the underlying computational assumptions is often insufficient to realize practical and secure instantiations. In these cases, determining the risk of any particular deployment involves analyzing the concrete security (the exact length of time it would take to break the encryption) as well as quantifying how concrete security can degrade over time due to any exploitable information leakage. In this dissertation, we examine two such cryptographic primitives where assessing concrete security is paramount. First, we consider the cryptanalysis of lattice problems (used as the basis for current standard quantum-resistant cryptosystems). We develop a novel side-channel attack on the FrodoKEM key encapsulation mechanism as submitted to the NIST Post Quantum Cryptography (PQC) standardization process. Our attack involves poisoning the FrodoKEM Key Generation (KeyGen) process using a security exploit in DRAM known as “Rowhammer”. Additionally, we revisit the security of the lattice problem known as Learning with Errors (LWE) in the presence of information leakage. We further enhance the robustness of prior methodology by viewing side information from a geometric perspective. Our approach provides the rigorous promise that, as hints are integrated, the correct solution is a (unique) lattice point contained in an ellipsoidal search space. Second, we study the concrete security of interactive signature schemes (used as part of many Privacy Enhancing Technologies). To this end, we complete a new analysis of the performance of Wagner’s k-list algorithm [CRYPTO ‘02], which has found significant utility in computing forgeries on several interactive signature schemes that implicitly rely on the hardness of the ROS problem formulated by Schnorr [ICICS ‘01].
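    For orientation, the Learning with Errors problem referenced above asks, in its search form, to recover a secret vector from noisy random linear equations (standard definition, included here only as background): given a uniformly random matrix A over Z_q and

    ```latex
    \mathbf{b} = \mathbf{A}\mathbf{s} + \mathbf{e} \pmod{q},
    ```

    where e is a short error vector, recover s. Side-channel hints of the kind integrated in this dissertation restrict where s can lie, and the geometric viewpoint described above keeps that restriction as an ellipsoidal region containing the correct lattice point.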
  • Item
    Graph-based Methods for Efficient, Interpretable and Reliable Machine Learning
    (2024) Ma, Yujunrong; Bhattacharyya, Shuvra S.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Machine learning algorithms have revolutionized fields such as computer vision, natural language processing, and speech recognition by offering the capability to analyze and extract information from vast datasets, a task far beyond human capacity. The deployment of these algorithms in high-stakes applications, including medical diagnosis, computational finance and criminal justice, underscores their growing importance. However, the decision-making processes of the so-called black-box models used in such areas raise considerable concerns. Therefore, enhancing the interpretability of these models is crucial, as it helps address issues like biases and inconsistencies in predictions, thereby making the models more comprehensible and trustworthy to end-users. Moreover, interpretability facilitates a deeper understanding of model behavior, such as the distribution of contributions across inputs. This deeper understanding can be applied to significantly improve efficiency. This is especially relevant as machine learning models find applications on edge devices, where computational resources are often limited. For such applications, significant improvements in energy efficiency and resource requirements can be obtained by optimizing and adapting model implementations based on an understanding of the models' internal behavior. However, such optimization introduces new challenges that arise due to factors such as complex, dynamically-determined dependency management among computations. This thesis presents five main contributions. The first contribution is the development of a novel type of interpretable machine learning model for applications in criminology and criminal justice (CCJ). The model involves graphical representations in the form of single decision trees, where the trees are constructed in an optimized fashion using a novel evolutionary algorithm. This approach not only enhances intrinsic interpretability but also enables users to understand the decision-making process more transparently, addressing the critical need for clarity in machine learning models' predictions. At the same time, the application of evolutionary algorithm methods enables such interpretability to be provided without significant degradation in model accuracy. In the second contribution, we develop new multi-objective evolutionary algorithm methods to find a balance between fairness and predictive accuracy in CCJ applications. We build upon the single-decision-tree framework developed in the first contribution of the thesis, and systematically integrate considerations of fairness and multi-objective optimization. In the third contribution, we develop new methods for crime forecasting applications. In particular, we develop new interpretable, attention-based methods using convolutional long short-term memory (ConvLSTM) models. These methods combine the power of ConvLSTM models in capturing spatio-temporal patterns with the interpretability of attention mechanisms. This combination of capabilities allows for the identification of key geographic areas in the input data that contribute to predictions from the model. The fourth contribution introduces a dynamic dataflow-graph-based framework to enhance the computational efficiency and run-time adaptability of inference processes, considering the constraints of available resources. 
Our proposed model maintains a high degree of analyzability while providing greater freedom than static dataflow models in being able to manipulate the computations associated with the inference process at run time. The fifth contribution of the thesis builds on insights developed in the fourth, and introduces a new parameterized design approach for image-based perception that enables efficient and dynamic reconfiguration of convolutions using channel attention. Compared to switching among sets of multiple complete neural network models, the proposed reconfiguration approach is much more streamlined in terms of resource requirements, while providing a high level of adaptability to handle unpredictable and dynamically varying operational scenarios.
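    The channel-attention-driven reconfiguration in the fifth contribution can be pictured with a squeeze-and-excitation style gate, sketched below. This is a generic illustration under assumed module names, not the thesis's actual parameterized design; channels whose learned gate falls below a runtime threshold could be skipped, which is one way such gates can drive dynamic reconfiguration.

    ```python
    import torch
    import torch.nn as nn

    class ChannelGate(nn.Module):
        """Squeeze-and-excitation style channel attention producing per-channel gates."""
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),                            # excitation: gates in (0, 1)
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            return self.fc(self.pool(x).view(b, c))      # (B, C) channel gates

    def active_channel_mask(gates, threshold=0.1):
        """Boolean mask of channels worth computing for the current input batch."""
        return gates.mean(dim=0) > threshold

    # Example: gate a 32-channel feature map and inspect which channels survive.
    feat = torch.randn(2, 32, 16, 16)
    gates = ChannelGate(32)(feat)
    mask = active_channel_mask(gates)
    print(int(mask.sum()), "of 32 channels active")
    ```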
  • Item
    Dynamic EM Ray Tracing for Complex Outdoor and Indoor Environments with Multiple Receivers
    (2024) Wang, Ruichen; Manocha, Dinesh; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Ray tracing models for visual, aural, and EM simulations have advanced, gaining traction in dynamic applications such as 5G, autonomous vehicles, and traffic systems. Dynamic ray tracing, which models EM wave paths and their interactions with moving objects, poses many challenges in complex urban areas due to environmental variability, data scarcity, and computational needs. In response to these challenges, we've developed new methods that use a dynamic coherence-based approach for ray tracing simulations across EM bands. Our approach is designed to enhance efficiency by improving the recomputation of the bounding volume hierarchy (BVH) and by caching propagation paths. With our formulation, we've observed a reduction in computation time by about 30%, all while maintaining a level of accuracy comparable to that of other simulators. Building on our dynamic approach, we've made further refinements to our algorithm to better model channel coherence, spatial consistency, and the Doppler effect. Our EM ray tracing algorithm can incrementally improve the accuracy of predictions relating to the movement and positioning of dynamic objects in the simulation. We've also integrated the Uniform Geometrical Theory of Diffraction (UTD) with our ray tracing algorithm. Our enhancement is designed to allow for more accurate simulations of diffraction around smooth surfaces, especially in complex indoor settings, where accurate prediction is important. Taking another step forward, we've combined machine learning (ML) techniques with our dynamic ray tracing framework. Leveraging a modified conditional Generative Adversarial Network (cGAN) that incorporates encoded geometry and transmitter location, we demonstrate better efficiency and accuracy of simulations in various indoor environments with a 5X speedup. Our method aims not only to improve the prediction of received power in complex layouts and reduce simulation times but also to lay the groundwork for future developments in EM simulation technologies, potentially including real-time applications in 6G networks. We evaluate the performance of our methods in various environments to highlight the advantages. In dynamic urban scenes, we demonstrate our algorithm’s scalability to vast areas and multiple receivers while maintaining accuracy and efficiency compared to prior methods; for complex geometries and indoor environments, we compare the accuracy with analytical solutions as well as existing EM ray tracing systems.
  • Item
    Nonlinear and Stochastic Dynamics of Optoelectronic Oscillators
    (2024) Ha, Meenwook; Chembo, Yanne K.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Optoelectronic oscillators (OEOs) are nonlinear, time-delayed and self-sustained microwave photonic systems capable of generating ultrapure radiofrequency (RF) signals with extensive frequency tunability. Their hybrid architectures, comprising both optical and electronic paths, underscore their merits. One of the most notable features of OEOs is their unprecedentedly high quality factor, achieved by storing optical energy for RF signal generation. Thanks to their low phase noise and broad frequency tunability, OEOs have found diverse applications including chaos cryptography, reservoir computing, radar communications, parametric oscillation, clock recovery, and frequency comb generation. This thesis pursues two primary objectives. Firstly, we delve into the nonlinear dynamics of various OEO configurations, elucidating their universal behaviors by deriving corresponding envelope equations. Secondly, we present a stochastic equation delineating the dynamics of phases and explore the intricacies of the phase dynamics. The outputs of OEOs are defined at their RF ports, with our primary focus directed towards understanding the dynamics of these RF signals. Regardless of their structural complexities, we employ a consistent framework to explore these dynamics, relying on the same underlying principles that determine the oscillation frequencies of OEOs. To understand the behavior of OEOs, we analyze the dynamics of a variety of OEO configurations. For simpler systems, we can utilize the dynamic equations of bandpass filters, whereas more complex physics are required for expressing microwave photonic filtering. Utilizing an envelope approach, which characterizes the dynamics of OEOs in terms of complex envelopes of their RF signals, has proven to be an effective method for studying them. Consequently, we derive envelope equations of these systems and study their nonlinear behaviors through bifurcation analysis, stability evaluation, and numerical simulation. Comparing the envelope equations of different models reveals similarities in their dynamic equations, suggesting that their dynamics can be governed by a generalized universal form. Thus, we introduce the universal equation, which we refer to as the universal microwave envelope equation, and conduct analytical investigations to further understand its implications. While the deterministic universal equation offers a comprehensive tool for simultaneous exploration of various OEO dynamics, it falls short in describing the stochastic phase dynamics. Our secondary focus lies in investigating phase dynamics through the implementation of a stochastic approach, enabling us to understand and optimize phase noise performance effectively. We transform the deterministic universal envelope equation into a stochastic delay differential form, effectively describing the phase dynamics. In our analysis of the oscillators, we categorize noise sources into two types: additive noise contribution, due to random environmental and internal fluctuations, and multiplicative noise contribution, arising from noisy loop gains. The additive noise is present regardless of whether the system oscillates, while the multiplicative noise is intertwined with the noisy loop gains, nonlinearly mixing with signals above the threshold. Therefore, we investigate both sub- and above-threshold regimes separately, where the multiplicative noise can be characterized as white noise and colored noise, respectively.
For the above-threshold regime, we present the stochastic phase equation and derive an equation for describing phase noise spectra. We conduct thorough investigations into this equation and validate our approaches through experimental verification. In the sub-threshold regime, we introduce frameworks to experimentally quantify the noise contributions discussed in the above-threshold part. Since no signal is present here and the oscillator is solely driven by the stochastic noise, it becomes feasible to reverse-engineer the noise powers using a Fourier transform formalism. Here, we introduce a stochastic expression written in terms of the real-valued RF signals, not the envelopes, and the transformation facilitates the expressions of additive and multiplicative noise contributions as functions of noisy RF output powers. The additive noise can be defined by deactivating the laser source or operating the intensity modulator at the minimum transmission point, given its independence from the loop gains. Conversely, the expression for the multiplicative noise indicates a dependence on the gain; however, experimental observations suggest that its magnitude may remain relatively constant beyond the threshold.
  • Item
    Quantum and Stochastic Dynamics of Kerr Microcombs
    (2024) Liu, Fengyu; Chembo, Yanne K.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Kerr microcombs are sets of discrete, equidistant spectral lines and are typically generated by pumping a high-quality factor optical resonator with a continuous- or pulse-wave resonant laser. They have emerged as one of the most important current research topics in photonics, with applications in spectroscopy, sensing, aerospace, and communication engineering. A key characteristic of these microcombs is the threshold pump power. Below the threshold, two pump photons are symmetrically up- and down-converted as twin photons via spontaneous four-wave mixing, and they can be entangled across up to a hundred eigenmodes. These chip-scale, high-dimensional, room-temperature systems are expected to play a major role in quantum engineering. Above the threshold, the four-wave mixing process is stimulated, ultimately leading to the formation of various types of patterns in the spatio-temporal domain, which can be extended (such as roll patterns) or localized (bright or dark solitons). The semiclassical dynamics of Kerr microcombs have been studied extensively in the last ten years and the deterministic characteristics are well understood. However, the quantum dynamics of the twin-photon generation process, and the stochastic dynamics induced by noise-driven fluctuations, are still not well understood. In the first part of our investigation, we introduce the theoretical framework to study the semiclassical dynamics of the Kerr microcombs based on the slowly varying envelope of the intracavity electrical fields. Two equivalent models -- the coupled-mode model and the Lugiato-Lefever model -- are used to analyze the spectro- and spatio-temporal dynamics, respectively. These models can determine the impact of key parameters on the Kerr microcomb generation process, such as detuning, losses, and pump power, as well as critical values of the system, such as threshold power. Various types of patterns and combs can be observed through simulations that follow experimental parameters. Furthermore, we show an eigenvalue analysis method to determine the stability of the microcomb, and this method is applied to an unstable microcomb solution to understand the generation of subcombs surrounding the primary comb. In the second and third parts, we investigate a stochastic model where noise is added to the coupled-mode equations governing the microcomb dynamics to monitor the influence of random noise on the comb dynamics. We find that the model with additive Gaussian white noise allows us to characterize the noise-induced broadening of spectral lines and permits us to determine the phase noise spectra of the microwaves generated via comb photodetection. Our analysis indicates that the low-frequency part of the phase noise spectra is dominated by pattern drift while the high-frequency part is dominated by pattern deformation. The dynamics of the Kerr microcomb with multiplicative noise, including thermal and photothermal fluctuations, are also investigated at the end of this part. We propose that the dynamics of the noise can be included in the simulation of stochastic dynamics equations, introduce the methods to solve the dynamics of the noise, and study a quiet point method for phase noise reduction. In the fourth part, we use canonical quantization to obtain the quantum dynamics for Kerr microcombs generated by spontaneous four-wave mixing below the threshold and study them using frequency-bin quantum states.
We introduce a method to find the quantum expansion of the output state and explore the properties of the eigenkets. A theoretical framework is also developed to obtain explicit solutions for the density operators of quantum microcombs, which allows their complete characterization as well as the analytical determination of various performance metrics such as fidelity, purity, and entropy. Finally, we describe a quantum Kerr microcomb generator driven by a pulse-wave laser and propose time-bin entangled states generated with it.
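    For orientation, the Lugiato-Lefever model referenced above is commonly written in a normalized form such as the following (normalization and sign conventions vary between references, so the dissertation's own definitions may differ):

        \frac{\partial \psi}{\partial \tau} = -(1 + i\alpha)\psi + i|\psi|^2\psi - i\frac{\beta}{2}\frac{\partial^2 \psi}{\partial \theta^2} + F,

    where \psi(\theta, \tau) is the slowly varying intracavity field envelope, \theta the azimuthal coordinate along the resonator, \alpha the normalized detuning, \beta the normalized dispersion, and F the normalized pump field. The coupled-mode picture follows by expanding \psi over the modal basis e^{i l \theta}. In this normalization, the comb-generation (modulational-instability) threshold in the anomalous-dispersion regime corresponds to a normalized intracavity intensity |\psi|^2 of order unity, which sets the threshold pump power discussed above.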
  • Item
    Dynamics and applications of long-distance laser filamentation in air
    (2024) Goffin, Andrew; Milchberg, Howard; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Femtosecond laser pulses with sufficient power will form long, narrow, high-intensity light channels in a propagation medium. These structures, called “filaments”, form through nonlinear self-focusing collapse, a runaway process that is arrested by a mechanism that limits the peak intensity. For near-infrared pulses in air, the arrest mechanism is photoionization of air molecules and the resulting plasma-induced defocusing. The interplay between plasma-induced defocusing and nonlinear self-focusing enables high-intensity filament propagation over long distances in air, much longer than the Rayleigh range (~4 cm) corresponding to the ~200 µm diameter filament core. In this thesis, the physics of atmospheric filaments is studied in detail along with several applications. The main topics are: (1) Using experiments and simulations, we studied the pulse-duration dependence of filament length and energy deposition in the atmosphere, revealing characteristic axial oscillations intimately connected to the delayed rotational response of air molecules. This measurement used a microphone array to record long segments of the filament propagation path in a single shot. These results have immediate application to the efficient generation of long air waveguides. (2) We investigated the long-advertised ability of filaments to clear fog by measuring the dynamics of single water droplets at controlled locations near a filament. We found that, despite claims in the literature that droplets are cleared by filament-induced acoustic waves, they are primarily cleared through optical shattering. (3) We demonstrated optical guiding in the longest filament-induced air waveguides to date (~50 m, a length increase of ~60×), using multi-filamentation of Laguerre-Gaussian LG01 modes with pulse durations informed by the results of (1). (4) We demonstrated the first continuously operating air waveguide, using a high-repetition-rate laser to replenish the waveguide faster than it could thermally dissipate. For each of the air waveguide experiments, extension to much longer ranges and to steady-state operation is discussed.
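    As a quick consistency check of the quoted ~4 cm Rayleigh range (assuming, for illustration only, a Ti:sapphire-like wavelength of \lambda \approx 800 nm, which is typical for near-infrared filamentation but not stated explicitly above, and a beam waist w_0 \approx 100 µm, i.e. half the ~200 µm core diameter):

        z_R = \frac{\pi w_0^2}{\lambda} \approx \frac{\pi\,(100\ \mu\mathrm{m})^2}{0.8\ \mu\mathrm{m}} \approx 3.9\ \mathrm{cm},

    consistent with the statement that filament propagation extends to distances far beyond what linear diffraction of such a narrow beam would allow.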
  • Item
    NOVEL QUASI-FREESTANDING EPITAXIAL GRAPHENE ELECTRON SOURCE HETEROSTRUCTURES FOR X-RAY GENERATION
    (2024) Lewis, Daniel; Daniels, Kevin M; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Graphene, the 2D allotrope of carbon, boasts numerous exceptional qualities, including strength, flexibility, and conductivity unmatched at its scale; among its lesser-known capabilities is electron emission at temperatures and electric fields too low for conventional thermionic or field emission sources to function. Driven by the mechanism of Phonon-Assisted Electron Emission (PAEE), planar microstructures fabricated from quasi-freestanding epitaxial graphene (QEG) on silicon carbide have exhibited emission currents of up to 8.5 μA at temperatures and applied fields as low as 200 °C and 1 kV/cm, orders of magnitude below conventional electron source requirements. These emission properties can be influenced through variations in microstructure design and morphology, and performance is controllable via device temperature and applied field in the same manner as thermionic or field emission sources. As 2D planar devices, graphene microstructure electron emitters can also be encapsulated with a thermally evaporated oxide, granting electrical isolation and environmental resistance, and can even exhibit emission current enhancement under these conditions. Graphene electron emitters realized as heterostructure material stacks could serve as electron emission sources in environments or devices where conventional thermionic or field emission sources cannot be supported due to thermal, power-system, or physical size limitations, the presence of contaminants, or even poor vacuum containment. One explorable application would pair an oxide-encapsulated graphene electron source with a layered interaction-emission anode to create a micron-scale, vertically aligned x-ray source with no need for vacuum containment. We investigate these properties using hydrogen-intercalated quasi-freestanding bilayer epitaxial graphene, a rare and difficult-to-manufacture formulation that allows the graphene to behave as if it were a freestanding structure while still benefiting from the macro-scale mechanical strength and fabrication-process compatibility afforded by its silicon carbide substrate. The quasi-freestanding nature of the graphene limits substrate phonon interactions, allowing graphene phonon-electron interactions to dominate and, in turn, enabling the PAEE mechanism. Our devices benefit from an ease of fabrication unattainable for processes not employing QEG, with the speed and simplicity of fabrication being a hallmark of our investigations. We begin by exploring how the PAEE mechanism itself can be influenced in our designs and how process and fabrication optimizations can be leveraged for device applications. Graphene’s role in the fields of microelectronics, condensed matter physics, and materials science is still novel and rapidly expanding, and our investigations explore a unique facet of this wonder material’s capabilities.
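    To put the claim of operation orders of magnitude below conventional electron source requirements in rough quantitative terms, a Richardson-Dushman estimate (assuming, purely for illustration, a tungsten-like work function W \approx 4.5 eV; the emitter physics of PAEE is of course different) gives the thermionic current density at 200 °C (T = 473 K):

        J = A\,T^2\,e^{-W/k_B T} \approx 120\ \mathrm{A\,cm^{-2}\,K^{-2}} \times (473\ \mathrm{K})^2 \times e^{-110} \sim 10^{-41}\ \mathrm{A\,cm^{-2}},

    i.e. effectively zero. Conventional thermionic cathodes therefore run well above ~1500 K, which is why µA-level emission at 200 °C and 1 kV/cm points to a distinct mechanism such as PAEE.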
  • Item
    Image Reconstruction for Hyperpolarized Carbon-13 Metabolic Magnetic Resonance Imaging with Iterative Methods
    (2024) Zhu, Minjie; Babadi, Behtash; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Magnetic resonance imaging (MRI) with hyperpolarized carbon-13 (13C) agents is an emerging in vivo medical imaging technique. 13C MRI yields a series of images that show the evolution of the injected substrate and its metabolic products in the imaging volume, enabling applications such as monitoring tumor progression and post-treatment response in both animal models and clinical trials. This dissertation focuses on novel iterative image reconstruction methods for 13C MRI that aim to improve image quality and temporal resolution. One challenge for the existing 13C MRI reconstruction method is the difficulty of quantifying lower-intensity metabolites due to noise and overlapping peaks in the aliased spectrum. In the first part of the dissertation, a model-based iterative reconstruction method is proposed to overcome this difficulty. The proposed method exploits prior knowledge of the properties of the metabolites in the imaging volume, including off-resonance frequencies, T2* decay constants, and the image acquisition trajectory in the spatial and frequency domains. Metabolic images are reconstructed by solving the linear system relating the acquired signal to the images via least-squares estimation. Reconstruction results on in vivo imaging data sets demonstrate that the proposed method can separate two overlapping peaks in an aliased spectrum where the conventional method fails. Another challenge for 13C MRI is reconstructing metabolic images from under-sampled acquisitions. Due to the short lifetime of the injected substrate and the physical limitations of the MRI scanner, only a few temporal frames can be acquired for 13C MRI with one injection. Under-sampling in the image acquisition can provide more frames, but dedicated reconstruction methods are required to remove the artifacts that arise from direct reconstruction of the under-sampled data. In the second part of the dissertation, a customized low-rank plus sparse (L+S) reconstruction method is proposed to produce artifact-free images from under-sampled data. Digital phantom simulations are performed to determine the optimal reconstruction parameters. Simulations with the digital phantom and in vivo mouse imaging on 2D and 3D dynamic data demonstrate that the proposed method achieves acceleration without introducing image artifacts. In the third part of the dissertation, we present a preclinical application of 13C MRI to study brain metabolism and to identify the source of metabolic products based on the derived metabolic images. In vivo metabolic imaging with different flow-suppression levels was performed in the brain region of rats. Results show that the metabolic product lactate has no significant dependence on the level of suppression, while the substrate pyruvate depends on it strongly. This supports our hypothesis that the lactate seen in metabolic images is generated in the brain. Additional high-resolution metabolic imaging was performed to clearly show the distinct signal distributions of pyruvate and lactate. Our proposed L+S reconstruction method was applied to the dynamic image data to reduce background noise. The resulting dynamic images show distinct dynamics for pyruvate and lactate, further supporting our hypothesis.
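    As background for the low-rank plus sparse (L+S) approach described above, below is a minimal sketch of a generic L+S dynamic-MRI reconstruction in Python/NumPy, following the standard alternating singular-value/soft-thresholding scheme from the dynamic MRI literature. The encoding operator (a masked 1D FFT), the function and parameter names, and the regularization values are illustrative assumptions and are not taken from the dissertation's customized method.

        import numpy as np

        def svt(X, tau):
            # Singular value thresholding: proximal operator of the nuclear norm.
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        def soft(X, tau):
            # Complex soft thresholding: proximal operator of the l1 norm.
            return np.exp(1j * np.angle(X)) * np.maximum(np.abs(X) - tau, 0.0)

        def lps_reconstruct(d, mask, lam_L=0.01, lam_S=0.001, n_iter=50):
            # Toy L+S reconstruction of a dynamic series stored as a
            # (n_pixels, n_frames) matrix. 'd' is under-sampled k-t data and
            # 'mask' the boolean sampling pattern; the masked 1D FFT encoding
            # operator below is a stand-in for a real k-space trajectory.
            E = lambda x: mask * np.fft.fft(x, axis=0, norm="ortho")
            Eh = lambda y: np.fft.ifft(mask * y, axis=0, norm="ortho")

            M = Eh(d)                      # zero-filled initial estimate
            S = np.zeros_like(M)
            for _ in range(n_iter):
                L = svt(M - S, lam_L)      # low-rank (background) component
                # Sparse (dynamic) component, sparsified along time via FFT.
                S = np.fft.ifft(soft(np.fft.fft(M - L, axis=1), lam_S), axis=1)
                # Gradient step enforcing consistency with the acquired data.
                M = L + S - Eh(E(L + S) - d)
            return L, S

    In practice the regularization weights lam_L and lam_S would need tuning, for example through the kind of digital-phantom simulations described above.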