Theses and Dissertations from UMD
Permanent URI for this community: http://hdl.handle.net/1903/2
New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a 4-month delay in the appearance of a given thesis/dissertation in DRUM.
More information is available at Theses and Dissertations at University of Maryland Libraries.
26 results
Search Results
Item
Advances in Concrete Cryptanalysis of Lattice Problems and Interactive Signature Schemes (2024)
Kippen, Hunter Michael; Dachman-Soled, Dana; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Advanced cryptography that goes beyond what is currently deployed to service our basic internet infrastructure is continuing to see widespread adoption. The enhanced functionality achieved by these schemes frequently yields an increase in complexity. Solely considering the asymptotic security of the underlying computational assumptions is often insufficient to realize practical and secure instantiations. In these cases, determining the risk of any particular deployment involves analyzing the concrete security (the exact length of time it would take to break the encryption) as well as quantifying how concrete security can degrade over time due to any exploitable information leakage. In this dissertation, we examine two such cryptographic primitives where assessing concrete security is paramount. First, we consider the cryptanalysis of lattice problems (used as the basis for current standard quantum-resistant cryptosystems). We develop a novel side-channel attack on the FrodoKEM key encapsulation mechanism as submitted to the NIST Post Quantum Cryptography (PQC) standardization process. Our attack involves poisoning the FrodoKEM Key Generation (KeyGen) process using a security exploit in DRAM known as “Rowhammer”. Additionally, we revisit the security of the lattice problem known as Learning with Errors (LWE) in the presence of information leakage. We further enhance the robustness of prior methodology by viewing side information from a geometric perspective. Our approach provides the rigorous promise that, as hints are integrated, the correct solution is a (unique) lattice point contained in an ellipsoidal search space. Second, we study the concrete security of interactive signature schemes (used as part of many Privacy Enhancing Technologies). To this end, we complete a new analysis of the performance of Wagner’s k-list algorithm [CRYPTO ‘02], which has found significant utility in computing forgeries on several interactive signature schemes that implicitly rely on the hardness of the ROS problem formulated by Schnorr [ICICS ‘01].
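To picture the ellipsoidal search space described above, a perfect hint of the form ⟨v, s⟩ = l about a secret s can be folded into a Gaussian belief about that secret by standard conditioning, which recenters the mean and flattens the covariance along v. The numpy sketch below shows only that generic update; it is an illustration of the underlying geometry under our own naming, not code or notation from the dissertation.

    import numpy as np

    def integrate_perfect_hint(mu, Sigma, v, l):
        # Condition a Gaussian belief N(mu, Sigma) about the secret s on <v, s> = l.
        Sv = Sigma @ v
        denom = float(v @ Sv)                         # v^T Sigma v, assumed nonzero
        mu_new = mu + ((l - v @ mu) / denom) * Sv     # recenter onto the hint hyperplane
        Sigma_new = Sigma - np.outer(Sv, Sv) / denom  # collapse uncertainty along v
        return mu_new, Sigma_new

    # Toy 3-dimensional example: one exact hint shrinks the search ellipsoid by one dimension.
    mu, Sigma = np.zeros(3), 4.0 * np.eye(3)
    v = np.array([1.0, 1.0, 0.0])
    mu2, Sigma2 = integrate_perfect_hint(mu, Sigma, v, l=2.0)
    assert abs(v @ mu2 - 2.0) < 1e-9                  # the new center satisfies the hint exactly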
Item
Language-Based Techniques for Secure Programming (2022)
Sweet, Ian Nicholas; Hicks, Michael; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Secure Computation (SC) encompasses many different cryptographic techniques for computing over encrypted data. In particular, Secure Multiparty Computation (MPC) enables multiple parties to jointly compute a function over their secret inputs. MPC languages offer programmers a familiar environment in which to express their programs, but fall short when confronted with problems that require flexible coordination. More broadly, SC languages do not protect non-expert programmers from violating obliviousness or expected bounds on information leakage. We aim to show that secure programming can be made safer through language-based techniques for expressive, coordinated MPC; probabilistically oblivious execution; and quantitative analysis of information flow. We begin by presenting Symphony, an expressive MPC language that provides flexible coordination of many parties, which has been used to implement the secure shuffle of Laur, Willemson, and Zhang. Next, we present λObliv, a core language guaranteeing that well-typed programs are probabilistically oblivious, which has been used to type check tree-based, nonrecursive ORAM (NORAM). Finally, we present a novel application of dynamic analysis techniques to an existing system for enforcing bounds on information leakage, providing a better balance of precision and performance.
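A rough way to see what "oblivious" means for a program (our own toy contrast, not λObliv's syntax, type system, or its probabilistic treatment of randomness): a branch on a secret leaks that secret through the trace of executed operations, while a select that performs the same arithmetic for either value of the secret does not.

    def leaky_select(secret_bit, a, b):
        # Control flow depends on the secret; observing which path runs reveals secret_bit.
        if secret_bit:
            return a
        return b

    def oblivious_select(secret_bit, a, b):
        # Same operations execute regardless of the secret (assumes secret_bit is 0 or 1).
        return secret_bit * a + (1 - secret_bit) * b

    assert oblivious_select(1, 10, 20) == 10
    assert oblivious_select(0, 10, 20) == 20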
Item
LOW-POWER AND SECURE IMPLEMENTATION OF NEURAL NETWORKS BY APPROXIMATION (2022)
Xu, Qian; Qu, Gang; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Neural networks (NN), one type of machine learning (ML) algorithm, have emerged as a powerful paradigm for sophisticated applications such as pattern recognition and natural language processing. In this dissertation, we study how to apply the principle of approximate computing to solve two challenging problems in neural networks, namely energy efficiency and security. More specifically, we investigate approximation at three levels in the implementation of NN models: computation units, data storage, and the NN model itself. Computation units, such as adders and multipliers, have been one of the main targets for power-efficient implementation of neural networks, and many low-power approximate adders and multipliers have been proposed. NNs also require complex operations such as logarithms; despite their heavy usage and high energy consumption, these operations have not been optimized for energy efficiency. Our first contribution is a truncation-based approximation method that can balance the computation precision and energy consumption of the popular floating-point logarithmic operation. We derive a formula for the most energy-efficient implementation of the logarithm unit given an error variance range. Based on this theoretical result, we propose BWOLF (Bit-Width Optimization for Logarithmic Function) to determine the bit-width of operands in the logarithm computation in order to minimize the energy consumption in delivering the required computation precision. We evaluate the efficacy of BWOLF in terms of energy savings on two widely used applications: Kullback-Leibler Divergence and Bayesian Neural Network. The experimental results validate the correctness of our analysis and show significant energy savings, from 27.18% to 95.92%, over full-precision computation and a baseline approximation method based on uniform truncation. Storage approximation by reducing the supply voltage of dynamic random access memory (DRAM) is effective in saving power for neural networks. However, it introduces physical errors in DRAM and can impact the performance of NN models. In the second part of this dissertation, we explore the potential of storage approximation in improving an NN system's security with respect to training data privacy. More specifically, we consider Model Inversion Attacks (MIAs), which extrapolate the training data from model parameters. Our proposed solution, MIDAS (Model Inversion Defenses with an Approximate memory System), intentionally introduces memory faults by overscaling voltage to thwart MIAs without compromising the original ML model. We use detailed SPICE simulations to build the DRAM fault model and evaluate MIDAS against state-of-the-art MIAs. Experiments demonstrate that MIDAS can effectively protect training data from run-time MIAs. In terms of the Pearson Correlation Coefficient (PCC) similarity between the original training data and the recovered version, MIDAS reduces the PCC value by 55% and 40% for shallow and deep neural networks, respectively, under a 1.5% accuracy relaxation. Although MIDAS shows promising security benefits through storage approximation, such approximation modifies the neural network parameters and may reduce the NN model's accuracy. In the third part of this dissertation, we propose model approximation, which aims at generating an approximate NN model to correct the errors during training and consequently reduce the possible degradation of the NN's classification results. We demonstrate this concept on gradient inversion attacks, which utilize the gradients transmitted between nodes in a federated learning system to reconstruct the training data. To counter them, we propose DAGIA, a Data Augmentation defense against Gradient Inversion Attacks, which deliberately extends the training dataset and reports the corresponding gradient updates to protect the original data. For multiple data augmentation techniques, we empirically evaluate the trade-off between test accuracy and information leakage to select the best technique for DAGIA. According to the Structural Similarity (SSIM) between reconstructed training data and the original CIFAR-10 dataset, the experimental results show that DAGIA can reduce the SSIM by 54% with a slightly increased test accuracy for the ConvNet model. In summary, this dissertation focuses on the role of approximation in energy efficiency and security during the implementation of neural networks. We show that computation units for complex operators can be approximated to reduce energy, the storage for neural network weights can be approximated to improve both energy efficiency and security (against information leakage), and the NN model itself can be approximated during training for security enhancement. This dissertation demonstrates that approximation is a promising method to improve the performance of neural networks. It opens the door to applying the principle of approximate computing to the implementation and optimization of neural networks, where there are abundant opportunities for approximation.
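To get a feel for the precision/energy trade-off that bit-width selection navigates, one can emulate a narrower logarithm unit in software by truncating low-order mantissa bits of the input before taking the log and watching the error grow. This is a generic emulation under our own naming, not the BWOLF formulation or its hardware model.

    import math
    import struct

    def truncate_mantissa(x, keep_bits):
        # Zero the low (52 - keep_bits) mantissa bits of an IEEE-754 double.
        bits = struct.unpack('<Q', struct.pack('<d', x))[0]
        bits &= ~((1 << (52 - keep_bits)) - 1)
        return struct.unpack('<d', struct.pack('<Q', bits))[0]

    x = 123.456
    exact = math.log(x)
    for keep in (52, 24, 12, 6):
        err = abs(math.log(truncate_mantissa(x, keep)) - exact)
        print(f"mantissa bits kept: {keep:2d}  |log error| = {err:.2e}")
    # Fewer operand bits roughly means cheaper arithmetic but a larger logarithm error.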
Item
A Human-Centric Approach to Software Vulnerability Discovery (2020)
Votipka, Daniel Jared; Mazurek, Michelle L.; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Software security bugs, referred to as vulnerabilities, persist as an important and costly challenge. Significant effort has been exerted toward automatic vulnerability discovery, but human intelligence generally remains required and will remain necessary for the foreseeable future. Therefore, many companies have turned to internal and external (e.g., penetration testing, bug bounties) security experts to manually analyze their code for vulnerabilities. Unfortunately, there are a limited number of qualified experts. Thus, to improve software security, we must understand how experts search for vulnerabilities and how their processes could be made more efficient, by improving tool usability and targeting the most common vulnerabilities. Additionally, we seek to understand how to improve training to increase the number of experts. To answer these questions, I begin with an in-depth qualitative analysis of secure development competition submissions to identify common vulnerabilities developers introduce. I found that developers struggle to understand and implement complex security concepts, not recognizing how nuanced development decisions could lead to vulnerabilities. Next, using a cognitive task analysis to investigate experts' and non-experts' vulnerability discovery processes, I observed that they use the same process, but differ in the variety of security experiences which inform their searches. Together, these results suggest that exposure to an in-depth understanding of potential vulnerabilities is essential for vulnerability discovery. As a first step to leverage both experts and non-experts, I pursued two lines of work: education to support experience development, and improvements to interaction with vulnerability discovery automation. To improve vulnerability discovery tool interaction, I conducted observational interviews of experts' reverse engineering process, an essential and time-consuming component of vulnerability discovery. From this, I provide guidelines for more usable interaction design. For security education, I began with a pedagogical review of security exercises to identify their current strengths and weaknesses. I also developed a psychometric measure of secure software development self-efficacy to support comparisons between educational interventions.
Item
DECOUPLING CONSISTENCY DETERMINATION AND TRUST FROM THE UNDERLYING DISTRIBUTED DATA STORES (2018)
Lekakis, Vasileios; Keleher, Peter J.; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Building applications on cloud services is cost-effective and allows for rapid development and release cycles. However, relying on cloud services can severely limit applications' ability to control their own consistency policies, and their ability to control data visibility during replication. To understand the tension between strong consistency and security guarantees on one hand, and high availability, flexible replication, and performance on the other, it helps to consider two questions. First, is it possible for an application to achieve stricter consistency guarantees than what the cloud provider offers? If we rely solely on the provider's service interface, the answer is no. However, if we allow applications to determine the implementation and execution of the consistency protocols, then we can achieve much more. The second question is, can an application relay updates over untrusted replicas without revealing sensitive information, while maintaining the desired consistency guarantees? Simply encrypting the data is not enough. Encryption does not eliminate the information leakage that comes from the metadata needed for the execution of any consistency protocol. The alternative to encryption, allowing the flow of updates only through trusted replicas, leads to predefined communication patterns. This approach is prone to failures that can cause partitioning in the system. One way to answer "yes" to this question is to allow trust relationships, defined at the application level, to guide the synchronization protocol. My goal in this thesis is to build systems that take advantage of the performance, scalability, and availability of cloud storage services while, at the same time, bypassing the limitations imposed by cloud service providers' design choices. The key to achieving this is pushing application-specific decisions where they belong: the application. I defend the following thesis statement: By decoupling consistency determination and trust from the underlying distributed data store, it is possible to (1) support application-specific consistency guarantees and (2) allow for topology-independent replication protocols that do not compromise application privacy. First, I design and implement Shell, a system architecture for supporting strict consistency guarantees over eventually consistent data stores. Shell is a software layer designed to isolate consistency implementations and cloud-provider APIs from the application code. Shell consists of four internal modules and an application store, which together abstract consistency-related operations and encapsulate communication with the underlying storage layers. Apart from consistency protocols tailored to application needs, Shell provides application-aware conflict resolution without relying on generic heuristics such as “last write wins.” Shell does not require the application to maintain dependency-tracking information for the execution of the consistency protocols, as other existing approaches do. I experimentally evaluate Shell over two different data stores using real-application traces. I found that using Shell can reduce inconsistent updates by 10%. I also measure and show the overheads that come from introducing the Shell layer. Second, I design and implement T.Rex, a system for supporting topology-independent replication without the assumption of trust between all the participating replicas. T.Rex uses role-based access control to enable flexible and secure sharing among users with widely varying collaboration types: both users and data items are assigned roles, and a user can access a data item only if the two share at least one role. Building on top of this abstraction, T.Rex includes several novel mechanisms: I introduce role proofs to prove role membership to others in the role without leaking information to those not in the role. Additionally, I introduce role coherence to prevent updates from leaking across roles. Finally, I use Bloom filters as opaque digests to enable querying of remote cache state without being able to enumerate it. I combine these mechanisms to develop a novel, cryptographically secure, and efficient anti-entropy protocol, T.Rex-Sync. I evaluate T.Rex on a local test-bed, and I show that it achieves security with modest computational and storage overheads.
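The "opaque digest" role of a Bloom filter can be seen in a few lines: a replica can ask whether a peer's cache might already hold an item, but the digest itself cannot be enumerated back into the set of cached items (false positives are possible, false negatives are not). This is a generic Bloom filter sketch under hypothetical names, not T.Rex's data structures or its anti-entropy protocol.

    import hashlib

    class BloomDigest:
        # Minimal Bloom filter used as a cache digest.
        def __init__(self, m_bits=1024, k_hashes=4):
            self.m, self.k = m_bits, k_hashes
            self.bits = 0

        def _positions(self, item):
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h, "big") % self.m

        def add(self, item):
            for p in self._positions(item):
                self.bits |= 1 << p

        def might_contain(self, item):
            return all(self.bits & (1 << p) for p in self._positions(item))

    digest = BloomDigest()
    digest.add("update:42")
    assert digest.might_contain("update:42")
    # A peer holding only digest.bits cannot list which updates were added.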
Item
Machine Learning of Facial Attributes Using Explainable, Secure and Generative Adversarial Networks (2018)
Samangouei, Pouya; Chellappa, Rama; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
"Attributes" are abstractions that humans use to group entities and phenomena that have a common characteristic. In machine learning (ML), attributes are fundamental because they bridge the semantic gap between humans and ML systems. Thus, researchers have been using this concept to transform complicated ML systems into interactive ones. However, training the attribute detectors that are central to attribute-based ML systems can still be challenging. It might be infeasible to gather attribute labels for rare combinations to cover all the corner cases, which can result in weak detectors. Also, it is not clear how to fill in the semantic gap with attribute detectors themselves. Finally, it is not obvious how to interpret the detectors' outputs in the presence of adversarial noise. First, we investigate the effectiveness of attributes for bridging the semantic gap in complicated ML systems. We turn a system that does continuous authentication of human faces on mobile phones into an interactive attribute-based one. We employ deep multi-task learning in conjunction with multi-view classification using facial parts to tackle this problem. We show how the proposed system decomposition enables efficient deployment of deep networks for authentication on mobile phones with limited resources. Next, we seek to improve the attribute detectors by using conditional image synthesis. We take a generative modeling approach to manipulating the semantics of a given image to provide novel examples. Previous works condition the generation process on binary attribute existence values. We take this type of approach one step further by modeling each attribute as a distributed representation in a vector space. These representations allow us not only to toggle the presence of attributes but to transfer an attribute style from one image to another. Furthermore, we show diverse image generation from the same set of conditions, which was not possible using existing methods with a single dimension per attribute. We then investigate filling in the semantic gap between humans and attribute classifiers by proposing a new way to explain pre-trained attribute detectors. We use adversarial training in conjunction with an encoder-decoder model to learn the behavior of binary attribute classifiers. We show that after our proposed model is trained, one can see which areas of the image contribute to the presence or absence of the target attribute, and also how to change image pixels in those areas so that the attribute classifier's decision changes in a way consistent with human perception. Finally, we focus on protecting the attribute models from uninterpretable behaviors provoked by adversarial perturbations. These behaviors create an inexplicable semantic gap since they are visually unnoticeable. We propose a method based on generative adversarial networks to alleviate this issue. We learn the training data distribution that is used to train the core classifier and use it to detect and denoise test samples. We show that the method is effective for defending facial attribute detectors.
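One way to picture a distributed attribute representation, as opposed to a single on/off bit, is as a direction in a latent vector space: moving along the direction dials the attribute's strength, and copying a reference code's component along it transfers the attribute's style. The sketch below is our own toy linear-algebra illustration with made-up names; the dissertation's generative models are far richer than this.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 64
    z_source = rng.normal(size=dim)       # latent code of the image being edited (stand-in)
    z_reference = rng.normal(size=dim)    # latent code whose attribute style we want (stand-in)
    attr_dir = rng.normal(size=dim)       # hypothetical direction for one attribute
    attr_dir /= np.linalg.norm(attr_dir)

    def set_attribute(z, direction, strength):
        # Dial the attribute up or down by moving along its direction.
        return z + strength * direction

    def transfer_attribute(z, z_ref, direction):
        # Replace z's component along the direction with the reference's component.
        return z + ((z_ref - z) @ direction) * direction

    z_more = set_attribute(z_source, attr_dir, strength=2.0)
    z_styled = transfer_attribute(z_source, z_reference, attr_dir)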
Item
REACTIONS REGARDING ONLINE DISCRIMINATION AND AD PROFILES (2017)
Plane, Angelisa Carol; Mazurek, Michelle L.; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The accessibility and amount of information obtained by online companies have grown over the past decade. This growth has given companies the ability to target a desired population with certain content, products, and other services. The two studies conducted for this thesis examine different aspects of online targeting and users' reactions, using advertising as the primary tool. One of the main goals of the studies was to develop policy recommendations and guide policymakers toward making ethical decisions. To do this effectively, we first need to know what users understand, what they care about, and why it concerns them. Keeping that in mind, the first study consisted of three surveys that examined scenarios involving discriminatory ads. For each scenario, we asked participants how problematic they perceived it to be and whether they considered the behavior ethical. For the second study, we conducted interviews in which participants looked at the profiles that Google and Facebook have created about them based on their online activity. We asked questions regarding their comfort level, their understanding of why certain interests might be shown to them, and their general understanding of how the profiling works. These two studies were analyzed independently of each other, but the results and possible implications of each were combined to make recommendations to businesses and policymakers. From the first study, we found that 43% of participants were moderately or very concerned by the scenarios, even when discrimination took place as a result of online behavioral targeting. From the second study, several themes emerged from the interviews, including that inaccurate inferences made participants feel more uncomfortable than accurate ones; that sentiment was expressed by 64% of the participants.
Item
Security and Energy Efficiency in Resource-Constrained Wireless Multi-hop Networks (2016)
Paraskevas, Evripidis; Baras, John S.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
In recent decades, there has been significant progress in and interest from the research community in wireless multi-hop networks. Such networks are widely used in civil, commercial, and military applications. Paradigms of this type of network that are critical to many aspects of human life include mobile ad-hoc networks; sensor networks, which are used for monitoring buildings and large agricultural areas; and vehicular networks, with applications in traffic monitoring and regulation. The Internet of Things (IoT) is also envisioned as a multi-hop network consisting of small interconnected devices, called "things", such as smart meters, smart traffic lights, and thermostats. Wireless multi-hop networks suffer from resource constraints, because all the devices have limited battery, computational power, and memory. The battery level of these devices should be preserved in order to ensure reliability and communication across the network. In addition, these devices are not a priori designed to defend against sophisticated adversaries, which may be deployed across the network in order to disrupt its operation. The distributed nature of this type of network introduces another limitation on protocol performance in the presence of adversaries. Hence, the inherent nature of these networks poses severe limitations on designing and optimizing protocols and network operations. In this dissertation, we focus on proposing novel techniques for designing protocols that are more resilient to attackers and more energy efficient. In the first part of the dissertation, we investigate the scenario of multiple adversaries deployed across the network, which significantly reduce network performance. We adopt a component-based and cross-layer view of network protocols to make protocols secure and resilient to attacks, and to apply our techniques across existing network protocols. We use the notion of trust between network entities to propose lightweight defense mechanisms which also satisfy performance requirements. Using cryptographic primitives in our network scenario can introduce significant computational overhead; in addition, behavioral aspects of entities are not captured by cryptographic primitives. Hence, trust metrics provide an efficient security metric in these scenarios, which can be utilized to introduce lightweight defense mechanisms applicable to deployed network protocols. In the second part of the dissertation, we focus on energy efficiency considerations in this type of network. Our motivation for this work is to extend network lifetime while maintaining critical performance requirements. We propose a distributed sleep management framework for heterogeneous machine-to-machine networks and two novel energy-efficient metrics. This framework and the routing metrics are integrated into existing routing protocols for machine-to-machine networks. We demonstrate the efficiency of our approach in terms of increasing network lifetime and maintaining packet delivery ratio. Furthermore, we propose a novel multi-metric energy-efficient routing protocol for dynamic networks (i.e., mobile ad-hoc networks) and illustrate its performance in terms of network lifetime. Finally, we investigate the energy-aware sensor coverage problem and propose a novel game-theoretic approach to capture the tradeoff between sensor coverage efficiency and energy consumption.
Item
A HUMAN-CENTERED APPROACH TO IMPROVING THE USER EXPERIENCE OF SOFTWARE UPDATES (2016)
Mathur, Arunesh; Chetty, Marshini; Library & Information Services; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Software updates are critical to the security of software systems and devices. Yet users often do not install them in a timely manner, leaving their devices open to security exploits. This research explored a re-design of automatic software updates on desktop and mobile devices to improve the uptake of updates through three studies. First, using interviews, we studied users' updating patterns and behaviors on desktop machines in a formative study. Second, we distilled these findings into the design of a low-fi prototype for desktops, and evaluated its efficacy for automating updates by means of a think-aloud study. Third, we investigated individual differences in update automation on Android devices using a large-scale survey and interviews. In this thesis, I present the findings of all three studies and provide evidence for how automatic updates can be better appropriated to fit users on both desktops and mobile devices. Additionally, I provide user interface design suggestions for software updates and outline recommendations for future work to improve the user experience of software updates.
Item
Language-based Techniques for Practical and Trustworthy Secure Multi-party Computations (2016)
Rastogi, Aseem; Hicks, Michael; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Secure Multi-party Computation (MPC) enables a set of parties to collaboratively compute, using cryptographic protocols, a function over their private data in a way that the participants do not see each other's data; they only see the final output. Typical MPC examples include statistical computations over joint private data, private set intersection, and auctions. While these applications are examples of monolithic MPC, richer MPC applications move between "normal" (i.e., per-party local) and "secure" (i.e., joint, multi-party secure) modes repeatedly, resulting overall in mixed-mode computations. For example, we might use MPC to implement the role of the dealer in a game of mental poker: the game is divided into rounds of local decision-making (e.g., bidding) and joint interaction (e.g., dealing). Mixed-mode computations are also used to improve performance over monolithic secure computations. Starting with the Fairplay project, several MPC frameworks have been proposed in the last decade to help programmers write MPC applications in a high-level language, while the toolchain manages the low-level details. However, these frameworks are either not expressive enough to allow writing mixed-mode applications or lack formal specification and reasoning capabilities, thereby diminishing the parties' trust in such tools and in the programs written using them. Furthermore, none of the frameworks provides a verified toolchain to run the MPC programs, leaving the potential for security holes that can compromise the privacy of parties' data. This dissertation presents language-based techniques to make MPC more practical and trustworthy. First, it presents the design and implementation of a new MPC Domain Specific Language, called Wysteria, for writing rich mixed-mode MPC applications. Wysteria provides several benefits over previous languages, including a conceptual single thread of control, generic support for more than two parties, high-level abstractions for secret shares, and a fully formalized type system and operational semantics. Using Wysteria, we have implemented several MPC applications, including, for the first time, a card-dealing application. The dissertation next presents Wys*, an embedding of Wysteria in F*, a full-featured verification-oriented programming language. Wys* improves on Wysteria along three lines: (a) it enables programmers to formally verify the correctness and security properties of their programs (as far as we know, Wys* is the first language to provide verification capabilities for MPC programs); (b) it provides a partially verified toolchain to run MPC programs; and finally (c) it enables MPC programs to use, with no extra effort, standard language constructs from the host language F*, thereby making it more usable and scalable. Finally, the dissertation develops static analyses that help optimize monolithic MPC programs into mixed-mode MPC programs, while providing similar privacy guarantees as the monolithic versions.
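The "secure mode" that such languages abstract over can be grounded in a very small example of additive secret sharing over a prime field: each party splits its private input into random shares, parties combine shares locally, and only the agreed-upon output (here, a sum) is ever opened. This is a generic textbook sketch, not the Wysteria or Wys* runtime, and it ignores the network and adversary model entirely.

    import secrets

    P = 2**61 - 1  # prime modulus for the toy field

    def share(value, n_parties):
        # Split `value` into n additive shares modulo P.
        shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % P)
        return shares

    inputs = [12, 7, 30]                        # each party's private input
    all_shares = [share(v, 3) for v in inputs]  # each input split three ways

    # Party j adds up the j-th share of every input; no single share set reveals an input.
    partials = [sum(s[j] for s in all_shares) % P for j in range(3)]

    # Combining the partial results opens only the sum, not the individual inputs.
    assert sum(partials) % P == sum(inputs) % P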