Computer Science Theses and Dissertations
Permanent URI for this collection: http://hdl.handle.net/1903/2756
Item Practical Cryptography for Blockchains: Secure Protocols with Minimal Trust (2024)
Glaeser, Noemi; Katz, Jonathan; Malavolta, Giulio; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
In 2008, Satoshi Nakamoto introduced Bitcoin, the first digital currency without a trusted authority, whose security is maintained by a decentralized blockchain. Since then, a plethora of decentralized applications have been proposed that utilize blockchains as a public bulletin board. This growing popularity has put pressure on the ecosystem to prioritize scalability at the expense of trustlessness and decentralization. This work explores the role cryptography has to play in the blockchain ecosystem to improve performance while maintaining minimal trust and strong security guarantees. I discuss a new paradigm for scalability, called naysayer proofs, which sits between optimistic and zero-knowledge approaches. Next, I cover two on-chain applications. First, I show how to improve the security of a class of coin mixing protocols by giving a formal security treatment of the construction paradigm and patching the security of an existing, insecure protocol. Second, I show how to construct practical on-chain protocols for a large class of elections and auctions which simultaneously offer fairness and non-interactivity without relying on a trusted third party. Finally, I look to the edges of the blockchain and formalize new design requirements for the problem of backing up high-value but rarely used secret keys, such as those used to secure the reserves of a cryptocurrency exchange, and develop a protocol which efficiently meets these new challenges. All of these works will be deployed in practice or have seen interest from practitioners. These examples show that advanced cryptography has the potential to meaningfully nudge the blockchain ecosystem towards increased security and reduced trust.

Item Structured discovery in graphs: Recommender systems and temporal graph analysis (2024)
Peyman, Sheyda Do'a; Lyzinski, Vince V.; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Graph-valued data arises in numerous diverse scientific fields ranging from sociology, epidemiology, and genomics to neuroscience and economics. For example, sociologists have used graphs to examine the roles of user attributes (gender, class, year) at American colleges and universities through the study of Facebook friendship networks and have studied segregation and homophily in social networks; epidemiologists have recently modeled Human-nCov protein-protein interactions via graphs; and neuroscientists have used graphs to model neuronal connectomes. The structure of a graph, including its latent features, the relationships between vertices, and the importance of each vertex, is central to graph analysis and inference. While it is common to imbue nodes and/or edges with implicitly observed numeric or qualitative features, in this work we consider latent network features that must be estimated from the network topology. The main focus of this text is to find ways of extracting the latent structure in the presence of network anomalies. These anomalies occur in different scenarios, including cases where the graph is subject to an adversarial attack and the anomaly inhibits inference, and cases where detecting the anomaly is itself the key inference task.
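The preceding abstract emphasizes estimating latent vertex features from the network topology alone. One generic way to do that is an adjacency spectral embedding; the following is a minimal sketch on made-up two-block data, illustrative only and not necessarily the estimator used in the thesis.

```python
import numpy as np

def adjacency_spectral_embedding(A, d):
    """Embed an undirected graph into R^d using the d largest-magnitude
    eigenpairs of its adjacency matrix (generic sketch, illustrative only)."""
    vals, vecs = np.linalg.eigh(A)                # eigenvalues in ascending order
    idx = np.argsort(np.abs(vals))[::-1][:d]      # d largest in magnitude
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

# Toy two-block graph: intra-block edges are more likely than inter-block ones,
# so the estimated latent positions separate the two (hypothetical) communities.
rng = np.random.default_rng(0)
n = 100
labels = np.repeat([0, 1], n // 2)
P = np.where(labels[:, None] == labels[None, :], 0.5, 0.1)
A = rng.binomial(1, P)
A = np.triu(A, 1)
A = A + A.T                                       # symmetric, no self-loops
X = adjacency_spectral_embedding(A, d=2)          # one latent vector per vertex
print(X.shape)                                    # (100, 2)
```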
The former scenario, where adversarial noise inhibits inference, is explored in the context of vertex nomination information retrieval, where we consider both analytic methods for countering the adversarial noise and the addition of a user-in-the-loop to the retrieval algorithm as a further countermeasure. In the latter scenario, where the anomaly itself is the target of inference, we use graph embedding methods to discover sequential anomalies in network time series.

Item A Pedagogical Approach to Ramsey Multiplicity (2023)
Brady, Robert; Gasarch, William; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
It is well known that for all 2-colorings of the edges of $K_6$ there is a monochromatic triangle. Less well known is that there are two monochromatic triangles. More generally, for all 2-colorings of the edges of $K_n$ there are at least roughly $n^3/24$ monochromatic triangles. Another way to state this is that the density of monochromatic triangles is at least $1/4$. The Ramsey Multiplicity of $k$ is (asymptotically) the greatest $\alpha$ such that for every 2-coloring of $K_n$ the density of monochromatic $K_k$'s is at least $\alpha$. This concept has been studied for many years. We survey the area and provide proofs that are more complete, better motivated, and written in modern notation.

Item QUANTUM COMBINATORIAL OPTIMIZATION ALGORITHMS FOR PACKING PROBLEMS IN CLASSICAL COMPUTING AND NETWORKING (2023)
Unsal, Cem Mehmet; Oruc, Yavuz A.; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
In computer engineering, packing problems play a central role in many aspects of hardware control. The field aims to maximize computer processing speed, network throughput, and dependability in industry applications. Many of these constrained maximization problems can be expressed as packing problems in integer programming when working with restrictions such as latency, memory size, race conditions, power, and component availability. Some of the most crucial of these integer programming problems are NP-hard to solve to global optimality. Therefore, real-world applications rely heavily on heuristics and meta-heuristics to find good solutions. With recent developments in quantum meta-heuristic methods and promising results in experimental quantum computing systems, quantum computing is rapidly becoming more relevant for complex real-world combinatorial optimization tasks. This thesis is about applications of quantum combinatorial optimization algorithms to classical computer engineering problems, including novel quantum computing techniques that respect the constraints of state-of-the-art experimental quantum systems. The thesis comprises five projects.

FASTER QUANTUM CONCENTRATION VIA GROVER'S SEARCH: One of the most important challenges in information networks is to gather data from a larger set of nodes to a smaller set of nodes. This can be done via the use of a concentrator architecture in the connection topology. This chapter is a proof of concept demonstrating that a quantum-based controller in large interconnection networks can asymptotically perform this task faster. We specifically present quantum algorithms for routing concentration assignments on full-capacity fat-and-slim concentrators, bounded fat-and-slim concentrators, and regular fat-and-slim concentrators. Classically, the concentration assignment takes O(n) time on all these concentrators, where n is the number of inputs.
Powered by Grover's quantum search algorithm, our algorithms take O(√(nc) ln(c)) time, where c is the capacity of the concentrator. Thus, our quantum algorithms are asymptotically faster than their classical counterparts when c ln^2(c) = o(n). In particular, c = n^μ satisfies c ln^2(c) = o(n) for any μ with 0 < μ < 1, implying a time complexity of O(n^((1+μ)/2) ln(n)).

QUANTUM ADVERSARIAL LEARNING IN EMULATION OF MONTE-CARLO METHODS FOR MAX-CUT APPROXIMATION: QAOA IS NOT OPTIMAL: One of the leading candidates for near-term quantum advantage is the class of variational quantum algorithms. However, these algorithms suffer from classical difficulty in optimizing the variational parameters as the number of parameters increases. Therefore, it is important to understand the expressibility and power of various ansätze to produce target states and distributions. To this end, we apply notions of emulation to Variational Quantum Annealing and the Quantum Approximate Optimization Algorithm (QAOA) to show that variational annealing schedules with equivalent numbers of parameters outperform QAOA. Our variational quantum annealing schedule is based on a novel polynomial parameterization that can be optimized in a similar gradient-free way as QAOA, using the same physical ingredients. In order to compare the performance of ansatz types, we have developed statistical notions of Monte-Carlo methods. Monte-Carlo methods are computer programs that generate random variables approximating a target quantity that is computationally hard to calculate exactly. While the best-known Monte-Carlo method is Monte-Carlo integration (e.g., diffusion Monte-Carlo or path-integral quantum Monte-Carlo), QAOA is itself a Monte-Carlo method that finds good solutions to NP-complete problems such as Max-Cut. We apply these statistical Monte-Carlo notions to further elucidate the theoretical framework around these quantum algorithms.

SCHEDULING JOBS IN A SHARED HIGH-PERFORMANCE COMPUTER WITH A NISQ COMPUTER: Several quantum approximation algorithms for NP-hard optimization problems have been described in the literature. The properties of quantum approximation algorithms have been well explored for optimization problems of Ising type with 2-local Hamiltonians. A wide range of optimization problems can be mapped to Ising problems; however, the mapping overhead of many problem instances puts them out of the reach of Noisy Intermediate-Scale Quantum (NISQ) devices. In this chapter, we develop a way of mapping constrained optimization problems to higher-order spin interactions to put a larger set of problem instances within reach of spin interaction devices with potential NISQ applications. We demonstrate the growth in the practicable set of problem instances by comparing the resource requirements as a function of coupling. As an example, we demonstrate our techniques on the problem of scheduling jobs in a high-performance computer queue with limited memory and CPUs.

PROTEIN STRUCTURES WITH OSCILLATING QPACKER: A significant challenge in designing proteins for therapeutic purposes is determining the structure of a protein, that is, finding the sidechain identities given a protein backbone. This problem can be easily and efficiently encoded as a quadratic binary optimization problem, and there has been a significant effort in the field of quantum information to find ways to solve such problems, both exactly and approximately.
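Both the scheduling and sidechain sections above rest on the same encoding idea: a hard combinatorial constraint is folded into a quadratic (or higher-order) penalty term so that the problem becomes an unconstrained binary optimization. The sketch below is a generic toy illustration of that trick, with made-up positions, choices, costs, and penalty weight; it is not the thesis's QPacker or scheduling formulation.

```python
import itertools
import numpy as np

# Toy QUBO: choose exactly one option per position (think: one sidechain
# rotamer per residue, or one machine per job) while minimizing a pairwise
# interaction cost.  The hard constraint sum_j x[p, j] == 1 is folded into
# the quadratic penalty  penalty_weight * (sum_j x[p, j] - 1)^2, the standard
# trick for turning a constrained problem into an unconstrained binary one.
# All sizes, costs, and the penalty weight are made up for illustration.
n_pos, n_choice, penalty_weight = 2, 3, 10.0
rng = np.random.default_rng(1)
pair_cost = rng.uniform(0.0, 1.0, size=(n_choice, n_choice))

def energy(bits):
    x = np.asarray(bits, dtype=float).reshape(n_pos, n_choice)
    interaction = sum(
        x[0, i] * x[1, j] * pair_cost[i, j]
        for i in range(n_choice)
        for j in range(n_choice)
    )
    penalty = penalty_weight * sum((x[p].sum() - 1.0) ** 2 for p in range(n_pos))
    return interaction + penalty

# Brute force over all 2^(n_pos * n_choice) assignments (fine for a toy size).
best = min(itertools.product([0, 1], repeat=n_pos * n_choice), key=energy)
print(np.asarray(best).reshape(n_pos, n_choice), energy(best))
```

Brute force is only feasible here because the toy instance has 2^6 assignments; the point is simply that, for a large enough penalty weight, minimizers of the penalized objective satisfy the one-choice-per-position constraint.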
An important initiative has applied experimental quantum annealing platforms to this sidechain problem and obtained promising results. This project is about optimizing the annealing schedule for the sidechain identity problem, inspired by cutting-edge developments in the algorithmic theory of quantum annealing.

ON THE COMPLEXITY OF GENERALIZED DISCRETE LOGARITHM PROBLEM: The Generalized Discrete Logarithm Problem (GDLP) extends the Discrete Logarithm Problem, in which the goal is to find x ∈ ℤ_s such that g^x mod s = y for given g, y ∈ ℤ_s. The generalized version is similar, but instead of a single base element it uses a number of base elements that do not necessarily commute. In this chapter, we prove that GDLP is NP-hard for symmetric groups. The lower-bound complexity of GDLP had been an open question since GDLP was defined in 2008, until our proof. Furthermore, we prove that GDLP remains NP-hard even when the base elements are permutations of at most three elements. Lastly, we discuss the implications of our proofs in classical and quantum complexity theory.

Item Non-Hermitian approaches for pair-excitation in quantum Boson dynamics (2022)
Sorokanich, Stephen; Margetis, Dionisios; Grillakis, Manoussos; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The topic of this thesis is the mathematical analysis of physically motivated models for a trapped dilute Bose gas with repulsive pairwise atomic interactions at zero temperature. Our goal is to develop the spectral theory for excited many-body quantum states of these systems by accounting for the scattering of atoms in pairs from the macroscopic state (condensate). This general methodology, known as pair-excitation, was introduced in the physics literature in the 1960s; the work of this thesis provides the first comprehensive mathematical treatment of many of its aspects, including the spectral theory for pair-transformed approximate Hamiltonians, a general existence theory for the pair-excitation kernel, and the connection of the pair-excitation formalism to quasiparticle excitations in the Bose gas. We formulate the method of pair-excitation for several historical models of the Bose gas from the physics literature. In particular, we focus on the seminal works of Wu, Fetter, Griffin, and Lee, Huang, and Yang. Each of these models introduces unique features to the mathematical analysis, but the general strategy remains the same: transform the approximate Hamiltonian using a suitably defined pair-excitation operator. This operator is not determined a priori, but is chosen as part of the problem in order to simplify the expression of excited states of the transformed system. The study begins with models for the Bose gas in the non-translation-invariant setting, where the particles are spatially confined in an external trapping potential. In this setting, formulating the pair-excitation method entails solving a nonlinear integro-partial-differential equation for the pair-excitation kernel. We provide a general existence theory for this kernel via a variational approach. The kernel which we find allows us to connect the pair-excitation method to the more widely studied unitary transformation of quadratic Hamiltonians via Bogoliubov rotation.
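For context on the Bogoliubov rotation mentioned above, the textbook diagonalization of a translation-invariant quadratic Bose Hamiltonian runs as follows; this is a standard sketch, not the specific trapped-gas Hamiltonian analyzed in the thesis, and sign conventions vary between references.
\[
H_{\mathrm{quad}} = \sum_{k}\Big[A_k\, a_k^\dagger a_k + \tfrac{1}{2}B_k\big(a_k^\dagger a_{-k}^\dagger + a_k a_{-k}\big)\Big],
\qquad
b_k = \cosh(\theta_k)\, a_k + \sinh(\theta_k)\, a_{-k}^\dagger .
\]
Choosing the hyperbolic angle $\theta_k$ so that the anomalous terms $b_k^\dagger b_{-k}^\dagger + b_k b_{-k}$ cancel ($\tanh 2\theta_k = B_k/A_k$ with this sign convention) yields
\[
H_{\mathrm{quad}} = \sum_k E_k\, b_k^\dagger b_k + \mathrm{const},
\qquad
E_k = \sqrt{A_k^2 - B_k^2}, \quad |B_k| < A_k ,
\]
so the operators $b_k^\dagger$ create quasiparticle excitations with dispersion $E_k$.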
The theory for the kernel also allows us to write a simple formula for excited many-body states, which can be adapted to the various models considered in this work. We then study the problem for the pair-excited transformed approximate Hamiltonian for Bosons in a periodic box. In this setting, the description of the effective Hamiltonian in the momentum basis is particularly simple. However, the lack of particle conservation means that the pair-excitation transform is unbounded in operator norm, and the spectral methods developed in earlier chapters are enriched with new tools.

Item HOMOTOPY CONTINUATION METHODS FOR PHASE RETRIEVAL (2021)
Bekkerman, David; Balan, Radu; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
In this dissertation, we discuss the problem of recovering a signal from a set of phaseless measurements. This type of problem shows up in numerous applications and is known for its numerical difficulty. It finds use in X-ray crystallography, microscopy, quantum information, and many other areas. We formulate the problem using a non-convex quadratic loss function whose global minimum recovers the phase of the measurement. Our approach to this problem is via a homotopy continuation method. These methods have found great use in solving systems of nonlinear equations in numerical algebraic geometry. The idea is to initialize the solution of a related system at a known global optimum, then continuously deform the criterion and follow the solution path until we find the minimum of the desired loss function. We analyze convergence properties and asymptotic results for these algorithms, as well as gather some numerical statistics. The main contribution of this thesis is deriving conditions for convergence of the algorithm and an asymptotic rate for when these conditions are satisfied. We also show that the algorithm achieves good numerical accuracy. The dissertation is split into several chapters, further divided between the real and complex cases. Chapter 1 gives some background on Abstract Phase Retrieval and Homotopy Continuation Methods. Chapter 2 covers the nature of the algorithm (named the Golden Retriever), gives a summary and description of the theoretical results, and shows some numerical results. Chapter 3 covers the details of the derivation and results in the real case, and Chapter 4 covers the same for the complex case.

Item Part I: On the Stability threshold of Couette flow in a uniform magnetic field; Part II: Quantitative convergence to equilibrium for hypoelliptic stochastic differential equations with small noise (2021)
Liss, Kyle; Bedrossian, Jacob; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation contains two parts. In Part I, we study the stability of the Couette flow (y, 0, 0) in the presence of a uniform magnetic field a*(b, 0, 1) on T × R × T using the 3D incompressible magnetohydrodynamics (MHD) equations. We consider the inviscid, perfect-conductor limit Re^(-1) = Rm^(-1) << 1 and prove that for strong and suitably oriented background fields the Couette flow is asymptotically stable to perturbations that are O(Re^(-1)) in the Sobolev space H^N. More precisely, we establish the decay estimates predicted by a linear stability analysis and show that the perturbations u(t, x+yt, y, z) and b(t, x+yt, y, z) remain O(Re^(-1)) in H^M for some 1 << M(b) < N.
In the Navier-Stokes case, high-regularity control on the perturbation in a coordinate system adapted to the mixing of the Couette flow is known only under the stronger assumption of O(Re^(-3/2)) data. The improvement in the MHD setting is possible because the magnetic field induces time oscillations that partially suppress the lift-up effect, which is the primary transient growth mechanism for the Navier-Stokes equations linearized around Couette flow. In Part II, we study the convergence rate to equilibrium for a family of Markov semigroups (parametrized by epsilon > 0) generated by a class of hypoelliptic stochastic differential equations on R^d, including Galerkin truncations of the incompressible Navier-Stokes equations, Lorenz-96, and the shell model SABRA. In the regime of vanishing, balanced noise and dissipation, we obtain a sharp (in terms of scaling) quantitative estimate on the exponential convergence in terms of the small parameter epsilon. By scaling, this regime yields corresponding optimal results both for the fixed-dissipation, large-noise limit and for the fixed-noise, vanishing-dissipation limit. As part of the proof, and of independent interest, we obtain uniform-in-epsilon upper and lower bounds on the density of the stationary measure. Upper bounds are obtained by a hypoelliptic Moser iteration and lower bounds by a De Giorgi-type iteration (both uniform in epsilon). The spectral gap estimate on the semigroup is obtained by a weak Poincaré inequality argument combined with quantitative hypoelliptic regularization of the time-dependent problem.

Item Reinforcement Learning Methods for Conic Finance (2020)
Chopra, Sahil; Madan, Dilip; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Conic finance is a world of two prices, a more grounded reality than the theory of one price. This world, however, is constructed by considering nonadditive expectations of risks or value functions, which makes some optimization algorithms incompatible with this universe, if not infeasible. This is especially evident in the application of reinforcement learning algorithms, where the underlying principles of TD learning and the Bellman equations are based on the additivity of value functions. Hence, the task undertaken here is to mold the recent advances in the field of distributional reinforcement learning to be conducive to learning in the setting of nonadditive dynamics. Algorithms for discrete and continuous actions are described and illustrated on sample problems in finance.

Item Modeling Imatinib-Treated Chronic Myelogenous Leukemia and the Immune System (2019)
Peters, Cara Disa; Levy, Doron; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Chronic myelogenous leukemia (CML) can be considered a chronic condition thanks to the development of tyrosine kinase inhibitors in the early 2000s. Most CML patients are able to manage the disease, but unending treatment can affect quality of life. The focus of much clinical research has thus shifted to treatment cessation, and many clinical trials have demonstrated that treatment-free remission is possible. While many questions remain about the criteria for selecting cessation candidates, much evidence indicates that the immune system plays a significant role. Mathematical modeling provides a complementary component to clinical research.
Existing models describe well the dynamics of CML in the first phase of treatment, where most patients experience a biphasic decline in the BCR-ABL ratio. The Clapp model is one of the first to incorporate the immune system and capture the often-seen oscillations in the BCR-ABL ratio that occur later in therapy. However, these models are far from being usable in a predictive manner and do not fully capture the dynamics surrounding treatment cessation. Based on clinical research demonstrating the importance of the immune response, we hypothesize that a mathematical model of CML should include a more detailed description of the immune system. We therefore present a new model that extends the Clapp model. The model is then fit to patient data and determined to be a good qualitative description of CML dynamics. With this model it can be shown that treatment-free remission is possible. However, the model introduces new parameters that must be correctly identified in order for it to have predictive power. We next consider the parameter identification problem. Since the dynamics of CML can be considered in two phases, the biphasic decline in and later oscillations of the BCR-ABL ratio, we hypothesize that parameter values may differ over the course of treatment, and we look to identify which parameters are most variable by refitting the model to different windows of data. It is determined that parameters associated with immune response and regulation are the most difficult to identify and could be key to selecting good treatment cessation candidates. To increase the predictive power of our model, we consider data assimilation techniques, which are successfully used in weather forecasting. The extended Kalman filter (EKF) is used to assimilate CML patient data (a generic sketch of such a filter update appears at the end of this listing). Although we determine that the EKF is not the ideal technique for our model, it is shown that data assimilation methods in general hold promising value in the search for a predictive model of CML. In order to have the most success, new techniques should be considered, data should be collected more frequently, and immune assay data should be made available.

Item Mathematical Models of Underlying Dynamics in Acute and Chronic Immunology (2019)
Wyatt, Asia Alexandria; Levy, Doron; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
During an immune response, it is understood that there are key differences between the cells and cytokines present in a primary response and those present in subsequent responses. Specifically, after a primary response, memory cells are present and drive the clearance of antigen in these later immune responses. When comparing acute infections to chronic infections, there are also differences in the dynamics of the immune system. In this dissertation, we develop three mathematical models to explore these differences in the immune response to acute and chronic infections through the creation, activation, regulation, and long-term maintenance of T cells. We mimic this biological behavior through the use of delay differential equation (DDE) models (a minimal integration sketch for a generic delayed system appears below). The first model explores the dynamics of adaptive immunity in primary and secondary responses to acute infections.
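As referenced above, here is a hedged, minimal illustration of integrating a delayed system: a toy antigen/effector model with a fixed activation delay, stepped with explicit Euler and a history buffer. The equations, parameters, and initial data are hypothetical placeholders, not the dissertation's models.

```python
import numpy as np

# Toy delayed immune-activation model, integrated with explicit Euler and a
# history buffer.  Equations, parameters, and initial data are hypothetical:
#   A'(t) = r*A(t) - k*E(t)*A(t)      antigen grows and is cleared by effectors
#   E'(t) = s*A(t - tau) - d*E(t)     effectors respond to the delayed antigen signal
r, k, s, d, tau = 1.0, 1.0, 0.5, 0.3, 2.0
dt, T = 0.01, 40.0
n_steps, lag = int(T / dt), int(tau / dt)

A = np.zeros(n_steps + 1)
E = np.zeros(n_steps + 1)
A[0] = 0.01                      # small initial antigen load; history A(t) = A[0] for t <= 0
for i in range(n_steps):
    A_delayed = A[i - lag] if i >= lag else A[0]
    A[i + 1] = A[i] + dt * (r * A[i] - k * E[i] * A[i])
    E[i + 1] = E[i] + dt * (s * A_delayed - d * E[i])

print("peak antigen:", A.max(), "final antigen:", A[-1])
```

The only delay-specific ingredient is the lookup A[i - lag], which reads the antigen level tau time units in the past; everything else is a standard explicit ODE step.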
For this first model, it is shown that while we observe similar amounts of antigen stimulation in both immune responses, with the incorporation of memory T cells we see an increase both in the number of effector T cells present and in the speed of activation of the immune system in the secondary response. We conclude that our model is robust and can be applied to study different types of antigen, from viral to bacterial. Extending our work to chronic infections, we develop our second and third models to explore breast cancer dormancy and T cell exhaustion. For our breast cancer dormancy model, we find that the model behaves similarly to the acute-infection case, but with constant antigen stimulation. Moreover, we observe the importance of immune protection for the long-term survival of breast cancer cells. In our third model, we find that while memory T cells play a major role in the effectiveness of the immune system in acute infection, in chronic infections, over long periods of time, T cell exhaustion prevents proper immune function and clearance of antigen. We also observe how the lack of long-term maintenance of memory T cells plays an important role in the final outcome of the system. Finally, we propose two potential extensions to the three models developed: creating a simplified acute infection model, and creating a combined model of breast cancer dormancy with T cell exhaustion.
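Finally, a hedged illustration of the data-assimilation step mentioned in the CML abstract above: a generic extended Kalman filter predict/update cycle. The state model, observation model, and noise covariances below are hypothetical stand-ins, not the dissertation's CML model or its patient data.

```python
import numpy as np

# Generic extended Kalman filter step (illustrative sketch of the data
# assimilation idea; the dynamics f, observation h, and noise levels are
# hypothetical stand-ins, not the dissertation's CML model).
def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    # Predict: push the state estimate and its covariance through the model.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement z.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: a 1-D exponentially decaying quantity observed with noise.
f = lambda x: np.array([0.95 * x[0]])
F_jac = lambda x: np.array([[0.95]])
h = lambda x: x
H_jac = lambda x: np.array([[1.0]])
Q, R = np.array([[1e-4]]), np.array([[1e-2]])
x, P = np.array([1.0]), np.array([[1.0]])
for z in (0.9, 0.85, 0.8):
    x, P = ekf_step(x, P, np.array([z]), f, F_jac, h, H_jac, Q, R)
print(x, P)
```

In an assimilation setting of this kind, f would be one step of the mechanistic disease model, h would map the model state to the measured quantity (e.g., a BCR-ABL ratio), and each new patient measurement z would correct the model trajectory.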