Mathematics

Permanent URI for this community: http://hdl.handle.net/1903/2261

Search Results

Now showing 1 - 7 of 7
  • Item
    QUANTUM COMBINATORIAL OPTIMIZATION ALGORITHMS FOR PACKING PROBLEMS IN CLASSICAL COMPUTING AND NETWORKING
    (2023) Unsal, Cem Mehmet; Oruc, Yavuz A; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In computer engineering, packing problems play a central role in many aspects of hardware control. The field aims to maximize computer processing speed, network throughput, and dependability in industry applications. Many of these constrained maximization problems can be expressed as packing problems in integer programming when working with restrictions such as latency, memory size, race conditions, power, and component availability. Some of the most crucial of these integer programming problems are NP-hard for the global optimum. Therefore, real-world applications heavily rely on heuristics and meta-heuristics to find good solutions. With recent developments in quantum meta-heuristic methods and promising results in experimental quantum computing systems, quantum computing is rapidly becoming more relevant for complex real-world combinatorial optimization tasks. This thesis is about applications of quantum combinatorial optimization algorithms to classical computer engineering problems, including novel quantum computing techniques that respect the constraints of state-of-the-art experimental quantum systems. The thesis comprises five projects.
    FASTER QUANTUM CONCENTRATION VIA GROVER'S SEARCH: One of the most important challenges in information networks is to gather data from a larger set of nodes to a smaller set of nodes, which can be done via a concentrator architecture in the connection topology. This chapter is a proof of concept demonstrating that a quantum-based controller in large interconnection networks can perform this task asymptotically faster. We specifically present quantum algorithms for routing concentration assignments on full-capacity fat-and-slim concentrators, bounded fat-and-slim concentrators, and regular fat-and-slim concentrators. Classically, the concentration assignment takes O(n) time on all these concentrators, where n is the number of inputs. Powered by Grover's quantum search algorithm, our algorithms take O(√(nc) ln(c)) time, where c is the capacity of the concentrator. Thus, our quantum algorithms are asymptotically faster than their classical counterparts when c ln²(c) = o(n). In general, c = n^μ satisfies c ln²(c) = o(n) for any μ with 0 < μ < 1, implying a time complexity of O(n^(0.5(1+μ)) ln(n)).
    QUANTUM ADVERSARIAL LEARNING IN EMULATION OF MONTE-CARLO METHODS FOR MAX-CUT APPROXIMATION: QAOA IS NOT OPTIMAL: One of the leading candidates for near-term quantum advantage is the class of Variational Quantum Algorithms. However, these algorithms suffer from classical difficulty in optimizing the variational parameters as the number of parameters increases. It is therefore important to understand the expressibility and power of various ansätze to produce target states and distributions. To this end, we apply notions of emulation to Variational Quantum Annealing and the Quantum Approximate Optimization Algorithm (QAOA) to show that variational annealing schedules with equivalent numbers of parameters outperform QAOA. Our Variational Quantum Annealing schedule is based on a novel polynomial parameterization that can be optimized in the same gradient-free way as QAOA, using the same physical ingredients. In order to compare the performance of ansatz types, we have developed statistical notions of Monte-Carlo methods, which are computer programs that generate random variables approximating a target number that is computationally hard to calculate exactly. While the most well-known Monte-Carlo method is Monte-Carlo integration (e.g., Diffusion Monte-Carlo or path-integral quantum Monte-Carlo), QAOA is itself a Monte-Carlo method that finds good solutions to NP-complete problems such as Max-Cut. We apply these statistical Monte-Carlo notions to further elucidate the theoretical framework around these quantum algorithms.
    SCHEDULING JOBS IN A SHARED HIGH-PERFORMANCE COMPUTER WITH A NISQ COMPUTER: Several quantum approximation algorithms for NP-hard optimization problems have been described in the literature, and their properties have been well explored for optimization problems of Ising type with 2-local Hamiltonians. A wide range of optimization problems can be mapped to Ising problems; however, the mapping overhead of many problem instances puts them out of the reach of Noisy Intermediate-Scale Quantum (NISQ) devices. In this chapter, we develop a way of mapping constrained optimization problems to higher-order spin interactions, putting a larger set of problem instances within reach of spin-interaction devices with potential NISQ applications. We demonstrate the growth in the practicable set of problem instances by comparing resource requirements as a function of coupling. As an example, we demonstrate our techniques on the problem of scheduling jobs in a high-performance computer queue with limited memory and CPUs.
    PROTEIN STRUCTURES WITH OSCILLATING QPACKER: A significant challenge in designing proteins for therapeutic purposes is determining a protein's structure, that is, finding the sidechain identities given a protein backbone. This problem can be encoded easily and efficiently as a quadratic binary optimization problem, and there has been a significant effort in the field of quantum information to solve such problems, both exactly and approximately. An important initiative has applied experimental quantum annealing platforms to this problem and obtained promising results. This project optimizes the annealing schedule for the sidechain-identity problem, inspired by cutting-edge developments in the algorithmic theory of quantum annealing.
    ON THE COMPLEXITY OF GENERALIZED DISCRETE LOGARITHM PROBLEM: The Generalized Discrete Logarithm Problem (GDLP) extends the Discrete Logarithm Problem, in which the goal is to find x ∈ ℤ_s such that g^x mod s = y for given g, y ∈ ℤ_s; in GDLP, instead of a single base element, there are a number of base elements that do not necessarily commute. In this chapter, we prove that GDLP is NP-hard for symmetric groups. The lower-bound complexity of GDLP had been an open question since the problem was defined in 2008; our proof settles it. Furthermore, we prove that GDLP remains NP-hard even when the base elements are permutations of at most three elements. Lastly, we discuss the implications and potential applications of our proofs in classical and quantum complexity theory.
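    To make the GDLP setting concrete, here is a minimal brute-force sketch (illustrative only, not from the dissertation): it searches for a word in a set of permutation generators whose composition equals a target permutation, showing how non-commuting base elements generalize the single-base discrete logarithm.

```python
from itertools import product

def compose(p, q):
    """Compose permutations given as tuples: (p ∘ q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

def generalized_dlog(generators, target, max_len):
    """Brute-force generalized discrete log in a permutation group.

    Searches for a word (sequence of generator indices) whose composition
    equals `target`. The search space grows as k**L for k generators and
    word length L, which is why brute force is hopeless at scale.
    """
    identity = tuple(range(len(target)))
    for length in range(1, max_len + 1):
        for word in product(range(len(generators)), repeat=length):
            g = identity
            for idx in word:
                g = compose(generators[idx], g)  # apply generator on the left
            if g == target:
                return word
    return None

# Two non-commuting permutations of {0, 1, 2}: ab != ba.
a = (1, 0, 2)       # transposition (0 1)
b = (1, 2, 0)       # 3-cycle (0 1 2)
target = (2, 1, 0)  # transposition (0 2)
print(generalized_dlog([a, b], target, max_len=4))  # -> (0, 1), i.e., b ∘ a
```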
  • Item
    Stabilizing Column Generation via Dual Optimal Inequalities with Applications in Logistics and Robotics
    (2020) Haghani, Naveed; Balan, Radu; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This work addresses the challenge of stabilizing column generation (CG) via dual optimal inequalities (DOI). We present two novel classes of DOI for the general context of set cover problems, which we call Smooth DOI (S-DOI) and Flexible DOI (F-DOI). S-DOI can be interpreted as allowing the undercovering of items at the cost of overcovering others and incurring an objective penalty; they leverage the fact that dual values associated with items should change smoothly over space. F-DOI can be interpreted as offering primal objective rewards for the overcovering of items. We combine these DOI to produce a joint class called Smooth-Flexible DOI (SF-DOI). We apply these DOI to three classical problems in logistics and operations research: the Single Source Capacitated Facility Location Problem, the Capacitated p-Median Problem, and the Capacitated Vehicle Routing Problem. We prove that these DOI are valid and are guaranteed not to alter the optimal solution of CG, and we present techniques for their use when solving CG with relaxed column restrictions. This work also introduces a CG approach to Multi-Robot Routing (MRR), the problem of routing a fleet of robots in a warehouse to collectively complete a set of tasks while prohibiting collisions. We present two distinct formulations that tackle unique problem variants: the first we model as a set packing problem, the second as a set cover problem. We show that the pricing problem for both approaches amounts to an elementary resource constrained shortest path problem (ERCSPP), an NP-hard problem commonly studied in other CG contexts. We present an efficient implementation of our CG approach that radically reduces the state size of the ERCSPP. Finally, we present a novel heuristic algorithm for solving the ERCSPP and offer probabilistic guarantees for its likelihood of delivering the optimal solution.
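    As background for the pricing mechanics that DOI stabilize, the following is a minimal, unstabilized column generation loop for the set cover LP relaxation. It is a sketch under illustrative assumptions: a fixed candidate-column pool stands in for a real pricing problem, the fallback costs are arbitrary, and all names are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def solve_rmp(columns, costs, n_items):
    """Solve the restricted master LP: min c^T x  s.t.  A x >= 1, x >= 0."""
    A = np.array(columns, dtype=float).T      # items x columns incidence matrix
    res = linprog(c=costs, A_ub=-A, b_ub=-np.ones(n_items),
                  bounds=[(0, None)] * len(costs), method="highs")
    duals = -res.ineqlin.marginals            # dual prices y >= 0 for the cover rows
    return res, duals

def column_generation(pool, pool_costs, n_items, tol=1e-9):
    """Toy CG loop: start from expensive singleton columns, price over a pool."""
    columns = [[1.0 if i == j else 0.0 for i in range(n_items)]
               for j in range(n_items)]
    costs = [10.0] * n_items                  # costly fallbacks keep the RMP feasible
    while True:
        res, y = solve_rmp(columns, costs, n_items)
        # Pricing: pick the pool column with the most negative reduced cost.
        reduced = [c - y @ np.array(a, dtype=float)
                   for a, c in zip(pool, pool_costs)]
        best = int(np.argmin(reduced))
        if reduced[best] >= -tol:             # no improving column: LP optimum reached
            return res.fun, columns
        columns.append(pool[best])
        costs.append(pool_costs[best])

# Tiny instance: 4 items, a pool of multi-item covers.
pool = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1], [1, 1, 1, 1]]
pool_costs = [3.0, 3.0, 3.0, 3.0, 7.0]
obj, cols = column_generation(pool, pool_costs, n_items=4)
print(obj)                                    # LP bound for covering all 4 items
```

    The duals y drive the whole loop; DOI such as S-DOI and F-DOI work by constraining or penalizing these duals so they oscillate less between iterations.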
  • Item
    Solving, Generating, and Modeling Arc Routing Problems
    (2017) Lum, Oliver; Golden, Bruce; Wasil, Edward; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Arc routing problems are an important class of network optimization problems. In this dissertation, we develop an open source library with solvers that can be applied to several uncapacitated arc routing problems. The library has a flexible architecture and the ability to visualize real-world street networks. We also develop a software tool that allows users to generate arc routing instances directly from an open source map database. Our tool has a visualization capability that can produce images of routes overlaid on a specific instance. We model and solve two variants of the standard arc routing problem: (1) the windy rural postman problem with zigzag time windows and (2) the min-max K windy rural postman problem. In the first variant, we allow servicing of both sides of some streets in a network, that is, a vehicle can service a street by zigzagging. We combine insertion and local search techniques to produce high-quality solutions to a set of test instances. In the second variant, we design a cluster-first, route-second heuristic that compares favorably to an existing heuristic and produces routes that are intuitively appealing. Finally, we show how to partition a street network into routes that are compact, balanced, and visually appealing.
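    To illustrate the cluster-first, route-second idea mentioned above, here is a toy sketch, not the dissertation's heuristic: it clusters arc midpoints with plain k-means and orders each cluster by a nearest-neighbor walk, ignoring windy costs, depots, and route balancing.

```python
import numpy as np

def cluster_first_route_second(arcs, k, iters=50, seed=0):
    """Toy cluster-first, route-second heuristic for K-route arc routing.

    arcs: list of ((x1, y1), (x2, y2)) required street segments.
    Clusters arc midpoints with k-means, then orders each cluster by a
    nearest-neighbor walk over midpoints.
    """
    rng = np.random.default_rng(seed)
    mids = np.array([[(a[0][0] + a[1][0]) / 2, (a[0][1] + a[1][1]) / 2]
                     for a in arcs])
    centers = mids[rng.choice(len(mids), k, replace=False)]
    for _ in range(iters):                    # Lloyd's algorithm
        labels = np.argmin(((mids[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = mids[labels == j].mean(axis=0)
    routes = []
    for j in range(k):                        # route each cluster greedily
        idx = list(np.where(labels == j)[0])
        route, cur = [], centers[j]
        while idx:
            nxt = min(idx, key=lambda i: np.linalg.norm(mids[i] - cur))
            route.append(nxt)
            cur = mids[nxt]
            idx.remove(nxt)
        routes.append(route)
    return routes

arcs = [((0, 0), (0, 1)), ((0, 1), (1, 1)), ((5, 5), (5, 6)), ((5, 6), (6, 6))]
print(cluster_first_route_second(arcs, k=2))  # two spatially compact routes
```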
  • Item
    Spectral Frame Analysis and Learning through Graph Structure
    (2016) Clark, Chae Almon; Okoudjou, Kasso A; Czaja, Wojciech K; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This dissertation investigates the connection between spectral analysis and frame theory. When considering the spectral properties of a frame, we present a few novel results relating to the spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues. From this, we prove a similar result when an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames by first showing that there is an equivalence between scaling a frame and optimization problems with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented: linear objectives encourage sparse scalings, while barrier objective functions force dense solutions. We further consider frames in high dimensions and derive various solution techniques. From here, we restrict ourselves to particular frame classes to add more specificity to the results. Using frames generated from distributions allows for the placement of probabilistic bounds on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an orthonormal basis (ONB), and for continuous symmetric distributions (uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning and show the infeasibility of the problem in the general case; after a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès and give some background on Electron Energy-Loss Spectroscopy (EELS). We design a novel scheme for processing EELS data through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem and present an algorithm for its solution, and we discuss the differences from RPCA that make theoretical guarantees difficult.
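    The scalability question can be checked numerically for a concrete frame. Below is a small sketch (illustrative, not the dissertation's method) that searches for nonnegative weights making a frame tight by solving a nonnegative least-squares system; the Mercedes-Benz frame in R², a classical example, is scalable with equal weights.

```python
import numpy as np
from scipy.optimize import nnls

def frame_scaling(F):
    """Find nonnegative weights w with sum_i w_i f_i f_i^T = I (if any exist).

    F: d x N matrix whose columns are the frame vectors. Scaling a frame is
    linear in w_i = c_i^2, so it reduces to nonnegative least squares.
    Returns (w, residual); residual ~ 0 means the frame is scalable.
    """
    d, N = F.shape
    # Stack the independent entries of each rank-one matrix f_i f_i^T.
    rows = [(r, s) for r in range(d) for s in range(r, d)]
    A = np.array([[F[r, i] * F[s, i] for i in range(N)] for (r, s) in rows])
    b = np.array([1.0 if r == s else 0.0 for (r, s) in rows])
    return nnls(A, b)

# Mercedes-Benz frame in R^2: unit vectors at angles 0, 120, 240 degrees.
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
F = np.vstack([np.cos(angles), np.sin(angles)])
w, resid = frame_scaling(F)
print(w, resid)  # ~ [2/3, 2/3, 2/3], residual ~ 0: scalable with equal weights
```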
  • Item
    STATISTICAL AND OPTIMAL LEARNING WITH APPLICATIONS IN BUSINESS ANALYTICS
    (2015) Han, Bin; Ryzhov, Ilya O; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Statistical learning is widely used in business analytics to discover structure or exploit patterns from historical data and to build models that capture relationships between an outcome of interest and a set of variables. Optimal learning, on the other hand, solves the operational side of the problem by iterating between decision making and data acquisition/learning. The two problems often go hand in hand, exhibiting a feedback loop between statistics and optimization. We apply this statistical/optimal learning concept to a fundraising marketing campaign problem arising in many non-profit organizations. Many such organizations use direct-mail marketing to cultivate one-time donors and convert them into recurring contributors. Cultivated donors generate much more revenue than new donors, but also lapse with time, making it important to steadily draw in new cultivations. The direct-mail budget is limited, but better-designed mailings can improve success rates without increasing costs. We first apply statistical learning to analyze the effectiveness of several design approaches used in practice, based on a massive dataset covering 8.6 million direct-mail communications with donors to the American Red Cross during 2009–2011. We find evidence that mailed appeals are more effective when they emphasize disaster preparedness and training efforts over post-disaster cleanup. Including small cards that affirm donors' identity as Red Cross supporters is an effective strategy, while including gift items such as address labels is not. Finally, very recent acquisitions are more likely to respond to appeals that ask them to contribute an amount similar to their most recent donation, but this approach has an adverse effect on donors with a longer history. We show via simulation that a simple design strategy based on these insights has the potential to improve success rates from 5.4% to 8.1%. When a new scenario arises, however, new data must be acquired to update the model and decisions; this is studied under the optimal learning framework. The goal becomes discovering a sequential information collection strategy that learns the best campaign design alternative as quickly as possible. A regression structure is used to learn a set of unknown parameters, alternating with optimization to design new data points. Such problems have been extensively studied in the ranking and selection (R&S) community, but traditional R&S procedures incur high computational costs when the decision space grows combinatorially. We present a value of information procedure for simultaneously learning unknown regression parameters and unknown sampling noise. We then develop an approximate version of the procedure, based on a semidefinite programming relaxation, that retains good performance and scales better to large problems. We also prove the asymptotic consistency of the algorithm in the parametric model, a result that was not previously available even for the known-variance case.
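    For a flavor of value-of-information scoring in ranking and selection, here is the textbook knowledge-gradient formula for independent normal beliefs with known sampling noise (a standard result, e.g., Frazier et al.; the dissertation's procedure additionally handles regression structure and unknown noise, which this sketch does not).

```python
import numpy as np
from scipy.stats import norm

def kg_factor(z):
    """f(z) = z * Phi(z) + phi(z), the standard-normal KG integrand."""
    return z * norm.cdf(z) + norm.pdf(z)

def knowledge_gradient(mu, sigma, noise_sd):
    """Knowledge-gradient (value of information) per alternative.

    mu, sigma: posterior means and standard deviations of each alternative.
    Measuring the alternative with the highest score is the myopically
    optimal single measurement.
    """
    sigma_tilde = sigma**2 / np.sqrt(sigma**2 + noise_sd**2)
    best_other = np.array([np.max(np.delete(mu, i)) for i in range(len(mu))])
    z = -np.abs(mu - best_other) / sigma_tilde
    return sigma_tilde * kg_factor(z)

# Hypothetical campaign designs: close means, very different uncertainty.
mu = np.array([0.054, 0.060, 0.050])
sigma = np.array([0.010, 0.020, 0.015])
scores = knowledge_gradient(mu, sigma, noise_sd=0.05)
print(scores, scores.argmax())  # the uncertain design 1 is worth measuring
```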
  • Item
    Measuring Deformations and Illumination Changes in Images with Applications to Face Recognition
    (2012) Jorstad, Anne; Jacobs, David; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This thesis explores object deformation and lighting change in images, proposing methods that account for both variabilities within a single framework. We construct a deformation- and lighting-insensitive metric that assigns a cost to a pair of images based on their similarity. The primary applications discussed will be in the domain of face recognition, because faces provide a good and important example of highly structured yet deformable objects with readily available datasets. However, our methods can be applied to any domain with deformations and lighting change. In order to model variations in expression, establishing point correspondences between faces is essential, and a primary goal of this thesis is to determine dense correspondences between pairs of face images, assigning a cost to each point pairing based on a novel image metric. We show that an image manifold can be defined to model deformations and illumination changes. Images are considered as points on a high-dimensional manifold given local structure by our new metric, where costs are based on changes in shape and intensity. Curves on this manifold describe transformations such as deformations and lighting changes to connect nearby images, or larger identity changes connecting images far apart. This allows deformations to be introduced gradually over the course of several images, where correspondences are well-defined between every pair of adjacent images along a path. The similarity between two images on the manifold can be defined as the length of the geodesic that connects them. The new local metric is validated in an optical flow-like framework where it is used to determine a dense correspondence vector field between pairs of images. We then demonstrate how to find geodesics between pairs of images on a Riemannian image manifold. The new lighting-insensitive metric is described in the wavelet domain where it is able to handle moderate amounts of deformation, and allows us to derive an algorithm where the analytic geodesics between images can be computed extremely efficiently. To handle larger deformations in addition to changes in illumination, we consider an algorithmic framework where deformations are modeled with diffeomorphisms. We present preliminary implementations of the diffeomorphic framework, and suggest how this work can be extended for further applications.
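    As a rough illustration of lighting-insensitive comparison in the wavelet domain (a stand-in sketch, not the metric developed in the thesis), the code below assumes the PyWavelets package and simply discards the approximation band, where smooth illumination changes concentrate, before comparing normalized detail coefficients.

```python
import numpy as np
import pywt  # PyWavelets

def lighting_insensitive_distance(img1, img2, wavelet="haar"):
    """Compare images via normalized wavelet detail coefficients.

    Smooth illumination change mostly perturbs the low-frequency
    approximation band, so a distance on normalized detail bands is
    less sensitive to it. Illustrative only.
    """
    def details(img):
        _, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
        d = np.concatenate([cH.ravel(), cV.ravel(), cD.ravel()])
        n = np.linalg.norm(d)
        return d / n if n > 0 else d
    return np.linalg.norm(details(img1) - details(img2))

img = np.outer(np.arange(8.0), np.ones(8))        # a simple intensity ramp
print(lighting_insensitive_distance(img, img + 5.0))  # ~ 0: offset is low-frequency
```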
  • Item
    Column Generation in Infeasible Predictor-Corrector Methods for Solving Linear Programs
    (2009) Nicholls, Stacey Oneeta; O'Leary, Dianne P.; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Primal–dual interior-point methods (IPMs) are distinguished for their exceptional theoretical properties and computational behavior in solving linear programming (LP) problems. Consider solving the primal–dual LP pair using an IPM such as a primal–dual affine-scaling method, Mehrotra's predictor–corrector method (the most commonly used IPM to date), or Potra's predictor–corrector method. The bulk of the computation stems from forming the normal-equations matrix AD²Aᵀ, where A ∈ ℝ^(m×n) and D² = S⁻¹X is a diagonal matrix. In cases when n ≫ m, we propose to reduce this cost by incorporating a column generation scheme into existing infeasible IPMs for solving LPs. In particular, we solve an LP problem with an iterative approach in which we select a "small" subset of the constraints at each iteration, with the aim of achieving both feasibility and optimality. Rather than n constraints, we work with k = |Q| ∈ [m, n] constraints at each iteration, where Q is an index set consisting of the k most nearly active constraints at the current iterate. The cost of forming the matrix A_Q D_Q² A_Qᵀ is reduced from Θ(m²n) to Θ(m²k) operations, where k is small relative to n. Although numerical results show an occasional increase in the number of iterations, the total operation count and time to solve the LP using our algorithms are, in most cases, small compared to other "reduced" LP algorithms.
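    The cost saving at the heart of the method can be seen in a few lines. The sketch below is illustrative: the column-selection rule shown (smallest slack s_j) is a stand-in for the dissertation's "most nearly active" criterion, and all names are hypothetical.

```python
import numpy as np

def reduced_normal_matrix(A, x, s, k):
    """Form A_Q D_Q^2 A_Q^T using only the k most nearly active columns.

    A: m x n constraint matrix; x, s: current primal and dual-slack iterates.
    D^2 = S^{-1} X, so the diagonal restricted to Q is x[Q] / s[Q].
    Cost drops from Theta(m^2 n) to Theta(m^2 k).
    """
    Q = np.argsort(s)[:k]        # indices of the k near-active constraints
    d2 = x[Q] / s[Q]             # diagonal of D_Q^2
    AQ = A[:, Q]
    return (AQ * d2) @ AQ.T      # scales each column of A_Q, then multiplies

m, n, k = 50, 5000, 200
rng = np.random.default_rng(1)
A = rng.standard_normal((m, n))
x = rng.random(n) + 0.1
s = rng.random(n) + 0.1
M = reduced_normal_matrix(A, x, s, k)
print(M.shape)                   # (50, 50), built from 200 columns, not 5000
```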