Theses and Dissertations from UMD
Permanent URI for this community: http://hdl.handle.net/1903/2
New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a four-month delay in the appearance of a given thesis/dissertation in DRUM.
More information is available at Theses and Dissertations at University of Maryland Libraries.
3 results
Search Results
Item: Infeasible Constraint Reduction for Linear and Convex Quadratic Optimization (2011)
He, Meiyun; Tits, Andre L; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Constraint-reduction schemes have been proposed in interior-point methods for linear and convex quadratic programs with many more inequality constraints than variables in standard dual form. Such schemes have been shown to be provably convergent and highly efficient in practice. A critical requirement of these schemes, however, is the availability of a dual-feasible initial point. The thesis first addresses this requirement for linear optimization. Building on a general framework (which encompasses several previously proposed approaches) for dual-feasible constraint-reduced interior-point optimization, for which we prove convergence of the sequence of dual iterates to a single point, we propose a framework for "infeasible" constraint-reduced interior-point methods. Central to this framework is an exact ($\ell_1$ or $\ell_\infty$) penalty function scheme, endowed with an iterative adjustment of the penalty parameter that aims to yield, after a finite number of updates, a value that guarantees feasibility (for the original problem) of the minimizers. Finiteness of the sequence of penalty parameter adjustments is proved under mild feasibility assumptions for all algorithms that fit within the framework, including "infeasible" extensions of a "dual" algorithm proposed in the early 1990s and of two recently proposed "primal-dual" algorithms. One of the latter two, a constraint-reduced variant of Mehrotra's Predictor-Corrector algorithm, is then considered in more detail. Further convergence results are proved for the corresponding infeasible method, which is also analyzed without feasibility assumptions. Next, the constraint-reduced scheme with arbitrary initial points is extended to the more general case of convex quadratic optimization, and a stronger global convergence result is proved that generalizes the result in the linear case. Numerical results demonstrate that the infeasible constraint-reduced algorithms are of practical interest; in particular, in model-predictive-control-based rotorcraft control, our algorithms yield a speed-up of more than a factor of two for both altitude control and trajectory following.

Item: Column Generation in Infeasible Predictor-Corrector Methods for Solving Linear Programs (2009)
Nicholls, Stacey Oneeta; O'Leary, Dianne P.; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Primal-dual interior-point methods (IPMs) are distinguished for their exceptional theoretical properties and computational behavior in solving linear programming (LP) problems. Consider solving the primal-dual LP pair using an IPM such as a primal-dual affine-scaling method, Mehrotra's Predictor-Corrector method (the most commonly used IPM to date), or Potra's Predictor-Corrector method. The bulk of the computation stems from the formation of the normal-equation matrix $AD^2A^T$, where $A \in \mathbb{R}^{m \times n}$ and $D^2 = S^{-1}X$ is a diagonal matrix. In cases where $n \gg m$, we propose to reduce this cost by incorporating a column generation scheme into existing infeasible IPMs for solving LPs. In particular, we solve an LP problem with an iterative approach in which we select a "small" subset of the constraints at each iteration, with the aim of achieving both feasibility and optimality. Rather than $n$ constraints, we work with $k = |Q| \in [m, n]$ constraints at each iteration, where $Q$ is an index set consisting of the $k$ most nearly active constraints at the current iterate. The cost of forming the matrix $A_Q D_Q^2 A_Q^T$ drops from $\Theta(m^2 n)$ to $\Theta(m^2 k)$ operations, where $k$ is relatively small compared to $n$. Although numerical results show an occasional increase in the number of iterations, the total operation count and time to solve the LP using our algorithms are, in most cases, smaller than for other "reduced" LP algorithms.
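The cost comparison in the column-generation abstract above is easy to see in code. The following is a minimal NumPy sketch, not taken from either thesis, of how the reduced normal-equation matrix $A_Q D_Q^2 A_Q^T$ might be formed from the $k$ most nearly active constraints; the function name, the slack-based selection rule, and the toy data are illustrative assumptions.

```python
import numpy as np

def reduced_normal_matrix(A, x, s, k):
    """Sketch: form A_Q D_Q^2 A_Q^T from the k most nearly active constraints.

    A : (m, n) constraint matrix with n >> m
    x, s : positive primal iterate and dual slacks, each of length n
    k : number of constraints kept, with m <= k <= n
    """
    Q = np.argsort(s)[:k]      # indices of the k smallest slacks (most nearly active)
    d2 = x[Q] / s[Q]           # diagonal of D_Q^2 = S_Q^{-1} X_Q
    AQ = A[:, Q]               # m-by-k submatrix A_Q
    return (AQ * d2) @ AQ.T    # m-by-m matrix, formed in O(m^2 k) rather than O(m^2 n)

# Toy illustration with random data.
rng = np.random.default_rng(0)
m, n, k = 50, 5000, 200
A = rng.standard_normal((m, n))
x = rng.random(n) + 0.1
s = rng.random(n) + 0.1
M = reduced_normal_matrix(A, x, s, k)    # 50-by-50 reduced normal-equation matrix
```

Roughly the same kind of submatrix selection also underlies the constraint-reduction schemes of the first abstract; what differs between the methods is how $Q$ is chosen and how convergence is safeguarded.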
Item: Matrix Factorizations, Triadic Matrices, and Modified Cholesky Factorizations for Optimization (2006-08-07)
Fang, Haw-ren; O'Leary, Dianne P; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This thesis focuses on the Cholesky-related factorizations of symmetric matrices and their application to Newton-type optimization. A matrix is called triadic if it has at most two nonzero off-diagonal elements in each column; tridiagonal matrices are a special case. We prove that the triadic structure is preserved in the Cholesky-related factorizations, analyze their numerical stability, and present a perturbation analysis. Newton-like methods solve nonlinear programming problems whose objective function and constraint functions are twice continuously differentiable. At each iteration, a search direction is computed by solving a symmetric linear system $Ax = b$. When $A$ is not positive definite, the computed search direction may not be a descent direction. Modified Newton methods add a perturbation $E$ to $A$ so that $A + E$ is positive definite, where $E$ is symmetric positive semidefinite. We study the modified Newton methods in the literature and develop other stable and efficient algorithms, one of which exploits the merits of the triadic structure. We apply our modified Newton methods to the Euclidean distance matrix completion problem (EDMCP). Given $n$ points in Euclidean space, the Euclidean distance matrix (EDM) is the real symmetric matrix whose $(i, j)$ entry is the square of the Euclidean distance between the $i$-th and $j$-th points. Given a partial Euclidean distance matrix (i.e., one with some entries unspecified), the EDMCP is to find an EDM that includes the specified entries. We tackle the EDMCP by transforming it into a global optimization problem and applying modified Newton methods. We also develop a dimensional relaxation method for global minimization and test it on sample problems including protein structure prediction.
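As a point of reference for the modified Newton discussion in the last abstract, here is a minimal NumPy sketch of the general idea of perturbing an indefinite matrix before solving for a search direction. It uses the standard "add a multiple of the identity" safeguard rather than the triadic modified Cholesky factorizations developed in the thesis; the function name and its parameters are illustrative assumptions.

```python
import numpy as np

def modified_newton_direction(A, b, beta=1e-3, max_tries=60):
    """Sketch: solve (A + E) d = b with E = tau*I chosen so A + E is positive definite.

    This is the textbook "add a multiple of the identity" safeguard, shown only to
    illustrate the idea of a modified Newton step; it is not the triadic modified
    Cholesky algorithm of the thesis.
    """
    n = A.shape[0]
    min_diag = np.min(np.diag(A))
    tau = 0.0 if min_diag > 0 else beta - min_diag       # initial shift
    for _ in range(max_tries):
        try:
            L = np.linalg.cholesky(A + tau * np.eye(n))  # fails if not positive definite
        except np.linalg.LinAlgError:
            tau = max(2.0 * tau, beta)                   # increase the shift and retry
            continue
        y = np.linalg.solve(L, b)                        # forward substitution: L y = b
        return np.linalg.solve(L.T, y)                   # back substitution: L^T d = y
    raise RuntimeError("no positive definite A + tau*I found")
```

When $A$ is already positive definite the shift stays at zero and the ordinary Newton direction is returned; otherwise the shift grows geometrically until the Cholesky factorization succeeds.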