# MATRIX REDUCTION IN NUMERICAL OPTIMIZATION

- **Author:** Sungwoo Park
- **Advisor:** Dianne P. O'Leary
- **Date issued:** 2011
- **URI:** http://hdl.handle.net/1903/11751

## Abstract

Matrix reduction, the elimination of some terms in the expansion of a matrix, has been applied to numerical problems in many different areas. Because matrix reduction serves different purposes in different problems, the reduced matrices also carry different meanings. In regression problems in statistics, the reduced parts of the matrix are treated as noise or observation error, so matrix reduction purifies the given raw data. In factor analysis and principal component analysis (PCA), the reduced parts are regarded as idiosyncratic (unsystematic) factors, which are not shared in common by multiple variables. In solving constrained convex optimization problems, the reduced terms correspond to unnecessary (inactive) constraints that do not help the search for an optimal solution.

In applying matrix reduction, it is both critical and difficult to decide how, and how much, to reduce the matrix. The decision is critical because it determines the quality of the reduced matrix and of the final solution: if we reduce too much, fundamental properties are lost; if we reduce too little, the reduction yields too small a benefit. It is difficult because the criteria for the reduction must be tailored to the particular type of problem.

In this study, we investigate matrix reduction for three numerical optimization problems. First, the total least squares problem uses matrix reduction to remove noise from observed data that follow an underlying linear model. We propose a new method that makes the matrix reduction succeed under relaxed noise assumptions.
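For orientation, the classical total least squares solution (Golub and Van Loan) already performs a matrix reduction of this kind: the augmented data matrix [A | b] is replaced by its nearest rank-n approximation via the SVD. The sketch below shows that baseline only, not the dissertation's relaxed-noise method; the function name `tls` and the toy data are illustrative.

```python
import numpy as np

def tls(A, b):
    """Classical TLS sketch: solve (A + E) x = b + r, minimizing ||[E r]||_F.

    The noise [E r] is the part removed when [A | b] is reduced to rank n.
    """
    m, n = A.shape
    C = np.column_stack([A, b])      # augmented matrix [A | b]
    U, s, Vt = np.linalg.svd(C)
    v = Vt[-1]                       # right singular vector of the smallest singular value
    if np.isclose(v[-1], 0.0):
        raise np.linalg.LinAlgError("TLS solution does not exist (last component is zero)")
    return -v[:n] / v[n]             # [x; -1] is parallel to v

# Usage: recover x from data perturbed by noise in both A and b.
rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0])
A = rng.standard_normal((100, 2))
b = A @ x_true
x = tls(A + 0.01 * rng.standard_normal(A.shape),
        b + 0.01 * rng.standard_normal(b.shape))
```

Because noise enters both A and b (the errors-in-variables setting named in the keywords), ordinary least squares is biased here, while the rank reduction treats both perturbations symmetrically.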
Second, we apply matrix reduction to estimating the covariance matrix of stock returns used in financial portfolio optimization. We place all previously proposed estimation methods in a common framework and present a new and effective Tikhonov method. Third, we present a new algorithm that solves semidefinite programming problems by adaptively reducing inactive constraints; here the object of the matrix reduction is the Schur complement matrix for the Newton equations. For all three problems, we propose appropriate criteria to determine the intensity of the matrix reduction, and we verify the correctness of these criteria by experimental results and mathematical proof.

- **Type:** Dissertation
- **Publisher:** Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
- **Department:** Computer Science
- **Subjects:** Computer Science; Statistics; Applied Mathematics
- **Keywords:** constraint reduction; covariance matrix; errors in variables; matrix reduction; semidefinite programming; total least squares
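To make the covariance-estimation use of regularization concrete, the sketch below shows a generic Tikhonov-style estimate, in which the sample covariance S is shifted toward a multiple of the identity so the result is positive definite. This is only an illustration of the general idea; the dissertation's Tikhonov estimator and its criterion for choosing the regularization intensity differ in detail, and `lam` here is an arbitrary illustrative value.

```python
import numpy as np

def tikhonov_covariance(returns, lam):
    """Generic Tikhonov-style covariance sketch: Sigma(lam) = S + lam * I.

    returns : T x p array of observations (rows = periods, cols = assets)
    lam     : nonnegative regularization weight (illustrative, not a tuned value)
    """
    S = np.cov(returns, rowvar=False)    # p x p sample covariance
    p = S.shape[0]
    return S + lam * np.eye(p)           # shifts every eigenvalue up by lam

# Usage: 50 periods of returns on 10 hypothetical assets.
rng = np.random.default_rng(1)
R = rng.standard_normal((50, 10))
Sigma = tikhonov_covariance(R, lam=0.1)
```

The shift guarantees a symmetric positive definite matrix, which is what a mean-variance portfolio optimizer requires even when T is small relative to p and the raw sample covariance is ill-conditioned or singular.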