Technical Reports from UMIACS
Permanent URI for this collection: http://hdl.handle.net/1903/7
Item On Orthogonalization in the Inverse Power Method (1999-10-13)
Stewart, G. W.
When the inverse power method is used to compute eigenvectors of a symmetric matrix corresponding to close eigenvalues, the computed eigenvectors may not be orthogonal. The cure for the problem is to orthogonalize the vectors using the Gram--Schmidt algorithm. In this note it is shown that the orthogonalization process does not cause the quality of the eigenvectors to deteriorate. (Also cross-referenced as UMIACS-TR-99-64)

Item An Analysis of the Rayleigh--Ritz Method for Approximating Eigenspaces (1999-05-11)
Jia, Zhongxiao; Stewart, G. W.
This paper concerns the Rayleigh--Ritz method for computing an approximation to an eigenspace $\mathcal{X}$ of a general matrix $A$ from a subspace $\mathcal{W}$ that contains an approximation to $\mathcal{X}$. The method produces a pair $(N, \tilde X)$ that purports to approximate a pair $(L, X)$, where $X$ is a basis for $\mathcal{X}$ and $AX = XL$. In this paper we consider the convergence of $(N, \tilde X)$ as the sine $\epsilon$ of the angle between $\mathcal{X}$ and $\mathcal{W}$ approaches zero. It is shown that under a natural hypothesis\,---\,called the uniform separation condition\,---\,the Ritz pairs $(N, \tilde X)$ converge to the eigenpair $(L, X)$. When one is concerned with eigenvalues and eigenvectors, one can compute certain refined Ritz vectors whose convergence is guaranteed, even when the uniform separation condition is not satisfied. An attractive feature of the analysis is that it does not assume that $A$ has distinct eigenvalues or is diagonalizable. (Also cross-referenced as UMIACS-TR-99-24)

Item On the Convergence of Ritz Values, Ritz Vectors, and Refined Ritz Vectors (1999-01-29)
Jia, Zhongxiao; Stewart, G. W.
This paper concerns the Rayleigh--Ritz method for computing an approximation to an eigenpair $(\lambda, x)$ of a non-Hermitian matrix $A$. Given a subspace $\mathcal{W}$ that contains an approximation to $x$, this method returns an approximation $(\mu, \tilde x)$ to $(\lambda, x)$. We establish four convergence results that hold as the deviation $\epsilon$ of $x$ from $\mathcal{W}$ approaches zero. First, the Ritz value $\mu$ converges to $\lambda$. Second, if the residual $A\tilde x - \mu\tilde x$ approaches zero, then the Ritz vector $\tilde x$ converges to $x$. Third, we give a condition on the eigenvalues of the Rayleigh quotient from which the Ritz pair is computed that ensures convergence of the Ritz vector. Finally, we show that certain refined Ritz vectors converge unconditionally. (Also cross-referenced as UMIACS-TR-99-08)
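
The inverse-power-method note above (UMIACS-TR-99-64) admits a small illustration. The following NumPy sketch is not taken from the report; the test matrix, shifts, and iteration count are arbitrary choices made here to reproduce the setting the note analyzes: inverse iteration on a symmetric matrix with two close eigenvalues can return vectors that are far from orthogonal, and a Gram--Schmidt pass restores orthogonality (the note's point being that this pass does not degrade the quality of the computed eigenvectors).

    import numpy as np

    def inverse_iteration(A, shift, x0, iters=5):
        # A few steps of inverse iteration: repeatedly solve (A - shift*I) x_new = x.
        x = x0 / np.linalg.norm(x0)
        M = A - shift * np.eye(A.shape[0])
        for _ in range(iters):
            x = np.linalg.solve(M, x)
            x /= np.linalg.norm(x)
        return x

    def gram_schmidt(V):
        # Orthonormalize the columns of V (one classical Gram--Schmidt pass).
        Q = np.zeros_like(V)
        for j in range(V.shape[1]):
            v = V[:, j] - Q[:, :j] @ (Q[:, :j].T @ V[:, j])
            Q[:, j] = v / np.linalg.norm(v)
        return Q

    # Symmetric test matrix with two close eigenvalues, 1.0 and 1.0 + 1e-8.
    rng = np.random.default_rng(1)
    Q0, _ = np.linalg.qr(rng.standard_normal((6, 6)))
    A = Q0 @ np.diag([1.0, 1.0 + 1e-8, 2.0, 3.0, 4.0, 5.0]) @ Q0.T

    v1 = inverse_iteration(A, 0.999999, rng.standard_normal(6))
    v2 = inverse_iteration(A, 1.000001, rng.standard_normal(6))
    print("inner product before Gram--Schmidt:", v1 @ v2)           # often far from 0
    V = gram_schmidt(np.column_stack([v1, v2]))
    print("inner product after Gram--Schmidt: ", V[:, 0] @ V[:, 1])  # near machine precision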
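
The two Rayleigh--Ritz entries above can likewise be illustrated with a minimal sketch. The subspace construction and test matrix below are illustrative assumptions, not material from the reports; the sketch only shows the standard Rayleigh--Ritz projection and a refined Ritz vector obtained, as the abstracts describe, by minimizing the residual norm over the subspace.

    import numpy as np

    def rayleigh_ritz(A, W):
        # Ritz pairs of A with respect to the subspace spanned by the
        # orthonormal columns of W.
        B = W.conj().T @ A @ W            # Rayleigh quotient (small projected matrix)
        mus, Y = np.linalg.eig(B)
        return mus, W @ Y                 # Ritz values and Ritz vectors

    def refined_ritz_vector(A, W, mu):
        # Unit vector z in range(W) minimizing ||A z - mu z||_2: take the right
        # singular vector of A W - mu W belonging to the smallest singular value.
        _, _, Vh = np.linalg.svd(A @ W - mu * W)
        return W @ Vh[-1].conj()

    # Illustrative setting: a Krylov subspace that roughly contains the
    # dominant eigenvector of a random matrix.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 50))
    K = np.empty((50, 8))
    v = rng.standard_normal(50)
    for k in range(8):
        K[:, k] = v
        v = A @ v
    W, _ = np.linalg.qr(K)                # orthonormal basis of the subspace

    mus, X = rayleigh_ritz(A, W)
    i = int(np.argmax(np.abs(mus)))       # Ritz pair for the dominant eigenvalue
    x = X[:, i] / np.linalg.norm(X[:, i])
    z = refined_ritz_vector(A, W, mus[i])
    print("Ritz residual:        ", np.linalg.norm(A @ x - mus[i] * x))
    print("refined Ritz residual:", np.linalg.norm(A @ z - mus[i] * z))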
Item Three Results on Iterative Regularization (1998-11-03)
Kilmer, Misha; Stewart, G. W.
In this paper we present three theorems which give insight into the regularizing properties of MINRES. While our theory does not completely characterize the regularizing behavior of the algorithm, it provides a partial explanation of the observed behavior of the method. Unlike traditional attempts to explain the regularizing properties of Krylov subspace methods, our approach focuses on convergence properties of the residual rather than on convergence analysis of the harmonic Ritz values. The import of our analysis is illustrated by two examples. In particular, our theoretical and numerical results support the following important observation: in some circumstances the dimension of the optimal Krylov subspace can be much smaller than the number of components of the truncated spectral solution that must be computed to attain comparable accuracy. (Also cross-referenced as UMIACS-TR-98-62)

Item Two Algorithms for the Efficient Computation of Truncated Pivoted QR Approximations to a Sparse Matrix (1998-10-15)
Stewart, G. W.
In this note we propose two algorithms to compute truncated pivoted QR approximations to a sparse matrix. One is based on the Gram--Schmidt algorithm, and the other on Householder triangularization. Both algorithms leave the original matrix unchanged, and the only additional storage requirements are arrays to contain the factorization itself. Thus, the algorithms are particularly suited to determining low-rank approximations to a sparse matrix. (Also cross-referenced as UMIACS-TR-98-12)

Item On the Adjoint Matrix (1998-10-15)
Stewart, G. W.
The adjoint $A^{\mathrm{adj}}$ of a matrix $A$ is the transpose of the matrix of the cofactors of the elements of $A$. The computation of the adjoint from its definition involves the computation of $n^{2}$ determinants of order $(n-1)$\,---\,a prohibitively expensive $O(n^{4})$ process. On the other hand, the computation from the formula $A^{\mathrm{adj}} = \det(A)A^{-1}$ breaks down when $A$ is singular and is potentially unstable when $A$ is ill-conditioned. In this paper we first show that the adjoint can be perfectly conditioned, even when $A$ is ill-conditioned. We then show that if due care is taken the adjoint can be accurately computed from the inverse, even when the latter has been inaccurately computed. In an appendix to this paper we establish a folk result on the accuracy of computed inverses. (Also cross-referenced as UMIACS-TR-98-02)

Item On an Inexpensive Triangular Approximation to the Singular Value Decomposition (1998-10-15)
Stewart, G. W.
In this paper we introduce a new decomposition called the pivoted QLP decomposition. It is computed by applying pivoted orthogonal triangularization to the columns of the matrix $X$ in question to get an upper triangular factor $R$ and then applying the same procedure to the rows of $R$ to get a lower triangular matrix $L$. The diagonal elements of $R$ are called the R-values of $X$; those of $L$ are called the L-values. Numerical examples show that the L-values track the singular values of $X$ with considerable fidelity\,---\,far better than the R-values. At a gap in the L-values the decomposition provides orthonormal bases of analogues of the row, column, and null spaces of $X$. The decomposition requires no more than twice the work required for a pivoted QR decomposition. The computation of $R$ and $L$ can be interleaved, so that the computation can be terminated at any suitable point, which makes the decomposition especially suitable for low-rank determination problems. The interleaved algorithm also suggests a new, efficient 2-norm estimator. (Also cross-referenced as UMIACS-TR-97-75)
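
For the entry "Two Algorithms for the Efficient Computation of Truncated Pivoted QR Approximations to a Sparse Matrix" above, the following rough sketch shows the flavor of the Gram--Schmidt variant: the sparse matrix is read but never modified, and only the factors are stored. The pivoting bookkeeping, the lack of reorthogonalization, and the test matrix are simplifying assumptions of this sketch, not the report's algorithm.

    import numpy as np
    from scipy import sparse

    def truncated_pivoted_qr_gs(A, k):
        # First k steps of a column-pivoted QR via classical Gram--Schmidt.
        # A (sparse, m x n) is never modified; only Q (m x k) and R (k x n) are stored,
        # and Q @ R is the projection of A onto the span of the chosen pivot columns.
        m, n = A.shape
        Q = np.zeros((m, k))
        R = np.zeros((k, n))
        norms2 = np.asarray(A.multiply(A).sum(axis=0)).ravel()  # squared column norms
        piv = []
        for step in range(k):
            p = int(np.argmax(norms2))                  # pivot: largest remaining norm
            a_p = A[:, [p]].toarray().ravel()
            q = a_p - Q[:, :step] @ R[:step, p]         # orthogonalize against earlier q's
            q /= np.linalg.norm(q)
            Q[:, step] = q
            R[step, :] = A.T @ q                        # row of R: q^T a_j for every column
            norms2 -= R[step, :] ** 2                   # downdate the remaining norms
            norms2[p] = -np.inf                         # never pick this column again
            piv.append(p)
        return Q, R, piv

    # Illustrative use on a random sparse matrix (sizes and density are arbitrary).
    A = sparse.random(200, 100, density=0.05, format="csc", random_state=0)
    Q, R, piv = truncated_pivoted_qr_gs(A, 10)
    print("rank-10 approximation error:", np.linalg.norm(A.toarray() - Q @ R, 2))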
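
For the entry "On the Adjoint Matrix" above, a small sketch of the two computational routes the abstract contrasts. The SVD-based alternative used for comparison is a standard formula, not necessarily the construction analyzed in the report, and the test matrix is an arbitrary ill-conditioned example.

    import numpy as np

    def adjugate_from_inverse(A):
        # Direct use of the formula A^adj = det(A) * A^{-1}; it fails for singular A
        # and, as the abstract notes, can be unstable when A is ill-conditioned.
        return np.linalg.det(A) * np.linalg.inv(A)

    def adjugate_from_svd(A):
        # A standard stable alternative (not necessarily the report's construction):
        # if A = U diag(s) V^T, then A^adj = det(U V^T) * V diag(d) U^T, where
        # d_i is the product of the singular values other than s_i; no small divisors.
        U, s, Vt = np.linalg.svd(A)
        d = np.array([np.prod(np.delete(s, i)) for i in range(len(s))])
        return np.linalg.det(U @ Vt) * (Vt.T * d) @ U.T

    # Ill-conditioned example: the naive route typically loses several digits here,
    # even though the adjugate itself is a perfectly benign matrix.
    rng = np.random.default_rng(0)
    U0, _ = np.linalg.qr(rng.standard_normal((4, 4)))
    V0, _ = np.linalg.qr(rng.standard_normal((4, 4)))
    A = U0 @ np.diag([1.0, 1.0, 1.0, 1e-12]) @ V0.T
    diff = adjugate_from_inverse(A) - adjugate_from_svd(A)
    print(np.linalg.norm(diff) / np.linalg.norm(adjugate_from_svd(A)))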
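
For the pivoted QLP entry above, a minimal sketch built from two passes of SciPy's pivoted QR. It is not the interleaved algorithm of the report, and the test matrix with a singular-value gap is an illustrative assumption; it simply lets one compare R-values, L-values, and singular values.

    import numpy as np
    from scipy.linalg import qr

    def pivoted_qlp(X):
        # Two passes of pivoted QR: the first, on the columns of X, gives the upper
        # triangular R (its diagonal entries are the R-values); the second, applied
        # to the rows of R (i.e. to R.T), gives the lower triangular L (the L-values).
        _, R, _ = qr(X, mode="economic", pivoting=True)
        _, R2, _ = qr(R.T, mode="economic", pivoting=True)
        return R, R2.T

    # Test matrix with a gap in its singular values after index 20.
    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.standard_normal((40, 40)))
    V, _ = np.linalg.qr(rng.standard_normal((40, 40)))
    sigma = np.concatenate([np.linspace(10.0, 5.0, 20), np.linspace(1e-3, 1e-4, 20)])
    X = U @ np.diag(sigma) @ V.T

    R, L = pivoted_qlp(X)
    print("singular values near the gap:", np.linalg.svd(X, compute_uv=False)[18:22])
    print("R-values near the gap:       ", np.abs(np.diag(R))[18:22])
    print("L-values near the gap:       ", np.abs(np.diag(L))[18:22])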
Item On the Perturbation of LU and Cholesky Factors (1998-10-15)
Stewart, G. W.
In a recent paper, Chang and Paige have shown that the usual perturbation bounds for Cholesky factors can systematically overestimate the errors. In this note we sharpen their results and extend them to the factors of the LU decomposition. The results are based on a new formula for the first order terms of the error in the factors. (Also cross-referenced as UMIACS-TR-95-93)

Item On Sublinear Convergence (1998-10-15)
Stewart, G. W.
This note develops a theory of sublinearly converging sequences, including a categorization of the rates of convergence and a method for determining the rate from an iteration function. (Also cross-referenced as UMIACS-TR-95-92)

Item The Triangular Matrices of Gaussian Elimination and Related Decompositions (1998-10-15)
Stewart, G. W.
It has become a commonplace that triangular systems are solved to higher accuracy than their condition would warrant. This observation is not true in general, and counterexamples are easy to construct. However, it is often true of the triangular matrices from pivoted LU or QR decompositions. It is shown that this fact is closely connected with the rank-revealing character of these decompositions. (Also cross-referenced as UMIACS-TR-95-91)
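
The last entry's observation can be probed numerically. The sketch below is not from the report; it uses the standard componentwise backward-error bound for substitution (the forward error is at most about $n$ times machine epsilon times $\| |R^{-1}|\,|R| \|_\infty$), a quantity that is often far smaller than $\kappa(R)$ for the triangular factor of a pivoted QR decomposition. The test matrix and its singular-value distribution are arbitrary choices.

    import numpy as np
    from scipy.linalg import qr

    # Compare the usual condition number of R with the componentwise quantity
    # || |R^{-1}| |R| ||_inf, which (times roughly n * machine epsilon) bounds the
    # forward error of back substitution.  For the R of a pivoted QR factorization
    # of a graded matrix the second quantity is usually much smaller than the first.
    rng = np.random.default_rng(0)
    n = 30
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = U @ np.diag(np.logspace(0, -10, n)) @ V.T      # kappa(A) is about 1e10

    _, R, _ = qr(A, pivoting=True)                     # triangular factor of pivoted QR
    Rinv = np.linalg.inv(R)                            # acceptable here: n is tiny
    kappa = np.linalg.norm(R, np.inf) * np.linalg.norm(Rinv, np.inf)
    skeel = np.linalg.norm(np.abs(Rinv) @ np.abs(R), np.inf)
    print("kappa_inf(R)        :", f"{kappa:.2e}")     # on the order of 1e10 here
    print("|| |R^-1| |R| ||_inf:", f"{skeel:.2e}")     # usually orders of magnitude smaller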