Browsing by Author "Sorensen, D.C."
Now showing 1 - 20 of 27
Item: A DEIM Induced CUR Factorization (2014-07). Sorensen, D.C.; Embree, M.
We derive a CUR matrix factorization based on the Discrete Empirical Interpolation Method (DEIM). For a given matrix A, such a factorization provides a low-rank approximate decomposition of the form A ≈ CUR, where C and R are subsets of the columns and rows of A, and U is constructed to make CUR a good approximation. Given a low-rank singular value decomposition A ≈ VSW^T, the DEIM procedure uses V and W to select the columns and rows of A that form C and R. Through an error analysis applicable to a general class of CUR factorizations, we show that the accuracy tracks the optimal approximation error within a factor that depends on the conditioning of submatrices of V and W. For large-scale problems, V and W can be approximated using an incremental QR algorithm that makes one pass through A. Numerical examples illustrate the favorable performance of the DEIM-CUR method, compared to CUR approximations based on leverage scores.

Item: A DEIM Induced CUR Factorization (SIAM, 2016). Sorensen, D.C.; Embree, Mark
We derive a CUR approximate matrix factorization based on the discrete empirical interpolation method (DEIM). For a given matrix ${\bf A}$, such a factorization provides a low-rank approximate decomposition of the form ${\bf A} \approx \bf C \bf U \bf R$, where ${\bf C}$ and ${\bf R}$ are subsets of the columns and rows of ${\bf A}$, and ${\bf U}$ is constructed to make $\bf C\bf U \bf R$ a good approximation. Given a low-rank singular value decomposition ${\bf A} \approx \bf V \bf S \bf W^T$, the DEIM procedure uses ${\bf V}$ and ${\bf W}$ to select the columns and rows of ${\bf A}$ that form ${\bf C}$ and ${\bf R}$. Through an error analysis applicable to a general class of CUR factorizations, we show that the accuracy tracks the optimal approximation error within a factor that depends on the conditioning of submatrices of ${\bf V}$ and ${\bf W}$.
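The DEIM selection step described above admits a compact sketch: having chosen some row indices, interpolate the next singular vector at those rows and pick the new index where the interpolation residual is largest. The pure-Python illustration below is ours, not the paper's code (the helper names `solve` and `deim_indices` are invented; a real implementation would use LU or QR solves from LAPACK):

```python
def solve(M, rhs):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(M)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def deim_indices(V):
    # V: list of m rows, each with k entries (the leading singular vectors).
    m, k = len(V), len(V[0])
    p = [max(range(m), key=lambda i: abs(V[i][0]))]
    for j in range(1, k):
        # Interpolate column j at the already-selected rows ...
        M = [[V[i][c] for c in range(j)] for i in p]
        c = solve(M, [V[i][j] for i in p])
        # ... and select the row where the interpolation residual is largest.
        r = [V[i][j] - sum(V[i][t] * c[t] for t in range(j)) for i in range(m)]
        p.append(max(range(m), key=lambda i: abs(r[i])))
    return p
```

Applied to V this yields the row indices that form R; applied to W it yields the column indices that form C.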
For very large problems, ${\bf V}$ and ${\bf W}$ can be approximated well using an incremental QR algorithm that makes only one pass through ${\bf A}$. Numerical examples illustrate the favorable performance of the DEIM-CUR method compared to CUR approximations based on leverage scores.

Item: A Modified Low-Rank Smith Method for Large-Scale Lyapunov Equations (2001-05). Antoulas, A.C.; Sorensen, D.C.; Gugercin, S.
In this note we present a modified cyclic low-rank Smith method to compute low-rank approximations to solutions of Lyapunov equations arising from large-scale dynamical systems. Unlike the original cyclic low-rank Smith method introduced by Penzl in [18], the number of columns in the approximate solutions does not necessarily increase at each step. The modified method usually requires far fewer columns than the original method, and never more. Upper bounds are established for the errors in the low-rank approximate solutions and also for the errors in the resulting approximate Hankel singular values. Numerical results are given to verify the efficiency and accuracy of the new algorithm.

Item: A New Matrix-Free Algorithm for the Large-Scale Trust-Region Subproblem (1995-07). Santos, S.A.; Sorensen, D.C.
The trust-region subproblem arises frequently in linear algebra and optimization applications. Recently, matrix-free methods have been introduced to solve large-scale trust-region subproblems. These methods require only matrix-vector products and do not rely on matrix factorizations. They recast the trust-region subproblem as a parameterized eigenvalue problem and then adjust the parameter to find the optimal solution from the eigenvector corresponding to the smallest eigenvalue of the parameterized problem. This paper presents a new matrix-free algorithm for the large-scale trust-region subproblem.
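The Smith iteration underlying the low-rank method above converts a stable continuous-time Lyapunov equation into a rapidly converging series via a Cayley transform. A scalar sketch with a single shift p (our simplification; the paper works with matrix iterates, low-rank factors, and cyclic shifts):

```python
import math

def lowrank_smith_scalar(a, b, p=1.0, tol=1e-12):
    # Solve a*x + x*a + b*b = 0 for a < 0 via the Smith series.
    # Cayley transform with shift p > 0 maps the problem to a Stein equation
    # with |a_p| < 1, so the series of "columns" converges geometrically.
    a_p = (a + p) / (a - p)
    b_p = math.sqrt(2 * p) * b / (a - p)
    x, col = 0.0, b_p
    while col * col > tol:
        x += col * col       # accumulate one rank-one term of the series
        col *= a_p           # next Smith term
    return x

x = lowrank_smith_scalar(-2.0, 1.0)
# exact solution of -2x + x*(-2) + 1 = 0 is x = 0.25
assert abs(x - 0.25) < 1e-10
```

In the matrix case each "column" is a block of the low-rank factor Z with X ≈ ZZ^T, and the modified method's point is to keep the number of such columns from growing unnecessarily.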
The new algorithm improves upon previous algorithms by introducing a unified iteration that naturally includes the so-called hard case. The new iteration is shown to be superlinearly convergent in all cases. Computational results are presented to illustrate the convergence properties and robustness of the method.

Item: A Quadratically Constrained Minimization Problem Arising from PDE of Monge-Ampère Type (2008-05). Sorensen, D.C.; Glowinski, Roland
This note develops theory and a solution technique for a quadratically constrained eigenvalue minimization problem. This class of problems arises in the numerical solution of fully nonlinear boundary value problems of Monge-Ampère type. Though it is most important in the three-dimensional case, the solution method is directly applicable to systems of arbitrary dimension. The focus here is on solving the minimization subproblem, which is part of a method to numerically solve a Monge-Ampère type equation. These subproblems must be solved many times in this numerical solution technique, and thus efficiency is of utmost importance. A novelty of this minimization algorithm is that it is finite, with complexity O(N^3), apart from the solution of a very simple rational equation in one variable. This equation is essentially the same for any dimension. This result is quite surprising given the nature of the minimization problem.

Item: A Survey of Model Reduction Methods for Large-Scale Systems (2000-12). Antoulas, A.C.; Sorensen, D.C.; Gugercin, S.
An overview of model reduction methods and a comparison of the resulting algorithms are presented. These approaches fall into two broad categories, namely SVD-based and moment-matching-based methods.
It turns out that the approximation error in the former case behaves better globally in frequency, while in the latter case the local behavior is better.

Item: A Symmetry Preserving Singular Value Decomposition (2005-01). Shah, M.; Sorensen, D.C.
A reduced-order representation of a large data set is often realized through a principal component analysis based upon a singular value decomposition (SVD) of the data. The left singular vectors of a truncated SVD provide the reduced basis. In several applications, such as facial analysis and protein dynamics, structural symmetry is inherent in the data. Typically, reflective or rotational symmetry is expected to be present in these applications. In protein dynamics, determining this symmetry allows one to provide SVD major modes of motion that best describe the symmetric movements of the protein. In face detection, symmetry in the SVD allows for more efficient compression algorithms. Here, we present a method to compute the plane of reflective symmetry or the axis of rotational symmetry of a large set of points. Moreover, we develop a symmetry-preserving singular value decomposition (SPSVD) that best approximates the given set while respecting the symmetry. Interesting subproblems arise in the presence of noisy data or in situations where most, but not all, of the structure is symmetric. An important part of the determination of the axis of rotational symmetry or the plane of reflective symmetry is an iterative re-weighting scheme. This scheme converges rapidly in practice and seems to be very effective in ignoring outliers (points that do not respect the symmetry).

Item: A Truncated RQ-iteration for Large Scale Eigenvalue Calculations (1996-04). Sorensen, D.C.; Yang, C.
We introduce a new Krylov subspace iteration for large-scale eigenvalue problems that is able to accelerate convergence through an inexact (iterative) solution to a shift-invert equation.
The new method can also take full advantage of an exact solution when it is possible to apply a sparse direct method to solve the shift-invert equations. We call this new iteration the Truncated RQ Iteration (TRQ). It is based upon a recursion that develops in the leading k columns of the implicitly shifted RQ iteration for dense matrices. The main advantage in the large-scale setting is that inverse-iteration-like convergence occurs in the leading column of the updated basis vectors. The leading k terms of a Schur decomposition rapidly emerge, with the desired eigenvalues appearing as the leading diagonal elements of the triangular matrix of the Schur decomposition. The updating equations for TRQ have a great deal in common with the update equations that define the Rational Krylov Method of Ruhe, and also with the projected correction equations that define the Jacobi-Davidson Method of van der Vorst et al. The TRQ iteration is quite competitive with the Rational Krylov Method when the shift-invert equations can be solved directly, and with the Jacobi-Davidson Method when these equations are solved inexactly with a preconditioned iterative method. The TRQ iteration is derived directly from the RQ iteration and thus inherits the convergence properties of that method. Existing RQ deflation strategies may be employed when necessary.

Item: A-Posteriori Error Estimation for DEIM Reduced Nonlinear Dynamical Systems (2012-08). Wirtz, D.; Sorensen, D.C.; Haasdonk, B.
In this work an efficient approach for a-posteriori error estimation for POD-DEIM reduced nonlinear dynamical systems is introduced. The considered nonlinear systems may also include time- and parameter-affine linear terms as well as parametrically dependent inputs and outputs. The reduction process involves a Galerkin projection of the full system and approximation of the system's nonlinearity by the DEIM method [Chaturantabut & Sorensen (2010)].
The proposed a-posteriori error estimator can be efficiently decomposed in an offline/online fashion and is obtained by solving a one-dimensional auxiliary ODE during reduced simulations. Key elements for efficient online computation are partial similarity transformations and matrix-DEIM approximations of the nonlinearity Jacobians. The theoretical results are illustrated by application to an unsteady Burgers equation and a cell apoptosis model.

Item: Accelerating the Lanczos Algorithm via Polynomial Spectral Transformations (1997-11). Sorensen, D.C.; Yang, C.
We consider the problem of computing a few clustered and/or interior eigenvalues of a symmetric matrix A without using a matrix factorization. This can be done by applying the Lanczos algorithm to p(A), where p(λ) is a polynomial that maps the clustered and/or interior eigenvalues of A to extremal and well-separated eigenvalues of p(A). We demonstrate and compare several techniques for constructing these polynomials. Numerical examples are presented to illustrate the effectiveness of using these polynomials to accelerate the Lanczos process.

Item: Accelerating the LSTRS Algorithm (2009-07). Lampe, J.; Rojas, M.; Sorensen, D.C.; Voss, H.
In a recent paper [Rojas, Santos, Sorensen: ACM ToMS 34 (2008), Article 11] an efficient method for solving the Large-Scale Trust-Region Subproblem was suggested, based on recasting it in terms of a parameter-dependent eigenvalue problem and adjusting the parameter iteratively. The essential work at each iteration is the solution of an eigenvalue problem for the smallest eigenvalue of the Hessian matrix (or the two smallest eigenvalues in the potential hard case) and associated eigenvector(s).
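The polynomial spectral transformation described above can be illustrated numerically: power iteration applied to p(A), with p(λ) = 1 − ((λ − c)/h)², converges to the eigenvector of the interior eigenvalue nearest the target c, since p maps it to the extremal eigenvalue of p(A). The quadratic filter and the diagonal test matrix below are our choices for illustration; the paper constructs and compares several polynomial families:

```python
def matvec(A, v):
    return [sum(a_ij * v_j for a_ij, v_j in zip(row, v)) for row in A]

def poly_matvec(A, v, c, h):
    # Apply p(A)v with p(x) = 1 - ((x - c)/h)^2, using only matvecs:
    # p(A)v = v - ((A - cI)/h)^2 v
    w = [(x - c * y) / h for x, y in zip(matvec(A, v), v)]
    w = [(x - c * y) / h for x, y in zip(matvec(A, w), w)]
    return [x - y for x, y in zip(v, w)]

# A has eigenvalues 0, 0.3, 0.5, 0.7, 1.0; we target the interior one at 0.5.
A = [[0.0, 0, 0, 0, 0], [0, 0.3, 0, 0, 0], [0, 0, 0.5, 0, 0],
     [0, 0, 0, 0.7, 0], [0, 0, 0, 0, 1.0]]
v = [1.0, 1.0, 1.0, 1.0, 1.0]
for _ in range(200):
    v = poly_matvec(A, v, 0.5, 0.5)
    nrm = max(abs(x) for x in v)
    v = [x / nrm for x in v]      # power iteration on p(A)
rayleigh = sum(x * y for x, y in zip(v, matvec(A, v))) / sum(x * x for x in v)
assert abs(rayleigh - 0.5) < 1e-8
```

The interior eigenvalue 0.5 is mapped to p(0.5) = 1 while its neighbors map to at most 0.84, so the unwanted components decay geometrically even though 0.5 is far from extremal in A itself.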
Replacing the implicitly restarted Lanczos method in the original paper with the Nonlinear Arnoldi method makes it possible to recycle most of the work from previous iterations, which can substantially accelerate LSTRS.

Item: An Efficient Algorithm for Calculating the Heat Capacity of a Large-scale Molecular System (2001-02). Yang, C.; Noid, D.W.; Sumpter, B.G.; Sorensen, D.C.; Tuzun, R.E.
We present an efficient algorithm for computing the heat capacity of a large-scale molecular system. The new algorithm is based on a special Gaussian quadrature whose abscissas and weights are obtained by a simple Lanczos iteration. Our numerical results indicate that this new computational scheme is quite accurate. We have also shown that this method is at least a hundred times faster than the earlier approach, which is based on estimating the density of states and integrating with a simple quadrature formula.

Item: An Implementation of a Divide and Conquer Algorithm for the Unitary Eigenproblem (1990-06). Ammar, G.S.; Reichel, L.; Sorensen, D.C.

Item: Approximation of large-scale dynamical systems: An Overview (2001-02). Antoulas, A.C.; Sorensen, D.C.
In this paper we review the state of affairs in the area of approximation of large-scale systems. We distinguish among three basic categories, namely SVD-based, Krylov-based, and SVD-Krylov-based approximation methods. The first two were developed independently of each other and have distinct sets of attributes and drawbacks. The third approach seeks to combine the best attributes of the first two.

Item: Convergence of Polynomial Restart Krylov Methods for Eigenvalue Computation (2003-08). Beattie, Christopher A.; Embree, Mark; Sorensen, D.C.
The convergence of Krylov subspace eigenvalue algorithms can be robustly measured by the angle the approximating Krylov space makes with a desired invariant subspace.
This paper describes a new bound on this angle that handles the complexities introduced by non-Hermitian matrices, yet has a simpler derivation than similar previous bounds. The new bound reveals that ill-conditioning of the desired eigenvalues has little impact on convergence, while instability of unwanted eigenvalues plays an essential role. Practical computations usually require the approximating Krylov space to be restarted for efficiency, whereby the starting vector that generates the subspace is improved via a polynomial filter. Such filters dynamically steer a low-dimensional Krylov space toward a desired invariant subspace. We address the design of these filters, and illustrate with examples the subtleties involved in restarting non-Hermitian iterations.

Item: Domain Decomposition and Model Reduction of Systems with Local Nonlinearities (2007-11). Sun, K.; Glowinski, R.; Heinkenschloss, M.; Sorensen, D.C.
The goal of this paper is to combine balanced truncation model reduction and domain decomposition to derive reduced-order models with guaranteed error bounds for systems of discretized partial differential equations (PDEs) with spatially localized nonlinearities. Domain decomposition techniques are used to divide the problem into linear subproblems and small nonlinear subproblems. Balanced truncation is applied to the linear subproblems, with inputs and outputs determined by the original inputs and outputs as well as the interface conditions between the subproblems. The potential of this approach is demonstrated for a model problem.

Item: Efficient Numerical Methods for Least-Norm Regularization (2010-03). Sorensen, D.C.; Rojas, M.
The problem min ||x||, s.t. ||b - Ax|| ≤ ε arises in the regularization of discrete forms of ill-posed problems when an estimate of the noise level in the data is available.
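The structure of this least-norm problem can be sketched for a diagonal A (our simplification; the paper's algorithms handle general matrices via a factorization or matrix-free iterations). When ||b|| > ε the constraint is active, the solution has the form x(μ) = Aᵀ(AAᵀ + μI)⁻¹b, and μ > 0 is fixed by the secular equation ||b − Ax(μ)|| = ε, solved here by simple bisection:

```python
def residual_norm(d, b, mu):
    # ||b - A x(mu)|| for A = diag(d) and x(mu) = A^T (A A^T + mu I)^{-1} b.
    return sum((bi * mu / (di * di + mu)) ** 2 for di, bi in zip(d, b)) ** 0.5

def least_norm_reg(d, b, eps, iters=200):
    # Bisection on the secular equation ||b - A x(mu)|| = eps  (mu >= 0);
    # the residual norm increases monotonically from 0 toward ||b||.
    lo, hi = 0.0, 1e8
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if residual_norm(d, b, mid) < eps:
            lo = mid
        else:
            hi = mid
    mu = 0.5 * (lo + hi)
    x = [di * bi / (di * di + mu) for di, bi in zip(d, b)]
    return x, mu

x, mu = least_norm_reg([2.0, 1.0], [2.0, 2.0], 0.5)
# the constraint is active at the solution: ||b - Ax|| = eps
r = ((2 - 2 * x[0]) ** 2 + (2 - x[1]) ** 2) ** 0.5
assert abs(r - 0.5) < 1e-6
```

Because the residual norm is strictly monotone in μ, the secular equation has a unique root; this is one way to see why no analogue of the trust-region hard case arises here.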
After deriving necessary and sufficient optimality conditions for this problem, we propose two different classes of algorithms: a factorization-based algorithm for small to medium problems, and matrix-free iterations for the large-scale case. Numerical results illustrating the performance of the methods demonstrate that both classes of algorithms are efficient, robust, and accurate. An interesting feature of our formulation is that there is no situation corresponding to the so-called hard case that occurs in the standard trust-region subproblem. Neither small singular values nor vanishing coefficients present any difficulty in solving the relevant secular equations.

Item: Gramians of Structured Systems and an Error Bound for Structure-Preserving Model Reduction (2004-09). Sorensen, D.C.; Antoulas, A.C.
In this paper a general framework is posed for defining the reachability and observability gramians of structured linear dynamical systems. The novelty is that a formula for the gramian is given in the frequency domain. This formulation is surprisingly versatile and may be applied to a variety of structured problems. Moreover, it enables a rather straightforward development of a priori error bounds for model reduction in the H2 norm. The bound applies to a reduced model derived from projection onto the dominant eigenspace of the appropriate gramian. The reduced models are structure preserving because they arise as a direct reduction of the original system in the reduced basis. A derivation of the bound is presented and verified computationally on a second-order system arising from structural analysis.

Item: Lyapunov, Lanczos, and Inertia (2000-05). Antoulas, A.C.; Sorensen, D.C.
We present a new proof of the inertia result associated with Lyapunov equations. Furthermore, we present a connection between the Lyapunov equation and the Lanczos process which is closely related to the Schwarz form of a matrix.
We provide a method for reducing a general matrix to Schwarz form in a finite number of steps (O(n^3)). Hence, we provide a finite method for computing inertia without computing eigenvalues. This scheme is numerically unstable and hence is primarily of theoretical interest.

Item: Minimization of Large Scale Quadratic Function Subject to an Ellipsoidal Constraint (1994-07). Sorensen, D.C.
An important problem in linear algebra and optimization is the trust-region problem: minimize a quadratic function subject to an ellipsoidal constraint. This basic problem has several important large-scale applications, including seismic inversion and forcing convergence in optimization methods. Existing methods for solving the trust-region problem require matrix factorizations that are not feasible in the large-scale setting. This paper presents an algorithm for solving the large-scale trust-region problem that requires a fixed amount of storage proportional to the order of the quadratic and relies only on matrix-vector products. The algorithm recasts the trust-region problem in terms of a parameterized eigenvalue problem and adjusts the parameter with a superlinearly convergent iteration to find the optimal solution from the eigenvector of the parameterized problem. Only the smallest eigenvalue and corresponding eigenvector of the parameterized problem need to be computed.
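To make the trust-region optimality structure concrete, here is a small sketch (ours, not the paper's matrix-free algorithm) for a diagonal Hessian H: at a boundary solution, (H − λI)x = −g with ||x|| = Δ for some λ ≤ 0, which reduces to a one-dimensional secular equation in λ. Below it is solved by bisection; the large-scale method instead obtains the same information from the smallest eigenpair of a parameterized eigenvalue problem using only matrix-vector products:

```python
def tr_boundary_solution(h, g, delta, iters=200):
    # min 0.5 x'Hx + g'x  s.t. ||x|| <= delta, with H = diag(h).
    # Secular equation: find lam with ||x(lam)|| = delta, where
    # x(lam)_i = -g_i / (h_i - lam). phi is monotone increasing in lam.
    phi = lambda lam: sum((gi / (di - lam)) ** 2 for di, gi in zip(h, g)) ** 0.5
    lo, hi = -1e8, 0.0   # assumes H positive definite and ||H^{-1} g|| > delta
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if phi(mid) < delta:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return [-gi / (di - lam) for di, gi in zip(h, g)], lam

# H = diag(1, 3), g = (1, 1), radius 0.5: the constraint is active.
x, lam = tr_boundary_solution([1.0, 3.0], [1.0, 1.0], 0.5)
assert abs((x[0] ** 2 + x[1] ** 2) ** 0.5 - 0.5) < 1e-6
```

The bisection here stands in for the paper's superlinearly convergent parameter update; the point of the sketch is only the shape of the optimality conditions being solved.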