Browsing by Author "Yang, Chao"
Item: Accelerating the Arnoldi iteration: Theory and practice (1998)
Yang, Chao; Sorensen, Danny C.

The Arnoldi iteration is widely used to compute a few eigenvalues of a large sparse or structured matrix. However, the method may suffer from slow convergence when the desired eigenvalues are not dominant or well separated. This dissertation takes a systematic approach to accelerating the convergence of the Arnoldi algorithm within a subspace of limited size. The acceleration strategies presented here fall into three categories: restarting, spectral transformation, and Newton-like acceleration. Simply put, restarting repeats a k-step Arnoldi iteration after improving the starting vector; it divides further into polynomial and rational restarting, depending on how the starting vector is modified. We show that both mechanisms can be implemented implicitly by relating the restarted Arnoldi iteration to a truncated QR or RQ iteration. Rational restarting via a Truncated RQ (TRQ) iteration converges extremely fast, but a linear system must be solved at each restart. This thesis explores the possibility of replacing the direct linear solver with a preconditioned iterative solver while maintaining the rapid convergence of TRQ. The method of spectral transformation is based on transforming the original eigenvalue problem into one that is easier to solve; again, both polynomial and rational transformations are possible, and practical issues in the design and implementation of effective spectral transformations are discussed. Finally, the eigenvalue problem can be treated as a nonlinear equation to which Newton-like methods apply. The Jacobi-Davidson (JD) algorithm proposed by Sleijpen and Van der Vorst takes this approach.
In JD, the Arnoldi iteration is merely used as a global search tool to provide a good starting point for the Newton iteration. This algorithm shares many properties with the TRQ iteration, and numerical comparisons between the two methods are made in this thesis.

Item: Trace-Penalty Minimization for Large-scale Eigenspace Computation (2013-02)
Wen, Zaiwen; Yang, Chao; Liu, Xin; Zhang, Yin

The Rayleigh-Ritz (RR) procedure, including orthogonalization, constitutes a major bottleneck in computing relatively high-dimensional eigenspaces of large sparse matrices. Although the operations involved in RR steps can be parallelized to an extent, their parallel scalability, limited by some inherent sequentiality, is lower than that of dense matrix-matrix multiplications. The primary motivation of this paper is to develop a methodology that reduces the use of the RR procedure in exchange for matrix-matrix multiplications. We propose an unconstrained penalty model and establish its equivalence to the eigenvalue problem. This model enables us to deploy gradient-type algorithms heavily dominated by dense matrix-matrix multiplications. Although the proposed algorithm does not necessarily reduce the total number of arithmetic operations, it leverages highly optimized operations on modern high-performance computers to achieve parallel scalability. Numerical results based on a preliminary implementation, parallelized using OpenMP, show that our approach is promising.
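The restarted Arnoldi iteration described in the first abstract can be sketched in a minimal form. The numpy code below builds a k-step Arnoldi factorization and restarts explicitly from the best Ritz vector; this is only a toy stand-in for the dissertation's implicit restarting via truncated QR/RQ iterations, and the function names, tolerances, and parameters are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def arnoldi(A, v0, k):
    """Run k steps of the Arnoldi iteration, producing A V_k ~ V_k H_k."""
    n = len(v0)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # breakdown: invariant subspace found
            return V[:, : j + 1], H[: j + 1, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :k], H[:k, :k]

def restarted_arnoldi(A, v0, k, tol=1e-8, maxit=100):
    """Explicitly restart a k-step Arnoldi run from the dominant Ritz vector."""
    v = v0 / np.linalg.norm(v0)
    for _ in range(maxit):
        V, H = arnoldi(A, v, k)
        vals, vecs = np.linalg.eig(H)
        i = np.argmax(np.abs(vals))         # target the dominant eigenvalue
        ritz_val, y = vals[i].real, vecs[:, i].real
        x = V @ y
        x /= np.linalg.norm(x)
        if np.linalg.norm(A @ x - ritz_val * x) < tol:
            return ritz_val, x
        v = x                               # improved starting vector
    return ritz_val, x
```

Restarting from the dominant Ritz vector is the simplest polynomial-type restart; the implicit variants in the thesis achieve the same effect without forming the restart vector explicitly.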
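A gradient-type method for the second abstract's penalty model can be sketched as follows, assuming the trace-penalty function f(X) = ½ tr(XᵀAX) + (μ/4)‖XᵀX − I‖²_F with μ chosen larger than the targeted eigenvalues. The step size, iteration count, and seeding are illustrative assumptions; the paper's actual algorithm is more refined, but the sketch shows the key point that the inner loop uses only matrix-matrix products, deferring the Rayleigh-Ritz step to the end.

```python
import numpy as np

def trace_penalty_eig(A, p, mu, step=0.01, iters=5000, seed=1):
    """Approximate the p smallest eigenpairs of symmetric A by gradient
    descent on the trace-penalty model, with a single final RR step."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    X = np.linalg.qr(rng.standard_normal((n, p)))[0]
    I = np.eye(p)
    for _ in range(iters):
        # gradient of f(X) = 0.5*tr(X'AX) + (mu/4)*||X'X - I||_F^2;
        # only matrix-matrix multiplications, no orthogonalization
        G = A @ X + mu * X @ (X.T @ X - I)
        X = X - step * G
    # one Rayleigh-Ritz step at the end to recover orthonormal eigenvectors
    Q, _ = np.linalg.qr(X)
    T = Q.T @ A @ Q
    vals, W = np.linalg.eigh(T)
    return vals, Q @ W
```

The inner loop is dominated by the dense products `A @ X` and `X.T @ X`, which is exactly the trade the abstract describes: more flops of a highly parallelizable kind in exchange for fewer RR/orthogonalization steps.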