Browsing by Author "Sorensen, Danny C."
Now showing 1 - 20 of 46
Item: A large-scale trust-region approach to the regularization of discrete ill-posed problems (1999). Rojas, Marielba; Sorensen, Danny C.
We consider the problem of computing the solution of large-scale discrete ill-posed problems when there is noise in the data. These problems arise in important areas such as seismic inversion, medical imaging, and signal processing. We pose the problem as a quadratically constrained least squares problem and develop a method for the solution of such problems. Our method does not require factorization of the coefficient matrix, has very low storage requirements, and handles the high degree of singularity arising in discrete ill-posed problems. We present numerical results on test problems and an application of the method to a practical problem with real data.

Item: A Matlab implementation of the Implicitly Restarted Arnoldi Method for solving large-scale eigenvalue problems (1996). Radke, Richard Joseph; Sorensen, Danny C.
This thesis describes a Matlab implementation of the Implicitly Restarted Arnoldi Method for computing a few selected eigenvalues of large structured matrices. Shift-and-invert methods allow the calculation of the eigenvalues nearest any point in the complex plane, and polynomial acceleration techniques aid in computing eigenvalues of operators which are defined by m-files instead of Matlab matrices. These new Matlab functions will be incorporated into the upcoming version 5 of Matlab and will greatly extend Matlab's capability to deal with many real-world eigenvalue problems that were intractable in version 4. The thesis begins with a discussion of the Implicitly Restarted Arnoldi Method. The bulk of the thesis is a user's manual for the Matlab functions which implement this algorithm. The user's guide not only describes the functions' syntax and structure but also discusses some of the difficulties that were overcome during their development.
The thesis concludes with several examples of the functions applied to realistic test problems which illustrate the versatility and power of this new tool.

Item: A new algorithm for continuation and bifurcation analysis of large scale free surface flows (2005). Castillo, Zenaida; Sorensen, Danny C.
This thesis presents a new algorithm to find and follow particular solutions of parameterized nonlinear systems. Important applications often arise after spatial discretization of time-dependent PDEs. We embed a block eigenvalue solver in a continuation framework for the computation of some specific eigenvalues of large Jacobian matrices that depend on one or more parameters. The new approach is then employed to study the behavior of an industrial process referred to as coating. Stability analysis of the discretized system that models this process is important because it provides alternatives for changing parameters in order to improve the quality of the final product or to increase productivity. Experiments on several problems show the reliability of the new approach in the accurate detection of critical points. Further analysis of two-dimensional coating flow problems reveals that computational results are competitive with those of previous continuation approaches. As a byproduct, one obtains information about the stability of the process with no additional cost. Due to the size and structure of the matrices generated in three-dimensional free surface flow applications, it is necessary to use a general iterative linear solver, such as GMRES. However, GMRES displays a very slow rate of convergence as a consequence of the poor conditioning of the coefficient matrices. To speed up GMRES convergence, we developed and implemented a scalable approximate sparse inverse preconditioner. Numerical experiments demonstrate that this preconditioner greatly improves the convergence of the method.
Results illustrate the effectiveness of the preconditioner on very large free surface flow problems with more than a million unknowns.

Item: A new class of preconditioners for large-scale linear systems from interior-point methods for linear programming (1997). de Oliveira, Aurelio Ribeiro Leite; Sorensen, Danny C.
A new class of preconditioners for the iterative solution of the linear systems arising from interior point methods is proposed. For many of these methods, the linear systems come from applying Newton's method to the perturbed Karush-Kuhn-Tucker optimality conditions for the linear programming problem. This leads to a symmetric indefinite linear system called the augmented system. This system can be reduced to the Schur complement system, which is positive definite. After the reduction, the solution of the linear system is usually computed via the Cholesky factorization. This factorization can be dense for some classes of problems. Therefore, the solution of these systems by iterative methods must be considered. Since these systems are very ill-conditioned near a solution of the linear programming problem, it is crucial to develop efficient preconditioners. We show that from the point of view of designing preconditioners, it is better to work with the augmented system: all preconditioners for the Schur complement system have an equivalent for the augmented system, while the opposite is not true. The theoretical properties of the new preconditioners are discussed. This class works better near a solution of the linear programming problem, where the linear systems are highly ill-conditioned. Another important feature is the option to reduce the preconditioned indefinite system to a positive definite one of the size of the Schur complement. This class of preconditioners relies on the computation of an LU factorization of a subset of columns of the matrix of constraints.
The techniques developed for a competitive implementation are rather sophisticated since the subset of columns is not known a priori. The new preconditioner applied to the conjugate gradient method compares favorably with the Cholesky factorization approach. The new approach performs better on large-scale problems whose Cholesky factorization contains a large number of nonzero entries. Numerical experiments on several representative classes of linear programming problems are presented to indicate the promise of this new approach.

Item: A New Matrix-Free Algorithm for the Large-Scale Trust-Region Subproblem (1999-09). Rojas, Marielba; Santos, Sandra A.; Sorensen, Danny C.
We present a matrix-free algorithm for the large-scale trust-region subproblem. Our algorithm relies on matrix-vector products only and does not require matrix factorizations. We recast the trust-region subproblem as a parameterized eigenvalue problem and compute an optimal value for the parameter. We then find the optimal solution of the trust-region subproblem from the eigenvectors associated with two of the smallest eigenvalues of the parameterized eigenvalue problem corresponding to the optimal parameter. The new algorithm uses a different interpolating scheme than existing methods and introduces a unified iteration that naturally includes the so-called hard case. We show that the new iteration is well defined and convergent at a superlinear rate. We present computational results to illustrate convergence properties and robustness of the method.

Item: A Parameter Free ADI-like Method for the Numerical Solution of Large Scale Lyapunov Equations (2009-05). Nong, Ryan; Sorensen, Danny C.
An algorithm is presented for constructing an approximate numerical solution to a large-scale Lyapunov equation in low-rank factored form. The algorithm is based upon a synthesis of an approximate power method and an alternating direction implicit (ADI) method.
The former is parameter free and tends to be efficient in practice, but there is little theoretical understanding of its convergence properties. The ADI method has a well-understood convergence theory, but the method relies upon selection of shift parameters, and a poor shift selection can lead to very slow convergence in practice. The algorithm presented here uses an approximate power method iteration to obtain a basis update. It then constructs a re-weighting of this basis update to provide a factorization update that satisfies ADI-like convergence properties.

Item: A reduced basis method for molecular dynamics simulation (2005). Vincent-Finley, Rachel Elisabeth; Sorensen, Danny C.
In this dissertation, we develop a method for molecular simulation based on principal component analysis (PCA) of a molecular dynamics trajectory and least squares approximation of a potential energy function. Molecular dynamics (MD) simulation is a computational tool used to study molecular systems as they evolve through time. With respect to protein dynamics, local motions, such as bond stretching, occur within femtoseconds, while rigid-body and large-scale motions occur within a range of nanoseconds to seconds. To capture motion at all levels, time steps on the order of a femtosecond are employed when solving the equations of motion, and simulations must continue long enough to capture the desired large-scale motion. To date, simulations of solvated proteins on the order of nanoseconds have been reported. It is typically the case that simulations of a few nanoseconds do not provide adequate information for the study of large-scale motions. Thus, the development of techniques that allow longer simulation times can advance the study of protein function and dynamics. In this dissertation we use PCA to identify the dominant characteristics of an MD trajectory and to represent the coordinates with respect to these characteristics.
We augment PCA with an updating scheme based on a reduced representation of a molecule and consider equations of motion with respect to the reduced representation. We apply our method to butane and BPTI and compare the results to standard MD simulations of these molecules. Our results indicate that the molecular activity with respect to our simulation method is analogous to that observed in the standard MD simulation, with simulations on the order of picoseconds.

Item: A restarted Lanczos process for computing a large percentage of the eigenvalues of a symmetric matrix (1996). Zuo, Wei; Sorensen, Danny C.
The Lanczos algorithm is a well-known technique for approximating a few eigenvalues and corresponding eigenvectors of a large-scale real symmetric matrix arising in many scientific and engineering applications. In this thesis a modified restarted k-step Lanczos algorithm is presented. The basic Lanczos process suffers from numerical difficulties such as large storage requirements and loss of orthogonality among the basis vectors. The current restarted Lanczos method is designed to overcome these difficulties by fixing the number of steps in the Lanczos process at a prespecified value k, where k is modest and proportional to the total number of eigenvalues of interest. However, it is possible that the total number of desired eigenvalues may not be moderate. The main difference between this restart scheme and other existing schemes is that the prescribed value k in our algorithm is only a reasonable number; it is independent of the total number of desired eigenvalues. In other words, this algorithm may compute a large percentage of the eigenvalues and associated eigenvectors of a large symmetric matrix with significantly reduced storage requirements. Strategies for implementing the algorithm on parallel distributed-memory machines are presented.
The efficiency of this algorithm is justified by computational results for the electronic structure problem in quantum chemistry.

Item: A State Space Error Estimate for POD-DEIM Nonlinear Model Reduction (2010-12). Chaturantabut, Saifon; Sorensen, Danny C.
This paper derives state space error bounds for the solutions of reduced systems constructed using Proper Orthogonal Decomposition (POD) together with the Discrete Empirical Interpolation Method (DEIM), recently developed in [S. Chaturantabut and D.C. Sorensen, SISC 2010, pp. 2737-2764] for nonlinear dynamical systems. The resulting error bounds are shown to be proportional to the sums of the singular values corresponding to neglected POD basis vectors, both in the Galerkin projection of the reduced system and in the DEIM approximation of the nonlinear term. The analysis is particularly relevant to ODE systems arising from spatial discretizations of parabolic PDEs. The derivation clearly identifies where the parabolicity is crucial. It also shows exactly how the DEIM approximation error involving the nonlinear term comes into play.

Item: A symmetry preserving singular value decomposition (2007). Shah, Mili; Sorensen, Danny C.
This thesis concentrates on the development, analysis, implementation, and application of a symmetry preserving singular value decomposition (SPSVD). This new factorization enhances the singular value decomposition (SVD), a powerful method for calculating a low rank approximation to a large data set, by producing the best symmetric low rank approximation to a matrix with respect to the Frobenius norm and matrix 2-norm. Calculating an SPSVD is a two-step process. In the first step, a matrix representation for the symmetry of a given data set must be determined. This process is presented as a novel iterative reweighting method: a scheme which is rapidly convergent in practice and seems to be extremely effective in ignoring outliers of the data.
In the second step, the best approximation that maintains the symmetry calculated in the first step is computed. This approximation is designated the SPSVD of the data set. In many situations, the SPSVD needs efficient updating. For instance, if new data is given, then the symmetry of the set may change and an alternative matrix representation has to be formed. A modification in the matrix representation also alters the SPSVD. Therefore, proficient methods to address each of these issues are developed in this thesis. This thesis applies the SPSVD to molecular dynamics (MD) simulations of proteins and to face analysis. Symmetric motions of a molecule may be lost when the SVD is applied to MD trajectories of proteins. This loss is corrected by implementing the SPSVD to create major modes of motion that best describe the symmetric movements of the protein. Moreover, the SPSVD may reduce the noise that often occurs on the side chains of molecules. In face analysis, the SVD is regularly used for compression. Because faces are nearly symmetric, applying the SPSVD to faces creates a more efficient compression: only half the picture needs to be stored for the SPSVD. Therefore, the SPSVD is an effective method for calculating a symmetric low rank approximation for a set of data.

Item: A Trust-Region Approach to the Regularization of Large-Scale Discrete Ill-Posed Problems (1999-12). Rojas, Marielba; Sorensen, Danny C.
We consider the solution of large-scale least squares problems where the coefficient matrix comes from the discretization of an ill-posed operator and the right-hand side contains noise. Special techniques known as regularization methods are needed to treat these problems in order to control the effect of the noise on the solution. We pose the regularization problem as a trust-region subproblem and solve it by means of a recently developed method for the large-scale trust-region subproblem.
We present numerical results on test problems, an inverse interpolation problem with real data, and a model seismic inversion problem with real data.

Item: Accelerating the Arnoldi iteration: Theory and practice (1998). Yang, Chao; Sorensen, Danny C.
The Arnoldi iteration is widely used to compute a few eigenvalues of a large sparse or structured matrix. However, the method may suffer from slow convergence when the desired eigenvalues are not dominant or well separated. A systematic approach is taken in this dissertation to address the issue of how to accelerate the convergence of the Arnoldi algorithm within a subspace of limited size. The acceleration strategies presented here are grouped into three categories: the method of restarting, the method of spectral transformation, and Newton-like acceleration. Simply put, the method of restarting repeats a k-step Arnoldi iteration after improving the starting vector. The method is further divided into polynomial and rational restarting based on the way the starting vector is modified. We show that both mechanisms can be implemented in an implicit fashion by relating the restarted Arnoldi iteration to a truncated QR or RQ iteration. The rational restarting via a Truncated RQ (TRQ) iteration converges extremely fast; however, a linear system must be solved for each restart. The possibility of replacing a direct linear solver with a preconditioned iterative solver while maintaining the rapid convergence of TRQ is explored in this thesis. The method of spectral transformation is based on the idea of transforming the original eigenvalue problem into one that is easier to solve. Again, both polynomial and rational transformations are possible. Practical issues regarding the design and implementation of effective spectral transformations are discussed. Finally, one can treat the eigenvalue problem as a nonlinear equation to which Newton-like methods can be applied.
The Jacobi-Davidson (JD) algorithm proposed by Sleijpen and van der Vorst takes this approach. In JD, the Arnoldi iteration is merely used as a global searching tool to provide a good starting point for the Newton iteration. This algorithm shares many properties with the TRQ iteration. Numerical comparisons between these two methods are made in this thesis.

Item: Adaptive Reduction of Large Spiking Neurons (2013-11-21). Du, Bosen; Sorensen, Danny C.; Cox, Steven J.; Embree, Mark; Antoulas, Athanasios C.
This thesis develops adaptive reduction approaches for various models of large spiking neurons. Most neurons are like dendritic trees with many branches, and they communicate by nonlinear spiking behaviors. However, with the exception of Kellems' Strong-Weak model, most existing reduction approaches compromise the active ionic mechanisms that cause the spiking dynamics. The Strong-Weak model can predict the spikes caused by suprathreshold input traveling from the dendritic branches to the spike initiation zone (SIZ), but it is not able to reproduce the back propagation from the SIZ to the dendritic branches after spikes. This thesis develops a new model called QAact, the mechanisms to incorporate QAact into the hybrid model to capture the back propagation behavior, and different model reduction techniques for each part of the new hybrid model where they are most advantageous.
Computational tests of QAact and the new hybrid model, as well as the corresponding model reduction techniques, on the FitzHugh-Nagumo system, an active nonuniform cable, and the branched LGMD cell demonstrate a significant reduction in dimension, computational complexity, and running time.

Item: An Approach for the Adaptive Solution of Optimization Problems Governed by Partial Differential Equations with Uncertain Coefficients (2012-09-05). Kouri, Drew; Heinkenschloss, Matthias; Sorensen, Danny C.; Riviere, Beatrice M.; Cox, Dennis D.
Using derivative-based numerical optimization routines to solve optimization problems governed by partial differential equations (PDEs) with uncertain coefficients is computationally expensive due to the large number of PDE solves required at each iteration. In this thesis, I present an adaptive stochastic collocation framework for the discretization and numerical solution of these PDE-constrained optimization problems. This adaptive approach is based on dimension-adaptive sparse grid interpolation and employs trust regions to manage the adapted stochastic collocation models. Furthermore, I prove the convergence of sparse grid collocation methods applied to these optimization problems as well as the global convergence of the retrospective trust region algorithm under weakened assumptions on gradient inexactness. In fact, if one can bound the error between actual and modeled gradients using reliable and efficient a posteriori error estimators, then the global convergence of the proposed algorithm follows. Moreover, I describe a high performance implementation of my adaptive collocation and trust region framework using the C++ programming language with the Message Passing Interface (MPI). Many PDE solves are required to accurately quantify the uncertainty in such optimization problems; therefore, it is essential to appropriately choose inexpensive approximate models and large-scale nonlinear programming techniques throughout the optimization routine.
Numerical results for the adaptive solution of these optimization problems are presented.

Item: Analysis and implementation of an implicitly restarted Arnoldi iteration (1995). Lehoucq, Richard Bruno; Sorensen, Danny C.
The Arnoldi algorithm, or iteration, is a computationally attractive technique for computing a few eigenvalues and an associated invariant subspace of large, often sparse, matrices. The method is a generalization of the Lanczos process and reduces to it when the underlying matrix is symmetric. This thesis presents an analysis of Sorensen's Implicitly Restarted Arnoldi iteration (IRA iteration) by exploiting its relationship with the QR algorithm. The goal of this thesis is to present numerical techniques that attempt to make the IRA iteration as robust as the implicitly shifted QR algorithm. The benefit is that the Arnoldi iteration only requires the computation of matrix-vector products w = Av at each step; it does not rely on the dense matrix similarity transformations required by the EISPACK and LAPACK software packages. Five topics form the contribution of this dissertation. The first topic analyzes restarting the Arnoldi iteration in an implicit or explicit manner. The second topic is the numerical stability of an IRA iteration. The forward instability of the QR algorithm and the various schemes used to re-order the Schur form of a matrix are fundamental to this analysis. A sensitivity analysis of the Hessenberg decomposition is presented. The practical issues associated with maintaining numerical orthogonality among the Arnoldi/Lanczos basis vectors are the third topic. The fourth topic is deflation techniques for an IRA iteration. The deflation strategies introduced make it possible to compute multiple or clustered eigenvalues with a single-vector restart method; the block Arnoldi/Lanczos methods commonly used are not required. The final topic is the convergence behavior typical of an IRA iteration.
Both formal theory and heuristics are provided for making choices that will lead to improved convergence of an IRA iteration.

Item: Application of POD and DEIM to Dimension Reduction of Nonlinear Miscible Viscous Fingering in Porous Media (2009-07). Chaturantabut, Saifon; Sorensen, Danny C.
A Discrete Empirical Interpolation Method (DEIM) is applied in conjunction with Proper Orthogonal Decomposition (POD) to construct a nonlinear reduced-order model of a finite difference discretized system used in the simulation of nonlinear miscible viscous fingering in a 2-D porous medium. POD is first applied to extract a low-dimensional basis that optimally captures the dominant characteristics of the system trajectory. This basis is then used in a Galerkin projection scheme to construct a reduced-order system. DEIM is then applied to greatly improve the efficiency in computing the projected nonlinear terms in the POD reduced system. DEIM achieves a complexity reduction of the nonlinearities which is proportional to the number of reduced variables, while POD retains a complexity proportional to the original number of variables. Numerical results demonstrate that the dynamics of the viscous fingering in the full-order system of dimension 15000 can be captured accurately by the POD-DEIM reduced system of dimension 40, with the computational time reduced by a factor of O(1000).

Item: Automatic Discrete Empirical Interpolation for Nonlinear Model Reduction (2012-08). Carden, Russell L.; Sorensen, Danny C.
The Discrete Empirical Interpolation Method (DEIM) is a technique for model reduction of nonlinear dynamical systems. It is based upon a modification to proper orthogonal decomposition which is designed to reduce the computational complexity for evaluating the reduced-order nonlinear term. The DEIM approach is based upon an interpolatory projection and only requires evaluation of a few selected components of the original nonlinear term.
Thus, implementation of the reduced-order nonlinear term requires a new code to be derived from the original code for evaluating the nonlinearity. This work describes a methodology for automatically deriving a code for the reduced-order nonlinearity directly from the original nonlinear code. The methodology is derived from standard techniques of automatic differentiation. This capability removes the possibly tedious and error-prone task of producing such a code by hand and hence can facilitate the use of DEIM by non-experts.

Item: Best Symmetric Low Rank Approximation Via the Symmetry Preserving Singular Value Decomposition (2007-12). Shah, Mili I.; Sorensen, Danny C.
The symmetry preserving singular value decomposition (SPSVD) produces the best symmetric (low rank) approximation to a set of data. These symmetric approximations are characterized via an invariance under the action of a symmetry group on the set of data. The symmetry groups of interest consist of all the non-spherical symmetry groups in three dimensions: the rotational, reflectional, dihedral, and inversion symmetry groups. In order to calculate the best symmetric (low rank) approximation, the symmetry of the data set must be determined. Therefore, matrix representations for each of the non-spherical symmetry groups have been formulated. These new matrix representations lead directly to a novel reweighting iterative method to determine the symmetry of a given data set by solving a series of minimization problems. Once the symmetry of the data set is found, the best symmetric (low rank) approximation in the Frobenius norm and matrix 2-norm can be established by using the SPSVD.

Item: Bounds on Eigenvalue Decay Rates and Sensitivity of Solutions to Lyapunov Equations (2002-06). Sorensen, Danny C.; Zhou, Y.
Balanced model reduction is a technique for producing a low-dimensional approximation to a linear time-invariant system.
An important feature of balanced reduction is the existence of an error bound that is closely related to the decay rate of the eigenvalues of certain system Gramians. Rapidly decaying eigenvalues imply low-dimensional reduced systems. New bounds are developed for the eigen-decay rate of the solution of the Lyapunov equation AP + PA^T = -BB^T. These bounds take into account the low-rank right-hand side structure of the Lyapunov equation. They are valid for any diagonalizable matrix A. Numerical results are presented to illustrate the effectiveness of these bounds when the eigensystem of A is moderately conditioned. We also present a bound on the norm of the solution P when A is diagonalizable and derive bounds on the conditioning of the Lyapunov operator for general A.

Item: Deflation Techniques for an Implicitly Restarted Arnoldi Iteration (1994-09). Lehoucq, R.B.; Sorensen, Danny C.
A deflation procedure is introduced that is designed to improve convergence of an implicitly restarted Arnoldi iteration for computing a few eigenvalues of a large matrix. As the iteration progresses, the Ritz value approximations of the eigenvalues of A converge at different rates. A numerically stable deflation scheme is introduced that implicitly deflates the converged approximations from the iteration. We present two forms of implicit deflation. The first, a locking operation, decouples converged Ritz values and associated vectors from the active part of the iteration. The second, a purging operation, removes unwanted but converged Ritz pairs. Convergence of the iteration is improved and a reduction in computational effort is also achieved. The deflation strategies make it possible to compute multiple or clustered eigenvalues with a single-vector restart method; a block method is not required. These schemes are analyzed with respect to numerical stability, and computational results are presented.
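Several items in this listing concern the Implicitly Restarted Arnoldi/Lanczos iteration, which Sorensen and Lehoucq's ARPACK library implements and which SciPy exposes as `scipy.sparse.linalg.eigsh`. A minimal sketch of the kind of computation these theses study, using an illustrative matrix (a 1-D Laplacian) rather than any example from the works themselves:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Sparse tridiagonal 1-D Laplacian: a large structured symmetric matrix.
n = 500
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Ask ARPACK (the implicitly restarted Lanczos implementation) for the
# 4 eigenvalues nearest 0, via shift-and-invert (sigma=0).
vals, vecs = eigsh(A, k=4, sigma=0, which="LM")
vals = np.sort(vals)

# This Laplacian's eigenvalues are known analytically: 4*sin^2(j*pi/(2(n+1))).
exact = 4.0 * np.sin(np.arange(1, 5) * np.pi / (2 * (n + 1))) ** 2
print(np.allclose(vals, exact, atol=1e-8))
```

Shift-and-invert is the same acceleration device discussed in the Matlab IRAM item above: Arnoldi/Lanczos converges to well-separated extremal eigenvalues, so eigenvalues near a target are found by iterating with (A - sigma*I)^{-1}.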
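The Lyapunov items above (the ADI-like solver and the eigen-decay bounds) can be illustrated numerically at small scale. The sketch below solves AP + PA^T = -BB^T with SciPy's `solve_continuous_lyapunov` and checks the rapid eigenvalue decay that makes low-rank factored solutions and balanced reduction work; the stable matrix A and rank-1 B here are arbitrary illustrative choices, not data from the papers:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n = 50
# A stable diagonalizable matrix (eigenvalues in the left half-plane)
# and a rank-1 input matrix B, matching the low-rank right-hand side setting.
A = -np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))

# SciPy solves A X + X A^H = Q, so pass Q = -B B^T for A P + P A^T = -B B^T.
P = solve_continuous_lyapunov(A, -B @ B.T)

# Eigenvalues of the Gramian P, sorted in decreasing order: for a low-rank
# right-hand side they decay rapidly, so P admits a low-rank factored form.
lam = np.sort(np.linalg.eigvalsh((P + P.T) / 2))[::-1]
print(lam[9] / lam[0])  # small ratio: the 10th eigenvalue is negligible
```

The small ratio printed at the end is exactly the phenomenon the eigen-decay bounds quantify: rapidly decaying Gramian eigenvalues imply a low-dimensional reduced system with a tight error bound.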
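The POD step that recurs in the model-reduction items (POD-DEIM, viscous fingering, reduced-basis molecular dynamics) is, at its core, an SVD of a snapshot matrix. A minimal sketch with a synthetic trajectory that lies in a known 5-dimensional subspace (all dimensions and tolerances here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
# Snapshot matrix whose 100 columns are states of a trajectory confined,
# up to tiny noise, to a 5-dimensional subspace of a 200-dimensional space.
U5 = np.linalg.qr(rng.standard_normal((200, 5)))[0]
snapshots = U5 @ rng.standard_normal((5, 100)) \
    + 1e-8 * rng.standard_normal((200, 100))

# POD: the leading left singular vectors of the snapshot matrix give the
# basis that optimally captures the trajectory in the least-squares sense.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
k = int(np.sum(s > 1e-6 * s[0]))   # numerical rank = POD dimension
pod_basis = U[:, :k]
print(k)  # → 5
```

Projecting the full model onto `pod_basis` gives the reduced system; DEIM then addresses the remaining cost of evaluating the nonlinear term, which a plain Galerkin projection still pays at full dimension.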