Browsing by Author "Steihaug, Trond"
Showing 8 of 8 items
A Convergence Theory for a Class of Quasi-Newton Methods for Constrained Optimization (1983-05)
Fontecilla, Rodrigo; Steihaug, Trond; Tapia, Richard A.
In this paper we develop a general convergence theory for a class of quasi-Newton methods for equality constrained optimization. The theory is set in the framework of the diagonalized multiplier method defined by Tapia and extends the theory developed by Glad. We believe this framework is flexible and amenable to convergence analysis and generalization. A key ingredient of a method in this class is a multiplier update. We test the theory by showing that a straightforward application yields the best known convergence results for several known multiplier updates. A characterization of q-superlinear convergence is also presented. It is shown that in the special case where the diagonalized multiplier method is equivalent to the successive quadratic programming approach, our general characterization result recovers the characterization of Boggs, Tolle, and Wang.

Damped Inexact Quasi-Newton Methods (1981-12)
Steihaug, Trond
Inexact quasi-Newton methods are very attractive for large-scale optimization since they require only an approximate solution of the linear system of equations at each iteration. To achieve global convergence, we adjust the step using a backtracking strategy. We discuss the backtracking strategy in detail and show that it has convergence properties similar to those obtained with line searches satisfying the Goldstein-Armijo conditions. The combination of backtracking and inexact quasi-Newton methods is particularly attractive since the conditions for convergence are easily met. We give conditions for Q-linear and Q-superlinear convergence.

Local Analysis of Inexact Quasi-Newton Methods (1982-05)
Eisenstat, Stanley C.; Steihaug, Trond
Quasi-Newton methods are well-known iterative methods for solving nonlinear problems. At each stage, a system of linear equations has to be solved. For large-scale problems, however, solving the linear system exactly can be expensive and may not be justified when the iterate is far from the solution or when the matrix is only an approximation to the Jacobian or Hessian matrix. Instead we consider a class of inexact quasi-Newton methods that solve the linear system only approximately. We derive conditions for local convergence and for a superlinear rate of convergence in terms of a relative residual.

Local and Superlinear Convergence for Truncated Projections Methods (1981-10)
Steihaug, Trond
Least-change secant updates can be obtained as the limit of iterated projections based on other secant updates. We show that these iterated projections can be terminated or truncated after any positive number of iterations while local convergence and the superlinear rate of convergence are still maintained. The truncated iterated projections method is used to find sparse and symmetric updates that are locally and superlinearly convergent.

On the Component-Wise Convergence Rate (1998-06)
El-Bakry, Amr; Steihaug, Trond
In this paper we investigate the convergence rate of a sequence of vectors given that the convergence rates of its components are known. The result of this investigation is then used to study the m-step convergence rate of sequences.

On the Successive Projections Approach to Least-Squares Problems (1983-08)
Dennis, J.E. Jr.; Steihaug, Trond
In this paper we suggest a generalized Gauss-Seidel approach to sparse linear and nonlinear least-squares problems. The algorithm, closely related to one given by Elfving (1980), uses the work of Curtis, Powell, and Reid (1974), as extended by Coleman and Moré (1983), to divide the variables into nondisjoint groups of structurally orthogonal columns, and then projects the updated residual onto each column subspace of the Jacobian in turn. In the linear case, this procedure can be viewed as an alternate ordering of the variables in the Gauss-Seidel method. Preliminary tests indicate that it quickly yields cheap solutions of limited accuracy for linear problems, and that this approach is promising for an inexact Gauss-Newton analog of the inexact Newton approach of Dembo, Eisenstat, and Steihaug (1981).

Properties of A Class of Preconditioners for Weighted Least Squares Problems (1999-04)
Baryamureeba, Venansius; Steihaug, Trond; Zhang, Yin
A sequence of weighted linear least-squares problems arises from interior-point methods for linear programming, where only the weights and the right-hand side change from one problem to the next. One approach to solving such a weighted linear least-squares problem is to apply a preconditioned conjugate gradient method to the normal equations, with a preconditioner based on a low-rank correction to the Cholesky factorization of a previous coefficient matrix. In this paper we establish theoretical results for such preconditioners that provide guidelines for their construction. We also present preliminary numerical experiments to validate our theoretical results and to demonstrate the effectiveness of this approach.

The Conjugate Gradient Method and Trust Regions in Large Scale Optimization (1981-10)
Steihaug, Trond
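The two inexact quasi-Newton items above revolve around one idea: accept a step s that solves the linear system only to a relative residual, ||F + J s|| <= eta ||F||, and globalize it with backtracking on ||F||. The following is a minimal sketch of that combination; the 2-by-2 test system, the way the "inexact" inner solve is mimicked, and all parameter values are illustrative assumptions, not taken from the papers.

```python
import math

def F(x):
    # Hypothetical test system; F(x) = 0 has the solution x0 = x1 = sqrt(2).
    return [x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]]

def J(x):
    # Exact Jacobian, standing in here for a quasi-Newton approximation.
    return [[2.0*x[0], 2.0*x[1]], [1.0, -1.0]]

def norm(v):
    return math.sqrt(sum(vi*vi for vi in v))

def solve2x2(A, b):
    # Cramer's rule for a 2-by-2 system A s = b.
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [(b[0]*A[1][1] - b[1]*A[0][1]) / det,
            (A[0][0]*b[1] - A[1][0]*b[0]) / det]

def damped_inexact_newton(x, eta=0.1, t=1e-4, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        Fx = F(x)
        if norm(Fx) <= tol:
            break
        # Inexact step: require only ||F + J s|| <= eta ||F||. Here the
        # inner solve is mimicked by shrinking the exact Newton step, which
        # leaves a linear residual of exactly 0.5*eta*||F||.
        s = solve2x2(J(x), [-Fx[0], -Fx[1]])
        s = [(1.0 - 0.5*eta)*si for si in s]
        # Backtracking: halve the step until ||F|| decreases sufficiently.
        lam = 1.0
        while norm(F([x[0] + lam*s[0], x[1] + lam*s[1]])) \
                > (1.0 - t*lam*(1.0 - eta))*norm(Fx) and lam > 1e-12:
            lam *= 0.5
        x = [x[0] + lam*s[0], x[1] + lam*s[1]]
    return x

root = damped_inexact_newton([3.0, 1.0])
```

A real large-scale implementation would of course replace the direct 2-by-2 solve with an iterative inner solver stopped at the relative-residual tolerance.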
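The preconditioned-normal-equations approach in the weighted least-squares item can be sketched as follows. This is a toy illustration under stated assumptions: the 3-by-2 matrix A, weights w, and right-hand side b are invented, and a simple Jacobi (diagonal) preconditioner stands in for the paper's low-rank-corrected Cholesky preconditioner.

```python
import math

def dot(u, v):
    return sum(ui*vi for ui, vi in zip(u, v))

def matvec(M, v):
    return [sum(mij*vj for mij, vj in zip(row, v)) for row in M]

def pcg(N, c, precond, tol=1e-10, max_iter=200):
    # Preconditioned conjugate gradients for the SPD system N x = c.
    x = [0.0]*len(c)
    r = c[:]                       # residual c - N x at x = 0
    z = precond(r)
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        if math.sqrt(dot(r, r)) <= tol:
            break
        Np = matvec(N, p)
        alpha = rz / dot(p, Np)
        x = [xi + alpha*pi for xi, pi in zip(x, p)]
        r = [ri - alpha*npi for ri, npi in zip(r, Np)]
        z = precond(r)
        rz_new = dot(r, z)
        p = [zi + (rz_new/rz)*pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Weighted normal equations N = A'WA, c = A'Wb for a tiny invented problem.
A = [[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
w = [1.0, 2.0, 3.0]
b = [1.0, 2.0, 3.0]
n = len(A[0])
N = [[sum(wi*Ai[j]*Ai[k] for wi, Ai in zip(w, A)) for k in range(n)]
     for j in range(n)]
c = [sum(wi*Ai[j]*bi for wi, Ai, bi in zip(w, A, b)) for j in range(n)]

# Jacobi preconditioner: solve with diag(N) only (a crude stand-in).
jacobi = lambda r: [ri / N[i][i] for i, ri in enumerate(r)]
xw = pcg(N, c, jacobi)
```

In the interior-point setting described by the abstract, the payoff comes from reusing one Cholesky factorization across several systems whose weights change only moderately; the Jacobi stand-in above is just the simplest preconditioner that makes the sketch run.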
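The final item has no abstract in this listing, but its subject, the truncated conjugate gradient method for trust-region subproblems, is commonly associated with it. A minimal sketch of that idea follows: run CG on the quadratic model and stop early at the trust-region boundary or on a negative-curvature direction. This is a generic rendition of the technique, not the paper's algorithm verbatim; the test matrices in the usage note are invented.

```python
import math

def dot(u, v):
    return sum(ui*vi for ui, vi in zip(u, v))

def matvec(H, v):
    return [sum(hij*vj for hij, vj in zip(row, v)) for row in H]

def to_boundary(s, d, delta):
    # Largest tau >= 0 with ||s + tau*d|| = delta (positive quadratic root).
    sd, dd, ss = dot(s, d), dot(d, d), dot(s, s)
    tau = (-sd + math.sqrt(sd*sd + dd*(delta*delta - ss))) / dd
    return [si + tau*di for si, di in zip(s, d)]

def trust_region_cg(H, g, delta, tol=1e-10, max_iter=100):
    # Approximately minimize q(s) = g's + 0.5*s'Hs subject to ||s|| <= delta.
    s = [0.0]*len(g)
    r = [-gi for gi in g]          # residual of H s = -g at s = 0
    if math.sqrt(dot(r, r)) <= tol:
        return s
    d = r[:]
    rr = dot(r, r)
    for _ in range(max_iter):
        Hd = matvec(H, d)
        dHd = dot(d, Hd)
        if dHd <= 0.0:             # negative curvature: follow d to the boundary
            return to_boundary(s, d, delta)
        alpha = rr / dHd
        s_new = [si + alpha*di for si, di in zip(s, d)]
        if math.sqrt(dot(s_new, s_new)) >= delta:
            return to_boundary(s, d, delta)
        s = s_new
        r = [ri - alpha*hdi for ri, hdi in zip(r, Hd)]
        rr_new = dot(r, r)
        if math.sqrt(rr_new) <= tol:
            break
        d = [ri + (rr_new/rr)*di for ri, di in zip(r, d)]
        rr = rr_new
    return s

# Interior solution: unconstrained minimizer [1, 0] lies inside delta = 2.
s_in = trust_region_cg([[2.0, 0.0], [0.0, 2.0]], [-2.0, 0.0], 2.0)
# Boundary solution: the same model with delta = 0.5 stops at [0.5, 0].
s_out = trust_region_cg([[2.0, 0.0], [0.0, 2.0]], [-2.0, 0.0], 0.5)
```

Because each iteration needs only a Hessian-vector product, this kind of truncated CG is the standard way such trust-region subproblems are handled at large scale.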