Browsing by Author "Dennis, J.E. Jr."
Now showing 1 - 20 of 42
Item: A Closer Look at Drawbacks of Minimizing Weighted Sums of Objectives for Pareto Set Generation in Multicriteria Optimization Problems (1996-12)
Authors: Das, Indraneel; Dennis, J.E. Jr.
A standard technique for generating the Pareto set in multicriteria optimization problems is to minimize (convex) weighted sums of the different objectives for various settings of the weights. However, it is well known that this method succeeds in getting points from all parts of the Pareto set only when the Pareto curve is convex. This article provides a geometrical argument as to why this is the case. Second, it is a frequent observation that even for convex Pareto curves, an evenly distributed set of weights fails to produce an even distribution of points from all parts of the Pareto set. This article aims to identify the mechanism behind this occurrence. Roughly, the weight is related to the slope of the Pareto curve in the objective space in such a way that an even spread of Pareto points often corresponds to a very uneven distribution of weights. Several examples are provided showing assumed shapes of Pareto curves and the distributions of weights corresponding to an even spread of points on those Pareto curves.

Item: A Computational Note on Markov Decision Processes Without Discounting (1987-07)
Authors: Pfeiffer, Paul E.; Dennis, J.E. Jr.
The Markov decision process is treated in a variety of forms or cases: finite or infinite horizon, with or without discounting. The finite horizon cases and the case of infinite horizon with discounting have received considerable attention. In the infinite horizon case with discounting, the problem either receives a linear programming treatment or is treated by the elegant and effective policy-iteration procedure of Ronald Howard. In the undiscounted case, however, a special form of this procedure is required, which detracts from the directness and elegance of the method.
The difficulty comes in the step generally called the value-determination procedure. The equations used in this step are linearly dependent, so the solution of the system of linear equations requires some adjustment. We propose a new computational procedure which avoids this difficulty and works directly with the average next-period gains and powers of the transition probability matrix. The fundamental computational tools are matrix multiplication and addition.

Item: A Convergence Theory for the Structured BFGS Secant Method with an Application to Nonlinear Least Squares (1987-05)
Authors: Dennis, J.E. Jr.; Martinez, H.J.; Tapia, R.A.
In 1981, Dennis and Walker developed a convergence theory for structured secant methods which included the PSB and the DFP secant methods, but not the straightforward structured version of the BFGS secant method. Here we fill this gap in the theory by establishing a convergence theory for the structured BFGS secant method. A direct application of our new theory gives the first proof of local and q-superlinear convergence of the important structured BFGS secant method for the nonlinear least-squares problem, which is used by Dennis, Gay, and Welsch in the current version of the popular and successful NL2SOL code.

Item: A Global Convergence Theory for General Trust-Region-Based Algorithms for Equality Constrained Optimization (1992-09)
Authors: Dennis, J.E. Jr.; El-Alem, Mahmoud; Maciel, Maria C.
This work presents a global convergence theory for a broad class of trust-region algorithms for the smooth nonlinear programming problem with equality constraints. The main result generalizes Powell's 1975 result for unconstrained trust-region algorithms. The trial step is characterized by very mild conditions on its normal and tangential components. The normal component must satisfy a fraction of Cauchy decrease condition on the quadratic model of the linearized constraints.
The tangential component then must satisfy a fraction of Cauchy decrease condition on a quadratic model of the Lagrangian function in the translated tangent space of the constraints determined by the normal component. The Lagrange multipliers and the Hessians are assumed only to be bounded. The other main characteristic of this class of algorithms is that the step is evaluated by using the augmented Lagrangian as a merit function, with the penalty parameter updated using the El-Alem scheme. The properties of the step, together with the way the penalty parameter is chosen, are sufficient to establish global convergence. As an example, an algorithm is presented which can be viewed as a generalization of the Steihaug-Toint dogleg algorithm for the unconstrained case. It is based on a quadratic programming algorithm that uses as a feasible point a step in the normal direction to the tangent space of the constraints and then takes feasible conjugate reduced-gradient steps to solve the quadratic program. This algorithm should cope quite well with large problems for which effective preconditioners are known.

Item: A MADS Algorithm with a Progressive Barrier for Derivative-Free Nonlinear Programming (2007-12)
Authors: Audet, Charles; Dennis, J.E. Jr.
We propose a new algorithm for general constrained derivative-free optimization. As in most methods, constraint violations are aggregated into a single constraint violation function. As in filter methods, a threshold, or barrier, is imposed on the constraint violation function, and any trial point whose constraint violation function value exceeds this threshold is discarded from consideration. In the new algorithm, unlike the filter method, the amount of constraint violation subject to the barrier is progressively decreased as the algorithm evolves. Using the Clarke nonsmooth calculus, we prove Clarke stationarity of the sequences of feasible and infeasible trial points.
The new method is effective on two academic test problems with up to 50 variables, which were problematic for our GPS filter method. We also test it on a chemical engineering problem. The proposed method generally outperforms our LTMADS in the case where no feasible initial points are known, and it does as well when feasible points are known.

Item: A Memoryless Augmented Gauss-Newton Method for Nonlinear Least-Squares Problems (1985-02)
Authors: Dennis, J.E. Jr.; Songbai, Sheng; Vu, Phuong Ahn
In this paper, we develop, analyze, and test a new algorithm for nonlinear least-squares problems. The algorithm uses a BFGS update of the Gauss-Newton Hessian when some heuristics indicate that the Gauss-Newton method may not make a good step. Some important elements are that the secant or quasi-Newton equations considered are not the obvious ones, and the method does not build up a Hessian approximation over several steps. The algorithm can be implemented easily as a modification of any Gauss-Newton code, and it seems to be useful for large residual problems.

Item: A New Nonlinear Equations Test Problem (1983-06)
Authors: Dennis, J.E. Jr.; Gay, David M.; Vu, Phuong Ahn
This paper presents a set of test problems for nonlinear equations and nonlinear least-squares algorithms. These problems, sent to us by C.V. Nelson of the Maine Medical Center, come from a dipole model of the heart. They are 6 x 6 or 8 x 8, easy to code, cheap to evaluate, and not easy to solve. In support of the latter contention, we present test results from MINPACK and NL2SOL.

Item: A New Parallel Optimization Algorithm for Parameter Identification in Ordinary Differential Equations (1988-09)
Authors: Dennis, J.E. Jr.; Williamson, Karen A.
Often in mathematical modeling, it is necessary to estimate numerical values for parameters occurring in a system of ordinary differential equations from experimental measurements of the solution trajectories.
We will discuss some of the difficulties involved in the solution of this problem, and we will describe a new parallel quasi-Newton algorithm for finding values of the parameters so that the numerical solution of the state equation best fits the observed data in the weighted least-squares sense.

Item: A Pattern Search Filter Method for Nonlinear Programming without Derivatives (2000-03)
Authors: Audet, Charles; Dennis, J.E. Jr.
This paper formulates and analyzes a pattern search method for general constrained optimization based on filter methods for step acceptance. Roughly, a filter method accepts a step that either improves the objective function value or the value of some function that measures the constraint violation. The new algorithm does not compute or approximate any derivatives, penalty constants, or Lagrange multipliers. A key feature of the new algorithm is that it preserves the useful division into global SEARCH and local POLL steps. It is shown here that the algorithm identifies limit points at which optimality conditions depend on local smoothness of the functions; stronger optimality conditions are guaranteed for smoother functions. In the absence of general constraints, the proposed algorithm and its convergence analysis generalize the previous work on unconstrained, bound constrained, and linearly constrained generalized pattern search. The algorithm is illustrated on some test examples and on an industrial wing planform engineering design application.

Item: A Trust Region Strategy for Nonlinear Equality Constrained Optimization (1984-09)
Authors: Celis, M.R.; Dennis, J.E. Jr.
Many current algorithms for nonlinear constrained optimization problems determine a direction by solving a quadratic programming subproblem. The global convergence properties are addressed by using a line search technique and a merit function to modify the length of the step obtained from the quadratic program. In unconstrained optimization, trust region strategies have been very successful.
In this paper we present a new approach for equality constrained optimization problems based on a trust region strategy. The direction selected is not necessarily the solution of the standard quadratic programming subproblem.

Item: A Trust-Region Approach to Nonlinear Systems of Equalities and Inequalities (1994-10)
Authors: Dennis, J.E. Jr.; El-Alem, Mahmoud; Williamson, Karen
In this paper, two new trust-region algorithms for the numerical solution of systems of nonlinear equalities and inequalities are introduced. The formulation is free of arbitrary parameters and possesses sufficient smoothness to exploit the robustness of the trust-region approach. The proposed algorithms are one-sided least-squares trust-region algorithms. The first algorithm is a single-model algorithm, and the second is a multi-model algorithm in which the Cauchy point computation is a model selection procedure. Global convergence analyses for the two algorithms are presented. Our analysis generalizes to nonlinear systems of equalities and inequalities the well-developed theory for nonlinear least-squares problems. Numerical experiments on the two algorithms and their performance are also reported; the results validate the effectiveness of our approach.

Item: A Unified Approach to Global Convergence of Trust Region Methods for Nonsmooth Optimization (1989-07)
Authors: Dennis, J.E. Jr.; Li, Shou-Bai; Tapia, R.A.
This paper investigates the global convergence of trust region (TR) methods for solving nonsmooth minimization problems. For a class of nonsmooth objective functions called regular functions, conditions are found on the TR local models that imply three fundamental convergence properties.
These conditions are shown to be satisfied by Fletcher's TR method for solving constrained optimization problems, Powell's TR method for solving nonlinear fitting problems, Zhang, Kim & Lasdon's successive linear programming method for solving constrained problems, Duff, Nocedal & Reid's TR method for solving systems of nonlinear equations, and El Hallabi & Tapia's TR method for solving systems of nonlinear equations. Thus our results can be viewed as a unified convergence theory for TR methods for nonsmooth problems.

Item: A User's Guide to Nonlinear Optimization Algorithms (1983-08)
Authors: Dennis, J.E. Jr.
The purpose of this paper is to provide a user's introduction to the basic ideas currently favored in nonlinear optimization routines by numerical analysts. The primary focus will be on the unconstrained problem because the main ideas there are much more settled. Although this is not a paper about nonlinear least squares, the rich structure of this important practical problem makes it a convenient example to use to illustrate the ideas we will discuss. We will make most use of this example in the first three sections, which deal with the helpful concept of a local modeling technique and the attendant local convergence analysis. Stress will be put on ways used to improve a poor initial solution estimate, since this is one of the keys to choosing the most suitable routine for a particular application; this material is covered in the rather long Section 4. The discussion of the constrained problem in Section 5 will be a brief outline of the current issues involved in deciding which algorithms to implement. Section 6 is devoted to some concluding remarks, including sparse comments on large problems.

Item: A Variable Metric Variant of the Karmarkar Algorithm for Linear Programming (1986-06)
Authors: Dennis, J.E. Jr.; Morshedi, A.M.; Turner, Kathryn
The most time-consuming part of the Karmarkar algorithm for linear programming is the projection of a vector onto the nullspace of a matrix that changes at each iteration. We present a variant of the Karmarkar algorithm that uses standard variable-metric techniques in an innovative way to approximate this projection. In limited tests, this modification greatly reduces the number of matrix factorizations needed for the solution of linear programming problems.

Item: A View of Unconstrained Optimization (1987-10)
Authors: Dennis, J.E. Jr.; Schnabel, Robert B.
Finding the unconstrained minimizer of a function of more than one variable is an important problem with many practical applications, including data fitting, engineering design, and process control. In addition, techniques for solving unconstrained optimization problems form the basis for most methods for solving constrained optimization problems. This paper surveys the state of the art for solving unconstrained optimization problems and the closely related problem of solving systems of nonlinear equations. First we briefly give some mathematical background. Then we discuss Newton's method, the fundamental method underlying most approaches to these problems, as well as the inexact Newton method. The two main practical deficiencies of Newton's method, the need for analytic derivatives and the possible failure to converge to the solution from poor starting points, are the key issues in unconstrained optimization and are addressed next. We discuss a variety of techniques for approximating derivatives, including finite difference approximations, secant methods for nonlinear equations and unconstrained optimization, and the extension of these techniques to solving large, sparse problems. Then we discuss the main methods used to ensure convergence from poor starting points: line search methods and trust region methods.
Next we briefly discuss two rather different approaches to unconstrained optimization: the Nelder-Mead simplex method and conjugate direction methods. Finally, we comment on some current research directions in the field: the solution of large problems, the solution of data fitting problems, new secant methods, and the solution of singular problems.

Item: An Efficient Class of Direct Search Surrogate Methods for Solving Expensive Optimization Problems with CPU-Time-Related Functions (2008-06)
Authors: Abramson, Mark A.; Asaki, Thomas J.; Dennis, J.E. Jr.; Magallanez, Raymond; Sottile, Matthew J.
In this paper, we characterize a new class of computationally expensive optimization problems and introduce an approach for solving them. In this class of problems, objective function values may be directly related to the computational time required to obtain them, so that, as the optimal solution is approached, the computational time required to evaluate the objective is significantly less than at points farther away from the solution. This is motivated by an application in which each objective function evaluation requires both a numerical fluid dynamics simulation and an image registration process, and the goal is to find the parameter values of a predetermined reference image by comparing the flow dynamics from the numerical simulation and the reference image through the image comparison process. In designing an approach to solve this more general class of problems efficiently, we make use of surrogates based on the CPU times of previously evaluated points, rather than their function values, all within the search step framework of mesh adaptive direct search algorithms. Because of the expected positive correlation between function values and their CPU times, a time cutoff parameter is added to the objective function evaluation to allow its termination during the comparison process if the computational time exceeds a specified threshold.
The approach was tested using the NOMADm and DACE MATLAB software packages, and results are presented.

Item: An Experimental Computer Network to Support Numerical Computation (1982-03)
Authors: Cartwright, Robert; Dennis, J.E. Jr.; Jump, J. Robert; Kennedy, Ken
The Computer Science faculty at Rice University proposes to design and implement an experimental distributed computing system to support numerical computation. Although local networks of single-user machines have already been proven for many nonnumerical applications, the concept has yet to be tried in the context of numerical program development and execution. The Rice Numerical Network, or R^n, will consist of approximately 24 single-user numerical machines equipped with high-resolution bit-mapped screens, a 32-bit central processor, and vector floating point hardware. It will also include several specialized server nodes supporting a high-performance vector floating point processor and various peripheral devices, including a gateway to the SCnet communications network linking the nation's major computer science research centers. The new facility will support a coherent research program in software systems, computer architecture, and quality numerical software, directed at creating a modern reactive environment for numerical computation. Despite stiff competition from industry and other universities, Rice University has recently assembled the nucleus of computer science faculty required to develop an innovative distributed computing system supporting vector numerical computation and to evaluate its utility as a tool for solving important scientific problems.

Item: Analysis of Generalized Pattern Searches (2000-02)
Authors: Audet, Charles; Dennis, J.E. Jr.
This paper contains a new convergence analysis for the Lewis and Torczon generalized pattern search (GPS) class of methods for unconstrained and linearly constrained optimization.
This analysis is motivated by a desire to understand the successful behavior of the algorithm under hypotheses that are satisfied by many practical problems. Specifically, even if the objective function is discontinuous or extended-valued, the methods find a limit point with some minimizing properties. Simple examples show that the strength of the optimality conditions at a limit point depends not only on the algorithm, but also on the directions it uses and on the smoothness of the objective at the limit point in question. The contribution of this paper is to provide a simple convergence analysis that supplies detail about the relation of optimality conditions to objective smoothness properties and to the defining directions for the algorithm, and it gives previous results as corollaries.

Item: Characteristic Shape Sequences for Measures on Images (2006-11)
Authors: Pingel, Rachael L.; Abramson, Mark A.; Asaki, Thomas J.; Dennis, J.E. Jr.
Researchers in many fields often need to quantify the similarity between images using metrics that measure qualities of interest in a robust quantitative manner. We present here the concept of image dimension reduction through characteristic shape sequences. We formulate the problem as a nonlinear optimization program and demonstrate the solution on a test problem of extracting maximal-area ellipses from two-dimensional image data. To solve the problem numerically, we augment the class of mesh adaptive direct search (MADS) algorithms with a filter, so as to allow infeasible starting points and to achieve better local solutions. Results here show that the MADS filter algorithm is successful in the test problem of finding good characteristic ellipse solutions from simple but noisy images.

Item: Comparing Problem Formulations for Coupled Sets of Components (2007-11)
Authors: Arroyo, S.F.; Cramer, E.J.; Frank, P.D.; Dennis, J.E. Jr.
In this paper, several formulations and comparative test results are presented for problems involving the general paradigm of coupled sets of components. This paradigm is general enough to include systems of systems (SoS) and multidisciplinary design optimization (MDO). It is assumed that these systems involve a (potentially inactive) coordinating component, or "Central Authority," and one or more potentially interacting subordinate components. The formulations differ in the amount of control given to the central authority versus the autonomy granted to the subordinate components. Comparative test results are given for several of the formulations on a NASA-generated, public domain, aircraft conceptual design problem.
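The drawback described in the first abstract above — evenly spread weights failing to yield an evenly spread set of Pareto points — is easy to reproduce numerically. Below is a minimal Python sketch on an invented convex bi-objective problem (f1(x) = x, f2(x) = (1 - x)^4 on [0, 1], where every x in [0, 1] is Pareto-optimal); the objectives are chosen for illustration only and are not taken from the paper:

```python
def weighted_sum_argmin(w, n_grid=10000):
    """Minimize w*f1 + (1 - w)*f2 by brute force over a uniform grid on [0, 1]."""
    best_x, best_val = 0.0, float("inf")
    for i in range(n_grid + 1):
        x = i / n_grid
        val = w * x + (1.0 - w) * (1.0 - x) ** 4   # f1(x) = x, f2(x) = (1 - x)**4
        if val < best_val:
            best_x, best_val = x, val
    return best_x

# Evenly spread weights 0.05, 0.15, ..., 0.95 ...
weights = [0.05 + 0.1 * k for k in range(10)]
pareto_x = sorted(weighted_sum_argmin(w) for w in weights)

# ... do not give an evenly spread set of Pareto points: the largest weights
# collapse onto the same endpoint of the Pareto set, and the gaps between the
# remaining points are noticeably uneven.
gaps = [b - a for a, b in zip(pareto_x, pareto_x[1:])]
print(pareto_x)
print(gaps)
```

The uneven gaps and the pile-up of several weights at one endpoint are exactly the slope-based mechanism the abstract describes: the weight fixes the slope of the supporting line, not the location of the Pareto point.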
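The undiscounted Markov decision note above describes a procedure built only from matrix multiplication and addition, applied to the average next-period gains and powers of the transition matrix. A minimal pure-Python sketch of that underlying idea, with a two-state chain and gain vector invented for this example (not taken from the paper): for an ergodic chain, averaging the partial sums of P^k r recovers the long-run per-period gain.

```python
def mat_vec(P, v):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(p * x for p, x in zip(row, v)) for row in P]

def average_gain(P, r, n_steps=2000):
    """Approximate the long-run average gain from each starting state,
    using only matrix-vector products and additions: accumulate P^k r
    for k = 0, ..., n_steps - 1 and divide by n_steps."""
    total = [0.0] * len(r)
    current = list(r)                    # current holds P^k r
    for _ in range(n_steps):
        total = [t + c for t, c in zip(total, current)]
        current = mat_vec(P, current)
    return [t / n_steps for t in total]

# Two-state example: the stationary distribution of P is (5/6, 1/6), so the
# average gain is 5/6 * 2.0 + 1/6 * 1.0 = 11/6 regardless of the start state.
P = [[0.9, 0.1], [0.5, 0.5]]
r = [2.0, 1.0]
print(average_gain(P, r))
```

This is only the value side in its simplest form; the paper's procedure addresses the full policy-improvement setting where the usual value-determination equations are linearly dependent.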