
Browsing by Author "Audet, Charles"

Now showing 1 - 15 of 15
  • A Branch and Cut Algorithm for Nonconvex Quadratically Constrained Quadratic Programming
    (1999-01) Audet, Charles; Hansen, Pierre; Jaumard, Brigitte; Savard, Gilles
    We present a branch and cut algorithm that yields, in finite time, a globally epsilon-optimal solution (with respect to feasibility and optimality) of the nonconvex quadratically constrained quadratic programming problem. The idea is to estimate all quadratic terms by successive linearizations within a branching tree using Reformulation-Linearization Techniques (RLT). To do so, four classes of linearizations (cuts), depending on one to three parameters, are detailed. For each class, we show how to select the best member with respect to a precise criterion. The cuts introduced at any node of the tree are valid in the whole tree, not only within the subtree rooted at that node. To enhance computational speed, the structure created at any node of the tree is flexible enough to be reused at other nodes. Computational results are reported. Some problems from the literature are solved, for the first time, with a proof of global optimality.
  • A MADS Algorithm with a Progressive Barrier for Derivative-Free Nonlinear Programming
    (2007-12) Audet, Charles; Dennis, J.E. Jr.
    We propose a new algorithm for general constrained derivative-free optimization. As in most methods, constraint violations are aggregated into a single constraint violation function. As in filter methods, a threshold, or barrier, is imposed on the constraint violation function, and any trial point whose constraint violation function value exceeds this threshold is discarded from consideration. In the new algorithm, unlike the filter method, the amount of constraint violation subject to the barrier is progressively decreased as the algorithm evolves. Using the Clarke nonsmooth calculus, we prove Clarke stationarity of the sequences of feasible and infeasible trial points. The new method is effective on two academic test problems with up to 50 variables, which were problematic for our GPS filter method. We also test on a chemical engineering problem. The proposed method generally outperforms our LTMADS in the case where no feasible initial points are known, and it does as well when feasible points are known.
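    The progressive-barrier acceptance rule described in this abstract can be sketched roughly as follows. This is a simplified illustration under assumed names, not the paper's exact update rule: trial points whose aggregated violation exceeds the current barrier are discarded, and the barrier is then tightened toward the worst remaining violation.

    ```python
    def progressive_barrier_accept(trial_points, h, h_max):
        """Illustrative acceptance pass: discard trial points whose
        aggregated constraint violation h exceeds the barrier h_max,
        then tighten the barrier toward the worst surviving violation
        (0 if all survivors are feasible). Names are assumptions."""
        kept = [x for x in trial_points if h(x) <= h_max]
        violations = [h(x) for x in kept if h(x) > 0]
        new_h_max = max(violations) if violations else 0.0
        return kept, new_h_max

    # Toy usage: h aggregates the violation of g(x) = x[0] - 1 <= 0.
    h = lambda x: max(0.0, x[0] - 1.0) ** 2
    pts = [[0.5], [1.2], [3.0]]
    kept, h_next = progressive_barrier_accept(pts, h, 1.0)
    # [3.0] is discarded (h = 4 > 1); the barrier tightens to h([1.2])
    ```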
  • A Pattern Search Filter Method for Nonlinear Programming without Derivatives
    (2000-03) Audet, Charles; Dennis, J.E. Jr.
    This paper formulates and analyzes a pattern search method for general constrained optimization based on filter methods for step acceptance. Roughly, a filter method accepts a step that either improves the objective function value or the value of some function that measures the constraint violation. The new algorithm does not compute or approximate any derivatives, penalty constants or Lagrange multipliers. A key feature of the new algorithm is that it preserves the useful division into a global SEARCH step and a local POLL step. It is shown here that the algorithm identifies limit points at which optimality conditions depend on local smoothness of the functions. Stronger optimality conditions are guaranteed for smoother functions. In the absence of general constraints, the proposed algorithm and its convergence analysis generalize the previous work on unconstrained, bound constrained and linearly constrained generalized pattern search. The algorithm is illustrated on some test examples and on an industrial wing planform engineering design application.
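    The filter acceptance idea this abstract refers to can be shown with a minimal sketch of standard filter dominance (an illustration, not the paper's exact implementation; names are assumptions): a trial point is acceptable if no stored (objective, violation) pair dominates it.

    ```python
    def filter_accepts(filter_pts, f_new, h_new):
        """A trial point is acceptable to the filter if no stored pair
        (f_i, h_i) dominates it, i.e. no pair has both f_i <= f_new and
        h_i <= h_new while differing from (f_new, h_new)."""
        for f_i, h_i in filter_pts:
            if f_i <= f_new and h_i <= h_new and (f_i, h_i) != (f_new, h_new):
                return False
        return True

    flt = [(3.0, 0.0), (1.0, 2.0)]
    filter_accepts(flt, 2.0, 1.0)   # undominated -> accepted (True)
    filter_accepts(flt, 4.0, 3.0)   # dominated by (3.0, 0.0) -> rejected (False)
    ```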
  • Analysis of Generalized Pattern Searches
    (2000-02) Audet, Charles; Dennis, J.E. Jr.
    This paper contains a new convergence analysis for the Lewis and Torczon generalized pattern search (GPS) class of methods for unconstrained and linearly constrained optimization. This analysis is motivated by a desire to understand the successful behavior of the algorithm under hypotheses that are satisfied by many practical problems. Specifically, even if the objective function is discontinuous or extended valued, the methods find a limit point with some minimizing properties. Simple examples show that the strength of the optimality conditions at a limit point depends not only on the algorithm, but also on the directions it uses and on the smoothness of the objective at the limit point in question. The contribution of this paper is to provide a simple convergence analysis that supplies detail about the relation of optimality conditions to objective smoothness properties and to the defining directions for the algorithm, and it gives previous results as corollaries.
  • Concavity Cuts for Disjoint Bilinear Programming
    (1999-09) Alarie, Stéphane; Audet, Charles; Jaumard, Brigitte; Savard, Gilles
    We pursue the study of concavity cuts for the disjoint bilinear programming problem. This optimization problem has two equivalent symmetric linear maxmin reformulations, leading to two sets of concavity cuts. We first examine the depth of these cuts by considering the assumptions on the boundedness of the feasible regions of both maxmin and bilinear formulations. We next propose a branch and bound algorithm which makes use of concavity cuts. We also present a procedure that eliminates degenerate solutions. Extensive computational experience is reported. Sparse problems with up to 500 variables in each disjoint set and 100 constraints, and dense problems with up to 60 variables in each set and 60 constraints, are solved in reasonable computing times.
  • Convergence of Mesh Adaptive Direct Search to Second-Order Stationary Points
    (2005-08) Abramson, Mark A.; Audet, Charles
    A previous analysis of second-order behavior of pattern search algorithms for unconstrained and linearly constrained minimization is extended to the more general class of mesh adaptive direct search (MADS) algorithms for general constrained optimization. Because of the ability of MADS to generate an asymptotically dense set of search directions, we are able to establish reasonable conditions under which a subsequence of MADS iterates converges to a limit point satisfying second-order necessary or sufficient optimality conditions for general set-constrained optimization problems.
  • Convergence Results for Pattern Search Algorithms are Tight
    (1998-11) Audet, Charles
    Recently, general definitions of pattern search methods for both unconstrained and linearly constrained optimization were presented. It was shown, under mild conditions, that there exists a subsequence of iterates converging to a stationary point. In the unconstrained case, stronger results are derived under additional assumptions. In this paper, we present three low-dimensional examples showing that these results cannot be strengthened without additional assumptions. First, we show that second-order optimality conditions cannot be guaranteed. Second, we show that there can be an accumulation point of the sequence of iterates whose gradient norm is strictly positive. These two examples are also valid for the bound constrained case. Finally, we show that even under the stronger assumptions of the unconstrained case, there can be infinitely many accumulation points.
  • Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems
    (2004-06) Abramson, Mark A.; Audet, Charles; Dennis, J.E. Jr.
    A new class of algorithms for solving nonlinearly constrained mixed variable optimization problems is presented. This class combines and extends the Audet-Dennis Generalized Pattern Search (GPS) algorithms for bound constrained mixed variable optimization, and their GPS-filter algorithms for general nonlinear constraints. In generalizing existing algorithms, new theoretical convergence results are presented that reduce seamlessly to existing results for more specific classes of problems. While no local continuity or smoothness assumptions are required to apply the algorithm, a hierarchy of theoretical convergence results based on the Clarke calculus is given, in which local smoothness dictates what can be proved about certain limit points generated by the algorithm. To demonstrate its usefulness, the algorithm is applied to the design of a load-bearing thermal insulation system. We believe this is the first algorithm with provable convergence results to directly target this class of problems.
  • Generalized Pattern Searches with Derivative Information
    (2002-06) Abramson, Mark A.; Audet, Charles; Dennis, J.E. Jr.
    A common question asked by users of direct search algorithms is how to use derivative information at iterates where it is available. This paper addresses that question with respect to Generalized Pattern Search (GPS) methods for unconstrained and linearly constrained optimization. Specifically, this paper concentrates on the GPS POLL step. Polling is done to certify the need to refine the current mesh, and it requires O(n) function evaluations in the worst case. We show that the use of derivative information significantly reduces the maximum number of function evaluations necessary for POLL steps, even to a worst case of a single function evaluation with certain algorithmic choices given here. Furthermore, we show that rather rough approximations to the gradient are sufficient to reduce the POLL step to a single function evaluation. We prove that using these less expensive POLL steps does not weaken the known convergence properties of the method, all of which depend only on the POLL step.
  • Mesh Adaptive Direct Search Algorithms for Constrained Optimization
    (2004-01) Audet, Charles; Dennis, J.E. Jr.
    This paper introduces the Mesh Adaptive Direct Search (MADS) class of algorithms for nonlinear optimization. MADS extends the Generalized Pattern Search (GPS) class by allowing local exploration, called polling, in a dense set of directions in the space of optimization variables. This means that under certain hypotheses, including a weak constraint qualification due to Rockafellar, MADS can treat constraints by the extreme barrier approach of setting the objective to infinity for infeasible points and treating the problem as unconstrained. The main GPS convergence result is to identify limit points where the Clarke generalized derivatives are nonnegative in a finite set of directions, called refining directions. Although, in the unconstrained case, nonnegative combinations of these directions span the whole space, the fact that there can only be finitely many GPS refining directions limits rigorous justification of the barrier approach to finitely many constraints for GPS. The MADS class of algorithms extends this result; the set of refining directions may even be dense in Rn, although we give an example where it is not. We present an implementable instance of MADS, and we illustrate and compare it with GPS on some test problems. We also illustrate the limitation of our results with examples.
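    The extreme-barrier approach mentioned in this abstract is simple to sketch: replace the objective by +infinity at infeasible points, then run any unconstrained direct search on the wrapped function. The toy coordinate-direction poll below is a stand-in for illustration only, not MADS itself, and all names are assumptions.

    ```python
    import math

    def extreme_barrier(f, feasible):
        """Wrap the objective: infeasible points evaluate to +inf, so an
        unconstrained direct-search method can be applied directly."""
        return lambda x: f(x) if feasible(x) else math.inf

    def coordinate_poll(fb, x, step):
        """One poll of the 2n coordinate directions; return an improving
        point if found, else None (signalling a step refinement)."""
        for i in range(len(x)):
            for s in (+step, -step):
                y = list(x)
                y[i] += s
                if fb(y) < fb(x):
                    return y
        return None

    # Toy usage: minimize x0^2 + x1^2 subject to x0 >= 0.25.
    fb = extreme_barrier(lambda x: x[0]**2 + x[1]**2,
                         lambda x: x[0] >= 0.25)
    x, step = [1.0, 1.0], 0.5
    while step > 1e-3:
        y = coordinate_poll(fb, x, step)
        if y is None:
            step /= 2    # poll failed: refine the step size
        else:
            x = y        # poll succeeded: move to the better point
    # x converges to the constrained minimizer [0.25, 0.0]
    ```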
  • Mixed Variable Optimization of the Number and Composition of Heat Intercepts in a Thermal Insulation System
    (2000-06) Kokkolaras, Michael; Audet, Charles; Dennis, J.E. Jr.
    In the literature, thermal insulation systems with a fixed number of heat intercepts have been optimized with respect to intercept locations and temperatures. The number of intercepts and the types of insulators that surround them were chosen by parametric studies, because the optimization methods used could not treat such categorical variables. Discrete optimization variables are categorical if the objective function or the constraints cannot be evaluated unless the variables take one of a prescribed enumerable set of values. The key issue is that categorical variables cannot be treated as ordinary discrete variables are treated, by relaxing them to continuous variables with a side constraint that they be discrete at the solution. A new mixed variable programming (MVP) algorithm makes it possible to optimize directly with respect to mixtures of discrete, continuous, and categorical decision variables. The result of applying MVP is shown here to give a 65% reduction in the objective function over the previously published result for a thermal insulation model from the engineering literature. This reduction is largely because MVP optimizes simultaneously with respect to the number of heat intercepts and the choices from a list of insulator types, as well as intercept locations and temperatures. The main purpose of this paper is to show that the mixed variable optimization algorithm can be applied effectively to a broad class of optimization problems in engineering that could not be easily solved with earlier methods.
  • ORTHOMADS: A Deterministic MADS Instance with Orthogonal Directions
    (2008-02) Abramson, Mark A.; Audet, Charles; Dennis, J.E. Jr.; Le Digabel, Sébastien
    The purpose of this paper is to introduce a new way of choosing directions for the Mesh Adaptive Direct Search (MADS) class of algorithms. The advantages of this new OrthoMADS instantiation of MADS are that the polling directions are chosen deterministically, ensuring that the results of a given run are repeatable, and that they are orthogonal to each other, so that the convex cones of missed directions at each iteration are minimal in size. The convergence results for OrthoMADS follow directly from those already published for MADS, and they hold deterministically, rather than with probability one as for LTMADS, the first MADS instance. The initial numerical results are quite good for the smooth and nonsmooth, constrained and unconstrained problems considered here.
  • Parallel Space Decomposition of the Mesh Adaptive Direct Search Algorithm
    (2007-11) Audet, Charles; Dennis, J.E. Jr.; Le Digabel, Sébastien
    This paper describes a Parallel Space Decomposition (PSD) technique for the Mesh Adaptive Direct Search (MADS) algorithm. MADS extends Generalized Pattern Search for constrained nonsmooth optimization problems. The objective here is to solve larger problems more efficiently. The new method (PSD-MADS) is an asynchronous parallel algorithm in which the processes solve problems over subsets of variables. The convergence analysis based on the Clarke calculus is essentially the same as for the MADS algorithm. A practical implementation is described and some numerical results on problems with up to 500 variables illustrate advantages and limitations of PSD-MADS.
  • Pattern search algorithms for mixed variable general constrained optimization problems
    (2003) Abramson, Mark Aaron; Dennis, John E., Jr.; Audet, Charles
    A new class of algorithms for solving nonlinearly constrained mixed variable optimization problems is presented. The Audet-Dennis Generalized Pattern Search (GPS) algorithm for bound constrained mixed variable optimization problems is extended to problems with general nonlinear constraints by incorporating a filter, in which new iterates are accepted whenever they decrease the incumbent objective function value or constraint violation function value. Additionally, the algorithm can exploit any available derivative information (or rough approximation thereof) to speed convergence without sacrificing the flexibility often employed by GPS methods to find better local optima. In generalizing existing GPS algorithms, the new theoretical convergence results presented here reduce seamlessly to existing results for more specific classes of problems. While no local continuity or smoothness assumptions are made, a hierarchy of theoretical convergence results is given, in which the assumptions dictate what can be proved about certain limit points of the algorithm. A new MATLAB software package was developed to implement these algorithms. Numerical results are provided for several nonlinear optimization problems from the CUTE test set, as well as a difficult nonlinearly constrained mixed variable optimization problem in the design of a load-bearing thermal insulation system used in cryogenic applications.
  • Pattern Search Algorithms for Mixed Variable Programming
    (1999-05) Audet, Charles; Dennis, J.E. Jr.
    Many engineering optimization problems involve a special kind of discrete variable that can be represented by a number, but this representation has no significance. Such variables arise when a decision involves a choice from an unordered list of categories. This has two implications: the standard approach of solving problems with continuous relaxations of discrete variables is not available, and the notion of local optimality must be defined through a user-specified set of neighboring points. We present a class of direct search algorithms to provide limit points that satisfy some appropriate necessary conditions for local optimality for such problems. We give a more expensive version of the algorithm that guarantees additional necessary optimality conditions. A small example illustrates the differences between the two versions. A real thermal insulation system design problem illustrates the efficacy of the user controls for this class of algorithms.