
Browsing by Author "Dennis, John E., Jr."

Now showing 1 - 17 of 17
    A framework for managing models in nonlinear optimization of computationally expensive functions
    (1999) Serafini, David Brian; Dennis, John E., Jr.
One of the most significant problems in the application of standard optimization methods to real-world engineering design problems is that the computation of the objective function often takes so much computer time (sometimes hours) that traditional optimization techniques are not practical. A solution that has long been used in this situation has been to approximate the objective function with something much cheaper to compute, called a "model" (or surrogate), and optimize the model instead of the actual objective function. This simple approach succeeds some of the time, but sometimes it fails because there is not sufficient a priori knowledge to build an adequate model. One way to address this problem is to build the model with whatever a priori knowledge is available, and during the optimization process sample the true objective at selected points and use the results to monitor the progress of the optimization and to adapt the model in the region of interest. We call this approach "model management". This thesis will build on the fundamental ideas and theory of pattern search optimization methods to develop a rigorous methodology for model management. A general framework for model management algorithms will be presented along with a convergence analysis. A software implementation of the framework, which allows for the reuse of existing modeling and optimization software, has been developed and results for several test problems will be presented. The model management methodology and potential applications in aerospace engineering are the subject of an ongoing collaboration between researchers at Boeing, IBM, Rice University, and the College of William & Mary.
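The sample-and-verify loop described above can be sketched in a few lines. This is a generic illustration of the surrogate idea, not the framework developed in the thesis; the quartic test objective and all names are invented, with three true-function samples per iteration standing in for calls to an expensive simulation.

```python
import numpy as np

def true_objective(x):
    # Stands in for an expensive simulation (hours per evaluation in practice).
    return (x - 2.0) ** 2 + 0.1 * (x - 2.0) ** 4

def model_managed_minimize(x0, radius=1.0, iters=25):
    x = x0
    for _ in range(iters):
        # Sample the true objective at three points and fit a cheap quadratic model.
        pts = np.array([x - radius, x, x + radius])
        vals = np.array([true_objective(p) for p in pts])
        a, b, _c = np.polyfit(pts, vals, 2)
        # Minimize the model over the sampled region.
        cand = -b / (2.0 * a) if a > 0 else (pts[0] if vals[0] < vals[2] else pts[2])
        cand = min(max(cand, x - radius), x + radius)
        # Trust the model only if the true objective confirms the improvement;
        # otherwise shrink the region in which the model is used.
        if true_objective(cand) < true_objective(x):
            x = cand
        else:
            radius *= 0.5
    return x

x_star = model_managed_minimize(0.0)   # true minimizer is x = 2
```

The key point, as in the thesis, is that the true objective is consulted only to accept steps and adapt the region of trust; all the optimization work is done on the cheap model.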
    A global convergence theory for a class of trust region algorithms for constrained optimization
    (1988) El-Alem, Mahmoud Mahmoud; Tapia, Richard A.; Dennis, John E., Jr.
    In this research we present a trust region algorithm for solving the equality constrained optimization problem. This algorithm is a variant of the 1984 Celis-Dennis-Tapia algorithm. The augmented Lagrangian function is used as a merit function. A scheme for updating the penalty parameter is presented. The behavior of the penalty parameter is discussed. We present a global and local convergence analysis for this algorithm. We also show that under mild assumptions, in a neighborhood of the minimizer, the algorithm will reduce to the standard SQP algorithm; hence the local rate of convergence of SQP is maintained. Our global convergence theory is sufficiently general that it holds for any algorithm that generates steps that give at least a fraction of Cauchy decrease in the quadratic model of the constraints.
    A global convergence theory for a general class of trust region algorithms for equality constrained optimization
    (1993) Maciel, Maria Cristina; Dennis, John E., Jr.
    This work is concerned with global convergence results for a broad class of trust region sequential quadratic programming algorithms for the smooth nonlinear programming problem with equality constraints. The family of algorithms to which our results apply is characterized by very mild conditions on the normal and tangential components of the steps that its members generate. The normal component must predict a fraction of Cauchy decrease condition on the quadratic model of the linearized constraints. The tangential component must predict a fraction of Cauchy decrease condition on the quadratic model of the Lagrangian function associated with the problem in the tangent space of the constraints. The other main characteristic of this class of algorithms is that the trial step is evaluated for acceptance by using as merit function the Fletcher exact penalty function with a penalty parameter specified by El-Alem. The properties of the step together with the way that the penalty parameter is chosen allow us to establish that while the algorithm does not terminate, the sequence of trust region radii is bounded away from zero and the nondecreasing sequence of penalty parameters is eventually constant. These results lead us to conclude that the algorithms are well defined and that they are globally convergent. The class includes well-known algorithms based on the Celis-Dennis-Tapia subproblem and on the Vardi subproblem. As an example we present an algorithm which can be viewed as a generalization of the Steihaug-Toint dogleg method for the unconstrained case. It is based on a quadratic programming algorithm that uses as feasible point a step in the normal direction to the tangent space of the constraints and then does feasible conjugate reduced-gradient steps to solve the quadratic program. This algorithm should cope quite well with large problems.
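The "fraction of Cauchy decrease" condition that recurs in this theory can be made concrete. The sketch below, with invented data, computes the Cauchy point: the minimizer of the quadratic model along the steepest-descent direction inside the trust region. A trial step component is admissible in frameworks of this kind if it achieves at least a fixed fraction of the model decrease the Cauchy point attains.

```python
import numpy as np

def cauchy_point(g, B, delta):
    # Minimize m(p) = g.p + 0.5 p.B.p along -g subject to ||p|| <= delta.
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    if gBg <= 0:
        tau = 1.0                              # model decreases all the way to the boundary
    else:
        tau = min(1.0, gnorm ** 3 / (delta * gBg))
    return -tau * (delta / gnorm) * g

g = np.array([4.0, -2.0])                      # model gradient (example data)
B = np.array([[2.0, 0.0], [0.0, 1.0]])         # model Hessian (example data)
p_c = cauchy_point(g, B, delta=1.0)
pred = -(g @ p_c + 0.5 * p_c @ B @ p_c)        # model decrease achieved at the Cauchy point
```

A step p then satisfies a fraction of Cauchy decrease condition if its predicted decrease is at least sigma * pred for some fixed sigma in (0, 1].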
    A mixed integer nonlinear formulation for improving membrane filtration water treatment plant design
    (2001) Klampfl, Erica Zimmer; Dennis, John E., Jr.
This thesis provides several contributions to the problem of building large-scale membrane filtration water treatment plants needed to meet increasingly stringent drinking water standards. The first contribution is an extension of a small-scale model of membrane filtration water treatment plants to a model that determines building specifications for the needed large-scale plants at a minimum cost. The large-scale model allows choosing a mix of feed and backflush pumps and partitioning the membrane area into more than one array in order to capture design decisions that can greatly reduce the cost. However, these additions to the model change the mathematical formulation from a small nonconvex nonlinear programming problem (NLP) to a large nonconvex mixed integer nonlinear programming problem (MINLP), which is much more difficult to solve. To address the difficulties associated with solving general nonconvex MINLPs, the second contribution is an algorithm that exploits the structure of the nonconvex MINLP, together with the code that implements it. The MINLP is reformulated so that the algorithm needs only to iteratively solve alternations of an NLP and an integer programming problem (IP). The algorithm guarantees termination at a local optimizer of the MINLP, requires the solution of only a small NLP and a relatively small IP (where most algorithms require a large NLP and a large MILP), and terminates in very few iterations. Finally, we establish design guidelines for building large-scale membrane filtration water treatment plants. These guidelines include suggestions on how to choose an appropriate mix of feed and backflush pumps, how to partition the membrane area for different size plants, and how to estimate costs. Specifically, we show that larger plants can be operated more cost-efficiently than smaller plants at higher recoveries and that a more flexible consideration of facility configuration and pump selection may reduce costs for larger plants by approximately 20% per year.
    A robust trust region algorithm for nonlinear programming
    (1990) Williamson, Karen Anne; Dennis, John E., Jr.
    This paper develops and tests a trust region algorithm for the nonlinear equality constrained optimization problem. Our goal is to develop a robust algorithm that can handle lack of second-order sufficiency away from the solution in a natural way. Celis, Dennis and Tapia (1985) give a trust region algorithm for this problem, but in certain situations their trust region subproblem is too difficult to solve. The algorithm given here is based on the restriction of the trust region subproblem given by Celis, Dennis and Tapia (1985) to a relevant two-dimensional subspace. This restriction greatly facilitates the solution of the subproblem. The trust region subproblem that is the focus of this work requires the minimization of a possibly non-convex quadratic subject to two quadratic constraints in two dimensions. The solution of this problem requires the determination of all the global solutions, and the non-global solution, if it exists, to the standard unconstrained trust region subproblem. Algorithms for approximating a single global solution to the unconstrained trust region subproblem have been well-established. Analytical expressions for all of the solutions will be derived for a number of special cases, and necessary and sufficient conditions are given for the existence of a non-global solution for the general case of the two-dimensional unconstrained trust region subproblem. Finally, numerical results are presented for a preliminary implementation of the algorithm, and these results verify that it is indeed robust.
    A subgradient algorithm for nonlinear integer programming and its parallel implementation
    (1991) Wu, Zhijun; Dennis, John E., Jr.; Bixby, Robert E.
This work concerns efficiently solving a class of nonlinear integer programming problems: $\min\{f(x) : x \in \{0,1\}^{n}\}$, where $f(x)$ is a general nonlinear function. The notion of a subgradient for the objective function is introduced, a necessary and sufficient condition for the optimal solution is constructed, and a new algorithm, called the subgradient algorithm, is developed. The algorithm is an iterative procedure that searches for the solution among feasible points; in each iteration it generates the next iterate by solving a local piecewise linear model of the original problem, constructed from supporting planes for the objective function at a set of feasible points. Special continuous optimization techniques are used to compute the supporting planes. The problem for each local piecewise linear model is solved by solving an equivalent linear integer program. The fundamental theory for the new approach is built, and all related mathematical proofs and derivations, such as proofs of the convergence properties and finiteness of the algorithm as well as the correct formulation of the subproblems, are presented. The algorithm is parallelized and implemented on parallel distributed-memory machines. Preliminary numerical results show that the algorithm can solve test problems effectively. To implement the subgradient algorithm, a parallel software system written in EXPRESS C was developed. The system contains a group of parallel subroutines that can be used for either continuous or discrete optimization, such as subroutines for QR, LU, and Cholesky factorizations and triangular system solvers. A sequential implementation of the simplex algorithm for linear programming is also included. In particular, a parallel branch-and-bound procedure is developed. Rather than directly parallelizing the sequential binary branch-and-bound algorithm, a parallel strategy with multiple branching is used to achieve good processor scheduling. Performance results of the system on the NCUBE are given.
    Automatic differentiation: Overview and application to systems of parameterized nonlinear equations
    (1993) Rosemblun, Marcela Laura; Dennis, John E., Jr.
    Automatic Differentiation is a computational technique that allows the evaluation of derivatives of functions defined by computer programs. Derivatives are calculated by applying the chain rule of differential calculus to the sequence of elementary computations involved in the program. In this work, an overview of the theory and implementation of automatic differentiation is presented, as well as a description of the available software. An application of automatic differentiation in the context of solving systems of parameterized nonlinear equations is discussed. In this application, the "differentiated" functions are implementations of Newton's method and Broyden's method. The iterates generated by the algorithms are differentiated with respect to the parameters. The results show that whenever the sequence of iterates converges to a solution of the system, the corresponding sequence of derivatives (computed by automatic differentiation) also converges to the correct value. Additionally, we show that the "differentiated" algorithms can be successfully employed in the solution of parameter identification problems via the Black-Box method.
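As a concrete illustration of the chain-rule mechanism described above, here is a minimal forward-mode differentiator using dual numbers. This is a standard textbook construction, not the thesis software: each value carries its derivative, and every elementary operation applies the chain rule to both.

```python
import math

class Dual:
    """A value paired with its derivative with respect to the input."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def d_sin(x):
    # Chain rule for an elementary function: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# Differentiate x*x + sin(x) at x = 1; the exact derivative is 2x + cos(x).
x = Dual(1.0, 1.0)          # seed the input derivative with 1
y = x * x + d_sin(x)
```

Here `y.val` is the function value and `y.dot` is the derivative, both produced by the same sweep through the computation; this is exactly how a "differentiated" program tracks sensitivities of its iterates.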
    Convergence properties of the Barzilai and Borwein gradient method
    (1991) Raydan M., Marcos; Tapia, Richard A.; Dennis, John E., Jr.
In a recent paper, Barzilai and Borwein presented a new choice of steplength for the gradient method. Their choice does not guarantee descent in the objective function, yet it greatly speeds up the convergence of the method. We derive an interesting relationship between any gradient method and the shifted power method. This relationship allows us to establish the convergence of the Barzilai and Borwein method when applied to the problem of minimizing any strictly convex quadratic function (Barzilai and Borwein considered only 2-dimensional problems). Our point of view also allows us to explain the remarkable improvement obtained by using this new choice of steplength. For the two-eigenvalue case we present some very interesting convergence rate results. We show that our Q- and R-rate of convergence analysis is sharp and we compare it with the Barzilai and Borwein analysis. We derive the preconditioned Barzilai and Borwein method and present preliminary numerical results indicating that it is an effective method, as compared to the preconditioned conjugate gradient method, for the numerical solution of some special symmetric positive definite linear systems that arise in the numerical solution of partial differential equations.
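For reference, the Barzilai-Borwein steplength itself is simple to implement. The sketch below applies the first BB choice, alpha_k = (s.s)/(s.y) with s = x_k - x_{k-1} and y = g_k - g_{k-1}, to a small strictly convex quadratic, the setting of the convergence result above; the diagonal test matrix is invented for illustration.

```python
import numpy as np

A = np.diag([1.0, 10.0, 100.0])        # SPD, so f(x) = 0.5 x.Ax - b.x is strictly convex
b = np.array([1.0, 1.0, 1.0])
grad = lambda x: A @ x - b

x_prev = np.zeros(3)
x = np.full(3, 0.1)                    # any distinct second starting point works
for _ in range(300):
    g, g_prev = grad(x), grad(x_prev)
    s, y = x - x_prev, g - g_prev
    if s @ y == 0.0:                   # converged to machine precision
        break
    alpha = (s @ s) / (s @ y)          # Barzilai-Borwein steplength from the previous step
    x_prev, x = x, x - alpha * g

x_star = np.linalg.solve(A, b)         # exact minimizer, for comparison
```

Note that no line search is performed and individual iterations may increase f; the nonmonotone sequence nevertheless converges for strictly convex quadratics.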
    Global convergence of trust region methods for minimizing a nondifferentiable function
    (1989) Li, Shou-Bai; Tapia, Richard A.; Dennis, John E., Jr.
Three fundamental convergence properties of trust region (TR) methods for solving nonsmooth unconstrained minimization problems are considered in this paper. The first is the prevention of false termination of the TR iterates, i.e., if the optimal solution of the TR subproblem is at the current iterate, then the current iterate is a stationary point of the objective function. Under the assumptions given in this thesis, we prove that false termination cannot happen. The second property is that an acceptable TR step can eventually be obtained by reducing the size of the trust region. The third property states that any accumulation point of the TR iterates is a stationary point of the objective function. The convergence analysis is made for two classes of nonsmooth objective functions: regular functions and locally Lipschitz functions. Some general assumptions made on the objective function and the local model are shown to be satisfied by many nonsmooth TR methods. An example of a locally Lipschitz function is also proved to satisfy these assumptions. Under these assumptions, a unified approach to analyzing the global convergence of TR methods is provided in the paper. The order of approximation between the objective function and the local model at each iteration plays an important part in the analysis. Three equivalent conditions for first order approximation are derived for nonsmooth TR methods. We give an alternative approach for obtaining the convergence result of El Hallabi & Tapia (1987) for TR methods with convex local models.
    Local and superlinear convergence of structured secant methods from the convex class
    (1988) Martinez R., Hector Jairo; Dennis, John E., Jr.; Tapia, Richard A.
In this thesis we develop a unified theory for establishing the local and q-superlinear convergence of secant methods which use updates from Broyden's convex class and have been modified to take advantage of the structure present in the Hessian when constructing approximate Hessians. As an application of this theory, we show the local and q-superlinear convergence of any structured secant method which uses updates from the convex class for the equality-constrained optimization problem and the nonlinear least-squares problem. Particular cases of these methods are the SQP augmented scale BFGS and DFP secant methods for constrained optimization problems introduced by Tapia. Another particular case, for which local and q-superlinear convergence is proved for the first time here, is the Al-Baali and Fletcher modification of the structured BFGS secant method considered by Dennis, Gay and Welsch for the nonlinear least-squares problem and implemented in the current version of the NL2SOL code.
    Multidirectional search: A direct search algorithm for parallel machines
    (1989) Torczon, Virginia Joanne; Dennis, John E., Jr.
    In recent years there has been a great deal of interest in the development of optimization algorithms which exploit the computational power of parallel computer architectures. We have developed a new direct search algorithm, which we call multi-directional search, that is ideally suited for parallel computation. Our algorithm belongs to the class of direct search methods, a class of optimization algorithms which neither compute nor approximate any derivatives of the objective function. Our work, in fact, was inspired by the simplex method of Spendley, Hext, and Himsworth, and the simplex method of Nelder and Mead. The multi-directional search algorithm is inherently parallel. The basic idea of the algorithm is to perform concurrent searches in multiple directions. These searches are free of any interdependencies, so the information required can be computed in parallel. A central result of our work is the convergence analysis for our algorithm. By requiring only that the function be continuously differentiable over a bounded level set, we can prove that a subsequence of the points generated by the multi-directional search algorithm converges to a stationary point of the objective function. This is of great interest since we know of few convergence results for practical direct search algorithms. We also present numerical results indicating that the multi-directional search algorithm is robust, even in the presence of noise. Our results include comparisons with the Nelder-Mead simplex algorithm, the method of steepest descent, and a quasi-Newton method. One surprising conclusion of our numerical tests is that the Nelder-Mead simplex algorithm is not robust. We close with some comments about future directions of research.
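The core iteration is easy to sketch. The toy implementation below, on an invented quadratic, reflects the whole simplex through its best vertex, tries an expansion when the reflection improved, and contracts otherwise; all trial evaluations within each phase are independent of one another, which is the source of the parallelism. This is an illustration of the idea, not the thesis implementation.

```python
import numpy as np

def mds_minimize(f, simplex, iters=150):
    V = np.array(simplex, dtype=float)            # one vertex per row
    for _ in range(iters):
        V = V[np.argsort([f(v) for v in V])]      # V[0] is the best vertex
        best = f(V[0])
        R = 2.0 * V[0] - V[1:]                    # reflect every other vertex through V[0]
        fR = [f(r) for r in R]                    # these n evaluations can run in parallel
        if min(fR) < best:
            E = 3.0 * V[0] - 2.0 * V[1:]          # expansion: double the reflected edges
            V[1:] = E if min(f(e) for e in E) < min(fR) else R
        else:
            V[1:] = 0.5 * (V[0] + V[1:])          # contraction: shrink toward the best vertex
    return V[0]

quad = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2
x_best = mds_minimize(quad, [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```

Because every simplex in the sequence is similar to the starting one, the search directions cannot degenerate, which is what the convergence analysis exploits.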
    Multilevel algorithms for nonlinear equations and equality constrained optimization
    (1993) Alexandrov, Natalia; Dennis, John E., Jr.
A general trust region strategy is proposed for solving nonlinear systems of equations and equality constrained optimization problems by means of multilevel algorithms. The idea is to use the trust region strategy to globalize the Brent algorithms for solving nonlinear equations, and to extend them to algorithms for solving optimization problems. The new multilevel algorithm for nonlinear equality constrained optimization operates as follows. The constraints are divided into an arbitrary number of blocks dictated by the application. The trial step from the current solution approximation to the next one is computed as a sum of substeps, each of which must predict a fraction of Cauchy decrease on the subproblem of minimizing the model of each constraint block, and, finally, the model of the objective function, restricted to the intersection of the null spaces of all the preceding linearized constraints. The models of each constraint block and of the objective function are built by using the function and derivative information at different points. The merit function used to evaluate the step is a modified $\ell_2$ penalty function with nested penalty parameters. The scheme for updating the penalty parameters is a generalization of the one proposed by El-Alem. The algorithm is shown to be well-defined and globally convergent under reasonable assumptions. The global convergence theory for the optimization algorithm implies global convergence of the multilevel algorithm for nonlinear equations and of a modification of a class of trust region algorithms proposed by Maciel and by Dennis, El-Alem and Maciel. The algorithms are expected to become flexible tools for solving a variety of optimization problems and to be of great practical use in applications such as multidisciplinary design optimization. In addition, they serve to establish a foundation for the study of the general multilevel optimization problem.
    Nonlinear multicriteria optimization and robust optimality
    (1997) Das, Indraneel; Dennis, John E., Jr.
    This dissertation attempts to address two important problems in systems engineering, namely, multicriteria optimization and robustness optimization. In fields ranging from engineering to the social sciences designers are very often required to make decisions that attempt to optimize several criteria or objectives at once. Mathematically this amounts to finding the Pareto optimal set of points for these constrained multiple criteria optimization problems which happen to be nonlinear in many realistic situations, particularly in engineering design. Traditional techniques for nonlinear multicriteria optimization suffer from various drawbacks. The popular method of minimizing weighted sums of the multiple objectives suffers from the deficiency that choosing an even spread of 'weights' does not yield an even spread of points on the Pareto surface and further this spread is often quite sensitive to the relative scales of the functions. A continuation/homotopy based strategy for tracing out the Pareto curve tries to make up for this deficiency, but unfortunately requires exact second derivative information and further cannot be applied to problems with more than two objectives in general. Another technique, goal programming, requires prior knowledge of feasible goals which may not be easily available for more than two objectives. Normal-Boundary Intersection (NBI), a new technique introduced in this dissertation, overcomes all of the difficulties inherent in the existing techniques by introducing a better parametrization of the Pareto set. It is rigorously proved that NBI is completely independent of the relative scales of the functions and is quite successful in producing an evenly distributed set of points on the Pareto set given an evenly distributed set of 'NBI parameters' (comparable to the 'weights' in minimizing weighted sums of objectives). 
Further, this method can easily handle more than two objectives while retaining the computational efficiency of continuation-type algorithms, which is an improvement over homotopy techniques for tracing the trade-off curve. Various aspects of NBI, including computational issues and its relationships with minimizing convex combinations and goal programming, are discussed in this dissertation. Finally, some case studies from engineering disciplines are performed using NBI. The other facet of this dissertation deals with robustness optimization, a concept useful in quantifying the stability of an optimum in the face of random fluctuations in the design variables. This robustness optimization problem is presented as an application of multicriteria optimization, since it essentially involves the simultaneous minimization of two criteria: the objective function value at a point and the dispersion in the function values in a neighborhood of the point. Moreover, a formulation of the robustness optimization problem is presented that fits the framework of constrained nonlinear optimization, an improvement on existing formulations, which deal either with unconstrained nonlinear problems or with constrained linear ones.
    Pattern search algorithms for mixed variable general constrained optimization problems
    (2003) Abramson, Mark Aaron; Dennis, John E., Jr.; Audet, Charles
A new class of algorithms for solving nonlinearly constrained mixed variable optimization problems is presented. The Audet-Dennis Generalized Pattern Search (GPS) algorithm for bound constrained mixed variable optimization problems is extended to problems with general nonlinear constraints by incorporating a filter, in which new iterates are accepted whenever they decrease the incumbent objective function value or constraint violation function value. Additionally, the algorithm can exploit any available derivative information (or rough approximation thereof) to speed convergence without sacrificing the flexibility often employed by GPS methods to find better local optima. In generalizing existing GPS algorithms, the new theoretical convergence results presented here reduce seamlessly to existing results for more specific classes of problems. While no local continuity or smoothness assumptions are made, a hierarchy of theoretical convergence results is given, in which the assumptions dictate what can be proved about certain limit points of the algorithm. A new MATLAB software package was developed to implement these algorithms. Numerical results are provided for several nonlinear optimization problems from the CUTE test set, as well as a difficult nonlinearly constrained mixed variable optimization problem in the design of a load-bearing thermal insulation system used in cryogenic applications.
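The poll-and-refine mechanism that GPS builds on (without the filter or the mixed-variable machinery) can be sketched in a few lines. The quadratic test function here is invented, and a real GPS implementation also restricts iterates to a mesh and allows richer direction sets.

```python
def pattern_search(f, x, step=1.0, tol=1e-6, max_iter=10000):
    """Minimize f by polling the 2n coordinate directions; refine on failure."""
    x = list(x)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):          # the 2n coordinate poll directions
                trial = x[:]
                trial[i] += d
                ft = f(trial)
                if ft < fx:                   # accept any improving poll point
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                       # unsuccessful poll: refine the mesh
    return x, fx

f = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2
x_best, f_best = pattern_search(f, [0.0, 0.0])
```

No derivatives are evaluated anywhere; the convergence theory rests entirely on the geometry of the poll directions and the mesh refinement, which is what allows extensions like the filter above.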
    Structured secant updates for nonlinear constrained optimization
    (1991) Overley, H. Kurt; Dennis, John E., Jr.
    Two new updates are presented, the UHU update and a modified Gurwitz update, for approximating the Hessian of the Lagrangian in nonlinear constrained optimization problems. Under the standard assumptions, the new UHU algorithm is shown to converge locally at a two-step q-superlinear rate. With the additional assumption that the update can be performed at every iteration, the UHU method converges locally at a one-step q-superlinear rate. Numerical experiments are performed on some full Hessian methods including Powell's modified BFGS and Tapia's ASSA and SALSA algorithms, and on reduced Hessian methods including the two new updates, the Coleman-Fenyes update, the Nocedal-Overton method, and the two-stage Gurwitz update. These experiments show that the new updates compare favorably with existing methods.
    The solution of a class of limited diversification portfolio selection problems
    (1997) Butera, Gwyneth Owens; Bixby, Robert E.; Dennis, John E., Jr.
    A branch-and-bound algorithm for the solution of a class of mixed-integer nonlinear programming problems arising from the field of investment portfolio selection is presented. The problems in this class are characterized by the inclusion of the fixed transaction costs associated with each asset, a constraint that explicitly limits the number of distinct assets in the selected portfolio, or both. Modeling either of these forms of limiting the cost of owning an investment portfolio involves the introduction of binary variables, resulting in a mathematical programming problem that has a nonconvex feasible set. Two objective functions are examined in this thesis; the first is a positive definite quadratic function which is commonly used in the selection of investment portfolios. The second is a convex function that is not continuously differentiable; this objective function, although not as popular as the first, is, in many cases, a more appropriate objective function. To take advantage of the structure of the model, the branch-and-bound algorithm is not applied in the standard fashion; instead, we generalize the implicit branch-and-bound algorithm introduced by Bienstock (3). This branch-and-bound algorithm adopts many of the standard techniques from mixed-integer linear programming, including heuristics for finding feasible points and cutting planes. Implicit branch-and-bound involves the solution of a sequence of subproblems of the original problem, and thus it is necessary to be able to solve these subproblems efficiently. For each of the two objective functions, we develop an algorithm for solving its corresponding subproblems; these algorithms exploit the structure of the constraints and the objective function, simplifying the solution of the resulting linear systems. Convergence for each algorithm is proven. 
Results are provided for computational experiments performed on investment portfolio selection problems for which the cardinality of the universe of assets available for inclusion in the selected portfolio ranges in size from 52 to 1140.
    Trust-region interior-point algorithms for a class of nonlinear programming problems
    (1996) Vicente, Luis Nunes; Dennis, John E., Jr.
    This thesis introduces and analyzes a family of trust-region interior-point (TRIP) reduced sequential quadratic programming (SQP) algorithms for the solution of minimization problems with nonlinear equality constraints and simple bounds on some of the variables. These nonlinear programming problems appear in applications in control, design, parameter identification, and inversion. In particular they often arise in the discretization of optimal control problems. The TRIP reduced SQP algorithms treat states and controls as independent variables. They are designed to take advantage of the structure of the problem. In particular they do not rely on matrix factorizations of the linearized constraints, but use solutions of the linearized state and adjoint equations. These algorithms result from a successful combination of a reduced SQP algorithm, a trust-region globalization, and a primal-dual affine scaling interior-point method. The TRIP reduced SQP algorithms have very strong theoretical properties. It is shown in this thesis that they converge globally to points satisfying first and second order necessary optimality conditions, and in a neighborhood of a local minimizer the rate of convergence is quadratic. Our algorithms and convergence results reduce to those of Coleman and Li for box-constrained optimization. An inexact analysis is presented to provide a practical way of controlling residuals of linear systems and directional derivatives. Complementing this theory, numerical experiments for two nonlinear optimal control problems are included showing the robustness and effectiveness of these algorithms. Another topic of this dissertation is a specialized analysis of these algorithms for equality-constrained optimization problems. 
The important feature of the way this family of algorithms specializes for these problems is that they do not require the computation of normal components for the step or of an orthogonal basis for the null space of the Jacobian of the equality constraints. An extension of Moré and Sorensen's result for unconstrained optimization is presented, showing global convergence for these algorithms to a point satisfying the second-order necessary optimality conditions.