
Browsing by Author "Torczon, Virginia"

Now showing 1 - 3 of 3
    Direct Search Methods on Parallel Machines
    (1990-09) Dennis, J.E.; Torczon, Virginia
    This paper describes an approach to constructing derivative-free parallel algorithms for unconstrained optimization that are easy to implement on parallel machines. A special feature of this approach is the ease with which algorithms can be generated to take advantage of any number of processors and to adapt to any cost ratio of communication to function evaluation. The algorithms given here are supported by a strong convergence theorem, promising computational results, and an intuitively appealing interpretation as multi-directional line search methods.
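The multi-directional search the abstract alludes to can be illustrated with a small serial sketch. This is a schematic reconstruction, not the paper's algorithm: the function name, the expansion and contraction factors, and the stopping rule are all assumptions. What the structure shows is that the n trial evaluations in each reflect/expand/contract step are mutually independent, so each could be assigned to its own processor.

```python
import numpy as np

def multidirectional_search(f, simplex, tol=1e-8, max_iter=200,
                            expand=2.0, contract=0.5):
    """Minimize f by a multi-directional simplex search (sketch).

    simplex: (n+1, n) array of vertices; vertex 0 is kept as the best.
    The n trial points generated per step are independent of one
    another, which is what makes the method parallelizable.
    """
    simplex = np.asarray(simplex, dtype=float)
    fvals = np.array([f(v) for v in simplex])
    for _ in range(max_iter):
        order = np.argsort(fvals)
        simplex, fvals = simplex[order], fvals[order]
        best = simplex[0]
        # Reflect every non-best vertex through the best one.
        refl = 2.0 * best - simplex[1:]
        frefl = np.array([f(v) for v in refl])
        if frefl.min() < fvals[0]:
            # Reflection improved on the best vertex: try expanding.
            expd = best + expand * (refl - best)
            fexpd = np.array([f(v) for v in expd])
            if fexpd.min() < frefl.min():
                simplex[1:], fvals[1:] = expd, fexpd
            else:
                simplex[1:], fvals[1:] = refl, frefl
        else:
            # Reflection failed: contract toward the best vertex.
            simplex[1:] = best + contract * (simplex[1:] - best)
            fvals[1:] = np.array([f(v) for v in simplex[1:]])
        # Stop once the simplex has collapsed to diameter ~tol.
        if np.linalg.norm(simplex[1:] - simplex[0], axis=1).max() < tol:
            break
    return simplex[np.argmin(fvals)]
```

Here the batched evaluations per iteration are serial list comprehensions; on a parallel machine each `np.array([f(v) for v in ...])` is the natural place to fan out one function evaluation per processor.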
    Numerical Optimization Using Computer Experiments
    (1997-03) Trosset, Michael; Torczon, Virginia
    Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
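The surrogate-guided loop described above can be sketched in one dimension. This is an illustrative reconstruction under simplifying assumptions (a fixed Gaussian correlation parameter, an ordinary-kriging constant trend, and a plain uniform grid standing in for the search pattern), not the authors' implementation; all names and defaults here are invented for the sketch.

```python
import numpy as np

def kriging_fit(X, y, theta=10.0):
    """Ordinary-kriging interpolator with a Gaussian correlation model.
    (A sketch; real kriging would also estimate theta from the data.)"""
    R = np.exp(-theta * (X[:, None, :] - X[None, :, :])**2).prod(-1)
    R += 1e-8 * np.eye(len(X))            # jitter for numerical stability
    ones = np.ones(len(X))
    mu = (ones @ np.linalg.solve(R, y)) / (ones @ np.linalg.solve(R, ones))
    w = np.linalg.solve(R, y - mu)
    def predict(x):
        r = np.exp(-theta * (X - x)**2).prod(-1)
        return mu + r @ w
    return predict

def surrogate_grid_search(f, lo, hi, n_init=5, n_iter=10, grid=101):
    """Minimize an expensive f on [lo, hi] by kriging a surrogate from
    known values and searching the surrogate on a grid (1-D sketch)."""
    X = np.linspace(lo, hi, n_init)[:, None]
    y = np.array([f(x[0]) for x in X])
    grid_pts = np.linspace(lo, hi, grid)[:, None]
    for _ in range(n_iter):
        s = kriging_fit(X, y)
        preds = np.array([s(x) for x in grid_pts])
        xn = grid_pts[np.argmin(preds)]
        if any(np.allclose(xn, xi) for xi in X):
            break                          # surrogate minimizer already sampled
        X = np.vstack([X, xn])
        y = np.append(y, f(xn[0]))         # one true (expensive) evaluation
    return X[np.argmin(y)][0]
```

Each pass fits the kriging model to all known values, takes the grid point where the surrogate is smallest as the next candidate, and spends exactly one true (expensive) evaluation on it before refitting.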
    PDS: Direct Search Methods for Unconstrained Optimization on Either Sequential or Parallel Machines
    (1992-03) Torczon, Virginia
    PDS is a collection of Fortran subroutines for solving unconstrained nonlinear optimization problems using direct search methods. The software is written so that execution on sequential machines is straightforward, while execution on Intel distributed memory machines, such as the iPSC/2, the iPSC/860 or the Touchstone Delta, can be accomplished simply by including a few well-defined routines containing Intel-specific Fortran library calls. Those interested in using the algorithm on other distributed memory machines, even something as simple as a network of workstations or personal computers, need only modify these few subroutines to handle the global communication requirements. Furthermore, since the parallelism is clearly defined at the "do-loop" level, it is a simple matter to insert compiler directives that allow for execution on shared memory parallel machines. Included here is an example of such directives, contained in comment statements, for execution on a Sequent Symmetry S81. PDS encompasses an entire class of general-purpose optimization methods that require little of the user other than a (scalar) subroutine to evaluate the function (though the algorithm is flexible enough to accommodate subroutines that evaluate the function in parallel), and even less of the problem to be solved, since direct search methods presume only that the function is continuous. Thus, these methods are particularly effective on parameter estimation problems involving a relatively small number of parameters. They are also very interesting as parallel algorithms because they are perfectly scalable: they can use any number of processors regardless of the dimension of the problem to be solved and, in fact, tend to perform better as more processors are added.
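The "do-loop"-level parallelism the abstract describes can be illustrated with a generic coordinate pattern search in Python rather than Fortran. This is a sketch, not PDS itself: the search pattern, the step control, and the names are assumptions, and a thread pool stands in for the distributed memory machines mentioned above.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=500, workers=4):
    """Coordinate pattern search; the 2n independent trial evaluations
    per iteration are mapped over a worker pool, mirroring the
    do-loop-level parallelism the abstract describes."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = len(x)
    dirs = np.vstack([np.eye(n), -np.eye(n)])    # the 2n poll directions
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(max_iter):
            trials = [x + step * d for d in dirs]
            ftrials = list(pool.map(f, trials))  # parallel-capable do-loop
            i = int(np.argmin(ftrials))
            if ftrials[i] < fx:                  # accept the best trial point
                x, fx = trials[i], ftrials[i]
            else:
                step *= 0.5                      # no improvement: refine the mesh
                if step < tol:
                    break
    return x, fx
```

Swapping `ThreadPoolExecutor` for a process pool or MPI ranks changes only how the trial-point map is carried out, which mirrors the abstract's point that only the few communication routines need replacing to move between machines.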
