Browsing by Author "Huang, H. Y."
Item: A first-order method for the extremization of constrained and unconstrained functions (1971)
Naqvi, Sarwar; Huang, H. Y.

The problem of extremizing a function f(x) subject to the constraint φ(x) = 0 is considered. Here, f is a scalar, x an n-vector, and φ a q-vector, where 0 ≤ q ≤ n. This problem is transformed into that of minimizing the unconstrained function R(x, λ), where x and λ are regarded as independent variables. The q-vector λ is the Lagrange multiplier associated with the constraint, and the function R(x, λ) is the performance index measuring the cumulative error in the optimum condition and the constraint. The minimum R(x, λ) = 0 of the performance index is sought by applying quadratically convergent algorithms for unconstrained function minimization; the (n+q)-vector y = [x, λ]^T is the independent variable associated with the performance index R(y). Since the performance index R(y) involves the first derivatives f_x and φ_x, the gradient G(y) = R_y(y), which is employed in quadratically convergent algorithms, involves the second derivatives f_xx and φ_xx. To avoid the explicit use of these second derivatives, a two-point determination of the gradient G(y) is developed: the (n+q)-vector G(y) is computed numerically through only two evaluations of the function R(y). Concerning the one-dimensional determination of the stepsize α, a two-point quasilinearization search is developed. This method requires only two evaluations of the function R(y), but preserves the eventual quadratic convergence of the quasilinearization method. Two terminating conditions are investigated: exact search and one-cycle search. Thus, the method presented here is a first-order method. For the ideal case of a quadratic function subject to a linear constraint, it converges to the solution in at most n+q iterations. The total computational effort involved is equivalent to, at most, 3(n+q) + 1 evaluations of the function R(y).
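The performance-index scheme described above can be sketched in Python. Everything concrete below is an illustrative assumption: the abstract does not give the exact definition of R, the two-point gradient formula, or the quasilinearization search, so a plausible least-squares realization of R, a standard forward-difference gradient, and plain gradient descent with Armijo backtracking stand in for the thesis's methods.

```python
import numpy as np

# Hypothetical example problem (not from the thesis):
# minimize f(x) = x1^2 + 2*x2^2  subject to  phi(x) = x1 + x2 - 1 = 0,
# so n = 2 and q = 1.  Exact solution: x = (2/3, 1/3), lam = -4/3.
def grad_f(x):
    return np.array([2.0 * x[0], 4.0 * x[1]])

def phi(x):
    return np.array([x[0] + x[1] - 1.0])

def jac_phi(x):
    return np.array([[1.0, 1.0]])

def R(y):
    # Performance index: cumulative squared error in the optimum
    # condition f_x + phi_x^T lam = 0 and in the constraint phi = 0.
    # (One plausible realization; the abstract does not define R exactly.)
    x, lam = y[:2], y[2:]
    opt_err = grad_f(x) + jac_phi(x).T @ lam
    return float(opt_err @ opt_err + phi(x) @ phi(x))

def G(y, h=1e-6):
    # Forward-difference gradient of R -- a conventional stand-in for
    # the two-point gradient determination developed in the thesis.
    r0, g = R(y), np.zeros_like(y)
    for i in range(len(y)):
        e = np.zeros_like(y)
        e[i] = h
        g[i] = (R(y + e) - r0) / h
    return g

# Plain gradient descent with Armijo backtracking on R(y), y = [x, lam];
# a crude substitute for the quadratically convergent algorithms used
# in the thesis.
y = np.zeros(3)
for _ in range(1000):
    if R(y) < 1e-8:
        break
    g = G(y)
    a = 1.0
    while a > 1e-12 and R(y - a * g) > R(y) - 1e-4 * a * (g @ g):
        a *= 0.5
    y = y - a * g
```

Because f is quadratic and the constraint is linear, R(y) is a strictly convex quadratic in y, so driving R toward its minimum value of zero recovers both the constrained minimizer and the multiplier.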
Three numerical examples are given using both the exact search and the one-cycle search. The results are presented in terms of the number of iterations and the number of function evaluations required for convergence.

Item: Numerical experiments on the methods of dual matrices for function minimization (1972)
Chambliss, Joe Preston; Huang, H. Y.

Four algorithms of dual matrices for function minimization, introduced in Ref. 1, are tested through several numerical examples. Three quadratic functions and five nonquadratic functions are investigated. For quadratic functions, the results show that convergence is achieved in at most n+1 iterations, where n is the number of variables. Since one-dimensional search is not needed in these algorithms, the total number of gradient evaluations for convergence is at most n+2. This represents a saving over the 2n+1 gradient evaluations required by conventional quadratically convergent algorithms. For nonquadratic functions, the results show that these algorithms are very stable and efficient. The effects of the stepsize factor on these algorithms are also investigated.
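For a sense of scale, the gradient-evaluation counts quoted in the second abstract can be tabulated; the function names below are purely illustrative.

```python
def dual_matrix_evals(n):
    # Dual-matrix algorithms (quadratic case): at most n + 2 gradient
    # evaluations, per the abstract.
    return n + 2

def conventional_evals(n):
    # Conventional quadratically convergent algorithms: 2n + 1 gradient
    # evaluations, per the abstract.
    return 2 * n + 1

for n in (5, 10, 50):
    saved = conventional_evals(n) - dual_matrix_evals(n)
    print(f"n={n}: {dual_matrix_evals(n)} vs {conventional_evals(n)} "
          f"(saving {saved})")
```

The saving of n - 1 gradient evaluations comes from dropping the one-dimensional search.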