A globally convergent algorithm for training multilayer perceptrons for data classification and interpolation
Date
1991
Authors
Madyastha, Raghavendra K.
Abstract
This thesis addresses the application of a "globally" convergent optimization scheme to the training of multilayer perceptrons, a class of Artificial Neural Networks, for the detection and classification of signals in single- and multi-user communication systems. The research is motivated by the fact that a multilayer perceptron is theoretically capable of approximating any nonlinear function to within any specified accuracy. The objective function to which the optimization algorithm is applied is the error function of the multilayer perceptron, i.e., the average of the sum of squared differences between the actual and the desired outputs for specified inputs. Until recently, the most widely used training algorithm has been the Backward Error Propagation algorithm, which is based on steepest descent and is therefore at best linearly convergent. The algorithm discussed here combines the merits of two well-known "global" algorithms: the Conjugate Gradients algorithm and the Trust Region algorithm. A further technique, known as preconditioning, is used to speed up convergence by clustering the eigenvalues of the "effective Hessian". The resulting Preconditioned Conjugate Gradients--Trust Regions algorithm is found to be superlinearly convergent and hence outperforms the standard backpropagation routine.
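For concreteness, the error function described above can be written, in notation introduced here rather than drawn from the thesis itself, as

    E(w) = \frac{1}{P} \sum_{p=1}^{P} \sum_{k=1}^{K} \left( y_k^{(p)}(w) - d_k^{(p)} \right)^2

where w collects the network weights, P is the number of training patterns, K is the number of output units, y_k^{(p)}(w) is the actual output of unit k for pattern p, and d_k^{(p)} is the corresponding desired output.

The abstract does not spell out how conjugate gradients and trust regions are combined; one standard realization of that combination is a truncated (Steihaug-Toint) preconditioned CG iteration applied to the trust-region subproblem. The Python sketch below illustrates that general approach only; the function names, the Euclidean trust-region norm, and the hess_vec/M_inv interface are assumptions of this sketch, not the thesis's exact formulation.

    import numpy as np

    def steihaug_pcg(hess_vec, grad, M_inv, radius, tol=1e-8, max_iter=None):
        """Approximately solve the trust-region subproblem
            min_s  g^T s + 0.5 s^T H s   subject to  ||s|| <= radius
        by preconditioned conjugate gradients with Steihaug-Toint truncation.

        hess_vec : function returning H @ v (Hessian-vector product)
        grad     : gradient vector g
        M_inv    : function applying the preconditioner inverse, M^{-1} v
        """
        n = grad.size
        max_iter = max_iter or 2 * n
        s = np.zeros(n)
        r = grad.copy()           # residual of the Newton system H s = -g
        y = M_inv(r)              # preconditioned residual
        p = -y                    # first search direction
        ry = r @ y
        for _ in range(max_iter):
            Hp = hess_vec(p)
            pHp = p @ Hp
            if pHp <= 0:
                # Negative curvature: follow p to the trust-region boundary.
                return _to_boundary(s, p, radius)
            alpha = ry / pHp
            s_next = s + alpha * p
            if np.linalg.norm(s_next) >= radius:
                # Full CG step would leave the region: truncate at the boundary.
                return _to_boundary(s, p, radius)
            s = s_next
            r = r + alpha * Hp
            if np.linalg.norm(r) <= tol:
                break
            y = M_inv(r)
            ry_next = r @ y
            beta = ry_next / ry
            p = -y + beta * p
            ry = ry_next
        return s

    def _to_boundary(s, p, radius):
        # Positive root tau of ||s + tau p||^2 = radius^2 (a quadratic in tau).
        a, b, c = p @ p, 2 * (s @ p), s @ s - radius**2
        tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
        return s + tau * p

In a trust-region training loop, the returned step s would serve as the trial weight update, accepted or rejected according to the ratio of actual to predicted reduction in the error function, with the radius enlarged or shrunk accordingly; the preconditioner M is chosen to cluster the eigenvalues of the effective Hessian, as the abstract describes.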
Citation
Madyastha, Raghavendra K. "A globally convergent algorithm for training multilayer perceptrons for data classification and interpolation." (1991) Master’s Thesis, Rice University. https://hdl.handle.net/1911/13532.