Adaptive Multiple Optimal Learning Factors For Neural Network Training
Author
Challagundla, Jeshwanth
Abstract
Deciding how many learning factors are actually required to train a multilayer perceptron is inherently ambiguous. This thesis addresses the problem by introducing a method that adaptively changes the number of learning factors based on the error decrease achieved per multiply. A new method is presented for computing learning factors for groups of weights formed according to the curvature of the objective function, along with a method for linearly compressing large, ill-conditioned Newton Hessian matrices into smaller, well-conditioned ones. The proposed training algorithm is shown to adapt between two existing algorithms so as to produce a greater error decrease per multiply, and its performance exceeds that of OWO-MOLF and Levenberg-Marquardt on most of the data sets considered.
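The abstract does not reproduce the thesis's exact update equations, but the grouped-learning-factor idea it describes can be sketched. In the NumPy sketch below, weights are assigned to K groups, the full N x N Hessian is linearly compressed to a K x K system (one learning factor per group), and that small system is solved. The function name grouped_learning_factors, the ridge term, and the curvature-based grouping shown at the end are illustrative assumptions, not the author's implementation.

    import numpy as np

    def grouped_learning_factors(H, g, d, groups, ridge=1e-8):
        """Sketch: compress an N x N Hessian H to a K x K system with
        one learning factor per weight group, then solve for the factors.

        H      : Hessian of the objective w.r.t. the N weights
        g      : gradient (length N)
        d      : descent direction (length N), e.g. -g
        groups : length-N integer array of group indices in 0..K-1
        """
        n = len(g)
        k = int(groups.max()) + 1
        # Compression matrix C: C[i, j] = d[i] if weight i is in group j,
        # so the candidate update is w + C @ z for factor vector z.
        C = np.zeros((n, k))
        C[np.arange(n), groups] = d
        # Second-order expansion of E(w + C z) gives a small K x K system:
        # (C^T H C) z = -C^T g.
        Hz = C.T @ H @ C
        gz = C.T @ g
        # A small ridge term keeps the compressed system well conditioned.
        z = np.linalg.solve(Hz + ridge * np.eye(k), -gz)
        return z  # apply as w += C @ z

    # Illustrative curvature-based grouping: rank weights by the magnitude
    # of the Hessian diagonal and split the ranks into 4 equal groups.
    # H, assumed to be an N x N array, would come from the training loop.
    # diag = np.abs(np.diag(H))
    # groups = np.argsort(np.argsort(diag)) * 4 // len(diag)

The point of the compression is cost: solving the K x K system takes O(K^3) multiplies instead of the O(N^3) a full Newton step would require, which is what makes an error-decrease-per-multiply comparison between few and many learning factors meaningful.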