Show simple item record

dc.contributor.author          Challagundla, Jeshwanth  en_US
dc.date.accessioned            2015-07-31T22:10:05Z
dc.date.available              2015-07-31T22:10:05Z
dc.date.submitted              January 2015  en_US
dc.identifier.other            DISS-13059  en_US
dc.identifier.uri              http://hdl.handle.net/10106/25030
dc.description.abstract        There is always ambiguity in deciding how many learning factors are really required for training a multi-layer perceptron. This thesis addresses the problem by introducing a new method that adaptively changes the number of learning factors computed, based on the error change produced per multiply. A new method is introduced for computing learning factors for weights grouped according to the curvature of the objective function. A method for linearly compressing large, ill-conditioned Hessian matrices from Newton's method into smaller, well-conditioned ones is also shown. The thesis further shows that the proposed training algorithm adapts itself between two other algorithms in order to produce a greater error decrease per multiply. The performance of the proposed algorithm is shown to be better than that of OWO-MOLF and Levenberg-Marquardt for most of the data sets.  en_US
dc.description.sponsorship     Manry, Michael T.  en_US
dc.language.iso                en  en_US
dc.publisher                   Electrical Engineering  en_US
dc.title                       Adaptive Multiple Optimal Learning Factors For Neural Network Training  en_US
dc.type                        M.S.  en_US
dc.contributor.committeeChair  Manry, Michael T.  en_US
dc.degree.department           Electrical Engineering  en_US
dc.degree.discipline           Electrical Engineering  en_US
dc.degree.grantor              University of Texas at Arlington  en_US
dc.degree.level                masters  en_US
dc.degree.name                 M.S.  en_US
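
The abstract above describes computing one optimal learning factor per group of weights (grouped by curvature) and linearly compressing the full Hessian into a small, better-conditioned system. The sketch below is a rough illustration of that general idea under a quadratic error model, not the thesis's actual algorithm; the function name compute_group_factors, the grouping matrix C, and the demo data are all hypothetical.

import numpy as np

def compute_group_factors(H, g, groups):
    """Solve a small linear system for one learning factor per weight group.

    H      : (n, n) Hessian of the error with respect to the weights
    g      : (n,)   gradient of the error with respect to the weights
    groups : (n,)   integer group index for each weight (e.g., by curvature)
    """
    n = H.shape[0]
    k = groups.max() + 1
    # Each column of C is the gradient restricted to one group, so the
    # candidate step is w_new = w - C @ z for group factors z.
    C = np.zeros((n, k))
    C[np.arange(n), groups] = g
    # Compress the large (possibly ill-conditioned) n x n Hessian to k x k.
    H_small = C.T @ H @ C
    g_small = C.T @ g
    # Minimizing the quadratic model E(w - C z) gives H_small z = g_small.
    z = np.linalg.solve(H_small, g_small)
    return z, C

# Tiny demo with a synthetic positive-definite Hessian (illustrative only).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
H = A @ A.T + 6 * np.eye(6)
g = rng.standard_normal(6)
groups = np.array([0, 0, 1, 1, 2, 2])   # three curvature-based groups
z, C = compute_group_factors(H, g, groups)
w = rng.standard_normal(6)
w_new = w - C @ z                        # grouped descent step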


