
Novel Newton's learning algorithm of neural networks (Cited by: 2)

Abstract  A Newton's learning algorithm for neural networks (NN) is presented and realized. In theory, a learning algorithm based on Newton's method must converge faster than BP and other first-order learning algorithms, because the gradient method converges only linearly while Newton's method has a second-order convergence rate. A fast algorithm for computing the Hesse matrix of the NN cost function is proposed, and it forms the theoretical basis of the improved Newton's learning algorithm. Simulation results show that the convergence rate of Newton's learning algorithm is high and clearly faster than that of the traditional BP method, and that its robustness is also better than the BP method's.
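To make the abstract's comparison concrete, the short Python sketch below contrasts a first-order (BP-style) gradient step with a damped Newton step on the quadratic cost of a tiny linear model. It is only an illustration under stated assumptions, not the paper's fast Hesse-matrix algorithm; the function names and the damping term mu are hypothetical.

import numpy as np

# Cost E(w) = 0.5 * ||X w - t||^2 for a tiny linear model; returns cost, gradient and Hesse matrix.
def cost_grad_hess(w, X, t):
    r = X @ w - t                      # residual
    cost = 0.5 * float(r @ r)
    grad = X.T @ r                     # gradient of E
    hess = X.T @ X                     # Hesse matrix of E (constant for this quadratic cost)
    return cost, grad, hess

def gradient_step(w, grad, lr=0.01):
    # BP-style first-order update (linear convergence rate)
    return w - lr * grad

def newton_step(w, grad, hess, mu=1e-8):
    # Newton update: solve (H + mu*I) dw = grad (second-order convergence near the minimum)
    return w - np.linalg.solve(hess + mu * np.eye(len(w)), grad)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
t = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=50)

w_gd = np.zeros(3)                     # weights trained by gradient descent
w_nt = np.zeros(3)                     # weights trained by Newton's method
for k in range(5):
    c_gd, g_gd, _ = cost_grad_hess(w_gd, X, t)
    c_nt, g_nt, H = cost_grad_hess(w_nt, X, t)
    w_gd = gradient_step(w_gd, g_gd)
    w_nt = newton_step(w_nt, g_nt, H)
    print(f"iter {k}: gradient-descent cost {c_gd:.4f}, Newton cost {c_nt:.4f}")

Because this example cost is quadratic, the Newton step lands on the minimum in a single iteration, which mirrors the second-order convergence the abstract describes; for a real multilayer network the Hesse matrix changes with the weights, which is why a fast way of computing it matters.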
Source  Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2006, No. 2, pp. 450-454 (5 pages). Chinese journal title: 系统工程与电子技术(英文版).
Keywords  Newton's method, Hesse matrix, fast learning, BP method, neural network

References (7)

  • 1 Scalero R S, Tepedelenlioglu. A fast new algorithm for training feedforward neural networks. IEEE Trans. Signal Processing, 1992, 40(1): 202-210.
  • 2 Karayiannis N B, Venetsanopoulos A N. Fast learning algorithms for neural networks. IEEE Trans. Circuits Syst. II, 1992, 39(7): 453-474.
  • 3 Sarkar D. Methods to speed up error back-propagation learning algorithm. ACM Computing Surveys, 1995, 27: 519-592.
  • 4 Azimi-Sadjadi M. Fast learning process of multilayer NN using RLS method. IEEE Trans. Signal Processing, 1992, 40(2): 446-450.
  • 5 Shah S. Optimal filtering algorithms for fast learning in feedforward NN. Neural Networks, 1992, 5: 779-787.
  • 6 Bilski J, Rutkowski L. A fast training algorithm for neural networks. IEEE Trans. Circuits and Syst. II, 1998, 45(6): 749-753.
  • 7 Bortoletti A, Di Fiore C, Fanelli S, et al. A new class of quasi-Newtonian methods for optimal learning in MLP-networks. IEEE Trans. Neural Networks, 2003, 14(3): 263-273.

Co-cited references (20)

  • 1 Phansalkar V V, Sastry D S. Analysis of the back-propagation algorithm with momentum[J]. IEEE Transactions on Neural Networks, 1994, 5(3): 505-506.
  • 2 Battiti R. First- and second-order methods for learning: between steepest descent and Newton's method[J]. Neural Computation, 1992, 4(2): 141-166.
  • 3 Setiono R, Hui L C K. Use of a quasi-Newton method in a feedforward neural network construction algorithm[J]. IEEE Transactions on Neural Networks, 1995, 6(1): 273-277.
  • 4 Dennis J E, More J J. Quasi-Newton methods: motivation and theory[J]. SIAM Review, 1977, 19(1): 46-89.
  • 5 Biegi H S M, Li C J. Learning algorithms for neural networks based on quasi-Newton method with self-scaling[J]. Journal of Dynamic Systems, Measurements and Control, 1993, 115(1): 38-43.
  • 6 Ampazis N, Perantonis S J. Two highly efficient second-order algorithms for training feedforward networks[J]. IEEE Transactions on Neural Networks, 2002, 13(5): 1064-1074.
  • 7 Nawi N M, Ransing R S, Ransing M R. An improved learning algorithm based on the conjugate gradient method for back propagation neural networks[J]. International Journal of Computational Intelligence, 2004, 4(1): 46-55.
  • 8 Ergezinger S, Thomsen E. An accelerated learning algorithm for multilayer perceptrons: optimization layer by layer[J]. Neural Networks, 1995, 6(1): 31-42.
  • 9 Hestenes M R, Stiefel E. Methods of conjugate gradients for solving linear systems[J]. Journal of Research of the National Bureau of Standards, 1952, 49(6): 409-436.
  • 10 Fletcher R, Reeves C M. Function minimization by conjugate gradients[J]. The Computer Journal, 1964, 7: 149-154.

Citing articles (2)

Secondary citing articles (22)
