Convergence of the PRP-based online BP training with dynamic step-size rule
Abstract: The global convergence of the PRP-based online BP training algorithm with a dynamic step-size rule (PRP-OLBP-DYN) is studied. Using functional analysis and point-set topology, it is shown that the sequence of error function values generated by the algorithm converges to the global minimum of the error function, and that the sequence of error-function gradients converges to zero. Numerical experiments verify the correctness of the obtained convergence results and compare the influence of different step sizes and different descent directions on the performance of online BP training. The experimental results show that the PRP-OLBP-DYN algorithm not only converges faster but also performs better.
Affiliation: Department of Mathematics, Northwest University
Source: Journal of Shaanxi Normal University (Natural Science Edition), 2012, No. 2, pp. 6-10 (indexed in CAS, CSCD, Peking University Core Journals)
Funding: National Natural Science Foundation of China (Grant No. 61075050)
Keywords: online BP training procedure; PRP conjugate gradient method; dynamic step-size; convergence

