Learning Rate Matrix Algorithm for Feed-Forward Nets
Abstract: The back-propagation (BP) algorithm for feed-forward networks, proposed by Rumelhart et al. in 1986 [3], suffers from slow convergence, poor numerical stability, and difficult training. In the authors' view, the main cause is that the learning rate is taken to be a constant; attempts by other researchers to replace it with a variable scalar have brought little improvement. This paper instead uses the BFGS method from nonlinear optimization theory to construct a learning rate matrix (LRM) that replaces the scalar learning rate, updating the weight vector once per group of input samples. The resulting training converges very quickly, is numerically stable, and parallelizes easily. The paper compares and discusses the LRM, BP, and improved-BP algorithms with respect to convergence, numerical stability, and computational complexity, explaining why LRM offers better convergence, better numerical stability, and easier parallelism. Simulations on the exclusive-or (XOR) problem and the six-bit binary code symmetry problem confirm that, for small and medium-sized problems, the LRM algorithm outperforms classical BP and improved BP.
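The core idea in the abstract, replacing the scalar learning rate in BP with a matrix built by a BFGS-style update, so that the weight step becomes the matrix times the gradient, can be sketched as follows. This is only an illustration under assumptions: the 2-2-1 network shape, the sigmoid activation, the backtracking safeguard, and all names here are choices of this sketch, not the paper's exact formulation.

```python
# Sketch of a "learning rate matrix" (LRM) trained with a BFGS-style update.
# Instead of w <- w - eta * g with scalar eta, we take w <- w - H @ g, where
# H approximates the inverse Hessian of the error, updated by the standard
# BFGS formula. Network shape (2-2-1) and all details are illustrative only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unpack(w):
    # 2-2-1 net: W1 (2x2), b1 (2), W2 (1x2), b2 (1) -> 9 parameters total
    return w[0:4].reshape(2, 2), w[4:6], w[6:8].reshape(1, 2), w[8:9]

def loss_and_grad(w, X, y):
    W1, b1, W2, b2 = unpack(w)
    a1 = sigmoid(X @ W1.T + b1)               # hidden activations, (N, 2)
    out = sigmoid(a1 @ W2.T + b2).ravel()     # outputs, (N,)
    err = out - y
    loss = 0.5 * np.mean(err ** 2)
    # backprop through the mean-squared error
    d_out = err * out * (1 - out) / len(y)                 # (N,)
    gW2 = d_out @ a1                                       # (2,)
    gb2 = np.array([d_out.sum()])
    d_hid = np.outer(d_out, W2.ravel()) * a1 * (1 - a1)    # (N, 2)
    gW1 = d_hid.T @ X                                      # (2, 2)
    gb1 = d_hid.sum(axis=0)
    return loss, np.concatenate([gW1.ravel(), gb1, gW2, gb2])

def train_lrm(X, y, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.5, size=9)
    H = np.eye(9)                    # learning rate matrix, starts as identity
    loss, g = loss_and_grad(w, X, y)
    for _ in range(iters):
        step = -H @ g                # matrix-valued "learning rate" times gradient
        t = 1.0                      # simple backtracking safeguard (our addition)
        while t > 1e-6:
            new_loss, g_new = loss_and_grad(w + t * step, X, y)
            if new_loss <= loss:
                break
            t *= 0.5
        s = t * step                 # BFGS curvature pair (s, v)
        v = g_new - g
        sv = s @ v
        if sv > 1e-10:               # standard positivity check keeps H well-behaved
            rho, I = 1.0 / sv, np.eye(9)
            H = (I - rho * np.outer(s, v)) @ H @ (I - rho * np.outer(v, s)) \
                + rho * np.outer(s, s)
        w, loss, g = w + s, new_loss, g_new
    return w, loss

# XOR, one of the two benchmark problems mentioned in the abstract
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
w, final_loss = train_lrm(X, y)
print(final_loss)
```

Because H carries curvature information, the step direction and size adapt per weight, which is the mechanism the abstract credits for the fast, stable convergence relative to a fixed scalar learning rate.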
Authors: Du Fang (杜方), Wang Xiaotong (王小同)
Affiliation: Northwestern Polytechnical University
Source: Journal of Northwestern Polytechnical University (indexed in EI, CAS, CSCD, PKU Core), 1993, No. 4, pp. 403-406
Funding: Aeronautical Science Foundation
Keywords: neural networks, learning algorithm, feed-forward nets, learning rate, nonlinear optimization

References (3)

  1. Jiao Licheng (焦李成). Neural Network System Theory (神经网络系统理论). 1990.
  2. Wang Deren (王德人). Solution Methods for Systems of Nonlinear Equations and Optimization Methods (非线性方程组解法与最优化方法). 1979.
  3. Zhu Fengshi (朱风石). Computer Applied Mathematics (计算机应用数学), 1975, No. 9, p. 26.
