Abstract
Gradient descent (GD) is an effective method for training radial basis function (RBF) neural networks. As with other descent-based algorithms, GD training of RBF networks faces the problem of choosing the learning rate. This paper presents an algorithm that refines the learning rate, based on a second-order Taylor expansion of the error energy function with respect to the learning rate, evaluated at a value chosen by an 'award-punish' strategy; a detailed derivation is given. Simulations show that the algorithm accelerates the convergence of GD and improves its performance, and the underlying idea can be applied to learning-rate optimization in other descent-based methods. Furthermore, nonlinear modeling of speech signals with RBF networks trained by GD is compared with and without the proposed refinement: in the setting studied here (500 iterations per processed frame), modeling with the algorithm outperforms modeling without it by more than 2 dB.
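The step-length idea in the abstract can be sketched as follows. Expanding the error energy along the negative gradient, E(w − ηg) ≈ E(w) − η‖g‖² + (η²/2)·gᵀHg, gives the minimizing rate η* = ‖g‖² / (gᵀHg). The sketch below is illustrative only: it estimates the curvature term gᵀHg with a finite difference of the gradient (the paper derives the expansion analytically for the RBF error energy and combines it with an 'award-punish' strategy), and it demonstrates the step on a convex quadratic rather than an actual RBF network. All names here are hypothetical.

```python
import numpy as np

def taylor_step(grad_fn, w, eps=1e-4):
    """Learning rate from a second-order Taylor expansion along -g.

    phi(eta) = E(w - eta*g) ~ E(w) - eta*||g||^2 + (eta^2/2) * g^T H g,
    minimized at eta* = ||g||^2 / (g^T H g).  The curvature g^T H g is
    estimated here with a finite difference of the gradient.
    """
    g = grad_fn(w)
    gg = g @ g
    # Directional curvature g^T H g via a gradient finite difference.
    gHg = (grad_fn(w + eps * g) - g) @ g / eps
    eta = gg / gHg if gHg > 0 else 1e-2  # fallback if curvature is non-positive
    return eta, g

# Demo on a convex quadratic E(w) = 0.5 * w^T A w, a stand-in for the
# RBF error energy surface (not an RBF network).
A = np.array([[3.0, 0.5], [0.5, 1.0]])
grad_fn = lambda w: A @ w

w = np.array([1.0, -2.0])
for _ in range(50):
    eta, g = taylor_step(grad_fn, w)
    w = w - eta * g
```

On a quadratic this reduces to exact line-search steepest descent, which converges far faster than any single fixed learning rate; the same Taylor-based rate applied per iteration is what accelerates GD in the non-quadratic RBF case.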
Source
Journal of Signal Processing (《信号处理》), CSCD, 2002, No. 1, pp. 43-48 (6 pages)
Funding
Supported by the Open Fund of the State Key Laboratory of Broadband Optical Fiber Transmission and Communication Systems Technology
Keywords
Gradient descent method
Learning rate optimization (refining)
RBF neural networks