Funding: Partly supported by the National Natural Science Foundation of China and the Basic Research Program of the Committee of Science, Technology and Industry of National Defense of China.
Abstract: The online gradient algorithm has been widely used as a learning algorithm for feedforward neural network training. In this paper, we prove a weak convergence theorem for an online gradient algorithm with a penalty term, assuming that the training examples are input in a stochastic order. Both the monotonicity of the error function over the iterations and the boundedness of the weights are guaranteed. We also present a numerical experiment supporting our results.
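The scheme described above — per-example (online) updates with the examples visited in a stochastic order, plus a penalty term added to the error function — can be sketched as follows. This is an illustrative simplification for a single linear output layer, not the paper's exact network or penalty; the squared-error loss with an L2 penalty `lam * ||w||^2 / 2` is an assumption.

```python
import numpy as np

def online_gradient_penalty(X, y, lam=0.01, eta=0.1, epochs=50, seed=0):
    """Online (per-example) gradient descent on a squared-error loss
    with an L2 penalty term, examples drawn in a stochastic order
    each epoch. Illustrative sketch, not the paper's exact setup."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(scale=0.1, size=d)
    for _ in range(epochs):
        for i in rng.permutation(n):        # stochastic input order
            err = X[i] @ w - y[i]
            grad = err * X[i] + lam * w     # penalty keeps w bounded
            w -= eta * grad
    return w
```

With a small `lam` the iterates converge close to the least-squares weights while the penalty term prevents the weight norm from growing without bound.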
Abstract: Online gradient methods are widely used for training the weights of neural networks and for other engineering computations. In certain cases, the resulting weights may become very large, causing difficulties in implementing the network with electronic circuits. In this paper we introduce a punishing term into the error function of the training procedure to prevent this situation. The convergence of the iterative training procedure and the boundedness of the weight sequence are proved. A supporting numerical example is also provided.
Abstract: In this paper, a gradient method with momentum for sigma-pi-sigma neural networks (SPSNN) is considered in order to accelerate the convergence of the learning procedure for the network weights. The momentum coefficient is chosen in an adaptive manner, and the corresponding weak convergence and strong convergence results are proved.
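A gradient method with an adaptively chosen momentum coefficient, as described above, can be sketched as a heavy-ball iteration. The specific adaptive rule below (shrinking the coefficient when the gradient is large, capped at `mu_max`) is a hypothetical illustration, not the particular choice analyzed in the paper.

```python
import numpy as np

def momentum_descent(grad, w0, eta=0.05, mu_max=0.9, steps=500):
    """Heavy-ball gradient descent with an adaptively chosen momentum
    coefficient. The rule mu = mu_max / (1 + ||g||) is an illustrative
    assumption, not the paper's specific adaptive choice."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        g = grad(w)
        mu = mu_max / (1.0 + np.linalg.norm(g))  # adaptive momentum
        v = mu * v - eta * g                     # heavy-ball update
        w = w + v
    return w
```

On a simple quadratic objective this iteration converges to the minimizer, with the momentum term damping oscillations along ill-conditioned directions.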
Abstract: A method is proposed to estimate the longitudinal road gradient based on the concept of a "general gradient force" (GGF), in which uncertain factors such as additional vertical load, road surface changes, and strong wind are also taken into account. An adaptive downhill shift control system is then developed to help the driver use the engine brake with lower gears while driving downhill. In the adaptive system, a three-layer neural network is built to evaluate the necessity of using the engine brake in the current downhill situation, and the neural network is trained with samples from experienced drivers. Field test results of the adaptive system are presented to verify the effectiveness of the approach.
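The idea of estimating road gradient from a "general gradient force" can be illustrated with the longitudinal force balance: whatever force is not explained by the drive force, inertia, and known resistances is attributed to the grade. The function below is a hypothetical simplification of the paper's GGF concept (constant mass, lumped resistance, no wind model).

```python
import math

def estimate_grade(F_drive, mass, accel, F_resist, g=9.81):
    """Estimate the longitudinal road grade (degrees) from the
    vehicle force balance. The 'general gradient force' lumps
    everything not explained by drive force, inertia, and known
    resistance -- a hypothetical simplification of the GGF idea."""
    ggf = F_drive - mass * accel - F_resist   # general gradient force
    sin_theta = ggf / (mass * g)              # m*g*sin(theta) = GGF
    sin_theta = max(-1.0, min(1.0, sin_theta))
    return math.degrees(math.asin(sin_theta))
```

For example, a vehicle that accelerates with zero drive force despite rolling resistance must be on a downhill grade, so the estimate comes out negative.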
Funding: Projects (60974031, 60704011, 61174128) supported by the National Natural Science Foundation of China.
Abstract: A self-organizing radial basis function (RBF) neural network (SODM-RBFNN) was presented for predicting production yields and optimizing operations. A gradient descent algorithm was used to optimize the widths of the RBF neural network, with the initial parameters obtained by the k-means learning method. During the iteration procedure of the algorithm, the centers of the neural network were optimized by the gradient method using these optimized width values. Computational efficiency was maintained through multi-threading. SODM-RBFNN consists of two RBF neural network models: a running model used to predict the product yields of a fluid catalytic cracking unit (FCCU) and optimize its operating parameters, and a learning model applied to construct or correct an RBF neural network. The running model can be updated by the learning model according to an accuracy criterion. Simulation results on a five-lump kinetic model demonstrate its accuracy and generalization capability, and practical application to an FCCU illustrates its effectiveness.
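The core training loop described above — centers initialized by k-means, output weights fitted by least squares, and the RBF widths refined by gradient descent on the squared error — can be sketched as follows. This is a simplified single-network sketch under assumed Gaussian basis functions; it omits the running/learning dual-model structure and the multi-threading of SODM-RBFNN.

```python
import numpy as np

def rbf_design(X, centers, widths):
    """Gaussian RBF activations for each (sample, center) pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * widths ** 2))

def train_rbf(X, y, k=5, eta=0.05, steps=100, seed=0):
    """Sketch of the training loop: k-means-style center
    initialization, least-squares output weights, and gradient
    descent on the widths of the Gaussian basis functions."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(10):                       # a few Lloyd iterations
        lab = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(lab == j):
                centers[j] = X[lab == j].mean(0)
    widths = np.full(k, 1.0)
    for _ in range(steps):
        Phi = rbf_design(X, centers, widths)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # output weights
        r = Phi @ w - y                              # residuals
        # d(loss)/d(width_j) via the chain rule through the Gaussian
        d2 = ((X[:, None] - centers[None]) ** 2).sum(-1)
        dPhi = Phi * d2 / widths ** 3
        grad = (r[:, None] * dPhi * w[None, :]).sum(0)
        widths = np.clip(widths - eta * grad, 0.1, 10.0)
    Phi = rbf_design(X, centers, widths)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # final refit
    return centers, widths, w
```

A usage example: fitting a one-dimensional sine curve, where the width updates let each basis function match the local curvature of the target.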