Funding: Project supported in part by the Natural Science Foundation of Shanghai and the National Key Laboratory of Intelligent Technology and Systems of China.
Abstract: Let the function π: R^n → R be defined by … . Recently, approximating continuous functions of several variables by the composition and superposition g∘π of the function π and a function of one variable g has been attracting great attention due to its applications in artificial neural networks (cf. references 1–3).
Funding: Supported by the National Natural Science Foundation of China (61179041, 61272023, and 11401388).
Abstract: In this paper, we discuss some analytic properties of the hyperbolic tangent function and estimate approximation errors of neural network operators with the hyperbolic tangent activation function. First, an equation of partitions of unity for the hyperbolic tangent function is given. Then, two kinds of quasi-interpolation type neural network operators are constructed to approximate univariate and bivariate functions, respectively. The approximation errors are estimated by means of the modulus of continuity of the function. Moreover, for approximated functions with high-order derivatives, the approximation errors of the constructed operators are also estimated.
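The partition-of-unity identity mentioned in this abstract can be checked numerically. A common construction in this literature takes the bell-shaped kernel Ψ(x) = (1/4)(tanh(x+1) − tanh(x−1)), whose integer translates sum to 1 by a telescoping argument; the exact kernel used in the paper may differ, so the sketch below is only illustrative:

```python
import math

def psi(x):
    # Bell-shaped tanh kernel; its integer translates form a partition
    # of unity: sum over all integers k of psi(x - k) equals 1 for every
    # real x (telescoping of tanh(x - k + 1) - tanh(x - k - 1)).
    return 0.25 * (math.tanh(x + 1.0) - math.tanh(x - 1.0))

def partition_sum(x, K=50):
    # Truncated sum over k in [-K, K]; the tails decay exponentially
    # fast, so K = 50 is far more than enough at machine precision.
    return sum(psi(x - k) for k in range(-K, K + 1))

# largest deviation from 1 over a few sample points
dev = max(abs(partition_sum(x) - 1.0) for x in (-4.1, 0.0, 0.3, 0.7, 2.5))
```

The telescoping also explains why the truncation is harmless: the partial sum from −K to K equals (1/4)[tanh(x+K+1) + tanh(x+K) − tanh(x−K) − tanh(x−K−1)], which tends to 1 exponentially fast.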
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61101240 and 61272023) and the Zhejiang Provincial Natural Science Foundation of China (Grant No. Y6110117).
Abstract: There have been many studies on the density theorem for approximation by radial basis feedforward neural networks, and some approximation problems by Gaussian radial basis feedforward neural networks (GRBFNs) in special function spaces have also been investigated. This paper considers approximation by GRBFNs in the space of continuous functions. It is proved that the rate of approximation by GRBFNs with n^d neurons to any continuous function f defined on a compact subset K ⊂ R^d can be controlled by ω(f, n^{-1/2}), where ω(f, t) is the modulus of continuity of the function f.
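A rough numerical illustration of this kind of rate result (not the paper's exact operator) is a normalized Gaussian RBF quasi-interpolant with centers k/n on [0, 1]: the uniform error on a merely continuous target shrinks as n grows. The normalization, centers, and kernel width below are assumptions for the sketch:

```python
import math

def grbf_net(f, n, width=1.0):
    # Normalized (Shepard-style) Gaussian RBF quasi-interpolant with
    # centers k/n on [0, 1]; an illustrative construction, not the
    # operator studied in the paper.
    centers = [k / n for k in range(n + 1)]
    def G(x):
        ws = [math.exp(-(((x - c) * n) / width) ** 2) for c in centers]
        s = sum(ws)
        return sum(f(c) * w for c, w in zip(centers, ws)) / s
    return G

f = lambda x: abs(x - 0.5)           # continuous but not differentiable
xs = [i / 200 for i in range(201)]   # evaluation grid on [0, 1]
errs = []
for n in (8, 32, 128):
    G = grbf_net(f, n)
    errs.append(max(abs(f(x) - G(x)) for x in xs))
```

The error is dominated by the kink at x = 0.5, where it decays roughly like a constant times 1/n for this particular target.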
Funding: Project supported by the Climbing Programme, the National Key Project for Fundamental Research in China (Grant NSC 92092), and NSF Grant 19371022.
Abstract: In this paper, the capability of neural networks and some approximation problems in system identification with neural networks are investigated. Some results are given: (i) for any function g ∈ L^p_loc(R^1) ∩ S'(R^1) to be an L^p-Tauber-Wiener function, it is necessary and sufficient that g is not a polynomial; (ii) if g ∈ (L^p TW), then the set of finite linear combinations of {g(λx + θ) : λ, θ ∈ R^1} is dense in L^p(K); (iii) it is proved that by compositions of some functions of one variable, one can approximate continuous functionals defined on compact subsets of L^p(K) and continuous operators from a compact subset of L^{p1}(K1) to L^{p2}(K2). These results confirm the capability of neural networks in identifying dynamic systems.
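The density claim in (ii) can be illustrated numerically: since tanh is not a polynomial, finite linear combinations Σ c_i g(λ_i x + θ_i) with g = tanh can approximate a continuous target on a compact interval. Below, the inner parameters (λ_i, θ_i) are drawn at random and only the outer coefficients c are fitted by least squares; this random-features setup, including the parameter ranges and the target cos(3x), is an assumption for illustration, not the paper's constructive argument:

```python
import numpy as np

rng = np.random.default_rng(0)

g = np.tanh                                   # not a polynomial
n_units = 60
lam = rng.uniform(-8.0, 8.0, size=n_units)    # random inner weights
th = rng.uniform(-8.0, 8.0, size=n_units)     # random inner biases

x = np.linspace(0.0, 1.0, 200)
target = np.cos(3.0 * x)                      # continuous function on [0, 1]

# design matrix A[i, j] = g(lam_j * x_i + th_j); fit only the outer
# coefficients c by least squares
A = g(np.outer(x, lam) + th)
c, *_ = np.linalg.lstsq(A, target, rcond=None)
err = float(np.max(np.abs(A @ c - target)))
```

With a few dozen units the maximum error on the grid is already small, consistent with the density statement.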
Abstract: Recently, Li [16] introduced three kinds of single-hidden-layer feedforward neural networks (FNNs) with optimized piecewise linear activation functions and fixed weights, and obtained upper and lower bound estimates on the approximation accuracy of the FNNs for continuous functions defined on bounded intervals. In the present paper, we point out that there are errors both in the definitions of the FNNs and in the proof of the upper estimates in [16]. Using new methods, we also give correct approximation rate estimates for the approximation by Li's neural networks.
Funding: Foundation item: the National Natural Science Foundation of China (No. 10471017).
Abstract: L^p approximation problems in system identification with RBF neural networks are investigated. It is proved that by superpositions of some functions of one variable in L^p_loc(R), one can approximate continuous functionals defined on a compact subset of L^p(K) and continuous operators from a compact subset of L^{p1}(K1) to a compact subset of L^{p2}(K2). These results show that if the activation function is in L^p_loc(R) and is not an even polynomial, then RBF neural networks can approximate the above systems with any accuracy.
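The idea behind approximating a continuous functional can be sketched in two steps: reduce the functional to a function of finitely many samples u(t_1), …, u(t_m), then approximate that function on R^m by an RBF network. The example below applies this to F(u) = ∫₀¹ u(t)² dt restricted to a compact family u_{a,b}(t) = a·sin(bt); the family, the sample count m, the kernel width, and the ridge term are all assumptions for illustration, since the paper works with general compact subsets of L^p:

```python
import numpy as np

m = 6
t = np.linspace(0.0, 1.0, m)                  # sampling points t_1..t_m
tt = np.linspace(0.0, 1.0, 400)               # fine grid for the integral

def F(a, b):
    # Riemann-sum value of the functional F(u) = ∫ u(t)^2 dt for u = a*sin(b t)
    return float(np.mean((a * np.sin(b * tt)) ** 2))

# training set: sample each family member at t, record the exact F value
params = [(a, b) for a in np.linspace(0.1, 1.0, 7)
                 for b in np.linspace(0.3, 3.0, 9)]
X = np.array([a * np.sin(b * t) for a, b in params])   # (63, m)
y = np.array([F(a, b) for a, b in params])

def rbf(Z, C, width=1.0):
    # Gaussian RBF features of points Z against centers C
    d2 = ((Z[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / width ** 2)

K = rbf(X, X)
coef = np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)   # tiny ridge for stability

# evaluate on an unseen member of the family
u_new = 0.5 * np.sin(1.7 * t)
pred = (rbf(u_new[None, :], X) @ coef)[0]
err = abs(pred - F(0.5, 1.7))
```

The network never sees the full function u, only its m samples, which is exactly what makes the reduction from a functional to an ordinary function possible.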