A new type of neural network based on Fourier series is described; the activation (transfer) function in its neuron model is the sinusoid. The network can approximate, to any precision and with only a few layers, any function that is continuous on each segment. We also provide computer simulation results for several kinds of static functions.
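As a minimal illustration, a one-hidden-layer network with sinusoidal activations can reproduce a Fourier partial sum directly. Below, the hidden weights are the textbook Fourier sine coefficients of the square wave sign(sin x), an illustrative target not taken from the paper:

```python
import math

def fourier_net(x, n_neurons):
    """One-hidden-layer network with sinusoidal activations whose output
    weights are the Fourier sine coefficients of the square wave
    sign(sin x): b_k = 4/(pi*k) for odd k, 0 for even k."""
    y = 0.0
    for k in range(1, n_neurons + 1):
        if k % 2 == 1:
            y += (4.0 / (math.pi * k)) * math.sin(k * x)
    return y

# Away from the jump discontinuities the partial sums converge
# to the square wave values +1 / -1.
print(abs(fourier_net(math.pi / 2, 199) - 1.0) < 0.01)
print(abs(fourier_net(-math.pi / 2, 199) + 1.0) < 0.01)
```

Each hidden neuron is one sine term; "training" here is replaced by the closed-form coefficients, which is why a piecewise continuous target can be matched on every segment.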
In this paper, we propose an equal-interval-range approximation and expanding learning rule for multilayer perceptrons applied to pattern recognition. Compared with the traditional BP algorithm, this learning rule requires the output-activation interval between the maximum target output node and the other nodes to exceed a given equal interval range for each training input pattern; it can therefore train networks faster at much lower computational cost and may avoid the occurrence of reversed target outputs and overlearning, improving the network's generalization ability in pattern recognition. Through gradual expansion of the interval range, this learning rule also enables the network to learn its targets more accurately in fewer additional training iterations. Finally, we apply this algorithm to network training for EEG detection, and the experimental results confirm the above advantages of the proposed algorithm.
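The rule's acceptance test can be sketched as a margin check on the output layer. The helper below is a hypothetical illustration of which non-target nodes would receive error feedback under such a rule, not the paper's implementation:

```python
def margin_violations(outputs, target_idx, margin):
    """Indices of non-target output nodes whose activation comes within
    `margin` of the target node's activation. Under a margin-based rule,
    error is backpropagated only for these nodes, instead of pushing all
    outputs toward exact 0/1 targets as plain BP does."""
    t = outputs[target_idx]
    return [i for i, o in enumerate(outputs)
            if i != target_idx and t - o < margin]

# node 1 already clears the margin of 0.3; only node 2 violates it
print(margin_violations([0.9, 0.4, 0.75], 0, 0.3))  # -> [2]
```

Patterns whose margin is already satisfied contribute no updates, which is where the reduced computational cost comes from.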
Funding: Supported by the National Natural Science Foundation of China (No. 69872039)
Abstract: By defining fuzzy-valued simple functions and giving L1(μ) approximations of fuzzy-valued integrably bounded functions by such simple functions, the paper analyses, in the L1(μ)-norm, the approximation capability of four-layer feedforward regular fuzzy neural networks for the fuzzy-valued integrably bounded function F : Rn → FcO(R). That is, if the transfer function σ : R → R is non-polynomial and integrable on each finite interval, then F may be approximated in norm, to any degree of accuracy, by the fuzzy-valued functions so defined. Finally, some real examples demonstrate the conclusions.
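For readers unfamiliar with fuzzy-valued functions, the α-cut representation underlying such metrics can be sketched as follows. The triangular fuzzy numbers and the discretized average-over-cuts interval distance below are illustrative simplifications, not the paper's exact construction:

```python
def alpha_cut(a, b, c, alpha):
    """alpha-cut [left, right] of the triangular fuzzy number (a, b, c):
    membership rises linearly from a to the peak b, then falls to c."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def d_fuzzy(u, v, n=1000):
    """Average over alpha of a Hausdorff-type distance between alpha-cut
    intervals, a simplified stand-in for metrics on fuzzy numbers."""
    s = 0.0
    for i in range(n):
        al = (i + 0.5) / n
        (ul, ur), (vl, vr) = alpha_cut(*u, al), alpha_cut(*v, al)
        s += max(abs(ul - vl), abs(ur - vr)) / n
    return s

print(d_fuzzy((0, 1, 2), (0, 1, 2)) == 0.0)      # identical fuzzy numbers
print(d_fuzzy((0, 1, 2), (0.5, 1.5, 2.5)))       # shifted by 0.5
```

Approximation "in norm" for fuzzy-valued functions then means making such a distance small after integrating over the domain.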
Funding: This work was supported by the National Natural Science Foundation (69974041, 69974006)
Abstract: Four-layer feedforward regular fuzzy neural networks are constructed. Universal approximations of some continuous fuzzy functions defined on F0(R)^n by the four-layer fuzzy neural networks are shown. First, multivariate Bernstein polynomials associated with fuzzy-valued functions are employed to approximate continuous fuzzy-valued functions defined on each compact set of R^n. Secondly, by introducing cut-preserving fuzzy mappings, the equivalent conditions for continuous fuzzy functions that can be arbitrarily closely approximated by regular fuzzy neural networks are shown. Finally, several sufficient and necessary conditions characterizing the approximation capabilities of regular fuzzy neural networks are obtained, and some concrete fuzzy functions demonstrate our conclusions.
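The first step builds on Bernstein polynomials; the classical real-valued (non-fuzzy, univariate) case takes a few lines. The degree n = 50 and the test function are illustrative choices, not the paper's:

```python
import math

def bernstein(f, n):
    """Return the degree-n Bernstein polynomial of f on [0, 1]:
    B_n(f)(x) = sum_k f(k/n) * C(n,k) * x^k * (1-x)^(n-k)."""
    def B(x):
        return sum(
            f(k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k)
            for k in range(n + 1)
        )
    return B

# B_n(f) converges to f uniformly on [0, 1] as n grows.
approx = bernstein(math.sin, 50)
print(abs(approx(0.3) - math.sin(0.3)) < 1e-2)
```

The fuzzy version in the paper applies the same sampling-and-blending idea level-wise to fuzzy-valued functions.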
Funding: Supported by the National Natural Science Foundation of China (61179041, 61101240) and the Zhejiang Provincial Natural Science Foundation of China (Y6110117)
Abstract: In this paper, the technique of approximate partition of unity is used to construct a class of neural network operators with sigmoidal functions. Using the modulus of continuity as a metric, the errors of these operators in approximating continuous functions defined on a compact interval are estimated. Furthermore, Bochner-Riesz means operators of double Fourier series are used to construct network operators for approximating bivariate functions, and the errors of approximation by these operators are estimated.
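A minimal sketch of such an operator, assuming the logistic sigmoid: differences of shifted sigmoids telescope, so their integer translates form an approximate partition of unity, and sampling f at the nodes k/n gives a quasi-interpolation operator. The node range, n, and test function are illustrative:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def phi(t):
    # Difference of shifted sigmoids: the shifts telescope, so the
    # integer translates of phi form an approximate partition of unity.
    return sigmoid(t + 0.5) - sigmoid(t - 0.5)

# Sum of translates is extremely close to 1 inside the truncation range.
s = sum(phi(0.37 - k) for k in range(-50, 51))
print(abs(s - 1.0) < 1e-6)

def nn_operator(f, n):
    """Quasi-interpolation operator F_n(f)(x) = sum_k f(k/n) phi(n*x - k)."""
    def F(x):
        return sum(f(k / n) * phi(n * x - k) for k in range(-n, 2 * n + 1))
    return F

F = nn_operator(math.cos, 200)
print(abs(F(0.5) - math.cos(0.5)) < 1e-2)
```

Because phi decays quickly, F_n(f)(x) is a local average of samples near x, which is what the modulus-of-continuity error estimates quantify.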
Funding: Supported by the National Natural Science Foundation of China (61179041, 61272023, and 11401388)
Abstract: In this paper, we discuss some analytic properties of the hyperbolic tangent function and estimate approximation errors of neural network operators with the hyperbolic tangent activation function. First, an equation of partitions of unity for the hyperbolic tangent function is given. Then, two kinds of quasi-interpolation-type neural network operators are constructed to approximate univariate and bivariate functions, respectively. The errors of approximation are estimated by means of the modulus of continuity. Moreover, for approximated functions with high-order derivatives, the approximation errors of the constructed operators are also estimated.
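The partition-of-unity equation for the hyperbolic tangent is easy to check numerically: the bell function below (a scaled difference of shifted tanh's, a standard construction) has translates whose partial sums telescope to 1:

```python
import math

def phi_h(t):
    # Bell function built from shifted hyperbolic tangents.
    return 0.5 * (math.tanh(t + 0.5) - math.tanh(t - 0.5))

# Partial sums telescope:
#   sum_{k=-N}^{N} phi_h(x - k) = (tanh(x+N+1/2) - tanh(x-N-1/2)) / 2,
# which tends to 1 as N -> infinity, for every fixed x.
x = 0.123
s = sum(phi_h(x - k) for k in range(-30, 31))
print(abs(s - 1.0) < 1e-12)
```

Quasi-interpolation operators are then built exactly as in the sigmoidal case, with phi_h as the localized bell.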
Funding: National Natural Science Foundation of China (No. 70471049); China Postdoctoral Science Foundation (No. 20060400704)
Abstract: In order to solve three kinds of fuzzy programming models, i.e. the fuzzy expected value model, the fuzzy chance-constrained programming model, and the fuzzy dependent-chance programming model, a simultaneous perturbation stochastic approximation algorithm is proposed by integrating a neural network with fuzzy simulation. First, fuzzy simulation is used to generate a set of input-output data. Then a neural network is trained on this set. Finally, the trained neural network is embedded in the simultaneous perturbation stochastic approximation algorithm, which is used to search for the optimal solution. Two numerical examples are presented to illustrate the effectiveness of the proposed algorithm.
Abstract: The relationship between the order of approximation by neural networks based on scattered threshold-value nodes and the number of neurons in a single hidden layer is investigated. The results show that the degree of approximation by a periodic neural network with one hidden layer and scattered threshold-value nodes increases with the number of hidden neurons and with the smoothness of the excitation function.
Abstract: Using some regular matrices, we present a method to express any multivariate algebraic polynomial of total order n in a normal form. Consequently, we prove constructively that, in approximating continuous target functions defined on some compact set of Rd, neural networks are at least as good as algebraic polynomials.
Funding: Project supported by the National Natural Science Foundation of China (No. 11521091)
Abstract: A neural network (NN) is a powerful tool for approximating bounded continuous functions in machine learning. Combined with the automatic differentiation (AD) technique, the NN provides a framework for numerically solving ordinary differential equations (ODEs) and partial differential equations (PDEs). In this work, we explore the use of NNs for function approximation and propose a universal solver for ODEs and PDEs. The solver is tested on initial value problems and boundary value problems of ODEs, and the results exhibit high accuracy not only for the unknown functions but also for their derivatives. The same strategy can be used to construct a PDE solver based on collocation points instead of a mesh, which is tested with the Burgers equation and the heat equation (i.e., the Laplace equation).
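The trial-solution idea can be sketched on y' = -y, y(0) = 1. This is a simplified stand-in for the paper's solver: the output weights are fitted by linear least squares at collocation points rather than by gradient training, hand-coded derivatives replace automatic differentiation, and the feature weights w, b are arbitrary fixed choices:

```python
import numpy as np

# Trial solution y(x) = 1 + x * sum_j c_j tanh(w_j x + b_j):
# the "1 + x * (...)" form enforces y(0) = 1 exactly, so only the
# ODE residual y' + y has to be driven to zero on collocation points.
w = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # fixed hidden weights
b = np.array([0.0, -0.5, 0.5, -1.0, 1.0, -1.5])  # fixed hidden biases
x = np.linspace(0.0, 1.0, 50)[:, None]         # collocation points

t = np.tanh(w * x + b)          # hidden activations, shape (50, 6)
dt = w * (1.0 - t ** 2)         # their x-derivatives (hand-coded "AD")
# residual y' + y = 1 + sum_j c_j (t_j + x*dt_j + x*t_j) is linear in c
A = t + x * dt + x * t
c, *_ = np.linalg.lstsq(A, -np.ones(50), rcond=None)

y = 1.0 + (x * t) @ c           # trained trial solution on the grid
err = np.max(np.abs(y.ravel() - np.exp(-x.ravel())))
print(err)                      # small: close to the exact e^{-x}
```

The same recipe generalizes: encode initial/boundary conditions into the trial form, then minimize the equation residual at collocation points.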
Funding: This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-RP23066).
Abstract: This study discusses an HIV disease model with a novel kind of complex dynamical generalized piecewise operator, in the sense of classical and Atangana-Baleanu (AB) derivatives of arbitrary order. The HIV infection model has a susceptible class, a recovered class, and an infected case divided into three sub-levels or categories. The total time interval is split into two subintervals, which are investigated under the ordinary and fractional-order operators of the AB derivative, respectively. The proposed model is tested separately for the existence and uniqueness of solutions on both intervals. The numerical solution of the proposed model is treated by a piecewise iterative scheme based on Newton's polynomial. The method is established for piecewise derivatives under natural order and the non-singular Mittag-Leffler law. The crossover, or bending, characteristics of the HIV dynamical system are easily examined through this approach, whose memory effect is useful for controlling the disease. The study uses a neural network (NN) technique to obtain a better set of weights with low residual errors; the number of epochs is set to 1000. The obtained figures represent the approximate solution and the absolute error, which are tested with the NN to train the data accurately.
Funding: Supported by the National Natural Science Foundation of China (Nos. 12072118 and 12372029).
Abstract: This paper presents the variational physics-informed neural network (VPINN) as an effective tool for static structural analyses. One key innovation is the construction of the neural network solution as an admissible function of the boundary-value problem (BVP), which satisfies all geometric boundary conditions. We then prove that the admissible neural network solution also satisfies the natural boundary conditions, and therefore all boundary conditions, when the stationarity condition of the variational principle is met. Numerical examples are presented to show the advantages and effectiveness of the VPINN in comparison with the physics-informed neural network (PINN). Another contribution of the work is the introduction of a Gaussian approximation of the Dirac delta function, which significantly enhances the ability of neural networks to handle singularities, as demonstrated by examples with concentrated support conditions and loadings. It is hoped that these structural examples are convincing enough that engineers will adopt the VPINN method in their structural design practice.
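The Gaussian approximation of the Dirac delta is easy to verify in isolation: integrated against a smooth test function, it reproduces the point value, which is what lets a concentrated load enter the variational form as a smooth, integrable forcing term. The width eps, test function, and quadrature below are illustrative:

```python
import math

def gauss_delta(x, eps):
    """Gaussian approximation of the Dirac delta with width eps."""
    return math.exp(-(x / eps) ** 2 / 2.0) / (eps * math.sqrt(2.0 * math.pi))

def integral(f, a, b, n=20000):
    """Composite midpoint rule on [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# As eps -> 0:  integral of f(x) * delta_eps(x - x0) dx  ->  f(x0),
# so a point load at x0 acts like sampling the test function there.
x0, eps = 0.3, 0.01
val = integral(lambda x: math.cos(x) * gauss_delta(x - x0, eps), 0.0, 1.0)
print(abs(val - math.cos(x0)) < 1e-3)
```

The error is O(eps^2) for smooth test functions, so eps can be chosen small relative to the structural length scale without destabilizing the quadrature.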
Funding: Tianjin Natural Science Foundation (983602011); National 863/CIMS Research Foundation (863-511-945-010)
Abstract: The general neural-network inverse adaptive controller has two flaws: slow convergence, and invalidity for non-minimum-phase systems. These defects limit the scope in which the neural-network inverse adaptive controller can be used. We employ Davidon least squares to train the multilayer feedforward neural network that approximates the inverse model of the plant, which expedites convergence; then, by constructing a pseudo-plant, a neural-network inverse adaptive controller is put forward that remains effective for nonlinear non-minimum-phase systems. Simulation results show the validity of this scheme.
Funding: This work was partially supported by the research grant of the National University of Singapore (NUS), Ministry of Education (MOE Tier 1).
Abstract: To improve the performance of multilayer perceptron (MLP) neural networks activated by conventional activation functions, this paper presents a new MLP activated by univariate Gaussian radial basis functions (RBFs) with adaptive centers and widths, composed of more than one hidden layer. In each hidden layer of the RBF-activated MLP network (MLP-RBF), the outputs of the preceding layer are first linearly transformed and then fed into the univariate Gaussian RBF, which exploits the highly nonlinear property of RBFs. Adaptive RBFs may address the issues of saturated outputs, low sensitivity, and vanishing gradients in MLPs activated by other prevailing nonlinear functions. Finally, we apply four MLP networks with the rectified linear unit (ReLU), sigmoid, hyperbolic tangent (tanh), and Gaussian RBF as the activation functions to approximate a one-dimensional (1D) sinusoidal function, the analytical solution of the viscous Burgers' equation, and two-dimensional (2D) steady lid-driven cavity flows. Using the same network structure, MLP-RBF generally predicts more accurately and converges faster than the other three MLPs. MLP-RBF using fewer hidden layers and/or neurons per layer can yield comparable or even higher approximation accuracy than the other MLPs equipped with more layers or neurons.
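The hidden-layer construction (linear transform, then a univariate Gaussian RBF applied elementwise) can be sketched as below. Shapes, initial values, and the shared center/width vectors are illustrative assumptions; in the actual network mu and s would be trained along with the weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_layer(x, W, b, mu, s):
    """Hidden layer of an RBF-activated MLP: a linear transform followed
    by a univariate Gaussian RBF with adaptive center mu and width s."""
    z = x @ W + b                      # linear pre-activation
    return np.exp(-((z - mu) / s) ** 2)

# Tiny two-hidden-layer forward pass (shapes are illustrative only).
x = rng.normal(size=(4, 3))            # batch of 4 inputs, 3 features
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
mu, s = np.zeros(8), np.ones(8)        # trainable in the real network

h1 = rbf_layer(x, W1, b1, mu, s)
h2 = rbf_layer(h1, W2, b2, mu, s)
print(h2.shape, float(h2.min()) >= 0.0, float(h2.max()) <= 1.0)
```

Unlike sigmoid or tanh, the Gaussian responds most strongly near its center and its gradient does not flatten to zero on one whole side, which is the intuition behind the saturation and vanishing-gradient remarks above.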
Funding: This work was supported in part by the National Natural Science Foundation of China (Grants 11521202, 11832001, 11890681 and 11988102).
Abstract: In recent years, neural networks have become an increasingly powerful tool in scientific computing. The universal approximation theorem asserts that a neural network can be constructed to approximate any given continuous function to any desired accuracy. The backpropagation algorithm further allows efficient optimization of the parameters when training a neural network. Powered by GPUs, effective computations for scientific and engineering problems are thereby enabled. In addition, we show that finite element shape functions may also be approximated by neural networks.
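The last claim has a particularly clean special case: a piecewise-linear finite element hat function is exactly representable, not merely approximable, by a two-layer ReLU network with three neurons. A sketch (the node positions are illustrative):

```python
def relu(t):
    return max(t, 0.0)

def hat(x, xl, xm, xr):
    """Piecewise-linear FEM hat function on [xl, xr] with peak 1 at xm,
    written exactly as a two-layer ReLU network with three hidden
    neurons whose kinks sit at the element nodes xl, xm, xr."""
    return (relu(x - xl) / (xm - xl)
            - relu(x - xm) * ((xr - xl) / ((xm - xl) * (xr - xm)))
            + relu(x - xr) / (xr - xm))

# check against the textbook definition on the element [0, 1], peak 0.5
print(hat(0.25, 0.0, 0.5, 1.0))   # rising edge: 2*0.25 = 0.5
print(hat(0.5, 0.0, 0.5, 1.0))    # peak value 1
print(hat(1.5, 0.0, 0.5, 1.0))    # zero outside the support
```

The three output weights are chosen so the slopes after xm and xr cancel, leaving the function identically zero outside [xl, xr].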
Funding: Project supported by the State Key Program of the National Natural Science Foundation of China (Grant No. 60835004), the Natural Science Foundation of Jiangsu Province of China (Grant No. BK2009727), the Natural Science Foundation of Higher Education Institutions of Jiangsu Province of China (Grant No. 10KJB510004), and the National Natural Science Foundation of China (Grant No. 61075028)
Abstract: On the assumption that random interruptions in the observation process are modeled by a sequence of independent Bernoulli random variables, we first generalize two kinds of nonlinear filtering methods with random interruption failures in the observations, based on the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), abbreviated in this paper as GEKF and GUKF, respectively. Then the nonlinear filtering model is established by taking the radial basis function neural network (RBFNN) prototypes and the network weights as the state equation and the output of the RBFNN as the observation equation. Finally, we treat the filtering problem with missing observed data as a special case of nonlinear filtering with random intermittent failures, by setting each missing datum to zero without pre-estimating the missing data, and use the GEKF-based and GUKF-based RBFNNs to predict a ground radioactivity time series with missing data. Experimental results demonstrate that the predictions of the GUKF-based RBFNN accord well with the real ground radioactivity time series, while the predictions of the GEKF-based RBFNN diverge.
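A scalar linear Kalman filter with Bernoulli-interrupted measurements conveys the mechanism in miniature. The paper's GEKF/GUKF operate on the nonlinear RBFNN model; the random-walk state, noise levels, and dropout rate below are illustrative assumptions:

```python
import random

def kalman_intermittent(zs, gammas, q=1e-4, r=0.04):
    """Scalar Kalman filter for x_k = x_{k-1} + w_k, z_k = x_k + v_k,
    where gamma_k in {0, 1} flags whether z_k actually arrived.
    On a dropout the measurement update is skipped and the time-update
    prediction is kept, mirroring the intermittent-observation setting."""
    x, p = 0.0, 1.0
    estimates = []
    for z, g in zip(zs, gammas):
        p = p + q                     # time update (random-walk state)
        if g:                         # measurement update only if received
            k = p / (p + r)
            x = x + k * (z - x)
            p = (1.0 - k) * p
        estimates.append(x)
    return estimates

rng = random.Random(1)
true_level = 1.0
zs = [true_level + rng.gauss(0.0, 0.2) for _ in range(200)]
gammas = [1 if rng.random() > 0.3 else 0 for _ in range(200)]  # ~30% losses
est = kalman_intermittent(zs, gammas)
print(abs(est[-1] - true_level) < 0.2)
```

The Bernoulli sequence gammas plays the role of the interruption model; each dropout simply widens the covariance p until the next valid observation arrives.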
Abstract: Simultaneous perturbation stochastic approximation (SPSA) belongs to the class of gradient-free optimization methods that extract gradient information from successive objective function evaluations. This paper describes an improved SPSA algorithm, which entails fuzzy adaptive gain sequences, gradient smoothing, and a step-rejection procedure to enhance convergence and stability. The proposed fuzzy adaptive simultaneous perturbation stochastic approximation (FASPSA) algorithm is particularly well suited to problems involving a large number of parameters, such as those encountered in nonlinear system identification using neural networks (NNs). Accordingly, a multilayer perceptron (MLP) network with popular training algorithms was used to predict the system response. We found that an MLP trained by FASPSA had the desired accuracy, comparable to results obtained by traditional system identification algorithms. Simulation results for typical nonlinear systems demonstrate that the proposed NN architecture trained with FASPSA yields improved system identification, as measured by reduced convergence time and a smaller identification error.
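A plain SPSA step (Spall's standard form, without the paper's fuzzy gain adaptation, gradient smoothing, or step rejection) might look like the following; the gain constants a, c, A are illustrative choices:

```python
import numpy as np

def spsa_minimize(f, theta0, n_iter=500, a=0.5, c=0.1, A=10, seed=0):
    """Minimize f with SPSA: two function evaluations per iteration give a
    simultaneous-perturbation estimate of the full gradient."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    for k in range(n_iter):
        a_k = a / (k + 1 + A) ** 0.602            # decaying step-size gain
        c_k = c / (k + 1) ** 0.101                # decaying perturbation gain
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher directions
        # One gradient estimate for ALL coordinates from just two evaluations.
        g_hat = (f(theta + c_k * delta) - f(theta - c_k * delta)) / (2 * c_k * delta)
        theta -= a_k * g_hat
    return theta

target = np.array([1.0, -2.0, 0.5])
theta = spsa_minimize(lambda t: np.sum((t - target) ** 2), np.zeros(3))
```

The two-evaluation gradient estimate is what makes SPSA attractive for NN training with many weights: the cost per step is independent of the parameter dimension.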
Abstract: We consider qualitatively robust predictive mappings of stochastic environmental models, where protection against outlier data is incorporated. We utilize digital representations of the models and deploy stochastic binary neural networks that are pre-trained to produce such mappings. The pre-training is implemented by a backpropagating supervised learning algorithm which converges almost surely to the probabilities induced by the environment, under general ergodicity conditions.
Funding: Supported by the Major Project of the Science and Technology Research Program of the Chongqing Education Commission of China (Grant No. KJZD-M201900601), the China Postdoctoral Science Foundation (Grant No. 2021MD703932), and the Engineering Research Center of Mobile Communications, Ministry of Education, China (Grant No. cqupt-mct-202006).
Abstract: Signal detection plays an essential role in massive Multiple-Input Multiple-Output (MIMO) systems. However, existing detection methods have not yet made a good tradeoff between Bit Error Rate (BER) and computational complexity, resulting in slow convergence or high complexity. To address this issue, a low-complexity Approximate Message Passing (AMP) detection algorithm with a Deep Neural Network (DNN), denoted AMP-DNN, is investigated in this paper. Firstly, an efficient AMP detection algorithm is derived by scalarizing the simplification of the Belief Propagation (BP) algorithm. Secondly, by unfolding the obtained AMP detection algorithm, a DNN is specifically designed for optimal performance gain. For the proposed AMP-DNN, the number of trainable parameters is related only to the number of layers, regardless of modulation scheme, antenna number, and matrix calculation, thus facilitating fast and stable training of the network. In addition, the AMP-DNN can detect different channels under the same distribution with only one training. The superior performance of the AMP-DNN is also verified by theoretical analysis and experiments. The proposed algorithm reduces BER without signal prior information, especially in the spatially correlated channel, and has lower computational complexity than existing state-of-the-art methods.
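The unfolding idea, one network layer per algorithm iteration with only a handful of trainable scalars per layer, can be illustrated with a much simpler iterative detector (a Landweber-style stand-in of our own; the real AMP-DNN layers also carry an Onsager correction and a nonlinear denoiser):

```python
import numpy as np

class UnfoldedDetector:
    """Each 'layer' applies x <- x + alpha_l * H^T (y - H x), with one
    scalar per layer -- so the parameter count depends only on depth,
    mirroring the AMP-DNN property described above."""
    def __init__(self, n_layers, alpha0):
        self.alphas = np.full(n_layers, alpha0)  # one (trainable) scalar per layer

    def forward(self, H, y):
        x = np.zeros(H.shape[1])
        for alpha in self.alphas:                # one layer = one iteration
            x = x + alpha * H.T @ (y - H @ x)
        return x

rng = np.random.default_rng(1)
H = rng.standard_normal((16, 4)) / 4.0           # i.i.d. channel matrix
x_true = rng.choice([-1.0, 1.0], size=4)         # BPSK-like symbols
y = H @ x_true                                   # noiseless received signal
alpha0 = 1.0 / np.linalg.norm(H, 2) ** 2         # step size below 2 / sigma_max^2
det = UnfoldedDetector(n_layers=50, alpha0=alpha0)
x_hat = det.forward(H, y)
```

Training would tune each alpha per layer on simulated channel realizations; here the steps are left at a safe analytic value just to show the layer structure.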
Abstract: A new type of neural network is described, based on Fourier series; the activation (transfer) function in its neuron model is a sinusoid. It can approximate any function that is continuous on each segment to any precision using only a few layers. We also provide computer simulation results approximating various kinds of static functions.
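A hidden layer of sinusoid neurons with fixed integer frequencies computes exactly a partial Fourier sum, which is one way to read the approximation claim; the square-wave example below is our own illustration:

```python
import numpy as np

def fourier_net(coeffs):
    """One-hidden-layer network with sinusoidal activation: hidden unit i
    computes sin(k_i * x), so with integer frequencies the network output
    is exactly a partial Fourier sine series."""
    ks = np.arange(1, len(coeffs) + 1)                   # input-to-hidden weights
    def net(x):
        hidden = np.sin(np.outer(np.atleast_1d(x), ks))  # sinusoid activations
        return hidden @ coeffs                           # hidden-to-output weights
    return net

# Fourier sine coefficients of the square wave sign(x) on (-pi, pi):
# b_k = 4 / (pi * k) for odd k, and 0 for even k.
N = 101
coeffs = np.array([4.0 / (np.pi * k) if k % 2 == 1 else 0.0
                   for k in range(1, N + 1)])
net = fourier_net(coeffs)
value = net(np.pi / 2)[0]    # partial sum at x = pi/2; the square wave equals 1 there
```

A piecewise-continuous target like the square wave converges everywhere except at its jumps, matching the "continuous on each segment" condition in the abstract.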
Abstract: In this paper, we propose an equal-interval-range approximation and expanding learning rule for multi-layer perceptrons applied to pattern recognition. Compared with the traditional BP algorithm, this learning rule requires the output activation of the maximum target output node to exceed that of all other nodes by a given equal interval range for each training input pattern; thus it can train networks faster at much lower computational cost and may avoid the occurrence of reversed target outputs and over-learning, thereby improving the network's generalization ability in pattern recognition. By gradually expanding the interval range, this learning rule also enables the network to learn its targets more accurately with few additional training iterations. Finally, we apply this algorithm to network training in EEG detection, and the experimental results demonstrate the above advantages of the proposed algorithm.
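Our reading of the equal-interval-range criterion can be sketched as a per-pattern gate: a pattern drives a weight update only while the target node's activation fails to exceed every other node's by the current margin (the function name and signature below are ours, not the paper's):

```python
import numpy as np

def needs_update(outputs, target, margin):
    """Return True while the pattern should still drive a weight update,
    i.e. while the target node's activation does not yet exceed the best
    competing node's activation by at least `margin`."""
    outputs = np.asarray(outputs, dtype=float)
    others = np.delete(outputs, target)      # activations of non-target nodes
    return bool(outputs[target] - others.max() < margin)
```

Patterns that already satisfy the margin are skipped, which is where the lower training cost and reduced over-learning come from; "expanding" the rule then means re-running training with a larger `margin`.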