With the increasing importance of supply chain transparency, blockchain-based data has emerged as a valuable and verifiable source for analyzing procurement transaction risks. This study extends the mathematical model and proof of 'the Overall Performance Characteristics of the Supply Chain' to encompass multiple variables within blockchain data. Utilizing graph theory, the model is further developed into a single-layer neural network, which serves as the foundation for constructing two multi-layer deep learning neural network models: the Feedforward Neural Network (FNN) and the Deep Clustering Network (DCN). Furthermore, this study retrieves corporate data from the Chunghwa Yellow Pages online resource and the Taiwan Economic Journal (TEJ) database. These data are then virtualized using 'the Metaverse Algorithm', and the selected virtualized blockchain variables are used to train a neural network model for classification. The results demonstrate that a single-layer neural network model, leveraging blockchain data and employing the Proof of Relation (PoR) algorithm as the activation function, effectively identifies anomalous enterprises, which constitute 7.2% of the total sample, in line with expectations. In contrast, the multi-layer neural network models, DCN and FNN, classify an excessively large proportion of enterprises as anomalous (from one-fourth to one-third), which deviates from expectations. This indicates that deep learning may still be inadequate at capturing or identifying malicious corporate behaviors associated with distortions in procurement transaction data. In other words, procurement transaction blockchain data possesses intrinsic value that cannot be replaced by artificial intelligence (AI).
Newton's learning algorithm for neural networks (NNs) is presented and realized. In theory, the convergence rate of an NN learning algorithm based on Newton's method must be faster than that of BP and other learning algorithms, because the gradient method is linearly convergent while Newton's method has a second-order convergence rate. A fast algorithm for computing the Hessian matrix of the NN cost function is proposed, and it is the theoretical basis of the improved Newton's learning algorithm. Simulation results show that the convergence rate of Newton's learning algorithm is high and clearly faster than that of the traditional BP method, and the robustness of Newton's learning algorithm is also better than the BP method's.
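The claimed advantage, second-order versus linear convergence, can be seen on a toy scalar problem. This is a generic illustration, not the paper's network code: when minimizing f(w) = (w - 3)^2, gradient descent contracts the error by a constant factor per step, while a single Newton step using the second derivative (the 1-D "Hessian") lands on the minimizer at once.

```python
def grad(w):            # f'(w) = 2(w - 3)
    return 2.0 * (w - 3.0)

def hess(w):            # f''(w) = 2 (constant for a quadratic)
    return 2.0

# Gradient descent: w <- w - eta * f'(w); error shrinks by (1 - 2*eta) per step.
w_gd = 0.0
for _ in range(10):
    w_gd -= 0.1 * grad(w_gd)

# Newton's method: w <- w - f'(w) / f''(w); exact in one step on a quadratic.
w_newton = 0.0
w_newton -= grad(w_newton) / hess(w_newton)

print(abs(w_gd - 3.0), abs(w_newton - 3.0))
```

On non-quadratic losses Newton's method needs several iterations, but the quadratic local model is exactly why its asymptotic rate beats the gradient method's.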
For accelerating supervised learning by the SpikeProp algorithm with the temporal coding paradigm in spiking neural networks (SNNs), three learning rate adaptation methods (the heuristic rule, the delta-delta rule, and the delta-bar-delta rule), which are used to speed up training in artificial neural networks, are used to develop training algorithms for feedforward SNNs. The performance of these algorithms is investigated in four experiments: the classical XOR (exclusive or) problem, the Iris dataset, fault diagnosis in the Tennessee Eastman process, and Poisson trains of discrete spikes. The results demonstrate that all three learning rate adaptation methods are able to speed up convergence of the SNN compared with the original SpikeProp algorithm. Furthermore, if the adaptive learning rate is used in combination with the momentum term, the two modifications balance each other in a beneficial way to accomplish rapid and steady convergence. Of the three learning rate adaptation methods, the delta-bar-delta rule performs best. The delta-bar-delta method with momentum has the fastest convergence rate, the greatest stability of the training process, and the highest accuracy of network learning. The proposed algorithms are simple and efficient, and consequently valuable for practical applications of SNNs.
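The delta-bar-delta rule singled out above (Jacobs' adaptation scheme) can be sketched on a single weight. The quadratic loss and the constants kappa, phi, and theta below are illustrative stand-ins, not values from the paper: the learning rate grows additively while the gradient keeps its sign and shrinks multiplicatively when the sign flips.

```python
kappa, phi, theta = 0.05, 0.2, 0.7   # additive increase, multiplicative decrease, averaging
w, lr, delta_bar = 5.0, 0.01, 0.0    # weight, per-weight rate, smoothed gradient

for _ in range(200):
    delta = 2.0 * w                  # gradient of the toy loss w^2
    if delta_bar * delta > 0:        # consistent sign: grow the rate additively
        lr += kappa
    elif delta_bar * delta < 0:      # sign flip: shrink the rate multiplicatively
        lr *= (1.0 - phi)
    delta_bar = (1.0 - theta) * delta + theta * delta_bar
    w -= lr * delta

print(abs(w), lr)
```

The asymmetric grow/shrink schedule is what keeps the rate aggressive on smooth slopes yet quickly damped near the minimum, which matches the "rapid and steady convergence" the abstract reports when combined with momentum.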
An integrated fuzzy min-max neural network (IFMMNN) is developed to prevent the classification result from being influenced by the input order of the training samples, and the learning algorithm can be used for pure clustering, pure classification, or hybrid clustering-classification. Three experiments are designed to verify this. The serial input of samples is changed to parallel input, and the fuzzy membership function is replaced by a similarity matrix. The experimental results show the method's superiority over the original method proposed by Simpson.
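For context, here is a sketch of the hyperbox membership function that the IFMMNN replaces with a similarity matrix. It follows Simpson's original fuzzy min-max formulation as I understand it, with an assumed sensitivity parameter gamma; treat it as an illustration rather than the paper's code.

```python
def hyperbox_membership(x, v, w, gamma=4.0):
    """Fuzzy min-max membership of point x in the hyperbox with min point v
    and max point w: 1.0 inside the box, decaying with distance outside it."""
    n = len(x)
    total = 0.0
    for xi, vi, wi in zip(x, v, w):
        # penalty for exceeding the max point, and for undershooting the min point
        total += max(0.0, 1.0 - max(0.0, gamma * min(1.0, xi - wi)))
        total += max(0.0, 1.0 - max(0.0, gamma * min(1.0, vi - xi)))
    return total / (2.0 * n)

inside = hyperbox_membership([0.4, 0.5], v=[0.3, 0.3], w=[0.6, 0.6])
outside = hyperbox_membership([0.9, 0.9], v=[0.3, 0.3], w=[0.6, 0.6])
print(inside, outside)
```

Because this membership is evaluated per hyperbox as samples stream in serially, its value depends on which boxes have already been grown, which is exactly the input-order sensitivity the IFMMNN's parallel input and similarity matrix are designed to remove.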
Motivated by the critical role of endpoint quality prediction for basic oxygen furnaces (BOFs) in steelmaking, and by the latest results in computational intelligence (CI), this paper deals with the development of a novel memetic algorithm (MA) for neural network (NN) learning. Included in this is the integration of extremal optimization (EO) and Levenberg-Marquardt (LM) gradient search, and its application to BOF endpoint quality prediction. The fundamental analysis reveals that the proposed EO-LM algorithm may provide superior performance in generalization and computational efficiency, and may avoid local minima, compared with traditional NN learning methods. Experimental results with production-scale BOF data show that the proposed method can effectively improve the NN model for BOF endpoint quality prediction.
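The Levenberg-Marquardt half of EO-LM blends Gauss-Newton with gradient descent through a damping factor. A minimal one-parameter sketch follows (fitting y = a*x; the data and the halving/doubling schedule are invented for illustration, and the EO global-search half is omitted):

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]       # roughly y = 2x

a, lam = 0.0, 1.0               # parameter and LM damping factor
for _ in range(50):
    residuals = [y - a * x for x, y in zip(xs, ys)]
    sse = sum(r * r for r in residuals)
    # The Jacobian of the residuals w.r.t. a is -x, so J^T J = sum(x^2)
    # and J^T r = -sum(x * r); LM solves (J^T J + lam) * da = -J^T r.
    jtj = sum(x * x for x in xs)
    jtr = -sum(x * r for x, r in zip(xs, residuals))
    da = -jtr / (jtj + lam)
    new_sse = sum((y - (a + da) * x) ** 2 for x, y in zip(xs, ys))
    if new_sse < sse:           # accept the step, trust the quadratic model more
        a, lam = a + da, lam * 0.5
    else:                       # reject, fall back toward small gradient steps
        lam *= 2.0

print(a)
```

Large damping makes the step a cautious gradient move; small damping recovers the fast Gauss-Newton step. Pairing this local search with EO's global jumps is the memetic idea the abstract describes.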
This paper derives the properties of an updated neural network model that is used to identify an unknown nonlinear system via the standard gradient learning algorithm. The convergence of this algorithm for online training of three-layer neural networks in a stochastic environment is studied. A special case is considered in which the unknown nonlinearity can be approximated exactly by some neural network with a nonlinear activation function in its output layer. To analyze the asymptotic behavior of the learning processes, the so-called Lyapunov-like approach is utilized. As the Lyapunov function, the expected value of the squared approximation error, depending on the network parameters, is chosen. Within this approach, sufficient conditions guaranteeing the convergence of the learning algorithm with probability 1 are derived. Simulation results are presented to support the theoretical analysis.
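The paper's sufficient conditions are not reproduced here, but a standard ingredient of convergence-with-probability-1 results for stochastic gradient learning is a Robbins-Monro step-size schedule (sum of eta_t diverges, sum of eta_t squared converges). A sketch on a noisy scalar quadratic, assuming eta_t = 1/t:

```python
import random

random.seed(0)
theta = 5.0
for t in range(1, 20001):
    # Gradient of theta^2 observed through additive noise, as in a
    # stochastic environment; the noise model is an illustrative assumption.
    noisy_grad = 2.0 * theta + random.gauss(0.0, 1.0)
    theta -= (1.0 / t) * noisy_grad     # Robbins-Monro step sizes

print(abs(theta))
```

The diverging step-size sum lets the iterate travel arbitrarily far if needed, while the converging squared sum tames the accumulated noise, which is the intuition behind the almost-sure convergence the Lyapunov-like analysis formalizes.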
Additive manufacturing (AM), also known as three-dimensional printing, is gaining increasing attention from academia and industry due to the unique advantages it has in comparison with traditional subtractive manufacturing. However, AM processing parameters are difficult to tune, since they can exert a huge impact on the printed microstructure and on the performance of the subsequent products. It is a difficult task to build a process-structure-property-performance (PSPP) relationship for AM using traditional numerical and analytical models. Today, the machine learning (ML) method has been demonstrated to be a valid way to perform complex pattern recognition and regression analysis without an explicit need to construct and solve the underlying physical models. Among ML algorithms, the neural network (NN) is the most widely used model due to the large datasets that are currently available, strong computational power, and sophisticated algorithm architecture. This paper overviews the progress of applying the NN algorithm to several aspects of the AM whole chain, including model design, in situ monitoring, and quality evaluation. Current challenges in applying NNs to AM and potential solutions for these problems are then outlined. Finally, future trends are proposed in order to provide an overall discussion of this interdisciplinary area.
Neural networks (NNs), as one of the most robust and efficient machine learning methods, have been commonly used to solve a variety of problems. However, choosing proper hyperparameters (e.g. the numbers of layers and of neurons in each layer) has a significant influence on the accuracy of these methods. Therefore, a considerable number of studies have been carried out to optimize NN hyperparameters. In this study, the genetic algorithm is applied to the NN to find the optimal hyperparameters. The deep energy method, which contains a deep neural network, is first applied to a Timoshenko beam and a plate with a hole. Subsequently, the numbers of hidden layers, integration points, and neurons in each layer are optimized to reach the highest accuracy in predicting the stress distribution through these structures. Applying a proper optimization method to the NN thus leads to a significant increase in NN prediction accuracy across the various examples.
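A genetic search over a single hyperparameter can be sketched as follows. The fitness function is a cheap stand-in for an expensive training run, and its peak at 64 neurons is an assumption made purely for illustration, as are the population and mutation settings:

```python
import random

random.seed(1)

def mock_accuracy(neurons):
    # Stand-in for training + validation of a network with this width;
    # assumed to peak at 64 neurons for the sake of the example.
    return 1.0 - abs(neurons - 64) / 128.0

def evolve(pop_size=20, generations=30, low=1, high=128):
    pop = [random.randint(low, high) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=mock_accuracy, reverse=True)
        parents = pop[: pop_size // 2]               # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                     # arithmetic crossover
            if random.random() < 0.2:                # occasional mutation
                child += random.randint(-8, 8)
            children.append(max(low, min(high, child)))
        pop = parents + children
    return max(pop, key=mock_accuracy)

best = evolve()
print(best)
```

Real runs would encode several hyperparameters per chromosome (layers, neurons, integration points) and spend almost all their time in the fitness evaluations, which is why the number of generations and the population size are the practical bottleneck.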
The hydraulic roll bending control system usually exhibits the dynamic characteristics of nonlinearity, slow time variance, and strong outside interference in the rolling process, so it is difficult to establish a precise mathematical model for control. Therefore, a new method for building a hydraulic roll bending control system is put forward, using a cerebellar model articulation controller (CMAC) neural network together with a proportional-integral-derivative (PID) coupling control strategy. The nonlinear relationship between input and output can be achieved by the concept mapping and the actual mapping of the CMAC. The simulation results show that, compared with the conventional PID control algorithm, the parallel control algorithm can overcome the influence of parameter changes of the roll bending system on the control performance, thus greatly improving the anti-jamming capability of the system, reducing the dependence of control performance on the accuracy of the analytical model, enhancing the tracking performance of the hydraulic roll bending loop for the roll bending force, and increasing the system response speed. The results indicate that the CMAC-PID coupling control strategy for the hydraulic roll bending system is effective.
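A toy CMAC helps make the "concept mapping" concrete: overlapping tilings quantize the input, and the output is the sum of the weights the input activates. The resolution, number of tilings, and target function below are illustrative choices, and the parallel PID branch is omitted:

```python
import math

RES, C = 64, 8                  # input quantization resolution, number of tilings
table = [[0.0] * (RES // C + 2) for _ in range(C)]

def cells(x):
    """Active (tiling, cell) pairs for x in [0, 1): the concept mapping."""
    q = int(x * RES)
    return [(j, (q + j) // C) for j in range(C)]

def predict(x):
    return sum(table[j][c] for j, c in cells(x))

def train(x, target, alpha=0.5):
    err = target - predict(x)
    for j, c in cells(x):       # spread the correction over the active cells
        table[j][c] += alpha * err / C

for _ in range(300):            # a few hundred sweeps over a training grid
    for i in range(RES):
        train(i / RES, math.sin(2 * math.pi * i / RES))

max_err = max(abs(predict(i / RES) - math.sin(2 * math.pi * i / RES)) for i in range(RES))
print(max_err)
```

Because nearby inputs share most of their active cells, training at one point generalizes locally, which is what lets the CMAC branch learn the plant's nonlinearity online while the conventional PID branch keeps the loop stable.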
The Volterra feedforward neural network with nonlinear interconnections and a related homotopy learning algorithm are proposed in this paper. It is shown that the Volterra neural network and the homotopy learning algorithm have significantly greater potential in nonlinear approximation ability, convergence speed, and global optimization than classical neural networks and the standard BP algorithm; related computer simulations and theoretical analysis are given as well.
Developing an accurate and efficient comprehensive water quality prediction model and its assessment method is crucial for the prevention and control of water pollution. Deep learning (DL), as one of the most promising technologies today, plays a crucial role in the effective assessment of water body health, which is essential for water resource management. This study builds models using both the original dataset and a dataset augmented with Generative Adversarial Networks (GANs). It integrates optimization algorithms (OAs) with Convolutional Neural Networks (CNNs) to propose a comprehensive water quality model evaluation method aimed at identifying the optimal models for different pollutants. Specifically, after preprocessing the spectral dataset, data augmentation was conducted to obtain two datasets. Then, six new models were developed on these datasets using particle swarm optimization (PSO), the genetic algorithm (GA), and simulated annealing (SA) combined with CNNs to simulate and forecast the concentrations of three water pollutants: Chemical Oxygen Demand (COD), Total Nitrogen (TN), and Total Phosphorus (TP). Finally, seven model evaluation methods, including uncertainty analysis, were used to evaluate the constructed models and select the optimal model for each of the three pollutants. The evaluation results indicate that the GPSCNN model performed best in predicting COD and TP concentrations, while the GGACNN model excelled in TN concentration prediction. Compared with existing technologies, the proposed models and evaluation methods provide a more comprehensive and rapid approach to water body prediction and assessment, offering new insights and methods for water pollution prevention and control.
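The PSO update used to tune the CNNs can be sketched generically. The sphere function stands in for the real training loss, and the inertia and acceleration coefficients are common textbook values rather than the paper's settings:

```python
import random

random.seed(2)

def sphere(pos):                       # stand-in objective, not a water-quality loss
    return sum(p * p for p in pos)

DIM, N, ITERS = 3, 15, 60
W, C1, C2 = 0.7, 1.5, 1.5              # inertia, cognitive, social coefficients

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]            # each particle's best position so far
gbest = min(pbest, key=sphere)[:]      # swarm-wide best position

for _ in range(ITERS):
    for i in range(N):
        for d in range(DIM):
            vel[i][d] = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - pos[i][d])
                         + C2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
            if sphere(pbest[i]) < sphere(gbest):
                gbest = pbest[i][:]

print(sphere(gbest))
```

In hyperparameter tuning, each position vector would encode CNN settings (e.g. filter counts or learning rate) and each fitness evaluation would be a full training run, so the swarm size and iteration budget dominate the cost.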
Wireless Sensor Networks (WSNs) have emerged as crucial tools for real-time environmental monitoring through distributed sensor nodes (SNs). However, the operational lifespan of WSNs is significantly constrained by the limited energy resources of SNs. Current energy efficiency strategies, such as clustering, multi-hop routing, and data aggregation, face challenges including uneven energy depletion, high computational demands, and suboptimal cluster head (CH) selection. To address these limitations, this paper proposes a hybrid methodology that optimizes energy consumption (EC) while maintaining network performance. The proposed approach integrates the deterministic Low Energy Adaptive Clustering Hierarchy (LEACH-D) protocol with an Artificial Neural Network (ANN) and the Bayesian Regularization Algorithm (BRA). LEACH-D improves upon conventional LEACH by ensuring more uniform energy usage across SNs, mitigating the inefficiencies of random CH selection. The ANN further enhances CH selection and routing, effectively reducing data transmission overhead and idle listening. Simulation results reveal that the LEACH-D-ANN model significantly reduces EC and extends the network's lifespan compared with existing protocols. This framework offers a promising solution to the energy efficiency challenges in WSNs, paving the way for more sustainable and reliable network deployments.
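For reference, here is the classic LEACH cluster-head election threshold that deterministic variants refine. This is the standard T(n) formula from the original protocol, not the LEACH-D rule itself:

```python
def leach_threshold(p, r, was_ch_this_epoch):
    """Classic LEACH CH election threshold T(n) for round r.
    p: desired fraction of CHs per round. A node becomes CH when a uniform
    random draw falls below T(n); nodes that already served as CH in the
    current epoch of 1/p rounds get threshold 0."""
    if was_ch_this_epoch:
        return 0.0
    return p / (1.0 - p * (r % int(round(1.0 / p))))

print(leach_threshold(0.1, 0, False))   # first round of an epoch: threshold = p
print(leach_threshold(0.1, 9, False))   # last round: remaining nodes elect for sure
```

Because the threshold rises to 1 by the end of each epoch, every node serves as CH exactly once per epoch on average; the randomness in who serves when is the source of the uneven energy depletion that LEACH-D's deterministic selection targets.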
This paper investigates the exponential stability and trajectory bounds of motions of equilibria of a class of associative neural networks under the structural variations that occur when learning a new pattern. Some conditions for the largest possible estimate of the domain of structural exponential stability are determined. The filtering ability of associative neural networks contaminated by input noise is analyzed. Employing the obtained results as guidelines, a systematic synthesis procedure can be developed for constructing a dynamical associative neural network that stores a given set of vectors as stable equilibrium points and also learns new patterns. Some new concepts defined here are expected to guide further studies of learning associative neural networks.
This paper describes the self-adjustment of some tuning knobs of the generalized predictive controller (GPC). A three-layer feedforward neural network was utilized to learn two key tuning knobs of the GPC online, and the BP algorithm was used for training the connection weights of the neural network. This removes the difficulty of choosing these tuning knobs manually and eases the wide application of GPC to industrial plants. Simulation results illustrate the effectiveness of the method.
Convolutional Neural Network (CNN) models succeed in vast domains. CNNs are available in a variety of topologies and sizes. The challenge in this area is to develop the optimal CNN architecture for a particular problem, achieving high results while using minimal computational resources to train the architecture. Our proposed framework for automated design is aimed at resolving this problem. The framework is based on a genetic algorithm that evolves a population of CNN models in order to find the best-fitting architecture. In comparison with related work, our framework is concerned with creating lightweight architectures with a limited number of parameters while retaining a high degree of validation accuracy, utilizing an ensemble learning technique. Such architectures are intended to operate on low-resource machines, rendering them ideal for implementation in a number of environments. Four common benchmark image datasets are used to test the proposed framework, and it is compared with peer competitors' work using a range of metrics, including accuracy, the number of model parameters, the number of GPUs used, and the number of GPU days needed to complete the method. Our experimental findings demonstrate a significant advantage in terms of GPU days, accuracy, and the number of parameters in the discovered model.
A switched reluctance machine (SRM) drive is a time-varying, strongly nonlinear system, so high performance control can no longer be achieved using linear techniques. This paper describes back-propagation (BP) neural network-based proportional-integral-derivative (PID) speed control of the SRM. The interest of this paper is to explore the use of prior empirical knowledge as guidance in initializing and training the neural networks, the purpose being to make the networks less sensitive to the initial weights. Two modified algorithms are presented, and simulation experiments show some interesting findings about their control effects and their corresponding sensitivity to the initial weights of the networks.
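Stripped of the neural network, the speed loop being tuned is an ordinary discrete PID controller. A sketch on a toy first-order plant follows; the gains and the plant are fixed illustrative assumptions (in the paper, the BP network would supply and adapt the gains online):

```python
KP, KI, KD = 2.0, 0.5, 0.1
DT = 0.01
setpoint, speed, integral, prev_err = 100.0, 0.0, 0.0, 0.0

for _ in range(5000):
    err = setpoint - speed
    integral += err * DT                 # I term accumulates the error
    deriv = (err - prev_err) / DT        # D term reacts to the error's slope
    u = KP * err + KI * integral + KD * deriv
    prev_err = err
    # Toy first-order plant: the speed relaxes toward the control input.
    speed += DT * (u - speed)

print(speed)
```

The practical difficulty is exactly the one the abstract addresses: good values of KP, KI, and KD for a time-varying nonlinear SRM drift with the operating point, so letting a network adjust them (seeded with empirical knowledge) beats a fixed hand tuning.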
Phishing attacks present a persistent and evolving threat in the cybersecurity landscape, necessitating the development of more sophisticated detection methods. Traditional machine learning approaches to phishing detection have relied heavily on feature engineering and have often fallen short in adapting to the dynamically changing patterns of phishing Uniform Resource Locators (URLs). Addressing these challenges, we introduce a framework that integrates the sequential data processing strengths of a Recurrent Neural Network (RNN) with the hyperparameter optimization prowess of the Whale Optimization Algorithm (WOA). Our model capitalizes on an extensive Kaggle dataset featuring over 11,000 URLs, each delineated by 30 attributes. The WOA's hyperparameter optimization enhances the RNN's performance, as evidenced by a meticulous validation process. The results, encapsulated in precision, recall, and F1-score metrics, surpass baseline models, achieving an overall accuracy of 92%. This study not only demonstrates the RNN's proficiency in learning complex patterns but also underscores the WOA's effectiveness in refining machine learning models for the critical task of phishing detection.
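The WOA's position updates can be sketched as follows. The sphere objective stands in for the RNN validation loss, and the constants follow the commonly published form of the algorithm (shrinking coefficient a, spiral constant b) rather than this paper's configuration:

```python
import math, random

random.seed(3)

def fitness(x):                        # stand-in objective, not an RNN loss
    return sum(v * v for v in x)

DIM, N, ITERS, B = 2, 10, 50, 1.0
whales = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
best = min(whales, key=fitness)[:]

for t in range(ITERS):
    a = 2.0 - 2.0 * t / ITERS          # a decreases linearly from 2 to 0
    for w in whales:
        if random.random() < 0.5:      # encircling prey / exploration
            A = 2 * a * random.random() - a
            C = 2 * random.random()
            # |A| < 1 exploits around the best whale; otherwise explore
            ref = best[:] if abs(A) < 1 else random.choice(whales)[:]
            for d in range(DIM):
                w[d] = ref[d] - A * abs(C * ref[d] - w[d])
        else:                          # spiral bubble-net update
            l = random.uniform(-1, 1)
            for d in range(DIM):
                w[d] = abs(best[d] - w[d]) * math.exp(B * l) * math.cos(2 * math.pi * l) + best[d]
        if fitness(w) < fitness(best):
            best = w[:]

print(fitness(best))
```

For hyperparameter tuning, each whale would encode a candidate RNN configuration and each fitness call would train and validate the model, so the 50 iterations above represent the budget-limited part of the pipeline.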
As the most popular learning algorithm for feedforward neural networks, the classic BP algorithm has many shortcomings. To overcome some of them, a modified learning algorithm is proposed in this article, and the simulation results illustrate that the modified algorithm is more effective and practicable.
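The abstract does not say which modification is used; one of the most common BP fixes is a momentum term, sketched here as an illustration on a single weight (loss w^2, with assumed learning rate and momentum coefficient):

```python
eta, mu = 0.005, 0.9
plain, w, v = 5.0, 5.0, 0.0

# Classic BP: plain gradient steps.
for _ in range(100):
    plain -= eta * (2.0 * plain)

# BP with momentum: the velocity accumulates past gradients,
# which speeds progress along persistent descent directions.
for _ in range(100):
    v = mu * v - eta * (2.0 * w)
    w += v

print(abs(plain), abs(w))
```

With a small learning rate, the momentum variant reaches the minimum far sooner than plain BP, which is representative of how such modifications address BP's slow convergence.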
A micro-expression lasts for a very short time and its intensity is very subtle. Aiming at the problem of its low recognition rate, this paper proposes a new micro-expression recognition algorithm based on a three-dimensional convolutional neural network (3D-CNN), which can simultaneously extract two-dimensional features in the spatial domain and one-dimensional features in the time domain. The network structure is designed on the deep learning framework Keras, and the dropout method and the batch normalization (BN) algorithm are effectively combined with the three-dimensional visual geometry group block (3D-VGG-Block) to reduce the risk of overfitting while improving training speed. To address the lack of samples in the dataset, two data augmentation methods, image flipping and small-amplitude flipping, are used. Finally, the recognition rate on the dataset reaches 69.11%. Compared with the current international average micro-expression recognition rate of about 67%, the proposed algorithm has an obvious advantage in recognition rate.
Funding (SpikeProp learning rate adaptation study): Supported by the National Natural Science Foundation of China (60904018, 61203040), the Natural Science Foundation of Fujian Province of China (2009J05147, 2011J01352), the Foundation for Distinguished Young Scholars of Higher Education of Fujian Province of China (JA10004), and the Science Research Foundation of Huaqiao University (09BS617).
Funding (integrated fuzzy min-max neural network study): Supported by the National Natural Science Foundation of China (No. 61402280).
Funding (BOF endpoint quality prediction study): Project (No. 60721062) supported by the National Creative Research Groups Science Foundation of China.
Fund: Item sponsored by the National High-Tech Research and Development Program (863 Program) of China (2009AA04Z143), the Natural Science Foundation of Hebei Province of China (E2006001038), and the Hebei Provincial Science and Technology Project of China (10212101D).
Abstract: The hydraulic roll bending control system usually has the dynamic characteristics of nonlinearity, slow time variance, and strong outside interference in the rolling process, so it is difficult to establish a precise mathematical model for control. Therefore, a new method for establishing a hydraulic roll bending control system is put forward using a cerebellar model articulation controller (CMAC) neural network coupled with a proportional-integral-derivative (PID) control strategy. The nonlinear relationship between input and output can be achieved by the concept mapping and the actual mapping of the CMAC. The simulation results show that, compared with the conventional PID control algorithm, the parallel control algorithm can overcome the influence of parameter changes in the roll bending system on control performance, thus greatly improving the anti-jamming capability of the system, reducing the dependence of control performance on the accuracy of the analytical model, enhancing the tracking performance of the hydraulic roll bending loop for the roll bending force, and increasing the system response speed. The results indicate that the CMAC-PID coupling control strategy for the hydraulic roll bending system is effective.
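For context on the conventional half of the CMAC+PID parallel scheme, here is a minimal discrete PID loop driving a generic first-order plant. The CMAC feedforward mapping itself is omitted, and the plant is an illustrative stand-in, not a model of the roll bending hydraulics.

```python
class PID:
    """Textbook discrete PID controller (the conventional branch of the
    parallel scheme; the CMAC branch is not modeled here)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt                 # accumulate I term
        deriv = (err - self.prev_err) / self.dt        # finite-difference D term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a hypothetical first-order plant toward a unit setpoint.
pid = PID(kp=2.0, ki=1.0, kd=0.0, dt=0.05)
x = 0.0
for _ in range(400):
    u = pid.step(1.0, x)
    x += 0.05 * (u - x)    # plant dynamics: x' = u - x, Euler-integrated
```

With integral action present, the steady-state error for a constant setpoint goes to zero.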
Abstract: The Volterra feedforward neural network with nonlinear interconnections and a related homotopy learning algorithm are proposed in this paper. It is shown that the Volterra neural network and the homotopy learning algorithm have significantly greater potential in nonlinear approximation ability, convergence speed, and global optimization than classical neural networks and the standard BP algorithm; related computer simulations and theoretical analysis are given as well.
Fund: Supported by the Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2022JM-396), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA23040101), the Shaanxi Province Key Research and Development Projects (Program No. 2023-YBSF-437), the Xi'an Shiyou University Graduate Student Innovation Fund Program (Program No. YCX2412041), the State Key Laboratory of Air Traffic Management System and Technology (SKLATM202001), the Tianjin Education Commission Research Program Project (2020KJ028), and the Fundamental Research Funds for the Central Universities (3122019132).
Abstract: Developing an accurate and efficient comprehensive water quality prediction model and its assessment method is crucial for the prevention and control of water pollution. Deep learning (DL), as one of the most promising technologies today, plays a crucial role in the effective assessment of water body health, which is essential for water resource management. This study builds models using both the original dataset and a dataset augmented with Generative Adversarial Networks (GAN). It integrates optimization algorithms (OA) with Convolutional Neural Networks (CNN) to propose a comprehensive water quality model evaluation method aimed at identifying the optimal models for different pollutants. Specifically, after preprocessing the spectral dataset, data augmentation was conducted to obtain two datasets. Then, six new models were developed on these datasets using particle swarm optimization (PSO), the genetic algorithm (GA), and simulated annealing (SA) combined with CNN to simulate and forecast the concentrations of three water pollutants: Chemical Oxygen Demand (COD), Total Nitrogen (TN), and Total Phosphorus (TP). Finally, seven model evaluation methods, including uncertainty analysis, were used to evaluate the constructed models and select the optimal models for the three pollutants. The evaluation results indicate that the GPSCNN model performed best in predicting COD and TP concentrations, while the GGACNN model excelled in TN concentration prediction. Compared to existing technologies, the proposed models and evaluation methods provide a more comprehensive and rapid approach to water body prediction and assessment, offering new insights and methods for water pollution prevention and control.
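Of the three optimizers combined with CNN above, particle swarm optimization is the easiest to sketch in isolation. The one-dimensional toy below minimizes a simple quadratic; in the paper's setting the objective would instead be the CNN's validation error over its hyperparameters.

```python
import random

def pso_minimize(f, lo, hi, n_particles=15, iters=60, seed=1):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]   # positions
    vs = [0.0] * n_particles                                 # velocities
    pbest = xs[:]                                            # personal bests
    gbest = min(xs, key=f)                                   # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (0.7 * vs[i]                             # inertia
                     + 1.5 * r1 * (pbest[i] - xs[i])         # cognitive pull
                     + 1.5 * r2 * (gbest - xs[i]))           # social pull
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

best = pso_minimize(lambda v: (v - 3.0) ** 2, -10.0, 10.0)
```

The inertia and pull coefficients (0.7 and 1.5) are conventional stable choices, not values taken from the paper.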
Abstract: Wireless Sensor Networks (WSNs) have emerged as crucial tools for real-time environmental monitoring through distributed sensor nodes (SNs). However, the operational lifespan of WSNs is significantly constrained by the limited energy resources of SNs. Current energy efficiency strategies, such as clustering, multi-hop routing, and data aggregation, face challenges including uneven energy depletion, high computational demands, and suboptimal cluster head (CH) selection. To address these limitations, this paper proposes a hybrid methodology that optimizes energy consumption (EC) while maintaining network performance. The proposed approach integrates the Low Energy Adaptive Clustering Hierarchy with Deterministic (LEACH-D) protocol with an Artificial Neural Network (ANN) and the Bayesian Regularization Algorithm (BRA). LEACH-D improves upon conventional LEACH by ensuring more uniform energy usage across SNs, mitigating inefficiencies from random CH selection. The ANN further enhances the CH selection and routing processes, effectively reducing data transmission overhead and idle listening. Simulation results reveal that the LEACH-D-ANN model significantly reduces EC and extends the network's lifespan compared to existing protocols. This framework offers a promising solution to the energy efficiency challenges in WSNs, paving the way for more sustainable and reliable network deployments.
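For reference, the cluster-head election rule of baseline LEACH (which LEACH-D refines toward more uniform energy usage) is the standard threshold formula; a sketch is given below. The deterministic refinement and the ANN/BRA components of the proposed scheme are beyond this snippet.

```python
def leach_threshold(p, round_no):
    """Election threshold T(n) for an eligible node in classic LEACH:
    p is the desired fraction of cluster heads per round, round_no the
    current round (0-indexed). A node becomes CH for this round when a
    uniform random draw in [0, 1) falls below T(n). Baseline formula
    only; LEACH-D and the ANN-based CH selection are not modeled."""
    epoch = round(1.0 / p)                     # rounds per election epoch
    return p / (1.0 - p * (round_no % epoch))
```

The threshold rises over the epoch, so nodes that have not yet served as CH become certain to be elected by the epoch's last round.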
Abstract: This paper investigates exponential stability and trajectory bounds of motions of equilibria of a class of associative neural networks under structural variations while learning a new pattern. Some conditions for the possible maximum estimate of the domain of structural exponential stability are determined. The filtering ability of the associative neural networks contaminated by input noises is analyzed. Employing the obtained results as valuable guidelines, a systematic synthesis procedure can be developed for constructing a dynamical associative neural network that stores a given set of vectors as stable equilibrium points and learns new patterns. Some new concepts defined here are expected to serve as instruction for further studies of learning associative neural networks.
Fund: Supported by the National 863 CIMS Project Foundation (863-511-010), the Tianjin Natural Science Foundation (983602011), and the Backbone Young Teacher Project Foundation of the Ministry of Education.
Abstract: This paper describes the self-adjustment of some tuning-knobs of the generalized predictive controller (GPC). A three-layer feedforward neural network was utilized to learn two key tuning-knobs of GPC online, and the BP algorithm was used for training the linking weights of the neural network. Hence the method avoids the difficulty of choosing these tuning-knobs manually and provides an easier path to the wide application of GPC in industrial plants. Simulation results illustrated the effectiveness of the method.
Abstract: Convolutional Neural Network (CNN) models succeed in vast domains. CNNs are available in a variety of topologies and sizes. The challenge in this area is to develop the optimal CNN architecture for a particular problem in order to achieve high results while using minimal computational resources to train the architecture. Our proposed framework for automated design is aimed at resolving this problem. The framework is based on a genetic algorithm that evolves a population of CNN models in order to find the architecture that is the best fit. In comparison to peer work, our proposed framework is concerned with creating lightweight architectures with a limited number of parameters while retaining a high degree of validation accuracy by utilizing an ensemble learning technique. This architecture is intended to operate on low-resource machines, rendering it ideal for implementation in a number of environments. Four common benchmark image datasets are used to test the proposed framework, and it is compared to peer competitors' work using a range of metrics, including accuracy, the number of model parameters used, the number of GPUs used, and the number of GPU days needed to complete the method. Our experimental findings demonstrated a significant advantage in terms of GPU days, accuracy, and the number of parameters in the discovered model.
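Keeping the parameter count small is the stated fitness criterion above, and a genetic search needs a cheap way to score it. The helper below counts the parameters of a plain convolutional stack (weights plus biases per layer) and could serve as the lightweight-penalty term of such a fitness function; the channel sequences are illustrative, not architectures from the paper.

```python
def conv_params(in_ch, out_ch, kernel=3):
    # weights (out * in * k * k) plus one bias per output channel
    return out_ch * (in_ch * kernel * kernel + 1)

def cnn_param_count(channels, kernel=3):
    """Parameter count of a plain conv stack described by its channel
    sequence, e.g. [3, 16, 32] = RGB input -> 16 -> 32 feature maps."""
    return sum(conv_params(a, b, kernel)
               for a, b in zip(channels, channels[1:]))

total = cnn_param_count([3, 16, 32])   # 448 + 4640 = 5088 parameters
```

A fitness function could then combine validation accuracy with, say, a penalty proportional to `cnn_param_count` to steer the population toward compact models.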
Fund: Supported by the Programme of Introducing Talents of Discipline to Universities (No. B06012).
Abstract: A switched reluctance machine (SRM) drive is a time-varying, strongly nonlinear system, so high performance control can no longer be achieved using linear techniques. This paper describes back-propagation (BP) neural network-based proportional-integral-derivative (PID) speed control of the SRM. It is the interest of this paper to explore the utilization of prior empirical knowledge as guidance in initializing and training the neural networks, with the purpose of making the networks less sensitive to the initial weights. Two modified algorithms are presented, and simulation experiments show some interesting findings about their control effects and their corresponding sensitivity to the initial weights of the networks.
Fund: Supported by the Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R343), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, and by the Deanship of Scientific Research at Northern Border University, Arar, Kingdom of Saudi Arabia, through project number "NBU-FFR-2024-1092-02".
Abstract: Phishing attacks present a persistent and evolving threat in the cybersecurity landscape, necessitating the development of more sophisticated detection methods. Traditional machine learning approaches to phishing detection have relied heavily on feature engineering and have often fallen short in adapting to the dynamically changing patterns of phishing Uniform Resource Locators (URLs). Addressing these challenges, we introduce a framework that integrates the sequential data processing strengths of a Recurrent Neural Network (RNN) with the hyperparameter optimization prowess of the Whale Optimization Algorithm (WOA). Our model capitalizes on an extensive Kaggle dataset featuring over 11,000 URLs, each delineated by 30 attributes. The WOA's hyperparameter optimization enhances the RNN's performance, as evidenced by a meticulous validation process. The results, encapsulated in precision, recall, and F1-score metrics, surpass baseline models, achieving an overall accuracy of 92%. This study not only demonstrates the RNN's proficiency in learning complex patterns but also underscores the WOA's effectiveness in refining machine learning models for the critical task of phishing detection.
Abstract: As one of the most popular learning algorithms for feedforward neural networks, the classic BP algorithm has many shortages. To overcome some of them, a modified learning algorithm is proposed in this article. Simulation results illustrate that the modified algorithm is more effective and practicable.
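The abstract does not spell out the modification; one of the most common fixes for plain BP's slow convergence is adding a momentum term to the weight update, sketched here on a single linear neuron for illustration.

```python
def train_neuron(data, lr=0.02, momentum=0.8, epochs=200):
    """Fit y ≈ w*x by gradient descent with a momentum term -- one common
    modification of plain BP (illustrative; not necessarily the paper's
    specific modification)."""
    w, v = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            grad = (w * x - y) * x        # d/dw of the loss 0.5*(w*x - y)**2
            v = momentum * v - lr * grad  # velocity accumulates past gradients
            w += v
    return w

w = train_neuron([(1.0, 2.0), (2.0, 4.0), (-1.0, -2.0)])  # targets follow y = 2x
```

The velocity term lets consecutive gradients in the same direction reinforce each other, which is what speeds up convergence over plain per-sample updates.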
Fund: Supported by the Shaanxi Province Key Research and Development Project (No. 2021GY-280), the Shaanxi Province Natural Science Basic Research Program Project (No. 2021JM-459), the National Natural Science Foundation of China (Nos. 61834005, 61772417, 61802304, 61602377, 61634004), and the Shaanxi Province International Science and Technology Cooperation Project (No. 2018KW-006).
Abstract: A micro-expression lasts for a very short time and its intensity is very subtle. Aiming at the problem of its low recognition rate, this paper proposes a new micro-expression recognition algorithm based on a three-dimensional convolutional neural network (3D-CNN), which can extract two-dimensional features in the spatial domain and one-dimensional features in the time domain simultaneously. The network structure design is based on the deep learning framework Keras, and the dropout method and batch normalization (BN) algorithm are effectively combined with the three-dimensional visual geometry group block (3D-VGG-Block) to reduce the risk of overfitting while improving training speed. Aiming at the problem of the lack of samples in the dataset, two methods, image flipping and small-amplitude flipping, are used for data amplification. Finally, the recognition rate on the dataset reaches 69.11%. Compared with the current international average micro-expression recognition rate of about 67%, the proposed algorithm has an obvious advantage in recognition rate.