In recent years, with the development of educational informatization, the network teaching mode has become an indispensable auxiliary teaching method. For teachers, the network learning space represents not only a change in teaching methods but also a deeper change in educational concepts and thinking. Using the online learning space to optimize classroom teaching has become a way to improve classroom efficiency and teaching quality. Moreover, the integration of education and information technology in China is growing ever closer, and the means and methods of learning through network technology are becoming more and more mature. Supported by information technology and the network environment, the network learning space provides teachers and students with the technologies and services they need to meet teaching demands. Rational application of network technology can promote the reform of school teaching and the deep integration of information technology with teaching.
Structure learning of Bayesian networks is a well-researched but computationally hard task. This paper proposes an improved algorithm based on unconstrained optimization and ant colony optimization (U-ACO-B) to overcome the drawbacks of the ant colony optimization algorithm ACO-B. The algorithm first solves an unconstrained optimization problem to obtain an undirected skeleton, and then uses ant colony optimization to orient the edges, returning the final structure. In the experimental part of the paper, we compare the performance of the proposed algorithm with the ACO-B algorithm. The results show that our method is effective and converges considerably faster than ACO-B.
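The two-phase idea above, first recover an undirected skeleton and then orient its edges, can be sketched as follows. This is a hypothetical toy with a mocked independence oracle and a fixed-order orientation rule, not the paper's U-ACO-B implementation; `learn_skeleton`, `orient_edges`, and the ordering are illustrative assumptions.

```python
from itertools import combinations

def learn_skeleton(variables, independent):
    """Phase 1: keep an undirected edge only between dependent pairs."""
    return {frozenset(p) for p in combinations(variables, 2)
            if not independent(*p)}

def orient_edges(skeleton, order):
    """Phase 2 stand-in: orient each edge from the earlier variable in a
    fixed ordering (a real method would score orientations, e.g. with
    ant colony search)."""
    rank = {v: i for i, v in enumerate(order)}
    return {tuple(sorted(e, key=rank.get)) for e in skeleton}

# Toy oracle: A and C are independent, all other pairs are dependent.
indep = lambda x, y: {x, y} == {"A", "C"}
skel = learn_skeleton(["A", "B", "C"], indep)
dag = orient_edges(skel, ["A", "B", "C"])
```

Separating the expensive skeleton search from edge orientation is what lets the paper attack each phase with a different tool.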
Motivated by the critical role of endpoint quality prediction for basic oxygen furnaces (BOFs) in steelmaking, and by the latest results in computational intelligence (CI), this paper develops a novel memetic algorithm (MA) for neural network (NN) learning that integrates extremal optimization (EO) with Levenberg-Marquardt (LM) gradient search, and applies it to BOF endpoint quality prediction. The analysis shows that the proposed EO-LM algorithm can provide superior generalization, computational efficiency, and avoidance of local minima compared with traditional NN learning methods. Experimental results with production-scale BOF data show that the proposed method effectively improves the NN model for BOF endpoint quality prediction.
Under the bounded rationality assumption, a principal rarely offers an agent an optimal contract; learning from others is one way to improve such a contract. This paper studies the efficiency of social network learning (SNL) in the principal-agent framework. We first introduce the Cobb-Douglas production function into the classic Holmstrom and Milgrom (1987) model with a constant relative risk-averse agent and derive the theoretically optimal contract. Algorithms are then designed to model the SNL process based on profit gaps between contracts in a network of principals. Considering the uncertainty of the agent's labor output, we find that the principals can reach a consensus that tends to result in overcompensation relative to the optimal contract. The study then examines how network attributes and model parameters affect learning efficiency and posits several summative hypotheses. The simulation results validate these hypotheses, and we discuss the economic implications of the observed changes in SNL efficiency.
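The Cobb-Douglas production function introduced above has a simple closed form; a minimal sketch follows. The `tfp` and `alpha` parameterisation is an illustrative assumption, not the paper's calibration.

```python
def cobb_douglas(effort, capital, tfp=1.0, alpha=0.5):
    """Output as a Cobb-Douglas combination of the agent's effort and
    the principal's capital; alpha is the effort elasticity."""
    return tfp * (effort ** alpha) * (capital ** (1.0 - alpha))

# With equal elasticities, output is the geometric mean scaled by tfp:
q = cobb_douglas(effort=4.0, capital=9.0)  # 2.0 * 3.0 = 6.0
```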
Landfill leaks pose a serious threat to environmental health, risking contamination of both groundwater and soil resources. Accurate investigation of these sites is essential for implementing effective prevention and control measures. The self-potential (SP) method stands out for its sensitivity to contamination plumes, offering a way to monitor and detect the movement and seepage of subsurface pollutants. However, traditional SP inversion techniques rely heavily on precise subsurface resistivity information. In this study, we propose an Attention U-Net deep learning network for rapid SP inversion. By incorporating an attention mechanism, the network effectively learns the relationship between array-style SP data and the location and extent of subsurface contamination sources. We designed a synthetic landfill model with a heterogeneous resistivity structure to assess the performance of the Attention U-Net, and conducted further validation on a laboratory model to assess its practical applicability. The results demonstrate that the algorithm does not depend solely on resistivity information and can effectively locate the source distribution even in models with intricate subsurface structures. Our work provides a promising tool for SP data processing, enhancing the applicability of the method in near-subsurface environmental monitoring.
Blockchain platforms, with the anonymity, decentralization, and transparency of their transactions, face abnormal activities such as money laundering, phishing scams, and fraud, which pose a serious threat to account asset security. To address these security risks, this paper proposes a hybrid neural network detection method (HNND) that learns multiple types of account features and enhances the fusion of information among them to effectively detect abnormal transaction behavior on the blockchain. In HNND, the Temporal Transaction Graph Attention Network (T2GAT) is first designed to learn a biased aggregation representation of multi-attribute transactions among nodes, capturing key temporal information from node neighborhood transactions. A Graph Convolutional Network (GCN) is then adopted to capture abstract structural features of the transaction network. Further, a Stacked Denoising Autoencoder (SDA) is developed to adaptively fuse the features from these modules. The SDA also improves the robustness and generalization ability of the node representation, leading to higher binary classification accuracy in detecting abnormal behavior of blockchain accounts. Evaluations on a real-world abnormal transaction dataset demonstrate clear advantages of the proposed HNND method over the compared methods.
Grating-based X-ray phase-contrast imaging enhances the contrast of imaged objects, particularly soft tissues. However, the radiation dose in computed tomography (CT) is generally excessive owing to the complex collection scheme. Sparse-view CT collection reduces the radiation dose, but at the cost of reduced resolution and reconstruction artifacts, particularly with analytical reconstruction methods. Recently, deep learning has been employed in sparse-view CT reconstruction and achieved state-of-the-art results. Nevertheless, its low generalization performance and requirement for abundant training datasets have hindered the practical application of deep learning in phase-contrast CT. In this study, a CT model was used to generate a substantial number of simulated training datasets, circumventing the need for experimental datasets. By training a network with simulated datasets, the proposed method achieves high generalization performance in both attenuation-based CT and phase-contrast CT despite the lack of experimental data. In experiments using only half of the CT data, the proposed method obtained image quality comparable to that of the filtered back-projection algorithm with full-view projection. The method simultaneously addresses two challenges in phase-contrast three-dimensional imaging, the lack of experimental datasets and the high exposure dose, through model-driven deep learning, significantly accelerating the practical application of phase-contrast CT.
Wireless Sensor Networks (WSNs) have emerged as crucial tools for real-time environmental monitoring through distributed sensor nodes (SNs). However, the operational lifespan of a WSN is significantly constrained by the limited energy resources of its SNs. Current energy efficiency strategies, such as clustering, multi-hop routing, and data aggregation, face challenges including uneven energy depletion, high computational demands, and suboptimal cluster head (CH) selection. To address these limitations, this paper proposes a hybrid methodology that optimizes energy consumption (EC) while maintaining network performance. The proposed approach integrates the Low Energy Adaptive Clustering Hierarchy with Deterministic selection (LEACH-D) protocol with an Artificial Neural Network (ANN) trained using the Bayesian Regularization Algorithm (BRA). LEACH-D improves upon conventional LEACH by ensuring more uniform energy usage across SNs, mitigating the inefficiencies of random CH selection. The ANN further enhances CH selection and routing, effectively reducing data transmission overhead and idle listening. Simulation results reveal that the LEACH-D-ANN model significantly reduces EC and extends network lifespan compared to existing protocols. This framework offers a promising solution to the energy efficiency challenges in WSNs, paving the way for more sustainable and reliable deployments.
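For contrast with LEACH-D's deterministic rotation, the stochastic CH election of conventional LEACH follows a standard threshold formula; a sketch is below. The formula is the classic LEACH election threshold from the original protocol, not something defined in this paper, and the parameter values are illustrative.

```python
import random

def leach_threshold(p, r):
    """Classic LEACH election threshold T(n) for a node that has not yet
    served as cluster head in the current epoch: p is the desired CH
    fraction and r the round number."""
    return p / (1.0 - p * (r % round(1.0 / p)))

def elect_cluster_heads(node_ids, p, r, rng):
    """Each eligible node independently becomes CH with probability T(n);
    this per-round randomness is the source of the uneven energy use
    that LEACH-D's deterministic rotation avoids."""
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng.random() < t]

heads = elect_cluster_heads(range(100), p=0.05, r=0, rng=random.Random(0))
```

Note how the threshold rises to 1.0 at the end of an epoch, forcing every node that has not yet served to become a CH.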
Learning Bayesian network structure is one of the most exciting challenges in machine learning. Discovering a correct skeleton of a directed acyclic graph (DAG) is the foundation of dependency analysis algorithms for this problem. Given the unreliability of high-order conditional independence (CI) tests, the key to an efficient dependency analysis algorithm is to use as few CI tests as possible and to keep conditioning sets as small as possible. Based on these observations and inspired by the PC algorithm, we present an algorithm, named fast and efficient PC (FEPC), for learning the adjacent neighbourhood of every variable. FEPC carries out the CI tests in three kinds of orders, which reduces high-order CI tests significantly. Experimental results show that, compared with current algorithm proposals, FEPC achieves better accuracy with fewer CI tests and smaller conditioning sets. The highest reduction in the number of CI tests is 83.3% for FEPC compared with the PC algorithm.
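The pressure to keep conditioning sets small is combinatorial: the number of candidate CI tests per variable pair grows rapidly with the allowed conditioning-set size. A back-of-the-envelope sketch of that growth is below; the bound is a generic worst case, not FEPC's actual test count.

```python
from math import comb

def ci_tests_per_pair(n_vars, max_order):
    """Worst-case number of CI tests for one variable pair, with
    conditioning sets of size 0..max_order drawn from the remaining
    n_vars - 2 variables."""
    return sum(comb(n_vars - 2, k) for k in range(max_order + 1))

low = ci_tests_per_pair(12, 1)    # orders 0 and 1 only
high = ci_tests_per_pair(12, 3)   # up to order 3
```

With 12 variables, raising the maximum order from 1 to 3 multiplies the per-pair budget sixteen-fold, which is why ordering the tests to avoid high orders pays off.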
Power flow adjustment is a sequential decision problem: the operator makes decisions to ensure that the power flow meets the system's operational constraints, thereby obtaining a typical operating-mode power flow. However, this decision-making method relies heavily on human experience, which is inefficient when the system is complex. In addition, the results given by the current evaluation system are difficult to use directly to guide intelligent power flow adjustment. To improve the efficiency and intelligence of power flow adjustment, this paper proposes a method based on deep reinforcement learning. Combining deep reinforcement learning theory with traditional power system operating-mode analysis, the concept of region mapping is proposed to describe the adjustment process and to analyze power flow calculation and manual adjustment. Considering the characteristics of power flow adjustment, a Markov decision process model suited to the task is constructed. On this basis, a double Q-network learning method for power flow adjustment is proposed. The method adjusts the power flow along a set adjustment route, making power flow adjustment more intelligent. The method is tested on the China Electric Power Research Institute (CEPRI) test system.
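The double-Q idea, decoupling action selection from action evaluation across two value estimators, can be sketched in tabular form. This is a generic double Q-learning step; the states, actions, and parameters are illustrative, not the paper's power-flow formulation.

```python
def double_q_update(qa, qb, s, a, reward, s2, update_a, alpha=0.5, gamma=0.9):
    """One tabular double-Q step: the updated table selects the greedy
    next action, while the other table supplies its value estimate,
    reducing the maximisation bias of plain Q-learning."""
    sel, ev = (qa, qb) if update_a else (qb, qa)
    best = max(sel[s2], key=sel[s2].get)      # argmax from the selector
    target = reward + gamma * ev[s2][best]    # value from the evaluator
    sel[s][a] += alpha * (target - sel[s][a])

qa = {"s": {"adjust": 0.0}, "s2": {"adjust": 1.0, "stop": 0.0}}
qb = {"s": {"adjust": 0.0}, "s2": {"adjust": 0.0, "stop": 2.0}}
double_q_update(qa, qb, "s", "adjust", reward=1.0, s2="s2", update_a=True)
```

Here `qa` picks "adjust" as the greedy next action, but its value comes from `qb`, so an overestimate in one table does not feed back into itself.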
Homogeneity analysis of a multi-airport system can provide important decision-making support for route layout and cooperative operation. Existing research seldom analyzes the homogeneity of a multi-airport system from the perspective of the route network, and the attribute information of airport nodes is not appropriately integrated into the airport network. To solve this problem, a homogeneity analysis method based on airport attributed network representation learning is proposed. First, the route network of a multi-airport system with attribute information is constructed: if there are flights between two airports, an edge is added between them, and regional attribute information is attached to each airport node. Second, the airport attributes and the airport network are each embedded into a unified representation vector space by the network representation learning method, yielding an airport vector that integrates both the airport attributes and the network characteristics. By computing the similarity between airport vectors, the degree of homogeneity between airports, and of the multi-airport system as a whole, can be conveniently calculated. Experimental results on the Beijing-Tianjin-Hebei multi-airport system show that, compared with existing algorithms, the proposed method produces results more consistent with the current situation of that system.
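Once attributes and network features are embedded into one vector per airport, the pairwise homogeneity score reduces to a vector similarity. A minimal sketch using cosine similarity follows; the abstract says "similarity" without naming the measure, so cosine is an assumption here.

```python
def cosine_similarity(u, v):
    """Similarity between two airport representation vectors: 1.0 for
    identical directions, 0.0 for orthogonal ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

# Two airports with proportional embeddings are maximally homogeneous:
same = cosine_similarity([1.0, 2.0], [2.0, 4.0])
diff = cosine_similarity([1.0, 0.0], [0.0, 1.0])
```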
With an increasing number of services connected to the internet, including cloud computing and Internet of Things (IoT) systems, the prevention of cyberattacks has become more challenging due to the high dimensionality of network traffic data and access points. Recently, researchers have applied deep learning (DL) algorithms to define intrusion features by training on empirical data and learning the anomaly patterns of attacks. However, due to the high dynamics and imbalanced nature of the data, existing DL classifiers are not fully effective at distinguishing between abnormal and normal connection behavior in modern networks. It is therefore important to design a self-adaptive model for an intrusion detection system (IDS) to improve attack detection. Consequently, this paper proposes a novel hybrid weighted deep belief network (HW-DBN) algorithm for building an efficient and reliable IDS model (DeepIoT.IDS) that detects existing and novel cyberattacks. The HW-DBN algorithm integrates an improved Gaussian-Bernoulli restricted Boltzmann machine (Deep GB-RBM) feature learning operator with a weighted deep neural network (WDNN) classifier. The CICIDS2017 dataset is selected to evaluate the DeepIoT.IDS model because it contains multiple attack types, complex data patterns, noise, and imbalanced classes. We compared the performance of the DeepIoT.IDS model with three recent models. The results show that DeepIoT.IDS outperforms the other three, achieving higher detection accuracies of 99.38% and 99.99% for web attack and bot attack scenarios, respectively. Furthermore, it can detect low-frequency attacks that the other models miss.
To simplify the operation of the modified fuzzy adaptive learning control network (FALCON) in some engineering applications, a sigmoid nonlinear function is employed as a substitute for the traditional Gaussian membership function. To make the modified FALCON learn more efficiently and stably, a simulated annealing (SA) learning coefficient is introduced into the learning algorithm. The basic concepts and main advantages of FALCON are first briefly reviewed. The topological structure and node operations are then illustrated, the gradient-descent learning algorithm with the SA learning coefficient is derived, and the distinctions between the archetype and the modification are analyzed. Finally, the significance and value of the modified FALCON are validated by its application to probability prediction of the anode effect in aluminium electrolysis cells.
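The two modifications, a sigmoid membership function in place of the Gaussian bell and an annealed learning coefficient, can be sketched as follows. The parameter names `c` and `b` and the geometric cooling schedule are illustrative assumptions, not the paper's exact formulation.

```python
from math import exp

def sigmoid_membership(x, c, b):
    """Sigmoid membership degree: c is the crossover point (degree 0.5)
    and b controls the slope; cheaper to evaluate than the Gaussian
    exp(-(x - c)**2 / sigma**2)."""
    return 1.0 / (1.0 + exp(-b * (x - c)))

def annealed_coefficient(eta0, k, cooling=0.95):
    """SA-style learning coefficient: the step size decays geometrically
    with iteration k, large early (exploration) and small late
    (stable convergence)."""
    return eta0 * cooling ** k

m = sigmoid_membership(2.0, c=2.0, b=4.0)   # at the crossover point
```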
Early detection of Covid-19 is essential given its high infection rate, affecting tens of millions of people, and its mortality rate of about 7%. For that purpose, a model of several stages was developed. The first stage optimizes the images using dynamic adaptive histogram equalization, performs semantic segmentation using DeepLabv3Plus, and then augments the data by horizontal flipping, rotation, and vertical flipping. The second stage builds a custom convolutional neural network model using several ImageNet pre-trained models. Finally, the model compares the pre-trained data with the new output while repeatedly trimming the best-performing models to reduce complexity and improve memory efficiency. Several experiments were conducted with different techniques and parameters. The proposed model achieved an average accuracy of 99.6% and an area under the curve of 0.996 for Covid-19 detection. This paper discusses how to train a customized convolutional neural network with various parameters on a set of chest X-rays to reach an accuracy of 99.6%.
Frequent counting is a commonly required operation in machine learning algorithms. A typical machine learning task, learning the structure of a Bayesian network (BN) based on metric scoring, is introduced as an example that relies heavily on frequent counting. A fast calculation method for frequent counting, enhanced with two cache layers, is then presented for learning BNs. The main contribution of our approach is to eliminate comparison operations in frequent counting by introducing a multi-radix number system calculation. Both mathematical analysis and an empirical comparison between our method and the state-of-the-art solution are conducted. The results show that our method is clearly superior to the state-of-the-art solution for the problem of learning BNs.
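The comparison-free counting idea rests on mapping each joint configuration of discrete variables to a single integer in a mixed-radix number system; a sketch follows. The two cache layers of the paper are omitted here, and the function names are illustrative.

```python
from math import prod

def mixed_radix_index(config, cardinalities):
    """Encode a joint configuration (one value per variable) as a single
    integer; distinct configurations get distinct indices, so counting
    needs one array increment instead of configuration comparisons."""
    idx = 0
    for value, card in zip(config, cardinalities):
        idx = idx * card + value
    return idx

def frequency_table(samples, cardinalities):
    """Count every joint configuration in one pass over the data."""
    counts = [0] * prod(cardinalities)
    for s in samples:
        counts[mixed_radix_index(s, cardinalities)] += 1
    return counts

# A binary and a ternary variable: configuration (1, 2) maps to 1*3 + 2 = 5.
counts = frequency_table([(0, 0), (1, 2), (1, 2)], (2, 3))
```

Because the index is computed arithmetically, scoring metrics that need many marginal counts can read them straight out of the array.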
The Wavelet-Domain Projection Pursuit Learning Network (WDPPLN) is proposed for restoring degraded images. The new network combines the advantages of both projection pursuit and wavelet shrinkage. Restoring an image is very difficult when little a priori knowledge about the multisource degradation factors is available. WDPPLN resolves this problem by processing wavelet coefficients and scale coefficients separately. The parameters in WDPPLN that simulate the degradation factors are estimated via WDPPLN training using the scale coefficients. WDPPLN also uses the soft-threshold wavelet shrinkage technique to suppress noise in the three high-frequency subbands. The new method is compared with traditional methods and with the Projection Pursuit Learning Network (PPLN). Experimental results demonstrate that it is an effective method for unsupervised restoration of degraded images.
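The soft-threshold operator used for wavelet shrinkage has a simple closed form; a minimal sketch is below, with an illustrative threshold value.

```python
def soft_threshold(w, t):
    """Shrink a wavelet coefficient toward zero by t, zeroing any
    coefficient whose magnitude falls below t; this is how noise in
    the high-frequency subbands is suppressed."""
    if w > t:
        return w - t
    if w < -t:
        return w + t
    return 0.0

denoised = [soft_threshold(w, 1.0) for w in [3.0, 0.4, -0.9, -2.5]]
```

Small coefficients, which are dominated by noise, vanish entirely, while large ones, which carry image structure, are only slightly attenuated.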
This paper proposes a new self-learning data-driven methodology that can develop the failure criteria of unknown anisotropic ductile materials from a minimal number of experimental tests. Establishing failure criteria for anisotropic ductile materials normally requires time-consuming tests and manual data evaluation; the proposed method overcomes these practical challenges. The methodology combines four ideas: 1) a deep learning neural network (DLNN)-based material constitutive model, 2) self-learning inverse finite element (SELIFE) simulation, 3) algorithmic identification of failure points from the self-learned stress-strain curves, and 4) derivation of the failure criteria through symbolic regression with genetic programming. The stress update and the algorithmic tangent operator are formulated in terms of the DLNN parameters for nonlinear finite element analysis. The SELIFE simulation algorithm then gradually makes the DLNN model learn highly complex multi-axial stress-strain relationships, guided by experimental boundary measurements. Following failure point identification, self-learning data-driven failure criteria are developed with the help of a reliable symbolic regression algorithm. The methodology and the resulting failure criteria were verified by comparison with reference failure criteria and by simulations with different material orientations.
In the electricity market, fluctuations in real-time prices are unstable, and changes in short-term load are determined by many factors. By studying the timing of charging and discharging, and the economic benefits of energy storage participating in the power market, this paper treats energy storage scheduling as one factor affecting the short-term power load, alongside time-of-use price, holidays, and temperature. A deep learning network is used to predict the short-term load: a convolutional neural network (CNN) extracts the features, and a long short-term memory (LSTM) network learns the temporal characteristics of the load values, which effectively improves prediction accuracy. Taking the load data of a certain region as an example, the CNN-LSTM prediction model is compared with a single LSTM model. The experimental results show that the CNN-LSTM deep learning network, with energy storage participating in dispatch, achieves high prediction accuracy for short-term power load forecasting.
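The CNN stage's core operation on a load series is a sliding-window convolution; a minimal sketch of that step alone is below. The kernel values are illustrative, and the full CNN-LSTM pipeline is not reproduced here.

```python
def conv1d(series, kernel):
    """Valid-mode 1D convolution: each output is a weighted sum over a
    window of the load series, the local feature the LSTM then consumes."""
    k = len(kernel)
    return [sum(series[i + j] * kernel[j] for j in range(k))
            for i in range(len(series) - k + 1)]

# A two-tap averaging kernel smooths consecutive load readings:
features = conv1d([10.0, 12.0, 11.0, 15.0], [0.5, 0.5])
```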
Many network presentation learning algorithms (NPLA) developed in recent years originate from the process of random walks between nodes. Although these algorithms obtain good embeddings, they also have limitations; for instance, only the structural information of nodes is considered when they are constructed. To address this issue, a label and community information-based network presentation learning algorithm (LC-NPLA) is proposed in this paper. First, the first-order neighbors of nodes are reconstructed using the community information and the label information of nodes. Next, the random walk strategy is improved by integrating the degree information and label information of nodes. The node sequences obtained from random walk sampling are then transformed into node representation vectors by the Skip-Gram model. Finally, experimental results on ten real-world networks demonstrate that, compared with three benchmark algorithms, the proposed algorithm has clear advantages in label classification, network reconstruction, and link prediction tasks.
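The improved walk strategy amounts to drawing the next node in proportion to a per-neighbor weight instead of uniformly; a simplified roulette-wheel sketch follows. The weight function combining degree and label information is abstracted into a plain per-node weight, an assumption of this sketch rather than the paper's definition.

```python
import random

def biased_walk(adj, weight, start, length, rng):
    """Random walk where the next node is chosen with probability
    proportional to weight[n] (e.g. degree, or a shared-label bonus),
    a simplified stand-in for the improved walk strategy."""
    walk = [start]
    for _ in range(length - 1):
        nbrs = adj[walk[-1]]
        if not nbrs:
            break
        total = sum(weight[n] for n in nbrs)
        r = rng.random() * total
        for n in nbrs:
            r -= weight[n]
            if r <= 0:
                walk.append(n)
                break
    return walk

# Node 1 has weight 0, so the walk from 0 always visits node 2 instead.
adj = {0: [1, 2], 1: [0], 2: [0]}
weight = {0: 1.0, 1: 0.0, 2: 1.0}
walk = biased_walk(adj, weight, start=0, length=5, rng=random.Random(1))
```

The resulting node sequences are what the Skip-Gram model consumes to produce the representation vectors.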
There are various heterogeneous networks for terminals to deliver a better quality of service, and signal system recognition and classification contribute much to that process. However, at low signal-to-noise ratio (SNR) or under time-varying multipath channels, the majority of existing signal recognition algorithms face limitations. We present a robust signal recognition method based on the original and latest updated versions of the extreme learning machine (ELM) to help users switch between networks. The ELM uses signal characteristics to distinguish systems. The strength of this algorithm lies in the random choice of hidden nodes and in the fact that it determines the output weights analytically, which results in lower complexity. Theoretically, the algorithm offers good generalization performance at an extremely fast learning speed. We implement GSM/WCDMA/LTE models in the Matlab environment using the Simulink tools. The simulations reveal that the signals can be recognized with 95% accuracy in a low-SNR (0 dB) environment over a time-varying multipath Rayleigh fading channel.
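The core ELM recipe, random hidden weights plus an analytic least-squares solve for the output weights, can be sketched in a few lines of NumPy. This is a generic regression toy, not the GSM/WCDMA/LTE classifier; the hidden-layer size, tanh activation, and target function are all assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, hidden=50):
    """Extreme learning machine: hidden weights are drawn at random and
    never trained; only the output weights are solved analytically via
    the pseudoinverse, which is why learning is so fast."""
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)            # random feature map
    beta = np.linalg.pinv(H) @ y      # least-squares output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Fit a smooth target from 200 random samples:
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
model = elm_train(X, y)
pred = elm_predict(model, X)
```

There is no iterative backpropagation anywhere: a single pseudoinverse replaces the whole training loop, which is the source of the low complexity the abstract mentions.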
文摘In recent years, with the development of educational informatization, the network teaching mode has become an indispensable auxiliary teaching method in teaching. For teachers' teaching work, the network learning space is not only a change in teaching methods, but also a deeper change in teachers' educational concepts and thoughts. Using online learning space to optimize classroom teaching has also become a help to improve classroom teaching efficiency and teaching quality. However, the integration of education and information technology in our country is getting closer and closer, and the means and methods of learning by using network technology are becoming more and more perfect. With the help of information technology and network environment, the network learning space provides teachers and students with relevant technologies and services they need to meet the teaching needs. Rational network application technology can promote the reform of school teaching and the in-depth integration of information technology and teaching.
基金supported by the National Natural Science Foundation of China (60974082,11171094)the Fundamental Research Funds for the Central Universities (K50510700004)+1 种基金the Foundation and Advanced Technology Research Program of Henan Province (102300410264)the Basic Research Program of the Education Department of Henan Province (2010A110010)
文摘Structure learning of Bayesian networks is a wellresearched but computationally hard task.For learning Bayesian networks,this paper proposes an improved algorithm based on unconstrained optimization and ant colony optimization(U-ACO-B) to solve the drawbacks of the ant colony optimization(ACO-B).In this algorithm,firstly,an unconstrained optimization problem is solved to obtain an undirected skeleton,and then the ACO algorithm is used to orientate the edges,thus returning the final structure.In the experimental part of the paper,we compare the performance of the proposed algorithm with ACO-B algorithm.The experimental results show that our method is effective and greatly enhance convergence speed than ACO-B algorithm.
基金Project (No. 60721062) supported by the National Creative Research Groups Science Foundation of China
文摘Based on the critical position of the endpoint quality prediction for basic oxygen furnaces (BOFs) in steelmaking, and the latest results in computational intelligence (C1), this paper deals with the development of a novel memetic algorithm (MA) for neural network (NN) lcarnmg. Included in this is the integration of extremal optimization (EO) and Levenberg-Marquardt (LM) pradicnt search, and its application in BOF endpoint quality prediction. The fundamental analysis reveals that the proposed EO-LM algorithm may provide superior performance in generalization, computation efficiency, and avoid local minima, compared to traditional NN learning methods. Experimental results with production-scale BOF data show that the proposed method can effectively improve the NN model for BOF endpoint quality prediction.
基金the support of the National Natural Science Foundation of China(Grant number:72371202)the Fundamental Research Funds for the Central Universities(Grant number:JBK2207051).
文摘Under the bounded rationality assumption,a principal rarely provides an optimal contract to an agent.Learning from others is one way to improve such a contract.This paper studies the efficiency of social network learning(SNL)in the principal–agent framework.We first introduce the Cobb-Douglas production function into the classic Holmstrom and Milgrom(1987)model with a constant relative risk-averse agent and work out the theoretically optimal contract.Algorithms are then designed to model the SNL process based on profit gaps between contracts in a network of principals.Considering the uncertainty of the agent's labor output,we find that the principals can reach a consensus that tends to result in overcompensation compared to the optimal contract.Then,this study examines how network attributes and model parameters impact learning efficiency and posits several summative hypotheses.The simulation results validate these hypotheses,and we discuss the relevant economic implications of the observed changes in SNL efficiency.
Funding: Projects (42174170, 41874145, 72088101) supported by the National Natural Science Foundation of China; Project (CX20200228) supported by the Hunan Provincial Innovation Foundation for Postgraduate, China.
Abstract: Landfill leaks pose a serious threat to environmental health, risking contamination of both groundwater and soil resources. Accurate investigation of these sites is essential for implementing effective prevention and control measures. The self-potential (SP) method stands out for its sensitivity to contamination plumes, offering a solution for monitoring and detecting the movement and seepage of subsurface pollutants. However, traditional SP inversion techniques rely heavily on precise subsurface resistivity information. In this study, we propose the Attention U-Net deep learning network for rapid SP inversion. By incorporating an attention mechanism, the algorithm effectively learns the relationship between array-style SP data and the location and extent of subsurface contamination sources. We designed a synthetic landfill model with a heterogeneous resistivity structure to assess the performance of the Attention U-Net network, and conducted further validation with a laboratory model to assess its practical applicability. The results demonstrate that the algorithm is not solely dependent on resistivity information and can effectively locate the source distribution, even in models with intricate subsurface structures. Our work provides a promising tool for SP data processing, enhancing the applicability of this method in near-surface environmental monitoring.
Abstract: Blockchain platforms, with their unique characteristics of anonymity, decentralization, and transaction transparency, face abnormal activities such as money laundering, phishing scams, and fraud, which pose a serious threat to account asset security. To address these potential security risks, this paper proposes a hybrid neural network detection method (HNND) that learns multiple types of account features and enhances the fusion of information among them to effectively detect abnormal transaction behaviors in the blockchain. In HNND, the Temporal Transaction Graph Attention Network (T2GAT) is first designed to learn a biased aggregation representation of multi-attribute transactions among nodes, which can capture key temporal information from node neighborhood transactions. Then, a Graph Convolutional Network (GCN) is adopted to capture abstract structural features of the transaction network. Further, a Stacked Denoising Autoencoder (SDA) is developed to achieve adaptive fusion of these features from the different modules. Moreover, the SDA enhances the robustness and generalization ability of the node representation, leading to higher binary classification accuracy in detecting abnormal behaviors of blockchain accounts. Evaluations on a real-world abnormal transaction dataset demonstrate clear advantages of the proposed HNND method over the compared methods.
基金supported by the National Natural Science Foundation of China(Nos.U2032148,U2032157,11775224)USTC Research Funds of the Double First-Class Initiative(No.YD2310002008)the National Key Research and Development Program of China(No.2017YFA0402904),the Youth Innovation Promotion Association,CAS(No.2020457)。
Abstract: Grating-based X-ray phase-contrast imaging enhances the contrast of imaged objects, particularly soft tissues. However, the radiation dose in computed tomography (CT) is generally excessive owing to the complex collection scheme. Sparse-view CT collection reduces the radiation dose, but at the cost of lower resolution and reconstruction artifacts, particularly with analytical reconstruction methods. Recently, deep learning has been employed in sparse-view CT reconstruction and has achieved state-of-the-art results. Nevertheless, its low generalization performance and requirement for abundant training datasets have hindered the practical application of deep learning in phase-contrast CT. In this study, a CT model was used to generate a substantial number of simulated training datasets, thereby circumventing the need for experimental datasets. By training a network with simulated datasets, the proposed method achieves high generalization performance in both attenuation-based CT and phase-contrast CT, despite the lack of sufficient experimental datasets. In experiments utilizing only half of the CT data, the proposed method obtained image quality comparable to that of the filtered back-projection algorithm with full-view projection. The proposed method simultaneously addresses two challenges in phase-contrast three-dimensional imaging, namely the lack of experimental datasets and the high exposure dose, through model-driven deep learning, and significantly accelerates the practical application of phase-contrast CT.
Abstract: Wireless Sensor Networks (WSNs) have emerged as crucial tools for real-time environmental monitoring through distributed sensor nodes (SNs). However, the operational lifespan of WSNs is significantly constrained by the limited energy resources of SNs. Current energy efficiency strategies, such as clustering, multi-hop routing, and data aggregation, face challenges including uneven energy depletion, high computational demands, and suboptimal cluster head (CH) selection. To address these limitations, this paper proposes a hybrid methodology that optimizes energy consumption (EC) while maintaining network performance. The proposed approach integrates the Low Energy Adaptive Clustering Hierarchy with Deterministic CH selection (LEACH-D) protocol with an Artificial Neural Network (ANN) and a Bayesian Regularization Algorithm (BRA). LEACH-D improves upon conventional LEACH by ensuring more uniform energy usage across SNs, mitigating the inefficiencies of random CH selection. The ANN further enhances the CH selection and routing processes, effectively reducing data transmission overhead and idle listening. Simulation results reveal that the LEACH-D-ANN model significantly reduces EC and extends the network's lifespan compared with existing protocols. This framework offers a promising solution to the energy efficiency challenges in WSNs, paving the way for more sustainable and reliable network deployments.
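Classical LEACH rotates the cluster-head role stochastically using a per-round threshold, which is the baseline that LEACH-D refines. The snippet below sketches only the standard LEACH threshold T(n) = p / (1 − p·(r mod 1/p)); it is not the paper's LEACH-D-ANN pipeline.

```python
# Standard LEACH cluster-head threshold for round r and CH fraction p.
# Nodes that already served as CH in the current epoch get T = 0, which
# rotates the role; each node becomes CH with probability T each round.
def leach_threshold(p, r, was_ch_this_epoch):
    if was_ch_this_epoch:
        return 0.0
    return p / (1.0 - p * (r % int(1.0 / p)))

# Example: with p = 0.1 the epoch is 10 rounds; the threshold grows
# from p at round 0 toward 1.0 at the last round of the epoch, so every
# node is guaranteed to serve as CH once per epoch.
print(leach_threshold(0.1, 0, False))
print(leach_threshold(0.1, 9, False))
```

The growing threshold is what spreads energy use across nodes; LEACH-D's deterministic selection and the ANN-guided routing in the paper target the residual unevenness this randomness leaves behind.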
Funding: Supported by the National Natural Science Foundation of China (61403290, 11301408, 11401454), the Foundation for Youths of Shaanxi Province (2014JQ1020), the Foundation of Baoji City (2013R7-3), and the Foundation of Baoji University of Arts and Sciences (ZK15081).
Abstract: Learning Bayesian network structure is one of the most exciting challenges in machine learning. Discovering a correct skeleton of a directed acyclic graph (DAG) is the foundation of dependency analysis algorithms for this problem. Considering the unreliability of high-order conditional independence (CI) tests, and to improve the efficiency of dependency analysis, the key is to use as few CI tests as possible and to keep the conditioning sets as small as possible. Based on these observations and inspired by the PC algorithm, we present an algorithm, named fast and efficient PC (FEPC), for learning the adjacent neighbourhood of every variable. FEPC carries out the CI tests in three kinds of orders, which reduces the number of high-order CI tests significantly. Compared with current algorithm proposals, the experimental results show that FEPC achieves better accuracy with fewer CI tests and smaller conditioning sets. The highest reduction in the number of CI tests is 83.3% for FEPC compared with the PC algorithm.
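The core of PC-style skeleton discovery is to delete edges using CI tests with conditioning sets of increasing order. The sketch below shows that generic loop with a pluggable `ci_test` oracle; it does not implement FEPC's three test orderings, which are the paper's contribution.

```python
from itertools import combinations

# PC-style skeleton search: start from a complete undirected graph and
# delete the edge X-Y whenever some conditioning set S, drawn from X's
# other neighbors, makes X and Y conditionally independent.
def pc_skeleton(nodes, ci_test, max_order=2):
    adj = {v: set(nodes) - {v} for v in nodes}
    for order in range(max_order + 1):       # small sets first
        for x in nodes:
            for y in sorted(adj[x]):
                others = adj[x] - {y}
                if len(others) < order:
                    continue
                for s in combinations(sorted(others), order):
                    if ci_test(x, y, set(s)):
                        adj[x].discard(y)
                        adj[y].discard(x)
                        break
    return adj

# Toy independence oracle for the chain A -> B -> C:
# A and C are independent given B, and only then.
def oracle(x, y, s):
    return {x, y} == {"A", "C"} and "B" in s

skel = pc_skeleton(["A", "B", "C"], oracle)
print(skel)
```

Testing low-order sets first is what keeps conditioning sets small: once an edge is removed by a cheap test, no higher-order test on it is ever run, which is exactly the cost FEPC attacks further with its test orderings.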
Abstract: Power flow adjustment is a sequential decision problem. The operator makes decisions to ensure that the power flow meets the system's operational constraints, thereby obtaining a typical operating-mode power flow. However, this decision-making relies heavily on human experience and is inefficient when the system is complex. In addition, the results given by the current evaluation system are difficult to use directly to guide intelligent power flow adjustment. To improve the efficiency and intelligence of power flow adjustment, this paper proposes an adjustment method based on deep reinforcement learning. Combining deep reinforcement learning theory with traditional power system operating-mode analysis, the concept of region mapping is proposed to describe the adjustment process and to analyze the processes of power flow calculation and manual adjustment. Considering the characteristics of power flow adjustment, a Markov decision process model suited to the task is constructed. On this basis, a double Q network learning method for power flow adjustment is proposed. This method can adjust the power flow along a set adjustment route, thereby improving the intelligence of power flow adjustment. The method is tested on the China Electric Power Research Institute (CEPRI) test system.
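The double Q network method builds on the tabular double Q-learning update, in which one value table selects the greedy action and the other evaluates it, reducing overestimation bias. The sketch below shows only that update rule; the mapping to power flow states and adjustment actions is the paper's contribution and is not reproduced here.

```python
import random

# One tabular double Q-learning update. With probability 0.5 the roles
# swap: the first table picks the greedy action at s', and the second
# table supplies its value for the bootstrapped target.
def double_q_update(q1, q2, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    if random.random() < 0.5:
        q1, q2 = q2, q1   # swap which table is updated
    a_star = max(actions, key=lambda act: q1[(s_next, act)])
    target = r + gamma * q2[(s_next, a_star)]
    q1[(s, a)] += alpha * (target - q1[(s, a)])

actions = [0, 1]
q1 = {(s, a): 0.0 for s in range(2) for a in actions}
q2 = {(s, a): 0.0 for s in range(2) for a in actions}
random.seed(0)
for _ in range(200):
    double_q_update(q1, q2, 0, 0, 1.0, 1, actions)   # reward 1 on (0,0)
print(q1[(0, 0)], q2[(0, 0)])
```

Both tables converge toward the target value of 1.0 for the rewarded state-action pair; the deep variant replaces the tables with neural networks over the power flow state.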
Funding: Supported by the Natural Science Foundation of Tianjin (No. 20JCQNJC00720) and the Fundamental Research Fund for the Central Universities (No. 3122021052).
Abstract: Homogeneity analysis of a multi-airport system can provide important decision-making support for route layout and cooperative operation. Existing research seldom analyzes the homogeneity of a multi-airport system from the perspective of route network analysis, and the attribute information of airport nodes is not appropriately integrated into the airport network. To solve this problem, a multi-airport system homogeneity analysis method based on attributed network representation learning is proposed. First, the route network of a multi-airport system with attribute information is constructed: if there are flights between two airports, an edge is added between them, and regional attribute information is added to each airport node. Second, the airport attributes and the airport network are each represented as vectors and embedded into a unified airport representation vector space by the network representation learning method, yielding an airport vector that integrates both the airport attributes and the airport network characteristics. The degree of homogeneity between airports, and of the multi-airport system as a whole, can then be computed from the similarity of the airport vectors. Experimental results on the Beijing-Tianjin-Hebei multi-airport system show that, compared with other existing algorithms, the homogeneity analysis method based on attributed network representation learning yields results more consistent with the current situation of that system.
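Once airports are embedded as vectors, homogeneity reduces to vector similarity. The snippet below shows only the cosine-similarity step, on made-up embeddings; the attributed network representation learning that produces the real vectors is not reproduced.

```python
import math

# Cosine similarity between two airport embedding vectors: a value
# near 1 indicates a high degree of homogeneity between the airports.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical 4-dimensional embeddings of three airports.
pek = [0.9, 0.1, 0.4, 0.2]
tsn = [0.8, 0.2, 0.5, 0.1]
shj = [0.1, 0.9, 0.1, 0.8]

# The first two airports point in similar directions in embedding
# space, so they are judged more homogeneous than the third.
print(cosine(pek, tsn) > cosine(pek, shj))
```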
Funding: This work was partially funded by the Industry Grant Scheme from Jaycorp Berhad in cooperation with UNITAR International University. The authors would like to thank INSFORNET, the Center for Advanced Computing Technology (C-ACT) at Universiti Teknikal Malaysia Melaka (UTeM), and the Center of Intelligent and Autonomous Systems (CIAS) at Universiti Tun Hussein Onn Malaysia (UTHM) for supporting this work.
Abstract: With an increasing number of services connected to the internet, including cloud computing and Internet of Things (IoT) systems, the prevention of cyberattacks has become more challenging due to the high dimensionality of network traffic data and access points. Recently, researchers have suggested deep learning (DL) algorithms to define intrusion features by training on empirical data and learning the anomaly patterns of attacks. However, due to the highly dynamic and imbalanced nature of the data, existing DL classifiers are not completely effective at distinguishing between abnormal and normal connection behavior in modern networks. It is therefore important to design a self-adaptive model for an intrusion detection system (IDS) to improve the detection of attacks. Consequently, this paper proposes a novel hybrid weighted deep belief network (HW-DBN) algorithm for building an efficient and reliable IDS (DeepIoT.IDS) model to detect existing and novel cyberattacks. The HW-DBN algorithm integrates an improved Gaussian–Bernoulli restricted Boltzmann machine (Deep GB-RBM) feature learning operator with a weighted deep neural network (WDNN) classifier. The CICIDS2017 dataset is selected to evaluate the DeepIoT.IDS model as it contains multiple types of attacks, complex data patterns, noise values, and imbalanced classes. We have compared the performance of the DeepIoT.IDS model with three recent models. The results show that the DeepIoT.IDS model outperforms the other three, achieving detection accuracies of 99.38% and 99.99% for the web attack and bot attack scenarios, respectively. Furthermore, it can detect low-frequency attacks that are undetectable by the other models.
Abstract: To simplify operation of the modified fuzzy adaptive learning control network (FALCON) in some engineering applications, a sigmoid nonlinear function is employed as a substitute for the traditional Gaussian membership function. To make the modified FALCON learn more efficiently and stably, a simulated annealing (SA) learning coefficient is introduced into the learning algorithm. First, the basic concepts and main advantages of FALCON are briefly reviewed. Subsequently, the topological structure and node operations are illustrated, the gradient-descent learning algorithm with the SA learning coefficient is derived, and the distinctions between the archetype and the modification are analyzed. Finally, the significance and value of the modified FALCON are validated by its application to probability prediction of the anode effect in aluminium electrolysis cells.
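A simulated-annealing learning coefficient can be pictured as a temperature-controlled step size in gradient descent: large early (exploration), small late (stability). The sketch below assumes an exponential cooling schedule η(t) = η0·exp(−t/τ) on a simple quadratic; the exact coefficient used in the modified FALCON may differ, so treat this as an assumption-laden illustration.

```python
import math

# Gradient descent on f(w) = (w - 3)^2 with a simulated-annealing
# learning coefficient: the step size cools as training proceeds.
def sa_gradient_descent(w0=0.0, eta0=0.4, tau=20.0, steps=60):
    w = w0
    for t in range(steps):
        eta = eta0 * math.exp(-t / tau)   # annealed learning coefficient
        grad = 2.0 * (w - 3.0)
        w -= eta * grad
    return w

print(sa_gradient_descent())   # approaches the minimizer w = 3
```

The cooling schedule trades off the two failure modes of a fixed coefficient: a large constant step oscillates around the minimum, while a small constant step is slow to leave the starting region.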
Funding: This work was supported by a National Research Foundation of Korea grant funded by the Korean Government (Ministry of Science and ICT) (NRF-2020R1A2B5B02002478). No additional external funding was received for this study.
Abstract: Early detection of the Covid-19 disease is essential due to its high rate of infection, affecting tens of millions of people, and its mortality rate of about 7%. For that purpose, a model of several stages was developed. The first stage optimizes the images using dynamic adaptive histogram equalization, performs semantic segmentation using DeepLabv3Plus, and then augments the data by flipping the images horizontally, rotating them, and flipping them vertically. The second stage builds a custom convolutional neural network model using several ImageNet pre-trained models. Finally, the model compares the pre-trained data to the new output, while repeatedly trimming the best-performing models to reduce complexity and improve memory efficiency. Several experiments were performed using different techniques and parameters. The proposed model achieved an average accuracy of 99.6% and an area under the curve of 0.996 in Covid-19 detection. This paper discusses how to train a customized intelligent convolutional neural network using various parameters on a set of chest X-rays with an accuracy of 99.6%.
Funding: Supported by the National Natural Science Foundation of China (No. 60970055).
Abstract: Frequent counting is a very commonly required operation in machine learning algorithms. A typical machine learning task, learning the structure of a Bayesian network (BN) based on metric scoring, is introduced as an example that relies heavily on frequent counting. A fast calculation method for frequent counting, enhanced with two cache layers, is then presented for learning BNs. The main contribution of our approach is to eliminate comparison operations for frequent counting by introducing a multi-radix number system calculation. Both mathematical analysis and an empirical comparison between our method and the state-of-the-art solution are conducted. The results show that our method is clearly superior to the state-of-the-art solution in solving the problem of learning BNs.
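The multi-radix idea replaces per-record comparisons with arithmetic: each joint configuration of discrete variables maps to one integer in a mixed-radix number system, so counting a record is a single array increment. A minimal sketch of that encoding (the paper's two cache layers are not shown), assuming each variable takes values 0..r−1:

```python
# Mixed-radix frequent counting: encode a configuration (x1, ..., xn)
# over cardinalities (r1, ..., rn) as a single integer index, so each
# record is counted with one array increment and no comparisons.
def make_encoder(radices):
    weights = []
    w = 1
    for r in reversed(radices):   # least-significant variable last
        weights.append(w)
        w *= r
    weights.reverse()
    def encode(config):
        return sum(x * wt for x, wt in zip(config, weights))
    return encode, w   # w is the total number of configurations

records = [(0, 1, 1), (1, 0, 2), (0, 1, 1), (0, 0, 0)]
encode, size = make_encoder([2, 2, 3])   # cardinalities of 3 variables
counts = [0] * size
for rec in records:
    counts[encode(rec)] += 1

print(counts[encode((0, 1, 1))])   # the configuration seen twice
```

Because the encoding is a bijection onto 0..size−1, looking up any joint count is O(1), which is what metric-scoring structure search needs when it evaluates many candidate parent sets.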
Abstract: The Wavelet-Domain Projection Pursuit Learning Network (WDPPLN) is proposed for restoring degraded images. The new network combines the advantages of both projection pursuit and wavelet shrinkage. Restoring an image is very difficult when little a priori knowledge is available about the multiple sources of degradation. WDPPLN successfully resolves this problem by separately processing wavelet coefficients and scale coefficients. Parameters in WDPPLN, which are used to simulate the degradation factors, are estimated via WDPPLN training using the scale coefficients. WDPPLN also uses the soft-threshold wavelet shrinkage technique to suppress noise in the three high-frequency subbands. The new method is compared with traditional methods and with the Projection Pursuit Learning Network (PPLN) method. Experimental results demonstrate that it is an effective method for unsupervised restoration of degraded images.
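The wavelet-shrinkage component relies on the standard soft-threshold operator, which shrinks each high-frequency coefficient toward zero by the threshold t. A minimal version of that operator follows (the projection-pursuit part of WDPPLN is not shown):

```python
# Soft-threshold operator used in wavelet shrinkage:
#   soft(w, t) = sign(w) * max(|w| - t, 0)
# Small coefficients (mostly noise) are zeroed; large ones are shrunk.
def soft_threshold(w, t):
    if w > t:
        return w - t
    if w < -t:
        return w + t
    return 0.0

# Applied to the high-frequency subband coefficients only; the scale
# (low-frequency) coefficients are left for the learning network.
coeffs = [2.5, -0.3, 0.1, -1.7, 0.6]
denoised = [soft_threshold(w, 0.5) for w in coeffs]
print(denoised)
```

Unlike hard thresholding, the soft rule is continuous in w, which avoids the ringing artifacts that discontinuous shrinkage introduces into the reconstruction.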
Funding: Supported by a National Research Foundation of Korea (NRF) grant of the Korean government (MSIP) (2020R1A2B5B01001899) (Grantee: GJY, http://www.nrf.re.kr) and by the Institute of Engineering Research at Seoul National University (Grantee: GJY, http://www.snu.ac.kr). The authors are grateful for their support.
Abstract: This paper proposes a new self-learning data-driven methodology that can develop the failure criteria of unknown anisotropic ductile materials from a minimal number of experimental tests. Establishing failure criteria for anisotropic ductile materials normally requires time-consuming tests and manual data evaluation; the proposed method overcomes these practical challenges. The methodology combines four ideas: 1) a deep learning neural network (DLNN)-based material constitutive model, 2) self-learning inverse finite element (SELIFE) simulation, 3) algorithmic identification of failure points from the self-learned stress-strain curves, and 4) derivation of the failure criteria through symbolic regression with genetic programming. The stress update and the algorithmic tangent operator were formulated in terms of DLNN parameters for nonlinear finite element analysis. The SELIFE simulation algorithm then gradually makes the DLNN model learn highly complex multi-axial stress and strain relationships, guided by the experimental boundary measurements. Following failure point identification, self-learning data-driven failure criteria are eventually developed with the help of a reliable symbolic regression algorithm. The methodology and the resulting failure criteria were verified by comparison with reference failure criteria and by simulations with different material orientations.
Funding: Supported by a State Grid Zhejiang Electric Power Co., Ltd. Economic and Technical Research Institute Project (Key Technologies and Empirical Research of Diversified Integrated Operation of User-Side Energy Storage in the Power Market Environment, No. 5211JY19000W) and by the National Natural Science Foundation of China (Research on Power Market Management to Promote Large-Scale New Energy Consumption, No. 71804045).
Abstract: In the electricity market, fluctuations in real-time prices are unstable, and changes in short-term load are determined by many factors. By studying the timing of charging and discharging, as well as the economic benefits of energy storage participating in the power market, this paper takes energy storage scheduling as one factor affecting short-term power load, alongside time-of-use price, holidays, and temperature. A deep learning network is used to predict the short-term load: a convolutional neural network (CNN) extracts the features, and a long short-term memory (LSTM) network learns the temporal characteristics of the load values, which can effectively improve prediction accuracy. Taking the load data of a certain region as an example, the CNN-LSTM prediction model is compared with a single LSTM prediction model. The experimental results show that the CNN-LSTM deep learning network, with energy storage participating in dispatch, achieves high prediction accuracy for short-term power load forecasting.
Funding: This work was supported by the National Natural Science Foundation of China (Nos. 61966039, 62241604), the Scientific Research Fund Project of the Education Department of Yunnan Province (No. 2023Y0565), and in part by the Xingdian Talent Support Program for Young Talents (No. XDYC-QNRC-2022-0518).
Abstract: Many network representation learning algorithms (NPLA) have originated in recent years from the process of random walks between nodes. Although these algorithms can obtain good embedding results, they also have limitations; for instance, only the structural information of nodes is considered when they are constructed. To address this issue, a label and community information-based network representation learning algorithm (LC-NPLA) is proposed in this paper. First, the first-order neighbors of nodes are reconstructed using the community information and label information of the nodes. Next, the random walk strategy is improved by integrating the degree information and label information of the nodes. Then, the node sequences obtained from random walk sampling are transformed into node representation vectors by the Skip-Gram model. Finally, experimental results on ten real-world networks demonstrate that the proposed algorithm has clear advantages in label classification, network reconstruction, and link prediction tasks, compared with three benchmark algorithms.
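The improved walk can be sketched as a neighbor-sampling step whose transition weights mix node degree with label agreement. This is a hedged reconstruction: the weighting formula below (neighbor degree plus a bonus for a shared label) is an assumption for illustration, not the exact LC-NPLA rule.

```python
import random

# One biased random walk: neighbors with higher degree, and neighbors
# sharing the current node's label, are sampled more often.
def walk(graph, labels, start, length, label_bonus=2.0, seed=7):
    rng = random.Random(seed)
    path = [start]
    for _ in range(length - 1):
        cur = path[-1]
        nbrs = graph[cur]
        weights = [len(graph[n]) +
                   (label_bonus if labels[n] == labels[cur] else 0.0)
                   for n in nbrs]
        path.append(rng.choices(nbrs, weights=weights)[0])
    return path

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
labels = {0: "a", 1: "a", 2: "b", 3: "b"}
p = walk(graph, labels, start=0, length=6)
print(p)   # every consecutive pair in the path is an edge of the graph
```

The sampled sequences would then be fed to Skip-Gram exactly as in DeepWalk-style methods, so only the transition weighting changes.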
Funding: Supported by the National Science and Technology Major Project of the Ministry of Science and Technology of China (2014ZX03001027).
Abstract: There are various heterogeneous networks through which terminals can obtain a better quality of service, and signal system recognition and classification contribute greatly to this process. However, under low signal-to-noise ratio (SNR) conditions or time-varying multipath channels, the majority of existing signal recognition algorithms face limitations. We present a robust signal recognition method based on the original and latest updated versions of the extreme learning machine (ELM) to help users switch between networks. The ELM utilizes signal characteristics to distinguish systems. The superiority of this algorithm lies in the random choice of hidden nodes and in the fact that it determines the output weights analytically, which results in lower complexity. Theoretically, the algorithm offers good generalization performance at an extremely fast learning speed. Moreover, we implement GSM/WCDMA/LTE models in the Matlab environment using the Simulink tools. The simulations reveal that the signals can be recognized successfully, achieving 95% accuracy in a low-SNR (0 dB) environment over a time-varying multipath Rayleigh fading channel.
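The defining ELM trick is that the hidden-layer weights are fixed at random while the output weights are solved analytically by least squares. The sketch below uses two hidden nodes and two training samples so the linear solve reduces to a hand-written 2x2 inverse; fixed (rather than random) hidden weights are assumed for reproducibility, and the signal-feature inputs of the paper are replaced by scalar toy data.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hidden layer with fixed (weight, bias) pairs standing in for ELM's
# randomly chosen hidden nodes.
W = [(1.0, 0.5), (-0.7, 1.2)]

def hidden(x):
    return [sigmoid(w * x + b) for w, b in W]

# Analytic output weights: with 2 samples and 2 hidden nodes the
# hidden-output matrix H is square, so beta = H^{-1} y, which is the
# core ELM least-squares step (no iterative training at all).
def train_elm(xs, ys):
    H = [hidden(x) for x in xs]
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    inv = [[H[1][1] / det, -H[0][1] / det],
           [-H[1][0] / det, H[0][0] / det]]
    return [inv[0][0] * ys[0] + inv[0][1] * ys[1],
            inv[1][0] * ys[0] + inv[1][1] * ys[1]]

def predict(beta, x):
    h = hidden(x)
    return beta[0] * h[0] + beta[1] * h[1]

beta = train_elm([0.0, 1.0], [0.3, 0.8])
print(predict(beta, 0.0), predict(beta, 1.0))
```

Because the only "training" is one linear solve, learning is extremely fast, which is the property the abstract credits for the method's low complexity.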