Infrared image recognition plays an important role in the inspection of power equipment. Existing technologies dedicated to this purpose often require manually selected features, which are neither transferable nor interpretable, and suffer from limited training data. To address these limitations, this paper proposes an automatic infrared image recognition framework that includes an object recognition module based on a deep self-attention network and a temperature distribution identification module based on a multi-factor similarity calculation. First, the features of an input image are extracted and embedded using a multi-head attention encoding-decoding mechanism. Thereafter, the embedded features are used to predict the equipment component category and location. In the located area, preliminary segmentation is performed. Finally, similar areas are gradually merged, and the temperature distribution of the equipment is obtained to identify a fault. Our experiments indicate that the proposed method achieves significantly improved accuracy compared with other related methods and, hence, provides a good reference for the automation of power equipment inspection.
Funding: This work was supported by the National Key R&D Program of China (2019YFE0102900).
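As a rough illustration of the attention-based recognition step described above, the following PyTorch sketch encodes image patch features with a multi-head self-attention encoder and predicts a component class and a bounding box. All layer sizes, head counts, and names (AttentionRecognizer, cls_head, box_head) are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch: multi-head attention encoder over patch features,
# with class and location heads; sizes are assumptions.
import torch
import torch.nn as nn

class AttentionRecognizer(nn.Module):
    def __init__(self, dim=256, heads=8, layers=4, num_classes=10):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.cls_head = nn.Linear(dim, num_classes)   # component category
        self.box_head = nn.Linear(dim, 4)             # (x, y, w, h) location

    def forward(self, patch_feats):                   # (B, N_patches, dim)
        h = self.encoder(patch_feats)
        pooled = h.mean(dim=1)                        # global embedding
        return self.cls_head(pooled), self.box_head(pooled).sigmoid()

model = AttentionRecognizer()
logits, boxes = model(torch.randn(2, 196, 256))       # e.g. a 14x14 patch grid
```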
Sustainable energy systems will entail changes in carbon intensity, which must be projected properly to keep the grid running smoothly and to reduce greenhouse gas emissions. This article presents TransCarbonNet, a novel hybrid deep learning framework that adds self-attention to a bidirectional Long Short-Term Memory (Bi-LSTM) network to forecast grid carbon intensity several days ahead. The proposed temporal fusion model learns both local temporal interactions and long-term patterns in the carbon emission data, enabling it to produce suitable forecasts over a seven-day horizon. TransCarbonNet uses a multi-head self-attention component to identify significant temporal connections, while the Bi-LSTM component captures sequential dependencies in both directions. Extensive tests on two real-world datasets show substantially improved results compared with existing methods, with mean relative errors of 15.3% and 12.7%, respectively. The framework also yields interpretable attention weights that reveal the critical periods driving carbon intensity changes and support informed carbon-sustainability management decisions. The effectiveness of the proposed solution has been validated across numerous operating cases, establishing TransCarbonNet as an effective tool for carbon-aware grid optimization.
Funding: Funded by the Deanship of Scientific Research and Libraries at Princess Nourah bint Abdulrahman University through the “Nafea” Program, Grant No. NP-45-082.
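The temporal fusion described here pairs a Bi-LSTM with multi-head self-attention. The sketch below shows one plausible minimal arrangement in PyTorch; the feature count, hidden size, and seven-day output head are assumptions, and the returned attention weights correspond to the interpretability claim.

```python
# Illustrative Bi-LSTM + self-attention forecaster in the spirit of
# TransCarbonNet; dimensions and the 7-day head are assumed.
import torch
import torch.nn as nn

class CarbonForecaster(nn.Module):
    def __init__(self, n_features=1, hidden=64, heads=4, horizon=7):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.attn = nn.MultiheadAttention(embed_dim=2 * hidden,
                                          num_heads=heads, batch_first=True)
        self.head = nn.Linear(2 * hidden, horizon)    # one value per day

    def forward(self, x):                             # x: (B, T, n_features)
        seq, _ = self.lstm(x)                         # bidirectional context
        ctx, weights = self.attn(seq, seq, seq)       # weights are inspectable
        return self.head(ctx[:, -1]), weights         # forecast + attention map

y_hat, attn = CarbonForecaster()(torch.randn(8, 96, 1))  # 96 past time steps
```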
With the proliferation of Internet of Things (IoT) devices, securing these interconnected systems against cyberattacks has become a critical challenge. Traditional security paradigms often fail to cope with the scale and diversity of IoT network traffic. This paper presents a comparative benchmark of classic machine learning (ML) and state-of-the-art deep learning (DL) algorithms for IoT intrusion detection. Our methodology employs a two-phased approach: a preliminary pilot study using a custom-generated dataset to establish baselines, followed by a comprehensive evaluation on the large-scale CICIoT2023 dataset. We benchmarked algorithms including Random Forest, XGBoost, CNN, and stacked LSTM. The results indicate that while top-performing models from both categories achieve over 99% classification accuracy, this metric masks a crucial performance trade-off. We demonstrate that tree-based ML ensembles exhibit superior precision (91%) in identifying benign traffic, making them effective at reducing false positives. Conversely, DL models demonstrate superior recall (96%), making them better suited for minimizing the interruption of legitimate traffic. We conclude that selecting an optimal model is not merely a matter of maximizing accuracy but a strategic choice dependent on the specific security priority: either minimizing false alarms or ensuring service availability. This work provides a practical framework for deploying context-aware security solutions in diverse IoT environments.
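The precision-versus-recall trade-off that drives the paper's conclusion can be reproduced in miniature with scikit-learn. The snippet below uses synthetic data as a stand-in for the IoT traffic; only the evaluation pattern, not the numbers, mirrors the study.

```python
# Sketch of the benchmark's evaluation pattern on synthetic stand-in data
# (the real study uses a custom dataset and CICIoT2023).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.3, 0.7], random_state=0)  # 1 = attack
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
# Precision on benign traffic (fewer false alarms) vs recall on attacks
# (fewer missed intrusions) -- the trade-off behind model selection.
print("benign precision:", precision_score(y_te, pred, pos_label=0))
print("attack recall:   ", recall_score(y_te, pred, pos_label=1))
```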
Red chilli powder (RCP) is a versatile spice accepted globally in diverse culinary products due to its distinct pungency and red colour. High market demand makes the spice vulnerable to unethical mixing, so its quality assessment is crucial. The non-destructive application of computer vision to measuring food adulteration has always attracted researchers and industry due to its robustness and feasibility. Following the current era of Food Quality 4.0 and artificial intelligence, this study adopts an approach based on 1D-convolutional neural network (CNN) and 2D-CNN models for detecting RCP adulteration. Standard performance evaluation metrics are used to analyse the efficiency of these models. The histogram features from the Lab colour space trained on the 1D-CNN model (BS-40 and Epoch 100) show an accuracy of 84.56%. On the other hand, the 2D-CNN model DenseNet-121 (AdamW and BS-30) shows a test accuracy of 84.62%. From the observations of this study, it is concluded that CNN models can be a promising tool for solving the adulteration detection problem in food quality evaluation. Further, internet of things-based systems can be developed to aid industry and government agencies in monitoring RCP quality and curbing the unethical practice of food adulteration.
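A 1D-CNN over colour histograms, as used in the first branch of this study, can be sketched as follows; the 256-bin histogram input, layer widths, and two-class output are assumptions rather than the paper's exact architecture.

```python
# Minimal 1D-CNN over Lab-colour histogram features; sizes are assumed.
import torch
import torch.nn as nn

model_1d = nn.Sequential(
    nn.Conv1d(3, 32, kernel_size=5, padding=2),  # 3 channels: L, a, b histograms
    nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(32, 64, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(64, 2),                            # pure vs adulterated RCP
)
logits = model_1d(torch.randn(40, 3, 256))       # batch size 40 (BS-40)
```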
Precipitation nowcasting is of great importance for disaster prevention and mitigation. However, precipitation is a complex spatio-temporal phenomenon influenced by various underlying physical factors. Even slight changes in the initial precipitation field can have a significant impact on future precipitation patterns, making short-term high-resolution precipitation nowcasting a major challenge. Traditional deep learning methods often have difficulty capturing the long-term spatial dependence of precipitation and usually operate at low resolution. To address these issues, building upon the Simpler yet Better Video Prediction (SimVP) framework, we propose a deep generative neural network that incorporates the Simple Parameter-Free Attention Module (SimAM) and Generative Adversarial Networks (GANs) for short-term high-resolution precipitation event forecasting. Through an adversarial training strategy, critical precipitation features are extracted from complex radar echo images. During adversarial learning, the dynamic competition between the generator and the discriminator continuously improves the model's prediction accuracy and resolution for short-term precipitation. Experimental results demonstrate that the proposed method can effectively forecast short-term precipitation events at various scales and shows the best overall performance among existing methods.
Funding: Supported by the National Natural Science Foundation of China (No. 42306214), the Postdoctoral Innovative Talents Support Program of Shandong Province (No. SDBX2022026), the China Postdoctoral Science Foundation (No. 2023M733533), and the Special Research Assistant Project of the Chinese Academy of Sciences in 2022.
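SimAM, the parameter-free attention module incorporated here, can be written in a few lines following its published energy formulation; the regularizer value below is an assumed default.

```python
# Parameter-free SimAM attention, following the published SimAM energy
# formulation; lam is the regularizer (value assumed).
import torch
import torch.nn as nn

class SimAM(nn.Module):
    def __init__(self, lam=1e-4):
        super().__init__()
        self.lam = lam

    def forward(self, x):                      # x: (B, C, H, W) radar features
        n = x.shape[2] * x.shape[3] - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        v = d.sum(dim=(2, 3), keepdim=True) / n
        energy_inv = d / (4 * (v + self.lam)) + 0.5
        return x * torch.sigmoid(energy_inv)   # reweight with zero extra params

out = SimAM()(torch.randn(2, 64, 128, 128))
```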
Parkinson’s disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models using voice signals as input are common in the literature. Deep learning algorithms are believed to further enhance performance; nevertheless, this is challenging given the small-scale and imbalanced nature of PD datasets. This paper proposes a convolutional neural network-based deep support vector machine (CNN-DSVM) that automates feature extraction with the CNN and extends the conventional SVM to a DSVM for better classification performance on small-scale PD datasets. A customized kernel function reduces the impact of biased classification towards the majority class (healthy candidates in our setting). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model’s performance. In the performance evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. The performance comparison covers five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. Results reveal the effectiveness of the IGAN algorithm, which improves sensitivity and specificity by 4.05%–4.72% and 4.96%–5.86%, respectively, and of the CNN-DSVM algorithm, which improves sensitivity by 1.24%–57.4% and specificity by 1.04%–163% while reducing biased detection towards the majority class. Ablation experiments confirm the effectiveness of the individual components. Two future research directions are also suggested.
Funding: Fully supported by a grant from Hong Kong Metropolitan University (RIF/2021/05).
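The imbalance-aware idea, CNN-extracted features fed to a class-weighted SVM, can be approximated as below. This stand-in uses scikit-learn's built-in balanced weighting in place of the paper's customized DSVM kernel, and random vectors in place of real CNN features.

```python
# Hedged stand-in for CNN features + class-weighted SVM on an
# imbalanced PD dataset; data and weighting scheme are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
feats = rng.normal(size=(300, 128))            # stand-in CNN features
labels = (rng.random(300) < 0.25).astype(int)  # imbalanced: 1 = PD minority

clf = SVC(kernel="rbf", class_weight="balanced").fit(feats, labels)
pred = clf.predict(feats)
# Sensitivity = recall on the PD class; specificity = recall on healthy class.
print("sensitivity:", recall_score(labels, pred, pos_label=1))
print("specificity:", recall_score(labels, pred, pos_label=0))
```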
Lightweight convolutional neural networks (CNNs) have simple structures but struggle to comprehensively and accurately extract important semantic information from images. While attention mechanisms can enhance CNNs by learning distinctive representations, most existing spatial and hybrid attention methods focus on local regions with extensive parameters, making them unsuitable for lightweight CNNs. In this paper, we propose a self-attention mechanism tailored for lightweight networks, namely the brief self-attention module (BSAM). BSAM consists of the brief spatial attention (BSA) and advanced channel attention blocks. Unlike conventional self-attention methods with many parameters, our BSA block improves the performance of lightweight networks by effectively learning global semantic representations. Moreover, BSAM can be seamlessly integrated into lightweight CNNs for end-to-end training, maintaining the network’s lightweight and mobile characteristics. We validate the effectiveness of the proposed method on image classification tasks using the Food-101, Caltech-256, and Mini-ImageNet datasets.
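A lightweight hybrid attention block in the same spirit, channel reweighting followed by a one-convolution spatial map, might look like the following; this is an illustrative analogue, not the actual BSA/BSAM design.

```python
# Illustrative lightweight hybrid attention (channel then spatial reweighting);
# the real BSAM design may differ.
import torch
import torch.nn as nn

class LightAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(            # squeeze-excite style, few params
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.spatial = nn.Conv2d(channels, 1, kernel_size=1)  # global spatial map

    def forward(self, x):
        b, c, _, _ = x.shape
        x = x * self.channel(x).view(b, c, 1, 1)              # channel reweight
        return x * torch.sigmoid(self.spatial(x))             # spatial reweight

y = LightAttention(32)(torch.randn(2, 32, 56, 56))
```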
The development of deep learning has made non-biochemical methods for molecular property prediction and screening a reality, which can accelerate relevant experiments and reduce their cost. There are currently two main approaches to representing molecules: (a) representing molecules by fixed molecular descriptors, and (b) representing molecules by graph convolutional neural networks. Both of these representation methods have achieved results in their respective experiments. Building on past efforts, we propose a Dual Self-attention Fusion Message Neural Network (DSFMNN). DSFMNN combines a dual self-attention mechanism with a graph convolutional neural network. Its advantages are: (1) the dual self-attention mechanism focuses not only on the relationship between individual subunits in a molecule but also on the relationship between the atoms and chemical bonds contained in each subunit; (2) on the directed molecular graph, a message delivery approach centered on directed molecular bonds is used. We test the performance of the model on eight publicly available datasets and compare it with several models. Based on the current experimental results, DSFMNN shows superior performance compared to previous models on the datasets applied in this paper.
Deep learning-based systems for finger vein recognition have gained rising attention in recent years due to improved efficiency and enhanced security. The performance of existing CNN-based methods is limited by the poor generalization of learned features and the scarcity of finger vein training images. Considering these concerns, this work develops a simplified deep transfer learning-based framework for finger vein recognition using an EfficientNet deep learning model with a self-attention mechanism. Data augmentation using various geometric methods is employed to address the shortage of training data required for a deep learning model. The proposed model is tested using K-fold cross-validation on three publicly available datasets: HKPU, FVUSM, and SDUMLA. The developed network is also compared with other modern deep networks to check its effectiveness, and the proposed method is further compared with other existing finger vein recognition (FVR) methods. The experimental results exhibit superior recognition accuracy of the proposed method compared to existing methods; in addition, the developed method proves more effective and less complicated at extracting robust features. The proposed EffAttenNet achieves an accuracy of 98.14% on HKPU, 99.03% on FVUSM, and 99.50% on SDUMLA.
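The transfer-learning recipe, a pretrained EfficientNet with a replaced classification head plus geometric augmentation, can be set up as below; the identity count is a placeholder and the paper's self-attention addition is omitted for brevity.

```python
# Sketch of the transfer-learning setup: pretrained EfficientNet backbone,
# new head, geometric augmentation; subject count is a placeholder.
import torch
import torch.nn as nn
from torchvision import models, transforms

augment = transforms.Compose([                  # geometric augmentation
    transforms.RandomRotation(10),
    transforms.RandomAffine(0, translate=(0.05, 0.05)),
    transforms.RandomHorizontalFlip(),
])

net = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
num_subjects = 100                              # identities in the vein dataset
net.classifier[1] = nn.Linear(net.classifier[1].in_features, num_subjects)
logits = net(torch.randn(4, 3, 224, 224))       # vein images as 3-channel input
```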
For image compressed sensing reconstruction, most algorithms reconstruct image blocks one by one and stack many convolutional layers, which usually leads to obvious block artifacts, high computational complexity, and long reconstruction times. An image compressed sensing reconstruction network based on a self-attention mechanism (SAMNet) is proposed. For compressed sampling, a self-attention convolution was designed that captures richer features, so the compressed sensing measurements retain more image structure information. For reconstruction, a self-attention mechanism was introduced into the convolutional neural network, and a reconstruction network comprising residual blocks, a bottleneck transformer (BoTNet), and dense blocks was proposed, which strengthens the transfer of image features and dramatically reduces the number of parameters. On the Set5 dataset, at measurement rates of 0.01, 0.04, 0.10, and 0.25, the average peak signal-to-noise ratio (PSNR) of SAMNet improves on CSNet+ by 1.27, 1.23, 0.50, and 0.15 dB, respectively, and the running time for reconstructing a 256×256 image is reduced by 0.1473, 0.1789, 0.2310, and 0.2524 s compared to ReconNet. Experimental results show that SAMNet improves the quality of reconstructed images and reduces reconstruction time.
Funding: Supported by the National Natural Science Foundation of China (Nos. 61261016, 61661025) and the Science and Technology Plan of Gansu Province (No. 20JR10RA273).
As the complexity of scientific satellite missions increases, the requirements for their magnetic fields, magnetic field fluctuations, and even magnetic field gradients and variations become increasingly stringent. Additionally, there is a growing need to address the alternating magnetic fields produced by the spacecraft itself. This paper introduces a novel modeling method for spacecraft magnetic dipoles that integrates a self-attention mechanism and a transformer combined with Kolmogorov-Arnold Networks. The self-attention mechanism captures correlations among globally sparse data, establishing dependencies between sparse magnetometer readings. Concurrently, the Kolmogorov-Arnold Network, proficient at modeling implicit numerical relationships between data features, enhances the ability to learn subtle patterns. Comparative experiments validate the capability of the proposed method to precisely model magnetic dipoles, achieving maximum root mean square errors of 24.06 mA·m² and 0.32 cm for size and location modeling, respectively. The spacecraft magnetic model established using this method accurately computes magnetic fields and alternating magnetic fields at designated surfaces or points. This approach facilitates the rapid and precise construction of individual and complete spacecraft magnetic models, enabling the verification of magnetic specifications from the spacecraft design phase onward.
Funding: Supported by the National Key Research and Development Program of China (2020YFC2200901).
The Ultra-Wideband (UWB) location-based service is receiving increasing attention due to its high ranging accuracy and good time resolution. However, Non-Line-of-Sight (NLOS) propagation may reduce the ranging accuracy of UWB localization systems in indoor environments, so it is important to identify LOS and NLOS propagation before taking proper measures to improve UWB localization accuracy. In this paper, a deep learning-based UWB NLOS/LOS classification algorithm called FCN-Attention is proposed. It utilizes a Fully Convolutional Network (FCN) to improve feature extraction and a self-attention mechanism to enhance feature description, thereby improving classification accuracy. The algorithm is evaluated on an open-source dataset, a locally collected dataset, and a mixed dataset created from the two. The experimental results show that FCN-Attention achieves classification accuracies of 88.24% on the open-source dataset, 100% on the locally collected dataset, and 92.01% on the mixed dataset, outperforming the other evaluated NLOS/LOS classification algorithms in most scenarios considered in this paper.
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2016YFB0502200), the Postdoctoral Research Foundation of China (Grant No. 2020M682480), and the Fundamental Research Funds for the Central Universities (Grant No. 2042021kf0009).
A false data injection attack (FDIA) can affect the state estimation of the power grid by tampering with measured grid data, thereby disrupting the stable operation of the smart grid. Existing work usually trains a detection model by fusing data-driven features from diverse power data streams. Data-driven features, however, cannot effectively capture the differences between noisy data and attack samples; as a result, slight noise disturbances in the power grid may cause many false detections of FDIA attacks. To address this problem, this paper designs a deep collaborative self-attention network for robust FDIA detection, in which the spatio-temporal features of cascaded FDIA attacks are fully integrated. First, a graph convolution module based on high-order Chebyshev polynomials is designed to effectively aggregate spatial information between grid nodes, and a spatial self-attention mechanism dynamically assigns attention weights to each node, guiding the network to focus on the node information most conducive to FDIA detection. Furthermore, a bidirectional Long Short-Term Memory (LSTM) network is introduced for time series modeling and long-term dependence analysis of power grid data, and a temporal self-attention mechanism describes the time correlation of the data by assigning different weights to different time steps. The designed deep collaborative network can effectively mine subtle perturbations from spatio-temporal feature information, efficiently distinguish power grid noise from FDIA attacks, and adapt to diverse attack intensities. Extensive experiments demonstrate that our method achieves efficient detection performance on actual load data from the New York Independent System Operator (NYISO) in the IEEE 14-, 39-, and 118-bus systems, and outperforms state-of-the-art FDIA detection schemes in detection accuracy and robustness.
Funding: Supported in part by the Research Fund of Guangxi Key Lab of Multi-Source Information Mining & Security (MIMS21-M-02).
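The high-order Chebyshev graph convolution at the core of the spatial module can be sketched directly from the standard Chebyshev recurrence T_k = 2*L_hat*T_{k-1} - T_{k-2}; the order K = 3 and the identity stand-in for the scaled Laplacian are assumptions.

```python
# Minimal high-order Chebyshev graph convolution; K and the Laplacian
# placeholder are assumed, not the paper's settings.
import torch
import torch.nn as nn

class ChebConv(nn.Module):
    def __init__(self, in_dim, out_dim, K=3):
        super().__init__()
        self.theta = nn.ModuleList(nn.Linear(in_dim, out_dim, bias=False)
                                   for _ in range(K))

    def forward(self, x, L_hat):        # x: (N, F); L_hat: scaled graph Laplacian
        Tx_prev, Tx = x, L_hat @ x      # T0 and T1 Chebyshev terms
        out = self.theta[0](Tx_prev) + self.theta[1](Tx)
        for k in range(2, len(self.theta)):
            Tx, Tx_prev = 2 * L_hat @ Tx - Tx_prev, Tx   # recurrence for T_k
            out = out + self.theta[k](Tx)
        return torch.relu(out)

N = 14                                  # e.g. IEEE 14-bus grid nodes
feats = ChebConv(8, 16)(torch.randn(N, 8), torch.eye(N))  # identity Laplacian
```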
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to the low regularity of solutions at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, and then use MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of the method through several numerical experiments, test various parameter-sharing structures in MTL, and compare their testing results. Finally, the method is applied to the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered even with sparse or noisy data.
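Structurally, the MTL setup amounts to a shared trunk with one output head per delay-induced sub-interval plus auxiliary heads for the integral terms, trained on a summed loss. The placeholder residuals below only show the wiring; a real implementation would substitute the DIDE residuals and breaking-point conditions.

```python
# Schematic of the multi-task wiring: shared trunk, per-task heads, summed
# loss. Residuals are placeholders, not a specific DIDE.
import torch
import torch.nn as nn

trunk = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                      nn.Linear(64, 64), nn.Tanh())
# Three heads: two sub-interval solutions plus an auxiliary integral output.
heads = nn.ModuleList(nn.Linear(64, 1) for _ in range(3))

t = torch.rand(128, 1, requires_grad=True)   # grads of t needed for residuals
h = trunk(t)
outputs = [head(h) for head in heads]
# Each task contributes its own residual term; weights could also be learned.
loss = sum(out.pow(2).mean() for out in outputs)  # placeholder residuals
loss.backward()
```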
Located in northern China, the Hetao Plain is an important agro-economic zone and population centre. The deterioration of local groundwater quality has had a serious impact on human health and economic development, and groundwater vulnerability assessment (GVA) has become an essential task for identifying the current status and development trend of groundwater quality. In this study, Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models are integrated, with a self-attention (SA) mechanism introduced, to realize spatio-temporal prediction of regional groundwater vulnerability. The study first builds the CNN-LSTM model with the SA mechanism and evaluates its prediction accuracy for groundwater vulnerability against common machine learning models such as Support Vector Machine (SVM), Random Forest (RF), and Extreme Gradient Boosting (XGBoost). The results indicate that the CNN-LSTM model outperforms these models, demonstrating its value for groundwater vulnerability assessment. The predictions indicate an increased risk of groundwater vulnerability in the study area over the coming years, attributable to the synergistic impact of global climate anomalies and intensified local human activities. Moreover, the overall groundwater vulnerability risk of the entire region has increased, evident from both the notably high value and the standard deviation, suggesting that the spatial variability of groundwater vulnerability is expected to expand as climate change and human activity continue. The model can be adapted for diverse applications in regional environmental assessment, pollution prediction, and risk statistics. This study holds particular significance for ecological protection and groundwater resource management.
Funding: Supported by the National Key Research and Development Program of China (No. 2021YFA0715900).
Metabolomics covers a wide range of applications in life sciences, biomedicine, and phytology. Data acquisition (to achieve high coverage and efficiency) and analysis (to pursue good classification) are two key segments of metabolomics workflows. Various chemometric approaches utilizing either pattern recognition or machine learning have been employed to separate different groups. However, insufficient feature extraction, inappropriate feature selection, overfitting, or underfitting lead to insufficient capacity to discriminate plants that are easily confused. Using two ginseng varieties, namely Panax japonicus (PJ) and Panax japonicus var. major (PJvm), which contain similar ginsenosides, we integrated pseudo-targeted metabolomics and deep neural network (DNN) modeling to achieve accurate species differentiation. The pseudo-targeted metabolomics approach was optimized in terms of data acquisition mode, ion pair generation, comparison between multiple reaction monitoring (MRM) and scheduled MRM (sMRM), and chromatographic elution gradient. In total, 1980 ion pairs were monitored within 23 min, allowing the most comprehensive ginseng metabolome analysis. The established DNN model demonstrated excellent classification performance (in terms of accuracy, precision, recall, F1 score, area under the curve, and receiver operating characteristic (ROC)) on both the entire metabolome data and the feature-selection dataset, exhibiting advantages over random forest (RF), support vector machine (SVM), extreme gradient boosting (XGBoost), and multilayer perceptron (MLP). Moreover, DNNs were advantageous for automated feature learning, nonlinear modeling, adaptability, and generalization. This study confirmed the practicality of the established strategy for efficient metabolomics data analysis and reliable classification even with small-volume samples. The approach holds promise for plant metabolomics and is not limited to ginseng.
Funding: Supported by the National Key R&D Program of China (Grant No. 2022YFC3501805), the National Natural Science Foundation of China (Grant No. 82374030), the Science and Technology Program of Tianjin, China (Grant No. 23ZYJDSS00030), the Tianjin Outstanding Youth Fund, China (Grant No. 23JCJQJC00030), and the China Postdoctoral Science Foundation-Tianjin Joint Support Program (Grant No. 2023T030TJ).
On Twitter, people often use hashtags to mark the subject of a tweet, giving tweets specific themes and making content easy to manage. With the growing number of tweets, how to automatically recommend hashtags for tweets has received wide attention. Previous hashtag recommendation methods converted the task into a multi-class classification problem; however, such methods can only recommend hashtags that appeared in historical data and cannot recommend new ones. In this work, we extend the self-attention mechanism to turn hashtag recommendation into a sequence labeling task. To train and evaluate the proposed method, we used real tweet data collected from Twitter. Experimental results show that the proposed method significantly outperforms the most advanced method: compared with the state-of-the-art methods, our accuracy improves by 4%.
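Recast as sequence labeling, the model tags each tweet token (for example with B/I/O labels) so hashtags can be read off the tweet itself, including ones never seen in training. A minimal self-attention tagger of this kind, with assumed vocabulary and layer sizes, is sketched below.

```python
# Minimal self-attention sequence tagger for hashtag words; all sizes
# and the B/I/O scheme are assumptions.
import torch
import torch.nn as nn

class HashtagTagger(nn.Module):
    def __init__(self, vocab=30000, dim=128, heads=4, tags=3):  # B, I, O
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.tagger = nn.Linear(dim, tags)          # one tag per token

    def forward(self, token_ids):                   # (B, T) token ids
        return self.tagger(self.encoder(self.embed(token_ids)))

tag_logits = HashtagTagger()(torch.randint(0, 30000, (8, 32)))  # (8, 32, 3)
```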
Satellite-terrestrial networks can transcend the geographical constraints inherent in traditional communication networks, enabling global coverage and offering users ubiquitous computing power support, which makes them an important development direction for future communications. In this paper, we consider a multi-scenario network model under low earth orbit (LEO) satellite coverage, which can provide computing resources to users in remote areas to improve task processing efficiency. However, LEO satellites have limited computing and communication resources, and the channels are time-varying and complex, which makes extracting state information a daunting task. We therefore study the dynamic resource management issue of joint computing and communication resource allocation and power control for multi-access edge computing (MEC). To tackle this problem, we transform it into a Markov decision process (MDP) and propose the self-attention based dynamic resource management (SABDRM) algorithm, which effectively extracts state information features to enhance the training process. Simulation results show that the proposed algorithm effectively reduces the long-term average delay and energy consumption of the tasks.
Funding: Supported by the National Key Research and Development Plan (No. 2022YFB2902701) and the Key Natural Science Foundation of Shenzhen (No. JCYJ20220818102209020).
Semantic segmentation of eye images is a complex task with important applications in human-computer interaction, cognitive science, and neuroscience. Achieving real-time, accurate, and robust segmentation algorithms is crucial for computationally limited portable devices such as augmented reality and virtual reality headsets. With the rapid advancement of deep learning, many network models have been developed specifically for eye image segmentation. Some methods divide the segmentation process into multiple stages to miniaturize model parameters while enhancing the output with post-processing techniques to improve segmentation accuracy; these approaches significantly increase inference time. Other networks adopt more complex encoding and decoding modules to achieve end-to-end output, which requires substantial computation. Balancing model size, accuracy, and computational complexity is therefore essential. To address these challenges, we propose a lightweight asymmetric UNet architecture and a projection loss function. We utilize ResNet 3-layer blocks to enhance feature extraction efficiency in the encoding stage. In the decoding stage, we employ regular convolutions and skip connections to upscale the feature maps from the latent space to the original image size, balancing model size and segmentation accuracy. In addition, we leverage the geometric features of the eye region and design a projection loss function that further improves segmentation accuracy without adding any inference cost. We validate our approach on the OpenEDS2019 dataset for virtual reality and achieve state-of-the-art performance with 95.33% mean intersection over union (mIoU). Our model has only 0.63M parameters and runs at 350 FPS, which are 68% and 200% of the state-of-the-art model RITNet, respectively.
Funding: Supported by the HFIPS Director's Foundation (YZJJ202207-TS), the National Natural Science Foundation of China (82371931), the Natural Science Foundation of Anhui Province (2008085MC69), the Natural Science Foundation of Hefei City (2021033), the General Scientific Research Project of Anhui Provincial Health Commission (AHWJ2021b150), the Collaborative Innovation Program of Hefei Science Center, CAS (2021HSC-CIP013), and the Anhui Province Key Research and Development Project (202204295107020004).
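One plausible reading of the projection loss is to compare axis-wise projections of the predicted and ground-truth masks, which encodes the eye region's geometry while adding no inference cost; the exact form used in the paper may differ.

```python
# Assumed form of a projection loss: L1 distance between axis-wise
# projections of predicted and ground-truth masks.
import torch
import torch.nn.functional as F

def projection_loss(pred, target):
    # pred: (B, C, H, W) class probabilities; target: one-hot, same shape
    loss_h = F.l1_loss(pred.sum(dim=2), target.sum(dim=2))  # project onto width
    loss_v = F.l1_loss(pred.sum(dim=3), target.sum(dim=3))  # project onto height
    return loss_h + loss_v

p = torch.softmax(torch.randn(2, 4, 64, 64), dim=1)
t = F.one_hot(torch.randint(0, 4, (2, 64, 64)), 4).permute(0, 3, 1, 2).float()
print(projection_loss(p, t))
```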