Traditional data-driven fault diagnosis methods depend on expert experience to manually extract effective fault features from signals, which has certain limitations. Conversely, deep learning techniques have become a central focus of fault diagnosis research owing to their strong fault feature extraction ability and end-to-end diagnostic efficiency. Recently, by exploiting the respective advantages of the convolutional neural network (CNN) and the Transformer in local and global feature extraction, research on combining the two has demonstrated promise in the field of fault diagnosis. However, the cross-channel convolution mechanism in CNN and the self-attention calculations in Transformer make the cooperative model excessively complex, resulting in high computational costs and limited industrial applicability. To tackle these challenges, this paper proposes a lightweight CNN-Transformer named SEFormer for rotating machinery fault diagnosis. First, a separable multiscale depthwise convolution block is designed to extract and integrate multiscale feature information from different channel dimensions of vibration signals. Then, an efficient self-attention block is developed to capture critical fine-grained features of the signal from a global perspective. Finally, experimental results on a planetary gearbox dataset and a motor roller bearing dataset show that the proposed framework balances robustness, generalization, and lightweight design better than recent state-of-the-art fault diagnosis models based on CNN and Transformer. This study presents a feasible strategy for developing a lightweight rotating machinery fault diagnosis framework aimed at economical deployment.
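The core of a separable multiscale depthwise convolution block can be sketched in a few lines of PyTorch: parallel depthwise convolutions at several kernel scales, merged across channels by a cheap pointwise convolution. This is an illustrative sketch, not the authors' SEFormer code; the kernel sizes, channel count, and activation are assumptions.

```python
import torch
import torch.nn as nn

class SeparableMultiscaleConv1d(nn.Module):
    """Illustrative sketch: depthwise convolutions at several kernel scales,
    concatenated and merged by a pointwise (1x1) convolution."""
    def __init__(self, channels, kernel_sizes=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernel_sizes  # groups=channels -> depthwise, per-channel filters
        ])
        # pointwise conv integrates the multiscale branches across channels
        self.pointwise = nn.Conv1d(channels * len(kernel_sizes), channels, 1)
        self.act = nn.GELU()

    def forward(self, x):          # x: (batch, channels, length)
        multi = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(self.pointwise(multi))

block = SeparableMultiscaleConv1d(channels=16)
print(block(torch.randn(2, 16, 1024)).shape)  # -> torch.Size([2, 16, 1024])
```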
Located in northern China, the Hetao Plain is an important agro-economic zone and population centre. The deterioration of local groundwater quality has had a serious impact on human health and economic development, and groundwater vulnerability assessment (GVA) has become an essential task for identifying the current status and development trend of groundwater quality. In this study, Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models are integrated, with a self-attention (SA) mechanism, to realize spatio-temporal prediction of regional groundwater vulnerability. The study first builds the CNN-LSTM model with the SA mechanism and evaluates its prediction accuracy for groundwater vulnerability against common machine learning models such as Support Vector Machine (SVM), Random Forest (RF), and Extreme Gradient Boosting (XGBoost). The results indicate that the CNN-LSTM model outperforms these models, demonstrating its value for groundwater vulnerability assessment. The predictions indicate an increased risk of groundwater vulnerability in the study area over the coming years, attributable to the synergistic impact of global climate anomalies and intensified local human activities. Moreover, the overall groundwater vulnerability risk in the entire region has increased, evident from both the notably high value and standard deviation. This suggests that the spatial variability of groundwater vulnerability in the area will expand in the future under the sustained progression of climate change and human activities. The model can be adapted for diverse applications across regional environmental assessment, pollution prediction, and risk statistics. This study holds particular significance for ecological protection and groundwater resource management.
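As a rough illustration of the CNN-LSTM-SA pipeline described above, the following PyTorch sketch chains a 1-D convolution, an LSTM, and a self-attention layer. All layer sizes and the single regression head are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CNNLSTMSelfAttention(nn.Module):
    """Hypothetical minimal pipeline: 1-D CNN for local patterns, LSTM for
    temporal dynamics, single-head self-attention to weight time steps."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, hidden, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=1, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # vulnerability index per sequence

    def forward(self, x):                 # x: (batch, time, n_features)
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)   # (batch, time, hidden)
        h, _ = self.lstm(h)
        h, _ = self.attn(h, h, h)                         # self-attention over time
        return self.head(h[:, -1])                        # predict from last step

model = CNNLSTMSelfAttention(n_features=8)                # 8 hypothetical indicators
print(model(torch.randn(4, 12, 8)).shape)                 # -> torch.Size([4, 1])
```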
A healthy brain is vital to every person, since the brain controls every movement and emotion. Sometimes, brain cells grow uncontrollably and become cancerous; these cancerous cells are called brain tumors. For diagnosed patients, their lives depend mainly on early diagnosis of these tumors to enable suitable treatment plans. Nowadays, physicians and radiologists rely on Magnetic Resonance Imaging (MRI) pictures for their clinical evaluations of brain tumors. These evaluations are time-consuming, expensive, and require highly skilled expertise to provide an accurate diagnosis. Academia and industry have recently partnered to implement automatic solutions that diagnose the disease with high accuracy. Owing to their accuracy, some of these solutions rely on deep-learning (DL) methodologies, which have become important in the diagnosis process of identification and classification. Therefore, there is a need for a solid and robust deep-learning approach to diagnose brain tumors. The purpose of this study is to develop an intelligent automatic framework for brain tumor diagnosis. The proposed solution is based on a novel dense dynamic residual self-attention transfer adaptive learning fusion approach (NDDRSATALFA), built on two deep-learning networks, VGG19 and UNET, to identify and classify brain tumors. In addition, the solution applies a transfer learning approach to exchange extracted features and data between the two neural networks. The presented framework is trained, validated, and tested on six public MRI datasets to detect brain tumors and categorize them into three classes: glioma, meningioma, and pituitary. The proposed framework yielded remarkable results on the evaluated performance indicators: 99.32% accuracy, 98.74% sensitivity, 98.89% specificity, 99.01% Dice, 98.93% Area Under the Curve (AUC), and 99.81% F1-score. A comparative analysis with recent state-of-the-art methods shows that NDDRSATALFA offers an admirable level of reliability in supporting the timely identification of diverse brain tumors. Moreover, the framework can be applied by healthcare providers to assist radiologists, pathologists, and physicians in their evaluations. The attained outcomes open doors for advanced automatic solutions that improve clinical evaluations and support reasonable treatment plans.
Medical image analysis based on deep learning has become an important technical requirement in the field of smart healthcare. In view of the difficulties in collaboratively modeling local details and global features in multimodal ophthalmic image analysis, as well as the information redundancy in cross-modal data fusion, this paper proposes a multimodal fusion framework based on cross-modal collaboration and a weighted attention mechanism. For feature extraction, the framework collaboratively extracts local fine-grained features and global structural dependencies through a parallel dual-branch architecture, overcoming the limitation of traditional single-modality models that capture either local or global information. For the fusion strategy, the framework designs a cross-modal dynamic fusion strategy that combines overlapping multi-head self-attention modules with a bidirectional feature alignment mechanism, addressing the bottlenecks of low feature interaction efficiency and excessive attention fusion computation in traditional parallel fusion. It further introduces cross-domain local integration, which enhances the representation of lesion areas through pixel-level feature recalibration and improves diagnostic robustness in complex cases. Experiments show that the framework exhibits excellent feature expression and generalization performance in cross-domain scenarios spanning ophthalmic medical images and natural images, providing a high-precision, low-redundancy fusion paradigm for multimodal medical image analysis and promoting the upgrade of intelligent diagnosis and treatment from single-modal static analysis to dynamic decision-making.
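The bidirectional feature alignment described above can be approximated by two cross-attention passes, one in each direction, followed by a learned gate. The sketch below is a hedged reading of that idea; the dimensions, head count, and gating scheme are assumptions rather than the paper's design.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Sketch of bidirectional cross-modal attention: each modality queries
    the other, then the aligned features are merged by a learned gate."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.a2b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.b2a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, feat_a, feat_b):   # (batch, tokens, dim) per modality
        a_aligned, _ = self.a2b(feat_a, feat_b, feat_b)  # A attends to B
        b_aligned, _ = self.b2a(feat_b, feat_a, feat_a)  # B attends to A
        g = self.gate(torch.cat([a_aligned, b_aligned], dim=-1))
        return g * a_aligned + (1 - g) * b_aligned       # weighted fusion

fusion = CrossModalFusion()
out = fusion(torch.randn(2, 49, 128), torch.randn(2, 49, 128))
print(out.shape)  # -> torch.Size([2, 49, 128])
```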
Deep learning-based systems for finger vein recognition have gained rising attention in recent years due to improved efficiency and enhanced security. The performance of existing CNN-based methods is limited by the poor generalization of learned features and the shortage of finger vein training images. Considering these concerns, this work develops a simplified deep transfer learning-based framework for finger vein recognition using an EfficientNet model with a self-attention mechanism. Data augmentation with various geometrical methods is employed to address the shortage of training data required by a deep learning model. The proposed model is tested using K-fold cross-validation on three publicly available datasets: HKPU, FVUSM, and SDUMLA. The developed network is also compared with other modern deep networks to check its effectiveness, and the proposed method is further compared with other existing finger vein recognition (FVR) methods. The experimental results exhibit superior recognition accuracy of the proposed method compared to existing methods, and the developed method proves more effective and less complicated at extracting robust features. The proposed EffAttenNet achieves an accuracy of 98.14% on HKPU, 99.03% on FVUSM, and 99.50% on SDUMLA.
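A minimal stand-in for the described EfficientNet-plus-self-attention design might look as follows in PyTorch/torchvision. The attention head, pooling, and class count are assumptions; `weights=None` keeps the sketch self-contained, whereas the paper relies on transfer learning from pretrained weights.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class EffAttenNetSketch(nn.Module):
    """Illustrative stand-in (not the authors' code): EfficientNet-B0 features
    followed by self-attention over spatial positions and a classifier."""
    def __init__(self, n_classes):
        super().__init__()
        # for transfer learning, pass weights="IMAGENET1K_V1" instead of None
        self.backbone = efficientnet_b0(weights=None).features  # (B,1280,h,w)
        self.attn = nn.MultiheadAttention(1280, num_heads=8, batch_first=True)
        self.fc = nn.Linear(1280, n_classes)

    def forward(self, x):                       # x: (B, 3, H, W)
        f = self.backbone(x)                    # (B, 1280, h, w)
        tokens = f.flatten(2).transpose(1, 2)   # (B, h*w, 1280) spatial tokens
        tokens, _ = self.attn(tokens, tokens, tokens)
        return self.fc(tokens.mean(dim=1))      # pooled classification

net = EffAttenNetSketch(n_classes=100)          # hypothetical number of fingers
print(net(torch.randn(1, 3, 224, 224)).shape)   # -> torch.Size([1, 100])
```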
As the complexity of scientific satellite missions increases, the requirements for their magnetic fields, magnetic field fluctuations, and even magnetic field gradients and variations become increasingly stringent. Additionally, there is a growing need to address the alternating magnetic fields produced by the spacecraft itself. This paper introduces a novel modeling method for spacecraft magnetic dipoles using an integrated self-attention mechanism and a Transformer combined with Kolmogorov-Arnold Networks. The self-attention mechanism captures correlations among globally sparse data, establishing dependencies between sparse magnetometer readings. Concurrently, the Kolmogorov-Arnold Network, proficient in modeling implicit numerical relationships between data features, enhances the ability to learn subtle patterns. Comparative experiments validate the capability of the proposed method to precisely model magnetic dipoles, achieving maximum Root Mean Square Errors of 24.06 mA·m² and 0.32 cm for size and location modeling, respectively. The spacecraft magnetic model established using this method accurately computes magnetic fields and alternating magnetic fields at designated surfaces or points. This approach facilitates the rapid and precise construction of individual and complete spacecraft magnetic models, enabling the verification of magnetic specifications from the spacecraft design phase.
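Kolmogorov-Arnold layers replace fixed activations with learnable univariate functions on each input-output edge. One common way to realize this is to expand each edge function over a fixed basis; the sketch below uses Gaussian bumps, which is our assumption, not the paper's construction, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class KANLayerSketch(nn.Module):
    """KAN-style layer sketch: every input-output edge gets its own learnable
    univariate function, expanded over a fixed grid of Gaussian bumps."""
    def __init__(self, in_dim, out_dim, n_basis=8):
        super().__init__()
        self.register_buffer("centers", torch.linspace(-2, 2, n_basis))
        # one coefficient per (output, input, basis function)
        self.coef = nn.Parameter(torch.randn(out_dim, in_dim, n_basis) * 0.1)

    def forward(self, x):                          # x: (batch, in_dim)
        # Gaussian basis values: (batch, in_dim, n_basis)
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) ** 2))
        # each output sums learnable univariate functions of each input
        return torch.einsum("bif,oif->bo", phi, self.coef)

layer = KANLayerSketch(in_dim=6, out_dim=3)        # e.g., 6 field readings in
print(layer(torch.randn(4, 6)).shape)              # -> torch.Size([4, 3])
```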
The development of deep learning has made non-biochemical methods for molecular property prediction and screening a reality, which can increase experimental speed and reduce the cost of relevant experiments. There are currently two main approaches to representing molecules: (a) representing molecules by fixed molecular descriptors, and (b) representing molecules by graph convolutional neural networks. Both representation methods have achieved results in their respective experiments. Building on past efforts, we propose a Dual Self-attention Fusion Message Neural Network (DSFMNN), which combines a dual self-attention mechanism with a graph convolutional neural network. DSFMNN has two advantages: (1) the dual self-attention mechanism focuses not only on the relationships between individual subunits in a molecule but also on the relationships between the atoms and chemical bonds contained in each subunit; (2) on the directed molecular graph, a message delivery approach centered on directed molecular bonds is used. We test the model on eight publicly available datasets and compare its performance with several existing models. The current experimental results show that DSFMNN outperforms previous models on the datasets applied in this paper.
Currently, most trains are equipped with dedicated cameras for capturing pantograph videos. Pantographs are the core of the high-speed-railway pantograph-catenary system, and their failure directly affects the normal operation of high-speed trains. However, given the complex and variable real-world operating conditions of high-speed railways, no existing real-time, robust pantograph fault-detection method can handle large volumes of surveillance video, so real-time monitoring and analysis of pantographs is of paramount importance. Our study presents a real-time intelligent detection technology for identifying faults in high-speed railway pantographs that fuses self-attention and convolution features. We developed lightweight multi-scale feature-extraction and fault-detection models based on deep learning to detect pantograph anomalies. Compared with traditional methods, this approach achieves high recall and accuracy in pantograph recognition, accurately pinpointing issues such as discharge sparks, pantograph horns, and carbon pantograph-slide malfunctions. After experimentation and validation with actual surveillance videos of electric multiple-unit trains, our algorithmic model demonstrates real-time, high-accuracy performance even under complex operating conditions.
Fault diagnosis is important for maintaining the safety and effectiveness of chemical processes. Considering the multivariate, nonlinear, and dynamic characteristics of chemical processes, many time-series-based data-driven fault diagnosis methods have been developed in recent years. However, existing methods suffer from the long-term dependency problem and are difficult to train due to their sequential training procedure. To overcome these problems, a novel fault diagnosis method based on time series and hierarchical multihead self-attention (HMSAN) is proposed for chemical processes. First, a sliding window strategy is adopted to construct the normalized time-series dataset. Second, the HMSAN is developed to extract time-relevant features from the time-series process data; it improves the basic self-attention model in both width and depth. With the multihead structure, HMSAN can attend to different aspects of a complicated chemical process and obtain global dynamic features. However, the parallel heads produce redundant information, which does not improve diagnosis performance; with the hierarchical structure, this redundancy is reduced and deep local time-related features are further extracted. Besides, a novel many-to-one training strategy is introduced for HMSAN to simplify the training procedure and capture long-term dependencies. Finally, the effectiveness of the proposed method is demonstrated on two chemical cases. The experimental results show that the proposed method performs well on time-series industrial data and outperforms state-of-the-art approaches.
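The sliding-window preprocessing and many-to-one readout can be sketched directly. The encoder below uses a stock Transformer encoder as a stand-in for the paper's hierarchical multihead design, and all sizes (52 variables and 21 classes, as in Tennessee-Eastman-style benchmarks) are illustrative.

```python
import torch
import torch.nn as nn

def sliding_windows(series, width, stride=1):
    """Build (n_windows, width, n_vars) training windows from a normalized
    multivariate series of shape (time, n_vars)."""
    idx = torch.arange(0, series.shape[0] - width + 1, stride)
    return torch.stack([series[i:i + width] for i in idx])

class ManyToOneSAClassifier(nn.Module):
    """Sketch: stacked multihead self-attention encodes a window; a single
    pooled representation (many-to-one) predicts the fault class."""
    def __init__(self, n_vars, n_classes, dim=64, heads=4, depth=2):
        super().__init__()
        self.embed = nn.Linear(n_vars, dim)
        enc_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=depth)
        self.out = nn.Linear(dim, n_classes)

    def forward(self, x):                       # x: (batch, width, n_vars)
        h = self.encoder(self.embed(x))
        return self.out(h.mean(dim=1))          # pooled many-to-one readout

windows = sliding_windows(torch.randn(500, 52), width=32)   # toy process data
clf = ManyToOneSAClassifier(n_vars=52, n_classes=21)
print(clf(windows[:8]).shape)                   # -> torch.Size([8, 21])
```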
Aerial threat assessment is a crucial link in modern air combat, whose result counts a great deal for commanders to make decisions. With the consideration that the existing threat assessment methods have difficulties in dealing with high dimensional time series target data, a threat assessment method based on self-attention mechanism and gated recurrent unit (SAGRU) is proposed. Firstly, a threat feature system including air combat situations and capability features is established. Moreover, a data augmentation process based on fractional Fourier transform (FRFT) is applied to extract more valuable information from time series situation features. Furthermore, aiming to capture key characteristics of battlefield evolution, a bidirectional GRU and SA mechanisms are designed for enhanced features. Subsequently, after the concatenation of the processed air combat situation and capability features, the target threat level will be predicted by fully connected neural layers and the softmax classifier. Finally, in order to validate this model, an air combat dataset generated by a combat simulation system is introduced for model training and testing. The comparison experiments show the proposed model has structural rationality and can perform threat assessment faster and more accurately than the other existing models based on deep learning.
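A minimal version of the BiGRU-plus-self-attention classifier might look like this. The feature count, number of threat levels, and head sizes are placeholders, and the FRFT augmentation step is omitted.

```python
import torch
import torch.nn as nn

class SAGRUSketch(nn.Module):
    """Hedged sketch of the SAGRU idea: a bidirectional GRU encodes the time
    series of situation features, self-attention re-weights its outputs, and
    a softmax layer scores the threat level."""
    def __init__(self, n_features, n_levels, hidden=32):
        super().__init__()
        self.bigru = nn.GRU(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=2,
                                          batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_levels)

    def forward(self, x):                       # x: (batch, time, n_features)
        h, _ = self.bigru(x)                    # (batch, time, 2*hidden)
        h, _ = self.attn(h, h, h)
        return self.fc(h.mean(dim=1)).softmax(dim=-1)

model = SAGRUSketch(n_features=10, n_levels=5)  # 5 hypothetical threat levels
print(model(torch.randn(4, 20, 10)).sum(dim=-1))  # each row sums to 1
```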
Visual object tracking plays a crucial role in computer vision. In recent years, researchers have proposed various methods to achieve high-performance object tracking. Among these, Transformer-based methods have become a research hotspot due to their ability to model information globally and contextually. However, current Transformer-based object tracking methods still face challenges such as low tracking accuracy and redundant feature information. In this paper, we introduce the self-calibration multi-head self-attention Transformer (SMSTracker) as a solution to these challenges. It employs a hybrid tensor-decomposition self-organizing multi-head self-attention Transformer mechanism, which not only compresses and accelerates Transformer operations but also significantly reduces redundant data, thereby enhancing the accuracy and efficiency of tracking. Additionally, we introduce a self-calibration attention fusion block to resolve the attention ambiguities and inconsistencies common in traditional tracking methods, ensuring the stability and reliability of tracking performance across various scenarios. Experimental results show that SMSTracker achieves competitive performance in visual object tracking, demonstrating its potential to provide more robust and efficient tracking solutions in real-world applications.
Satellite-terrestrial networks can transcend the geographical constraints of traditional communication networks, enabling global coverage and offering users ubiquitous computing power support; they are an important direction for future communications. In this paper, we consider a multi-scenario network model under the coverage of a low earth orbit (LEO) satellite, which can provide computing resources to users in remote areas to improve task processing efficiency. However, LEO satellites have limited computing and communication resources, and the channels are time-varying and complex, which makes the extraction of state information difficult. We therefore study the dynamic resource management problem of joint computing and communication resource allocation and power control for multi-access edge computing (MEC). To tackle this problem, we transform it into a Markov decision process (MDP) and propose the self-attention based dynamic resource management (SABDRM) algorithm, which effectively extracts state information features to enhance the training process. Simulation results show that the proposed algorithm effectively reduces the long-term average delay and energy consumption of the tasks.
The Ultra-Wideband (UWB) location-based service is receiving more and more attention due to its high ranging accuracy and good time resolution. However, Non-Line-of-Sight (NLOS) propagation may reduce the ranging accuracy of UWB localization systems in indoor environments, so it is important to identify LOS and NLOS propagation before taking measures to improve UWB localization accuracy. In this paper, a deep learning-based UWB NLOS/LOS classification algorithm called FCN-Attention is proposed. It utilizes a Fully Convolutional Network (FCN) to improve feature extraction and a self-attention mechanism to enhance the feature description of the data, thereby improving classification accuracy. The algorithm is evaluated on an open-source dataset, a locally collected dataset, and a mixed dataset created from the two. The experimental results show that FCN-Attention achieves classification accuracies of 88.24% on the open-source dataset, 100% on the locally collected dataset, and 92.01% on the mixed dataset, better than the other evaluated NLOS/LOS classification algorithms in most scenarios considered in this paper.
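The FCN-plus-attention structure can be sketched as a 1-D fully convolutional trunk over the channel impulse response (CIR) followed by self-attention. Filter counts, kernel sizes, and the 152-tap input length are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class FCNAttentionSketch(nn.Module):
    """Illustrative FCN-Attention-style binary classifier for UWB channel
    impulse responses; layer sizes are placeholders."""
    def __init__(self, dim=64):
        super().__init__()
        self.fcn = nn.Sequential(                # fully convolutional trunk
            nn.Conv1d(1, dim, 8, padding=4), nn.BatchNorm1d(dim), nn.ReLU(),
            nn.Conv1d(dim, dim, 5, padding=2), nn.BatchNorm1d(dim), nn.ReLU())
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.fc = nn.Linear(dim, 2)              # LOS vs. NLOS

    def forward(self, cir):                      # cir: (batch, 1, samples)
        h = self.fcn(cir).transpose(1, 2)        # (batch, steps, dim)
        h, _ = self.attn(h, h, h)                # attend across CIR taps
        return self.fc(h.mean(dim=1))

clf = FCNAttentionSketch()
print(clf(torch.randn(4, 1, 152)).shape)         # hypothetical 152-tap CIR -> (4, 2)
```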
The Industrial Internet of Things (IIoT) is a pervasive network of interlinked smart devices that provide a variety of intelligent computing services in industrial environments. Many IIoT nodes handle confidential data (such as medical, transportation, and military data), making them attractive targets for hostile intruders due to their openness and varied structure. Intrusion Detection Systems (IDS) based on Machine Learning (ML) and Deep Learning (DL) techniques have received significant attention. However, existing ML- and DL-based IDS still face a number of obstacles. For instance, existing DL approaches require a substantial quantity of data for effective performance, which is not feasible on low-power, low-memory devices, and imbalanced or scarce data can lead to low performance. This paper proposes a self-attention convolutional neural network (SACNN) architecture for detecting malicious activity in IIoT networks, together with a feature extraction method that selects the most significant features. The proposed architecture has a self-attention layer to calculate attention over the input and convolutional neural network (CNN) layers to process the attended features for prediction. The performance of the SACNN architecture is evaluated on the Edge-IIoTset and X-IIoTID datasets, which encompass the behaviours of contemporary IIoT communication protocols, the operations of state-of-the-art devices, various attack types, and diverse attack scenarios.
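As described, SACNN puts a self-attention layer before the CNN layers. A hedged sketch treating each tabular feature as a token could look like this; all sizes are placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class SACNNSketch(nn.Module):
    """Sketch of the SACNN idea: a self-attention layer weights the input
    features first, then CNN layers process the attended features."""
    def __init__(self, n_features, n_classes, dim=32):
        super().__init__()
        self.embed = nn.Linear(1, dim)           # each scalar feature -> token
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.cnn = nn.Sequential(
            nn.Conv1d(dim, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1))
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                        # x: (batch, n_features)
        t = self.embed(x.unsqueeze(-1))          # (batch, n_features, dim)
        t, _ = self.attn(t, t, t)                # input attention over features
        h = self.cnn(t.transpose(1, 2)).squeeze(-1)
        return self.fc(h)

ids = SACNNSketch(n_features=61, n_classes=15)   # placeholder feature/attack counts
print(ids(torch.randn(8, 61)).shape)             # -> torch.Size([8, 15])
```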
Frequent missing values in radar-derived time-series tracks of aerial targets (RTT-AT) lead to significant challenges in subsequent data-driven tasks. However, most imputation research focuses on random missing (RM), which differs significantly from the common missing patterns of RTT-AT, and methods designed for RM may degrade or fail when applied to RTT-AT imputation. Conventional autoregressive deep learning methods are also prone to error accumulation and long-term dependency loss. In this paper, a non-autoregressive imputation model is proposed for two common missing patterns in RTT-AT. Our model consists of two probabilistic sparse diagonal masking self-attention (PSDMSA) units and a weight fusion unit. It learns missing values by combining the representations output by the two units, aiming to minimize the difference between the imputed values and their actual values. The PSDMSA units effectively capture temporal dependencies and attribute correlations between time steps, improving imputation quality. The weight fusion unit automatically updates the weights of the output representations from the two units to obtain a more accurate final representation. The experimental results indicate that, across varying missing rates in the two missing patterns, our model consistently outperforms other methods in imputation performance and exhibits a low frequency of deviations in estimates for specific missing entries. Compared to the state-of-the-art autoregressive deep learning imputation model Bidirectional Recurrent Imputation for Time Series (BRITS), our model reduces mean absolute error (MAE) by 31%–50%. The model also trains 4 to 8 times faster than both BRITS and a standard Transformer on the same dataset. Finally, ablation experiments demonstrate that the PSDMSA units, the weight fusion unit, the cascade network design, and the imputation loss all enhance imputation performance, confirming the efficacy of our design.
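The diagonal-masking idea can be shown with a standard attention mask that blocks each time step from attending to itself, forcing reconstruction from the other steps. The probabilistic sparsity of the paper's PSDMSA is omitted here, and the two-unit weight fusion is sketched with a simple learned gate; all of this is illustrative.

```python
import torch
import torch.nn as nn

class DMSAUnit(nn.Module):
    """Diagonally masked self-attention: a step cannot attend to itself, so a
    missing value must be reconstructed from other steps (sketch only)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                        # x: (batch, steps, dim)
        mask = torch.eye(x.shape[1], dtype=torch.bool, device=x.device)
        out, _ = self.attn(x, x, x, attn_mask=mask)  # True = blocked position
        return out

class WeightFusionImputer(nn.Module):
    """Two DMSA units whose outputs are blended by learned per-step weights."""
    def __init__(self, dim):
        super().__init__()
        self.unit1, self.unit2 = DMSAUnit(dim), DMSAUnit(dim)
        self.weight = nn.Sequential(nn.Linear(2 * dim, 1), nn.Sigmoid())

    def forward(self, x):
        h1, h2 = self.unit1(x), self.unit2(x)
        w = self.weight(torch.cat([h1, h2], dim=-1))
        return w * h1 + (1 - w) * h2

model = WeightFusionImputer(dim=32)
print(model(torch.randn(2, 10, 32)).shape)       # -> torch.Size([2, 10, 32])
```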
In the application of aerial target recognition, on the one hand, the recognition error produced by a single sensor measurement is relatively large due to the impact of noise. On the other hand, it is difficult to apply machine learning methods to improve intelligence and recognition performance when few or no actual measurement samples are available. Aiming at these problems, an aerial target recognition algorithm based on self-attention and the Long Short-Term Memory network (LSTM) is proposed. LSTM can effectively extract temporal dependencies. The attention mechanism calculates the weight of each input element and applies the weight to the hidden state of the LSTM, thereby adjusting the LSTM's attention to the input. This combination retains the learning ability of LSTM and introduces the advantages of the attention mechanism, giving the model stronger feature extraction ability and adaptability when processing sequence data. In addition, based on prior information about the multidimensional characteristics of the target, the three-point estimation method is adopted to simulate an aerial target recognition dataset for training the recognition model. The experimental results show that the proposed algorithm achieves more than 91% recognition accuracy, a lower false alarm rate, and higher robustness compared with multi-attribute decision-making (MADM) based on fuzzy numbers.
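The attention-over-hidden-states mechanism described here has a compact classical form: score each LSTM hidden state, softmax the scores over time, and classify from the weighted sum. The sketch below assumes the feature and class counts.

```python
import torch
import torch.nn as nn

class AttentionLSTMSketch(nn.Module):
    """Sketch of the described combination: an LSTM encodes the measurement
    sequence, a learned score weights every hidden state, and the weighted
    sum feeds the classifier."""
    def __init__(self, n_features, n_classes, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)        # attention energy per step
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                        # x: (batch, time, n_features)
        h, _ = self.lstm(x)                      # (batch, time, hidden)
        w = torch.softmax(self.score(h), dim=1)  # weights over time steps
        context = (w * h).sum(dim=1)             # attention-weighted summary
        return self.fc(context)

model = AttentionLSTMSketch(n_features=6, n_classes=4)  # hypothetical sizes
print(model(torch.randn(8, 15, 6)).shape)        # -> torch.Size([8, 4])
```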
A false data injection attack (FDIA) can affect the state estimation of the power grid by tampering with measured grid data, thereby undermining the stable operation of the smart grid. Existing work usually trains a detection model by fusing data-driven features from diverse power data streams. Such features, however, cannot effectively capture the differences between noisy data and attack samples, so slight noise disturbances in the power grid may cause a large number of false detections of FDIA attacks. To address this problem, this paper designs a deep collaborative self-attention network for robust FDIA detection, in which the spatio-temporal features of cascaded FDIA attacks are fully integrated. First, a graph convolution module based on high-order Chebyshev polynomials is designed to effectively aggregate spatial information between grid nodes, and a spatial self-attention mechanism dynamically assigns attention weights to each node, guiding the network to pay more attention to node information that is conducive to FDIA detection. Furthermore, a bidirectional Long Short-Term Memory (LSTM) network is introduced for time-series modeling and long-term dependency analysis of power grid data, and a temporal self-attention mechanism describes the time correlation of the data by assigning different weights to different time steps. The designed deep collaborative network can effectively mine subtle perturbations from spatio-temporal feature information, efficiently distinguish power grid noise from FDIA attacks, and adapt to diverse attack intensities. Extensive experiments demonstrate that our method achieves efficient detection performance on actual load data from the New York Independent System Operator (NYISO) in the IEEE 14-, 39-, and 118-bus systems, and outperforms state-of-the-art FDIA detection schemes in detection accuracy and robustness.
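The high-order Chebyshev graph convolution has a standard recursive form, \(T_k(\tilde{L}) = 2\tilde{L}\,T_{k-1}(\tilde{L}) - T_{k-2}(\tilde{L})\) on the scaled Laplacian \(\tilde{L}\). The sketch below implements that recursion; the random matrix stands in for a real scaled grid Laplacian (for a real graph, \(\tilde{L} = 2L/\lambda_{\max} - I\)).

```python
import torch
import torch.nn as nn

class ChebGraphConv(nn.Module):
    """Order-K Chebyshev graph convolution sketch: aggregates K-hop
    neighbourhood information via the recursion T_k = 2*L*T_{k-1} - T_{k-2}."""
    def __init__(self, in_dim, out_dim, K=3):
        super().__init__()
        self.theta = nn.Parameter(torch.randn(K, in_dim, out_dim) * 0.1)

    def forward(self, x, L):                     # x: (nodes, in_dim), L scaled
        Tk_prev, Tk = x, L @ x                   # T_0 x and T_1 x
        out = Tk_prev @ self.theta[0] + Tk @ self.theta[1]
        for k in range(2, self.theta.shape[0]):
            Tk, Tk_prev = 2 * L @ Tk - Tk_prev, Tk   # Chebyshev recursion
            out = out + Tk @ self.theta[k]
        return torch.relu(out)

L = torch.eye(14) - torch.rand(14, 14) / 14      # stand-in scaled Laplacian
conv = ChebGraphConv(in_dim=2, out_dim=16)       # e.g., 14 nodes as in IEEE 14-bus
print(conv(torch.randn(14, 2), L).shape)         # -> torch.Size([14, 16])
```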
Early and timely diagnosis of stroke is critical for effective treatment, and the electroencephalogram (EEG) offers a low-cost, non-invasive solution. However, the shortage of high-quality patient EEG data often hampers the accuracy of diagnostic classification methods based on deep learning. To address this issue, our study designed a deep data amplification model named Progressive Conditional Generative Adversarial Network with Efficient Approximating Self-Attention (PCGAN-EASA), which incrementally improves the quality of generated EEG features and can yield full-scale, fine-grained EEG features from low-scale, coarse ones. Specifically, to overcome the limitation of traditional generative models that fail to generate features tailored to individual patient characteristics, we developed an encoder with an efficient approximating self-attention mechanism. This encoder not only automatically extracts relevant features across different patients but also reduces computational resource consumption. Furthermore, the adversarial and reconstruction loss functions were redesigned to better match the training characteristics of the network and the spatial correlations among electrodes. Extensive experimental results demonstrate that PCGAN-EASA provides the highest generation quality and the lowest computational resource usage among several existing approaches, and significantly improves the accuracy of subsequent stroke classification tasks.
To predict renewable energy sources such as solar power in microgrids more accurately, a hybrid power prediction method is presented in this paper. First, a self-attention mechanism is introduced on top of a bidirectional gated recurrent unit network (BiGRU) to explore the time-series characteristics of solar power output and account for the influence of different time nodes on the prediction results. Subsequently, an improved quantum particle swarm optimization (QPSO) algorithm is proposed to optimize the hyperparameters of the combined prediction model. The resulting LQPSO-BiGRU-self-attention hybrid model predicts solar power more effectively. In addition, considering the coordinated utilization of various energy sources such as electricity, hydrogen, and renewable energy, a multi-objective optimization model that considers both economic and environmental costs is constructed, and a two-stage adaptive multi-objective quantum particle swarm optimization algorithm aided by Lévy flights, named MO-LQPSO, is proposed for the comprehensive optimal scheduling of a multi-energy microgrid system. This algorithm effectively balances global and local search capabilities and enhances the solution of complex nonlinear problems. The effectiveness and superiority of the proposed scheme are verified through comparative simulations.
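Quantum particle swarm optimization, used here for hyperparameter tuning, has a compact canonical update. The sketch below implements standard QPSO, not the paper's improved LQPSO variant with Lévy flights, against a toy objective standing in for validation loss.

```python
import numpy as np

def qpso(objective, dim, n_particles=20, iters=100, beta=0.75, seed=0):
    """Minimal quantum-behaved PSO (QPSO) sketch: particles move around a
    local attractor p with a step scaled by distance to the mean best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))
    pbest = x.copy()
    pcost = np.array([objective(p) for p in x])
    for _ in range(iters):
        g = pbest[pcost.argmin()]                 # global best position
        mbest = pbest.mean(axis=0)                # mean of personal bests
        phi = rng.random((n_particles, dim))
        u = rng.random((n_particles, dim))
        p = phi * pbest + (1 - phi) * g           # local attractor
        sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
        x = p + sign * beta * np.abs(mbest - x) * np.log(1.0 / (u + 1e-12))
        cost = np.array([objective(xi) for xi in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
    return pbest[pcost.argmin()], pcost.min()

# toy usage: minimize a quadratic as a stand-in for validation loss
best, val = qpso(lambda v: float(np.sum((v - 0.3) ** 2)), dim=3)
print(best.round(3), round(val, 6))
```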
Separable nonlinear models are widely used in various fields such as time series analysis, system modeling, and machine learning, due to their flexible structures and ability to capture nonlinear behavior of data. However, identifying the parameters of these models is challenging, especially when sparse models with better interpretability are desired by practitioners. Previous theoretical and practical studies have shown that variable projection (VP) is an efficient method for identifying separable nonlinear models, but these are based on the \(L_2\) penalty of model parameters, which cannot be directly extended to deal with sparse constraints. Based on the exploration of the structural characteristics of separable models, this paper proposes gradient-based and trust-region-based variable projection algorithms, which mainly solve two key problems: how to eliminate linear parameters under a sparse constraint; and how to deal with the coupling relationship between linear and nonlinear parameters in the model. Finally, numerical experiments on synthetic data and real time series data are conducted to verify the effectiveness of the proposed algorithms.
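Variable projection itself is easy to demonstrate on a toy separable model: the inner least-squares solve eliminates the linear coefficients, so the outer optimizer searches only over the nonlinear parameters. The sketch below uses classical VP without the sparse constraint that the paper's algorithms address; the two-exponential model is purely illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Separable model: y ≈ Phi(alpha) @ c, with linear params c and nonlinear
# params alpha. VP eliminates c, leaving a residual in alpha only.
def phi(alpha, t):
    return np.column_stack([np.exp(-a * t) for a in alpha])

def vp_residual(alpha, t, y):
    P = phi(alpha, t)
    c, *_ = np.linalg.lstsq(P, y, rcond=None)   # inner linear solve
    return P @ c - y                            # residual seen by outer solver

rng = np.random.default_rng(1)
t = np.linspace(0, 4, 200)
y = 2.0 * np.exp(-0.5 * t) + 1.0 * np.exp(-3.0 * t) \
    + 0.01 * rng.standard_normal(200)
fit = least_squares(vp_residual, x0=[0.1, 1.0], args=(t, y))
c_hat, *_ = np.linalg.lstsq(phi(fit.x, t), y, rcond=None)
print("alpha:", fit.x.round(3), "c:", c_hat.round(3))
```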
基金supported by the National Natural Science Foundation of China(No.52277055).
文摘Traditional data-driven fault diagnosis methods depend on expert experience to manually extract effective fault features of signals,which has certain limitations.Conversely,deep learning techniques have gained prominence as a central focus of research in the field of fault diagnosis by strong fault feature extraction ability and end-to-end fault diagnosis efficiency.Recently,utilizing the respective advantages of convolution neural network(CNN)and Transformer in local and global feature extraction,research on cooperating the two have demonstrated promise in the field of fault diagnosis.However,the cross-channel convolution mechanism in CNN and the self-attention calculations in Transformer contribute to excessive complexity in the cooperative model.This complexity results in high computational costs and limited industrial applicability.To tackle the above challenges,this paper proposes a lightweight CNN-Transformer named as SEFormer for rotating machinery fault diagnosis.First,a separable multiscale depthwise convolution block is designed to extract and integrate multiscale feature information from different channel dimensions of vibration signals.Then,an efficient self-attention block is developed to capture critical fine-grained features of the signal from a global perspective.Finally,experimental results on the planetary gearbox dataset and themotor roller bearing dataset prove that the proposed framework can balance the advantages of robustness,generalization and lightweight compared to recent state-of-the-art fault diagnosis models based on CNN and Transformer.This study presents a feasible strategy for developing a lightweight rotating machinery fault diagnosis framework aimed at economical deployment.
基金supported by the National Key Research and Development Program of China(No.2021YFA0715900).
文摘Located in northern China,the Hetao Plain is an important agro-economic zone and population centre.The deterioration of local groundwater quality has had a serious impact on human health and economic development.Nowadays,the groundwater vulnerability assessment(GVA)has become an essential task to identify the current status and development trend of groundwater quality.In this study,the Convolutional Neural Network(CNN)and Long Short-Term Memory(LSTM)models are integrated to realize the spatio-temporal prediction of regional groundwater vulnerability by introducing the Self-attention mechanism.The study firstly builds the CNN-LSTM modelwith self-attention(SA)mechanism and evaluates the prediction accuracy of the model for groundwater vulnerability compared to other common machine learning models such as Support Vector Machine(SVM),Random Forest(RF),and Extreme Gradient Boosting(XGBoost).The results indicate that the CNNLSTM model outperforms thesemodels,demonstrating its significance in groundwater vulnerability assessment.It can be posited that the predictions indicate an increased risk of groundwater vulnerability in the study area over the coming years.This increase can be attributed to the synergistic impact of global climate anomalies and intensified local human activities.Moreover,the overall groundwater vulnerability risk in the entire region has increased,evident fromboth the notably high value and standard deviation.This suggests that the spatial variability of groundwater vulnerability in the area is expected to expand in the future due to the sustained progression of climate change and human activities.The model can be optimized for diverse applications across regional environmental assessment,pollution prediction,and risk statistics.This study holds particular significance for ecological protection and groundwater resource management.
基金funded by the Deanship of Scientific Research(DSR)at King Abdulaziz University,Jeddah,Saudi Arabia under Grant No.(GPIP:1055-829-2024).
文摘A healthy brain is vital to every person since the brain controls every movement and emotion.Sometimes,some brain cells grow unexpectedly to be uncontrollable and cancerous.These cancerous cells are called brain tumors.For diagnosed patients,their lives depend mainly on the early diagnosis of these tumors to provide suitable treatment plans.Nowadays,Physicians and radiologists rely on Magnetic Resonance Imaging(MRI)pictures for their clinical evaluations of brain tumors.These evaluations are time-consuming,expensive,and require expertise with high skills to provide an accurate diagnosis.Scholars and industrials have recently partnered to implement automatic solutions to diagnose the disease with high accuracy.Due to their accuracy,some of these solutions depend on deep-learning(DL)methodologies.These techniques have become important due to their roles in the diagnosis process,which includes identification and classification.Therefore,there is a need for a solid and robust approach based on a deep-learning method to diagnose brain tumors.The purpose of this study is to develop an intelligent automatic framework for brain tumor diagnosis.The proposed solution is based on a novel dense dynamic residual self-attention transfer adaptive learning fusion approach(NDDRSATALFA),carried over two implemented deep-learning networks:VGG19 and UNET to identify and classify brain tumors.In addition,this solution applies a transfer learning approach to exchange extracted features and data within the two neural networks.The presented framework is trained,validated,and tested on six public datasets of MRIs to detect brain tumors and categorize these tumors into three suitable classes,which are glioma,meningioma,and pituitary.The proposed framework yielded remarkable findings on variously evaluated performance indicators:99.32%accuracy,98.74%sensitivity,98.89%specificity,99.01%Dice,98.93%Area Under the Curve(AUC),and 99.81%F1-score.In addition,a comparative analysis with recent state-of-the-art methods was performed and according to the comparative analysis,NDDRSATALFA shows an admirable level of reliability in simplifying the timely identification of diverse brain tumors.Moreover,this framework can be applied by healthcare providers to assist radiologists,pathologists,and physicians in their evaluations.The attained outcomes open doors for advanced automatic solutions that improve clinical evaluations and provide reasonable treatment plans.
基金funded by the Ongoing Research Funding Program(ORF-2025-102),King Saud University,Riyadh,Saudi Arabiaby the Science and Technology Research Programof Chongqing Municipal Education Commission(Grant No.KJQN202400813)by the Graduate Research Innovation Project(Grant Nos.yjscxx2025-269-193 and CYS25618).
文摘Medical image analysis based on deep learning has become an important technical requirement in the field of smart healthcare.In view of the difficulties in collaborative modeling of local details and global features in multimodal image analysis of ophthalmology,as well as the existence of information redundancy in cross-modal data fusion,this paper proposes amultimodal fusion framework based on cross-modal collaboration and weighted attention mechanism.In terms of feature extraction,the framework collaboratively extracts local fine-grained features and global structural dependencies through a parallel dual-branch architecture,overcoming the limitations of traditional single-modality models in capturing either local or global information;in terms of fusion strategy,the framework innovatively designs a cross-modal dynamic fusion strategy,combining overlappingmulti-head self-attention modules with a bidirectional feature alignment mechanism,addressing the bottlenecks of low feature interaction efficiency and excessive attention fusion computations in traditional parallel fusion,and further introduces cross-domain local integration technology,which enhances the representation ability of the lesion area through pixel-level feature recalibration and optimizes the diagnostic robustness of complex cases.Experiments show that the framework exhibits excellent feature expression and generalization performance in cross-domain scenarios of ophthalmic medical images and natural images,providing a high-precision,low-redundancy fusion paradigm for multimodal medical image analysis,and promoting the upgrade of intelligent diagnosis and treatment fromsingle-modal static analysis to dynamic decision-making.
文摘Deep Learning-based systems for Finger vein recognition have gained rising attention in recent years due to improved efficiency and enhanced security.The performance of existing CNN-based methods is limited by the puny generalization of learned features and deficiency of the finger vein image training data.Considering the concerns of existing methods,in this work,a simplified deep transfer learning-based framework for finger-vein recognition is developed using an EfficientNet model of deep learning with a self-attention mechanism.Data augmentation using various geometrical methods is employed to address the problem of training data shortage required for a deep learning model.The proposed model is tested using K-fold cross-validation on three publicly available datasets:HKPU,FVUSM,and SDUMLA.Also,the developed network is compared with other modern deep nets to check its effectiveness.In addition,a comparison of the proposed method with other existing Finger vein recognition(FVR)methods is also done.The experimental results exhibited superior recognition accuracy of the proposed method compared to other existing methods.In addition,the developed method proves to be more effective and less sophisticated at extracting robust features.The proposed EffAttenNet achieves an accuracy of 98.14%on HKPU,99.03%on FVUSM,and 99.50%on SDUMLA databases.
基金supported by the National Key Research and Development Program of China(2020YFC2200901)。
文摘As the complexity of scientific satellite missions increases,the requirements for their magnetic fields,magnetic field fluctuations,and even magnetic field gradients and variations become increasingly stringent.Additionally,there is a growing need to address the alternating magnetic fields produced by the spacecraft itself.This paper introduces a novel modeling method for spacecraft magnetic dipoles using an integrated self-attention mechanism and a transformer combined with Kolmogorov-Arnold Networks.The self-attention mechanism captures correlations among globally sparse data,establishing dependencies b.etween sparse magnetometer readings.Concurrently,the Kolmogorov-Arnold Network,proficient in modeling implicit numerical relationships between data features,enhances the ability to learn subtle patterns.Comparative experiments validate the capability of the proposed method to precisely model magnetic dipoles,achieving maximum Root Mean Square Errors of 24.06 mA·m^(2)and 0.32 cm for size and location modeling,respectively.The spacecraft magnetic model established using this method accurately computes magnetic fields and alternating magnetic fields at designated surfaces or points.This approach facilitates the rapid and precise construction of individual and complete spacecraft magnetic models,enabling the verification of magnetic specifications from the spacecraft design phase.
文摘The development of deep learning has made non-biochemical methods for molecular property prediction screening a reality,which can increase the experimental speed and reduce the experimental cost of relevant experiments.There are currently two main approaches to representing molecules:(a)representing molecules by fixing molecular descriptors,and(b)representing molecules by graph convolutional neural networks.Currently,both of these Representative methods have achieved some results in their respective experiments.Based on past efforts,we propose a Dual Self-attention Fusion Message Neural Network(DSFMNN).DSFMNN uses a combination of dual self-attention mechanism and graph convolutional neural network.Advantages of DSFMNN:(1)The dual self-attention mechanism focuses not only on the relationship between individual subunits in a molecule but also on the relationship between the atoms and chemical bonds contained in each subunit.(2)On the directed molecular graph,a message delivery approach centered on directed molecular bonds is used.We test the performance of the model on eight publicly available datasets and compare the performance with several models.Based on the current experimental results,DSFMNN has superior performance compared to previous models on the datasets applied in this paper.
基金supported by the National Key R&D Program of China(No.2022YFB4301102).
文摘Currently,most trains are equipped with dedicated cameras for capturing pantograph videos.Pantographs are core to the high-speed-railway pantograph-catenary system,and their failure directly affects the normal operation of high-speed trains.However,given the complex and variable real-world operational conditions of high-speed railways,there is no real-time and robust pantograph fault-detection method capable of handling large volumes of surveillance video.Hence,it is of paramount importance to maintain real-time monitoring and analysis of pantographs.Our study presents a real-time intelligent detection technology for identifying faults in high-speed railway pantographs,utilizing a fusion of self-attention and convolution features.We delved into lightweight multi-scale feature-extraction and fault-detection models based on deep learning to detect pantograph anomalies.Compared with traditional methods,this approach achieves high recall and accuracy in pantograph recognition,accurately pinpointing issues like discharge sparks,pantograph horns,and carbon pantograph-slide malfunctions.After experimentation and validation with actual surveillance videos of electric multiple-unit train,our algorithmic model demonstrates real-time,high-accuracy performance even under complex operational conditions.
基金supported by the National Natural Science Foundation of China(62073140,62073141)the Shanghai Rising-Star Program(21QA1401800).
文摘Fault diagnosis is important for maintaining the safety and effectiveness of chemical process.Considering the multivariate,nonlinear,and dynamic characteristic of chemical process,many time-series-based data-driven fault diagnosis methods have been developed in recent years.However,the existing methods have the problem of long-term dependency and are difficult to train due to the sequential way of training.To overcome these problems,a novel fault diagnosis method based on time-series and the hierarchical multihead self-attention(HMSAN)is proposed for chemical process.First,a sliding window strategy is adopted to construct the normalized time-series dataset.Second,the HMSAN is developed to extract the time-relevant features from the time-series process data.It improves the basic self-attention model in both width and depth.With the multihead structure,the HMSAN can pay attention to different aspects of the complicated chemical process and obtain the global dynamic features.However,the multiple heads in parallel lead to redundant information,which cannot improve the diagnosis performance.With the hierarchical structure,the redundant information is reduced and the deep local time-related features are further extracted.Besides,a novel many-to-one training strategy is introduced for HMSAN to simplify the training procedure and capture the long-term dependency.Finally,the effectiveness of the proposed method is demonstrated by two chemical cases.The experimental results show that the proposed method achieves a great performance on time-series industrial data and outperforms the state-of-the-art approaches.
基金supported by the National Natural Science Foundation of China (6202201562088101)+1 种基金Shanghai Municipal Science and Technology Major Project (2021SHZDZX0100)Shanghai Municip al Commission of Science and Technology Project (19511132101)。
文摘Aerial threat assessment is a crucial link in modern air combat, whose result counts a great deal for commanders to make decisions. With the consideration that the existing threat assessment methods have difficulties in dealing with high dimensional time series target data, a threat assessment method based on self-attention mechanism and gated recurrent unit(SAGRU) is proposed. Firstly, a threat feature system including air combat situations and capability features is established. Moreover, a data augmentation process based on fractional Fourier transform(FRFT) is applied to extract more valuable information from time series situation features. Furthermore, aiming to capture key characteristics of battlefield evolution, a bidirectional GRU and SA mechanisms are designed for enhanced features.Subsequently, after the concatenation of the processed air combat situation and capability features, the target threat level will be predicted by fully connected neural layers and the softmax classifier. Finally, in order to validate this model, an air combat dataset generated by a combat simulation system is introduced for model training and testing. The comparison experiments show the proposed model has structural rationality and can perform threat assessment faster and more accurately than the other existing models based on deep learning.
基金supported by the National Natural Science Foundation of China under Grant 62177029the Postgraduate Research&Practice Innovation Program of Jiangsu Province(KYCX21_0740),China.
文摘Visual object tracking plays a crucial role in computer vision.In recent years,researchers have proposed various methods to achieve high-performance object tracking.Among these,methods based on Transformers have become a research hotspot due to their ability to globally model and contextualize information.However,current Transformer-based object tracking methods still face challenges such as low tracking accuracy and the presence of redundant feature information.In this paper,we introduce self-calibration multi-head self-attention Transformer(SMSTracker)as a solution to these challenges.It employs a hybrid tensor decomposition self-organizing multihead self-attention transformermechanism,which not only compresses and accelerates Transformer operations but also significantly reduces redundant data,thereby enhancing the accuracy and efficiency of tracking.Additionally,we introduce a self-calibration attention fusion block to resolve common issues of attention ambiguities and inconsistencies found in traditional trackingmethods,ensuring the stability and reliability of tracking performance across various scenarios.By integrating a hybrid tensor decomposition approach with a self-organizingmulti-head self-attentive transformer mechanism,SMSTracker enhances the efficiency and accuracy of the tracking process.Experimental results show that SMSTracker achieves competitive performance in visual object tracking,promising more robust and efficient tracking systems,demonstrating its potential to providemore robust and efficient tracking solutions in real-world applications.
基金supported by the National Key Research and Development Plan(No.2022YFB2902701)the key Natural Science Foundation of Shenzhen(No.JCYJ20220818102209020).
文摘The satellite-terrestrial networks possess the ability to transcend geographical constraints inherent in traditional communication networks,enabling global coverage and offering users ubiquitous computing power support,which is an important development direction of future communications.In this paper,we take into account a multi-scenario network model under the coverage of low earth orbit(LEO)satellite,which can provide computing resources to users in faraway areas to improve task processing efficiency.However,LEO satellites experience limitations in computing and communication resources and the channels are time-varying and complex,which makes the extraction of state information a daunting task.Therefore,we explore the dynamic resource management issue pertaining to joint computing,communication resource allocation and power control for multi-access edge computing(MEC).In order to tackle this formidable issue,we undertake the task of transforming the issue into a Markov decision process(MDP)problem and propose the self-attention based dynamic resource management(SABDRM)algorithm,which effectively extracts state information features to enhance the training process.Simulation results show that the proposed algorithm is capable of effectively reducing the long-term average delay and energy consumption of the tasks.
基金supported by the National Key Research and Development Program of China[grant No.2016YF B0502200]the Postdoctoral Research Foundation of China[grant No.2020M682480]the Fundamental Research Funds for the Central Universities[grant No.2042021kf0009]。
文摘The Ultra-Wideband(UWB)Location-Based Service is receiving more and more attention due to its high ranging accuracy and good time resolution.However,the None-Line-of-Sight(NLOS)propagation may reduce the ranging accuracy for UWB localization system in indoor environment.So it is important to identify LOS and NLOS propagations before taking proper measures to improve the UWB localization accuracy.In this paper,a deep learning-based UWB NLOS/LOS classification algorithm called FCN-Attention is proposed.The proposed FCN-Attention algorithm utilizes a Fully Convolution Network(FCN)for improving feature extraction ability and a self-attention mechanism for enhancing feature description from the data to improve the classification accuracy.The proposed algorithm is evaluated using an open-source dataset,a local collected dataset and a mixed dataset created from these two datasets.The experiment result shows that the proposed FCN-Attention algorithm achieves classification accuracy of 88.24%on the open-source dataset,100%on the local collected dataset and 92.01%on the mixed dataset,which is better than the results from other evaluated NLOS/LOS classification algorithms in most scenarios in this paper.
Funding: Deputy for Research and Innovation, Ministry of Education, Kingdom of Saudi Arabia, Grant/Award Number: NU/IFC/02/SERC/-/31; Institutional Funding Committee at Najran University, Kingdom of Saudi Arabia.
Abstract: The Industrial Internet of Things (IIoT) is a pervasive network of interlinked smart devices that provide a variety of intelligent computing services in industrial environments. Many IIoT nodes process confidential data (e.g., medical, transportation, and military data), and their openness and varied structure make them reachable targets for hostile intruders. Intrusion Detection Systems (IDS) based on Machine Learning (ML) and Deep Learning (DL) techniques have therefore received significant attention. However, existing ML- and DL-based IDS still face a number of obstacles. For instance, existing DL approaches require a substantial quantity of data for effective performance, which is not feasible on low-power, low-memory devices, and imbalanced or scarce data can lead to poor performance. This paper proposes a self-attention convolutional neural network (SACNN) architecture for detecting malicious activity in IIoT networks, together with a feature extraction method that selects the most significant features. The proposed architecture has a self-attention layer to calculate attention over the input and convolutional neural network (CNN) layers to process the attended features for prediction. The SACNN architecture is evaluated on the Edge-IIoTset and X-IIoTID datasets, which encompass the behaviours of contemporary IIoT communication protocols, the operation of state-of-the-art devices, various attack types, and diverse attack scenarios.
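The following sketch illustrates the described layering under stated assumptions: a self-attention layer weighs the extracted flow features, then CNN layers process the attended features for classification. The feature and class counts are placeholders, not the paper's values.

```python
import torch
import torch.nn as nn

class SACNN(nn.Module):
    """Self-attention over per-feature tokens, then 1-D CNN classification."""
    def __init__(self, embed=32, n_classes=15):
        super().__init__()
        self.embed = nn.Linear(1, embed)               # one token per tabular feature
        self.attn = nn.MultiheadAttention(embed, 4, batch_first=True)
        self.cnn = nn.Sequential(
            nn.Conv1d(embed, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.cls = nn.Linear(64, n_classes)

    def forward(self, x):                              # x: (batch, n_features)
        t = self.embed(x.unsqueeze(-1))                # (batch, features, embed)
        t, _ = self.attn(t, t, t)                      # attention across features
        return self.cls(self.cnn(t.transpose(1, 2)).squeeze(-1))

print(SACNN()(torch.randn(8, 60)).shape)               # torch.Size([8, 15])
```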
Funding: Supported by a Graduate Funded Project (No. JY2022A017).
Abstract: Frequent missing values in radar-derived time-series tracks of aerial targets (RTT-AT) pose significant challenges for subsequent data-driven tasks. However, most imputation research focuses on random missing (RM) patterns, which differ significantly from the missing patterns common in RTT-AT, and methods designed for RM may degrade or fail when applied to RTT-AT imputation. Conventional autoregressive deep learning methods are also prone to error accumulation and loss of long-term dependencies. In this paper, a non-autoregressive imputation model is proposed for two common missing patterns in RTT-AT. The model consists of two probabilistic sparse diagonal masking self-attention (PSDMSA) units and a weight fusion unit. It learns missing values by combining the representations output by the two units, minimizing the difference between the imputed and actual values. The PSDMSA units effectively capture temporal dependencies and attribute correlations between time steps, improving imputation quality, while the weight fusion unit automatically updates the weights of the two units' output representations to obtain a more accurate final representation. Experimental results indicate that, across varying missing rates in the two missing patterns, the model consistently outperforms other methods in imputation performance and rarely deviates substantially on individual missing entries. Compared to Bidirectional Recurrent Imputation for Time Series (BRITS), a state-of-the-art autoregressive deep learning imputation model, the proposed model reduces mean absolute error (MAE) by 31% to 50% and trains 4 to 8 times faster than both BRITS and a standard Transformer on the same dataset. Finally, ablation experiments demonstrate that the PSDMSA units, the weight fusion unit, the cascade network design, and the imputation loss each enhance imputation performance, confirming the efficacy of the design.
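The diagonal-masking ingredient can be sketched as follows (one plausible reading of PSDMSA, not the authors' code): each time step is barred from attending to itself, so a missing value must be reconstructed from the other steps.

```python
import torch
import torch.nn as nn

class DiagonalMaskedSelfAttention(nn.Module):
    """Self-attention where the diagonal of the attention map is masked out,
    forcing each time step to be represented by the remaining steps."""
    def __init__(self, dim=32, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                          # x: (batch, steps, dim)
        n = x.size(1)
        mask = torch.eye(n, dtype=torch.bool, device=x.device)  # True = blocked
        out, _ = self.attn(x, x, x, attn_mask=mask)
        return out

track = torch.randn(2, 50, 32)                     # embedded radar track window
print(DiagonalMaskedSelfAttention()(track).shape)  # torch.Size([2, 50, 32])
```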
Abstract: In aerial target recognition, a single sensor measurement produces relatively large recognition error due to noise, and the scarcity (or absence) of actual measurement samples makes it difficult to apply machine learning methods to improve recognition. To address these problems, an aerial target recognition algorithm based on self-attention and the Long Short-Term Memory network (LSTM) is proposed. The LSTM effectively extracts temporal dependencies, while the attention mechanism calculates a weight for each input element and applies it to the LSTM's hidden states, adjusting the LSTM's focus on the input. This combination retains the learning ability of the LSTM and adds the advantages of the attention mechanism, giving the model stronger feature extraction ability and adaptability when processing sequence data. In addition, based on prior information about the multidimensional characteristics of targets, the three-point estimation method is adopted to simulate an aerial target recognition dataset for training the recognition model. Experimental results show that the proposed algorithm achieves more than 91% recognition accuracy, a lower false alarm rate, and higher robustness than multi-attribute decision-making (MADM) based on fuzzy numbers.
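A compact sketch of the described combination, with attention scores computed over the LSTM hidden states and used as weighted temporal pooling; the feature and class counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentiveLSTM(nn.Module):
    """LSTM encoder with scalar attention weights over its hidden states."""
    def __init__(self, in_dim=4, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)          # one scalar weight per step
        self.cls = nn.Linear(hidden, n_classes)

    def forward(self, seq):                        # seq: (batch, steps, in_dim)
        h, _ = self.lstm(seq)                      # (batch, steps, hidden)
        w = torch.softmax(self.score(h), dim=1)    # attention over time steps
        return self.cls((w * h).sum(dim=1))        # weighted temporal pooling

obs = torch.randn(8, 20, 4)                        # 20 noisy sensor measurements
print(AttentiveLSTM()(obs).shape)                  # torch.Size([8, 3])
```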
Funding: Supported in part by the Research Fund of the Guangxi Key Lab of Multi-Source Information Mining & Security (MIMS21-M-02).
Abstract: A false data injection attack (FDIA) can corrupt the state estimation of the power grid by tampering with measured grid data, thereby disrupting the stable operation of the smart grid. Existing work usually trains a detection model by fusing data-driven features from diverse power data streams, but such features cannot effectively capture the differences between noisy data and attack samples; as a result, slight noise disturbances in the grid may trigger many false FDIA detections. To address this problem, this paper designs a deep collaborative self-attention network for robust FDIA detection, in which the spatio-temporal features of cascaded FDIA attacks are fully integrated. First, a graph convolution module based on high-order Chebyshev polynomials is designed to aggregate spatial information between grid nodes, and a spatial self-attention mechanism dynamically assigns attention weights to each node, guiding the network to focus on node information conducive to FDIA detection. Furthermore, a bi-directional Long Short-Term Memory (LSTM) network performs time-series modeling and long-term dependence analysis of the grid data, and a temporal self-attention mechanism describes the temporal correlation of the data by assigning different weights to different time steps. The designed network can mine subtle perturbations from spatio-temporal feature information, efficiently distinguish grid noise from FDIA attacks, and adapt to diverse attack intensities. Extensive experiments on actual load data from the New York Independent System Operator (NYISO) on the IEEE 14-, 39-, and 118-bus systems demonstrate efficient detection performance, outperforming state-of-the-art FDIA detection schemes in both accuracy and robustness.
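The high-order Chebyshev graph-convolution ingredient can be sketched as below, assuming L_tilde is the scaled Laplacian of the bus network and K the polynomial order; this is an illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ChebConv(nn.Module):
    """Graph convolution via order-K Chebyshev polynomials of the scaled Laplacian."""
    def __init__(self, in_dim, out_dim, K=3):
        super().__init__()
        self.theta = nn.ModuleList(nn.Linear(in_dim, out_dim, bias=False) for _ in range(K))

    def forward(self, x, L_tilde):                 # x: (nodes, in_dim); L_tilde: (nodes, nodes)
        Tx_prev, Tx = x, L_tilde @ x               # T0 = x, T1 = L~ x
        out = self.theta[0](Tx_prev)
        for k in range(1, len(self.theta)):
            out = out + self.theta[k](Tx)
            Tx_prev, Tx = Tx, 2 * L_tilde @ Tx - Tx_prev   # Chebyshev recurrence
        return out

x = torch.randn(14, 8)                             # e.g. IEEE 14-bus measurements
L = torch.eye(14)                                  # placeholder scaled Laplacian
print(ChebConv(8, 16)(x, L).shape)                 # torch.Size([14, 16])
```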
Funding: Supported by the General Program of the National Natural Science Foundation of China (NSFC) (No. 62171307) and the Basic Research Program of Shanxi Province, funded by the Department of Science and Technology of Shanxi Province, China (No. 202103021224113).
Abstract: Early and timely diagnosis of stroke is critical for effective treatment, and the electroencephalogram (EEG) offers a low-cost, non-invasive solution. However, the shortage of high-quality patient EEG data often hampers the accuracy of deep learning-based diagnostic classification. To address this issue, we designed a data amplification model named Progressive Conditional Generative Adversarial Network with Efficient Approximating Self-Attention (PCGAN-EASA), which incrementally improves the quality of generated EEG features, yielding full-scale, fine-grained EEG features from low-scale, coarse ones. Specifically, to overcome the limitation of traditional generative models that fail to generate features tailored to individual patient characteristics, we developed an encoder with an efficient approximating self-attention mechanism. This encoder not only automatically extracts relevant features across different patients but also reduces computational resource consumption. Furthermore, the adversarial and reconstruction loss functions were redesigned to better match the training characteristics of the network and the spatial correlations among electrodes. Extensive experimental results demonstrate that PCGAN-EASA provides the highest generation quality and the lowest computational resource usage among several existing approaches, and significantly improves the accuracy of subsequent stroke classification tasks.
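One common way to approximate self-attention efficiently is kernelized (linear) attention, sketched below as a stand-in for the paper's EASA, whose details may differ; all names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class LinearAttention(nn.Module):
    """O(n*d^2) attention: softmax is replaced by a positive feature map phi so
    that (phi(Q) phi(K)^T) V can be computed as phi(Q) (phi(K)^T V)."""
    def __init__(self, dim=64):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x):                          # x: (batch, tokens, dim)
        phi = nn.functional.elu
        q = phi(self.q(x)) + 1                     # positive feature maps
        k = phi(self.k(x)) + 1
        kv = k.transpose(1, 2) @ self.v(x)         # (batch, dim, dim), built once
        denom = (q @ k.sum(dim=1, keepdim=True).transpose(1, 2)).clamp(min=1e-6)
        return (q @ kv) / denom                    # normalised attention output

eeg = torch.randn(2, 512, 64)                      # embedded EEG tokens
print(LinearAttention()(eeg).shape)                # torch.Size([2, 512, 64])
```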
Funding: Supported by the National Natural Science Foundation of China under Grant 51977004 and the Beijing Natural Science Foundation under Grant 4212042.
Abstract: To predict renewable energy sources such as solar power in microgrids more accurately, a hybrid power prediction method is presented. First, a self-attention mechanism is introduced into a bidirectional gated recurrent neural network (BiGRU) to explore the time-series characteristics of solar power output and account for the influence of different time nodes on the prediction results. An improved quantum particle swarm optimization (QPSO) algorithm is then proposed to optimize the hyperparameters of the combined prediction model, yielding the LQPSO-BiGRU-self-attention hybrid model, which predicts solar power more effectively. In addition, considering the coordinated utilization of energy sources such as electricity, hydrogen, and renewables, a multi-objective optimization model covering both economic and environmental costs was constructed. A two-stage adaptive multi-objective quantum particle swarm optimization algorithm aided by Lévy flight, named MO-LQPSO, is proposed for the comprehensive optimal scheduling of a multi-energy microgrid system; it effectively balances global and local search capabilities and improves the solution of complex nonlinear problems. The effectiveness and superiority of the proposed scheme are verified through comparative simulations.
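A minimal sketch of the BiGRU-plus-self-attention forecaster (the QPSO hyperparameter search is omitted); the input length and hidden sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiGRUAttention(nn.Module):
    """Bidirectional GRU encoder with self-attention over time steps."""
    def __init__(self, in_dim=1, hidden=32, heads=4):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, series):                     # series: (batch, steps, in_dim)
        h, _ = self.gru(series)
        h, _ = self.attn(h, h, h)                  # weigh influential time nodes
        return self.out(h[:, -1])                  # next-step solar power

history = torch.randn(16, 96, 1)                   # e.g. 96 past 15-min readings
print(BiGRUAttention()(history).shape)             # torch.Size([16, 1])
```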
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 62173091, 62073082), in part by the Natural Science Foundation of Fujian Province (No. 2023J01268), and in part by the Taishan Scholar Program of Shandong Province.
Abstract: Separable nonlinear models are widely used in fields such as time series analysis, system modeling, and machine learning, owing to their flexible structure and ability to capture the nonlinear behavior of data. However, identifying the parameters of these models is challenging, especially when practitioners desire sparse models with better interpretability. Previous theoretical and practical studies have shown that variable projection (VP) is an efficient method for identifying separable nonlinear models, but existing VP methods are based on an \(L_2\) penalty on the model parameters and cannot be directly extended to handle sparsity constraints. Based on the structural characteristics of separable models, this paper proposes gradient-based and trust-region-based variable projection algorithms that solve two key problems: how to eliminate the linear parameters under a sparsity constraint, and how to handle the coupling between the linear and nonlinear parameters of the model. Numerical experiments on synthetic data and real time-series data verify the effectiveness of the proposed algorithms.
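Classical variable projection, without the paper's sparsity extension, can be sketched numerically as follows: for y ≈ Φ(θ)c, the linear coefficients c are eliminated by least squares inside the residual, leaving an optimization over θ alone. The two-exponential example is illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 4, 200)
y = 2.0 * np.exp(-0.5 * t) + 1.0 * np.exp(-2.0 * t)   # two-exponential data

def basis(theta):
    """Phi(theta): columns are exp(-theta_j * t)."""
    return np.exp(-np.outer(t, theta))

def projected_residual(theta):
    Phi = basis(theta)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # eliminate linear params
    return Phi @ c - y                                 # residual in theta only

fit = least_squares(projected_residual, x0=[0.3, 1.5]) # optimise nonlinear params
theta = fit.x
c, *_ = np.linalg.lstsq(basis(theta), y, rcond=None)   # recover linear params
print(theta, c)                                        # ≈ [0.5, 2.0] and [2.0, 1.0]
```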