Journal articles: 546,797 articles found
1. Automatic infrared image recognition method for substation equipment based on a deep self-attention network and multi-factor similarity calculation (cited by 1)
Authors: Yaocheng Li, Yongpeng Xu, Mingkai Xu, Siyuan Wang, Zhicheng Xie, Zhe Li, Xiuchen Jiang. Global Energy Interconnection (EI, CAS, CSCD), 2022, Issue 4, pp. 397-408 (12 pages)
Infrared image recognition plays an important role in the inspection of power equipment. Existing technologies dedicated to this purpose often require manually selected features, which are neither transferable nor interpretable, and have limited training data. To address these limitations, this paper proposes an automatic infrared image recognition framework, which includes an object recognition module based on a deep self-attention network and a temperature distribution identification module based on a multi-factor similarity calculation. First, the features of an input image are extracted and embedded using a multi-head attention encoding-decoding mechanism. Thereafter, the embedded features are used to predict the equipment component category and location. In the located area, preliminary segmentation is performed. Finally, similar areas are gradually merged, and the temperature distribution of the equipment is obtained to identify a fault. Our experiments indicate that the proposed method demonstrates significantly improved accuracy compared with other related methods and, hence, provides a good reference for the automation of power equipment inspection.
Keywords: Substation equipment; Infrared image intelligent recognition; Deep self-attention network; Multi-factor similarity calculation
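The encoder-decoder multi-head attention pipeline described in this abstract can be pictured with a minimal DETR-style sketch: CNN features are flattened into tokens, a transformer encodes and decodes them against learned queries, and small heads predict component category and location. This is an illustrative assumption, not the authors' implementation; the backbone stand-in, `num_queries`, and all dimensions are invented for the example.

```python
import torch
import torch.nn as nn

class AttentionComponentDetector(nn.Module):
    """Sketch of an encoder-decoder self-attention detector for infrared images."""
    def __init__(self, feat_dim=256, num_heads=8, num_queries=20, num_classes=5):
        super().__init__()
        # Stand-in feature extractor: a strided conv producing a coarse feature map.
        self.backbone = nn.Conv2d(3, feat_dim, kernel_size=16, stride=16)
        self.transformer = nn.Transformer(d_model=feat_dim, nhead=num_heads,
                                          num_encoder_layers=3, num_decoder_layers=3,
                                          batch_first=True)
        self.query_embed = nn.Embedding(num_queries, feat_dim)   # learned object queries
        self.class_head = nn.Linear(feat_dim, num_classes + 1)   # +1 "no object" class
        self.box_head = nn.Linear(feat_dim, 4)                   # (cx, cy, w, h), normalized

    def forward(self, images):
        feats = self.backbone(images)                     # (B, C, H/16, W/16)
        tokens = feats.flatten(2).transpose(1, 2)         # (B, HW, C) token sequence
        queries = self.query_embed.weight.unsqueeze(0).expand(images.size(0), -1, -1)
        decoded = self.transformer(tokens, queries)       # multi-head encoding-decoding
        return self.class_head(decoded), self.box_head(decoded).sigmoid()

# Usage: logits, boxes = AttentionComponentDetector()(torch.randn(2, 3, 256, 256))
```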
2. The brief self-attention module for lightweight convolution neural networks
Authors: YAN Jie, WEI Yingmei, XIE Yuxiang, GONG Quanzhi, ZOU Shiwei, LUAN Xidao. Journal of Systems Engineering and Electronics, 2025, Issue 6, pp. 1389-1397 (9 pages)
Lightweight convolutional neural networks (CNNs) have simple structures but struggle to comprehensively and accurately extract important semantic information from images. While attention mechanisms can enhance CNNs by learning distinctive representations, most existing spatial and hybrid attention methods focus on local regions with extensive parameters, making them unsuitable for lightweight CNNs. In this paper, we propose a self-attention mechanism tailored for lightweight networks, namely the brief self-attention module (BSAM). BSAM consists of the brief spatial attention (BSA) and advanced channel attention blocks. Unlike conventional self-attention methods with many parameters, our BSA block improves the performance of lightweight networks by effectively learning global semantic representations. Moreover, BSAM can be seamlessly integrated into lightweight CNNs for end-to-end training, maintaining the network's lightweight and mobile characteristics. We validate the effectiveness of the proposed method on image classification tasks using the Food-101, Caltech-256, and Mini-ImageNet datasets.
Keywords: self-attention; lightweight neural network; deep learning
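As a rough illustration of how a parameter-light spatial attention block can be paired with channel attention in a lightweight CNN, the sketch below uses a single 1x1 convolution to produce global spatial weights and a squeeze-and-excitation style gate for channels. The exact BSA/BSAM formulation is not given in the abstract, so this structure is an assumption rather than the authors' module.

```python
import torch
import torch.nn as nn

class LightSpatialChannelAttention(nn.Module):
    """Hypothetical lightweight spatial + channel attention block."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.spatial_score = nn.Conv2d(channels, 1, kernel_size=1)    # per-pixel logits
        self.channel_gate = nn.Sequential(                            # SE-style channel gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        b, c, h, w = x.shape
        attn = torch.softmax(self.spatial_score(x).view(b, 1, h * w), dim=-1)
        # Pool a global context vector with the spatial weights and inject it back.
        context = torch.bmm(x.view(b, c, h * w), attn.transpose(1, 2)).view(b, c, 1, 1)
        x = x + context
        return x * self.channel_gate(x)          # re-weight channels
```

The block adds only one 1x1 convolution plus a small bottleneck, so it could be dropped into a MobileNet-style backbone without a noticeable parameter increase.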
3. Image compressed sensing reconstruction network based on self-attention mechanism
Authors: LIU Yuhong, LIU Xiaoyan, CHEN Manyin. Journal of Measurement Science and Instrumentation, 2025, Issue 4, pp. 537-546 (10 pages)
For image compressed sensing reconstruction, most algorithms reconstruct image blocks one by one and stack many convolutional layers, which usually leads to obvious block effects, high computational complexity, and long reconstruction time. An image compressed sensing reconstruction network based on a self-attention mechanism (SAMNet) was proposed. For the compressed sampling, a self-attention convolution was designed, which was conducive to capturing richer features, so that the compressed sensing measurement values retained more image structure information. For the reconstruction, a self-attention mechanism was introduced into the convolutional neural network. A reconstruction network including residual blocks, a bottleneck transformer (BoTNet), and dense blocks was proposed, which strengthened the transfer of image features and dramatically reduced the number of parameters. On the Set5 dataset, when the measurement rates are 0.01, 0.04, 0.10, and 0.25, the average peak signal-to-noise ratio (PSNR) of SAMNet is improved by 1.27, 1.23, 0.50, and 0.15 dB, respectively, compared to CSNet+. The running time of reconstructing a 256×256 image is reduced by 0.1473, 0.1789, 0.2310, and 0.2524 s compared to ReconNet. Experimental results showed that SAMNet improved the quality of reconstructed images and reduced the reconstruction time.
Keywords: convolutional neural network; compressed sensing; self-attention mechanism; dense block; image reconstruction
4. A precise magnetic modeling method for scientific satellites based on a self-attention mechanism and Kolmogorov-Arnold Networks
Authors: Ye Liu, Xingjian Shi, Wenzhe Yang, Zhiming Cai, Huawang Li. Astronomical Techniques and Instruments, 2025, Issue 1, pp. 1-9 (9 pages)
As the complexity of scientific satellite missions increases, the requirements for their magnetic fields, magnetic field fluctuations, and even magnetic field gradients and variations become increasingly stringent. Additionally, there is a growing need to address the alternating magnetic fields produced by the spacecraft itself. This paper introduces a novel modeling method for spacecraft magnetic dipoles using an integrated self-attention mechanism and a transformer combined with Kolmogorov-Arnold Networks. The self-attention mechanism captures correlations among globally sparse data, establishing dependencies between sparse magnetometer readings. Concurrently, the Kolmogorov-Arnold Network, proficient in modeling implicit numerical relationships between data features, enhances the ability to learn subtle patterns. Comparative experiments validate the capability of the proposed method to precisely model magnetic dipoles, achieving maximum Root Mean Square Errors of 24.06 mA·m² and 0.32 cm for size and location modeling, respectively. The spacecraft magnetic model established using this method accurately computes magnetic fields and alternating magnetic fields at designated surfaces or points. This approach facilitates the rapid and precise construction of individual and complete spacecraft magnetic models, enabling the verification of magnetic specifications from the spacecraft design phase.
Keywords: Magnetic dipole model; self-attention mechanism; Kolmogorov-Arnold networks; Alternating current magnetic fields
在线阅读 下载PDF
5. Dual Self-attention Fusion Message Neural Network for Virtual Screening in Drug Discovery by Molecular Property Prediction
Authors: Jingjing Wang, Kangming Hou, Hao Chen, Jing Fang, Hongzhen Li. Journal of Bionic Engineering, 2025, Issue 1, pp. 354-369 (16 pages)
The development of deep learning has made non-biochemical methods for molecular property prediction screening a reality, which can increase the experimental speed and reduce the cost of relevant experiments. There are currently two main approaches to representing molecules: (a) representing molecules by fixed molecular descriptors, and (b) representing molecules by graph convolutional neural networks. Both of these representative methods have achieved some results in their respective experiments. Based on past efforts, we propose a Dual Self-attention Fusion Message Neural Network (DSFMNN). DSFMNN uses a combination of a dual self-attention mechanism and a graph convolutional neural network. Advantages of DSFMNN: (1) The dual self-attention mechanism focuses not only on the relationship between individual subunits in a molecule but also on the relationship between the atoms and chemical bonds contained in each subunit. (2) On the directed molecular graph, a message-passing approach centered on directed molecular bonds is used. We test the performance of the model on eight publicly available datasets and compare the performance with several models. Based on the current experimental results, DSFMNN has superior performance compared to previous models on the datasets applied in this paper.
Keywords: Directed message passing network; Deep learning; Molecular property prediction; self-attention mechanism
6. 3D medical image segmentation using the serial-parallel convolutional neural network and transformer based on cross-window self-attention (cited by 1)
Authors: Bin Yu, Quan Zhou, Li Yuan, Huageng Liang, Pavel Shcherbakov, Xuming Zhang. CAAI Transactions on Intelligence Technology, 2025, Issue 2, pp. 337-348 (12 pages)
The convolutional neural network (CNN) with the encoder-decoder structure is popular in medical image segmentation due to its excellent local feature extraction ability, but it faces limitations in capturing global features. The transformer can extract global information well, but adapting it to small medical datasets is challenging and its computational complexity can be heavy. In this work, a serial and parallel network is proposed for accurate 3D medical image segmentation by combining CNN and transformer and promoting feature interactions across various semantic levels. The core components of the proposed method include the cross-window self-attention based transformer (CWST) and multi-scale local enhanced (MLE) modules. The CWST module enhances global context understanding by partitioning 3D images into non-overlapping windows and calculating sparse global attention between windows. The MLE module selectively fuses features by computing the voxel attention between different branch features, and uses convolution to strengthen dense local information. The experiments on the prostate, atrium, and pancreas MR/CT image datasets consistently demonstrate the advantage of the proposed method over six popular segmentation models in both qualitative evaluation and quantitative indexes such as Dice similarity coefficient, Intersection over Union, 95% Hausdorff distance, and average symmetric surface distance.
Keywords: convolution neural network; cross window self-attention; medical image segmentation; transformer
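One way to picture "sparse global attention between non-overlapping windows" is to pool each 3D window into a single token, run multi-head attention across the window tokens, and broadcast the result back onto the volume. The sketch below is only that simplified reading; the real CWST module is more elaborate, and the window size, pooling, and upsampling choices are assumptions.

```python
import torch
import torch.nn as nn

class WindowTokenAttention3D(nn.Module):
    """Simplified between-window attention for a 3D feature volume."""
    def __init__(self, channels, window=4, heads=4):
        super().__init__()
        self.window = window
        self.pool = nn.AvgPool3d(window)                        # one token per window
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                                       # x: (B, C, D, H, W)
        b, c, d, h, w = x.shape
        tokens = self.pool(x).flatten(2).transpose(1, 2)        # (B, num_windows, C)
        q = self.norm(tokens)
        attended, _ = self.attn(q, q, q)                        # attention across windows
        tokens = (tokens + attended).transpose(1, 2)            # residual over window tokens
        gd, gh, gw = d // self.window, h // self.window, w // self.window
        context = tokens.view(b, c, gd, gh, gw)
        # Broadcast the window-level context back to voxel resolution.
        return x + nn.functional.interpolate(context, size=(d, h, w), mode="nearest")

# Usage: WindowTokenAttention3D(32)(torch.randn(1, 32, 32, 64, 64))
```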
7. Double Self-Attention Based Fully Connected Feature Pyramid Network for Field Crop Pest Detection
Authors: Zijun Gao, Zheyi Li, Chunqi Zhang, Ying Wang, Jingwen Su. Computers, Materials & Continua, 2025, Issue 6, pp. 4353-4371 (19 pages)
Pest detection techniques are helpful in reducing the frequency and scale of pest outbreaks; however, their application in the actual agricultural production process is still challenging owing to the problems of interspecies similarity, multiple scales, and background complexity of pests. To address these problems, this study proposes an FD-YOLO pest target detection model. The FD-YOLO model uses a Fully Connected Feature Pyramid Network (FC-FPN) instead of a PANet in the neck, which can adaptively fuse multi-scale information so that the model can retain small-scale target features in the deep layers, enhance large-scale target features in the shallow layers, and enhance the multiplexing of effective features. A dual self-attention module (DSA) is then embedded in the C3 module of the neck, which captures the dependencies between the information in both spatial and channel dimensions, effectively enhancing global features. We selected 16 types of pests that widely damage field crops from the IP102 pest dataset, which were used as our dataset after data supplementation and enhancement. The experimental results showed that FD-YOLO's mAP@0.5 improved by 6.8% compared to YOLOv5, reaching 82.6%, and 19.1%–5% better than other state-of-the-art models. This method provides an effective new approach for detecting similar or multiscale pests in field crops.
Keywords: Pest detection; YOLOv5; feature pyramid network; transformer attention module
8. MSSTGCN: Multi-Head Self-Attention and Spatial-Temporal Graph Convolutional Network for Multi-Scale Traffic Flow Prediction
Authors: Xinlu Zong, Fan Yu, Zhen Chen, Xue Xia. Computers, Materials & Continua, 2025, Issue 2, pp. 3517-3537 (21 pages)
Accurate traffic flow prediction has a profound impact on modern traffic management. Traffic flow has complex spatial-temporal correlations and periodicity, which poses difficulties for precise prediction. To address this problem, a Multi-head Self-attention and Spatial-Temporal Graph Convolutional Network (MSSTGCN) for multiscale traffic flow prediction is proposed. Firstly, to capture the hidden periodicity of traffic flow, traffic flow is divided into three kinds of periods, including hourly, daily, and weekly data. Secondly, a graph attention residual layer is constructed to learn the global spatial features across regions. Local spatial-temporal dependence is captured by using a T-GCN module. Thirdly, a transformer layer is introduced to learn the long-term dependence in time. A position embedding mechanism is introduced to label position information for all traffic sequences. Thus, this multi-head self-attention mechanism can recognize the sequence order and allocate weights for different time nodes. Experimental results on four real-world datasets show that the MSSTGCN performs better than the baseline methods and can be successfully adapted to traffic prediction tasks.
Keywords: Graph convolutional network; traffic flow prediction; multi-scale traffic flow; spatial-temporal model
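The temporal part of the pipeline, position embeddings that label sequence order plus multi-head self-attention that weights time steps, can be sketched as below. The graph-convolution stages (graph attention residual layer, T-GCN) are omitted, and the node count, dimensions, and one-step forecasting head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TemporalSelfAttention(nn.Module):
    """Position-embedded transformer over a traffic-flow sequence (sketch)."""
    def __init__(self, num_nodes, d_model=64, heads=4, max_len=288):
        super().__init__()
        self.input_proj = nn.Linear(num_nodes, d_model)
        self.pos_embed = nn.Embedding(max_len, d_model)          # labels time-step positions
        layer = nn.TransformerEncoderLayer(d_model, heads, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_nodes)                # next-interval flow per node

    def forward(self, flow):                                     # flow: (B, T, num_nodes)
        positions = torch.arange(flow.size(1), device=flow.device)
        x = self.input_proj(flow) + self.pos_embed(positions)    # add order information
        x = self.encoder(x)                                      # self-attention over time steps
        return self.head(x[:, -1])                               # predict the next interval

# Usage: TemporalSelfAttention(num_nodes=207)(torch.randn(8, 12, 207))
```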
9. SEFormer: A Lightweight CNN-Transformer Based on Separable Multiscale Depthwise Convolution and Efficient Self-Attention for Rotating Machinery Fault Diagnosis (cited by 1)
Authors: Hongxing Wang, Xilai Ju, Hua Zhu, Huafeng Li. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 1417-1437 (21 pages)
Traditional data-driven fault diagnosis methods depend on expert experience to manually extract effective fault features of signals, which has certain limitations. Conversely, deep learning techniques have gained prominence as a central focus of research in the field of fault diagnosis owing to their strong fault feature extraction ability and end-to-end fault diagnosis efficiency. Recently, utilizing the respective advantages of the convolution neural network (CNN) and Transformer in local and global feature extraction, research on combining the two has demonstrated promise in the field of fault diagnosis. However, the cross-channel convolution mechanism in the CNN and the self-attention calculations in the Transformer contribute to excessive complexity in the cooperative model. This complexity results in high computational costs and limited industrial applicability. To tackle the above challenges, this paper proposes a lightweight CNN-Transformer named SEFormer for rotating machinery fault diagnosis. First, a separable multiscale depthwise convolution block is designed to extract and integrate multiscale feature information from different channel dimensions of vibration signals. Then, an efficient self-attention block is developed to capture critical fine-grained features of the signal from a global perspective. Finally, experimental results on the planetary gearbox dataset and the motor roller bearing dataset prove that the proposed framework can balance robustness, generalization, and light weight compared to recent state-of-the-art fault diagnosis models based on the CNN and Transformer. This study presents a feasible strategy for developing a lightweight rotating machinery fault diagnosis framework aimed at economical deployment.
Keywords: CNN-Transformer; separable multiscale depthwise convolution; efficient self-attention; fault diagnosis
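A separable multiscale depthwise convolution block of the kind named in the abstract can be sketched as parallel depthwise convolutions with different kernel sizes, followed by a pointwise convolution that mixes channels. The 1D vibration input, kernel sizes, and channel counts below are assumptions rather than the authors' settings.

```python
import torch
import torch.nn as nn

class SeparableMultiscaleDWConv1d(nn.Module):
    """Parallel depthwise convs at several scales + pointwise channel mixing (sketch)."""
    def __init__(self, channels, kernel_sizes=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(channels, channels, k, padding=k // 2, groups=channels)  # depthwise
            for k in kernel_sizes])
        self.pointwise = nn.Conv1d(channels * len(kernel_sizes), channels, 1)  # channel mixing
        self.post = nn.Sequential(nn.BatchNorm1d(channels), nn.GELU())

    def forward(self, x):                                   # x: (B, C, L) vibration segment
        multi = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.post(self.pointwise(multi)) + x         # residual connection

# Usage: SeparableMultiscaleDWConv1d(32)(torch.randn(4, 32, 1024))
```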
10. Spatio-temporal prediction of groundwater vulnerability based on CNN-LSTM model with self-attention mechanism: A case study in Hetao Plain, northern China (cited by 2)
Authors: Yifu Zhao, Liangping Yang, Hongjie Pan, Yanlong Li, Yongxu Shao, Junxia Li, Xianjun Xie. Journal of Environmental Sciences, 2025, Issue 7, pp. 128-142 (15 pages)
Located in northern China, the Hetao Plain is an important agro-economic zone and population centre. The deterioration of local groundwater quality has had a serious impact on human health and economic development. Nowadays, groundwater vulnerability assessment (GVA) has become an essential task to identify the current status and development trend of groundwater quality. In this study, the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models are integrated to realize the spatio-temporal prediction of regional groundwater vulnerability by introducing the self-attention mechanism. The study first builds the CNN-LSTM model with a self-attention (SA) mechanism and evaluates the prediction accuracy of the model for groundwater vulnerability compared to other common machine learning models such as Support Vector Machine (SVM), Random Forest (RF), and Extreme Gradient Boosting (XGBoost). The results indicate that the CNN-LSTM model outperforms these models, demonstrating its significance in groundwater vulnerability assessment. It can be posited that the predictions indicate an increased risk of groundwater vulnerability in the study area over the coming years. This increase can be attributed to the synergistic impact of global climate anomalies and intensified local human activities. Moreover, the overall groundwater vulnerability risk in the entire region has increased, evident from both the notably high value and standard deviation. This suggests that the spatial variability of groundwater vulnerability in the area is expected to expand in the future due to the sustained progression of climate change and human activities. The model can be optimized for diverse applications across regional environmental assessment, pollution prediction, and risk statistics. This study holds particular significance for ecological protection and groundwater resource management.
Keywords: Groundwater vulnerability assessment; Convolutional Neural Network; Long Short-Term Memory; self-attention mechanism
11. Hashtag Recommendation Using LSTM Networks with Self-Attention (cited by 2)
Authors: Yatian Shen, Yan Li, Jun Sun, Wenke Ding, Xianjin Shi, Lei Zhang, Xiajiong Shen, Jing He. Computers, Materials & Continua (SCIE, EI), 2019, Issue 9, pp. 1261-1269 (9 pages)
On Twitter, people often use hashtags to mark the subject of a tweet. Tweets have specific themes or content that are easy for people to manage. With the increase in the number of tweets, how to automatically recommend hashtags for tweets has received wide attention. Previous hashtag recommendation methods converted the task into a multi-class classification problem. However, these methods can only recommend hashtags that appeared in historical information, and cannot recommend new ones. In this work, we extend the self-attention mechanism to turn the hashtag recommendation task into a sequence labeling task. To train and evaluate the proposed method, we used real tweet data collected from Twitter. Experimental results show that the proposed method performs significantly better than the most advanced method. Compared with the state-of-the-art methods, the accuracy of our method is increased by 4%.
Keywords: Hashtag recommendation; self-attention; neural networks; sequence labeling
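Framing hashtag recommendation as sequence labeling, as described above, means tagging each tweet token (for example with B/I/O labels for hashtag spans) so that hashtags never seen in training can still be produced. The sketch below pairs a BiLSTM with self-attention as the token encoder; the vocabulary size, tag set, and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class HashtagTagger(nn.Module):
    """BiLSTM + self-attention token tagger for hashtag spans (sketch)."""
    def __init__(self, vocab_size=30000, d_model=128, hidden=128, num_tags=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.lstm = nn.LSTM(d_model, hidden // 2, bidirectional=True, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.tagger = nn.Linear(hidden, num_tags)          # per-token B/I/O scores

    def forward(self, token_ids):                          # token_ids: (B, L)
        h, _ = self.lstm(self.embed(token_ids))            # contextual token states
        attended, _ = self.attn(h, h, h)                   # self-attention across the tweet
        return self.tagger(h + attended)                   # (B, L, num_tags)
```

Tokens tagged as part of a hashtag span are concatenated into the recommendation, so a tagger of this kind is not limited to hashtags that appeared in the training data.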
12. A Self-Attention Based Dynamic Resource Management for Satellite-Terrestrial Networks (cited by 1)
Authors: Lin Tianhao, Luo Zhiyong. China Communications (SCIE, CSCD), 2024, Issue 4, pp. 136-150 (15 pages)
Satellite-terrestrial networks possess the ability to transcend geographical constraints inherent in traditional communication networks, enabling global coverage and offering users ubiquitous computing power support, which is an important development direction of future communications. In this paper, we take into account a multi-scenario network model under the coverage of a low earth orbit (LEO) satellite, which can provide computing resources to users in faraway areas to improve task processing efficiency. However, LEO satellites experience limitations in computing and communication resources, and the channels are time-varying and complex, which makes the extraction of state information a daunting task. Therefore, we explore the dynamic resource management issue pertaining to joint computing, communication resource allocation, and power control for multi-access edge computing (MEC). In order to tackle this formidable issue, we transform the issue into a Markov decision process (MDP) problem and propose the self-attention based dynamic resource management (SABDRM) algorithm, which effectively extracts state information features to enhance the training process. Simulation results show that the proposed algorithm is capable of effectively reducing the long-term average delay and energy consumption of the tasks.
Keywords: mobile edge computing; resource management; satellite-terrestrial networks; self-attention
13. Prediction Method of Equipment Remaining Life Based on Self-Attention Long Short-Term Memory Neural Network (cited by 1)
Authors: 曹现刚, 雷卓, 李彦川, 张梦园, 段欣宇. Journal of Shanghai Jiaotong University (Science) (EI), 2023, Issue 5, pp. 652-664 (13 pages)
Aiming at the problem of insufficient consideration of the correlation between components in the prediction of the remaining life of mechanical equipment, a remaining life prediction method that combines the self-attention mechanism with the long short-term memory neural network (LSTM-NN), called Self-Attention-LSTM, is proposed. First, the auto-encoder is used to obtain the component-level state information; second, the state information of each component is input into the self-attention mechanism to learn the correlation between components; then, the multi-component correlation matrix is added to the LSTM input gate, and the LSTM-NN is used for life prediction. Finally, experiments were carried out on the commercial modular aero-propulsion system simulation dataset (C-MAPSS) and compared with existing methods. Research results show that the proposed method can achieve better prediction accuracy and verify the feasibility of the method.
Keywords: equipment remaining life prediction; self-attention; long short-term memory neural network (LSTM-NN); correlation analysis
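The pipeline above, component states attending to each other and an LSTM regressing remaining life, can be sketched as follows. Note that the paper injects the correlation matrix into the LSTM input gate, whereas this simplified sketch simply feeds the attended component features to a standard LSTM; all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class SelfAttentionLSTMRUL(nn.Module):
    """Cross-component self-attention followed by an LSTM RUL regressor (sketch)."""
    def __init__(self, num_components=5, state_dim=16, hidden=64):
        super().__init__()
        self.component_attn = nn.MultiheadAttention(state_dim, num_heads=2, batch_first=True)
        self.lstm = nn.LSTM(num_components * state_dim, hidden, batch_first=True)
        self.rul_head = nn.Linear(hidden, 1)

    def forward(self, states):                 # states: (B, T, num_components, state_dim)
        b, t, n, d = states.shape
        flat = states.reshape(b * t, n, d)
        attended, corr = self.component_attn(flat, flat, flat)  # corr: component correlations
        seq = attended.reshape(b, t, n * d)
        out, _ = self.lstm(seq)                                 # temporal degradation modeling
        rul = self.rul_head(out[:, -1]).squeeze(-1)
        return rul, corr.reshape(b, t, n, n)

# Usage: rul, corr = SelfAttentionLSTMRUL()(torch.randn(2, 30, 5, 16))
```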
14. FCN-Attention: A deep learning UWB NLOS/LOS classification algorithm using fully convolution neural network with self-attention mechanism (cited by 3)
Authors: Yu Pei, Ruizhi Chen, Deren Li, Xiongwu Xiao, Xingyu Zheng. Geo-Spatial Information Science (CSCD), 2024, Issue 4, pp. 1162-1181 (20 pages)
The Ultra-Wideband (UWB) Location-Based Service is receiving more and more attention due to its high ranging accuracy and good time resolution. However, None-Line-of-Sight (NLOS) propagation may reduce the ranging accuracy of UWB localization systems in indoor environments. It is therefore important to identify LOS and NLOS propagation before taking proper measures to improve the UWB localization accuracy. In this paper, a deep learning-based UWB NLOS/LOS classification algorithm called FCN-Attention is proposed. The proposed FCN-Attention algorithm utilizes a Fully Convolution Network (FCN) to improve the feature extraction ability and a self-attention mechanism to enhance feature description from the data, thereby improving the classification accuracy. The proposed algorithm is evaluated using an open-source dataset, a locally collected dataset, and a mixed dataset created from these two datasets. The experimental results show that the proposed FCN-Attention algorithm achieves classification accuracies of 88.24% on the open-source dataset, 100% on the locally collected dataset, and 92.01% on the mixed dataset, which is better than the results from the other evaluated NLOS/LOS classification algorithms in most scenarios in this paper.
Keywords: Ultra Wideband (UWB); None-line-of-sight (NLOS) identification; channel impulse response (CIR); deep learning; fully convolution network; self-attention mechanism
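A 1D fully convolutional extractor followed by self-attention and a binary head is one plausible reading of the FCN-Attention structure described above. The layer sizes and the CIR length in this sketch are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class CIRNLOSClassifier(nn.Module):
    """1D FCN + self-attention classifier for LOS/NLOS from a UWB CIR (sketch)."""
    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.fcn = nn.Sequential(                     # fully convolutional feature extractor
            nn.Conv1d(1, channels, 7, padding=3), nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, 5, padding=2), nn.BatchNorm1d(channels), nn.ReLU())
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.classifier = nn.Linear(channels, 2)      # LOS vs. NLOS logits

    def forward(self, cir):                           # cir: (B, 1, L) channel impulse response
        feats = self.fcn(cir).transpose(1, 2)         # (B, L, C) token-like features
        attended, _ = self.attn(feats, feats, feats)  # global re-weighting via self-attention
        return self.classifier((feats + attended).mean(dim=1))

# Usage: logits = CIRNLOSClassifier()(torch.randn(8, 1, 152))
```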
15. Joint Self-Attention Based Neural Networks for Semantic Relation Extraction (cited by 1)
Authors: Jun Sun, Yan Li, Yatian Shen, Wenke Ding, Xianjin Shi, Lei Zhang, Xiajiong Shen, Jing He. Journal of Information Hiding and Privacy Protection, 2019, Issue 2, pp. 69-75 (7 pages)
Relation extraction is an important task in the NLP community. However, some models often fail to capture long-distance semantic dependence, and the interaction between the semantics of two entities is ignored. In this paper, we propose a novel neural network model for semantic relation classification called joint self-attention bi-LSTM (SA-Bi-LSTM), which models the internal structure of the sentence to obtain the importance of each word without relying on additional information, and captures long-distance semantic dependence. We conduct experiments using the SemEval-2010 Task 8 dataset. Extensive experiments and the results demonstrate that the proposed method is effective for relation classification, obtaining state-of-the-art classification accuracy with only minimal feature engineering.
Keywords: self-attention; relation extraction; neural networks
16. Self-Attention Spatio-Temporal Deep Collaborative Network for Robust FDIA Detection in Smart Grids (cited by 1)
Authors: Tong Zu, Fengyong Li. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 11, pp. 1395-1417 (23 pages)
False data injection attacks (FDIA) can affect the state estimation of the power grid by tampering with the measured values of power grid data, thereby undermining the stable operation of the smart grid. Existing work usually trains a detection model by fusing data-driven features from diverse power data streams. Data-driven features, however, cannot effectively capture the differences between noisy data and attack samples. As a result, slight noise disturbances in the power grid may cause a large number of false detections for FDIA attacks. To address this problem, this paper designs a deep collaborative self-attention network to achieve robust FDIA detection, in which the spatio-temporal features of cascaded FDIA attacks are fully integrated. Firstly, a high-order Chebyshev polynomials-based graph convolution module is designed to effectively aggregate the spatial information between grid nodes, and the spatial self-attention mechanism is incorporated to dynamically assign attention weights to each node, which guides the network to pay more attention to the node information that is conducive to FDIA detection. Furthermore, the bi-directional Long Short-Term Memory (LSTM) network is introduced to conduct time series modeling and long-term dependence analysis for power grid data, and the temporal self-attention mechanism is utilized to describe the time correlation of the data and assign different weights to different time steps. Our designed deep collaborative network can effectively mine subtle perturbations from spatio-temporal feature information, efficiently distinguish power grid noise from FDIA attacks, and adapt to diverse attack intensities. Extensive experiments demonstrate that our method can obtain efficient detection performance over actual load data from the New York Independent System Operator (NYISO) in IEEE 14, IEEE 39, and IEEE 118 bus systems, and outperforms state-of-the-art FDIA detection schemes in terms of detection accuracy and robustness.
Keywords: False data injection attacks; smart grid; deep learning; self-attention mechanism; spatio-temporal fusion
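The high-order Chebyshev polynomial graph convolution mentioned above expands a scaled graph Laplacian as T_0 = I, T_1 = L, T_k = 2·L·T_{k-1} - T_{k-2}, giving each order its own weight matrix. The sketch below shows only that module, under the assumption that the Laplacian is already rescaled to [-1, 1]; the self-attention and BiLSTM stages are omitted.

```python
import torch
import torch.nn as nn

class ChebGraphConv(nn.Module):
    """K-order Chebyshev polynomial graph convolution over bus-node features (sketch)."""
    def __init__(self, in_dim, out_dim, order=3):
        super().__init__()
        self.order = order
        self.weights = nn.Parameter(torch.randn(order, in_dim, out_dim) * 0.01)

    def forward(self, x, laplacian):           # x: (B, N, in_dim); laplacian: (N, N), rescaled
        terms = [x, torch.einsum("nm,bmf->bnf", laplacian, x)]     # T0*x, T1*x
        for _ in range(2, self.order):                              # T_k = 2*L*T_{k-1} - T_{k-2}
            terms.append(2 * torch.einsum("nm,bmf->bnf", laplacian, terms[-1]) - terms[-2])
        out = sum(torch.einsum("bnf,fo->bno", t, w) for t, w in zip(terms, self.weights))
        return torch.relu(out)

# Usage (IEEE 14-bus-sized demo with an identity Laplacian placeholder):
# ChebGraphConv(4, 16)(torch.randn(2, 14, 4), torch.eye(14))
```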
17. Self-attention transfer networks for speech emotion recognition (cited by 4)
Authors: Ziping ZHAO, Keru Wang, Zhongtian BAO, Zixing ZHANG, Nicholas CUMMINS, Shihuang SUN, Haishuai WANG, Jianhua TAO, Björn W. SCHULLER. Virtual Reality & Intelligent Hardware, 2021, Issue 1, pp. 43-54 (12 pages)
Background: A crucial element of human-machine interaction, the automatic detection of emotional states from human speech has long been regarded as a challenging task for machine learning models. One vital challenge in speech emotion recognition (SER) is learning robust and discriminative representations from speech. Although machine learning methods have been widely applied in SER research, the inadequate amount of available annotated data has become a bottleneck impeding the extended application of such techniques (e.g., deep neural networks). To address this issue, we present a deep learning method that combines knowledge transfer and self-attention for SER tasks. Herein, we apply the log-Mel spectrogram with deltas and delta-deltas as inputs. Moreover, given that emotions are time dependent, we apply temporal convolutional neural networks to model the variations in emotions. We further introduce an attention transfer mechanism, which is based on a self-attention algorithm to learn long-term dependencies. The self-attention transfer network (SATN) in our proposed approach takes advantage of attention transfer to learn attention from speech recognition, followed by transferring this knowledge into SER. An evaluation built on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset demonstrates the effectiveness of the proposed model.
Keywords: Speech emotion recognition; Attention transfer; self-attention; Temporal convolutional neural networks (TCNs)
18. Conditional self-attention generative adversarial network with differential evolution algorithm for imbalanced data classification (cited by 3)
Authors: Jiawei NIU, Zhunga LIU, Quan PAN, Yanbo YANG, Yang LI. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2023, Issue 3, pp. 303-315 (13 pages)
Imbalanced data classification is an important research topic in real-world applications, like fault diagnosis in an aircraft manufacturing system. The over-sampling method is often used to solve this problem. It generates samples according to the distance between minority data. However, the traditional over-sampling method may change the original data distribution, which is harmful to the classification performance. In this paper, we propose a new method called Conditional Self-Attention Generative Adversarial Network with Differential Evolution (CSAGAN-DE) for imbalanced data classification. The new method aims at improving the classification performance of minority data by enhancing the quality of the generation of minority data. In CSAGAN-DE, the minority data are fed into the self-attention generative adversarial network to approximate the data distribution and create new data for the minority class. Then, the differential evolution algorithm is employed to automatically determine the number of generated minority data for achieving a satisfactory classification performance. Several experiments are conducted to evaluate the performance of the new CSAGAN-DE method. The results show that the new method can efficiently improve the classification performance compared with other related methods.
Keywords: Classification; Generative adversarial network; Imbalanced data; Optimization; Over-sampling
19. Self-attention and convolutional feature fusion for real-time intelligent fault detection of high-speed railway pantographs
Authors: Xufeng LI, Jien MA, Ping TAN, Lanfen LIN, Lin QIU, Youtong FANG. Journal of Zhejiang University-Science A (Applied Physics & Engineering), 2025, Issue 10, pp. 997-1009 (13 pages)
Currently, most trains are equipped with dedicated cameras for capturing pantograph videos. Pantographs are core to the high-speed-railway pantograph-catenary system, and their failure directly affects the normal operation of high-speed trains. However, given the complex and variable real-world operational conditions of high-speed railways, there is no real-time and robust pantograph fault-detection method capable of handling large volumes of surveillance video. Hence, it is of paramount importance to maintain real-time monitoring and analysis of pantographs. Our study presents a real-time intelligent detection technology for identifying faults in high-speed railway pantographs, utilizing a fusion of self-attention and convolution features. We delved into lightweight multi-scale feature-extraction and fault-detection models based on deep learning to detect pantograph anomalies. Compared with traditional methods, this approach achieves high recall and accuracy in pantograph recognition, accurately pinpointing issues like discharge sparks, pantograph horns, and carbon pantograph-slide malfunctions. After experimentation and validation with actual surveillance videos of electric multiple-unit trains, our algorithmic model demonstrates real-time, high-accuracy performance even under complex operational conditions.
Keywords: High-speed railway pantograph; self-attention; Convolutional neural network (CNN); Real-time; Feature fusion; Fault detection
20. WMA: A Multi-Scale Self-Attention Feature Extraction Network Based on Weight Sharing for VQA (cited by 1)
Authors: Yue Li, Jin Liu, Shengjie Shang. Journal on Big Data, 2021, Issue 3, pp. 111-118 (8 pages)
Visual Question Answering (VQA) has attracted extensive research focus and has become a hot topic in deep learning recently. The development of computer vision and natural language processing technology has contributed to the advancement of this research area. Key solutions to improving the performance of VQA systems lie in the feature extraction, multimodal fusion, and answer prediction modules. An unsolved issue in the popular VQA image feature extraction module is that it struggles to extract fine-grained features from objects of different scales. In this paper, a novel feature extraction network that combines multi-scale convolution and self-attention branches is designed to solve this problem. Our approach achieves state-of-the-art single-model performance on the Pascal VOC 2012, VQA 1.0, and VQA 2.0 datasets.
Keywords: VQA; feature extraction; self-attention; fine-grained