Journal Articles
206,408 articles found
Short-term and long-term memory self-attention network for segmentation of tumours in 3D medical images
Authors: Mingwei Wen, Quan Zhou, Bo Tao, Pavel Shcherbakov, Yang Xu, Xuming Zhang. CAAI Transactions on Intelligence Technology (SCIE, EI), 2023(4): 1524-1537 (14 pages)
Tumour segmentation in medical images (especially 3D tumour segmentation) is highly challenging due to the possible similarity between tumours and adjacent tissues, the occurrence of multiple tumours, and variable tumour shapes and sizes. Popular deep learning-based segmentation algorithms generally rely on the convolutional neural network (CNN) and Transformer. The former cannot extract global image features effectively, while the latter lacks inductive bias and involves complicated computation for 3D volume data. Existing hybrid CNN-Transformer networks provide only limited performance improvement, or even poorer segmentation performance than a pure CNN. To address these issues, a short-term and long-term memory self-attention network is proposed. Firstly, a distinctive self-attention block uses the Transformer to explore the correlation among region features at different levels extracted by the CNN. Then, the memory structure filters and combines this information to exclude similar regions and detect multiple tumours. Finally, multi-layer reconstruction blocks predict the tumour boundaries. Experimental results demonstrate that the method outperforms other methods in both subjective visual and quantitative evaluation. Compared with the most competitive method, the proposed method provides Dice (82.4% vs. 76.6%) and 95% Hausdorff distance (HD95) (10.66 vs. 11.54 mm) on KiTS19, as well as Dice (80.2% vs. 78.4%) and HD95 (9.632 vs. 12.17 mm) on LiTS.
Keywords: 3D medical images; convolutional neural network; self-attention network; transformer; tumor segmentation
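The self-attention block in this first paper applies Transformer-style attention over CNN region features. As an illustrative sketch only (not code from the paper), scaled dot-product self-attention fits in a few lines of NumPy; the shapes and random weights here are hypothetical:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of feature vectors.

    x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projection matrices.
    Returns (output, attention_weights).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (seq_len, seq_len)
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v, weights

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 16))        # six "region features" of dim 16
w = [rng.standard_normal((16, 8)) for _ in range(3)]
out, attn = self_attention(x, *w)
print(out.shape, attn.shape)            # (6, 8) (6, 6)
```

Each row of `attn` is a probability distribution over the other regions, which is what lets the block model correlations among regions regardless of their spatial distance.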
3D medical image segmentation using the serial-parallel convolutional neural network and transformer based on cross-window self-attention (Cited by 1)
Authors: Bin Yu, Quan Zhou, Li Yuan, Huageng Liang, Pavel Shcherbakov, Xuming Zhang. CAAI Transactions on Intelligence Technology, 2025(2): 337-348 (12 pages)
The convolutional neural network (CNN) with the encoder-decoder structure is popular in medical image segmentation due to its excellent local feature extraction ability, but it faces limitations in capturing global features. The transformer can extract global information well, but adapting it to small medical datasets is challenging and its computational complexity can be heavy. In this work, a serial and parallel network is proposed for accurate 3D medical image segmentation by combining CNN and transformer and promoting feature interactions across various semantic levels. The core components of the proposed method are the cross-window self-attention based transformer (CWST) and multi-scale local enhanced (MLE) modules. The CWST module enhances global context understanding by partitioning 3D images into non-overlapping windows and calculating sparse global attention between windows. The MLE module selectively fuses features by computing voxel attention between different branch features and uses convolution to strengthen dense local information. Experiments on prostate, atrium, and pancreas MR/CT image datasets consistently demonstrate the advantage of the proposed method over six popular segmentation models in both qualitative evaluation and quantitative indexes such as Dice similarity coefficient, Intersection over Union, 95% Hausdorff distance, and average symmetric surface distance.
Keywords: convolutional neural network; cross-window self-attention; medical image segmentation; transformer
Audio-visual information fusion with a Self-Attention-DSC-CNN6 network for classifying the feeding intensity of perch (Cited by 4)
Authors: 李道亮, 李万超, 杜壮壮. 《农业机械学报》 (Transactions of the Chinese Society for Agricultural Machinery; PKU Core), 2025(1): 16-24 (9 pages)
Recognizing and classifying feeding intensity is a key step towards precision feeding in aquaculture. Existing feeding practices rely heavily on manual experience, deliver imprecise feed amounts, and waste considerable feed. Multimodal fusion of fish feeding-degree classification can integrate different types of data (e.g., video, sound, and water-quality parameters) and provide a more comprehensive and accurate basis for feeding decisions. This paper therefore proposes a multimodal fusion framework that combines video and audio data to improve perch feeding-intensity classification. Preprocessed Mel spectrograms and video frames are fed into a Self-Attention-DSC-CNN6 (self-attention depthwise-separable-convolution CNN6) model for high-level feature extraction; the extracted features are concatenated and fused, and the fused features are then classified. The Self-Attention-DSC-CNN6 model improves on the CNN6 algorithm by replacing traditional convolutional layers with depthwise separable convolution (DSC) to reduce computational complexity, and by introducing a self-attention mechanism to strengthen feature extraction. Experiments show that the proposed multimodal framework reaches 90.24% accuracy on perch feeding-intensity classification. The model effectively exploits information from different data sources, improves the understanding of fish behaviour in complex environments, strengthens decision-making, and ensures timely and accurate feeding, thereby reducing feed waste.
Keywords: perch; feeding intensity classification; multimodal fusion; Self-Attention-DSC-CNN6
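The DSC substitution described above trades a standard convolution for a depthwise plus pointwise pair. A quick sketch (hypothetical layer sizes, not the paper's configuration) of why this cuts parameters:

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard 2D convolution layer (bias omitted)."""
    return c_in * c_out * k * k

def dsc_params(c_in, c_out, k):
    """Depthwise separable conv: one k x k filter per input channel,
    then a 1 x 1 pointwise convolution to mix channels."""
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 128, 3              # illustrative channel counts
std = conv_params(c_in, c_out, k)
dsc = dsc_params(c_in, c_out, k)
print(std, dsc, round(std / dsc, 1))     # 73728 8768 8.4
```

For a 3x3 layer at these sizes the separable form needs roughly an eighth of the weights, which is the computational saving the abstract refers to.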
An ultra-short-term photovoltaic power forecasting method based on a TCN-BiLSTM-Attention model
Authors: 刘凯伦, 孙广玲, 陆小锋. 《工业控制计算机》 (Industrial Control Computer), 2026(1): 122-124 (3 pages)
As photovoltaic (PV) generation takes a growing share of the global energy mix, ultra-short-term PV power forecasting is critical to power-system dispatch and secure operation. PV output, however, is affected by many factors and shows marked randomness and volatility. This paper therefore proposes an ultra-short-term PV power forecasting method based on a TCN-BiLSTM-Attention model. Key features are first screened with Pearson correlation analysis, outliers are detected with the isolation forest algorithm, and preprocessing is completed with linear interpolation and standardization. A temporal convolutional network (TCN) then extracts temporal features, a bidirectional long short-term memory (BiLSTM) network captures forward and backward temporal dependencies, and an attention mechanism at the output focuses on the features of key time steps. Comparative experiments on the Desert Knowledge Australia Solar Centre (DKASC) dataset show that the proposed TCN-BiLSTM-Attention model has clear advantages in prediction accuracy and stability over conventional LSTM and BiLSTM models.
Keywords: TCN; BiLSTM; attention; ultra-short-term power generation forecasting
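The TCN stage in this abstract rests on causal dilated convolutions: the output at time t sees only x[t], x[t-d], x[t-2d], never the future. A minimal NumPy sketch of one such layer with a toy kernel (not the paper's network):

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """1D causal convolution: output at t depends only on x[t], x[t-d], ...

    x: (T,) input series; w: (k,) kernel. Left-pads with zeros so the
    output has the same length as the input.
    """
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[pad + t - j * dilation] for j in range(k))
                     for t in range(len(x))])

x = np.arange(8, dtype=float)
y = causal_dilated_conv(x, np.array([1.0, 1.0]), dilation=2)
# y[t] = x[t] + x[t-2]  ->  values 0, 1, 2, 4, 6, 8, 10, 12
print(y)
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially, which is how a TCN covers long histories cheaply.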
A sentiment analysis model for Chinese review texts based on Self-Attention and TextCNN-BiLSTM (Cited by 4)
Authors: 龙宇, 李秋生. 《石河子大学学报(自然科学版)》 (Journal of Shihezi University, Natural Science Edition; PKU Core), 2025(1): 111-121 (11 pages)
Most existing sentiment-classification methods for Chinese review texts fail to fully capture the global semantics of a sentence and struggle with long-range semantic links or sentiment reversals, which limits classification accuracy. To address this, this paper proposes a sentiment analysis method that fuses Self-Attention with TextCNN-BiLSTM. A text convolutional neural network (TextCNN) extracts local features and a bidirectional long short-term memory network (BiLSTM) captures sequential information, so that both global and local information are considered. In the feature-fusion stage, a self-attention mechanism dynamically fuses feature representations from different levels, weighting features at different scales to amplify the response of important features. Experiments show that the proposed model reaches accuracies of 93.79% on a Chinese home-appliance review corpus and 90.05% on the Tan Songbo hotel review corpus, improvements of 0.69%-3.59% and 4.44%-11.70% over the baselines, outperforming traditional sentiment-analysis models based on convolutional neural networks (CNN), BiLSTM, or CNN-BiLSTM.
Keywords: self-attention mechanism; Chinese review texts; deep learning; sentiment analysis
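TextCNN's local-feature extraction, as used above, is convolution with several window widths followed by max-over-time pooling. A minimal sketch with made-up embedding sizes and random filters (illustrative only):

```python
import numpy as np

def textcnn_features(emb, kernels):
    """Max-over-time pooling of 1D convolutions with several window sizes.

    emb: (seq_len, dim) token embeddings; kernels: list of (width, dim)
    filters. Returns one pooled scalar per filter, concatenated.
    """
    feats = []
    for w in kernels:
        width = w.shape[0]
        # valid convolution over time: one ReLU activation per window position
        acts = [np.maximum(0.0, np.sum(emb[i:i + width] * w))
                for i in range(emb.shape[0] - width + 1)]
        feats.append(max(acts))          # max-over-time pooling
    return np.array(feats)

rng = np.random.default_rng(1)
emb = rng.standard_normal((10, 4))                 # 10 tokens, dim-4 embeddings
kernels = [rng.standard_normal((k, 4)) for k in (2, 3, 4)]
print(textcnn_features(emb, kernels).shape)        # (3,)
```

Different widths capture different n-gram patterns; the pooled vector is what a BiLSTM or classifier head would consume downstream.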
An intelligent deformation prediction method for deep foundation pits of sluices based on VMD-Self-attention-LSTM (Cited by 2)
Authors: 张伟, 仇建春, 夏国春, 姚兆仁, 吴昊, 刘占午, 王昱锦, 朱新宇. 《水电能源科学》 (Water Resources and Power; PKU Core), 2025(1): 99-102, 196 (5 pages)
Deformation monitoring data of deep foundation pits at sluices are markedly non-stationary. This paper therefore proposes a VMD-Self-attention-LSTM deformation prediction method with three main modules. First, the VMD algorithm adaptively tunes the number of decomposition modes and splits the raw deformation data into components with clear periodic patterns, effectively resolving the non-stationarity and laying the groundwork for higher prediction accuracy. Second, a Self-attention-LSTM method extends the conventional LSTM to better mine the temporal relationships in foundation-pit deformation samples and thus improve prediction accuracy. Third, the predictions of all components are reconstructed into the final prediction. A case study shows that the method effectively removes the adverse effect of non-stationarity on prediction accuracy: compared with deep-learning methods such as VMD-LSTM, Self-attention-LSTM, and LSTM, VMD-Self-attention-LSTM improves prediction accuracy by up to 41.49%, and compared with traditional machine-learning algorithms such as BP and ELM, by up to 50.43%, offering a new approach to building safety-monitoring models for deep foundation pits at sluices.
Keywords: deep foundation pit of sluice; deformation prediction; VMD; Self-attention-LSTM
Multi-scale load forecasting for industrial parks based on CNN-LSTM-Self attention (Cited by 4)
Authors: 杨澜倩, 郭锦敏, 田慧丽, 黄畅, 刘敏, 蔡阳. 《综合智慧能源》 (Integrated Intelligent Energy), 2025(2): 79-87 (9 pages)
Accurate load forecasting is essential for raising the energy efficiency and profitability of zero-carbon smart parks, yet conventional forecasting techniques are constrained by the difficulty of obtaining hourly numerical weather forecasts and by the need to forecast at multiple time scales. For the case where weather-forecast data are missing, this paper uses a convolutional neural network (CNN) to extract the coupled spatial features among multiple loads, feeds the reconstructed features into a long short-term memory (LSTM) network to extract temporal features, applies a self-attention mechanism to reinforce the extracted feature information, and finally predicts loads through a fully connected network, forming a CNN-LSTM-Self attention multi-load, multi-time-scale forecasting model. Taking a park as a case study, its cooling, heating, and electric loads are forecast 1 h, 1 d, and 1 week ahead. Results show that the CNN-LSTM-Self attention model is more accurate than CNN, LSTM, and CNN-LSTM models at every time scale; the advantage is most pronounced at the 1 h scale, where the mean absolute percentage error (MAPE) for cooling, heating, and electric loads improves on CNN-LSTM by 16.25%, 19.16%, and 10.24%, respectively.
Keywords: zero-carbon smart park; load forecasting; multiple time scales; convolutional neural network; long short-term memory network; self-attention mechanism
SwinHCAD: A Robust Multi-Modality Segmentation Model for Brain Tumors Using Transformer and Channel-Wise Attention
Authors: Seyong Jin, Muhammad Fayaz, L. Minh Dang, Hyoung-Kyu Song, Hyeonjoon Moon. Computers, Materials & Continua, 2026(1): 511-533 (23 pages)
Brain tumors require precise segmentation for diagnosis and treatment planning due to their complex morphology and heterogeneous characteristics. While MRI-based automatic brain tumor segmentation reduces the burden on medical staff and provides quantitative information, existing methodologies and recent models still struggle to accurately capture and classify the fine boundaries and diverse morphologies of tumors. To address these challenges and maximize segmentation performance, this research introduces a novel SwinUNETR-based model that integrates a new decoder block, the Hierarchical Channel-wise Attention Decoder (HCAD), into a powerful SwinUNETR encoder. The HCAD decoder block uses hierarchical features and channel-specific attention mechanisms to fuse information at different scales transmitted from the encoder and to preserve spatial details throughout the reconstruction phase. Rigorous evaluations on the recent BraTS GLI datasets demonstrate that the proposed SwinHCAD model achieves superior segmentation accuracy on both the Dice score and HD95 metrics across all tumor subregions (WT, TC, and ET) compared to baseline models. Ablation studies verify the effectiveness of the proposed HCAD decoder block and clarify the rationale and contribution of the model design. The results are expected to contribute to more efficient clinical diagnosis and treatment planning by increasing the precision of automated brain tumor segmentation.
Keywords: attention mechanism; brain tumor segmentation; channel-wise attention decoder; deep learning; medical imaging; MRI; transformer; U-Net
GFL-SAR: Graph Federated Collaborative Learning Framework Based on Structural Amplification and Attention Refinement
Authors: Hefei Wang, Ruichun Gu, Jingyu Wang, Xiaolin Zhang, Hui Wei. Computers, Materials & Continua, 2026(1): 1683-1702 (20 pages)
Graph Federated Learning (GFL) has shown great potential for privacy protection and distributed intelligence through distributed collaborative training on graph-structured data without sharing raw information. However, existing GFL approaches often lack the capability for comprehensive feature extraction and adaptive optimization, particularly in non-independent and identically distributed (non-IID) scenarios, where balancing global structural understanding and local node-level detail remains a challenge. To this end, this paper proposes a novel framework called GFL-SAR (Graph Federated Collaborative Learning Framework Based on Structural Amplification and Attention Refinement), which enhances the representation learning capability of graph data through a dual-branch collaborative design. Specifically, we propose the Structural Insight Amplifier (SIA), which utilizes an improved Graph Convolutional Network (GCN) to strengthen structural awareness and improve the modeling of topological patterns. In parallel, we propose the Attentive Relational Refiner (ARR), which employs an enhanced Graph Attention Network (GAT) to perform fine-grained modeling of node relationships and neighborhood features, thereby improving the expressiveness of local interactions and preserving critical contextual information. GFL-SAR effectively integrates multi-scale features from each branch via feature fusion and federated optimization, thereby addressing existing GFL limitations in structural modeling and feature representation. Experiments on standard benchmark datasets including Cora, Citeseer, Polblogs, and Cora_ML demonstrate that GFL-SAR achieves superior performance in classification accuracy, convergence speed, and robustness compared to existing methods, confirming its effectiveness and generalizability in GFL tasks.
Keywords: graph federated learning; GCN; GNNs; attention mechanism
DAUNet: Unsupervised Neural Network Based on Dual Attention for Clock Synchronization in Multi-Agent Wireless Ad Hoc Networks
Authors: Haihao He, Xianzhou Dong, Shuangshuang Wang, Chengzhang Zhu, Xiaotong Zhao. Computers, Materials & Continua, 2026(1): 847-869 (23 pages)
Clock synchronization has important applications in multi-agent collaboration (such as drone light shows, intelligent transportation systems, and game AI), group decision-making, and emergency rescue operations. Synchronization methods based on pulse-coupled oscillators (PCOs) provide an effective solution for clock synchronization in wireless networks. However, existing clock synchronization algorithms in multi-agent ad hoc networks struggle to meet the high-precision and high-stability requirements of synchronized clocks in group cooperation. Hence, this paper constructs a network model, named DAUNet (unsupervised neural network based on dual attention), to enhance clock synchronization accuracy in multi-agent wireless ad hoc networks. Specifically, we design an unsupervised distributed neural network framework as the backbone, building upon classical PCO-based synchronization methods. This framework resolves issues such as prolonged time-synchronization message exchange between nodes, difficulties in centralized node coordination, and challenges in distributed training. Furthermore, we introduce a dual-attention mechanism as the core module of DAUNet. By integrating a Multi-Head Attention module and a Gated Attention module, the model significantly improves information extraction capabilities while reducing computational complexity, effectively mitigating synchronization inaccuracies and instability in multi-agent ad hoc networks. To evaluate the effectiveness of the proposed model, comparative experiments and ablation studies were conducted against classical methods and existing deep learning models. The results show that, compared with deep learning networks based on DASA and LSTM, DAUNet reduces the mean normalized phase difference (NPD) by 1 to 2 orders of magnitude; compared with attention models based on additive attention and self-attention mechanisms, DAUNet's performance improves by more than ten times. This study demonstrates DAUNet's potential in advancing multi-agent ad hoc networking technologies.
Keywords: clock synchronization; deep learning; dual attention mechanism; pulse-coupled oscillator
Syntax-Aware Hierarchical Attention Networks for Code Vulnerability Detection
Authors: Yongbo Jiang, Shengnan Huang, Tao Feng, Baofeng Duan. Computers, Materials & Continua, 2026(1): 2252-2273 (22 pages)
In the context of modern software development, characterized by increasing complexity and compressed development cycles, traditional static vulnerability detection methods face prominent challenges, including high false positive rates and missed detections of complex logic, due to their over-reliance on rule templates. This paper proposes a Syntax-Aware Hierarchical Attention Network (SAHAN) model, which achieves high-precision vulnerability detection through grammar-rule-driven multi-granularity code slicing and hierarchical semantic fusion mechanisms. The SAHAN model first generates Syntax Independent Units (SIUs) by slicing the code based on the Abstract Syntax Tree (AST) and predefined grammar rules, retaining vulnerability-sensitive contexts. Then, through a hierarchical attention mechanism, a local syntax-aware layer encodes fine-grained patterns within SIUs, while a global semantic correlation layer captures vulnerability chains across SIUs, achieving synergistic modeling of syntax and semantics. Experiments show that on benchmark datasets such as QEMU, SAHAN improves detection performance by 4.8% to 13.1% on average compared to baseline models such as Devign and VulDeePecker.
Keywords: vulnerability detection; abstract syntax tree; syntax rule slicing; hierarchical attention mechanism; deep learning
Interactive Dynamic Graph Convolution with Temporal Attention for Traffic Flow Forecasting
Authors: Zitong Zhao, Zixuan Zhang, Zhenxing Niu. Computers, Materials & Continua, 2026(1): 1049-1064 (16 pages)
Reliable traffic flow prediction is crucial for mitigating urban congestion. This paper proposes the Attention-based spatiotemporal Interactive Dynamic Graph Convolutional Network (AIDGCN), a novel architecture integrating an Interactive Dynamic Graph Convolution Network (IDGCN) with Temporal Multi-Head Trend-Aware Attention. Its core innovations are IDGCN, which uniquely splits sequences into symmetric intervals for interactive feature sharing via dynamic graphs, and a novel attention mechanism incorporating convolutional operations to capture essential local traffic trends, addressing a critical gap in standard attention for continuous data. For 15- and 60-min forecasting on METR-LA, AIDGCN achieves MAEs of 0.75% and 0.39%, and RMSEs of 1.32% and 0.14%, respectively. In 60-min long-term forecasting on the PEMS-BAY dataset, AIDGCN outperforms the MRA-BGCN method by 6.28%, 4.93%, and 7.17% in terms of MAE, RMSE, and MAPE, respectively. Experimental results demonstrate the superiority of the proposed model over state-of-the-art methods.
Keywords: traffic flow prediction; interactive dynamic graph convolution; graph convolution; temporal multi-head trend-aware attention; self-attention mechanism
LS-DDI: drug-drug interaction prediction fusing LSTM and Self-Attention
Authors: 陈星鑫, 聂斌, 苗震, 杨洋. 《现代信息科技》 (Modern Information Technology), 2025(14): 21-26, 31 (7 pages)
Combining multiple drugs can cause adverse drug reactions and harm health, so predicting potential drug-drug interactions is important. This paper proposes LS-DDI, an algorithm that fuses LSTM (long short-term memory network) and Self-Attention (self-attention mechanism) to predict drug-drug interactions. Gaussian similarity is computed separately for three different features (substructure, target, and enzyme) to form similarity matrices; an LSTM then extracts contextual information, Self-Attention assigns different weights to the three features, and prediction follows. Five-fold cross-validation on two different datasets shows that LS-DDI outperforms four comparison models, demonstrating good performance. Finally, case studies on three drugs (Torasemide, Cannabidiol, and Dexamethasone) confirm the effectiveness of the proposed model in predicting unknown drug interactions.
Keywords: long short-term memory network; self-attention mechanism; drug-drug interaction; adverse drug reaction
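The Gaussian similarity step described in LS-DDI can be sketched as an RBF kernel over drug feature profiles. The bandwidth choice below follows a common Gaussian-interaction-profile convention (1 / mean squared profile norm) and is an assumption, not necessarily the paper's setting:

```python
import numpy as np

def gaussian_similarity(profiles, gamma=None):
    """Gaussian (RBF) similarity matrix between drug feature profiles.

    profiles: (n_drugs, n_features) binary or real-valued vectors.
    gamma defaults to 1 / mean squared profile norm (assumed convention).
    """
    sq = np.sum(profiles ** 2, axis=1)
    if gamma is None:
        gamma = 1.0 / sq.mean()
    # squared Euclidean distances via |a-b|^2 = |a|^2 + |b|^2 - 2 a.b
    d2 = sq[:, None] + sq[None, :] - 2.0 * profiles @ profiles.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

profiles = np.array([[1, 0, 1, 1],     # toy substructure fingerprints
                     [1, 0, 1, 0],
                     [0, 1, 0, 0]], dtype=float)
s = gaussian_similarity(profiles)
print(np.round(s, 3))
```

One such matrix per feature type (substructure, target, enzyme) would form the three-channel input the abstract describes.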
Automatic diagnosis of keratitis from low-quality slit-lamp images using feature vector quantization and self-attention mechanisms
Authors: JIANG Jiewei, XIN Yu, DING Ke, ZHU Mingmin, CHEN Yi, LI Zhongwen. Optoelectronics Letters, 2025(10): 612-618 (7 pages)
This paper proposes a novel method for the automatic diagnosis of keratitis using feature vector quantization and self-attention mechanisms (ADK_FVQSAM). First, high-level features are extracted using the DenseNet121 backbone network, followed by adaptive average pooling to scale the features to a fixed length. Subsequently, product quantization with residuals (PQR) is applied to convert continuous feature vectors into discrete feature representations, preserving essential information insensitive to image-quality variations. The quantized and original features are concatenated and fed into a self-attention mechanism to capture keratitis-related features. Finally, these enhanced features are classified through a fully connected layer. Experiments on clinical low-quality (LQ) images show that ADK_FVQSAM achieves accuracies of 87.7%, 81.9%, and 89.3% for keratitis, other corneal abnormalities, and normal corneas, respectively. Compared to DenseNet121, Swin transformer, and InceptionResNet, ADK_FVQSAM improves average accuracy by 3.1%, 11.3%, and 15.3%, respectively. These results demonstrate that ADK_FVQSAM significantly enhances keratitis recognition from LQ slit-lamp images, offering a practical approach for clinical application.
Keywords: keratitis; low-quality images; adaptive average pooling; DenseNet backbone network; self-attention mechanism; feature vector quantization; automatic diagnosis
Remaining useful life prediction of proton exchange membrane fuel cells based on improved HHO-LSTM-Self-Attention (Cited by 1)
Authors: 蒋剑, 杜董生, 苏林. 《综合智慧能源》 (Integrated Intelligent Energy), 2025(6): 47-56 (10 pages)
Proton exchange membrane fuel cells (PEMFCs) are widely used in many fields, but performance degradation lowers power output and energy-conversion efficiency and shortens service life, so accurately predicting remaining useful life (RUL) is crucial to maintaining the system, reducing cost, and ensuring a stable power supply. Based on the power-time trend of the PEMFC, a RUL prediction model is proposed that combines an improved Harris hawks optimization (HHO) algorithm, a long short-term memory (LSTM) network, and a self-attention (Self-Attention) mechanism. A time-power curve is derived from current and voltage data, and the time-power data are decomposed, denoised, and reconstructed using adaptive wavelet denoising combined with exponential smoothing. To address the LSTM's large number of training parameters and heavy computation, a logistic chaotic map is combined with the HHO algorithm to optimize the LSTM, improving training speed and prediction accuracy. Exploiting Self-Attention's ability to focus on key information and raise training accuracy, an HHO-LSTM-Self-Attention prediction model is built. Experimental results show that the model is more accurate than HHO-LSTM, LSTM, sparrow search algorithm (SSA)-LSTM, and particle swarm optimization (PSO)-LSTM prediction models.
Keywords: proton exchange membrane fuel cell; remaining useful life prediction; Harris hawks optimization algorithm; long short-term memory network; self-attention mechanism
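The logistic chaotic map used above to diversify the HHO initial population is the iteration x_{k+1} = mu * x_k * (1 - x_k). A toy sketch (population size, dimension, and seed value are hypothetical, not the paper's settings):

```python
def logistic_map_population(n, dim, x0=0.7, mu=4.0):
    """Generate an initial population in [0, 1] with the logistic chaotic map.

    x_{k+1} = mu * x_k * (1 - x_k); mu = 4 gives fully chaotic behaviour.
    x0 should avoid fixed points such as 0, 0.25, 0.5, 0.75, 1.
    """
    pop, x = [], x0
    for _ in range(n):
        row = []
        for _ in range(dim):
            x = mu * x * (1.0 - x)   # one chaotic iteration per coordinate
            row.append(x)
        pop.append(row)
    return pop

pop = logistic_map_population(n=5, dim=3)
print(len(pop), len(pop[0]))   # 5 3
```

Compared with uniform random initialization, the chaotic sequence spreads candidates more evenly through the search space, which is the usual motivation for this substitution in metaheuristics.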
Bearing fault diagnosis based on an MSCNN+Attention model
Authors: 付志鹏, 么洪飞. 《齐齐哈尔大学学报(自然科学版)》 (Journal of Qiqihar University, Natural Science Edition), 2026(1): 9-16, 43 (9 pages)
To address the weak feature-extraction ability and low diagnostic accuracy of traditional fault-diagnosis methods, a bearing fault-diagnosis model is proposed that fuses channel attention and self-attention mechanisms. The model extracts key features through multi-layer convolution and attention, performs global feature fusion with a self-attention module, builds a residual structure to strengthen feature representation, and identifies faults with a Softmax classifier. Bearing data from Case Western Reserve University are used to validate the choice of window length and optimizer; the model performs best with a window length of 1024 and the Adam optimizer (learning rate 0.001). Model performance is assessed comprehensively with accuracy, ROC curves, and confusion matrices. Experimental results show a fault-recognition accuracy of 99.4%-100%, significantly better than the RF (96.8%), GRU (97.5%), and LSTM (92.3%) models; the gain in classification accuracy is largest at a window length of 1024, and AUC exceeds 0.99 throughout, indicating that the model's feature-extraction ability and diagnostic accuracy clearly surpass traditional models.
Keywords: attention mechanism; rolling bearing; feature extraction; convolutional neural network
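The window length of 1024 discussed above refers to slicing the raw vibration signal into fixed-length samples before they enter the network. A sketch with a synthetic signal standing in for the CWRU accelerometer data (the overlap choice here is illustrative):

```python
import numpy as np

def sliding_windows(signal, length=1024, step=1024):
    """Split a 1D vibration signal into fixed-length windows.

    Each window becomes one training sample; step < length gives overlap.
    """
    n = (len(signal) - length) // step + 1
    return np.stack([signal[i * step:i * step + length] for i in range(n)])

# synthetic stand-in for a CWRU-style vibration record
signal = np.sin(np.linspace(0, 100 * np.pi, 12000))
windows = sliding_windows(signal, length=1024, step=512)  # 50% overlap
print(windows.shape)   # (22, 1024)
```

Overlapping windows multiply the number of training samples from a fixed-length record, which is a common augmentation for bearing datasets.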
Self-reduction multi-head attention module for defect recognition of power equipment in substation
Authors: Yifeng Han, Donglian Qi, Yunfeng Yan. Global Energy Interconnection, 2025(1): 82-91 (10 pages)
Safety maintenance of power equipment is of great importance in power grids, where image-processing-based defect recognition is expected to classify abnormal conditions during daily inspection. However, owing to the blurred features of defect images, current defect recognition algorithms have poor fine-grained recognition ability. Visual attention can achieve fine-grained recognition through its ability to model long-range dependencies, but it introduces extra computational complexity, especially for multi-head attention in vision transformer structures. Under these circumstances, this paper proposes a self-reduction multi-head attention module that can reduce computational complexity and be easily combined with a Convolutional Neural Network (CNN). In this manner, local and global features can be calculated simultaneously in the proposed structure, aiming to improve defect recognition performance. Specifically, the proposed self-reduction multi-head attention can reduce redundant parameters, thereby addressing the problem of limited computational resources. Experimental results obtained on a defect dataset collected from substations demonstrate the efficiency and superiority of the proposed method over other advanced algorithms.
Keywords: multi-head attention; defect recognition; power equipment; computational complexity
Double Self-Attention Based Fully Connected Feature Pyramid Network for Field Crop Pest Detection
Authors: Zijun Gao, Zheyi Li, Chunqi Zhang, Ying Wang, Jingwen Su. Computers, Materials & Continua, 2025(6): 4353-4371 (19 pages)
Pest detection techniques help reduce the frequency and scale of pest outbreaks; however, their application in actual agricultural production is still challenging owing to the problems of interspecies similarity, multiple scales, and the background complexity of pests. To address these problems, this study proposes an FD-YOLO pest target detection model. The FD-YOLO model uses a Fully Connected Feature Pyramid Network (FC-FPN) instead of a PANet in the neck, which can adaptively fuse multi-scale information so that the model retains small-scale target features in the deep layers, enhances large-scale target features in the shallow layers, and improves the reuse of effective features. A dual self-attention module (DSA) is then embedded in the C3 module of the neck, which captures dependencies between information in both spatial and channel dimensions, effectively enhancing global features. Sixteen types of pests that widely damage field crops were selected from the IP102 pest dataset and, after data supplementation and augmentation, used as the dataset. Experimental results show that FD-YOLO's mAP@0.5 improved by 6.8% over YOLOv5, reaching 82.6%, and is 5%-19.1% better than other state-of-the-art models. This method provides an effective new approach for detecting similar or multi-scale pests in field crops.
Keywords: pest detection; YOLOv5; feature pyramid network; transformer attention module
A planetary gearbox fault diagnosis method based on wavelet transform and Self-Attention-CNN
Authors: 唐尚冲, 魏邦旭. 《起重运输机械》 (Hoisting and Conveying Machinery), 2025(12): 38-43, 62 (7 pages)
Traditional fault-diagnosis techniques struggle to resolve planetary-gearbox fault signals quickly and accurately. This paper proposes a planetary-gearbox fault-diagnosis method based on wavelet transform and a self-attention-enhanced convolutional neural network (CNN), effectively combining Self-Attention's ability to model global dependencies with the CNN's strength in local feature processing. First, the collected vibration data are converted by wavelet transform into 2D time-frequency images containing the signal; an improved CNN model then processes the image data to diagnose the fault state of the planetary gearbox. Experimental results indicate that the method is stable and accurate and has application potential in planetary-gearbox fault diagnosis.
Keywords: wavelet transform; self-attention; CNN; fault diagnosis; planetary gearbox
Short-term load forecasting based on CEEMDAN and HBA-BiGRU-SelfAttention
Authors: 朱婷, 颜七笙. 《南京信息工程大学学报》 (Journal of Nanjing University of Information Science & Technology; PKU Core), 2025(4): 478-493 (16 pages)
Electric load data are nonlinear and time-dependent, among other factors, which limits forecasting accuracy. This paper proposes a short-term load forecasting model based on CEEMDAN and HBA-BiGRU-SelfAttention. First, a random forest (RF) algorithm extracts features from meteorological factors, preserving data characteristics while reducing data complexity. Second, complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) decomposes the raw load data into several relatively stationary modal components. The feature-extracted meteorological factors and modal components are then used as inputs to a BiGRU (bidirectional gated recurrent unit)-SelfAttention (self-attention mechanism) model for prediction; because optimal hyperparameters for the BiGRU-SelfAttention model are hard to select, the honey badger algorithm (HBA) is introduced to optimize them. Finally, the sub-series predictions are summed into the final forecast. Comparative experiments on real load data from a region show that the proposed model achieves high prediction accuracy and can provide a reliable basis for stable power-system operation.
Keywords: short-term load forecasting; random forest; complete ensemble empirical mode decomposition with adaptive noise; honey badger algorithm; bidirectional gated recurrent unit; self-attention mechanism