Journal Articles
468 articles found
Efficient Parameterization for Knowledge Graph Embedding Using Hierarchical Attention Network
1
Authors: Zhen-Yu Chen, Feng-Chi Liu, Xin Wang, Cheng-Hsiung Lee, Ching-Sheng Lin. Computers, Materials & Continua, 2025, No. 3, pp. 4287-4300 (14 pages)
In the domain of knowledge graph embedding, conventional approaches typically transform entities and relations into continuous vector spaces. However, parameter efficiency becomes increasingly crucial when dealing with large-scale knowledge graphs that contain vast numbers of entities and relations. In particular, resource-intensive embeddings often lead to increased computational costs and may limit scalability and adaptability in practical environments, such as low-resource settings or real-world applications. This paper explores an approach to knowledge graph representation learning that leverages small, reserved entity and relation sets for parameter-efficient embedding. We introduce a hierarchical attention network designed to refine and maximize the representational quality of embeddings by selectively focusing on these reserved sets, thereby reducing model complexity. Empirical assessments validate that our model achieves high performance on the benchmark dataset with fewer parameters and smaller embedding dimensions. Ablation studies further highlight the impact and contribution of each component in the proposed hierarchical attention structure.
Keywords: knowledge graph embedding; parameter efficiency; representation learning; reserved entity and relation sets; hierarchical attention network
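The abstract gives no implementation details, but the core idea of a reserved-set embedding (each entity is an attention-weighted mixture over a small shared set of trainable vectors, so per-entity cost scales with the reserved-set size rather than the full embedding dimension) can be sketched as follows. All names, shapes, and sizes (`reserved`, `entity_logits`, `embed`) are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

num_entities = 10_000   # entities in the knowledge graph
num_reserved = 16       # small reserved set shared by all entities
dim = 256               # embedding dimension

# Trainable parameters: per-entity cost is num_reserved logits (16)
# instead of dim floats (256) in a full embedding table.
reserved = rng.normal(size=(num_reserved, dim))
entity_logits = rng.normal(size=(num_entities, num_reserved))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def embed(entity_ids):
    """Compose entity embeddings as attention-weighted sums of reserved vectors."""
    weights = softmax(entity_logits[entity_ids])  # (batch, num_reserved)
    return weights @ reserved                     # (batch, dim)

emb = embed(np.array([0, 1, 2]))
print(emb.shape)  # (3, 256)
```

The hierarchical attention of the paper would replace the single softmax scoring step with stacked attention layers over the reserved sets; this sketch only shows the parameter-sharing principle.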
Interactive Dynamic Graph Convolution with Temporal Attention for Traffic Flow Forecasting
2
Authors: Zitong Zhao, Zixuan Zhang, Zhenxing Niu. Computers, Materials & Continua, 2026, No. 1, pp. 1049-1064 (16 pages)
Reliable traffic flow prediction is crucial for mitigating urban congestion. This paper proposes the Attention-based spatiotemporal Interactive Dynamic Graph Convolutional Network (AIDGCN), a novel architecture integrating an Interactive Dynamic Graph Convolution Network (IDGCN) with temporal multi-head trend-aware attention. Its core innovations are IDGCN, which splits sequences into symmetric intervals for interactive feature sharing via dynamic graphs, and a novel attention mechanism incorporating convolutional operations to capture essential local traffic trends, addressing a critical gap in standard attention for continuous data. For 15- and 60-min forecasting on METR-LA, AIDGCN achieves MAEs of 0.75% and 0.39%, and RMSEs of 1.32% and 0.14%, respectively. In 60-min long-term forecasting on the PEMS-BAY dataset, AIDGCN outperforms the MRA-BGCN method by 6.28%, 4.93%, and 7.17% in terms of MAE, RMSE, and MAPE, respectively. Experimental results demonstrate the superiority of the proposed model over state-of-the-art methods.
Keywords: traffic flow prediction; interactive dynamic graph convolution; graph convolution; temporal multi-head trend-aware attention; self-attention mechanism
Multi-Head Attention Enhanced Parallel Dilated Convolution and Residual Learning for Network Traffic Anomaly Detection (Cited 1 time)
3
Authors: Guorong Qi, Jian Mao, Kai Huang, Zhengxian You, Jinliang Lin. Computers, Materials & Continua, 2025, No. 2, pp. 2159-2176 (18 pages)
Abnormal network traffic, as a frequent security risk, requires a series of techniques to categorize and detect it. Existing network traffic anomaly detection still faces challenges: the inability to fully extract local and global features, and the lack of effective mechanisms to capture complex interactions between features. Additionally, when the receptive field is enlarged to obtain deeper feature representations, the reliance on increasing network depth leads to a significant increase in computational resource consumption, affecting detection efficiency and performance. To address these issues, this paper first proposes a network traffic anomaly detection model based on parallel dilated convolution and residual learning (Res-PDC). To better explore the interactive relationships between features, traffic samples are converted into a two-dimensional matrix. A module combining parallel dilated convolutions and residual learning (res-pdc) was designed to extract local and global traffic features at different scales. By utilizing res-pdc modules with different dilation rates, spatial features at different scales can be captured effectively and feature dependencies spanning wider regions explored without increasing computational resources. Second, to focus and integrate the information in different feature subspaces and further enhance the interactions among features, multi-head attention is added to Res-PDC, resulting in the final model: multi-head attention enhanced parallel dilated convolution and residual learning (MHA-Res-PDC). Finally, comparisons with other machine learning and deep learning algorithms are conducted on the NSL-KDD and CIC-IDS-2018 datasets. The experimental results demonstrate that the proposed method can effectively improve detection performance.
Keywords: network traffic anomaly detection; multi-head attention; parallel dilated convolution; residual learning
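The claim that different dilation rates enlarge the receptive field without added depth is easy to verify numerically. The sketch below (a generic single-channel 1-D dilated convolution and the standard receptive-field formula, not the paper's res-pdc module) shows how stacking dilations 1, 2, 4 widens coverage:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Same'-padded 1-D dilated convolution (single channel, illustrative)."""
    k = len(kernel)
    span = (k - 1) * dilation          # input span touched by one output
    pad = span // 2
    xp = np.pad(x, (pad, span - pad))
    return np.array([
        sum(kernel[j] * xp[i + j * dilation] for j in range(k))
        for i in range(len(x))
    ])

def receptive_field(kernel_size, dilations):
    """Receptive field of stacked stride-1 dilated convolutions."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Three stacked 3-tap layers with dilations 1, 2, 4 cover 15 inputs,
# versus 7 for three ordinary (dilation-1) layers.
print(receptive_field(3, [1, 1, 1]))  # 7
print(receptive_field(3, [1, 2, 4]))  # 15
```

Running the layers in parallel branches, as Res-PDC does, lets each branch observe a different scale at the same depth.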
Joint Extraction of Tibetan Medical Entity Relations Based on TiBERT and Multi-Head Attention
4
Authors: 仁欠扎西, 安见才让, 曼拉才让. 电脑与电信 (Computer & Telecommunication), 2025, No. 10, pp. 45-49, 54 (6 pages)
Entity relation extraction is one of the key tasks in natural language processing, and its application in the field of Tibetan medicine is of great significance for building Tibetan medicine knowledge graphs, intelligent diagnosis assistance, and drug development. For the task of entity relation extraction from Tibetan medical texts, a joint extraction method based on the pre-trained TiBERT model with multi-head attention is proposed. The method encodes Tibetan medical texts with the TiBERT model to generate feature vectors containing contextual information, then uses a multi-head attention mechanism to strengthen the feature representations and capture the associations between different entities. Experimental results show that the model achieves an F1 score of 81.81% on a Tibetan medicine text dataset, significantly outperforming the comparison models and demonstrating its effectiveness.
Keywords: TiBERT model; multi-head attention; entity relation extraction; natural language processing
Event-Aware Sarcasm Detection in Chinese Social Media Using Multi-Head Attention and Contrastive Learning
5
Authors: Kexuan Niu, Xiameng Si, Xiaojie Qi, Haiyan Kang. Computers, Materials & Continua, 2025, No. 10, pp. 2051-2070 (20 pages)
Sarcasm detection is a complex and challenging task, particularly in the context of Chinese social media, where it exhibits strong contextual dependencies and cultural specificity. To address the limitations of existing methods in capturing the implicit semantics and contextual associations in sarcastic expressions, this paper proposes an event-aware model for Chinese sarcasm detection, leveraging a multi-head attention (MHA) mechanism and contrastive learning (CL) strategies. The proposed model employs a dual-path Bidirectional Encoder Representations from Transformers (BERT) encoder to process comment text and event context separately and integrates an MHA mechanism to facilitate deep interactions between the two, thereby capturing multidimensional semantic associations. Additionally, a CL strategy is introduced to enhance feature representation capabilities, further improving the model's performance in handling class imbalance and complex contextual scenarios. The model achieves state-of-the-art performance on the Chinese sarcasm dataset, with significant improvements in accuracy (79.55%), F1-score (84.22%), and area under the curve (AUC, 84.35%).
Keywords: sarcasm detection; event-aware; multi-head attention; contrastive learning; NLP
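Most entries in this listing build on the same standard multi-head attention block. For reference, a minimal NumPy rendering of scaled dot-product attention split across h heads (the generic textbook form, not the code of any paper listed here; weight shapes are illustrative) is:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, num_heads):
    """Scaled dot-product self-attention over num_heads subspaces.
    X: (seq, d_model); Wq/Wk/Wv/Wo: (d_model, d_model)."""
    seq, d_model = X.shape
    d_head = d_model // num_heads

    def split(M):  # (seq, d_model) -> (num_heads, seq, d_head)
        return M.reshape(seq, num_heads, d_head).transpose(1, 0, 2)

    Q, K, V = split(X @ Wq), split(X @ Wk), split(X @ Wv)
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)  # (heads, seq, seq)
    out = softmax(scores) @ V                            # (heads, seq, d_head)
    out = out.transpose(1, 0, 2).reshape(seq, d_model)   # concatenate heads
    return out @ Wo

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(5, d))
W = [rng.normal(size=(d, d)) for _ in range(4)]
Y = multi_head_attention(X, *W, num_heads=2)
print(Y.shape)  # (5, 8)
```

Each head attends in its own d_head-dimensional subspace, which is what the abstracts above mean by "information from different representation subspaces".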
Self-reduction multi-head attention module for defect recognition of power equipment in substation
6
Authors: Yifeng Han, Donglian Qi, Yunfeng Yan. Global Energy Interconnection, 2025, No. 1, pp. 82-91 (10 pages)
Safety maintenance of power equipment is of great importance in power grids, in which image-processing-based defect recognition is supposed to classify abnormal conditions during daily inspection. However, owing to the blurred features of defect images, current defect recognition algorithms have poor fine-grained recognition ability. Visual attention can achieve fine-grained recognition through its ability to model long-range dependencies, but it introduces extra computational complexity, especially for multi-head attention in vision transformer structures. Under these circumstances, this paper proposes a self-reduction multi-head attention module that reduces computational complexity and can be easily combined with a Convolutional Neural Network (CNN). In this manner, local and global features can be calculated simultaneously in the proposed structure, aiming to improve defect recognition performance. Specifically, the proposed self-reduction multi-head attention can reduce redundant parameters, thereby addressing the problem of limited computational resources. Experimental results were obtained on a defect dataset collected from substations. The results demonstrate the efficiency and superiority of the proposed method over other advanced algorithms.
Keywords: multi-head attention; defect recognition; power equipment; computational complexity
MAMGBR: Group-Buying Recommendation Model Based on Multi-Head Attention Mechanism and Multi-Task Learning
7
Authors: Zongzhe Xu, Ming Yu. Computers, Materials & Continua, 2025, No. 8, pp. 2805-2826 (22 pages)
As the group-buying model shows significant progress in attracting new users, enhancing user engagement, and increasing platform profitability, providing personalized recommendations for group-buying users has emerged as a new challenge in the field of recommendation systems. This paper introduces a group-buying recommendation model based on multi-head attention mechanisms and multi-task learning, termed the Multi-head Attention Mechanisms and Multi-task Learning Group-Buying Recommendation (MAMGBR) model, specifically designed to optimize group-buying recommendations on e-commerce platforms. The core dataset of this study comes from the Chinese maternal and infant e-commerce platform "Beibei," encompassing approximately 430,000 successful group-buying actions and over 120,000 users. The model focuses on two main tasks: recommending items for group organizers (Task I) and recommending participants for a given group-buying event (Task II). In model evaluation, MAMGBR achieves an MRR@10 of 0.7696 for Task I, marking a 20.23% improvement over baseline models. Furthermore, in Task II, where complex interaction patterns prevail, MAMGBR utilizes auxiliary loss functions to effectively model the multifaceted roles of users, items, and participants, leading to a 24.08% increase in MRR@100 under a 1:99 sample ratio. Experimental results show that, compared to benchmark models such as NGCF and EATNN, MAMGBR's integration of multi-head attention mechanisms, expert networks, and gating mechanisms enables more accurate modeling of user preferences and social associations within group-buying scenarios, significantly enhancing recommendation accuracy and platform group-buying success rates.
Keywords: group-buying recommendation; multi-head attention mechanism; multi-task learning
SSA-LSTM-Multi-Head Attention Modelling Approach for Prediction of Coal Dust Maximum Explosion Pressure Based on the Synergistic Effect of Particle Size and Concentration
8
Authors: Yongli Liu, Weihao Li, Haitao Wang, Taoren Du. Computer Modeling in Engineering & Sciences, 2025, No. 5, pp. 2261-2286 (26 pages)
Coal dust explosions are severe safety accidents in coal mine production, posing significant threats to life and property. Predicting the maximum explosion pressure (Pm) of coal dust using deep learning models can effectively assess potential risks and provide a scientific basis for preventing coal dust explosions. In this study, a 20-L explosion sphere apparatus was used to test the maximum explosion pressure of coal dust under seven particle sizes and ten mass concentrations (Cdust), resulting in a dataset of 70 experimental groups. Through Spearman correlation analysis and random forest feature selection, particle size (D10, D20, D50) and mass concentration (Cdust) were identified as critical feature parameters among the ten initial parameters of the coal dust samples. On this basis, a hybrid Long Short-Term Memory (LSTM) network model incorporating a multi-head attention mechanism and the Sparrow Search Algorithm (SSA) was proposed to predict the maximum explosion pressure of coal dust. The results demonstrate that the SSA-LSTM-Multi-Head Attention model excels at this prediction task: it achieved a coefficient of determination (R²), root mean square error (RMSE), mean absolute percentage error (MAPE), and mean absolute error (MAE) of 0.9841, 0.0030, 0.0074, and 0.0049, respectively, on the training set, and 0.9743, 0.0087, 0.0108, and 0.0069, respectively, on the testing set. Compared to artificial neural networks (ANN), random forest (RF), support vector machines (SVM), particle swarm optimized SVM (PSO-SVM), and the traditional single-model LSTM, the SSA-LSTM-Multi-Head Attention model demonstrated superior generalization capability and prediction accuracy. The findings not only advance the application of deep learning to coal dust explosion prediction but also provide robust technical support for the prevention and risk assessment of coal dust explosions.
Keywords: coal dust explosion; deep learning; maximum explosion pressure; predictive model; SSA-LSTM; multi-head attention mechanism
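The four evaluation metrics reported in this entry (and reused by several others in the list) are standard; their plain definitions can be computed as follows (textbook formulas, not the authors' evaluation script):

```python
import math

def regression_metrics(y_true, y_pred):
    """R², RMSE, MAE, and MAPE from paired observations and predictions."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return {
        "R2": 1 - ss_res / ss_tot,
        "RMSE": math.sqrt(ss_res / n),
        "MAE": sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n,
        # MAPE assumes no zero targets (explosion pressures are positive)
        "MAPE": sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / n,
    }

m = regression_metrics([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
print(m["R2"], m["RMSE"])  # 1.0 0.0
```

Note that MAPE is scale-free while RMSE and MAE carry the units of Pm, which is why papers report them together.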
A local-global dynamic hypergraph convolution with multi-head flow attention for traffic flow forecasting
9
Authors: ZHANG Hong, LI Yang, LUO Shengjun, ZHANG Pengcheng, ZHANG Xijun, YI Min. High Technology Letters, 2025, No. 3, pp. 246-256 (11 pages)
Traffic flow prediction is a crucial element of intelligent transportation systems. However, accurate traffic flow prediction is quite challenging because of its highly nonlinear, complex, and dynamic characteristics. To address the difficulties in simultaneously capturing local and global dynamic spatiotemporal correlations in traffic flow, as well as the high time complexity of existing models, a multi-head flow attention-based local-global dynamic hypergraph convolution (MFA-LGDHC) prediction model is proposed, which consists of a multi-head flow attention (MHFA) mechanism, a graph convolution network (GCN), and local-global dynamic hypergraph convolution (LGHC). MHFA is utilized to extract the time dependency of traffic flow and reduce the time complexity of the model. GCN is employed to capture the spatial dependency of traffic flow. LGHC utilizes down-sampling convolution and isometric convolution to capture the local and global spatial dependencies of traffic flow, and dynamic hypergraph convolution is used to model the dynamic higher-order relationships of the traffic road network. Experimental results indicate that the MFA-LGDHC model outperforms current popular baseline models and exhibits good prediction performance.
Keywords: traffic flow prediction; multi-head flow attention; graph convolution; hypergraph learning; dynamic spatio-temporal properties
Coal Price Prediction Based on a Combined GOA-AE-LSTM-Attention Model
10
Authors: 王琦, 郑晓亮. 安阳工学院学报 (Journal of Anyang Institute of Technology), 2025, No. 6, pp. 65-73 (9 pages)
As an important economic indicator of the energy market, the price of thermal coal is volatile, and its fluctuations have far-reaching effects on the macro economy, energy structure regulation, and related industrial chains. To address the insufficient consideration of historical dependence, key time points, and dynamic changes in coal price trends, the authors propose a combined prediction model, GOA-AE-LSTM-Attention, built from the goose optimization algorithm (GOA), an autoencoder (AE), a long short-term memory (LSTM) network, and an efficient additive attention mechanism. The model takes historical coal price data as input, extracts latent temporal features with the autoencoder, captures long-term dependencies with the LSTM, and introduces the attention mechanism to enhance the model's perception of features at key time points; GOA is used to optimize the model's hyperparameters and improve prediction performance. Coal price data from Caofeidian Port in Hebei Province, Huainan in Anhui Province, and Baotou in Inner Mongolia were selected for cross-regional testing to verify the model's generalization ability. The results show that the proposed combined model achieves good prediction accuracy and robustness in all three regions, with better adaptability and wider applicability than the comparison models.
Keywords: thermal coal; coal price prediction; autoencoder; efficient additive attention mechanism
Entity Relation Classification Based on Multi-Head Attention and Bi-LSTM (Cited 12 times)
11
Authors: 刘峰, 高赛, 于碧辉, 郭放达. 计算机系统应用 (Computer Systems & Applications), 2019, No. 6, pp. 118-124 (7 pages)
Relation classification is an important task in natural language processing that provides technical support for knowledge graph construction, question answering systems, and information retrieval. Compared with traditional relation classification methods, models based on neural networks and attention mechanisms achieve better performance on various relation classification tasks. Most previous models adopt a single-layer attention mechanism, so their feature representations are relatively limited. Building on existing research, this paper introduces a multi-head attention mechanism so that the model can obtain more information about the sentence from different representation subspaces and improve its feature representation ability. In addition to the word vectors and position vectors used as network input, dependency syntactic features and relative core-predicate dependency features are further introduced, where the syntactic features include the dependency relation value of the current word and the position of its parent node, enabling the model to obtain more syntactic information about the text. Experimental results on the SemEval-2010 Task 8 dataset show that the proposed method further improves performance compared with previous deep learning models.
Keywords: relation classification; Bi-LSTM; syntactic features; self-attention; multi-head attention
A PCB Defect Detection Algorithm Based on EfficientNetV2 (Cited 3 times)
12
Authors: 尹嘉超, 吕耀文, 索科, 黄玺. 计算机辅助设计与图形学学报 (Journal of Computer-Aided Design & Computer Graphics, PKU Core), 2025, No. 7, pp. 1260-1269
A printed circuit board (PCB) is a high-precision electronic component whose quality has an important impact on electronic products. Existing PCB defect detection algorithms suffer from limited detection accuracy, and in particular from imprecise defect localization. To address these problems, a PCB defect detection algorithm based on EfficientNetV2 is proposed. Built on Faster R-CNN, it adopts EfficientNetV2_M, which has stronger feature extraction ability, as the backbone, and optimizes the FPN feature fusion network with an efficient channel attention (ECA) mechanism to improve the extraction of detail information. Experimental results on the PCB defect dataset released by the Open Lab on Human Robot Interaction of Peking University show that, compared with LWN-Net, the best-performing existing PCB defect detection algorithm, the improved algorithm raises mAP from 99.58% to 99.66% at IoU=0.50 and from 52.6% to 79.4% at IoU=[0.50:0.95]. The network improves PCB detection accuracy while solving the problem of imprecise defect localization, achieving high-precision PCB defect detection of practical significance. The code is open-sourced at https://github.com/ChaO989/Defect_detection.
Keywords: printed circuit board; EfficientNetV2; defect detection; Faster R-CNN; efficient channel attention
Application of the improved dung beetle optimizer, multi-head attention and hybrid deep learning algorithms to groundwater depth prediction in the Ningxia area, China (Cited 1 time)
13
Authors: Jiarui Cai, Bo Sun, Huijun Wang, Yi Zheng, Siyu Zhou, Huixin Li, Yanyan Huang, Peishu Zong. Atmospheric and Oceanic Science Letters, 2025, No. 1, pp. 18-23 (6 pages)
Due to the lack of accurate data and complex parameterization, the prediction of groundwater depth is a challenge for numerical models. Machine learning can effectively solve this issue and has been proven useful for predicting groundwater depth in many areas. In this study, two new models are applied to the prediction of groundwater depth in the Ningxia area, China. The two models combine the improved dung beetle optimizer (DBO) algorithm with two deep learning models: the Multi-head Attention-Convolutional Neural Network-Long Short-Term Memory network (MH-CNN-LSTM) and the Multi-head Attention-Convolutional Neural Network-Gated Recurrent Unit (MH-CNN-GRU). The models with DBO show better prediction performance, with larger R (correlation coefficient) and RPD (residual prediction deviation) and lower RMSE (root-mean-square error). Compared with the models with the original DBO, the R and RPD of the models with the improved DBO increase by over 1.5%, and the RMSE decreases by over 1.8%, indicating better prediction results. In addition, compared with the multiple linear regression model, a traditional statistical model, the deep learning models have better prediction performance.
Keywords: groundwater depth; multi-head attention; improved dung beetle optimizer; CNN-LSTM; CNN-GRU; Ningxia
An Intelligent Framework for Resilience Recovery of FANETs with Spatio-Temporal Aggregation and Multi-Head Attention Mechanism (Cited 1 time)
14
Authors: Zhijun Guo, Yun Sun, Ying Wang, Chaoqi Fu, Jilong Zhong. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 2375-2398 (24 pages)
Due to the time-varying topology and possible disturbances in a conflict environment, it is still challenging to maintain the mission performance of flying ad hoc networks (FANETs), which limits the application of Unmanned Aerial Vehicle (UAV) swarms in harsh environments. This paper proposes an intelligent framework to quickly recover the cooperative coverage mission by aggregating the historical spatio-temporal network with an attention mechanism. A mission resilience metric is introduced in conjunction with connectivity and coverage status information to simplify the optimization model. A spatio-temporal node pooling method is proposed to ensure that all node location features can be updated after destruction by capturing the temporal network structure. Combined with the corresponding Laplacian matrix as a hyperparameter, a recovery algorithm based on a multi-head attention graph network is designed to achieve rapid recovery. Simulation results show that the proposed framework facilitates rapid recovery of connectivity and coverage more effectively than existing studies: average connectivity and coverage are improved by 17.92% and 16.96%, respectively, compared with the state-of-the-art model. Furthermore, an ablation study compares the contributions of the individual improvements. The proposed model can be used to support resilient network design for real-time mission execution.
Keywords: resilience; cooperative mission; FANET; spatio-temporal node pooling; multi-head attention graph network
Using Recurrent Neural Network Structure and Multi-Head Attention with Convolution for Fraudulent Phone Text Recognition (Cited 1 time)
15
Authors: Junjie Zhou, Hongkui Xu, Zifeng Zhang, Jiangkun Lu, Wentao Guo, Zhenye Li. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 8, pp. 2277-2297 (21 pages)
Fraud cases have become a social risk, and people's property security is greatly threatened. In recent studies, many promising algorithms have been developed for social media offensive text recognition and sentiment analysis, and these algorithms are also suitable for fraudulent phone text recognition. Compared to those tasks, however, the semantics of fraudulent words are more complex and more difficult to distinguish. Most text classification research uses Recurrent Neural Networks (RNNs) and their variants, Convolutional Neural Networks (CNNs), and hybrid neural networks to extract text features, but a single network or a simple network combination cannot obtain sufficiently rich characteristic knowledge of fraudulent phone texts. Therefore, a new model is proposed in this paper. In fraudulent phone text, the knowledge the model can learn includes the sequence structure of sentences, the correlations between words, the correlations of contextual semantics, and the features of keywords in sentences. The new model combines a bidirectional Long Short-Term Memory network (BiLSTM) or a bidirectional Gated Recurrent Unit (BiGRU) with a multi-head attention mechanism module with convolution, and a normalization layer is added after the output of the final hidden layer. BiLSTM or BiGRU is used to build the encoding and decoding layers. The multi-head attention mechanism module with convolution (MHAC) enhances the model's ability to learn global interaction information and multi-granularity local interaction information in fraudulent sentences. A fraudulent phone text dataset was produced for this paper, and the THUCNews dataset and the fraudulent phone text dataset are used in the experiments. Experimental results show that, compared with the baseline models, the proposed model (LMHACL) achieves the best results in terms of accuracy, precision, recall, and F1-score on both datasets, with all performance indexes on the fraudulent phone text dataset above 0.94.
Keywords: BiLSTM; BiGRU; multi-head attention mechanism; CNN
Multi-head attention-based long short-term memory model for speech emotion recognition (Cited 1 time)
16
Authors: Zhao Yan, Zhao Li, Lu Cheng, Li Sunan, Tang Chuangao, Lian Hailun. Journal of Southeast University (English Edition) (EI, CAS), 2022, No. 2, pp. 103-109 (7 pages)
To make full use of information from different representation subspaces, a multi-head attention-based long short-term memory (LSTM) model is proposed in this study for speech emotion recognition (SER). The proposed model uses frame-level features and takes the temporal information of emotional speech as the input of the LSTM layer. A multi-head time-dimension attention (MHTA) layer is employed to linearly project the output of the LSTM layer into different subspaces for reduced-dimension context vectors. To provide relatively vital information from other dimensions, the output of MHTA, the output of feature-dimension attention, and the last time-step output of the LSTM are combined to form multiple context vectors as the input of the fully connected layer. To improve the performance of the multiple vectors, feature-dimension attention is applied to the all-time output of the first LSTM layer. The proposed model was evaluated on the eNTERFACE and GEMEP corpora. The results indicate that it outperforms LSTM by 14.6% and 10.5% on eNTERFACE and GEMEP, respectively, proving its effectiveness in SER tasks.
Keywords: speech emotion recognition; long short-term memory (LSTM); multi-head attention mechanism; frame-level features; self-attention
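The time-dimension attention described in this entry collapses a variable-length sequence of per-frame hidden states into a fixed-size context vector. A single-head illustrative sketch (the scoring vector `w` and function names are assumptions; the paper's MHTA projects into several such subspaces) could look like:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def time_attention_pool(H, w):
    """Collapse per-frame hidden states H (T, d) into one context vector
    using a learned scoring vector w (d,): score each frame, softmax over
    time, and return the weighted sum of frames."""
    scores = H @ w           # (T,) one relevance score per frame
    alpha = softmax(scores)  # attention distribution over time steps
    return alpha @ H         # (d,) context vector

rng = np.random.default_rng(1)
H = rng.normal(size=(50, 32))  # e.g. 50 frames of LSTM output, 32 units
context = time_attention_pool(H, rng.normal(size=32))
print(context.shape)  # (32,)
```

A multi-head version would apply several independent `w` projections and concatenate the resulting context vectors before the fully connected layer.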
Pyramid–MixNet: Integrate Attention into Encoder-Decoder Transformer Framework for Automatic Railway Surface Damage Segmentation
17
Authors: Hui Luo, Wenqing Li, Wei Zeng. Computers, Materials & Continua, 2025, No. 7, pp. 1567-1580 (14 pages)
The rail surface is a critical component of high-speed railway infrastructure, and surface damage directly affects train operational stability and safety. Existing methods face limitations in accuracy and speed for small-sample, multi-category, and multi-scale target segmentation tasks. To address these challenges, this paper proposes Pyramid-MixNet, an intelligent segmentation model for high-speed rail surface damage, leveraging dataset construction and expansion alongside a feature-pyramid-based encoder-decoder network with multi-attention mechanisms. The encoding network integrates Spatial Reduction Masked Multi-Head Attention (SRMMHA) to enhance global feature extraction while reducing trainable parameters. The decoding network incorporates Mix-Attention (MA), enabling multi-scale structural understanding and cross-scale token group correlation learning. Experimental results demonstrate that the proposed method achieves 62.17% average segmentation accuracy, an 80.28% damage Dice coefficient, and 56.83 FPS, meeting real-time detection requirements. The model's high accuracy and scene adaptability significantly improve the detection of small-scale and complex multi-scale rail damage, offering practical value for real-time monitoring in high-speed railway maintenance systems.
Keywords: pyramid vision transformer; encoder-decoder architecture; railway damage segmentation; masked multi-head attention; mix-attention
Structured Multi-Head Attention Stock Index Prediction Method Based Adaptive Public Opinion Sentiment Vector
18
Authors: Cheng Zhao, Zhe Peng, Xuefeng Lan, Yuefeng Cen, Zuxin Wang. Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 1503-1523 (21 pages)
This study examines the impact of short-term public opinion sentiment on the secondary market, focusing on the potential for such sentiment to cause dramatic stock price fluctuations and increase investment risk. Quantifying investor sentiment indicators and analyzing their persistent impact has been a complex and significant area of research. In this paper, a structured multi-head attention stock index prediction method based on an adaptive public opinion sentiment vector is proposed. The method transforms numerous investor comments on social platforms over time into public opinion sentiment vectors expressing complex sentiments, then analyzes the continuous impact of these vectors on the market by aggregating public opinion data via a structured multi-head attention mechanism. Experimental results demonstrate that the public opinion sentiment vector provides more comprehensive feedback on market sentiment than traditional sentiment polarity analysis, and that the multi-head attention mechanism improves prediction accuracy through attention convergence on each type of input information separately. The mean absolute percentage error (MAPE) of the proposed method is 0.463%, a reduction of 0.294% compared to the benchmark attention algorithm. Additionally, market backtesting yields a return of 24.560%, an improvement of 8.202% over the benchmark algorithm. These results suggest that a market trading strategy based on this method has the potential to improve trading profits.
Keywords: public opinion sentiment; structured multi-head attention; stock index prediction; deep learning
Discharge Summaries Based Sentiment Detection Using Multi-Head Attention and CNN-BiGRU
19
Authors: Samer Abdulateef Waheeb. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 7, pp. 981-998 (18 pages)
Automatic extraction of patient health information from unstructured discharge summaries remains challenging. Discharge-summary documents contain various aspects of the patient's health condition that can be used to examine the quality of treatment and thereby improve decision-making in the medical field. Using sentiment dictionaries and feature engineering, researchers have primarily mined semantic text features, but choosing and designing features requires substantial manual effort. The proposed approach is an unsupervised deep learning model that learns a set of clusters embedded in the latent space. A composite model including Active Learning (AL), a Convolutional Neural Network (CNN), BiGRU, and multi-head attention, called ACBMA in this research, is designed to measure the quality of treatment based on sentiment detection in discharge summaries. The CNN is utilized to extract local features of the text vectors, and the BiGRU network extracts the text's global features, addressing the issues that a single CNN cannot obtain global semantic information and that the traditional Recurrent Neural Network (RNN) suffers from vanishing gradients. Experiments demonstrate the effectiveness of the suggested method: ACBMA achieves results comparable to state-of-the-art methods in sentiment detection and outperforms them on accuracy benchmarks. Several algorithm studies ultimately determined that the ACBMA method is more precise for discharge summary sentiment analysis.
Keywords: sentiment analysis; lexicon; discharge summaries; active learning; multi-head attention mechanism
Posture Detection of Heart Disease Using Multi-Head Attention Vision Hybrid(MHAVH)Model
20
Authors: Hina Naz, Zuping Zhang, Mohammed Al-Habib, Fuad A. Awwad, Emad A. A. Ismail, Zaid Ali Khan. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 2673-2696 (24 pages)
Cardiovascular disease is the leading cause of death globally. It causes loss of heart muscle and the death of heart cells, sometimes damaging heart functionality. A person's life may depend on receiving timely assistance as soon as possible; thus, the death rate can be reduced by early detection of heart attack (HA) symptoms. In the United States alone, an estimated 610,000 people die from heart attacks each year, accounting for one in every four fatalities. By identifying and reporting heart attack symptoms early, it is possible to reduce damage and save many lives. Our objective is to devise an algorithm aimed at helping individuals, particularly elderly individuals living independently, to safeguard their lives. To address these challenges, we employ deep learning techniques, specifically a vision transformer (ViT). However, the ViT has a significant overhead cost in memory consumption and computational complexity because of scaled dot-product attention, and since transformer performance typically relies on large-scale or adequate data, adapting the ViT to smaller datasets is challenging. In response, we propose a three-in-one stream model, the Multi-Head Attention Vision Hybrid (MHAVH). This model integrates a real-time posture recognition framework to identify chest-pain postures indicative of heart attacks, using transfer learning techniques such as ResNet-50 and VGG-16, renowned for their robust feature extraction capabilities. By incorporating multiple heads into the vision transformer to generate additional metrics and enhance heart-attack detection capability, we leverage a 2019 posture-based dataset comprising RGB images, a novel creation by the author that marks the first dataset tailored for posture-based heart attack detection. Given the limited online data availability, we segmented this dataset into gender categories (male and female) and conducted testing on both the segmented and original datasets. The training accuracy of our model reached 99.77%. Upon testing, the accuracy for the male and female datasets was 92.87% and 75.47%, respectively, and the combined-dataset accuracy was 93.96%, a commendable overall performance. The proposed approach accommodates both small and large datasets, offering promising prospects for real-world applications.
Keywords: image analysis; posture of heart attack (PHA) detection; hybrid features; VGG-16; ResNet-50; vision transformer; advanced multi-head attention layer