Funding: The National Key R&D Program of China supported this study (2017YFC1700303).
Abstract: Background: The medical records of traditional Chinese medicine (TCM) contain numerous synonymous terms with different descriptions, which is not conducive to computer-aided data mining of TCM. However, there is a lack of models available to normalize synonymous TCM terms. Therefore, construction of a synonymous term conversion (STC) model for normalizing synonymous TCM terms is necessary. Methods: Based on the neural networks of bidirectional encoder representations from transformers (BERT), four types of TCM STC models were designed: models combining BERT with text classification, text sequence generation, named entity recognition, and text matching. The superior STC model was selected on the basis of its performance in converting synonymous terms. Moreover, three inconsistency-based misjudgment inspection methods were proposed to find incorrect term conversions in the STC model's output: neuron random deactivation, output comparison of multiple isomorphic models, and output comparison of multiple heterogeneous models (OCMH). Results: The classification-based STC model outperformed the other STC task models, achieving F1 scores of 0.91, 0.91, and 0.83 on the symptom, pattern, and treatment STC tasks, respectively. The OCMH method performed best in misjudgment inspection, detecting incorrect conversions at rates of 0.80, 0.84, and 0.90 for symptoms, patterns, and treatments, respectively. Conclusion: The classification-based TCM STC model achieved superior performance in converting synonymous terms for symptoms, patterns, and treatments, and the OCMH-based misjudgment inspection method showed superior performance in identifying incorrect outputs.
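A minimal Python sketch of the OCMH idea described above, assuming each STC model is a callable that maps a surface term to a normalized term: the same term is sent through several heterogeneous converters, and any disagreement flags the conversion as a potential misjudgment. The function and model names are hypothetical, not the authors' released code.

```python
from collections import Counter
from typing import Callable, List, Tuple

def ocmh_inspect(term: str,
                 models: List[Callable[[str], str]]) -> Tuple[str, bool]:
    """Return (majority conversion, flagged-for-review) for one TCM term."""
    outputs = [model(term) for model in models]
    winner, count = Counter(outputs).most_common(1)[0]
    # Any disagreement between the heterogeneous models marks the result
    # as a potential misjudgment to be inspected manually.
    flagged = count < len(outputs)
    return winner, flagged

# Usage with three hypothetical heterogeneous converters:
# normalized, needs_review = ocmh_inspect("头晕", [bert_cls, seq2seq, matcher])
```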
Abstract: Emotion Recognition in Conversations (ERC) is fundamental to creating emotionally intelligent machines. Graph-Based Network (GBN) models have gained popularity in detecting conversational contexts for ERC tasks, but their limited ability to collect and acquire contextual information hinders their effectiveness. To address this, we propose a Text Augmentation-based computational model for recognizing emotions using transformers (TA-MERT). The proposed model uses the Multimodal Emotion Lines Dataset (MELD), which ensures a balanced representation for recognizing human emotions. The model uses text augmentation techniques to produce more training data, improving its accuracy. Transformer encoders train the deep neural network (DNN) model, especially Bidirectional Encoder (BE) representations that capture both forward and backward contextual information; this integration improves the accuracy and robustness of the proposed model. Furthermore, we present a method for balancing the training dataset by creating enhanced samples from the original dataset. By balancing the dataset across all emotion categories, we lessen the adverse effects of data imbalance on the model's accuracy. Experimental results on the MELD dataset show that TA-MERT outperforms earlier methods, achieving a weighted F1 score of 62.60% and an accuracy of 64.36%. Overall, the proposed TA-MERT model addresses the weaknesses of GBN models in obtaining contextual data for ERC: it recognizes human emotions more accurately by employing text augmentation and transformer-based encoding, while the balanced dataset and the additional training samples enhance its resilience. These findings highlight the significance of transformer-based approaches for emotion recognition in conversations.
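A minimal sketch of the balancing-by-augmentation step described above, under the assumption of simple random-word-swap augmentation (the paper's exact augmentation operations are not specified here): minority emotion classes are padded with augmented copies until every class matches the majority class size.

```python
import random
from collections import defaultdict

def random_swap(text: str) -> str:
    """Illustrative augmentation: swap two random word positions."""
    words = text.split()
    if len(words) > 1:
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return " ".join(words)

def balance_by_augmentation(samples):
    """samples: list of (utterance, emotion_label) pairs."""
    by_label = defaultdict(list)
    for text, label in samples:
        by_label[label].append(text)
    target = max(len(texts) for texts in by_label.values())
    balanced = list(samples)
    # Pad each minority class with augmented copies of its own utterances.
    for label, texts in by_label.items():
        for _ in range(target - len(texts)):
            balanced.append((random_swap(random.choice(texts)), label))
    return balanced
```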
Abstract: Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than other tasks, such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress, and comparing impacts. An ensemble of pre-trained language models was adopted here to classify the sentences of a conversation corpus into four categories: information, question, directive, and commission. These classification label sequences are used for analyzing the conversation progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT pretraining Approach (RoBERTa), Generative Pre-Trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus, with a hyperparameter tuning approach carried out for better sentence classification performance. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models, and the ensemble model with fine-tuned parameters achieved an F1 score of 0.88.
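A hedged sketch of a soft-voting ensemble in the spirit of EPLM-HT, averaging class probabilities from several fine-tuned Hugging Face checkpoints; the checkpoint names are placeholders for models already fine-tuned on the four conversation-act labels, and this is not the authors' implementation.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical checkpoints, each fine-tuned for the 4-way sentence classes:
# information / question / directive / commission.
CHECKPOINTS = ["my-bert-dialog-acts", "my-roberta-dialog-acts",
               "my-distilbert-dialog-acts"]
models = [AutoModelForSequenceClassification.from_pretrained(c).eval()
          for c in CHECKPOINTS]
tokenizers = [AutoTokenizer.from_pretrained(c) for c in CHECKPOINTS]

@torch.no_grad()
def ensemble_predict(sentence: str) -> int:
    probs = []
    for model, tok in zip(models, tokenizers):
        inputs = tok(sentence, return_tensors="pt", truncation=True)
        probs.append(model(**inputs).logits.softmax(dim=-1))
    # Soft voting: average the probability vectors, then take the argmax.
    return int(torch.stack(probs).mean(dim=0).argmax(dim=-1))
```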
Funding: Funded by the Scientific Research Deanship at the University of Ha’il, Saudi Arabia, through Project Number RG-23092.
Abstract: Cyberbullying, a critical concern for digital safety, necessitates effective linguistic analysis tools that can navigate the complexities of language use in online spaces. To tackle this challenge, our study introduces a new approach employing the Bidirectional Encoder Representations from Transformers (BERT) base model (cased), originally pretrained in English. This model is uniquely adapted to recognize the intricate nuances of Arabic online communication, a key aspect often overlooked in conventional cyberbullying detection methods. Our model is an end-to-end solution fine-tuned on a diverse dataset of Arabic social media (SM) tweets. Experimental results on a diverse Arabic dataset collected from the ‘X platform’ demonstrate a notable increase in detection accuracy and sensitivity compared to existing methods: E-BERT shows a substantial improvement in performance, evidenced by an accuracy of 98.45%, a precision of 99.17%, a recall of 99.10%, and an F1 score of 99.14%. The proposed E-BERT not only addresses a critical gap in cyberbullying detection in Arabic online forums but also sets a precedent for applying cross-lingual pretrained models in regional language applications, offering a scalable and effective framework for enhancing online safety across Arabic-speaking communities.
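A hedged sketch of the fine-tuning setup the abstract describes, adapting the English bert-base-cased checkpoint to binary cyberbullying detection; the CSV paths, column names ("text", "label"), and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2)  # bullying vs. non-bullying

# Assumed CSVs with "text" and "label" columns of Arabic tweets.
data = load_dataset("csv", data_files={"train": "tweets_train.csv",
                                       "test": "tweets_test.csv"})
data = data.map(lambda b: tokenizer(b["text"], truncation=True,
                                    max_length=128), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="e-bert", num_train_epochs=3,
                           per_device_train_batch_size=16, learning_rate=2e-5),
    train_dataset=data["train"],
    eval_dataset=data["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```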
Abstract: Air traffic control (ATC) safety management information platforms have accumulated large amounts of unstructured text data that remain underused. To mine the risks hidden in ATC abnormal events, this study used more than four thousand abnormal-event records collected from ATC stations and a self-constructed lexicon of 4,836 ATC domain terms to propose a deep learning model based on ATC domain term extraction, BERT-BiLSTM (Bidirectional Encoder Representations from Transformers-Bidirectional Long Short-Term Memory). The model extracts information from abnormal-event texts to filter out useless content, and feeds the feature vector sequence output by the BERT model into the BiLSTM network as its input sequence; comparative experiments were conducted on the risk identification task for ATC abnormal-event texts. The results show that, in the risk identification experiments, the BERT-BiLSTM model based on ATC domain term extraction improved risk identification accuracy by 3 percentage points over the general-domain BERT model. The model thus effectively improves ATC safety information processing capability, can identify the risks posed by abnormal events in the daily operation of ATC departments, and provides a basic reference for information mining tasks in the ATC safety domain.
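A hedged PyTorch sketch of the BERT-BiLSTM pipeline described above, assuming a Chinese BERT checkpoint and illustrative layer sizes: the BERT token feature sequence is the BiLSTM input, and the classifier reads the output at the [CLS] position.

```python
import torch.nn as nn
from transformers import AutoModel

class BertBiLSTMClassifier(nn.Module):
    def __init__(self, num_risk_labels: int, hidden: int = 256):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-chinese")
        self.bilstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_risk_labels)

    def forward(self, input_ids, attention_mask):
        # The BERT output sequence becomes the BiLSTM input sequence.
        feats = self.bert(input_ids,
                          attention_mask=attention_mask).last_hidden_state
        out, _ = self.bilstm(feats)
        # Classify risk from the BiLSTM output at the [CLS] position.
        return self.classifier(out[:, 0])
```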
Abstract: This study is devoted to constructing a high-quality dataset, named VamNER, for the named entity recognition (NER) task in the field of Pacific white shrimp (Litopenaeus vannamei) farming. To ensure the diversity of the dataset, high-quality papers from the past decade were collected from the CNKI database and combined with authoritative books to build the corpus. Domain experts were invited to discuss the entity types, and professionally trained annotators labeled the corpus in the IOB2 format; annotation was divided into a pre-annotation stage and a formal annotation stage to improve efficiency. In the pre-annotation stage, inter-annotator agreement (IAA) reached 0.87, indicating high consistency among the annotators. The final VamNER contains 6,115 sentences with 384,602 characters in total, covering 10 entity types and 12,814 entities. Comparisons with several general-domain datasets and one domain-specific dataset reveal VamNER's distinctive characteristics. The experiments used a pre-trained bidirectional encoder representations from Transformers (BERT) model, a bidirectional long short-term memory network (BiLSTM), and a conditional random field (CRF) model, and the best model achieved an F1 score of 82.8% on the test set. VamNER is the first NER dataset focused on Pacific white shrimp farming; it provides a rich resource for Chinese domain-specific NER research and is expected to advance NER research in aquaculture.
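A small worked example of the IOB2 scheme used to annotate VamNER, with a hypothetical SPE (species) tag rather than the dataset's actual entity types: B- opens an entity, I- continues it, and O marks tokens outside any entity.

```python
def iob2_spans(tokens, tags):
    """Collect (entity_text, label) spans from IOB2-tagged tokens."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last entity
        if tag.startswith("B-") or tag == "O":
            if start is not None:
                spans.append(("".join(tokens[start:i]), label))
                start, label = None, None
            if tag.startswith("B-"):
                start, label = i, tag[2:]
    return spans

print(iob2_spans(list("南美白对虾患病"),
                 ["B-SPE", "I-SPE", "I-SPE", "I-SPE", "I-SPE", "O", "O"]))
# -> [('南美白对虾', 'SPE')]
```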
Abstract: To rapidly and accurately identify risk elements in public-opinion information on urban waterlogging, user comments and media posts were first collected, organized, and annotated from the Sina Weibo platform to build a corpus of urban waterlogging disaster events. Then, to address the non-uniform formats and complex semantics of waterlogging-related public-opinion information and the high demands for domain expertise and precision in risk element identification, a recognition method based on BERT-BiLSTM-CRF (Bidirectional Encoder Representations from Transformers-Bidirectional Long Short-Term Memory-Conditional Random Field) was proposed within the risk-element framework of natural disaster system theory, and a series of model validation experiments was conducted. Comparative experiments show that the model performs well on all three metrics of precision, recall, and F1, with a precision of 84.62%, a recall of 86.19%, and an F1 of 85.35%, outperforming the other compared models. Ablation experiments show that the BERT pre-trained model has the most significant impact on model performance. Together, these results verify that the model can effectively identify various risk elements in urban waterlogging public-opinion information, providing a research basis for the digital and intelligent transformation of urban waterlogging disaster risk management.
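A hedged sketch of a BERT-BiLSTM-CRF tagger matching the architecture named above, using the third-party pytorch-crf package (`pip install pytorch-crf`) for the CRF layer; this is an assumed reconstruction, not the authors' code.

```python
import torch.nn as nn
from torchcrf import CRF
from transformers import AutoModel

class BertBiLSTMCRF(nn.Module):
    def __init__(self, num_tags: int, hidden: int = 256):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-chinese")
        self.bilstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                              batch_first=True, bidirectional=True)
        self.emissions = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        seq = self.bert(input_ids,
                        attention_mask=attention_mask).last_hidden_state
        seq, _ = self.bilstm(seq)
        emissions = self.emissions(seq)
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence.
            return -self.crf(emissions, tags, mask=mask)
        # Inference: Viterbi-decoded best tag path per sentence.
        return self.crf.decode(emissions, mask=mask)
```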
Abstract: Existing Chinese named entity recognition algorithms do not fully consider the data characteristics of the entity recognition task and suffer from class imbalance in Chinese sample data, excessive noise in the training data, and large differences in the distribution of the data generated by each model run. To address this, an improved Chinese named entity recognition model is proposed with BERT-BiLSTM-CRF (Bidirectional Encoder Representations from Transformers-Bidirectional Long Short-Term Memory-Conditional Random Field) as the baseline. The model first combines the P-Tuning v2 technique with BERT-BiLSTM-CRF to extract data features precisely, and then uses three loss functions, Focal Loss, Label Smoothing, and KL Loss (Kullback-Leibler divergence loss), as regularization terms in the loss computation. Experimental results show that the improved model achieves F1 scores of 71.13%, 96.31%, and 95.90% on the Weibo, Resume, and MSRA (Microsoft Research Asia) datasets, respectively, verifying that the proposed algorithm performs better and can easily be combined and extended with other neural networks in different downstream tasks.
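A worked sketch of the Focal Loss term mentioned above, which down-weights well-classified examples so that minority classes contribute more to the gradient; the gamma and alpha values are common defaults, not necessarily the paper's settings.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma: float = 2.0, alpha: float = 0.25):
    # Multi-class focal loss: FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t),
    # where p_t is the predicted probability of the true class.
    log_probs = F.log_softmax(logits, dim=-1)
    log_pt = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    pt = log_pt.exp()
    return (-alpha * (1 - pt) ** gamma * log_pt).mean()

logits = torch.randn(8, 5)           # 8 tokens, 5 entity tags
targets = torch.randint(0, 5, (8,))  # gold tag ids
loss = focal_loss(logits, targets)
```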
Abstract: Protocol conversion is commonly used to solve data-exchange problems between different protocols; its essence is finding the mapping relations between the fields of different protocols. Traditional protocol conversion methods have the following drawbacks: most conversions are designed on the basis of specific protocols, so they are static, inflexible, and unsuited to multi-protocol conversion scenarios; and once a protocol changes, its structure and field semantics must be analyzed again to rebuild the field mappings, producing an exponential workload and reducing conversion efficiency. A general protocol conversion method based on semantic similarity is therefore proposed, aiming to discover field mappings intelligently and thus improve conversion efficiency. First, protocol fields are classified with a BERT (Bidirectional Encoder Representations from Transformers) model, and fields that should not have mapping relations are excluded; second, the semantic similarity between fields is computed to infer their mapping relations and build a field mapping table; finally, a general semantic-similarity-based protocol conversion framework is proposed and validated with purpose-defined protocols. Simulation results show that the proposed method achieves a field classification precision of 94.44% and a mapping-relation identification precision of 90.70%, an improvement of 13.93% over the knowledge-extraction-based method. These results verify the feasibility of the proposed method, which can quickly identify the mapping relations between fields of different protocols and suits multi-protocol conversion in unmanned cooperative scenarios.
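A hedged sketch of the field-mapping step: each protocol field's name or description is embedded with a generic BERT encoder, and fields across two protocols are paired when their cosine similarity clears a threshold. The mean pooling, checkpoint, and threshold value are assumptions, not the paper's exact procedure.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

@torch.no_grad()
def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = enc(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)  # mean-pool real tokens only

def field_mapping(fields_a, fields_b, threshold: float = 0.8):
    a, b = embed(fields_a), embed(fields_b)
    sim = torch.nn.functional.cosine_similarity(
        a.unsqueeze(1), b.unsqueeze(0), dim=-1)  # (len_a, len_b) matrix
    mapping = {}
    for i, fa in enumerate(fields_a):
        score, j = sim[i].max(dim=0)
        if score.item() >= threshold:  # exclude weak matches from the table
            mapping[fa] = fields_b[int(j)]
    return mapping
```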
Abstract: In university C-language programming courses, objectively rated question difficulty is an important means of assessing student learning. Most existing difficulty-assessment methods target specific subjects and question types, however, and fall short for Chinese programming questions. Therefore, a C-language question difficulty prediction model, FTKB-BiLSTM (Fusion of Title and Knowledge based on BERT and Bi-LSTM), is proposed; it fuses question text with knowledge-point labels on the basis of BERT (Bidirectional Encoder Representations from Transformers) and bidirectional long short-term memory (Bi-LSTM). First, a Chinese pre-trained BERT model produces word vectors for the question text and knowledge points; second, a fusion module passes the fused information through BERT to obtain a text representation, which is fed into the Bi-LSTM model to learn its sequence information and extract richer features; finally, the feature representation from the Bi-LSTM model is passed through a fully connected layer and a Softmax function to obtain the difficulty classification result. Experimental results on a Chinese Leetcode dataset and the ZjgsuOJ platform dataset show that the proposed model achieves better accuracy than mainstream deep learning models such as XLNet and has strong classification capability.
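A minimal sketch of the fusion idea, assuming the question title and its knowledge-point labels are encoded as a BERT text pair so that both signals share one representation before the Bi-LSTM; the example strings and checkpoint are illustrative.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-chinese")
title = "编写程序,输出1到100之间所有素数"       # hypothetical question text
knowledge = "循环结构;素数判断;条件语句"         # hypothetical knowledge labels
inputs = tok(title, knowledge, truncation=True, return_tensors="pt")
# token_type_ids distinguishes the title segment from the knowledge segment,
# so downstream layers (BERT, then Bi-LSTM) see the fused pair as one sequence.
```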
Abstract: To address the limitations of existing sentiment classification models in deep sentiment understanding, the unidirectional constraint of traditional attention mechanisms, and class imbalance in natural language processing (NLP), a sentiment classification model, M-BCA (Multi-scale BERT features with Bidirectional Cross Attention), is proposed; it fuses multi-scale BERT (Bidirectional Encoder Representations from Transformers) features with a bidirectional cross-attention mechanism. First, multi-scale features are extracted from the low, middle, and high layers of BERT to capture the surface, syntactic, and deep semantic information of sentence text; second, a three-channel gated recurrent unit (GRU) further extracts deep semantic features to strengthen the model's understanding of the text; finally, a bidirectional cross-attention mechanism is introduced to promote interaction and learning between features at different scales and strengthen their mutual influence. In addition, for the imbalanced-data problem, a data augmentation strategy is designed and a hybrid loss function is adopted to optimize learning on minority-class samples. Experimental results show that M-BCA performs excellently on fine-grained sentiment classification and significantly outperforms most baseline models on imbalanced multi-class sentiment datasets. M-BCA also stands out on minority-class classification: on the NLPCC 2014 and Online_Shopping_10_Cats datasets in particular, its minority-class Macro-Recall leads all other compared models. The model thus achieves a notable performance gain in fine-grained sentiment classification and is well suited to imbalanced datasets.
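A sketch of the multi-scale feature extraction that M-BCA starts from: requesting all hidden states from BERT and taking one low, one middle, and one high layer as the three scales. The specific layer indices are assumptions for illustration.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-chinese")
bert = AutoModel.from_pretrained("bert-base-chinese",
                                 output_hidden_states=True)

with torch.no_grad():
    out = bert(**tok("这部电影有点出乎意料地好", return_tensors="pt"))

# hidden_states holds the embedding layer plus all 12 encoder layers (13 tensors).
low, mid, high = out.hidden_states[2], out.hidden_states[6], out.hidden_states[12]
# Each scale would then feed its own GRU channel before the bidirectional
# cross-attention stage described in the abstract.
```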