Overseas social platforms such as Twitter have become indispensable tools for cyber black-and-gray-market crime, so discovering, detecting, and classifying black-market accounts on Twitter is of great significance for combating cybercrime and maintaining social stability. Among existing tweet-classification models, the bi-directional long short-term memory network (BiLSTM) can learn a tweet's contextual information but not its local key features, whereas the convolutional neural network (CNN) can learn local key features but not context. Combining the strengths of the two, a BiLSTM-CNN tweet-classification model is proposed: tweets are vectorized and fed into a BiLSTM to learn contextual information; a CNN layer is then added after the BiLSTM to extract local features; finally, a fully connected layer joins the pooled features and a softmax function performs four-way classification. The model was evaluated on a self-built dataset of Chinese black-market tweets, with TextCNN, TextRNN, and TextRCNN as baselines. Experimental results show that the proposed BiLSTM-CNN model achieves a macro accuracy of 98.32% over the four tweet classes, clearly higher than that of the three baseline models.
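The tail of the pipeline described above (a CNN layer over the BiLSTM outputs, max-over-time pooling, a fully connected layer, and a four-way softmax) can be sketched in NumPy. This is an illustrative sketch under assumed shapes, not the authors' implementation; the BiLSTM output `h` and all weight arrays are hypothetical placeholders.

```python
import numpy as np

def classify_tweet(h, Wc, bc, Wf, bf):
    """CNN-over-BiLSTM head: h is the BiLSTM output sequence, shape (T, d)."""
    T, d = h.shape
    k = Wc.shape[0]                                   # convolution kernel width
    # 1-D convolution over time extracts local key features (ReLU activation)
    conv = np.stack([np.maximum(0.0, np.einsum('kd,kdf->f', h[t:t + k], Wc) + bc)
                     for t in range(T - k + 1)])      # shape (T-k+1, n_filters)
    pooled = conv.max(axis=0)                         # max-over-time pooling
    logits = pooled @ Wf + bf                         # fully connected layer
    e = np.exp(logits - logits.max())
    return e / e.sum()                                # four-way softmax probabilities
```

In a full implementation the BiLSTM, the convolution weights `Wc`, and the classifier weights `Wf` would all be trained jointly; here they are random stand-ins to show the data flow.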
Funding: Fundamental Research Funds for the Central Universities, China (No. 2232018D3-17).
Abstract: Text accounts for most of the information resources on the Internet, which places ever higher demands on the accuracy of text classification. In this paper, we first design a hybrid model, bidirectional encoder representations from transformers-hierarchical attention network-dilated convolutional network (BERT_HAN_DCN), built on the BERT pre-trained model with its strong feature-extraction ability. The model combines the advantages of HAN and DCN, gaining rich semantic information by fusing contextual semantic features with hierarchical characteristics. Secondly, because the traditional softmax increases the learning difficulty within a class and makes similar features harder to distinguish, AM-softmax is introduced to replace it. Finally, the fused model is validated: it achieves superior accuracy and F1-score on two datasets compared with single models such as HAN and DCN built on the BERT pre-trained model, and the improved AM-softmax network outperforms the ordinary softmax network.
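The difference between ordinary softmax and AM-softmax (additive margin softmax) mentioned above can be shown with a minimal NumPy sketch: AM-softmax subtracts a margin `m` from the target-class cosine before scaling by `s`, which forces same-class features to be more compact. The values of `s` and `m` below are illustrative defaults, not necessarily the paper's settings.

```python
import numpy as np

def am_softmax_loss(cosines, label, s=30.0, m=0.35):
    # cosines: cosine similarity between the L2-normalized feature and each
    # L2-normalized class weight vector, shape (num_classes,)
    logits = s * np.asarray(cosines, dtype=float)
    logits[label] = s * (cosines[label] - m)   # additive margin on the target class
    logits -= logits.max()                     # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[label])               # cross-entropy on margined logits
```

Setting `m=0.0` recovers the ordinary (scaled) softmax loss; for the same correctly-ranked example, the margined loss is strictly larger, which is exactly the harder learning target that sharpens class boundaries.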
Abstract: To meet the need for intelligent process design, methods for building domain-specific large models have become a key research direction. Although the development of large language models (LLMs) has greatly advanced natural language processing, data in the process-design domain typically suffer from scarce samples, complex formats, and a lack of structured labels, so generic LLM training methods cannot be applied directly. In addition, traditional attention mechanisms still face high computational complexity, heavy resource consumption, and unstable global semantics when handling long texts and complex tasks, further limiting the adaptability of large models to process-design work. To address this, this study proposes a method for building a process-design domain large model and uses it to train Luban-10B (鲁班-10B), a process-design model with 10 billion parameters. The method introduces a hybrid sparse attention mechanism: the attention weights of the initial tokens are retained, and the most relevant historical tokens are selected dynamically according to the query content, avoiding computation of the dense attention matrix over the full sequence. This effectively reduces computational complexity while improving the model's ability to capture key information in long texts. Experimental results show that Luban-10B effectively improves the adaptability and generation performance of domain large models on process-design tasks, providing a new technical path and support for intelligent process design.
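The hybrid sparse attention described above — retaining the initial ("sink") tokens and dynamically selecting a query-dependent top-k of the remaining history instead of attending to the full sequence — can be sketched for a single query vector as follows. The function name and the `n_sink`/`top_k` values are illustrative assumptions, not the Luban-10B implementation.

```python
import numpy as np

def hybrid_sparse_attention(q, keys, values, n_sink=2, top_k=4):
    # q: (d,) query; keys: (T, d); values: (T, d_v)
    scores = keys @ q / np.sqrt(q.shape[-1])        # scaled dot-product scores
    # always keep the first n_sink (initial "sink") tokens
    sink = np.arange(min(n_sink, len(scores)))
    # from the rest, dynamically pick the top_k highest-scoring history tokens
    rest = n_sink + np.argsort(scores[n_sink:])[::-1][:top_k]
    keep = np.concatenate([sink, rest])
    sel = scores[keep]
    w = np.exp(sel - sel.max())
    w /= w.sum()                                    # softmax over kept tokens only
    return w @ values[keep]
```

For each query, only `n_sink + top_k` key/value rows enter the softmax, so the per-query cost is O(n_sink + top_k) rather than O(T) once the candidate selection is done; when `top_k` covers the whole history, the sketch reduces exactly to dense attention.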