Purpose: Move recognition in scientific abstracts is an NLP task of classifying the sentences of an abstract into different types of language units. To improve the performance of move recognition in scientific abstracts, a novel model of move recognition is proposed that outperforms the BERT-based method.
Design/methodology/approach: Prevalent BERT-based models for sentence classification often classify sentences without considering their context. In this paper, inspired by the BERT masked language model (MLM), we propose a novel model called the masked sentence model that integrates the content and contextual information of the sentences in move recognition. Experiments are conducted on the benchmark dataset PubMed 20K RCT in three steps. Then, we compare our model with the HSLN-RNN, BERT-based and SciBERT models using the same dataset.
Findings: Compared with the BERT-based and SciBERT models, the F1 score of our model outperforms them by 4.96% and 4.34%, respectively, which shows the feasibility and effectiveness of the novel model; our result comes closest to the state-of-the-art result of HSLN-RNN at present.
Research limitations: The sequential features of move labels are not considered, which might be one of the reasons why HSLN-RNN performs better. Our model is restricted to biomedical English literature because we fine-tune it on a dataset from PubMed, a typical biomedical database.
Practical implications: The proposed model is better and simpler at identifying move structures in scientific abstracts and is worth applying in text classification experiments that capture the contextual features of sentences.
Originality/value: The study proposes a masked sentence model based on BERT that considers the contextual features of the sentences in abstracts in a new way. The performance of this classification model is significantly improved by rebuilding the input layer without changing the structure of the neural network.
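The abstract does not give the exact input format, but the masked-sentence idea can be sketched as follows: the target sentence is paired with its abstract context, in which that same sentence is replaced by a mask placeholder, so the encoder sees both the sentence's content and its surrounding context. The helper below is a hypothetical illustration of that input construction, not the authors' exact implementation.

```python
# Hypothetical sketch of the masked-sentence input construction:
# the target sentence is segment A, and the surrounding abstract --
# with the target replaced by a [MASK] placeholder -- is segment B,
# so a BERT-style encoder receives content and context together.

def build_masked_sentence_input(sentences, target_idx, mask_token="[MASK]"):
    """Pair a sentence with its context, masking it out of the context."""
    target = sentences[target_idx]
    context = [s if i != target_idx else mask_token
               for i, s in enumerate(sentences)]
    # BERT-style sentence-pair input: [CLS] target [SEP] context [SEP]
    return f"[CLS] {target} [SEP] {' '.join(context)} [SEP]"

# Toy three-sentence abstract; sentence 1 is the one being classified.
abstract = [
    "We study move recognition.",
    "A masked sentence model is proposed.",
    "Results improve over baselines.",
]
example = build_masked_sentence_input(abstract, 1)
```

The rest of the pipeline would tokenize this pair and feed it to a standard BERT classifier, which is consistent with the claim that only the input layer is rebuilt.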
Intelligent sorting is an important prerequisite for the full quantitative consumption and harmless disposal of kitchen waste. The existing object detection method based on an ImageNet pre-trained model is an effective way of sorting. Owing to significant domain gaps between natural images and kitchen waste images, it is difficult to reflect the characteristics of diverse scales and dense distribution in kitchen waste based on an ImageNet pre-trained model, leading to poor generalisation. In this article, the authors propose the first pre-trained model for kitchen waste sorting, called KitWaSor, which combines contrastive learning (CL) and masked image modelling (MIM) through self-supervised learning (SSL). First, to address the issue of diverse scales, the authors propose a mixed masking strategy by introducing an incomplete masking branch alongside the original random masking branch. It prevents the complete loss of small-scale objects while avoiding excessive leakage of large-scale object pixels. Second, to address the issue of dense distribution, the authors introduce semantic consistency constraints on the basis of the mixed masking strategy. That is, object semantic reasoning is performed through semantic consistency constraints to compensate for the lack of contextual information. To train KitWaSor, the authors construct the first million-level kitchen waste dataset spanning seasonal and regional distributions, named KWD-Million. Extensive experiments show that KitWaSor achieves state-of-the-art (SOTA) performance on the two most relevant downstream tasks for kitchen waste sorting (i.e. image classification and object detection), demonstrating its effectiveness.
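The two-branch mixed masking idea can be sketched roughly as below. The masking ratios, block layout and 4x4 patch grid are illustrative assumptions, not values from the article: a random branch masks a large fraction of all patches, while an incomplete branch masks each local region only partially, so a small object occupying one region is never hidden entirely.

```python
import random

# Illustrative sketch of a two-branch mixed masking strategy for MIM.
# The 4x4 patch grid, 75% ratio and one-kept-patch-per-block rule are
# assumptions for demonstration, not KitWaSor's actual hyperparameters.

def random_masking(num_patches, mask_ratio, rng):
    """Original branch: mask a random subset of all patches."""
    k = int(num_patches * mask_ratio)
    return set(rng.sample(range(num_patches), k))

def incomplete_masking(num_patches, block_size, keep_per_block, rng):
    """Incomplete branch: within each block of patches, always keep a few
    visible, so no local region (e.g. a small object) vanishes entirely."""
    masked = set()
    for start in range(0, num_patches, block_size):
        block = list(range(start, min(start + block_size, num_patches)))
        kept = set(rng.sample(block, min(keep_per_block, len(block))))
        masked.update(p for p in block if p not in kept)
    return masked

rng = random.Random(0)
n = 16  # a 4x4 grid of image patches
rand_mask = random_masking(n, 0.75, rng)
inc_mask = incomplete_masking(n, block_size=4, keep_per_block=1, rng=rng)
```

In this sketch the incomplete branch guarantees at least one visible patch per block, which mirrors the stated goal of preventing the complete loss of small-scale objects while still hiding most pixels of large ones.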
[Objective] In industrial control systems (ICS), communication between devices relies heavily on industrial control protocols, and protocol security plays a key role in ensuring stable ICS operation. Vulnerability mining and intrusion detection, as core technical components of the ICS security defence system, depend on accurate parsing of the structure and semantic functions of industrial control protocols. Protocol reverse analysis is the key technique for parsing protocol structure and semantics, and the precision of its core step, semantic inference, directly determines the accuracy of protocol understanding. However, constrained by missing protocol documentation and strong format heterogeneity, existing semantic inference methods generally rely on expert experience and suffer from inherent bottlenecks such as insufficient automation and limited cross-protocol generalisation, making it difficult to meet the demand for high-precision parsing of multi-source heterogeneous protocols in real industrial environments. [Methods] To address these problems, this paper proposes a semantic inference method that combines mBERT with multi-source domain adaptation and a structured masking strategy. The mBERT model provides general cross-protocol semantic representations; a structured masking strategy designed around attention weights and positional encoding strengthens the model's representation of the intrinsic links between protocol structure and semantics, improving the automation and efficiency of semantic inference; and a multi-source domain-adaptive stepwise fine-tuning strategy combined with adversarial training improves the model's general semantic representation across multiple source protocols, enhancing its applicability to various industrial control protocols and enabling effective inference of keyword semantics. [Results] Experiments were conducted on an attack-defence drill range of a typical energy enterprise at the Liaoning Provincial Key Laboratory of Information Security for the Petrochemical Industry; data for three industrial control protocols, S7comm, Modbus/TCP and EtherNet/IP, were collected, and a training dataset was built using a protocol complexity scoring mechanism. The results show that the multi-source domain-adaptive stepwise fine-tuning strategy significantly improves model performance, and combining it with the structured masking strategy further improves semantic inference precision; the proposed method significantly outperforms existing baseline methods in precision, recall and F1 score. [Conclusions] This paper proposes a semantic inference method combining mBERT with multi-source domain adaptation and a structured masking strategy, employing high-dimensional spherical mapping and a multi-task loss function in semantic inference to enhance the separability of semantic categories and the model's deep discrimination of protocol semantics. The method not only markedly reduces dependence on manual prior knowledge but also improves the efficiency and cross-protocol applicability of semantic inference, providing a theoretically grounded new path for industrial control protocol reverse analysis and industrial system security protection.
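The structured masking idea can be sketched as follows. The abstract only says the strategy combines attention weights with positional encoding, so the scoring formula below is an illustrative assumption: each token position in a protocol message is scored by its attention weight times a positional prior, and the highest-scoring positions (e.g. header fields, which in many industrial protocols sit near the start of the message) are selected for masking and prediction.

```python
import math

# Hypothetical sketch of a structured masking strategy: instead of
# masking tokens uniformly at random, positions are scored by attention
# weight combined with a positional prior, and the top-scoring positions
# are masked for prediction. The exponential positional prior and the
# 30% mask ratio are illustrative assumptions, not the paper's design.

def structured_mask_positions(attn_weights, mask_ratio=0.3, pos_decay=0.1):
    """Score each token by attention * positional prior; mask the top-k."""
    scores = [w * math.exp(-pos_decay * i)  # earlier fields weigh more
              for i, w in enumerate(attn_weights)]
    k = max(1, int(len(scores) * mask_ratio))
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:k])

# Toy attention profile over 10 byte tokens of a protocol message:
# high attention on the leading header fields and one mid-message field.
attn = [0.9, 0.8, 0.1, 0.1, 0.7, 0.1, 0.1, 0.1, 0.1, 0.1]
masked = structured_mask_positions(attn)
```

Biasing the masked positions toward structurally salient fields is one plausible way such a strategy could push the model to learn the link between protocol structure and field semantics, rather than spending capacity on uninformative padding bytes.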
Funding: supported by the project "The demonstration system of rich semantic search application in scientific literature" (Grant No. 1734) from the Chinese Academy of Sciences.
Funding: National Key Research and Development Program of China, Grant/Award Number: 2021YFC1910402.